The purpose of this chapter is to explore the fundamentals of SQL Server 7.0. We’ll start by providing an overview of the installation process. We’ll follow that with a discussion of databases, the transaction log and how SQL Server organizes storage. The remaining portion of the chapter is devoted to indexes, locking, stored procedures, enforcing data integrity and the other features of SQL Server that are specific to the implementation of a database application.
In November 1998, Microsoft announced SQL Server 7.0, a significant new release of SQL Server that included important improvements in the areas of ease of use, scalability, reliability and data warehousing.
Microsoft saw a need for a database management system that eliminated the more common administrative functions and provided a simple programming model for developers. They wanted to produce a product that would protect the investments made by their customers. They also wanted a product that had the capability to grow with the customer—a single product that would offer great performance in the workgroup or enterprise setting and improve reliability. Finally, Microsoft wanted to provide its customers with a powerful, yet cost-effective data-warehousing platform.
Why move to SQL Server?
Microsoft SQL Server 7.0 is a scalable, high-performance client/server database. It was designed to support high-volume transaction processing (Online Transaction Processing, or OLTP) systems as well as data warehousing and decision support (Online Analytical Processing, or OLAP) systems.
As many developers know, Visual FoxPro is also capable of astonishing feats of performance. Why, then, would a Visual FoxPro developer consider using SQL Server as the data store for an application? Moreover, why would a customer consider the additional expense of using a client/server back end instead of a VFP data store?
Capacity
Microsoft Visual FoxPro has a maximum capacity of 2GB per table. Though it happens infrequently, developers facing this limitation have several choices, such as moving older data to a separate historical table or partitioning data into separate tables by year, region or other criteria. These compromised designs generally result in systems that are expensive and difficult to develop and maintain.
Microsoft SQL Server has an almost unlimited capacity. In fact, if you were to stretch SQL Server to its theoretical limit, you would have roughly one million TB of storage.
Concurrency
Concurrency is the ability of multiple users to access data at the same time. The database
engine must be able to serve those users in a timely manner. SQL Server is capable of handling hundreds (even thousands, depending on hardware) of simultaneous users.
Robustness
SQL Server has many mechanisms that provide a more robust environment than that provided by Visual FoxPro:
• A new storage structure replaces the fragile doubly linked chains used by previous versions of SQL Server.
• SQL Server’s online backup allows the database to be backed up while users are actively manipulating the data. SQL Server provides a variety of backup types, allowing the database administrator to create a backup strategy suited for any environment.
• The transaction log and Autorecovery process ensure that the database will be restored to a stable state in the event that the server is shut down unexpectedly, such as by a power failure. We’ll cover the Autorecovery process later when we discuss the transaction log.
Security
Visual FoxPro does not support security within the database engine. Developers have implemented application-based security, but this type of security cannot prevent someone from using Visual FoxPro, Access or even Excel to access Visual FoxPro data directly.
SQL Server provides multiple layers of security that a user must cross before obtaining access to the data. The first layer controls access to the server itself. Before a user can access the server, the user must be authenticated. During the authentication process, SQL Server determines the identity of the user who is attempting to gain access. SQL Server provides two authentication methods: SQL Server Authentication and NT Authentication.
• SQL Server Authentication: Using this method, SQL Server requires that the user provide a piece of information that only the user would know: a password. When the user logs in to the server, he or she provides a login name and password. SQL Server searches an internal table, and if it finds the login name and password, it permits the user to access the server.
• NT Authentication: Using this method, SQL Server relies on the Windows NT Domain to verify the user. In other words, NT is vouching for the user. When a user tries to connect using an NT Domain account, SQL Server verifies that the user’s account or the group that he or she is a member of has been granted or denied permission to access the server.
Which is better? As usual, the answer is not clear-cut. The advantage of NT Authentication is that users don’t need to remember another password. The downside is that you must have a
Windows NT network and domain in place at the site. The advantage of SQL Server Authentication is that a user from a non-Windows NT network can access the server. The downside is that users must remember yet another password.
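As a sketch (the login names and password here are hypothetical), each method has a corresponding system stored procedure for registering the login with the server:

```sql
-- SQL Server Authentication: create a login with a password
EXEC sp_addlogin 'mlevy', 'secret'

-- NT Authentication: grant server access to an NT Domain account
EXEC sp_grantlogin 'MYDOMAIN\mlevy'
```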
Gaining access to the server is only the first step. In order to access data in a database, the user must be mapped to a database user in that database. This type of security allows the database administrator to grant the user access to specific parts of the data stored on the server, as opposed to an all-or-nothing arrangement.
The third and final layer of security is within the database itself. Permissions to access and manipulate database objects (tables, views, stored procedures and so forth) are granted to database users. When a user submits a query to SQL Server, SQL Server verifies that the user has been granted permission to execute the query. If the user does not have the proper permissions, SQL Server returns an error.
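As a sketch of the second and third layers (the user and table names are hypothetical), the login is first mapped to a database user, and then object permissions are granted to that user:

```sql
-- Second layer: map the login to a user in the current database
EXEC sp_grantdbaccess 'mlevy'

-- Third layer: grant and deny specific permissions on a table
GRANT SELECT, INSERT, UPDATE ON customers TO mlevy
DENY DELETE ON customers TO mlevy
```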
Installation
SQL Server is one of the easiest Microsoft BackOffice products to install. Once you have the hardware set up and an operating system installed, installing SQL Server is nothing more than inserting the CD and answering a half-dozen questions. Since SQL Server is self-configuring, there’s very little, if any, post-installation configuration.
SQL Server editions
The first decision you’ll have to make is which edition of SQL Server to install. SQL Server is available in three editions: Standard, Enterprise and Small Business Server (SBS). Table 1, originally published in the SQL Server Books Online, summarizes the capabilities of the different editions.
Table 1. Comparing the capabilities of the Standard, Enterprise and Small Business Server editions.

Feature                                                    Standard              Enterprise              SBS
Runs on Microsoft BackOffice Small Business Server         Yes                   No                      Yes
Runs on Microsoft Windows NT Server                        Yes                   No                      No
Runs on Microsoft Windows NT Server, Enterprise Edition    No                    Yes                     No
Maximum Database Size                                      Unlimited             Unlimited               10GB
Number of SMP CPUs                                         4                     32                      4
Extended Memory Support                                    No                    Yes                     No
SQL Server Failover Support                                No                    Yes                     No
Supports Microsoft Search Service, full-text               Yes                   Yes                     Yes
catalogs, and full-text indexes
Supports Microsoft SQL Server OLAP Services                Yes (no user-defined  Yes (includes user-     No
                                                           cube partitions)      defined cube partitions)
There is one more edition of SQL Server not listed in Table 1. If you are covered by a Per-Seat licensing agreement for any server edition listed in the table, you may choose to install the Desktop SQL Server edition on any client computer. It is not sold as a separate product; it’s included on the CD. The Desktop edition was designed for the “Road Warrior” user (the user who will be disconnected from the main server but will occasionally need to connect and synchronize). The Desktop edition can be installed on Microsoft Windows NT Server, Microsoft Windows NT Workstation and Windows 95/98, but it does not provide support for the following features:
• Parallel queries
• Fiber-mode scheduling
• Read-ahead scans
• Hash and merge joins
• Failover clusters
• Extended memory addressing
• Full-text catalogs and indexes
• Microsoft SQL Server OLAP Services
For more information regarding installation of SQL Server on Windows 95/98, see the topic “SQL Server 7.0 on Windows 95/98” in the SQL Server Books Online.
Licensing
During the installation process, you’ll be asked to choose between two licensing modes: Per-Server and Per-Seat.
With Per-Server licensing, the administrator will specify the maximum number of concurrent users that can connect to the SQL Server at any one time. Concurrent users should not be confused with connections. A specific workstation can have multiple connections to the server, but all of those connections still count as only one user. Per-Server licensing is best if your organization has a single SQL Server or if you have a large number of users but only a few of them are connected at any one time.
A Per-Seat license allows a specific workstation to connect to an unlimited number of SQL Servers. If subsequent SQL Servers are installed, the existing Per-Seat licenses cover the new servers as well; the only additional licenses necessary are the server licenses for the new servers themselves.
Your installation can begin with Per-Server licensing. Then, as your organization grows
and more SQL Servers are required, you can take a one-time, one-way upgrade from Per-Server licensing to Per-Seat.
‡
You will not need a Client Access License (CAL) for the installation of NT Server that is hosting SQL Server unless you are using file and/or print services of the NT Server.
Character sets
A character set (or code page) is the list of 256 characters that make up the legal values for SQL Server character data types (char, varchar, text). The first 128 printable characters are the same for all character sets.
During installation, you must specify the character set that SQL Server will use to
represent characters within the server. Your choice of a character set is very important. There is only one character set for the entire server, and it affects all databases on the server. Changing the character set requires rebuilding the master database (something like a mini-reinstall),
re-creating all user databases, and reloading the data.
It is also important that the client workstations use a code page that is consistent with the character set that was installed on the server. If not, two workstations may have different representations for the same bit pattern that is stored within SQL Server.
Code page 1252 is the default and is compatible with the ANSI character set used by the
Microsoft Windows operating systems.
Sort order
The sort order determines how two characters compare to each other during sorting or logical comparison operations. During installation, you will specify two sort orders. The first is specific to the selected character set and will be for non-Unicode character data. The second sort order will be for Unicode character data.
Sort orders fall into three categories: binary, dictionary order–case-sensitive, and dictionary order–case-insensitive. With binary sorting, each character is sorted and compared according to its binary representation. If two characters have the same binary representation, they’re the same. If not, the lower numerical value is sorted higher in the list. A binary sort order is the fastest sort order because SQL Server does a simple byte-by-byte comparison. Also, binary sort orders are always case-sensitive because each character has a unique
binary representation.
With the dictionary sort orders, all the letters are sorted case-insensitive. An a will sort into the same position as the character A. However, for string comparisons, the case sensitivity of
the sort order determines whether the characters are the same. If you install a dictionary order–case-insensitive sort order (the default), an A will be treated identically to a (A = a). So the character strings age, Age and AGE are considered identical. If a case-sensitive sort order is installed, an A is considered different from a (A ≠ a).
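A quick way to see which behavior a server exhibits is a simple comparison of two string literals (this query assumes nothing about your data):

```sql
-- Returns 'insensitive' under the default dictionary order–case-insensitive
-- sort order, and 'sensitive' under a case-sensitive or binary sort order
SELECT CASE WHEN 'age' = 'AGE'
            THEN 'insensitive'
            ELSE 'sensitive'
       END
```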
Network libraries
Network libraries identify the method that clients will use to communicate with SQL Server. Each network library represents a different type of Interprocess Communication (IPC) mechanism that SQL Server will recognize. Network libraries work in pairs; both the client and server must have the same library. To make communications more flexible, SQL Server is capable of listening on multiple IPCs simultaneously.
Some of the network libraries support only one type of physical network protocol, while others are capable of using multiple protocols. For example, TCP/IP sockets requires that the TCP/IP protocol be installed, whereas the Named Pipes and Multiprotocol network libraries will support multiple physical network protocols.
Across the IPC, SQL Server and the client exchange queries, result sets, error messages, and status information (see Figure 1).
During the setup, you will be asked for the network libraries to install. The setup will
default to installing Named Pipes, Multiprotocol and TCP/IP sockets. The client will default to
Named Pipes unless configured otherwise.
Figure 1. The architecture involved in the communications between a client application and SQL Server.
Databases, database files and the transaction log

Logically, a SQL Server database is identical to Visual FoxPro’s. Both store a collection of tables, indexes, views and stored procedures. Physically, though, they’re implemented very differently. A SQL Server database is implemented as a collection of operating system files called “database files.”
Types of databases
Microsoft SQL Server supports two types of databases: user and system. User databases are the ones you’ll create for your applications. Although SQL Server allows a maximum of 32,767 databases on any one server, the typical server contains only one or two. The other type of database is the system database, which contains the metadata that controls the operation of the server. Descriptions follow of the four SQL Server system databases.
master
The master database contains the System Catalog, a collection of tables that stores information about databases, logins, system configurations, locks and processes. It is the most important of all the system databases.
model
model is both the name and the function of this system database. It is used as a template whenever a new user database is created. When a new database is created, SQL Server makes a copy of the model database and then expands it to the size specified by the user.
tempdb
tempdb is SQL Server’s work space, similar to VFP’s work files. When SQL Server needs a temporary table for solving a query, sorting or implementing cursors, it creates one in tempdb. In addition, temporary objects created by a user exist in tempdb. Unlike other databases, tempdb is reinitialized every time SQL Server is started. Operations within tempdb are logged, but only to support transaction rollbacks.
msdb
msdb contains the metadata that drives SQL Server Agent. SQL Server Agent is the service that supports scheduling of periodic activities such as backups, and responds to events that are posted into NT’s Event log. The information for Jobs, Alerts, Operators, and backup
and restore history is held here. You’ll probably have little use for directly accessing the msdb database.
Database files
A database is physically stored in a collection of database files. A database file is an operating system file, created and maintained by SQL Server. When you create a database, you specify a list of files. You can specify three types of files: primary, secondary and log.
• Primary data files: Every database must have one primary database file. In
addition to storing data, this file contains the database catalog as well as references to the other files that comprise the database. By convention, the primary file has an
.MDF extension.
• Secondary data files: A database may have additional files, called secondary database files. You might create secondary files if you were running out of space in the primary file or you wanted to distribute disk activity across multiple physical drives. By convention, secondary files have an .NDF extension. Note that secondary files require special consideration, as they complicate the backup and restore process.
• Log files: Every database must have at least one log file. Log files contain the transaction log. By convention, log files have an .LDF extension.
When you create a database file, you’ll specify several properties including the physical file name, the path, the initial size, a growth increment, the maximum file size and the logical name of the file. You’ll use the logical file name whenever you manipulate the file properties using the SQL Server Enterprise Manager or Transact-SQL.
Creating a database
There are many ways to create a database. The easiest way is to use either the Create Database
Wizard (see Figure 2) or the Database Properties dialog (see Figure 3) from within the SQL
Server Enterprise Manager (SEM). Both are graphical wrappers for the CREATE DATABASE
command that does the actual work.
Figure 2. The third page of the Create Database Wizard.
Figure 3. The General page of the Database Properties dialog when creating
a new database. The key symbol to the left of the first file specifies that that file is the primary.
An alternative way of creating a database is with the Transact-SQL CREATE DATABASE statement. To create a database this way, you list the database files that SQL Server should create. For example, the following CREATE DATABASE command will create a database named Sales that is made up of two database files: saledat.mdf and salelog.ldf. Note that this command does not create any tables, indexes or any other database objects—it merely creates the database, just like Visual FoxPro’s CREATE DATABASE command.
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
  FILENAME = 'c:\mssql7\data\saledat.mdf',
  SIZE = 10MB,
  MAXSIZE = 50MB,
  FILEGROWTH = 5MB)
LOG ON
( NAME = Sales_log,
  FILENAME = 'c:\mssql7\data\salelog.ldf',
  SIZE = 5MB,
  MAXSIZE = 25MB,
  FILEGROWTH = 5MB)
The following five properties are used to describe each file:
• NAME: The logical name of the file. The logical name will be used to reference the database file after the database has been created. For example, if you needed to increase the size of a database, you would use the Transact-SQL ALTER DATABASE command and specify the database file to be resized and what the new size should be.
• FILENAME: The physical operating system name of the file, including path. Although it is not required, the convention is to use the appropriate extension (.MDF, .NDF or
.LDF) depending on the type of file.
• SIZE: The initial size of the file. If you’re using the SQL Server Enterprise Manager to create the database, you must specify the initial size in MB. The Transact-SQL CREATE DATABASE command accepts MB or KB, but the initial size must be equal to or greater than the size of the model database.
• FILEGROWTH: New to SQL Server 7.0 is the ability of a database file to increase in size automatically if it fills. As the database creator, you must specify the
increment to use for the automatic growth. If you’re using the SQL Server Enterprise Manager to create the database, you’ll specify the increment in MB or as a percentage of the current file size. The Transact-SQL CREATE DATABASE command also accepts KB.
• MAXSIZE: The maximum size to which the database file is allowed to grow. If you
use the SQL Server Enterprise Manager, you’ll be able to specify the maximum size in MB, or you can specify that there is to be no restriction. The Transact-SQL CREATE DATABASE command will also accept a maximum size in KB. Omit the MAXSIZE option if you want unrestricted growth.
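These same properties can be changed after the fact with the ALTER DATABASE command, referencing the file by its logical name. For example, to grow the Sales data file created earlier (the new size here is arbitrary):

```sql
-- Expand the Sales_dat file to 20 MB
ALTER DATABASE Sales
MODIFY FILE
  ( NAME = Sales_dat,
    SIZE = 20MB )
```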
The transaction log
The transaction log is one of the most important pieces of the database. The transaction log records all changes made to the database. When a transaction is rolled back, the information to reverse the changes is taken from the transaction log.
The SQL Server transaction log is a Write-ahead log. All data modifications are first made to the SQL Server buffer cache. A record of these changes is written to the transaction log, and then the log is written to disk. Later, the data cache will be flushed in a process called Checkpointing. Any cache pages that have been changed will be written to disk. In the event of a system failure, the Autorecovery process will use the information in the log to restore the database to a stable state.
‡
The buffer cache is a place in memory where SQL Server caches data pages that have been read from disk. It will also contain the execution plan for stored procedures.
Each time SQL Server starts, every database goes through a recovery phase. During the recovery phase, SQL Server examines the transaction log looking for transactions that were committed but not written to disk. SQL Server will reprocess or roll forward those transactions. In addition, while scanning the transaction log, SQL Server looks for incomplete transactions that were written to disk. These transactions will be reversed or rolled back.
How SQL Server allocates storage
When you create a database object such as a table, index or stored procedure, SQL Server must allocate space to store the object in the database. The basic unit of storage is an extent. There are two types of extents: mixed and uniform. A mixed extent contains data from multiple
objects. In previous versions of SQL Server, each extent was dedicated to exactly one object. A very small object would use very little space within the extent. Since the extent contained only that one object, the unused portion would be wasted. Mixed extents permit more efficient space utilization for small objects.
A uniform extent is an extent that has been reserved for a specific object. SQL Server will only allocate a uniform extent for larger objects.
Both types of extents are 64K in length and subdivided into eight 8K pieces called pages. A page is SQL Server’s basic unit of I/O—all database access is done in pages.
When you create a new database object, SQL Server looks for space in an existing mixed extent. As the object grows in size, SQL Server continues to allocate space in the mixed extent. However, if the object grows to a size of nine pages, the ninth page (and all pages thereafter) will be allocated to a uniform extent. From that point forward, all space allocation for the
object will be to uniform extents.
‡
SQL Server’s row size is limited to roughly 8060 bytes because a row cannot span multiple pages. The rest of the space on the page is taken up by a 96-byte page header and some overhead for each row.
Transactions and locking
In SQL Server, transactions serve two purposes. First, a transaction ensures that all the commands within the transaction are performed as a single unit of work regardless of the number of tables affected. In the event of an error or system failure, all the modifications that occurred up until the error or system failure would be reversed.
The second purpose of a transaction is to form a unit of recovery. In the event of a system failure, when the server comes back on line, the Autorecovery process will roll back transactions that were partially complete at the time of the failure and were partially written to disk. Also, the Autorecovery process will roll forward (that is, write to disk) transactions that were committed but not written to disk.
Implicit and explicit transactions
SQL Server supports two types of transactions: implicit (Autocommit) and explicit. Implicit transactions are independent statements and commit automatically as long as no errors are encountered.
Each of the following statements is part of an implicit transaction that operates independently of the other:
UPDATE account SET balance = balance - 100 WHERE ac_num = 14356
UPDATE account SET balance = balance + 100 WHERE ac_num = 45249
If the first statement succeeds but the second fails, there is no mechanism to reverse the first statement. To correct this problem, both statements need to be treated as a single unit. The following example uses an explicit transaction to do just that:
BEGIN TRANSACTION
UPDATE account SET balance = balance - 100 WHERE ac_num = 14356
UPDATE account SET balance = balance + 100 WHERE ac_num = 45249
COMMIT TRANSACTION
The BEGIN TRANSACTION statement starts the explicit transaction. In the event of an error, it is now possible to undo the work done by either statement by issuing the ROLLBACK TRANSACTION statement. If no error occurs, the transaction must be completed with the COMMIT TRANSACTION statement.
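In SQL Server 7.0, errors are detected by checking the @@ERROR function after each statement. Here is a sketch of the transfer above with that rollback handling added:

```sql
BEGIN TRANSACTION
UPDATE account SET balance = balance - 100 WHERE ac_num = 14356
IF @@ERROR <> 0
BEGIN
  ROLLBACK TRANSACTION
  RETURN  -- abandon the batch; nothing has been committed
END
UPDATE account SET balance = balance + 100 WHERE ac_num = 45249
IF @@ERROR <> 0
BEGIN
  ROLLBACK TRANSACTION
  RETURN
END
COMMIT TRANSACTION
```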
Locking
All database management systems employ some type of concurrency control to prevent users from interfering with each others’ updates. SQL Server, like most, uses locks for this purpose. The query optimizer will determine the best type of lock for a given situation, and the Lock Manager will handle acquiring and releasing the locks, managing lock compatibilities, and detecting and resolving deadlocks.
There are three types of locks: shared locks, exclusive locks and update locks.
Shared locks
The optimizer acquires shared locks when reading data in order to prevent one process from changing data that another process is reading. SQL Server normally releases a shared lock once it is finished reading the data.
Exclusive locks
The optimizer acquires exclusive locks prior to modifying data. The exclusive lock prevents two processes from attempting to change the same data simultaneously. It also prevents one process from reading data that is being changed by another process. Unlike shared locks, exclusive locks are held until the end of the transaction.
Update locks
An update lock contains aspects of both a shared lock and an exclusive lock and is required to prevent a special kind of deadlock. To understand the reason for update locks, consider that most data modification operations actually consist of two phases. In the first phase, SQL Server finds the data to modify. In the second phase, exclusive locks are acquired and the data is modified.
SQL Server uses an update lock as it’s searching for the data to change. An update lock is compatible with existing shared locks but not with other update or exclusive locks. After the update lock has been applied, no other process may acquire a shared, update or exclusive lock on the same resource. As soon as all the other locks have been released, SQL Server will promote (that is, change) the update lock to an exclusive lock, make the change, and then release the lock when the transaction terminates.
Resources
The optimizer determines which resources to lock based on the query that it is trying to solve. For example, if the optimizer decides that the best way to solve a query is to do a table scan, it may acquire a lock on the entire table. SQL Server usually prefers to acquire row locks.
The following is a list of the resources that can be locked:
• Database
• Table
• Extent
• Page
• Index Key
• Row
Deadlocks
If two processes have acquired locks on separate resources but also require a lock on the resource held by the other process, and neither process will continue until it achieves the lock, a deadlock condition has occurred. Without intervention, both processes will wait forever.
SQL Server detects deadlock conditions automatically and corrects the problem by choosing one of the processes and making it the deadlock victim. SQL Server picks as the victim the process whose removal will break the deadlock with the least amount of work for SQL Server to undo. Deadlocks are covered in detail in Chapter 11, “Transactions.”
Database objects
Each SQL Server database consists of a collection of objects such as tables, indexes and stored procedures. We’ll begin our discovery of database objects with a discussion of object names.
SQL Server object names
A database object name consists of four components: the server name, the database name, the name of the object’s owner (the user that created the database object) and the object name. Database object names are usually written using the following form:
Server.database.owner.name
The server name, database name and owner name are called qualifiers. When all four components have been supplied, the name is considered fully qualified. You don’t always have to specify a fully qualified name when referencing an object—all of the qualifiers are optional. If the server name is omitted, SQL Server defaults to the name of the current server. If the database name is omitted, SQL Server defaults to the current database. If the owner name is omitted, SQL Server attempts to access the object using the user’s username. If that fails, SQL Server will look for an object with the same name but that is owned by dbo. dbo (“database owner”) is a special database user that is automatically mapped to the creator of
the database.
The following are examples of valid object references:
nts1.northwind.dbo.products
northwind.dbo.products
northwind..products
dbo.products
mlevy.products
Products
The first example is a fully qualified name. The second example omits the server name. The third omits the owner name but retains the dot delimiters, as they are required. This notation tells SQL Server that the owner of the object could be either the current user or dbo.
The fourth example omits both the server name and the database name. The fifth uses a specific database user. The last example shows the most common way to refer to a database object—just the name. In this case, SQL Server looks for an object owned by the user making the connection; if one is not found, SQL Server refers to the object of the same name owned by the database owner.
A legal object name must follow the Rules for Regular Identifiers as follows (see also the
SQL Server Books Online):
1. The first character must be one of the following:
• A letter as defined by the Unicode Standard 2.0. The Unicode definition of letters includes Latin characters a-z and A-Z, in addition to letter characters from other languages.
• The _ (underscore), @ (at sign) or # (number sign) symbol.
Note that @ and # have special meaning when they are used as the first character of the identifier. The @ symbol denotes a local variable, while a # symbol denotes a temporary object.
2. Subsequent characters can be:
• Letters as defined in the Unicode Standard 2.0.
• Decimal numbers from either Basic Latin or other national scripts.
• The @, $, # or _ symbols.
3. The identifier must not be a Transact-SQL reserved word. SQL Server reserves both the uppercase and lowercase versions of reserved words.
4. Embedded spaces or special characters are not allowed.
If you require an object name that does not conform to these rules, it’s okay. As long as the identifier is delimited by square brackets, SQL Server will accept it.
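For example (a hypothetical table), square brackets make otherwise-illegal names acceptable:

```sql
-- "Order Details" contains a space and "Order" is a reserved word;
-- delimiting them with square brackets makes both names legal
CREATE TABLE [Order Details] (
  [Order]  int,
  Quantity int)
```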
Tables
A table is a collection of rows where each row describes a unique entity (for example, customers, employees or sales orders). A row is a collection of columns, each of which represents one attribute of the entity (such as name, address and quantity). In SQL Server, a table is often referred to as a base table. You will see this term used often, especially during discussions about views.
Theoretically, a database can have a maximum of 2,147,483,647 tables.
‡
Actually, 2,147,483,647 is the maximum number of database objects within a database. You would only be able to get that many tables if you had no other database objects.
As with most database objects, there are two ways to create tables in SQL Server. You can use the SQL Server Enterprise Manager or the Transact-SQL CREATE TABLE command.
To create a table using the SQL Server Enterprise Manager, follow these steps:
1. From within the SQL Server Enterprise Manager, expand a server group and then expand the server.
2. Expand Databases and then expand the database that will contain the new table.
3. Right-click on Tables and select New Table…
4. In the Choose Name dialog, enter the name for the new table and click OK.
5. Fill in the grid columns to define the columns of the new table. Column names must follow the same rules for identifiers that were discussed in the section on object names.
6. Click the Save button to have SQL Server create the table.
To create a table using Transact-SQL, use the CREATE TABLE command. This is a simplified example of the CREATE TABLE statement that would create the northwind..employees table:
CREATE TABLE Employees (
	EmployeeID int,
	LastName nvarchar (20),
	FirstName nvarchar (10),
	Title nvarchar (30),
	TitleOfCourtesy nvarchar (25),
	BirthDate datetime,
	HireDate datetime,
	Address nvarchar (60),
	City nvarchar (15),
	Region nvarchar (15),
	PostalCode nvarchar (10),
	Country nvarchar (15),
	HomePhone nvarchar (24),
	Extension nvarchar (4),
	Photo image,
	Notes ntext,
	ReportsTo int,
	PhotoPath nvarchar (255))
Note that your CREATE TABLE statements will probably be more complex.
Enforcing data integrity
One role of the database designer is to create rules that will prevent bad data from getting into the database. These rules must also prevent the user or application from corrupting good data. SQL Server provides several excellent tools to assist in this task.
There are three types of data integrity: entity, domain and referential.
Entity integrity requires that no duplicate rows exist. To accomplish this, one or more columns are marked as unique. Then the database engine assures that every row has a unique value in this column (or columns). The unique column or columns are designated as a Primary, Candidate or Alternate Key.
Domain integrity requires that only valid values exist in each column, including whether or not the column can accept a NULL. For example, if you create a rule that permits only the characters “M” and “F” for the Gender column, and an application attempts to place any other value into the Gender column, the database engine will reject the update attempt.
Referential integrity stems from the fact that relationships between tables are a fundamental concept of the relational database. These relationships are usually implemented
by storing matching key values in the child and parent tables. The value in the unique identifier (or Key) column of the parent table appears in the Foreign Key field of the child table. For example, the Northwind database contains two tables: Orders and Order Details. Each row in the Order Details table contains the unique identifier of one of the rows in the Orders table.
You do not want to allow the application to add an order item without specifying a specific order because every order item must belong to exactly one order.
There are two data integrity enforcement types: procedural and declarative.
Procedural data integrity enforces rules using procedural code stored in triggers and stored procedures. Procedural data integrity is often used when the database engine has no other functionality available (not the case for Microsoft SQL Server) or if the rules are too complex
to be handled by declarative integrity.
Declarative data integrity enforces data integrity by checking the rules that are defined when the tables are created. Declarative data integrity is enforced before any changes are actually made, and therefore enjoys a performance advantage over the procedural methods.
Table 2 summarizes the constraints and other options that SQL Server provides to enforce data integrity. Although not listed here, triggers and stored procedures (procedural code) can be used to enforce all types of data integrity.
Table 2. Different options for enforcing each type of data integrity.
Integrity type	Options
Entity      	PRIMARY KEY constraint, UNIQUE constraint, IDENTITY property
Domain      	DEFAULT constraint, FOREIGN KEY constraint, CHECK constraints, NOT NULL, data types
Referential 	FOREIGN KEY constraints
A discussion of the options listed in Table 2 follows.
Data types
The most basic tool a database implementer has for enforcing domain integrity is the data type. The data type of a column specifies more than what type of data the column can contain. When you assign a data type to a column, you are controlling:
• The nature of the data, such as character, numeric or binary.
• The amount of space reserved for the column. For instance, a char(9) will reserve nine bytes in the row. An int column has a fixed length of four bytes. A varchar(9) column is a variable-length column. In this case, SQL Server will allow a maximum of nine bytes for the column, but the actual amount used will be determined by the value stored in the column.
• For numeric data types only, the precision of a numeric column specifies the
maximum number of digits that a column can contain, not including the decimal point.
For instance, a decimal(7,2) column can contain a maximum of seven digits. A tinyint has a domain of 0 – 255, so the precision is three (but the amount of space reserved for storage in the row is one byte).
• Also for numeric data types, you can specify the scale. The scale determines the maximum number of positions to the right of the decimal point. The scale must be greater than or equal to zero and less than or equal to the precision (0 <= s <= p).
For a column defined as decimal(7,2), SQL Server reserves two places to the right of the decimal point.
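To make precision and scale concrete, here is a sketch using an invented table and column names:

```sql
CREATE TABLE price_example (
	list_price decimal(7,2),  -- seven digits total, two to the right of the
	                          -- decimal point: values up to 99999.99
	item_count tinyint)       -- domain of 0-255; precision 3, storage 1 byte
```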
IDENTITY property
Each table may have one column that is an Identity column, and it must be defined using one of the numeric data types. When a row is inserted into the table, SQL Server automatically generates a unique sequential numeric value for the column.
As with many column properties, the IDENTITY property can be specified when the table is initially created, or it can be applied to an existing table using the Transact-SQL ALTER TABLE command. When you specify the IDENTITY property, you have the option of specifying a starting value and an increment value. The starting value is called the seed value, and it will become the value placed into the first row added to the table. From that point forward, the values will be incremented by the increment value.
You can use the Transact-SQL @@IDENTITY system function to return the last IDENTITY value assigned. You have to be careful with this system function: It is scoped to the connection, and it contains the last IDENTITY value assigned regardless of the table.
‡
You cannot specify an explicit value for the IDENTITY column unless you enable the IDENTITY_INSERT connection option.
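The following sketch (using an invented table) shows the seed and increment values at work, along with the IDENTITY_INSERT option mentioned in the note:

```sql
CREATE TABLE gadgets (
	gadget_id int IDENTITY(1000, 10),  -- seed 1000, increment 10
	gadget_name varchar(30))

INSERT INTO gadgets (gadget_name) VALUES ('widget')    -- gadget_id = 1000
INSERT INTO gadgets (gadget_name) VALUES ('sprocket')  -- gadget_id = 1010

SELECT @@IDENTITY  -- returns 1010 on this connection

-- Supplying an explicit value requires the IDENTITY_INSERT option
SET IDENTITY_INSERT gadgets ON
INSERT INTO gadgets (gadget_id, gadget_name) VALUES (5, 'gizmo')
SET IDENTITY_INSERT gadgets OFF
```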
Nullability
The nullability property specifies whether or not the column can accept a NULL value.
It is best to specify the nullability property explicitly for each column. If you don’t, SQL Server makes the decision for you, based on connection and database settings. (See “ANSI null defaults” and “SET ANSI_NULL_DFLT_ON” in the SQL Server Books Online for more information.)
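For example (an invented table), nullability is declared column by column:

```sql
CREATE TABLE contact (
	contact_id int NOT NULL,      -- an attempt to store NULL is rejected
	fax_number varchar(24) NULL)  -- explicitly allows NULL
```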
Constraints
SQL Server provides constraints as a mechanism to specify data integrity rules. Designers prefer constraints to procedural mechanisms (triggers and stored procedures) because constraints are simpler and therefore less vulnerable to designer error. Constraints also enjoy a performance advantage over procedural mechanisms because SQL Server checks constraints before updating the data. Procedural mechanisms (i.e., trigger-based integrity solutions) check the data later in the process—after the data has been updated.
Constraints can be specified when the table is initially defined or added to an existing table. If a constraint is added to an existing table, SQL Server checks the constraint against the existing data. If the constraint fails, SQL Server rejects the constraint. To prevent SQL Server
from checking existing data, you can include the WITH NOCHECK option. However, WITH NOCHECK only affects CHECK and FOREIGN KEY constraints.
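For example, this sketch adds a CHECK constraint to an existing employee table without validating the rows already in it (the constraint name is invented):

```sql
ALTER TABLE employee WITH NOCHECK
ADD CONSTRAINT CK_emp_gender CHECK (Gender IN ('M', 'F'))
```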
PRIMARY KEY constraints
The PRIMARY KEY constraint specifies the column or columns that comprise the unique identifier (key) of the table. A table can have only one primary key. SQL Server enforces uniqueness for the entire key by creating a unique index on the column or columns that comprise the primary key (more on unique indexes later). No column that participates in the primary key may contain a NULL.
You can specify the primary key using Transact-SQL when you create the table as follows:
CREATE TABLE employee (
Emp_id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Emp_ssn char(9) NOT NULL,
…
)
You can add a PRIMARY KEY constraint to an existing table using the ALTER TABLE command:
ALTER TABLE employee
ADD CONSTRAINT PK_employee PRIMARY KEY (emp_id)
To create a PRIMARY KEY constraint using the SQL Server Enterprise Manager, see the topic “Creating and Modifying PRIMARY KEY Constraints” in the SQL Server Books Online.
UNIQUE constraints
A table may have multiple unique identifiers (although it can have only one primary key). For example, suppose that we have a patient table that contains both patient ID and patient Social Security number. Both columns are unique. If the patient ID is the primary key, we can still instruct SQL Server to enforce uniqueness of the Social Security number by declaring a UNIQUE constraint on the Social Security number column.
Just as with a PRIMARY KEY constraint, SQL Server will not allow any two rows to contain the same value in a column marked with a UNIQUE constraint. However, unlike a PRIMARY KEY constraint, a UNIQUE constraint can be placed on a nullable column.
Creating a UNIQUE constraint using Transact-SQL is very similar to creating a PRIMARY KEY constraint. The following example shows how you would add a UNIQUE constraint to an existing employee table:
ALTER TABLE employee
ADD CONSTRAINT UQ_employee UNIQUE (emp_ssn)
To create a UNIQUE constraint using the SQL Server Enterprise Manager, see the topic
“Creating and Modifying UNIQUE Constraints” in the SQL Server Books Online.
CHECK constraints
CHECK constraints enforce domain integrity and are similar to Visual FoxPro’s Field
and Row rules. To create a CHECK constraint, you specify a logical expression involving the column you wish to check. This expression must not evaluate to False when attempting to modify the database; otherwise, SQL Server does not permit the modification to occur. Unlike Visual FoxPro, SQL Server does not allow user-defined functions inside of
CHECK constraints.
You can create a CHECK constraint when you initially define the table or afterwards when the table already exists.
The following example creates a CHECK constraint on the Gender column that allows only the character values “M” and “F”:
CREATE TABLE employee (
Emp_id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Emp_ssn char(9) NOT NULL UNIQUE,
Gender char(1) CHECK (Gender IN ('M', 'F')),
…
)
To create a CHECK constraint using the SQL Server Enterprise Manager, see the topic
“Creating and Modifying CHECK Constraints” in the SQL Server Books Online.
DEFAULT constraints
A DEFAULT constraint specifies a value to place in a column during an insert if a value was not supplied explicitly. The value specified in the DEFAULT constraint must be compatible with the data type for the column. Unlike Visual FoxPro, SQL Server DEFAULT constraints cannot contain user-defined functions.
Here’s the example from the CHECK constraint, but this time a new column has been added to capture the date and time that the row was created:
CREATE TABLE employee (
Emp_id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Emp_ssn char(9) NOT NULL UNIQUE,
Gender char(1) CHECK (Gender IN ('M', 'F')),
…
creat_date datetime DEFAULT (GETDATE())
)
In this example, if a specific value is not supplied for the creat_date column, SQL Server will execute the Transact-SQL GETDATE() function and automatically insert the current date and time into the column.
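For example, an insert that omits the creat_date column lets the default fill it in (the SSN value is invented):

```sql
INSERT INTO employee (Emp_ssn, Gender)
VALUES ('123456789', 'F')
-- creat_date receives the current date and time;
-- Emp_id is generated by the IDENTITY property
```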
To create a DEFAULT constraint using the SQL Server Enterprise Manager, see the topic
“Creating and Modifying DEFAULT Constraints” in the SQL Server Books Online.
FOREIGN KEY constraints
A FOREIGN KEY constraint serves two purposes. It enforces referential integrity by checking the relationship between the two tables, and it enforces domain integrity on the
foreign key column or columns by allowing only valid primary keys from the parent table. A FOREIGN KEY constraint usually references the parent’s primary key, but it can also reference any of the parent’s other unique keys (the column or columns that comprise UNIQUE constraints).
The following ALTER TABLE command defines a FOREIGN KEY constraint on the
Order Details table that references the Orders table:
ALTER TABLE [Order Details]
ADD CONSTRAINT FK_orders_order_details
FOREIGN KEY (Orderid)
REFERENCES Orders(Orderid)
FK_orders_order_details is the name of the constraint. All constraints require a name; you can specify one explicitly, as in the example, or SQL Server will generate one for you.
FOREIGN KEY (Orderid) identifies the foreign key in the child table: [Order
Details].OrderID.
REFERENCES Orders(Orderid) specifies the primary key in the parent table to which
the foreign key points. This means that the value in the foreign key ([Order Details].OrderID)
must match the value in the parent primary key (Orders.OrderID).
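With this constraint in place, an insert that references a nonexistent order is rejected. In the following sketch, the order ID is invented; the column list comes from the Northwind Order Details table:

```sql
-- Fails with a FOREIGN KEY constraint violation
-- if no order 99999 exists in the Orders table
INSERT INTO [Order Details] (OrderID, ProductID, UnitPrice, Quantity, Discount)
VALUES (99999, 11, 14.00, 2, 0)
```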
To create FOREIGN KEY constraints using the SQL Server Enterprise Manager, see the topic “Creating and Modifying FOREIGN KEY Constraints” in the SQL Server Books Online.
Indexes
Correctly designed indexes are critically important because of their effect on database performance. (This is true of both SQL Server and VFP databases.)
When SQL Server searches for a specific row or groups of rows, it can check every row of the table or it can find an appropriate index and use the information in the index to go directly to the desired rows. The optimizer will decide which method is less expensive (in terms of page I/O) and choose it.
In addition to speeding up searches, indexes are used to enforce uniqueness. (See the earlier discussion of PRIMARY KEY and UNIQUE constraints.)
It is generally a good idea to index the following items:
• Columns within a primary key
• Columns within a foreign key
• Columns that frequently appear in WHERE clauses of queries
• Columns that the application uses frequently as the basis for a sort
You should not create indexes on the following items:
• Columns with few distinct values
• Columns that do not appear in the WHERE clauses of queries
You cannot create an index on columns of the following data types:
• Bit
• Image
• Text
Creating indexes
Indexes can be created by using the Transact-SQL CREATE INDEX command or the SQL Server Enterprise Manager. The partial syntax for the CREATE INDEX command is:
CREATE [UNIQUE] [CLUSTERED|NONCLUSTERED] INDEX index_name
ON table(column [,…n])
Here’s an example:
CREATE INDEX orders_employeeid
ON orders(employeeid)
This statement creates an index on the employeeid column of the orders table in the
Northwind database.
You can create an index on more than one column. Such an index is called a composite index.
CREATE INDEX employee_name
ON employees(lastname, firstname)
In contrast to Visual FoxPro, the columns of a composite index need not be of the same data type. In addition, SQL Server will probably not use a composite index to solve a query unless the high-order column (in this case, lastname) appears in the WHERE clause of the query. SQL Server keeps some statistical information about the distribution of the data within the index. The statistics are used by the optimizer to estimate how useful the index would be in solving the query. For a composite index, SQL Server keeps statistics only on the high-order column.
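To make the high-order column rule concrete, consider these two queries against the employee_name index (the literal values are invented):

```sql
-- The optimizer can use employee_name: the high-order
-- column (lastname) appears in the WHERE clause
SELECT * FROM employees WHERE lastname = 'Davolio'

-- The optimizer will probably not use employee_name:
-- only the low-order column (firstname) is restricted
SELECT * FROM employees WHERE firstname = 'Nancy'
```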
Indexes are stored internally as a “Balanced Tree” (or “B-Tree” for short). In keeping
with the tree metaphor, different parts of the B-Tree are described using terminology similar to that of a real tree—except upside down (see Figure 4). The Root provides the starting point for all index searches. Below the root (remember, this tree is upside-down) are the Intermediate (also known as non-leaf-level) nodes. Large indexes will probably have multiple levels of intermediate nodes.
At the very bottom of the index are the Leaf nodes. All the keys at the leaf level of the
index are sorted in ascending order based on the key values. The type of index determines the content of the Leaf nodes.
Figure 4. A simple example of a SQL Server B-Tree.
Types of indexes
SQL Server supports two types of indexes: clustered and non-clustered.
Non-clustered indexes are very similar to Visual FoxPro indexes. The leaf level of a non- clustered index contains one key for every row in the table. In addition, each key has a pointer back to the row in the table. This pointer is called a bookmark and has two possible forms depending on whether or not the table has a clustered index (discussed later). If the table does not have a clustered index, the bookmark is a Row Identifier (RID), which is the actual row location in the form of file#:page#:slot#. If the table does have a clustered index, the bookmark contains the key from the clustered index for that row.
You may have up to 249 non-clustered indexes per table, although it is common to have far fewer.
The leaf level of a clustered index is the table itself. The clustered index sits on top of the table. As a result, the table is physically sorted according to the clustered key. For this reason, a table can have only one clustered index.
SQL Server forces all clustered keys to be unique. If the index was not explicitly created as UNIQUE, SQL Server adds a four-byte value to the key to make it unique. All non-clustered indexes on a clustered table (a table with a clustered index) will use the clustered key as its bookmark.
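Creating a clustered index uses the same CREATE INDEX syntax shown earlier, with the CLUSTERED keyword added; for example (the index name is invented):

```sql
CREATE CLUSTERED INDEX orders_orderdate
ON orders(orderdate)
```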
Views
A view is a virtual table that has no persistent storage or physical presence. It is actually a definition of a query. Its contents are defined by the results of the query when the query is executed against base tables (that is, physical or real tables). The view is dynamically produced whenever it is referenced. To the application, a view looks and behaves just like a base table.
If views look, smell and act like real tables, why bother to use them instead of their base tables? A view can be used to limit a user’s access to data in a table. Using a view, we can make only certain columns or rows available. For example, we may want everyone in the
organization to have access to the name, address and phone number information in the employee table, but only Human Resources personnel should have access to the salary details. To support this requirement, we would create a view that exposes only the name, address and phone number. Everyone in the organization would access the employee data through this view except, of course, Human Resources personnel.
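A sketch of such a view might look like this, assuming an employees table with these columns (the column list is abbreviated and the view name invented):

```sql
-- Exposes only non-sensitive columns; salary details stay hidden
CREATE VIEW employee_public AS
SELECT LastName, FirstName, Address, HomePhone
FROM employees
```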
Another use for views is to simplify a complex join situation within the database. The pubs sample database contains a table of authors and a table of titles. Since there is a many-to-many relationship between the two tables, a third table, titleauthor, exists that maps authors to titles.
A view could be created that joins the authors, titles and titleauthor tables so that users are presented with a simpler data structure to use as the basis for queries and reports.
You create (that is, define) a view using the Transact-SQL CREATE VIEW statement. The
CREATE VIEW statement to create the view discussed previously would look like this:
USE pubs
GO
CREATE VIEW titlesandauthors AS
SELECT
Titles.title_id,
Titles.title,
Authors.au_id,
Authors.au_lname,
Authors.au_fname,
Titleauthor.royaltyper AS RoyaltyPercentage
FROM titles
INNER JOIN titleauthor ON titles.title_id = titleauthor.title_id
INNER JOIN authors ON authors.au_id = titleauthor.au_id
Using the view is just a matter of referring to it as you would any real table:
SELECT *
FROM titlesandauthors
ORDER BY title
Stored procedures
A stored procedure is a collection of Transact-SQL statements that is stored in the database. Stored procedures are similar to procedures in other languages. They can accept parameters, call other stored procedures (including recursive calls), and return values and status codes back to the caller. Unlike procedures in other languages, stored procedures cannot be used
in expressions.
Stored procedures are not permanently “compiled” and stored in the database. The only thing “stored” about a stored procedure is the source code, which is physically stored in the SYSCOMMENTS system table. When SQL Server needs to execute a stored procedure, it looks in the cache to see whether there is a compiled version there. If so, SQL Server reuses the cached version. If not, SQL Server gets the definition from the SYSCOMMENTS table, parses it, optimizes it, compiles it and places the resulting execution plan in the cache. The execution plan remains there until it’s paged out (using a “least recently used” algorithm) or the server is restarted.
Stored procedures are a powerful tool in the database implementer’s toolbox. Stored procedures can be used to encapsulate logic and share it across applications. They can provide a performance advantage, by allowing SQL Server to reuse execution plans and skip the parse, optimize and compile steps.
Like views, stored procedures can also be used to limit or control access to data.
Stored procedures are created with the Transact-SQL CREATE PROCEDURE command:
USE pubs
GO
CREATE PROCEDURE getauthors AS
SELECT * FROM authors
The previous example was relatively simple. It simply returns the entire authors table to the caller. The next example adds the use of a parameter that specifies a filter condition:
USE pubs
GO
CREATE PROCEDURE getauthor
@author_id varchar(11)
AS
SELECT * FROM authors WHERE au_id = @author_id
IF @@ROWCOUNT > 0 RETURN 0
ELSE RETURN -1
This example takes a parameter, the ID of an author, and returns the row from the authors table that matches it. There’s also some additional logic to check the number of affected rows using the @@ROWCOUNT system function (similar to Visual FoxPro’s _TALLY system variable) and return a status code of zero (0) for success or –1 for no matches.
To execute this stored procedure, you would use the EXECUTE statement:
DECLARE @result int -- we need a variable to catch the returned status
EXECUTE @result = getauthor '172-32-1176' -- passing the parameters by position

or

DECLARE @result int -- we need a variable to catch the returned status
EXECUTE @result = getauthor @author_id = '172-32-1176' -- passing by name
Note that the RETURN statement can only return an integer value; therefore, it cannot be used to return character strings or other data types. Fortunately, returning a result set and the RETURN statement are not the only ways to get data back from a stored procedure. You can declare specific parameters as OUTPUT parameters. OUTPUT parameters allow a value to be returned to the calling routine, similar to passing a parameter by reference in Visual FoxPro. The following example counts the number of books written by the specified author and returns the count through an OUTPUT parameter:
USE pubs
GO
CREATE PROCEDURE BookCount
@author_id varchar(11),
@bookcnt int OUTPUT
AS
SELECT @bookcnt = COUNT(*) FROM titleauthor
WHERE au_id = @author_id
Calling this stored procedure looks like this:
DECLARE @lnBookcnt int
EXECUTE BookCount '172-32-1176', @lnBookcnt OUTPUT
The OUTPUT keyword is required in the stored procedure and when the procedure is called. If the keyword is omitted in either place, SQL Server returns an error.
Here’s a more complex example of a stored procedure that handles errors and manages a transaction:
-- Create a small database and the necessary tables
USE master
CREATE DATABASE bank
-- The Funds table is the only table that's needed. We're only going to
-- create the columns required by the TransferFunds stored procedure.
USE bank
CREATE TABLE Funds (
Fund_id int IDENTITY(10000,1) PRIMARY KEY,
Amount money)
GO
CREATE PROCEDURE TransferFunds
@SourceFund int = NULL,
@TargetFund int = NULL,
@amount money = NULL
AS
----------------------
-- Parameter checking
----------------------
IF @SourceFund IS NULL
BEGIN
RAISERROR ('You must supply a source fund', 11, 1)
RETURN 1
END
IF NOT EXISTS (SELECT * FROM funds WHERE fund_id = @SourceFund)
BEGIN
RAISERROR ('Source fund not found', 11, 1)
RETURN 1
END
IF @TargetFund IS NULL
BEGIN
RAISERROR ('You must supply a Target fund', 11, 1)
RETURN 1
END
IF NOT EXISTS (SELECT * FROM funds WHERE fund_id = @TargetFund)
BEGIN
RAISERROR ('Target fund not found', 11, 1)
RETURN 1
END
IF @amount IS NULL OR @amount < 0
BEGIN
RAISERROR ('Invalid transfer amount', 11, 1)
RETURN 1
END
---------------------
-- Make the transfer
---------------------
BEGIN TRANSACTION Fund_Transfer
UPDATE funds SET amount = amount - @amount WHERE fund_id = @SourceFund
IF @@ERROR <> 0 GOTO AbortTransfer
UPDATE funds SET amount = amount + @amount WHERE fund_id = @TargetFund
IF @@ERROR <> 0 GOTO AbortTransfer
COMMIT TRANSACTION Fund_Transfer
RETURN 0
AbortTransfer:
ROLLBACK TRANSACTION Fund_Transfer
RETURN 1
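Calling the procedure and checking its status might look like this (the fund IDs follow from the IDENTITY seed of 10000 above):

```sql
INSERT INTO Funds (Amount) VALUES (500)  -- becomes fund 10000
INSERT INTO Funds (Amount) VALUES (200)  -- becomes fund 10001

DECLARE @status int
EXECUTE @status = TransferFunds 10000, 10001, 50
IF @status <> 0
	PRINT 'Transfer failed'
```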
Triggers
A trigger is a special type of stored procedure. It is tightly coupled to a table and is executed by SQL Server in response to specific operations against the table. The most common use of triggers is to enforce rules that are specified procedurally (that is, in procedural code). Triggers are also used to cascade deletes and updates to child tables and to maintain denormalized data.
When you create a trigger, you specify which operation or operations (INSERT, UPDATE and/or DELETE) cause the trigger to fire. New in SQL Server 7.0 is the ability to have multiple triggers for the same operation. For example, you can have multiple update triggers, where each trigger essentially “watches” for changes in a specific column.
‡
Microsoft has declared that if multiple triggers are defined for the same operations, their order of operation is unknown.
Unlike a Visual FoxPro trigger, which fires once for each affected row, a SQL Server trigger fires once no matter how many rows were affected by the query. The trigger always fires once—even if the query affected no rows. When you write a trigger, you must consider whether you need additional code to detect and handle the situation where no rows were affected.
Triggers fire after the data has been modified but before the transaction is committed (in
the case of an implicit transaction). Therefore, a trigger can cause a transaction to be aborted by issuing a ROLLBACK TRANSACTION from within the trigger. Because the trigger fires after SQL Server modifies the data, the trigger can view the before and after results of the query.
This is accomplished by using two special tables called Inserted and Deleted. The Inserted and Deleted tables exist in memory and only for the life of the trigger. These tables are not visible outside the trigger. (For more information on the Inserted and Deleted tables, see the following sections in this chapter: “The INSERT operation,” “The DELETE operation” and “The UPDATE operation.”)
You create a trigger using the Transact-SQL CREATE TRIGGER statement. The partial syntax is shown here:
CREATE TRIGGER trigger_name
ON table_name
FOR [INSERT][,][UPDATE][,][DELETE]
AS
Sql statements
Here’s a simple example that maintains two audit columns, upd_datetime and upd_user. First we’ll add the two columns to the products table and then create the trigger:
USE northwind
GO
ALTER TABLE Products ADD
upd_datetime datetime NULL,
upd_user varchar(10) NULL
GO
CREATE TRIGGER product_audit
ON products
FOR UPDATE
AS
UPDATE products
SET upd_datetime = GETDATE(),
Upd_user = USER_NAME()
WHERE productid IN (SELECT productid FROM inserted)
The previous example referred to the Inserted table that was mentioned earlier. Let’s
look at the operation of triggers, and their effects on the Inserted and Deleted tables, in a little more detail.
The INSERT operation
During an INSERT operation, SQL Server inserts the new rows into the table and places a copy of them into the special Inserted table. This table permits the trigger to detect new rows and act upon them. The trigger in the following example updates the products.UnitsInStock column whenever an item is sold:
CREATE TRIGGER maintain_UnitsInStock
ON [Order Details]
FOR insert
AS
UPDATE Products
SET UnitsInStock = UnitsInStock - (
SELECT quantity
FROM inserted
WHERE inserted.productid = products.productid)
WHERE productid IN (SELECT productid FROM inserted)
This example has a flaw: It will work correctly only if rows are inserted into the Order Details table one at a time. If one INSERT operation manages to produce two Order Details rows for the same product, the trigger will generate an error since this specific use of a subquery allows only one row to be returned. Fortunately, this problem is easy to remedy by replacing the quantity with the SUM aggregate function. The corrected version follows:
CREATE TRIGGER maintain_UnitsInStock
ON [Order Details]
FOR insert
AS
UPDATE Products
SET UnitsInStock = UnitsInStock - (
SELECT SUM(quantity)
FROM inserted
WHERE inserted.productid = products.productid)
WHERE productid IN (SELECT productid FROM inserted)
The DELETE operation
During a DELETE operation, SQL Server removes specified rows from the table and places them in the special Deleted table. Similar to the special Inserted table, the Deleted table permits the trigger to detect deleted rows.
The following trigger cascades a delete from the Orders table to the Order Details table:
CREATE TRIGGER remove_orderliness
ON orders
FOR DELETE
AS
DELETE FROM [order details]
WHERE orderid IN (SELECT orderid FROM deleted)
This trigger will never fire if you have a FOREIGN KEY constraint defined between the Order Details and Orders tables. Remember, constraints are checked before any work is actually done, and triggers fire after changes are made. Before SQL Server executes the DELETE on Orders, the FOREIGN KEY constraint will force it to check for references in the Order Details table. Finding any foreign keys referencing the row that would be deleted will cause SQL Server to return a constraint violation error and abort the statement. In order to implement cascading deletes, you will not be able to use FOREIGN KEY constraints between the participating tables.
The UPDATE operation
The INSERT and DELETE operations cause the creation of only one of the special tables, but the UPDATE operation causes the creation of both the Inserted and Deleted tables. This is understandable if you think of an UPDATE operation as a delete of an existing row followed by an insert of the modified row. In the course of an UPDATE operation, SQL Server places a copy of the affected rows into the Deleted table before making the modifications, and then places a copy of the modified rows into the Inserted table after making the modification.
Thus, the Deleted table has the before image and the Inserted table has the after image of all modified rows.
The following example prevents any single UPDATE operation from increasing the price of a product by more than 25 percent:
CREATE TRIGGER price_watcher
ON products
FOR UPDATE
AS
IF UPDATE(unitprice)
BEGIN
	IF EXISTS ( SELECT *
		FROM inserted INNER JOIN deleted
			ON inserted.productid = deleted.productid
		WHERE inserted.unitprice/deleted.unitprice > 1.25)
	BEGIN
		RAISERROR(
			'No product price may be increased by more than 25%',
			10, 1)
		ROLLBACK TRANSACTION
	END
END
Summary
Our goal for this chapter was to give you some basic information about SQL Server and introduce some fundamental concepts and the various database objects that are used to implement a database design.
In the next chapter, we’ll look at one way to use Visual FoxPro to access SQL Server.