
In the first part of this book, you learned about the capabilities of Visual FoxPro as a client/server tool and the capabilities and features of Microsoft SQL Server. We also demonstrated why client/server applications can be better than file-server applications. But before you begin building a client/server application, you'll want to know about the design choices that have to be made. This chapter covers the issues raised by moving an application from file-server to client/server and the options for design, performance and security.

Microsoft Visual FoxPro is a full-featured product with a fully developed language, visual tools for handling forms and reports, object orientation features, a great database engine, and many other tools that make the development process run smoothly. Microsoft SQL Server is a database and query engine with quite a few administration tools to make things like security and backups a snap to perform. But SQL Server lacks a front end, and although it has a language (Transact-SQL), it is not designed to handle everything that a product like VFP can. The lack of a front end is not a detriment; it's simply a design decision to allow developers to use products with which they are already familiar.

But as the saying goes, "Familiarity breeds contempt." In this case, the contempt is not for the devil you know, but rather for the devil you don't know: SQL Server. In this situation, familiarity with a known product leads you to feel uncomfortable with the capabilities of the new product. That's not necessarily a bad thing, but with client/server, you may have to rethink your programming habits. You have to remember that client/server performance is better than file-server when the pipeline gets smaller. But that is only true when you treat the pipeline gingerly, restricting access across the network to messages between client and server rather than massive data transfers.

The question that arises is, just how should you design a client/server application? Where should the messages be restricted and how? Should the server do everything that has to do with the data? When should you take advantage of Visual FoxPro's local data speed and language capabilities? How do you reconcile VFP's database container against SQL Server's database?

SQL database design issues

When designing a client/server database, the main questions are database integrity and indexing. The answers to these questions are easy. If you're a Visual FoxPro developer, you're going to want to use the power of VFP to handle integrity and validation. If you're a SQL Server database administrator, you're going to want to use the built-in capabilities of the SQL database. Unfortunately, regardless of what role you play, you have to remember that the other players are not all wrong.

Data integrity mechanisms

SQL Server or Visual FoxPro? Database design or language and forms? Why should SQL Server be used? Because the integrity is in force no matter what front-end product is used—there is no way to circumvent the database design. This was the choice promoted when the database container was introduced in Visual FoxPro 3.0. Why use Visual FoxPro for integrity? Because of its object-oriented capabilities, its event-driven form design and its strong language. The real client/server question here is whether integrity should be enforced before the data is sent to the server, or at the server.

Data integrity takes four forms: entity integrity, domain integrity, referential integrity and user-defined integrity, sometimes called business rules. Entity integrity guarantees that no two records in a table can be exactly alike. Domain integrity enforces the values that can be entered into a field, and referential integrity ensures that the relationship between two tables cannot be broken. User-defined integrity is usually defined as those rules that a business sets up that do not fall into the other categories. Domain integrity can sometimes be thought of as a form of this, but since the mechanisms for enforcing domain integrity can be built into the structure of a table, it is considered along with the other forms.

In this section, the various mechanisms for enforcing integrity will be explored, both in Visual FoxPro and in SQL Server.

Data types, NULLs and defaults

These are forms of domain integrity—that is, restrictions on what values are allowed into a particular field. Data types restrict the kinds of values a field accepts; disallowing Null values is a further restriction; and defaults supply a value to be used in lieu of any input value.

Data types

No one would argue against a database having a data type defined for each field in a table. The decision is whether the front-end program should also know about the data type. When using a remote view in Visual FoxPro, the type becomes known when the view is used, so there is no question that the proper data type restriction will be handled—except when certain SQL Server types are used that do not have a correct mapping to the types in Visual FoxPro. These are the types that allow designers to avoid wasting database space because of the defined ranges of allowable values. Smallmoney is only four bytes, as opposed to Money's eight bytes, but both types are converted to Currency (eight bytes) in VFP. The same thing goes for Smalldatetime. Besides Int, SQL Server also supports Smallint and Tinyint, which are two bytes and one byte, respectively; both get converted into Integer in VFP. Binary and Varbinary are converted into Memo (binary) types in VFP and are virtually uneditable directly (code can easily be written to convert back and forth), and Uniqueidentifier (which is a 16-byte binary value presented as 32 hexadecimal characters) is treated as Character data in VFP. Because of these conversions, it is very easy to forget and allow values using Visual FoxPro's restrictions that will fail when the view sends the data back to SQL Server.

The issue is: Where should the problem be handled? Since the remote view changes some data types in conversion, you might choose to use Visual FoxPro code to enforce the tighter restraints of the SQL Server data types. This can be done at the view level via the View Field Properties dialog by setting a Field Validation Rule where needed (as in Figure 1), in the forms used for data entry, or even in classes that you create for the fields in the forms. When using forms or classes, you would put the test into the LostFocus (or perhaps Valid) event of the control or class.

Figure 1. The View Field Properties dialog showing a Validation Rule entered for the UnitPrice field.

The complete expression cannot be seen, and there is a comment that explains the validation. Otherwise, without any type of code on the client side, you would have to process any errors returned by the server. This means that you would have to learn the various error codes that might be returned from the server and then write the code in VFP to handle those errors.
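If you prefer to set such a rule programmatically rather than through the dialog, the DBSETPROP() function does the same job. Here is a minimal sketch, assuming a hypothetical database named sales containing a remote view v_products whose UnitPrice field maps to SQL Server's Smallmoney type; because VFP converts Smallmoney to Currency, the client-side rule enforces the narrower server range:

OPEN DATABASE sales
* Enforce Smallmoney's range (-214,748.3648 to 214,748.3647) on the client.
DBSETPROP("v_products.UnitPrice", "Field", "RuleExpression", ;
   "UnitPrice >= -214748.3648 AND UnitPrice <= 214748.3647")
DBSETPROP("v_products.UnitPrice", "Field", "RuleText", ;
   "'UnitPrice is outside the Smallmoney range'")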

The downside of coding data type validations on the client side is that since a data type is defined in the database, any changes in the data type on the server would require changes in the code on the client. The issue here would be one of deployment. Although the server change only necessitates a change in one place, the client changes mean recompiling the software and then distributing it on every computer where it's used. By using error processing on the client side, every change in structure would not require a rewrite because the errors would still be valid.

Another less favorable way to handle this problem is to restrict the database design on the server to using only those data types that are directly transferable to the VFP data types, thus avoiding the range errors that might result.

Nulls

Both SQL Server and Visual FoxPro recognize Null values. A problem only surfaces when a field that accepts Null values on the server is filled with spaces (for Character fields—for other data types, there are other values to watch for) on the client. Instead of getting the Null value, which is what may have been intended, the server will receive the field with the spaces. When binding a form's controls directly to the remote view, that would normally not be a problem. If the user does not fill in a value for the field, then the view would place a Null value in the field on the server. But if the user actually types in a blank of any kind, then the spaces will be sent to the server.

Handling this on the front end may require special coding so that if a field is left blank, a Null value is inserted into the field. The problem here is that a bound control is more difficult to handle. The other problem is once again how to know the structure of the database on the client side. In order for you to use VFP to process the situation, you have to know the database structure of the server, and if there are any changes, then those changes must migrate to the client application.
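One way to approach it is a small pre-save routine that turns blanks into true Nulls. The following is a minimal sketch, assuming a remote view with the hypothetical alias v_customers and a character field company that allows Nulls on the server; it would run just before TABLEUPDATE():

* Convert a blank entry into a true Null before the update is sent.
IF EMPTY(ALLTRIM(v_customers.company))
   REPLACE company WITH .NULL. IN v_customers
ENDIF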

The flip side of Null is Not Null, where a Null value is not allowed. This means that a value must be returned to the server, or errors occur. When doing an insert using a remote view, if a field is left blank, then VFP tries to insert a Null value into that field, causing the insert to fail. For any field that does not allow Null values, either validation has to be done on the client side, or the error code must be processed from the server.

There is no easy way of handling Nulls on the client side. In any case where you will be using remote views, you will need to know the Null property of the field on the server. Then it is up to you as to whether to put the requirements of the field in the view, the form or a class.

Defaults

Defaults on the server can be very handy. They provide the necessary value for a field when no value is given for that field when a record is inserted from the client. Defaults override the blank on the client side. (Actually, when a record is added through a remote view, Visual FoxPro generates an INSERT statement that will only include values for the fields that have been modified in the new record. That means any fields that are left blank are omitted from the INSERT statement.) If a field on the server allows Null values and has a default, the default is used whenever the field is left blank in a new record. In this way a default covers the situation better than the Null property.

The question is whether or not to put the default value on the client side of the application as well. The rationale for this is to let the data entry people see the value that would be used. On the other hand, this requires that the client application be kept informed of any changes on the server.

Rules and check constraints

As explained in Chapter 3, "Introduction to SQL Server 7.0," a rule is an object containing a simple logical test of a field's value, and if the test returns False, then the modification or insert is rejected. A CHECK constraint is similar, except it is a part of the table structure itself. A rule must be bound to a field after it is created. A field can only have one rule bound to it. A CHECK constraint can test multiple fields from the same record, and there can be many CHECK constraints that impact a single field.

Rules and CHECK constraints both enforce domain integrity. They are used to check the range of possible values for a field, or perhaps to perform a pattern check on a Character data type. Besides being enforced at the server level, they can be enforced at the client level via field validation rules. These can be set at the client by using the View Field Properties dialog or by using the DBSETPROP() function to set a Row Rule Expression. You can also process these rules via events in forms or classes, or by using VFP code when saving the changes to a record.
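To make the pairing concrete, here is a minimal sketch that creates a CHECK constraint on the server and mirrors it in a row rule on the client view. The connection handle lhConnection, the Products table and the v_products view are hypothetical names:

* Server side: the constraint is enforced no matter what front end is used.
SQLExec(lhConnection, ;
   "ALTER TABLE Products ADD CONSTRAINT CK_Products_UnitPrice " + ;
   "CHECK (UnitPrice >= 0)")
* Client side: the same test, so most violations never cross the network.
DBSETPROP("v_products", "View", "RuleExpression", "UnitPrice >= 0")
DBSETPROP("v_products", "View", "RuleText", "'UnitPrice cannot be negative'")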

Primary keys

Primary keys are used to enforce entity integrity. By definition, the primary key is a value that cannot be duplicated throughout an entire table. No two records can have the same primary key value. Therefore, no two records can be exactly alike.

For all practical purposes, primary keys are created and behave the same in both Visual FoxPro and Microsoft SQL Server. When you designate a primary key, both products create an index (called candidate in VFP, unique in SQL Server) that enforces the primary key rule. Primary keys are especially important to have on the server, because a remote view needs at a minimum the primary key value in order to update existing records.

The source values for primary keys can either come from the data itself (natural) or can be artificial, generated by your application or the system. Some designers choose to use natural data as a primary key so that no extra data need be created for that purpose. This may come out of a natural way of uniquely identifying individual entities within a table, such as name (by using a combination of the name parts), Social Security number, badge number, part number, invoice number, or any of several ways that are natural to the data itself.

Other designers prefer to create new fields for a primary key because of compactness or to prevent changes to the key itself. An artificial or surrogate key is usually a single field that is either an integer value or a short string. Since primary keys are generally used as foreign keys when establishing relationships between tables, keeping them short is important as both a disk space saver and a performance enhancement for quick joins between tables.

Generating keys

There is no intent here to say that one way of generating primary keys is better than another, but rather simply to explore the issues when generating them. If the key is being created by the application, or perhaps the usage of the application, then there is not really a problem. The code for creating that key will be in the VFP program, so that when a new record is created, the new value for the primary key will be inserted along with the rest of the data. The only important note is to remember that SQL Server may also be set to enforce uniqueness, so if there is a possibility of error, it will have to be handled by the application code as well. You will also have to be sure to set the primary key as updatable in the view definition.

On the other hand, if you use some mechanism on the server, then there are definite repercussions. The first is that you will not know the value of the new primary key until after the insert has completed. In fact, using a remote view may necessitate a REQUERY() of the view after the insert has completed. This can cause quite a problem in terms of performance, and there is no shortcut around it. The performance issue is that the insert is handled in the background by VFP, and since the new primary key is only on the server, it cannot be seen on the client. Normally a refresh of the current record would show any changes to a record in a view, but since the REFRESH() function requires the primary key on the client, it will fail to show the new record.

There are ways of handling this situation, but they all require multiple trips between the client and server. If you are using the IDENTITY property (described in Chapter 3, "Introduction to SQL Server 7.0") for creating primary keys in a table, you could use the following method. When you add a record to a table with the IDENTITY property, the @@IDENTITY function will return the last value generated by an insert for that connection. (This was also described in Chapter 3.)

Regardless of how you implement this technique, you must first find out the connection handle being used by the cursor where the results of the view are stored. Next, generate an insert command and use SQL pass through to send the command to the server. Finally, use SQL pass through to send a command that will return the @@IDENTITY value in another cursor. The following code demonstrates this technique:

lhConnection = CURSORGETPROP("ConnectHandle")

* Send the insert; the server generates the IDENTITY value.
lnOK = SQLExec( lhConnection, "INSERT INTO TableName (col1, col2, col3) " + ;
                "VALUES (12, 'First', 'Last')")
IF lnOK = 1
   * Ask the same connection for the last IDENTITY value it generated.
   lnOK = SQLExec( lhConnection, "SELECT @@IDENTITY AS PriKey", "IdentTable")
   IF lnOK = 1
      SELECT IdentTable
      lnNewRec = PriKey
      USE
   ENDIF
ENDIF

After this, lnNewRec will have the new primary key value generated by the IDENTITY property on the server. You can place the value into the primary key field of the view, but you will not be able to edit that record until after a REQUERY() of the view has been done.

Note that you can also use the TABLEUPDATE() function after you INSERT data into a view to provide the same INSERT INTO statement as described previously. However, you must still determine the connection handle and use a SQLExec() call to grab the @@IDENTITY value, and you will still need to REQUERY() before being able to edit the record in the VFP cursor.

This technique works even if you request the IDENTITY value before committing the changes to a transaction. Therefore, if you insert a parent record after beginning a transaction, you can still use SELECT @@IDENTITY to get the foreign key value that you need for any child records. However, if you ROLLBACK the transaction, the IDENTITY value is essentially "lost."

Referential integrity

There are various ways to enforce referential integrity on the server and a couple of ways in Visual FoxPro. The difference is that although SQL Server supports what is known as Declarative Referential Integrity (DRI), where the integrity is built into the database structure, VFP does not. Both server and client can use triggers for referential integrity.

Regardless of which method is used on the server, there is virtually nothing that can be done on the client to prevent a referential integrity violation. Since these problems occur because of referencing another table in the database, the data modification must pass through to the server in order to get the error. The only thing that you can do is program for the error, and handle it after the server returns the result.

DRI/foreign keys

Just like Visual FoxPro, Microsoft SQL Server supports the creation of relationships via the CREATE TABLE or ALTER TABLE commands. The difference is that in VFP, the options only create a defined relationship that is a precursor to using the Referential Integrity (RI) Builder, whereas in SQL Server, the relationship becomes the referential integrity.

You can establish a relationship in both products by creating a foreign key in a child table that references the primary key in a parent table. In SQL Server, this is called DRI, and it establishes a restrictive relationship. DRI will cause an error when an attempt is made to delete a record in the parent table that has records in a child table, or when the primary key of a record in the parent table is modified and that record has related records in a child table. This means that there is no way to institute cascading deletes or updates when using DRI.

One way that you could do cascading deletes is by deleting the child records first via some client code. This is not easily performed when using remote views, but it is one way of handling it from VFP.
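As a sketch of that approach, the following deletes the child records and then the parent through SQL pass through, wrapped in a server-side transaction so a failure leaves the data consistent. The connection handle and the Orders/OrderItems tables are hypothetical:

SQLExec(lhConnection, "BEGIN TRANSACTION")
* Children first, then the parent.
lnOK = SQLExec(lhConnection, "DELETE FROM OrderItems WHERE OrderID = 123")
IF lnOK = 1
   lnOK = SQLExec(lhConnection, "DELETE FROM Orders WHERE OrderID = 123")
ENDIF
IF lnOK = 1
   SQLExec(lhConnection, "COMMIT TRANSACTION")
ELSE
   SQLExec(lhConnection, "ROLLBACK TRANSACTION")
ENDIF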

Triggers

Both client and server support the use of triggers for handling referential integrity. In Visual FoxPro it is possible to use the RI Builder to make the job of code generation easier, but there is no such mechanism in SQL Server. There you would have to write Transact-SQL code to handle the situation. That means you will write code for the Update and Insert triggers on the child table, and Update and Delete triggers on the parent table. Note that the Upsizing Wizard (covered in Chapter 5, "Upsizing: Moving from File-Server to Client/Server") will write basic RI code into the triggers for you, but only if you have selected the correct options and employed the RI Builder in your VFP database.
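For a flavor of what that Transact-SQL looks like, here is a minimal sketch of a cascading Delete trigger, created from VFP over a hypothetical connection; the Orders and OrderItems table names are also hypothetical. Note that a SQL Server trigger fires once per statement, so the deleted pseudo-table may hold many rows:

* Cascade deletes from Orders down to OrderItems.
lcTrigger = "CREATE TRIGGER trgOrders_Delete ON Orders FOR DELETE AS " + ;
            "DELETE OrderItems FROM OrderItems " + ;
            "INNER JOIN deleted ON OrderItems.OrderID = deleted.OrderID"
SQLExec(lhConnection, lcTrigger)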

Review of data integrity

At this point, you have seen the various types of integrity and the options available via the built-in tools of Microsoft SQL Server, as well as their impact on the use of remote views in Visual FoxPro. There are many cases when a database design on the server can be mimicked on the client via one of several tools, either through field properties of a view, a form’s events or the events of a class. Along with those options, you can also handle the problems through trapping errors.

When possible, transferring restrictions from the server to the client can prevent unnecessary network traffic by allowing the client to verify and validate data before the modification is sent to the server. The downside to this technique is that changes on the server then have to be made to the client application as well. The application has to be recompiled and then deployed out to all of the client workstations where it was first distributed.

With error handling, the good thing is that changes to the server do not have to be promulgated to the client application. But then a round trip to the server is made in order to find out that a mistake has been made. The bad thing about error handling is that you need to find out the error codes of SQL Server and how to process the error messages, which was covered in Chapter 8, "Errors and Debugging." To refresh your memory, if you attempt to delete a record from a parent table that has related records in a child table, then you will get a VFP error 1526 indicating an ODBC error. By using the AERROR() function, you can discover through the fifth element of the array created that SQL Server error number 547 occurred, which is a constraint violation. By examining the message in element 2, you will find out what statement caused the error, which constraint, by name, was violated, and the database, table and column names that are involved.
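In code, the error trap might look like the following minimal sketch, assuming an updatable remote view with the hypothetical alias v_customers:

IF NOT TABLEUPDATE(.T., .F., "v_customers")
   LOCAL laErr[1]
   AERROR(laErr)
   IF laErr[1, 1] = 1526 AND laErr[1, 5] = 547
      * A constraint violation on the server; element 2 carries the
      * statement, constraint name, database, table and column names.
      MESSAGEBOX("The server rejected the change: " + laErr[1, 2])
   ENDIF
ENDIF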

Even though you may find the first method preferable, transferring the rules cannot cover every situation. Sometimes the modification has to go to the server anyway in order to find out the error, as with referential integrity. Sometimes the problem is simply that the tools discussed just don't do the job by themselves. Something else is going to be needed. That something should also handle errors at the place they happen: the server.

Stored procedures

You can write code on the server that can handle all of the operations that you want to do for the client/server application, or you can write code just to handle some of the operations, such as data modification. But when you start to use code on the server, you will lose the ability to use all aspects of certain tools on the client, specifically remote views.

Remote views are SELECT statements that return data from the server to the client, and when they are used for updating as well, they will also generate the data modification statements automatically. These statements are the INSERT, UPDATE and DELETE commands that are common to most SQL implementations. The problem is that you have very little flexibility in dealing with the data (it's automatic) and a lot of headaches wrapped around error handling. One technique that you might have already thought of is that the remote view does not have to be updatable. Instead, you could use it for returning the data from the server, and then manually create the data modification statements, which are sent to the server via SQL pass through. But that does not eliminate the error handling or the explicit knowledge needed of the server's data integrity mechanisms.

This is simply a choice. The alternative is to use code on the server in the form of stored procedures. Just as in Visual FoxPro, stored procedures are stored in the database. That way, they are available for use by any client application. Stored procedures on the server can handle all aspects of data retrieval and data modification.

Within a stored procedure can be all of the error handling that you would need when using SQL Server. This way, the nature of the errors is known at the place where the errors occur instead of at the client. The stored procedure can then either handle the error or return a well-defined error code to the client application, so that you know exactly how to handle it. Although this might seem to be the same thing as programming the validation code in the client, the difference is that changes can sometimes be isolated at the server without having to rewrite any code on the client. The documentation for the stored procedure would indicate what inputs were required and what the return codes would mean.

Stored procedures also create a solution to the insert problem that was previously described when allowing the server to generate a primary key. By using stored procedures, all of the issues are handled at the server, and the procedure returns the new key to the client as part of the way it works. This avoids the extra trip to the server to find out what key was generated.
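As a minimal sketch of that idea, the following stored procedure (with hypothetical procedure, table and column names) performs the insert and returns the new key in the same round trip; the client then reads the key from the result cursor:

* Server side, created once:
SQLExec(lhConnection, ;
   "CREATE PROCEDURE procInsCust @cName varchar(40) AS " + ;
   "INSERT INTO Customers (cName) VALUES (@cName) " + ;
   "SELECT @@IDENTITY AS PriKey")
* Client side: one trip inserts the row and brings back the key.
IF SQLExec(lhConnection, "EXECUTE procInsCust 'Acme'", "NewKey") = 1
   lnNewKey = NewKey.PriKey
   USE IN NewKey
ENDIF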

Basically, there are two ways that you can use stored procedures: either through SQL pass through commands or via ActiveX Data Objects (ADO). With SQL pass through, you are only slightly limited in what the stored procedures can do for you. ADO, on the other hand, provides more flexibility with stored procedures, but forces you to add some code to handle the relative lack of integration between ADO and VFP.

SQL pass through

If you choose to use SQL pass through queries (specifically the SQLExec() function), then one format for return codes would be via a cursor that would be returned by the stored procedure. That is, the stored procedure would have code similar to this:

SELECT @RetCode AS ReturnCode, @RetMessage AS Message

This would create a one-record cursor with two fields, one with the code indicating success, failure or whatever, and the other with an explanatory message. There might be other fields for special cases, such as stored procedures that would handle INSERT operations, and would return a generated primary key value for the new record.

By definition, the SQLExec() function in Visual FoxPro creates a cursor with the alias of Sqlresult, but the third parameter allows you to specify the name of the alias to use for any cursors generated by the command. If the stored procedure generates multiple cursors, then the aliases will be the name selected followed by numbers, where “1” would be the second set of data returned from the server.

When using SQL pass through, you must create the entire stored procedure command, as in the following code:

lcCommand = "EXECUTE procDelCust 123"

SQLExec( lhConnection, lcCommand, "ResultSet")

In this example, the command to run the stored procedure, the name of the procedure and the parameters all became one string variable that was then passed to the server. As you can see, any parameters must be a part of the string itself. When passing character values, you will need to enclose them in single quotes within the command. SQL Server supports optional parameters, which means that not all parameters have to be defined when executing the stored procedure. But if you are passing the parameters by position, as in the preceding code, you cannot skip any parameters. In that case, or indeed in any situation, you can use named parameters so that order does not matter.

lcCommand = "EXECUTE procSalesByCategory @CategoryName='software'"

This example assumes that there is a parameter with the name of @CategoryName. All parameters in SQL Server start with an @.

Just as views can be parameterized in Visual FoxPro, so too can SQL pass through commands. This is done simply by placing a ? in front of a variable name. If the variable does not exist when the SQLExec() function is executed, an input dialog box will appear prompting the user to enter the value. Normally, you would create the variable just before execution based on some other form of input.

lcCatName = THISFORM.txtCategory.Value
lcCommand = "EXECUTE procSalesByCategory @CategoryName=?lcCatName"
SQLExec( lhConnection, lcCommand, "SalesCat")

ADO

A little more flexibility results when ADO is used instead of pass through queries. The problem with SQL pass through is that return codes must be handled through a cursor or result set. SQL pass through does allow OUTPUT parameters, but gives you no access to a procedure's return code (and yes, procedures can return a single, integer value). This is somewhat limiting, but by using ADO, that problem is overcome.

Without getting into an entire discussion of ADO (as it is covered more completely in Chapter 12, "ActiveX Data Objects"), you should understand how the ADO command object handles stored procedures. In ADO, the command object has the ability to handle stored procedures and their parameters in a more object-oriented way. Command objects have a property for storing the name of the stored procedure (CommandText) and another property to indicate that the command is a stored procedure (CommandType). The command object also has a parameters collection, used in place of passing variables from Visual FoxPro.

The parameters collection contains a parameter object for every parameter passed into a stored procedure, and even one for the return value. The advantage of this is that if a parameter of a stored procedure is defined as an OUTPUT parameter, then after the command object has been executed, that parameter object will have the output value. This way, stored procedures can be designed to return information in a more natural way than a cursor. It makes sense that a procedure that modifies data should not return data, and by using ADO, you can avoid that sticky situation.
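The following minimal sketch shows the pattern from VFP, assuming a hypothetical procUpdPrice procedure that takes an input parameter, an OUTPUT parameter and a return code, plus a hypothetical connection string; the #DEFINEs stand in for the usual ADO constants:

#DEFINE adCmdStoredProc 4
#DEFINE adInteger 3
#DEFINE adCurrency 6
#DEFINE adParamReturnValue 4
#DEFINE adParamInput 1
#DEFINE adParamOutput 2

loConn = CREATEOBJECT("ADODB.Connection")
loConn.Open("Provider=SQLOLEDB;Data Source=MyServer;" + ;
   "Initial Catalog=MyData;User ID=AppUser;Password=secret")
loCmd = CREATEOBJECT("ADODB.Command")
loCmd.ActiveConnection = loConn
loCmd.CommandText = "procUpdPrice"
loCmd.CommandType = adCmdStoredProc
* The return value parameter must be appended first.
loCmd.Parameters.Append(loCmd.CreateParameter("RETURN_VALUE", ;
   adInteger, adParamReturnValue))
loCmd.Parameters.Append(loCmd.CreateParameter("@ProductID", ;
   adInteger, adParamInput, 0, 123))
loCmd.Parameters.Append(loCmd.CreateParameter("@NewPrice", ;
   adCurrency, adParamOutput))
loCmd.Execute()
? loCmd.Parameters.Item("RETURN_VALUE").Value   && the procedure's return code
? loCmd.Parameters.Item("@NewPrice").Value      && the OUTPUT parameter value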

The downside to ActiveX Data Objects is that they cannot be used in the same fashion as remote views. Visual FoxPro does not have an ADO tool to make development with this latest technology easy. As a result, you would have to do a lot more coding than with remote views. ADO does have a recordset object that is a little like the cursors in Visual FoxPro, and recordsets can even be updatable. Unfortunately, VFP cannot use the recordset cursors (they're called cursors also) in the same way as the temporary data returned via ODBC. Instead, a recordset cursor exists in the memory of the client workstation, and extra code would be needed to populate a local cursor. Also, although recordset cursors can be updatable, they work differently from remote views, usually causing a higher resource drain on the server.

ADO will be covered in more detail in Chapter 12, “ActiveX Data Objects.”

VFP developer vs. SQL Server DBA

Throughout this section, you have seen the various data integrity issues that must be dealt with in any database. You have also seen the choices that have to be made when designing a client/server application. The last question is the choice to be made for your application. The answer depends on exactly what your role is in designing the application.

A Visual FoxPro developer has had it all for many years. One of the fastest database engines on a desktop, a language geared for processing data, a complete programming language with all the functions that go into most language-only products, an object-oriented language, design tools that have gotten better with every version—these are the features of VFP that have made development easy, fast and comfortable. What that means when it comes time to design integrity into the client/server database is that VFP seems like the best place to take care of these needs.

A Microsoft SQL Server database administrator has been working with a database product that is robust, secure and capable of handling thousands of users. The administrative tools for SQL Server, especially the Enterprise Manager, have made the day-to-day tasks easier and easier to handle. The DBA feels that SQL Server is strong enough and has enough built-in features to make data integrity the province of the server. What that means is that the DBA feels that all data integrity needs are met at the server.

This is the biggest issue in a client/server design: balancing the experience of the different parties to achieve the goal of a well-designed, robust and secure database. One of the things that you may encounter when moving to a client/server design is that the server needs more administering than the typical desktop database solution. Therefore it is important to understand the needs of other parties who will be or are already involved in a client/server database.

The answer to how integrity should be handled is not a black and white decision. It is not all or nothing, but rather a balance where the strengths of the client are balanced with the strengths of the server. By now you should realize that it is impractical to try to design the client side without some knowledge of the server design. So even though you would like to have all data integrity handled at the server so that the client application can be designed once, there will be modifications on the server that impact the client.

In the rest of this chapter, we will examine some other issues regarding performance and security that are also part of the database design, and then in the next chapter, we will present choices and recommendations that will help you decide.

Client/server performance issues

After designing the database structure and making sure that the data retains consistency and accuracy, performance is the next important decision. Performance can be broken down into three areas: server, client and network. You have to be concerned with all three in the database design to make sure that everything runs well.

Choosing indexes

The biggest impact on the server in terms of performance is indexing. If there are no indexes, the server must read all records of a table to search for data. This also goes for data modifications, which must first find the record to modify. Although you're happy to have the server handle data retrieval, without indexes, any server will be brought to its knees in no time, and all the presumed benefits of client/server will be lost.

Choosing indexes in SQL Server is a complex issue because there will be many different ways of querying the data. The two different types of indexes, clustered and non-clustered, were explained in Chapter 3, "Introduction to SQL Server 7.0." What you need to know now is how to choose your indexes.

The brute-force way to find out which indexes are useful is through trial and error. First you create one or more indexes, and then you find out whether your queries use any of them. If there are any unused indexes, then you should drop them, because they will just add maintenance overhead without giving you any benefit.

Rather than the trial-and-error approach, Microsoft SQL Server provides some tools that can help you find good indexes. The tools all reside within the administrative applications installed with SQL Server. (You can install these tools on any client system as well.)

In the Query Analyzer, there are two tools that can help you. The first one is the Graphical Estimated Execution Plan, which can be selected from the Query menu or from the toolbar. You can also specify that the execution plan be displayed along with the execution of a statement. By examining the graphical plan, you will discover which indexes, if any, are being used for a statement or group of statements (a batch). Figure 2 shows a sample execution plan.

Figure 2. Graphical Estimated Execution Plan.

This plan shows the steps that the query engine of SQL Server will perform in order to carry out the command. When you position the mouse pointer over any of the icons, a box will appear explaining the nature of the step, various statistics on that step and the argument used to perform it. Through this tool, you will begin to understand the "thinking" behind SQL Server's optimizer and then be able to choose good indexes.

Furthermore, the estimated execution plan will actually flag steps where you have no statistics in red and offer a suggestion for creating those statistics. Statistics are what the SQL Server optimizer uses in determining whether an index is useful to optimize a statement. When you right-click on any icon, the context menu that appears will allow you to build and examine indexes on the server.

Another tool in the Query Analyzer is found in the Query menu: Perform Index Analysis. When you use this option, SQL Server will offer suggestions for indexes that would improve the performance of the statement being analyzed. After suggestions are offered, there will be an opportunity for you to create those indexes by accepting the commands that the Index Analysis shows (see Figure 3).

Figure 3. The Query Analyzer Index Analysis dialog.

The problem with the previous two ways of analyzing queries is that they generally do not take into account the overall usage of the data. Determining which index, if any, should be the clustered index, and how you should build your non-clustered indexes (composite, unique and so forth), is very difficult because there is no way to predict exactly all of the ways that the data will be queried. To that end, the Enterprise Manager has a tool called the Index Tuning Wizard, which can be a great help.

Before you use this tool, you should either create a script (text file with a .SQL extension) with all of the queries that are being used, or a workload file from the SQL Server Profiler. The first method is almost the same as using the Query Analyzer Index Analysis tool, so the real benefit comes from the second method.

The SQL Server Profiler is a tool that can be used to trace the commands that come into SQL Server. When creating a trace, you can specify what to include or exclude from the trace. By specifying a particular table in a database, the trace will only include statements that reference that table. By saving the results into a workload file (which can be either a table in the database or a file on the disk), you will have a picture of how that table is being used. This workload table or file can then be used as the input for the Index Tuning Wizard. This way, you will get recommendations based on the actual usage of your data.

Client/server division of work

One of the aspects of client/server that has to be addressed is where to do the work. As discussed in the section on data integrity, true client/server involves a division of responsibility. Not everything should be done on the server, and not all validation should be done on the client, even when it can be.

The power of the server is that it can greatly reduce disk I/O. Microsoft SQL Server is designed to reduce the amount of information read from and written to the disk. That is, it attempts to do these things as little as possible, and with the greatest degree of efficiency. SQL Server will leave as much data as it can in memory, as well as storing the query plans of stored procedures and even some ad hoc queries. This capability means that a typical server will have much more memory installed than the client computers.

Some tasks are better handled on the client systems. Data entry, by necessity, will be done there, but formatting for reports is also better handled on the client side. Although there are formatting functions, data conversion functions, and even some statistical and mathematical functions in Transact-SQL, it is not a language that is well suited for those types of calculations. Selecting and sorting the data is what the server is good for, while formatting and calculating is what the client does well.

Another area of concern is record-by-record processing. Although SQL Server can do this via server-side cursors, that kind of processing should be done in Visual FoxPro, as it is better suited to it. An alternative is to use Transact-SQL statements that do set processing. It is often assumed that the only way to do something is record-by-record, but a careful examination of the activity will reveal many times when more advanced SQL statements will do the job. But if all else fails, try to come up with a way to handle it on the client.
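As a simple illustration of set processing, consider a price increase for one category of products. Rather than downloading the rows and scanning them one by one in VFP, a single statement does the whole job on the server in one trip. The table and column names here are hypothetical:

* One set-based statement replaces a record-by-record loop.
SQLExec(lhConnection, ;
   "UPDATE Products SET UnitPrice = UnitPrice * 1.10 " + ;
   "WHERE CategoryID = 5")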

Not all of these decisions are so simple, but as a rule, it is very easy to remember that the server does data retrieval and the client does data reporting. As for data entry and validation, a middle ground does have to be found. The forms are on the client, and the data validations will be on the server, but the business rules—the user-defined integrity—could be on either, or both. It is better to try to centralize the business rules so that they can more easily be changed, but there are other performance issues to consider.

Bandwidth

Along with managing the server and client, you also have to be concerned with the network. Bandwidth refers to the capacity of a network, which can be impacted by many factors. These factors include the physical layout, the type of cabling used, the distance from clients to servers, and the number of users on the network. But one of the more important factors is the design of the client/server layout.

Although there are many physical characteristics, the one thing that is known is that bandwidth is a finite resource and should not be abused. You want to keep the amount of information passing across the network to a minimum; that way, any network concerns will remain purely physical ones. In order to keep network use down, try not to download data unnecessarily, and try to keep trips between client and server to as few as possible.

To help limit the amount of downloaded data, only specify the fields that are absolutely needed on the client. Furthermore, limit the number of records returned through the use of the WHERE clause, and make sure users are required to specify what it is they are trying to find.
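A parameterized remote view is the natural tool for this. The following minimal sketch, with hypothetical connection, table and field names, downloads only three fields and only the rows matching the user's search value:

lcSearch = "ACME%"
CREATE SQL VIEW v_custbyname REMOTE CONNECTION MyConn AS ;
   SELECT cust_id, company, phone FROM Customers ;
   WHERE company LIKE ?lcSearch
USE v_custbyname   && retrieves only the matching rows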

Reducing the number of trips between the client and server is a bit trickier. As you’ve already seen in the data integrity section, sometimes you have to go to the server to validate data entry, which means that if an error occurs, you end up with two extra trips. One way of helping with that is to keep the number of error trips to a minimum. This can be done by not reporting just one error at a time. It is more efficient to gather up all the errors committed by the data entry person and then report them back so that multiple discovery trips are eliminated. Stored procedures can do this as well as the multiple rows returned by AERROR(). However, as mentioned earlier, stored procedures also minimize the amount of information that needs to be sent to the server in the first place, so you can see that stored procedures can be an even bigger help in reducing trips as well.

Scalability

Scalability refers to the ability of an application to handle larger and larger numbers of users. This goes hand-in-hand with bandwidth reduction, as the more that you design to protect bandwidth, the more users the system will be able to handle. But it is also a part of the design as well.

In the past, using only Visual FoxPro, it was easy to create forms with controls that were bound to the data itself. For the size of the applications and the way that VFP actually takes the information into each client system's memory, this works well. But when all of the data is being shared among many different client systems and the data stays in the memory of the server, then binding forms impacts scalability.

Now it is acknowledged that using remote views hardly involves binding client forms to server data, but the potential for abuse is there and it should be avoided at all costs. One way to look at it is to ask yourself if the design that you're using will work well for two users, 10 users, 100 users, 500 users, 1,000 users and more. By keeping the number of potential users in mind, your designs will be more efficient.

Data location

This last area of performance is the tricky one, and you'll soon see why it is important. There are times when the data should be stored on the client instead of the server. That's right: even though the server is where the data is kept, there are times when you want to store data on the client. This is done not as a permanent solution, but rather to reduce network traffic. For example, suppose the server had a table of state and province postal codes. These are not likely to change; therefore, it's a waste of network bandwidth to download this table more often than it's modified.

The same is true to some degree for any table that is relatively stable. We don't mean that it has to be completely stable, just that it has to be data that is modified infrequently. This way, the data can be stored on the client and only downloaded when necessary. This enables you to move a little bit more of the validation to the client system, but this time, rather than it being hard-coded, it is data-driven validation, based on the data stored on the client.

The only questions, then, are when the data should be downloaded, and how you will know when that data has been modified. There are several options for the first question: it can be done every time the application is launched, the first time the application is run during a calendar period, or when the data is modified. There are many ways that modification can be detected, such as a small companion table that holds datetime values for when the data is changed, or sending the changes automatically through replication.

Security

The last design issue is security—making sure that only authorized users have access to the data. Although you may have used application-level security in the past, Microsoft SQL Server is so good at handling this that you’ll definitely want to use the built-in tools to administer security.

Client application

Just as in the days of writing VFP-only applications, the client program may still require a user to supply a login ID and password. This time, however, the client side will merely pass that information on to the server, where the user will be authenticated.

If you are using SQL Server in a Windows NT network, then by using NT authentication, the user will not even have to worry about a login procedure. NT authentication means that the user was already authenticated when they logged onto the network. So if their network user ID was registered with SQL Server, or if any Windows NT group they are a member of is registered with SQL Server, they'll have access.

If you cannot use NT authentication, you can handle logins by storing the necessary information in memory and using that to connect to the server in one of several ways. The first is by creating a data source through the ODBC Data Source Administrator. Then, when creating the connection, specify the data source name. The data source can actually specify the login information, so if everyone using the application has been validated through some other means, then perhaps the user need not specify any login information. But this would be very risky, as anyone who can access the client computer would be able to gain access to the server.

Another way is directly through the ODBC driver, using what is known as a DSN-less connection. Login information must be specified using this technique. Finally, if you are using ADO, then you could use the OLE DB provider for SQL Server. Using this technique, you'll need the login information as well.
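Of the three, the DSN-less connection is probably the most common in deployed applications, because nothing has to be configured on the workstation. Here is a minimal sketch, with hypothetical server and database names, assuming lcUser and lcPassword were collected by a login form:

lhConnection = SQLSTRINGCONNECT("Driver=SQL Server;Server=MyServer;" + ;
   "Database=MyData;UID=" + lcUser + ";PWD=" + lcPassword)
IF lhConnection > 0
   * Connected; keep the handle for subsequent SQLExec() calls.
ENDIF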

SQL Server logins and permissions

Administering logins in Microsoft SQL Server is a big job. All the users of the client/server application have to be set up on the server along with passwords. It's a little easier if you are using Windows NT, for then you can either set up the users or the NT groups of which they are members. The latter makes administration easier and more efficient.

Anyone who needs access to SQL Server must have some sort of valid login, or they cannot do anything with the data on the server. The data is accessed through SQL Server only, so no one has the ability to get to the data files and open them without using SQL Server.

After a user has gained access to the server, they need permissions to get at the data. Permissions are set on tables and stored procedures. With a table, a user could be granted or denied the ability to execute SELECT, INSERT, UPDATE or DELETE statements. As for stored procedures, security is controlled by the user's ability to submit the EXECUTE command. Regardless, without the proper permissions, the users will be unable to either read or modify the data.

Security is another area where stored procedures have an advantage. A user can be granted EXECUTE permission on a stored procedure, but be denied permissions to modify and extract the data from the tables accessed in the stored procedure. This means that you can have tight control over how users get to the data. This happens due to an efficiency built into Microsoft SQL Server: if the stored procedure and the objects it uses have the same owner, then the permissions on the underlying objects are not checked. Handled correctly, all objects in a database are automatically owned by dbo (the database owner), allowing you to easily take advantage of this feature.
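The permission pattern itself is short. This minimal sketch, with hypothetical object and user names, denies direct table access but grants the right to run the procedure that works on the table:

SQLExec(lhConnection, "DENY SELECT, INSERT, UPDATE, DELETE " + ;
   "ON Customers TO AppUser")
SQLExec(lhConnection, "GRANT EXECUTE ON procInsCust TO AppUser")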

Another way to make administration of permissions easier is to use roles within a database on SQL Server. A standard role is very much akin to a Windows NT group. You can set any number of users in a database as members of a role. Then you can assign permissions to the roles, rather than to the individual users. This way, as various people leave their jobs and new people replace them, you won't have to do any more than drop the old login and create a new one. Then assign the new login to the database, make it a member of the correct role, and your job is done.

Even if a user is a member of multiple roles, the various permissions are combined so that SQL Server can determine exactly what that user can do. Keep in mind that in SQL Server, a DENY permission always overrides a GRANT permission.

Application roles

There’s a new way to manage security in Microsoft SQL Server 7 through a feature called application roles. Unlike server and database roles, application roles do not have members. Instead, an application role has a password. Since almost all database activity, other than administration, will be handled through your client/server application, there is no need to manage standard roles and their memberships. Instead, have the client application login to the server and then set the application role. Once the application role is set, the user will be unable to do anything that is not allowed to the application role.

Even if a user has permissions that the application role does not have, those permissions will be unavailable while the application is running. The user's membership in any standard roles will not have any impact, because the application role overrides the connection's permissions.

The Transact-SQL statement that sets the application role is the following:

EXECUTE sp_setapprole <AppRoleName>, <Password>

The sp_setapprole is a system stored procedure that activates the application role and is submitted through a SQLExec() call. There is an option to encrypt the password as it is sent to the server. The users will still need to be able to authenticate or log on to the server, but they will need absolutely no database permissions. All the permission setting will be with the application role, and those should be set to match up with the activity of the client/server application.
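In practice, the client connects and immediately activates the role. A minimal sketch, with a hypothetical role name and password:

lhConnection = SQLSTRINGCONNECT("Driver=SQL Server;Server=MyServer;" + ;
   "Database=MyData;UID=" + lcUser + ";PWD=" + lcPassword)
IF SQLExec(lhConnection, "EXECUTE sp_setapprole 'SalesApp', 'secret'") = 1
   * From here on, the connection carries only the role's permissions.
ENDIF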

Summary

In this chapter, you learned about the issues around client/server database design. You learned what the various options are when planning out an application such as this, with special attention being paid to those areas where there are conflicts between client design and server design. When planning out the data integrity issues, keep in mind the pros and cons of where validation is done. When validation is performed on the client, the issue is deployment and recompiling when changes are made. When validation is done on the server, the issue is network traffic and server overload.

You learned that stored procedures aid in handling validation, security and error processing, as well as cutting down on network traffic. You also saw the use of stored procedures through ADO and the advantages that ADO brings.

Client/server design is not just choosing between client and server; it’s also making database decisions that will impact performance and security.

In this chapter, the options were presented so that you can make informed decisions. In the next chapter, you will learn about the care and feeding of a client/server database.
