
Scalability is a popular term these days and is commonly used to mean that something small can be made big. But what about the opposite situation? What about taking something big and making it small? What about taking client/server systems and downsizing them so they work where SQL Server might not be appropriate or cost-effective? This chapter addresses how developers can maximize their efficiency by creating systems that can use either client/server or file-server back ends with a single code base. In this chapter, you'll learn how to write applications for either VFP or SQL Server back ends, and how to downsize them by using either remote SQL Server views and local VFP views or remote SQL Server and remote VFP views. You'll also learn how the Microsoft Data Engine (MSDE) lets you deploy true client/server applications on any computer—for free.

The case for a single code base

Let's consider the example of a client/server application that was written years ago for a company with an installed base of 1,800 file-server systems. The dealers naturally wanted to work on accounts that paid big commissions, and many of those accounts wanted client/server. The app in question was a 10-year-old FoxPro application that had been converted to VFP 3.0 (during the VFP 3.0 beta, a long story in itself) and had undergone extensive updates and improvements. It didn't take long to realize that using the existing code might yield a client/server application quickly, but it would be a terrible product. Lacking the resources to write two products, the solution was to write a single new application that could switch between a SQL Server or VFP back end at any time.

Some of us write shrink-wrapped software, but most VFP developers are consultants who are working on a different application for each client. If each client gets a different application, how can you have a single code base? Hopefully this is a rhetorical question, and you reuse as much code as necessary. If your application framework and common classes are designed with multiple back ends in mind, you can use them for any application, file-server or client/server.

Interchangeable back ends

Hopefully you've already read the previous chapters and learned about client/server-style development. The first step in making any application or component back-end-independent is to design it client/server-style from the start. It is much easier to run a client/server-style application against a file server than to retrofit client/server onto a file-server design.

Any object-oriented programmer can tell you that the key to code reusability is abstraction. The same is true if you want to reuse your code with different back ends. You need to abstract your data access functionality into a few distinct components. The obvious place to start is with the data access mechanism itself. As explained earlier, this book deals primarily with VFP views for data access, but other abstract mechanisms, such as ADO, will work as well. For details on using ADO, see Chapter 12, "ActiveX Data Objects."

A client/server-style application using Visual FoxPro data can be designed using two types of views: local or remote.

Remote views of VFP data

While it may not seem obvious, a VFP application can access VFP data via remote views by using the VFP ODBC driver. This means that the actual opening and closing of tables, processing of queries and so forth are handled by the VFP ODBC driver instead of VFP itself.

As you learned in Chapter 2, "Visual FoxPro for Client/Server Development," ODBC adds another layer of overhead to your application. You might think this is unnecessary overhead that will degrade performance. Under some circumstances, you would be correct. But other times, you would be surprised by performance improvements. In informal tests using remote views of VFP data where the database resided on the local workstation, performance via ODBC was approximately 80 percent slower than using local views of the same tables. In other words, it took 80 percent longer to perform a query via ODBC than with native VFP access. But when the data was moved to a network file server, the performance equation reversed itself! In fact, in many cases ODBC performance was observed to be up to 250 percent better than native.

What could possibly explain ODBC performance that is better than native performance? Nobody at Microsoft is talking, but here's a theory. If you create two views of a VFP table, one local and one remote, and then USE them NODATA and observe the results with Network Monitor, you'll see a surprising difference in network traffic. The local view produces several times the network traffic of the remote view. The remote view seems to bring down the schema only; we suspect that the local view causes VFP to bring down a bunch of stuff that it may need for Rushmore optimization. When a query is executed, VFP may or may not have downloaded what it actually needed to optimize the query. If it downloaded the right stuff, then the query is faster. But if it was wrong, then not only is the query slower, but a bunch of unnecessary stuff came down over the wire. The ODBC driver doesn't seem to make such guesses. That means it needs to bring down everything necessary for each query, but it doesn't bring down anything it doesn't need, either. Remember, this is just a theory. We're not privy to the inside workings of Visual FoxPro's database engine.

All this isn’t to say that you should jump on this technique in order to improve performance, but that ODBC performance may be better than you think.

Advantages of remote VFP data

The biggest reason to use remote views of VFP data is that branching logic for the different back ends is almost totally eliminated. Here are the main advantages of using remote VFP data (that is, VFP data via ODBC):

• Since both back ends are connected via ODBC, the mechanics of establishing a connection are the same.

• Remote views can have any name, including the name of the table on the back end. But local views must have a different name than the table, as VFP will open both the view and the table when using local views.

• On-the-fly queries can be created anywhere in an application using SQL pass through, as it is compatible with both back ends. Although there are limitations in SQL syntax compatibility, by and large a SELECT can be passed through to either back end.

• Cursors created on the fly can be made updatable with remote data, with either back end. Local cursors created with VFP's native SELECT are not updatable. See Chapter 6, "Extending Remote Views with SQL Pass Through," for more information on updatable cursors.

• Transactions can be handled the same way with either back end by using ODBC transactions via SQLSETPROP().

Later in this chapter, you will learn details of how to create components that keep back-end-dependent branching code in a limited number of places. If your application uses remote views of VFP data, almost no branching logic is necessary.

Disadvantages of remote VFP data

The first time we used this technique to create an application, it seemed almost too good to be true. It was. The disadvantages of remote VFP views are primarily related to the limitations of the VFP ODBC driver:

• Only a limited subset of VFP syntax is supported by the ODBC driver. This makes it extremely difficult to write stored procedures that work via ODBC.

• The VFP ODBC driver documentation doesn’t offer much help in figuring out the previous point.

• The VFP ODBC driver doesn’t have a debugger.

• Field defaults that use stored procedures don’t work. The procedure executes, but the value doesn’t end up in the field.

• Calling stored procedures via ODBC isn’t supported.

Another disadvantage is unrelated to the ODBC driver per se, but rather has to do with the normal differences between any two back ends: SQL syntax differs between VFP and SQL Server, as it does between any two back ends. A SQL pass through command that works for one back end may not work for the other. However, as long as you pass through simple SQL syntax that is compatible with both databases, this isn't a problem.

Creating remote views of VFP data

Most of what you need to know to use VFP data remotely is the same as what you learned about remote views of SQL Server data in Chapter 4, “Remote Views.” The main difference is in creating an ODBC DSN. To do so, use the same steps as in Chapter 4, but use the Microsoft Visual FoxPro driver instead of the SQL Server driver. Figure 1 shows the Setup dialog for a VFP ODBC connection.

Figure 1. The ODBC Visual FoxPro Setup dialog, expanded to show all options.

If you are connecting to a VFP database, you set the path to be the full path and file name of the DBC. When connecting to free tables, you specify the directory only. Note that UNC paths are supported. The lower portion of the dialog, below the path, is only visible after the Options button is clicked. The settings there correspond to SET COLLATE, SET EXCLUSIVE, SET NULL, SET DELETED and FetchAsNeeded.
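Once the DSN exists, a remote view of VFP data is created just like a remote view of SQL Server data. Here is a minimal sketch; the DSN name "MyVFPData", the views database name and the customer table are all placeholders:

*-- Create a connection and a remote view that go through the VFP ODBC driver.
*-- "MyVFPData" is a hypothetical DSN built with the dialog shown in Figure 1.
CREATE DATABASE vfpremoteviews
CREATE CONNECTION VFPConn DATASOURCE "MyVFPData"
CREATE SQL VIEW rv_customer ;
   REMOTE CONNECTION VFPConn SHARE ;
   AS SELECT * FROM customer
USE rv_customer IN 0   && the query is processed by the VFP ODBC driver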

Substituting local views for remote views

The obvious way to write a client/server-style application against a VFP back end is with VFP's local views. This is the best solution in most situations, but it does require that you write more code to abstract data-handling functionality. Abstracting data-handling functionality is covered in more detail later in this chapter.

Local views differ from remote views in that no connection is used. The database and table are specified in the local view definition.

CREATE SQL VIEW myview AS ;
   SELECT * FROM mydatabase!mytable

As long as the VFP database is in the search path, this syntax works fine. Put your local views in a separate DBC from your VFP database and give a copy to each user, just as you would with remote views. In other words, your application uses two DBCs: one with views and one with tables. This makes your application more modular and reliable and makes it easier to use the data environment of forms.

One nice thing about local views is that the VFP View Designer doesn't hiccup nearly as often as with remote views. Many remote views with joins cannot be edited in the View Designer and require all work to be done in code. But this is much less often the case with local views.

When you open a local view, VFP actually uses at least two work areas. If the view is based on a single table and no view of that table has been opened yet in the current data session, then after you USE the view you will see one work area for the view, and one for the table itself. When the view joins multiple tables, one work area will be used for the view and one for each table in the join. Figure 2 shows the three work areas opened for the following local view of data in the VFP TasTrade sample:

CREATE SQL VIEW VcustOrders AS ;
   SELECT * ;
   FROM tastrade!customer JOIN tastrade!orders ;
      ON customer.customer_id = orders.customer_id

Figure 2. The VFP Data Session window showing three work areas opened for a single local view.

Another nice feature of local views is that if you create a multi-table join on tables for which relations are defined, the View Designer will automatically detect those relations and create join conditions to match, as shown in Figure 3.

Figure 3. The VFP View Designer will automatically detect persistent relations between tables and create join conditions that match.

Abstracting data access functionality

Using two different back ends with a single application could easily turn into a nightmare if you aren’t careful. After all, there are numerous places where different actions are required for the different back ends. You could end up with lots of branching logic like this:

IF VFP back end
   Do VFP stuff
ELSE
   Do SQL Server stuff
ENDIF

Yet your application must be able to provide both types of functionality. The way to prevent unmanageable spaghetti is to pull the branching code out into a few abstract components that are then used by various parts of the application when working with data. There are three main areas where you should perform this abstraction:

• Application-level data-handling class(es)

• Form-level data-handling class(es)

• Views DBC

Application-level data handler

Application startup is a good time to handle various differences in the back ends, such as connecting to the database. Connecting with SQL Server might entail opening a remote view to establish an ODBC connection, along with login security. With VFP you might open each table up front so that index keys are downloaded over a slow connection at startup, not when each view is opened in individual forms.

A good way to handle this is to have an application object determine which back end is being used and then instantiate the desired application-level data handler. Base all data-handling classes on the same abstract class so they'll share the same interface. Then they can be used interchangeably. This technique of selectively instantiating a particular class based on run-time conditions is called a class factory.
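The startup code for such a class factory might look something like the following sketch. The class library, the class names and the goApp object and its cBackEnd property are all placeholders for whatever your framework uses:

*-- Class factory: pick the concrete data handler at run time
*-- (library, class and property names are hypothetical)
SET CLASSLIB TO datahandlers ADDITIVE
IF UPPER(goApp.cBackEnd) == "SQL"
   goApp.oDataHandler = CREATEOBJECT("apphandler_sql")   && subclass of apphandler_abstract
ELSE
   goApp.oDataHandler = CREATEOBJECT("apphandler_vfp")   && subclass of apphandler_abstract
ENDIF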

Following is a list of some of the data-handling functionality you might want to put into an application-level data handler:

• Establishing the connection to the database. With a SQL back end, this is done by opening a remote view with a shared connection in order to load the ODBC DLLs and open an ODBC connection on the server. With a VFP back end, it may be as simple as setting the search path to the location of the VFP database.

• Handling security. With SQL Server, this may be as simple as letting the connection handle the login or as elaborate as applying roles to the connection. With VFP, as there is no built-in security, you’ll have to do it all in your application code.

• Ensuring that application and database versions are synchronized. We consider it good practice to put a version number on a database and include metadata identifying the version in the application. This helps ensure that the two are in synch. The application-level data handler checks that they are in synch and, if not, either warns the user and shuts down the application or runs whatever routines are necessary to re-synchronize them.

• Doing the slow stuff at startup. For example, over a low-bandwidth connection to a VFP database, opening tables for the first time can be quite slow. Once a table has been opened, however, queries against it are generally of acceptable speed because the table doesn't have to be reopened. Rather than slow down application performance each time a table must be queried, you might consider looping through each table in a DBC and opening it. This technique slows down application startup, but it makes other actions the user takes later appear pretty zippy. Opening all the tables in a DBC is pretty easy, as you can use the ADBOBJECTS() function or even USE the DBC itself as a table to get the name of every table in it:

SELECT DISTINCT objectname ;
   FROM mydatabase.dbc ;
   WHERE objecttype LIKE 'Table'
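The table-opening loop itself is short. Here is a minimal sketch using ADBOBJECTS(); the database name is hypothetical and error handling is omitted:

*-- Open every table in the database up front so later queries feel snappy
LOCAL laTables[1], lnCount, lnI
OPEN DATABASE mydatabase
lnCount = ADBOBJECTS(laTables, "TABLE")
FOR lnI = 1 TO lnCount
   USE (laTables[lnI]) IN 0 AGAIN SHARED
ENDFOR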

You also might want to let the application-level data handler provide other database-specific services to the remainder of the application. Although we prefer to put non-startup functionality in the form-level data handler, we do use a method in the application-level data handler that returns which type of back end is being used.

Form-level data handler

Forms generally require data-handling services. How you implement these services depends on which back end you use. Because forms with private data sessions are isolated from one another, each form requires some of its own data-handling functionality. As with application-level data handling, we use a class factory to instantiate the correct data-handling class for each form. The form asks the application-level data handler which back end is being used and instantiates the appropriate form-level data handler. These data-handling classes, as with the application-level data-handling classes, are subclassed from a single class to ensure they share the same interface. If the data handler is instantiated from the form's Load event, it will exist before any objects on the form are instantiated and before the form's DataEnvironment object is instantiated.

The first thing to have the data handler do is open the correct views DBC. In order to simplify multi-programmer development, it's preferable to use views DBCs with different names for the different back ends. So the data handler simply opens the appropriate DBC, the name of which is stored in a property of the data-handling class. Once the views DBC is opened, you won't have to refer to it again, as each view can be accessed simply with USE and without passing the DBC name to it.
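The opening itself can be as simple as the following sketch. The cViewsDBC property name is hypothetical; it would hold something like "vfpviews.dbc" or "sqlviews.dbc" depending on which handler class was instantiated:

*-- In the form-level data handler: open this back end's views DBC
PROCEDURE OpenViewsDBC
   IF !DBUSED(JUSTSTEM(This.cViewsDBC))
      OPEN DATABASE (This.cViewsDBC) SHARED
   ENDIF
   SET DATABASE TO (JUSTSTEM(This.cViewsDBC))
ENDPROC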

However, note that we don't use the form's DataEnvironment object, preferring instead to open all views with good old-fashioned procedural code. Furthermore, since we prefer to keep form definitions in class libraries (VCXs), rather than "form" files (SCXs), we don't have a DataEnvironment object to work with anyway. But if you do use the DataEnvironment, note that it stores the name of the DBC. So you'll either have to change the name of the DBC for each cursor object in the DataEnvironment's BeforeOpenTables event or be sure that your different views DBCs have the same name. The latter is definitely simpler for coding but a little more difficult to maintain, as it is so easy for developers to make changes to the wrong versions of like-named files.

We create five methods on the data handler and use these methods to replace five native VFP functions and/or SQL pass through commands, as shown in Table 1.

Table 1. Five form-level data handler methods and the VFP functions they replace.

Method                  VFP function
UpdateTable()           TABLEUPDATE()
RevertTable()           TABLEREVERT()
BeginTransaction()      BEGIN TRANSACTION or SQLSETPROP(nHandle, "Transactions", 2)
CommitTransaction()     END TRANSACTION or SQLCOMMIT()
RollbackTransaction()   ROLLBACK or SQLROLLBACK()

We’d never write code like Listing 1 in a form.

Listing 1. A simple snippet that begins a transaction and attempts to update two views. If either update fails, the transaction is rolled back; otherwise, it is committed.

BEGIN TRANSACTION
DO CASE
   CASE ! TABLEUPDATE("view1")
      ROLLBACK
   CASE ! TABLEUPDATE("view2")
      ROLLBACK
   OTHERWISE
      END TRANSACTION
ENDCASE

Instead, we would use the form-level data handler as shown in Listing 2.

Listing 2. A snippet that does the same thing as the code in Listing 1, but calls the form-level data handler instead of making the calls directly.

WITH THISFORM.oDataHandler
   .BeginTransaction()
   DO CASE
      CASE ! .UpdateTable("view1")
         .RollbackTransaction()
      CASE ! .UpdateTable("view2")
         .RollbackTransaction()
      OTHERWISE
         .CommitTransaction()
   ENDCASE
ENDWITH

In Chapter 6, “Extending Remote Views with SQL Pass Through,” you learned about transaction handling with remote data, which explained why the transaction-handling methods should be different between the two data handlers. The VFP data handler’s BeginTransaction() method simply needs to pass through BEGIN TRANSACTION, something like this:

BEGIN TRANSACTION

while the SQL Server handler sets the connection’s Transactions property to manual:

SQLSETPROP(lnHandle, "Transactions", DB_TRANSMANUAL)

Naturally, each method needs to check existing settings and so forth, but the preceding code shows the primary functionality.

The meat of the CommitTransaction() method for VFP looks like this:

END TRANSACTION

while the SQL version looks like this:

SQLCOMMIT(lnHandle)

SQLSETPROP(lnHandle, "Transactions", DB_TRANSAUTO)

Note that the VFP command END TRANSACTION both commits and ends a transaction, but that the SQL Server version must set the Transactions property back to automatic. The RollbackTransaction() methods are essentially the same as CommitTransaction(), but ROLLBACK is substituted for END TRANSACTION and SQLROLLBACK() for SQLCOMMIT().
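Put together, the transaction methods of the two handlers might look something like the following sketch. The class names and the nHandle property are assumptions; 2 and 1 are the values of DB_TRANSMANUAL and DB_TRANSAUTO from FOXPRO.H:

DEFINE CLASS datahandler_vfp AS Custom
   PROCEDURE BeginTransaction
      BEGIN TRANSACTION
   ENDPROC
   PROCEDURE CommitTransaction
      END TRANSACTION
   ENDPROC
   PROCEDURE RollbackTransaction
      ROLLBACK
   ENDPROC
ENDDEFINE

DEFINE CLASS datahandler_sql AS Custom
   nHandle = 0   && shared ODBC connection handle, set when the form opens its views
   PROCEDURE BeginTransaction
      SQLSETPROP(This.nHandle, "Transactions", 2)   && DB_TRANSMANUAL
   ENDPROC
   PROCEDURE CommitTransaction
      SQLCOMMIT(This.nHandle)
      SQLSETPROP(This.nHandle, "Transactions", 1)   && DB_TRANSAUTO
   ENDPROC
   PROCEDURE RollbackTransaction
      SQLROLLBACK(This.nHandle)
      SQLSETPROP(This.nHandle, "Transactions", 1)   && DB_TRANSAUTO
   ENDPROC
ENDDEFINE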

The UpdateTable() and RevertTable() methods simply pass parameters to VFP's TABLEUPDATE() and TABLEREVERT() functions, respectively. We abstract them in case we want to add back-end-specific functionality here. We'd sure hate to suddenly have to find and replace thousands of calls to TABLEUPDATE() because a need for back-end-specific functionality arose that wasn't foreseen earlier in the project.

Views DBC

If you've worked with lots of local data in VFP applications, you may be in the habit of calling DBCs "databases." If you do, wash your mouth out with soap right now and don't do it again. A DBC is not truly a database. A database is a collection of tables, while a DBC is nothing more than a metadata table containing information about tables, views and/or connections. You wouldn't give every user his or her own copy of a database, but you can and should give every user his or her own copy of the views DBC. The DBC contains nothing more than a bunch of code and properties defining the views and connections. If each user has a copy, you don't have to worry about pathing, and you can temporarily store all kinds of useful, user-specific data in the DBC. This technique is covered in more detail in Chapter 4, "Remote Views."

Since a view is nothing more than a collection of code and properties, it can be used to abstract data access functionality. A view of the same name in two different DBCs can be defined differently for different back ends. Stored procedures with the same interface can do different things. Properties for objects in the DBCs can be set differently. In fact, these are the main things to do differently in your views DBCs.

Each view definition must be written using the SQL syntax supported by the appropriate back end, as back-end requirements and capabilities vary. For example, VFP and SQL Server 7.0 both support TOP n queries, but don't try this with SQL Server 6.5. You'll have to leave that clause out of your SELECT and use the view's MaxRecords property instead.

Different back ends also support different functions. For example, to produce a query returning rows for a particular year in VFP, you would use the YEAR() function, but in SQL Server, you would use the DATEPART(year) function. Different back ends also have different keywords, so a view that works fine in VFP might fail in SQL Server because you attempted to use a SQL Server keyword.

Just be sure that you create views that look the same to the front end. This may take some trial and error, and you should work with both back ends at the same time. Consider one author's experience of working on a module and testing it with a VFP back end, only to come back some time later and discover that a SQL Server keyword had been used in a view definition, making it necessary to go back and change a bunch of code where the view was used.

Some functions just seem to fit best as stored procedures in the views DBC. As long as you're only opening one DBC, you can call the stored procedure at any time with a simple function or procedure call. One excellent use for DBC stored procedures is to generate primary keys. With SQL Server, you may call a SQL stored procedure or use an identity column, while with VFP you might get a primary key directly from the VFP database. Either way, if you create a stored procedure in the views DBC, it can contain the logic that is appropriate for its back end; all you do is call the procedure. If you put this sort of function in your application-level data handler, you might find yourself writing lightweight COM components where you need this functionality and don't want the overhead of a data-handling class. You can simply move the code to the views DBCs and rewrite the data handler to pass the call through to the stored procedure.
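As a minimal sketch, the same-named stored procedure might look like this in each views DBC. Every name here (GetNextKey, the keycounter table and its fields, sp_getnextkey and the GetConnectionHandle() method) is hypothetical, and locking and error handling are omitted:

*-- In the VFP views DBC: increment a counter table in the VFP database
FUNCTION GetNextKey(tcTable)
   LOCAL lnKey
   USE mydatabase!keycounter IN 0 AGAIN ALIAS curKeys
   SELECT curKeys
   LOCATE FOR UPPER(ALLTRIM(tablename)) == UPPER(ALLTRIM(tcTable))
   lnKey = curKeys.nextkey
   REPLACE nextkey WITH nextkey + 1 IN curKeys
   USE IN curKeys
   RETURN lnKey
ENDFUNC

*-- In the SQL Server views DBC: pass the call through to a server stored procedure
FUNCTION GetNextKey(tcTable)
   LOCAL lnKey
   lnKey = -1
   IF SQLEXEC(goApp.oDataHandler.GetConnectionHandle(), ;
         "EXEC sp_getnextkey '" + tcTable + "'", "curKey") > 0
      lnKey = curKey.nextkey
      USE IN curKey
   ENDIF
   RETURN lnKey
ENDFUNC

Either way, the form or business code simply calls GetNextKey("customer") and never cares which back end supplied the key.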

Finally, you can set properties for each object in the DBC. One good example of the need for this is to make back-end-specific data type conversions, as with date fields in VFP tables and datetime fields in SQL Server tables. As SQL Server has no date data type, you must use datetimes. This is no problem if you also use datetimes in the VFP schema, but if you used dates in VFP, then you simply change the DataType property of the view's field. A SELECT of a date field in a VFP table will automatically set the data type to date. You can easily change it like this:

DBSETPROP("myview.mydatefield", "Field", "DataType", "T")

So hard to get a date

There's one final major gotcha when you write against both VFP and SQL Server back ends: empty and null dates.

When you insert a row directly to a VFP table and you don't specify a value for a nullable column, that column will be stored as a NULL unless the column has a default value. When you use a remote view to add a row, you get the same results. But if you add rows with local views, instead of a NULL, the column will contain an empty value. This isn't a problem with most data types—you simply check for EMPTY() and/or ISNULL():

IF EMPTY(myfield) OR ISNULL(myfield)
   * Do something
ENDIF

But datetimes are different. EMPTY() will return FALSE in a remote view containing an empty datetime field. If the empty datetime field is in SQL Server, then in a remote view it will appear as 01/01/1900 12:00:00 AM (with SET("DATE") = "MDY"). To complicate matters further, an empty datetime field in a remote view of VFP data will be 12/30/1899 12:00:00 AM. So every time you test for an empty or null datetime, you also have to test for it being equal to one of these datetime values.

This is an excellent argument for writing your own function to test for EMPTY() or ISNULL(). If the VARTYPE() of the value being tested is “T,” be sure to test for the “empty” datetime values.

To make matters worse, you may have to deal with actual dates that are the same as the "empty" ones. There are people alive today who were born on those days, though admittedly not many. Fortunately, you most likely don't have to deal with the time for those dates. So to ensure that you are dealing with an actual datetime, rather than a phantom one, set the time component to something other than 12:00:00 AM. We use 12:00:01 AM instead.
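Here's a minimal sketch of such a test. The function name is hypothetical, and the two literal datetimes are the "phantom" values described above:

*-- Returns .T. for NULL, EMPTY(), or the phantom ODBC datetimes
FUNCTION IsEmptyValue(tuValue)
   IF ISNULL(tuValue) OR EMPTY(tuValue)
      RETURN .T.
   ENDIF
   IF VARTYPE(tuValue) = "T"
      IF tuValue = {^1900-01-01 00:00:00}   && empty SQL Server datetime via ODBC
         RETURN .T.
      ENDIF
      IF tuValue = {^1899-12-30 00:00:00}   && empty VFP datetime via ODBC
         RETURN .T.
      ENDIF
   ENDIF
   RETURN .F.
ENDFUNC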

By the way, you can't ensure against empty dates in your user interface because somebody can always get to a field some other way. Remember, your application isn't the only way people can get to data.

Microsoft Data Engine (MSDE)

For years we've been writing applications that can use either VFP or SQL Server back ends, all the while thinking that there's got to be a better way. Wouldn't it be nice if you could serve all users with a single back end? In June 1999, Microsoft released the Microsoft Data Engine. It just might be the better way we've been looking for. A developer can now create an application for SQL Server and distribute it for smaller systems using MSDE.

What is MSDE?

MSDE is a client/server database that's 100 percent compatible with SQL Server 7.0. It is included with Microsoft Office 2000 Premium and Developer editions, and a royalty-free run-time distribution version is available for licensed users of any of the following Microsoft products:

• Visual FoxPro 6.0, Professional edition

• Visual Basic 6.0, Professional and Enterprise editions

• Visual C++, Professional and Enterprise editions

• Visual InterDev 6.0, Professional edition

• Visual J++ 6.0, Professional edition

• Visual Studio 6.0, Professional and Enterprise editions

Following are some of the key features of MSDE.

Free run-time distribution and licensing

This is one of the best parts: MSDE is free. At the time of this writing, the royalty-free run-time engine can be downloaded at http://msdn.microsoft.com/vstudio/msde. Better yet, visit the same site and order a free CD-ROM that contains the run-time engine as well as a free copy of SQL Server 7.0 Developer edition, which is licensed for developing MSDE solutions. This same CD is also included in the MSDN Universal Subscription.

SQL Server compatibility

MSDE is fully compatible with SQL Server 7.0:

• They use the same ODBC driver and are both fully accessible with SQL Database Management Objects (SQL-DMO).

• They support exactly the same set of SQL commands and the same version of the T-SQL language.

• They support the same files. A database can be migrated from MSDE to SQL Server or vice versa simply by detaching the database from one, moving the files and attaching to the other.

• MSDE, like SQL Server, provides multi-processor support with Windows NT and 2000.

• They are both compatible with Microsoft Distributed Transaction Coordinator (MS DTC), which is covered in Chapter 1, “Introduction to Client/Server.” This allows separate databases in MSDE and SQL Server to participate in the same transaction.

• MSDE databases can be replicated to SQL Server and vice versa.

Note that replicating MSDE databases with other SQL Server databases requires a SQL Server client access license.

Operating system compatibility

MSDE runs on Windows 95, 98, NT and 2000. Windows NT and 2000 are supported on both Intel and DEC Alpha platforms.

Microsoft Office 2000

As mentioned earlier, MSDE is now a feature of Office 2000 Premium and Developer editions. Access 2000 must already be installed in order to install MSDE from the Office 2000 CD; the MSDE setup can be found in the Sql\X86\Setup folder on that CD. Once installed, Access 2000 "Data Projects" can be created and maintained using either the Jet engine or MSDE. Makes one wonder what Microsoft's plans are for the future of Jet.

MSDE vs. SQL Server

If MSDE sounds sort of like a free version of SQL Server, it isn't. There are many differences between the two engines, the two biggest of which are: MSDE is tuned for five concurrent users and has no user interface at all. Following are some details of the differences between SQL Server 7.0 and MSDE.

User limitations

Microsoft says MSDE is tuned for five or fewer concurrent users. What does this really mean? Theoretically it means the server is likely to be actively handling requests from five users at the same time. But it doesn’t mean an MSDE system is limited to five users.

In order to explore the limits of this feature, we did some tests to attempt to determine how many users could be logged into MSDE at the same time and how performance was affected as the number of users went up. We were able to connect more than 100 users without a problem. However, there was a severe performance penalty as the number of users increased. With 15 or fewer connections, there seemed to be no difference in performance between MSDE and SQL Server 7.0 on similarly configured systems. But as soon as a sixteenth user was connected, everything slowed down dramatically. When 16 users were connected, everything was slower than with 15, even when only one user was actually doing anything.

Capacity limitations

Each MSDE database is limited to 2GB. Bigger databases require SQL Server.

No user interface

SQL Server ships with Enterprise Manager, Performance Monitor, Profiler, Query Analyzer and other great tools for administering SQL Server and designing/modifying databases. None of these tools is included in MSDE. However, if those tools exist on a network, they can be used to manage an MSDE server. We'll discuss three other possible tools here: Microsoft Access 2000, Microsoft Visual InterDev 6.0 and in-house tools.

Access 2000

Microsoft Access 2000 can be used to manage most aspects of an MSDE server, including database schema, triggers, stored procedures, views, security, backup/restore and replication. Some things that aren't particularly easy to work with from Access are user-defined data types, defaults and rules. Many of the individual administrative tasks are performed almost identically in Access and Enterprise Manager. For example, creating views uses tools that differ in the two products only in their toolbars. Figure 4 shows the design surface for views in Access, and Figure 5 shows the Enterprise Manager version, which offers a toolbar.

Figure 4. Designing the Northwind database's Quarterly Orders view in Access 2000.

Figure 5. The Northwind database’s Quarterly Orders view in Enterprise Manager. Note the addition of a toolbar.

Visual InterDev 6.0

Visual InterDev offers another alternative for managing MSDE databases. In comparison to Access, you have the ability to automatically generate T-SQL scripts for schema creation or changes. Figure 6 shows part of a change script generated in Visual InterDev.

Figure 6. A dialog containing a change script created by saving a schema change in Visual InterDev 6.0.

As with Access 2000, you'll find many of the design surfaces in Visual InterDev to be similar to those in Enterprise Manager. An example is the table design view, shown in Figure 7, which is identical to the one in Enterprise Manager.

However, as with Access, you’ll find that Visual InterDev isn’t a complete replacement for the Enterprise Manager. For example, Visual InterDev doesn’t offer an easy way to manage users or security.

Figure 7. The table design view in Visual InterDev 6.0 is identical to the one in Enterprise Manager for SQL Server 7.0.

In-house tools

Chapter 10, "Application Distribution and Managing Updates," discusses creating tools that allow users to perform various management functions for SQL Server databases. After all, you may need to make schema changes during the life of a project and, though you could require users to run scripts in the Query Analyzer, providing the user with an application specifically for managing your database may give you better control. MSDE makes such a tool/application even more important. Perhaps your users don't have Access or Visual InterDev. Maybe that's good, as users can do a fair amount of damage with such tools. But they may need to perform simple tasks such as changing the administrator's password or adding new users. These tasks are fairly simple to perform and are discussed in greater detail in Chapter 10.

Distributing MSDE applications

MSDE can be distributed royalty-free by using the MSDE for Visual Studio installation program, which can be found on the MSDE for Visual Studio CD in the \MSDE directory. The Intel version is MSDEx86.exe, and the Alpha version is MSDEAlpha.exe. Only the Intel version will be discussed here. Although the \MSDE directory contains documentation on how to use MSDEx86.exe, at the time of this writing the documentation on both the CD and the Microsoft Web site is incorrect in some ways. Unfortunately, if you do something wrong in the installation process, the process simply fails without any warnings or error messages. However, a log file, setup.log, will be written to the Windows directory and can be checked for result codes, which will be listed later in this chapter.

The MSDE installation program is an InstallShield self-extracting command-line program and relies on a response file (.iss) for installation options. A default version of this file, called unattend.iss, is in the \MSDE directory on the CD. You can use this file as-is for a default installation. To perform the default installation, use this syntax:

c:\temp\msdex86.exe -a -f1 c:\temp\unattend.iss

The path in the command line must be the fully qualified path to the two files. You may rename the .iss file, but you must pass the fully qualified path, not a relative path, or installation will fail. If you use InstallShield or other commercial installers for your install program, you can specify the path in an installation script with a memvar for the location the user selected for installation.

Microsoft says in its documentation to surround the .iss file name and path with double quotes, but we've found that doing so causes frequent installation failures. You must also use the -a and -f1 switches or your installation will fail. Installation will also fail if there are any spaces in either path on the command line.

The documentation also says to use a -s switch to cause a "silent mode" install, and that omitting the switch will provide a user interface during the install. In our tests, the switch does nothing and you get a silent install whether you want it or not. Because you're stuck with a silent install that takes several minutes with no feedback, be sure to warn the user in some way before performing the MSDE install.

The .iss file contains numerous installation settings, including the installation directory. This path is hard-coded into the file, and you cannot use relative paths. This means the user has no choice of destination directory for MSDE. You could programmatically change the file at install time to substitute a user-defined path, but this would be quite a bit of work with most installation programs.
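If you do want to offer a destination choice, one approach is to rewrite the response file on the fly before calling MSDEx86.exe. The sketch below assumes the entry in unattend.iss looks like "szDir=C:\MSSQL7"; check your copy of the file for the actual key and value, and remember that the substituted path must be fully qualified:

*-- Substitute the user's chosen destination into the response file
LOCAL lcISS, lcUserPath
lcUserPath = "D:\Apps\MSDE"   && wherever the user chose to install
lcISS = FILETOSTR("c:\temp\unattend.iss")
lcISS = STRTRAN(lcISS, "szDir=C:\MSSQL7", "szDir=" + lcUserPath)
=STRTOFILE(lcISS, "c:\temp\unattend.iss")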

The MSDE installation will also fail when certain registry keys exist on the target machine. MSDE cannot be installed on a computer that has had SQL Server on it unless SQL Server has been completely uninstalled. We found this out the hard way by attempting to put MSDE on a machine that had once had SQL Server 6.5 on it. The 6.5 installation had been upgraded to 7.0; 6.5 was later uninstalled. Unfortunately, numerous registry keys remained behind.

The MSDE installation program writes a file called setup.log in your Windows directory. This file looks just like an INI file, and there are four lines to look for to help debug the installation. If everything went fine, it will look like this:

[Status]
Completed=1
[ResponseResult]
ResultCode=0

If the Completed value is anything other than 1, the installation failed. If the ResultCode value is anything other than 0 or -1, the installation also failed. Even though a ResultCode value of -1 is technically an error, if Completed is 1 and ResultCode is -1, then the installation simply requires a reboot. Other ResultCode values are shown in Table 2.

Table 2. MSDEx86.exe installation ResultCode values.

Value   Meaning
 0      Success.
-1      General error, or requires reboot.
-2      Invalid mode.
-3      Required data not found in the .iss file.
-4      Not enough memory available.
-5      File does not exist.
-6      Cannot write to the response file.
-7      Unable to write to the log file (don't know how you'd find this one out).
-8      Invalid path to the InstallShield silent response file.
-9      Not a valid list type (string or number).
-10     Data type is invalid.
-11     Unknown error occurred during setup.
-12     Dialog boxes are out of order.
-51     Cannot create the specified folder.
-52     Cannot access the specified file or folder.
-53     Invalid option selected.
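Your install routine can check these values itself. Here's a minimal sketch in VFP; the function name is ours, and it assumes setup.log is in the Windows directory as described above:

*-- Returns .T. if the MSDE installation succeeded (a ResultCode of -1 with
*-- Completed=1 just means a reboot is needed)
FUNCTION MSDEInstallOK
   LOCAL lcLog, lcRest, lcCode
   lcLog = FILETOSTR(ADDBS(GETENV("WINDIR")) + "setup.log")
   IF !("Completed=1" $ lcLog)
      RETURN .F.
   ENDIF
   *-- Grab the characters after "ResultCode=" up to the end of that line
   lcRest = SUBSTR(lcLog, AT("ResultCode=", lcLog) + LEN("ResultCode="))
   lcCode = ALLTRIM(LEFT(lcRest, AT(CHR(13), lcRest + CHR(13)) - 1))
   RETURN INLIST(lcCode, "0", "-1")
ENDFUNC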

After a successful installation, the Startup folder on the Start button will contain a shortcut called "Service Manager." This is the only user interface for MSDE. When the system is booted, the Service Manager will start. However, by default, the MSDE service itself will not be started. Start the Service Manager to start MSDE, and also check the "Auto-start service when OS starts" check box so that MSDE will automatically start when the computer is booted.

If the user has a license for Microsoft Office 2000 Premium or Developer edition, then MSDE can also be installed from the Office 2000 CD. Run \Sql\X86\Setup\Sqlsetup.exe from the Office 2000 CD. This version will install Microsoft DTS (Data Transformation Services) in addition to MSDE. This version of MSDE can only be installed on machines where Access 2000 is already installed.

Migrating MSDE databases to SQL Server

Because MSDE is file-compatible with SQL Server 7.0, you can move an MSDE database to a SQL Server, or vice versa, at any time simply by copying the files from one server to another and attaching them to the SQL Server.

Because the server locks the files open when it is running, you need to detach the database from the server, which can only be done when there are no connections to the database. Here is the T-SQL command to detach the Northwind database in MSDE, which you can send to SQL Server with VFP SQL pass through or run in SQL Server's Query Analyzer:

EXEC sp_detach_db 'Northwind'

After copying all the appropriate MDF and LDF files to the new server, you attach them like this:

EXEC sp_attach_db @dbname = N'Northwind',
   @filename1 = 'd:\mssql7\data\northwnd.mdf',
   @filename2 = 'd:\mssql7\data\northwnd.ldf'
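Both commands can be issued from VFP with SQL pass through. The following sketch uses hypothetical connection strings and connects to master on each server, since a database can't be detached while any connection to it is open:

*-- Detach on the old server, copy the files, then attach on the new server
LOCAL lnSource, lnTarget
lnSource = SQLSTRINGCONNECT("DRIVER={SQL Server};SERVER=OldBox;DATABASE=master;UID=sa;PWD=")
lnTarget = SQLSTRINGCONNECT("DRIVER={SQL Server};SERVER=NewBox;DATABASE=master;UID=sa;PWD=")
IF lnSource > 0 AND lnTarget > 0
   SQLEXEC(lnSource, "EXEC sp_detach_db 'Northwind'")
   *-- ... copy northwnd.mdf and northwnd.ldf to the new server here ...
   SQLEXEC(lnTarget, "EXEC sp_attach_db @dbname = N'Northwind', " + ;
      "@filename1 = 'd:\mssql7\data\northwnd.mdf', " + ;
      "@filename2 = 'd:\mssql7\data\northwnd.ldf'")
ENDIF
IF lnSource > 0
   SQLDISCONNECT(lnSource)
ENDIF
IF lnTarget > 0
   SQLDISCONNECT(lnTarget)
ENDIF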

There's one minor catch. Although a list of users is stored in the sysusers table of each database, the Security IDs (SIDs) required by SQL Server are actually stored in the master..sysxlogins table. A user won't be able to get into the database after the move because the SIDs don't match. This is easily corrected by running the SQL Server sp_change_users_login stored procedure for every user. Listing 3 shows a VFP procedure that calls sp_change_users_login for every user and application role in the database. If the ODBC DSN you use to connect already specifies the database, then you don't need to call this procedure with the name of the database. This works only for normal SQL Server users and application roles; it does not work for NT-integrated users. You might encounter situations where this step isn't required, such as when no logins are defined in a database because all access is through the administrative login.

Listing 3. This code will reset the internal Security ID (SID) for a SQL database after it has been moved from one server to another or from MSDE to SQL Server.

LPARAMETERS tcDatabase
LOCAL lnHandle, lcSQL, lnResult

*-- By connecting without parms, VFP asks for the DSN, then login and password
lnHandle = SQLCONNECT()

*-- Connect to a database if the tcDatabase parameter was received
*-- Otherwise, if a database is specified in the DSN, this parm is not needed
IF !EMPTY(tcDatabase)
   lnResult = SQLEXEC(lnHandle, "USE " + tcDatabase)
   IF lnResult < 0
      RETURN .F.
   ENDIF
ENDIF

*-- Must be for SQL Users and/or Application Roles only
*-- Does not work for NT Users, dbo, guest, or INFORMATION_SCHEMA
lcSQL = "SELECT name FROM sysusers WHERE (issqluser = 1 OR isapprole = 1) " + ;
   "AND name NOT LIKE 'dbo' " + ;
   "AND name NOT LIKE 'guest' " + ;
   "AND name NOT LIKE 'INFORMATION_SCHEMA'"
lnResult = SQLEXEC(lnHandle, lcSQL, "sqlusers")
IF lnResult < 0
   RETURN .F.
ENDIF

SELECT sqlusers
SCAN
   *-- Call the SQL Server stored procedure to fix each login
   SQLEXEC(lnHandle, "EXEC sp_change_users_login 'Auto_Fix'," + ;
      "'" + ALLTRIM(sqlusers.name) + "'")
ENDSCAN

USE IN sqlusers
RETURN

You aren't limited to migrating only from MSDE to SQL Server. You can migrate the other way, too. However, be aware that MSDE has capacity limitations that might prevent a large database from being migrated to MSDE.

Summary

In this chapter, you learned a couple of approaches to using the same application code with either a VFP or SQL Server back end: using remote views with SQL Server and local views with VFP, and using remote views with both SQL Server and VFP. You also learned about the Microsoft Data Engine, a SQL Server-compatible client/server database that just might eliminate the need to code for more than one back end, as it allows you to deploy your SQL Server application from laptops to the enterprise. In the next chapter, you'll learn about error handling in client/server applications.
