Monday, March 31, 2008

Transfer Logins from SQL Server to another SQL Server Instance



The following Microsoft Knowledge Base article describes how to transfer logins between SQL Server instances:

http://support.microsoft.com/kb/246133

Wednesday, March 26, 2008

Setup Replication MySQL




MySQL replication allows you to have an exact copy of a database from a master server on another server (the slave). All updates to the database on the master server are immediately replicated to the database on the slave server, so that both databases stay in sync. This is not a backup policy, because an accidentally issued DELETE command will also be carried out on the slave; replication can, however, help protect against hardware failures.

The first step in setting up replication is to create a user account used only for replication; for security reasons it's best not to use an existing account. To do this, enter an SQL statement like the following on the master server, logged in as root or as a user that has the GRANT OPTION privilege:
GRANT REPLICATION SLAVE, REPLICATION CLIENT
ON *.*
TO 'replicant'@'slave_host'
IDENTIFIED BY 'newpassword';

In this SQL statement, the user account replicant is granted only what's needed for replication. The user name can be almost anything. The host name (or IP address) is given in quotes. You have to enter this same statement on the slave server with the same user name and password, but with the master's host name or IP address.
GRANT REPLICATION SLAVE, REPLICATION CLIENT
ON *.*
TO 'replicant'@'master_host'
IDENTIFIED BY 'newpassword';

This way, if the master fails and will be down for a while, you could redirect users to the slave with DNS or by some other method. When the master is back up, you can then use replication to get it up to date by temporarily making it a slave to the former slave server.

Configuring the Servers


Once the replication user (replicant) is set up on both servers, we need to add some lines to the MySQL configuration file on the master and on the slave server. Depending on the operating system, this file is usually called my.cnf or my.ini. On Unix-type systems, the configuration file is usually located in the /etc directory; on Windows systems, it's usually in C:\ or C:\Windows. Using a text editor, add the following lines to the configuration file, under the [mysqld] group heading:
server-id = 1
log-bin = /var/log/mysql/bin.log

The server identification number is an arbitrary number to identify the master server. Almost any whole number is fine. A different one should be assigned to the slave server to keep them straight. The second line above instructs MySQL to perform binary logging to the path and file given. The actual path and file name is mostly up to you. Just be sure that the directory exists and the user mysql is the owner, or at least has permission to write to the directory. Also, for the file name use the suffix of ".log" as shown here. It will be replaced automatically with an index number (e.g., ".000001") as new log files are created when the server is restarted or the logs are flushed.

For the slave server, we will need to add a few more lines to the configuration file. We'll have to provide information on connecting to the master server, as well as more log file options. We would add lines similar to the following to the slave's configuration file:
server-id = 2

master-host = masterservernameoripaddress
master-port = 3306
master-user = replicant
master-password = newpassword

log-bin = /var/log/mysql/bin.log
log-bin-index = /var/log/mysql/log-bin.index
log-error = /var/log/mysql/error.log

relay-log = /var/log/mysql/relay.log
relay-log-info-file = /var/log/mysql/relay-log.info
relay-log-index = /var/log/mysql/relay-log.index

This may seem like a lot, but it's pretty straightforward once you pick it apart. The first line is the identification number for the slave server. If you set up more than one slave server, give each a different number. If you're only using replication for backing up your data, though, you probably won't need more than one slave server. The next set of lines provides connection information for the master server: either the host name, as shown here, or the IP address of the master may be given. Next comes the port; 3306 is the default port for MySQL, but another could be used for performance or security reasons. The last two lines of that group provide the user name and password for logging in to the master server.

The last two stanzas above set up logging. The second to last stanza starts binary logging as we did on the master server, but this time on the slave. This is the log that can be used to allow the master and the slave to reverse roles, as mentioned earlier. The binary log index file (log-bin.index) is for recording the name of the current binary log file to use. As the server is restarted or the logs are flushed, the current log file changes and its name is recorded here. The log-error option establishes an error log. If you don't already have this set up, you should, since it's where any problems with replication will be recorded. The last stanza establishes the relay log and related files mentioned earlier. The relay log makes a copy of each entry in the master server's binary log for performance's sake, the relay-log-info-file option names the file where the slave's position in the master's binary log will be noted, and the relay log index file is for keeping track of the name of the current relay log file to use for replicating.
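After restarting both servers with these settings, the slave still has to be pointed at the master's current binary log coordinates and started. The sketch below assumes MySQL 5.0-era replication commands; the log file name and position are placeholders that you would read from SHOW MASTER STATUS on the master, and the account is the replication user created earlier.

-- On the master: note the current binary log file and position
SHOW MASTER STATUS;

-- On the slave: point it at the master (file name and position are placeholders)
CHANGE MASTER TO
  MASTER_HOST = 'master_host',
  MASTER_USER = 'replicant',
  MASTER_PASSWORD = 'newpassword',
  MASTER_LOG_FILE = 'bin.000001',
  MASTER_LOG_POS = 98;

-- Start the slave threads and check that Slave_IO_Running and
-- Slave_SQL_Running both report Yes
START SLAVE;
SHOW SLAVE STATUS;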

For failover on Linux we can use high-availability services; we will cover that in a later post.

Data Types in SQL Server




Explain Varchar and Nvarchar


VARCHAR and NVARCHAR are both variable-length character data types. Below is a summary of the differences between these two data types:

                        VARCHAR(n)                   NVARCHAR(n)
Character Data Type     Non-Unicode Data             Unicode Data
Maximum Length          8,000                        4,000
Character Size          1 byte                       2 bytes
Storage Size            Actual Length (in bytes)     2 times Actual Length (in bytes)


We would use the NVARCHAR data type for columns that store characters from more than one character set, or when we will be using characters that require 2 bytes of storage, which are basically Unicode characters such as Japanese Kanji or Korean Hangul.
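To see the storage difference in practice, here is a small sketch (the table and column names are made up for illustration); DATALENGTH returns the number of bytes actually stored:

CREATE TABLE CharTest
(
    Col1 VARCHAR(10),
    Col2 NVARCHAR(10)
)

INSERT INTO CharTest (Col1, Col2) VALUES ('abc', N'abc')

-- VARCHAR stores 3 bytes for 'abc'; NVARCHAR stores 6 bytes for the same string
SELECT DATALENGTH(Col1) AS VarcharBytes,
       DATALENGTH(Col2) AS NvarcharBytes
FROM CharTest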


 


 Explain Varchar(MAX), Nvarchar(MAX) and Varbinary(MAX)


 MAX indicates that the maximum storage size is 2^31-1 bytes. The storage size is the actual length of data entered + 2 bytes. The data entered can be 0 characters in length.



We can also declare local variables of the VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types. This is unlike the older TEXT, NTEXT and IMAGE data types: these enhanced types are replacing them because declaring local variables of the TEXT, NTEXT and IMAGE data types is not allowed in previous versions of SQL Server.
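A quick sketch of the difference: the following local variable declarations are allowed with the MAX types (SQL Server 2005 and later), whereas the commented-out TEXT declaration would be rejected.

DECLARE @BigText VARCHAR(MAX)
DECLARE @BigNText NVARCHAR(MAX)
DECLARE @BigBinary VARBINARY(MAX)

-- Build a 10,000-character string; MAX types are not limited to 8,000 bytes
SET @BigText = REPLICATE(CONVERT(VARCHAR(MAX), 'a'), 10000)
SELECT LEN(@BigText) AS TextLength

-- DECLARE @OldStyle TEXT   -- not allowed: TEXT cannot be used for local variables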



Explain XML Data Type
We can create variables, parameters, and columns of the xml type. We can optionally associate a collection of XML schemas with a variable, parameter, or column of xml type; such an xml instance is called typed. If no XML schema collection is associated with it, the instance is called untyped.
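A brief sketch of untyped versus typed xml; the schema collection name and schema content below are made up for illustration:

-- Untyped xml: no schema collection associated
DECLARE @UntypedDoc xml
SET @UntypedDoc = '<order id="1"><item>Book</item></order>'
GO

-- Typed xml: the variable is validated against an XML schema collection
CREATE XML SCHEMA COLLECTION OrderSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="order" type="xsd:string" />
  </xsd:schema>'
GO

DECLARE @TypedDoc xml(OrderSchema)
SET @TypedDoc = '<order>Book</order>'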

Monday, March 24, 2008

MySQL Optimization Tips




MySQL database server performance depends on a number of factors, and well-optimized queries are one of the most important of them.

MySQL performance depends on the factors below.




  1. Hardware (RAM, DISK, CPU etc)

  2. Operating System (e.g., a Linux OS will generally give better performance than a Windows OS)

  3. Application

  4. Optimization of MySQL Server & Queries


· Choose compiler and compiler options.


· Find the best MySQL startup options for your system (my.ini/my.cnf).


· Use EXPLAIN SELECT, SHOW VARIABLES, SHOW GLOBAL STATUS and SHOW PROCESSLIST.


· Optimize your table formats.


· Maintain your tables (myisamchk, CHECK TABLE, OPTIMIZE TABLE).


· Use MySQL extensions to get things done faster.


· Write a MySQL UDF function if you notice that you would need some function in many places.


· Don't use GRANT on table level or column level if you don't really need it


· Use Index columns in joins


· Use better data types for the table design (e.g., the INT data type is better than the BIGINT data type when the smaller range is sufficient).


· Use NOT NULL at the table level wherever possible; it saves some bits per row.


· Do not use UTF8 where you do not need it. UTF8 reserves up to three times as much space, and UTF8 comparison and sorting are much more expensive. Only use UTF8 for mixed-charset data.


· Use STRAIGHT_JOIN instead of INNER JOIN when you know a better join order than the optimizer chooses.


· Use joins instead of “IN” or “Sub-Queries”


· Choose the most effective storage engine for each table (InnoDB, MEMORY, ARCHIVE, etc.).


· The InnoDB storage engine needs more server and query tuning to perform well.


· Try to create Unique Index. Avoid duplicate data in the index columns.


· Beware of large LIMIT offsets: LIMIT 1000000, 10 can be slow, and even Google does not let you page that deep. If a query produces a large number of groups, use the SQL_BIG_RESULT hint so MySQL uses filesort instead of a temporary table.


· Use index hints (USE INDEX / FORCE INDEX / IGNORE INDEX), e.g. SELECT * FROM Country IGNORE INDEX(PRIMARY). These hints advise MySQL on which indexes to use.


· Use “UNION ALL” instead of “UNION”


· Do not normalize the schema beyond third normal form.


· Avoid the use of cursors in the stored procedure if not required.


· Avoid the use of DDL statements in stored procedures if not required.


· Use SQL for the things it's good at, and do other things in your application. Use the MySQL server to:


· Find rows based on WHERE clause.


· JOIN tables


· GROUP BY


· ORDER BY


· DISTINCT


Don't use MySQL server:


· To validate data (like date)


· As a calculator


· Use keys wisely.


· Keys are good for searches, but bad for inserts / updates of key columns.


· Keep your data in third normal form, but don't be afraid of duplicating information or creating summary tables if you need more speed.


· Instead of doing a lot of GROUP BYs on a big table, create summary tables of the big table and query this instead.


· UPDATE table set count=count+1 where key_column=constant is very fast!


· For log tables, it's probably better to generate summary tables from them once in a while than try to keep the summary tables live.


· Take advantage of default values on INSERT.





· Use the explain command

The explain command can tell you which indexes are used with the specified query and many other pieces of useful information that can help you choose a better index or query.


Example of usage: explain select * from table


Explanation of row output:


o table—The name of the table.


o type—The join type, of which there are several.


o possible_keys—This column indicates which indexes MySQL could use to find the rows in this table. If the result is NULL, no indexes would help with this query. You should then take a look at your table structure and see whether there are any indexes that you could create that would increase the performance of this query.


o key—The key actually used in this query, or NULL if no index was used.


o key_len—The length of the key used, if any.


o ref—Any columns used with the key to retrieve a result.


o rows—The number of rows MySQL must examine to execute the query.


o extra—Additional information regarding how MySQL will execute the query. There are several options, such as Using index (an index was used) and Where (a WHERE clause was used).


· Use less complex permissions


The more complex your permissions setup, the more overhead you have. Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements.
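For instance, a single database-level grant is cheaper for the server to check than a collection of table-level and column-level grants; the user and database names below are illustrative:

-- One database-level grant: simple to check on every statement
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'localhost';

-- Many narrow grants like these add permission-checking overhead:
-- GRANT SELECT (id, name) ON appdb.customers TO 'appuser'@'localhost';
-- GRANT UPDATE ON appdb.orders TO 'appuser'@'localhost';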


· Specific MySQL functions can be tested using the built-in “benchmark” command


If your problem is with a specific MySQL expression or function, you can perform a timing test by invoking the BENCHMARK() function using the mysql client program. Its syntax is BENCHMARK(loop_count,expression). The return value is always zero, but mysql prints a line displaying approximately how long the statement took to execute
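For example, from the mysql client (the expression being timed is arbitrary):

-- Evaluates MD5('hello') one million times; mysql prints the elapsed time
SELECT BENCHMARK(1000000, MD5('hello'));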


· Optimize where clauses


o Remove unnecessary parentheses


o COUNT(*) on a single table without a WHERE is retrieved directly from the table information for MyISAM and MEMORY tables. This is also done for any NOT NULL expression when used with only one table.


o If you use the SQL_SMALL_RESULT option, MySQL uses an in-memory temporary table


· Run optimize table


This command defragments a table after you have deleted or inserted a lot of rows into it.


· Avoid variable-length column types where possible


For MyISAM tables that change frequently, you should try to avoid all variable-length columns (VARCHAR, BLOB, and TEXT). The table uses dynamic row format if it includes even a single variable-length column.


· Insert delayed


Use insert delayed when you do not need to know when your data is written. This reduces the overall insertion impact because many rows can be written with a single disk write.
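A small sketch, assuming a hypothetical MyISAM log table (note that INSERT DELAYED applies only to certain storage engines and was deprecated in later MySQL versions):

-- The client returns immediately; the row is queued and written to disk later
INSERT DELAYED INTO access_log (page, hit_time)
VALUES ('/index.html', NOW());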


· Use statement priorities


o Use INSERT LOW_PRIORITY when you want to give SELECT statements higher priority than your inserts.


o Use SELECT HIGH_PRIORITY to get retrievals that jump the queue. That is, the SELECT is executed even if there is another client waiting.



· Use multiple-row inserts


Use multiple-row INSERT statements to store many rows with one SQL statement.
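For example, one statement instead of three (the table and column names are illustrative):

INSERT INTO city (name, country_code)
VALUES ('Ahmedabad', 'IND'),
       ('Pune', 'IND'),
       ('Surat', 'IND');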


· Synchronize data-types


Columns with identical information in different tables should be declared to have identical data types so that joins based on the corresponding columns will be faster.



· Optimizing tables


o MySQL has a rich set of different types. You should try to use the most efficient type for each column.


o The ANALYSE procedure can help you find the optimal types for a table: SELECT * FROM table_name PROCEDURE ANALYSE()


o Use NOT NULL for columns which will not store null values. This is particularly important for columns which you index.


o Change your ISAM tables to MyISAM.


o If possible, create your tables with a fixed table format.


o Don't create indexes you are not going to use.


o Use the fact that MySQL can search on a prefix of an index; if you have an INDEX (a,b), you don't need an index on (a).


o Instead of creating an index on a long CHAR/VARCHAR column, index just a prefix of the column to save space. CREATE TABLE table_name (hostname CHAR(255) not null, index(hostname(10)))


o Use the most efficient table type for each table.


o Columns with identical information in different tables should be declared identically and have identical names.



When MySQL uses indexes


o Using >, >=, =, <, <=, IS NULL and BETWEEN on a key.


o SELECT * FROM table_name WHERE key_part1=1 and key_part2 > 5;


o SELECT * FROM table_name WHERE key_part1 IS NULL;


o When you use a LIKE that doesn't start with a wildcard.


o SELECT * FROM table_name WHERE key_part1 LIKE 'jani%'


o Retrieving rows from other tables when performing joins.


o SELECT * from t1,t2 where t1.col=t2.key_part


o Find the MAX() or MIN() value for a specific index.


o SELECT MIN(key_part2),MAX(key_part2) FROM table_name where key_part1=10


o ORDER BY or GROUP BY on a prefix of a key.


o SELECT * FROM foo ORDER BY key_part1,key_part2,key_part3


o When all columns used in the query are part of one key.


o SELECT key_part3 FROM table_name WHERE key_part1=1

Wednesday, March 19, 2008

Transactions and Locks in SQL Server

• What is a “Database Transaction”?
A database transaction is a unit of work performed against a database management system or similar system that is treated in a coherent and reliable way independent of other transactions. A database transaction, by definition, must be atomic, consistent, isolated and durable. These properties of database transactions are often referred to by the acronym ACID.

Transactions provide an "all-or-nothing" proposition stating that work units performed in a database must be completed in their entirety or take no effect whatsoever. Further, transactions must be isolated from other transactions, results must conform to existing constraints in the database and transactions that complete successfully must be committed to durable storage.

In some systems, transactions are also called LUWs for Logical Units of Work.

• What is ACID?
The ACID model is one of the oldest and most important concepts of database theory. It sets forward four goals that every database management system must strive to achieve: atomicity, consistency, isolation and durability. No database that fails to meet any of these four goals can be considered reliable.

Let’s take a moment to examine each one of these characteristics in detail:

Atomicity states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails. It is critical that the database management system maintain the atomic nature of transactions in spite of any DBMS, operating system or hardware failure.

Consistency states that only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database’s consistency rules, the entire transaction will be rolled back and the database will be restored to a state consistent with those rules. On the other hand, if a transaction successfully executes, it will take the database from one state that is consistent with the rules to another state that is also consistent with the rules.

Isolation requires that multiple transactions occurring at the same time not impact each other’s execution. For example, if Joe issues a transaction against a database at the same time that Mary issues a different transaction, both transactions should operate on the database in an isolated manner. The database should either perform Joe’s entire transaction before executing Mary’s or vice-versa. This prevents Joe’s transaction from reading intermediate data produced as a side effect of part of Mary’s transaction that will not eventually be committed to the database. Note that the isolation property does not ensure which transaction will execute first, merely that they will not interfere with each other.

Durability ensures that any transaction committed to the database will not be lost. Durability is ensured through the use of database backups and transaction logs that facilitate the restoration of committed transactions in spite of any subsequent software or hardware failures.

• What is “Begin Tran”, “Commit Tran”, “Rollback Tran” and “Save Tran”?
Transactions group a set of tasks into a single execution unit. Each transaction begins with a specific task and ends when all the tasks in the group successfully complete. If any of the tasks fails, the transaction fails. Therefore, a transaction has only two results: success or failure. Incomplete steps result in the failure of the transaction.

Users can group two or more Transact-SQL statements into a single transaction using the following statements:

* Begin Transaction
* Rollback Transaction
* Commit Transaction

Begin Transaction
Marks the starting point of an explicit, local transaction. BEGIN TRANSACTION increments @@TRANCOUNT by 1.

Rollback Transaction
If anything goes wrong with any of the grouped statements, all changes need to be aborted. The process of reversing changes is called rollback in SQL Server terminology.
A ROLLBACK, on the other hand, works regardless of the level at which it is issued, but rolls back all transactions, regardless of the nesting level

Commit Transaction
If everything is in order with all statements within a single transaction, all changes are recorded together in the database. In SQL Server terminology, we say that these changes are committed to the database.
A COMMIT issued against any transaction except the outermost one doesn't commit any changes to disk - it merely decrements the @@TRANCOUNT automatic variable.

Save Tran
Savepoints offer a mechanism to roll back portions of transactions. A user can set a savepoint, or marker, within a transaction. The savepoint defines a location to which a transaction can return if part of the transaction is conditionally canceled. SQL Server allows you to use savepoints via the SAVE TRAN statement, which doesn't affect the @@TRANCOUNT value. A rollback to a savepoint (not a transaction) doesn't affect the value returned by @@TRANCOUNT, either. However, the rollback must explicitly name the savepoint: using ROLLBACK TRAN without a specific name will always roll back the entire transaction.
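A small T-SQL sketch tying these statements together (the dbo.Accounts table is made up for illustration):

BEGIN TRANSACTION                       -- @@TRANCOUNT becomes 1

UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1

SAVE TRANSACTION BeforeCredit           -- savepoint; @@TRANCOUNT is unchanged

UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2

IF @@ERROR <> 0
    ROLLBACK TRANSACTION BeforeCredit   -- undo only the work done after the savepoint

COMMIT TRANSACTION                      -- @@TRANCOUNT returns to 0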

• What are “Checkpoint’s” in SQL Server?
Forces all dirty pages for the current database to be written to disk. Dirty pages are data or log pages that were modified after being read into the buffer cache, but whose modifications have not yet been written to disk.

Syntax
CHECKPOINT

• What are “Implicit Transactions”?
Microsoft SQL Server operates in three transaction modes:
Autocommit transactions
Each individual statement is a transaction.
Explicit transactions
Each transaction is explicitly started with the BEGIN TRANSACTION statement and explicitly ended with a COMMIT or ROLLBACK statement.
Implicit transactions
A new transaction is implicitly started when the prior transaction completes, but each transaction is explicitly completed with a COMMIT or ROLLBACK statement.

• Is it good to use “Implicit Transactions”?
If you want all your commands to require an explicit COMMIT or ROLLBACK in order to finish, you can issue the command SET IMPLICIT_TRANSACTIONS ON. By default, SQL Server operates in the autocommit mode; it does not operate with implicit transactions. Any time you issue a data modification command such as INSERT, UPDATE, or DELETE, SQL Server automatically commits the transaction. However, if you use the SET IMPLICIT_TRANSACTIONS ON command, you can override the automatic commitment so that SQL Server will wait for you to issue an explicit COMMIT or ROLLBACK statement to do anything with the transaction. This can be handy when you issue commands interactively, mimicking the behavior of other databases such as Oracle.

What's distinctive about implicit transactions is that reissuing SET IMPLICIT_TRANSACTIONS ON does not increase the value of @@TRANCOUNT. Also, neither COMMIT nor ROLLBACK reduce the value of @@TRANCOUNT until after you issue the command SET IMPLICIT_TRANSACTIONS OFF. Developers do not often use implicit transactions; however, there is an interesting exception in ADO. See the sidebar, Implicit Transactions and ADO Classic.
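A short sketch of the behavior (the dbo.Orders table is illustrative):

SET IMPLICIT_TRANSACTIONS ON

-- This INSERT implicitly starts a transaction; nothing is permanent yet
INSERT INTO dbo.Orders (OrderID, OrderStatus) VALUES (1001, 5)
SELECT @@TRANCOUNT        -- returns 1

-- The work stays pending until it is ended explicitly
COMMIT                    -- or ROLLBACK

SET IMPLICIT_TRANSACTIONS OFF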

• What is Concurrency?
When many people attempt to modify data in a database at the same time, a system of controls must be implemented so that modifications made by one person do not adversely affect those of another person. This is called concurrency control.

Concurrency control theory has two classifications for the methods of instituting concurrency control:
Pessimistic concurrency control
A system of locks prevents users from modifying data in a way that affects other users. After a user performs an action that causes a lock to be applied, other users cannot perform actions that would conflict with the lock until the owner releases it. This is called pessimistic control because it is mainly used in environments where there is high contention for data, where the cost of protecting data with locks is less than the cost of rolling back transactions if concurrency conflicts occur.
Optimistic concurrency control
In optimistic concurrency control, users do not lock data when they read it. When an update is performed, the system checks to see if another user changed the data after it was read. If another user updated the data, an error is raised. Typically, the user receiving the error rolls back the transaction and starts over. This is called optimistic because it is mainly used in environments where there is low contention for data, and where the cost of occasionally rolling back a transaction outweighs the costs of locking data when read.

• What are “Dirty reads”?
Uncommitted dependency occurs when a second transaction selects a row that is being updated by another transaction. The second transaction is reading data that has not been committed yet and may be changed by the transaction updating the row.

• What are “Unrepeatable reads”?
Inconsistent Analysis (Nonrepeatable Read)
Inconsistent analysis occurs when a second transaction accesses the same row several times and reads different data each time. Inconsistent analysis is similar to uncommitted dependency in that another transaction is changing the data that a second transaction is reading. However, in inconsistent analysis, the data read by the second transaction was committed by the transaction that made the change. Also, inconsistent analysis involves multiple reads (two or more) of the same row and each time the information is changed by another transaction; thus, the term nonrepeatable read.

• What are “Phantom rows”?
Phantom reads occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows shows a row that no longer exists in the second or succeeding read, as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction's second or succeeding read shows a row that did not exist in the original read.

For example, an editor makes changes to a document submitted by a writer, but when the changes are incorporated into the master copy of the document by the production department, they find that new unedited material has been added to the document by the author. This problem could be avoided if no one could add new material to the document until the editor and production department finish working with the original document.

• What are “Lost Updates”?
Lost updates occur when two or more transactions select the same row and then update the row based on the value originally selected. Each transaction is unaware of other transactions. The last update overwrites updates made by the other transactions, which results in lost data.

• What are different levels of granularity of locking resources?
Microsoft SQL Server 2000 has multigranular locking that allows different types of resources to be locked by a transaction. To minimize the cost of locking, SQL Server locks resources automatically at a level appropriate to the task. Locking at a smaller granularity, such as rows, increases concurrency, but has a higher overhead because more locks must be held if many rows are locked. Locking at a larger granularity, such as tables, is expensive in terms of concurrency because locking an entire table restricts access to any part of the table by other transactions, but has a lower overhead because fewer locks are being maintained.

SQL Server can lock these resources (listed in order of increasing granularity).
RID: Row identifier. Used to lock a single row within a table.
Key: Row lock within an index. Used to protect key ranges in serializable transactions.
Page: 8-kilobyte (KB) data page or index page.
Extent: Contiguous group of eight data pages or index pages.
Table: Entire table, including all data and indexes.
DB: Database.

• What are different types of Isolation levels in SQL Server?
READ COMMITTED
Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.

READ UNCOMMITTED
Implements dirty read, or isolation level 0 locking, which means that no shared locks are issued and no exclusive locks are honored. When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the four isolation levels.

REPEATABLE READ
Locks are placed on all data that is used in a query, preventing other users from updating the data, but new phantom rows can be inserted into the data set by another user and are included in later reads in the current transaction. Because concurrency is lower than the default isolation level, use this option only when necessary.

SERIALIZABLE
Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.

• If you are using COM+, what “Isolation” level is set by default?
SERIALIZABLE transaction isolation level is the default isolation level for the COM+ application.

• What are “Lock” hints?
A range of table-level locking hints can be specified using the SELECT, INSERT, UPDATE, and DELETE statements to direct Microsoft SQL Server 2000 to the type of locks to be used. Table-level locking hints can be used when a finer control of the types of locks acquired on an object is required. These locking hints override the current transaction isolation level for the session.
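For example (the table and column names are illustrative):

-- Read without taking shared locks (dirty reads become possible)
SELECT OrderID, OrderStatus
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = 42

-- Hold a range lock until the transaction completes, as SERIALIZABLE would
SELECT OrderID, OrderStatus
FROM dbo.Orders WITH (HOLDLOCK)
WHERE CustomerID = 42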

• What is a “Deadlock”?
Deadlocking occurs when two user processes have locks on separate objects and each process is trying to acquire a lock on the object that the other process has. When this happens, SQL Server identifies the problem and ends the deadlock by automatically choosing one process and aborting the other process, allowing the other process to continue. The aborted transaction is rolled back and an error message is sent to the user of the aborted process. Generally, the transaction that requires the least amount of overhead to rollback is the transaction that is aborted.

• What are the steps you can take to avoid “Deadlocks”?
Here are some tips on how to avoid deadlocking on your SQL Server:
* Ensure the database design is properly normalized.
* Have the application access server objects in the same order each time.
* During transactions, don't allow any user input. Collect it before the transaction begins.
* Avoid cursors.
* Keep transactions as short as possible. One way to help accomplish this is to reduce the number of round trips between your application and SQL Server by using stored procedures or keeping transactions with a single batch. Another way of reducing the time a transaction takes to complete is to make sure you are not performing the same reads over and over again. If your application does need to read the same data more than once, cache it by storing it in a variable or an array, and then re-reading it from there, not from SQL Server.
* Reduce lock time. Try to develop your application so that it grabs locks at the latest possible time, and then releases them at the very earliest time.
* If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK.
* Consider using the NOLOCK hint to prevent locking if the data being locked is not modified often.
* If appropriate, use as low of an isolation level as possible for the user connection running the transaction.
* Consider using bound connections.

• What is Bound Connection?
Bound connections allow two or more connections to share the same transaction and locks. Bound connections can work on the same data without lock conflicts. Bound connections can be created from multiple connections within the same application, or from multiple applications with separate connections. Bound connections make coordinating actions across multiple connections easier.

To participate in a bound connection, a connection calls sp_getbindtoken or srv_getbindtoken (Open Data Services) to get a bind token. A bind token is a character string that uniquely identifies each bound transaction. The bind token is then sent to the other connections participating in the bound connection. The other connections bind to the transaction by calling sp_bindsession, using the bind token received from the first connection.
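A sketch of the flow; in practice the token string printed by the first connection is passed to the second connection through the application:

-- Connection 1: start a transaction and obtain a bind token
DECLARE @Token varchar(255)
BEGIN TRANSACTION
EXEC sp_getbindtoken @Token OUTPUT
SELECT @Token             -- send this string to the other connection

-- Connection 2: join the same transaction using the token received above
EXEC sp_bindsession 'token-string-from-connection-1'
-- Statements issued here now share the first connection's transaction and locks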

• Specify the types of Bound Connections
Local bound connection
Allows bound connections to share the transaction space of a single transaction on a single server.
Distributed bound connection
Allows bound connections to share the same transaction across two or more servers until the entire transaction is either committed or rolled back by using Microsoft Distributed Transaction Coordinator (MS DTC).

• How can I know what locks are running on which resource?
Use the sp_lock system stored procedure.

Tuesday, March 18, 2008

Stored Procedure and System Tables

System Tables Which contains Stored Procedure Text
SYSCOMMENTS
System table located in all databases containing entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure in the database. The text column contains the original SQL definition statements, which are limited to a maximum size of 4 MB.
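For example, to view the text of a stored procedure (the procedure name is illustrative; sp_helptext returns the same information):

SELECT c.text
FROM syscomments AS c
WHERE c.id = OBJECT_ID('dbo.usp_GetOrders')
ORDER BY c.colid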

SYSCOLUMNS
System table located in all databases containing one row for every column in each table and each view, and a row for each parameter in a stored procedure.

SYSDEPENDS
System table located in all databases containing dependency information between objects (views, procedures, and triggers), and the objects (tables, views, and procedures) contained in their definition.

SYSCACHEOBJECTS
System table located in the master database containing information about how the cache is used.

SQL Server Database Coding Standards

Please visit http://sqldbpool.blogspot.com/


(for more database related articles)


Databases are the heart and soul of many recent enterprise applications, and it is essential to pay special attention to database programming. I've seen many occasions where database programming is overlooked on the assumption that it's easy and can be done by anyone. This is wrong. For a better-performing database you need a real DBA and a specialist database programmer, be it Microsoft SQL Server, Oracle, Sybase, DB2 or whatever! If you don't use database specialists during your development cycle, the database often ends up becoming the performance bottleneck. I decided to write this article to put together some database programming best practices, so that my fellow DBAs and database developers can benefit!


Here are some of the programming guidelines, best practices, keeping quality, performance and maintainability in mind.

  • Decide upon a database naming convention, standardize it across your organization and be consistent in following it. It helps make your code more readable and understandable.

  • Do not depend on undocumented functionality. The reasons being:
    - You will not get support from Microsoft, when something goes wrong with your undocumented code
    - Undocumented functionality is not guaranteed to exist (or behave the same) in a future release or service pack, there by breaking your code

  • Try not to use system tables directly. System table structures may change in a future release. Wherever possible, use the sp_help* stored procedures or INFORMATION_SCHEMA views. There will be situations where you cannot avoid accessing system tables, though!

  • Make sure you normalize your data at least till 3rd normal form. At the same time, do not compromise on query performance. A little bit of denormalization helps queries perform faster.

  • Write comments in your stored procedures, triggers and SQL batches generously, whenever something is not very obvious. This helps other programmers understand your code clearly. Don't worry about the length of the comments, as it won't impact the performance, unlike interpreted languages like ASP 2.0.

  • Do not use SELECT * in your queries. Always write the required column names after the SELECT statement, like SELECT CustomerID, CustomerFirstName, City. This technique results in less disk IO and less network traffic and hence better performance.

  • Try to avoid server-side cursors as much as possible. Always stick to a 'set based approach' instead of a 'procedural approach' for accessing/manipulating data. Cursors can be easily avoided by SELECT statements in many cases. If a cursor is unavoidable, use a simple WHILE loop instead to loop through the table; I personally tested and concluded that a WHILE loop is faster than a cursor most of the time. But for a WHILE loop to replace a cursor you need a column (primary key or unique key) to identify each row uniquely, and I personally believe every table must have a primary or unique key (a sketch of this WHILE-loop pattern follows this bullet).
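    A sketch, under the assumption of a made-up dbo.Customers table with an int primary key CustomerID, of the WHILE-loop pattern described above:

    DECLARE @CustomerID int
    SELECT @CustomerID = MIN(CustomerID) FROM dbo.Customers

    WHILE @CustomerID IS NOT NULL
    BEGIN
        -- process a single row here
        PRINT 'Processing customer ' + CAST(@CustomerID AS varchar(10))

        -- move on to the next key value
        SELECT @CustomerID = MIN(CustomerID)
        FROM dbo.Customers
        WHERE CustomerID > @CustomerID
    END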

  • Avoid the creation of temporary tables while processing data, as much as possible, as creating a temporary table means more disk IO. Consider advanced SQL or views or table variables of SQL Server 2000 or derived tables, instead of temporary tables. Keep in mind that, in some cases, using a temporary table performs better than a highly complicated query.

  • Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which defeats the purpose of having an index. The first of the following statements results in an index scan, while the second results in an index seek:
    1. SELECT LocationID FROM Locations WHERE Specialities LIKE '%pples'
    2. SELECT LocationID FROM Locations WHERE Specialities LIKE 'A%s'
    Also avoid searching with not-equals operators (<> and NOT), as they result in table and index scans. If you must do heavy text-based searches, consider using the Full-Text Search feature of SQL Server for better performance.

  • Use 'Derived tables' wherever possible, as they perform better. Consider the following query to find the second highest salary from Employees table: SELECT MIN(Salary)
    FROM Employees
    WHERE EmpID IN
    (
    SELECT TOP 2 EmpID
    FROM Employees
    ORDER BY Salary Desc
    )
    The same query can be re-written using a derived table as shown below, and it performs twice as fast as the above query: SELECT MIN(Salary)
    FROM
    (
    SELECT TOP 2 Salary
    FROM Employees
    ORDER BY Salary Desc
    ) AS A
    This is just an example, the results might differ in different scenarios depending upon the database design, indexes, volume of data etc. So, test all the possible ways a query could be written and go with the efficient one. With some practice and understanding of 'how SQL Server optimizer works', you will be able to come up with the best possible queries without this trial and error method.

  • While designing your database, design it keeping 'performance' in mind. You can't really tune performance later, when your database is in production, as it involves rebuilding tables/indexes, re-writing queries. Use the graphical execution plan in Query Analyzer or SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries. Make sure your queries do 'Index seeks' instead of 'Index scans' or 'Table scans'. A table scan or an index scan is a very bad thing and should be avoided where possible (sometimes when the table is too small or when the whole table needs to be processed, the optimizer will choose a table or index scan).

  • Prefix table names with owner names, as this improves readability and avoids unnecessary confusion. Microsoft SQL Server Books Online even states that qualifying table names with owner names helps in execution plan reuse.

  • Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like '(1 row(s) affected)' after executing INSERT, UPDATE, DELETE and SELECT statements. This in turn improves the performance of the stored procedures by reducing the network traffic.

  • Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data, whereas with old-style joins the WHERE clause handles both the join condition and the filtering. The first of the following two queries shows an old-style join, while the second one shows the new ANSI join syntax: SELECT a.au_id, t.title
    FROM titles t, authors a, titleauthor ta
    WHERE
    a.au_id = ta.au_id AND
    ta.title_id = t.title_id AND
    t.title LIKE '%Computer%'

    SELECT a.au_id, t.title
    FROM authors a
    INNER JOIN
    titleauthor ta
    ON
    a.au_id = ta.au_id
    INNER JOIN
    titles t
    ON
    ta.title_id = t.title_id
    WHERE t.title LIKE '%Computer%'
    Be aware that the old style *= and =* left and right outer join syntax may not be supported in a future release of SQL Server, so you are better off adopting the ANSI standard outer join syntax.

  • Do not prefix your stored procedure names with 'sp_'. The prefix sp_ is reserved for system stored procedures that ship with SQL Server. Whenever SQL Server encounters a procedure name starting with sp_, it first tries to locate the procedure in the master database, then checks for any qualifiers (database, owner) provided, and then tries dbo as the owner. So you can really save time in locating the stored procedure by avoiding the sp_ prefix. But there is an exception! While creating general-purpose stored procedures that are called from all your databases, go ahead and prefix those stored procedure names with sp_ and create them in the master database.

  • Views are generally used to show specific data to specific users based on their interest. Views are also used to restrict access to the base tables by granting permission on only views. Yet another significant use of views is that, they simplify your queries. Incorporate your frequently required complicated joins and calculations into a view, so that you don't have to repeat those joins/calculations in all your queries, instead just select from the view.

  • Use 'User Defined Datatypes', if a particular column repeats in a lot of your tables, so that the datatype of that column is consistent across all your tables.

  • Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures, and let your applications access these stored procedures. This keeps the data access clean and consistent across all the modules of your application, at the same time centralizing the business logic within the database.

  • Try not to use the text and ntext datatypes for storing large textual data. The text datatype has some inherent problems associated with it: you cannot directly write or update text data using INSERT or UPDATE statements (you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT), and there are a lot of bugs associated with replicating tables containing text columns. So, if you don't have to store more than 8 KB of text, use the char(8000) or varchar(8000) datatypes.

  • If you have a choice, do not store binary files, image files (Binary large objects or BLOBs) etc. inside the database. Instead store the path to the binary/image file in the database and use that as a pointer to the actual binary file. Retrieving, manipulating these large binary files is better performed outside the database and after all, database is not meant for storing files.

  • Use char data type for a column, only when the column is non-nullable. If a char column is nullable, it is treated as a fixed length column in SQL Server 7.0+. So, a char(100), when NULL, will eat up 100 bytes, resulting in space wastage. So, use varchar(100) in this situation. Of course, variable length columns do have a very little processing overhead over fixed length columns. Carefully choose between char and varchar depending up on the length of the data you are going to store.

  • Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan every time at runtime. IF and CASE statements come in handy to avoid dynamic SQL. Another major disadvantage of using dynamic SQL is that it requires the users to have direct access permissions on all accessed objects like tables and views. Generally, users are given access to the stored procedures which reference the tables, but not directly on the tables; in this case, dynamic SQL will not work. Consider the following scenario, where a user named 'dSQLuser' is added to the pubs database, and is granted access to a procedure named 'dSQLproc', but not to any other tables in the pubs database. The procedure dSQLproc executes a direct SELECT on the titles table and that works. The second statement runs the same SELECT on the titles table using dynamic SQL and it fails with the following error:
    Server: Msg 229, Level 14, State 5, Line 1
    SELECT permission denied on object 'titles', database 'pubs', owner 'dbo'.
    To reproduce the above problem, use the following commands:

    sp_addlogin 'dSQLuser'
    GO
    sp_defaultdb 'dSQLuser', 'pubs'
    USE pubs
    GO
    sp_adduser 'dSQLuser', 'dSQLuser'
    GO
    CREATE PROC dSQLProc
    AS
    BEGIN
    SELECT * FROM titles WHERE title_id = 'BU1032'
    --This works
    DECLARE @str CHAR(100)
    SET @str = 'SELECT * FROM titles WHERE title_id = ''BU1032'''
    EXEC (@str)
    --This fails
    END
    GO
    GRANT EXEC ON dSQLProc TO dSQLuser
    GO
    Now login to the pubs database using the login dSQLuser and execute the procedure dSQLproc to see the problem.

  • Consider the following drawbacks before using the IDENTITY property for generating primary keys. IDENTITY is very much SQL Server specific, and you will have problems if you want to support different database backends for your application. IDENTITY columns have other inherent problems as well: they run out of numbers one day or the other, numbers can't be reused automatically after deleting rows, and replication and IDENTITY columns don't always get along well. So, come up with an algorithm to generate a primary key in the front-end or from within the inserting stored procedure. There could be issues with generating your own primary keys too, like concurrency while generating the key, or running out of values. So, consider both options and go with the one that suits you well.

  • Minimize the usage of NULLs, as they often confuse the front-end applications, unless the applications are coded intelligently to eliminate NULLs or convert the NULLs into some other form. Any expression that deals with NULL results in a NULL output. The ISNULL and COALESCE functions are helpful in dealing with NULL values. Here's an example that explains the problem. Consider the following table, Customers, which stores the names of the customers; the middle name can be NULL. CREATE TABLE Customers
    (
    FirstName varchar(20),
    MiddleName varchar(20),
    LastName varchar(20)
    )
    Now insert a customer into the table whose name is Tony Blair, without a middle name: INSERT INTO Customers
    (FirstName, MiddleName, LastName)
    VALUES ('Tony',NULL,'Blair')
    The following SELECT statement returns NULL, instead of the customer name: SELECT FirstName + ' ' + MiddleName + ' ' + LastName FROM Customers

    To avoid this problem, use
    ISNULL as shown below: SELECT FirstName + ' ' + ISNULL(MiddleName + ' ','') + LastName FROM Customers

  • Use Unicode datatypes like nchar, nvarchar, ntext, if your database is going to store not just plain English characters, but a variety of characters used all over the world. Use these datatypes, only when they are absolutely needed as they need twice as much space as non-unicode datatypes.

  • Always use a column list in your INSERT statements. This helps in avoiding problems when the table structure changes (like adding a column). Here's an example which shows the problem. Consider the following table: CREATE TABLE EuropeanCountries
    (
    CountryID int PRIMARY KEY,
    CountryName varchar(25)
    )
    Here's an INSERT statement without a column list, that works perfectly: INSERT INTO EuropeanCountries
    VALUES (1, 'Ireland')

    Now, let's add a new column to this table:
    ALTER TABLE EuropeanCountries
    ADD EuroSupport bit
    Now run the above INSERT statement. You get the following error from SQL Server:
    Server: Msg 213, Level 16, State 4, Line 1
    Insert Error: Column name or number of supplied values does not match table definition.
    This problem can be avoided by writing an

    INSERT statement with a column list as shown below: INSERT INTO EuropeanCountries
    (CountryID, CountryName)
    VALUES (1, 'England')

  • Perform all your referential integrity checks, data validations using constraints (foreign key and check constraints). These constraints are faster than triggers. So, use triggers only for auditing, custom tasks and validations that can not be performed using these constraints. These constraints save you time as well, as you don't have to write code for these validations and the RDBMS will do all the work for you.

  • Always access tables in the same order in all your stored procedures/triggers consistently. This helps in avoiding deadlocks. Other things to keep in mind to avoid deadlocks are: keep your transactions as short as possible; touch as little data as possible during a transaction; never, ever wait for user input in the middle of a transaction; do not use higher-level locking hints or restrictive isolation levels unless they are absolutely needed. Make your front-end applications deadlock-intelligent, that is, these applications should be able to resubmit the transaction in case the previous transaction fails with error 1205. In your applications, process all the results returned by SQL Server immediately, so that the locks on the processed rows are released and there is no blocking.

  • Offload tasks like string manipulations, concatenations, row numbering, case conversions, type conversions etc. to the front-end applications, if these operations are going to consume more CPU cycles on the database server (It's okay to do simple string manipulations on the database end though). Also try to do basic validations in the front-end itself during data entry. This saves unnecessary network roundtrips.

  • If back-end portability is your concern, stay away from bit manipulations with T-SQL, as this is very much RDBMS specific. Further, using bitmaps to represent different states of a particular entity conflicts with the normalization rules.

  • Consider adding a @Debug parameter to your stored procedures. This can be of bit data type. When a 1 is passed for this parameter, print all the intermediate results, variable contents using SELECT or PRINT statements and when 0 is passed do not print debug information. This helps in quick debugging of stored procedures, as you don't have to add and remove these PRINT/SELECT statements before and after troubleshooting problems.

  • Do not call functions repeatedly within your stored procedures, triggers, functions and batches. For example, you might need the length of a string variable in many places of your procedure, but don't call the LEN function whenever it's needed, instead, call the LEN function once, and store the result in a variable, for later use.

  • Make sure your stored procedures always return a value indicating the status. Standardize on the return values of stored procedures for success and failures. The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters.

  • If your stored procedure always returns a single row resultset, consider returning the resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than resultsets returned by SELECT statements.

  • Always check the global variable @@ERROR immediately after executing a data manipulation statement (like INSERT/UPDATE/DELETE), so that you can rollback the transaction in case of an error (@@ERROR will be greater than 0 in case of an error). This is important, because, by default, SQL Server will not rollback all the previous changes within a transaction if a particular statement fails. This behavior can be changed by executing SET XACT_ABORT ON. The @@ROWCOUNT variable also plays an important role in determining how many rows were affected by a previous data manipulation (also, retrieval) statement, and based on that you could choose to commit or rollback a particular transaction.
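    A minimal sketch of this pattern (the dbo.Orders table and values are made up for illustration):

    BEGIN TRANSACTION

    UPDATE dbo.Orders SET OrderStatus = 5 WHERE OrderID = 1001

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

    COMMIT TRANSACTION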

  • To make SQL Statements more readable, start each clause on a new line and indent when needed. Following is an example: SELECT title_id, title
    FROM titles
    WHERE title LIKE 'Computing%' AND
    title LIKE 'Gardening%'

  • Though we survived the Y2K, always store 4 digit years in dates (especially, when using char or int datatype columns), instead of 2 digit years to avoid any confusion and problems. This is not a problem with datetime columns, as the century is stored even if you specify a 2 digit year. But it's always a good practice to specify 4 digit years even with datetime datatype columns.

  • In your queries and other SQL statements, always represent date in yyyy/mm/dd format. This format will always be interpreted correctly, no matter what the default date format on the SQL Server is. This also prevents the following error, while working with dates: Server: Msg 242, Level 16, State 3, Line 2
    The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.

  • As is true with any other programming language, do not use GOTO or use it sparingly. Excessive usage of GOTO can lead to hard-to-read-and-understand code.

  • Do not forget to enforce unique constraints on your alternate keys.

  • Always be consistent with the usage of case in your code. On a case insensitive server, your code might work fine, but it will fail on a case sensitive SQL Server if your code is not consistent in case. For example, if you create a table in SQL Server or database that has a case-sensitive or binary sort order, all references to the table must use the same case that was specified in the CREATE TABLE statement. If you name the table as 'MyTable' in the CREATE TABLE statement and use 'mytable' in the SELECT statement, you get an 'object not found' or 'invalid object name' error.

  • Though T-SQL has no concept of constants (like the ones in C language), variables will serve the same purpose. Using variables instead of constant values within your SQL statements, improves readability and maintainability of your code. Consider the following example: UPDATE dbo.Orders
    SET OrderStatus = 5
    WHERE OrdDate < '2001/10/25'
    The same update statement can be re-written in a more readable form as shown below: DECLARE @ORDER_PENDING int
    SET @ORDER_PENDING = 5
    UPDATE dbo.Orders
    SET OrderStatus = @ORDER_PENDING
    WHERE OrdDate < '2001/10/25'



  • Do not use the column numbers in the ORDER BY clause as it impairs the readability of the SQL statement. Further, changing the order of columns in the SELECT list has no impact on the ORDER BY when the columns are referred by names instead of numbers. Consider the following example, in which the second query is more readable than the first one: SELECT OrderID, OrderDate
    FROM Orders
    ORDER BY 2

    SELECT OrderID, OrderDate
    FROM Orders
    ORDER BY OrderDate