Friday, December 26, 2008

Happy New Year

I think this is my first blog post that does not relate to SQL Server, but why not?
I wish SQL Server consultants lots of contracts helping businesses in 2009; I wish SQL Server DBAs lots of interesting work and smoothly running databases; I wish SQL Server developers efficient queries and solved challenges; and to everybody and their families in 2009 I wish health, happiness, and wealth. If you look forward to a good year ahead, spread happiness with these wonderful New Year wishes.

Thursday, December 18, 2008

Do you always write parameterized queries?

As you know, SQL Server creates an execution plan for a query/stored procedure and stores it in the procedure cache in order to reuse it the next time you run the query.
But more and more execution plans in memory will also hurt performance, since we have no control over the procedure cache's size. So what should you do? Let's look at the scenario below, which I ran on a DEV machine (DO NOT run it on production). First of all I used DBCC FREEPROCCACHE, which clears the procedure cache and causes ad hoc queries to be recompiled. Next I ran what is effectively the same query with different parameters.

DBCC FREEPROCCACHE

SELECT * FROM Production.WorkOrder
WHERE ProductID=522

SELECT * FROM Production.WorkOrder
WHERE ProductID=737

DECLARE @i INT
set @i = 518
SELECT *
FROM Production.WorkOrder
WHERE ProductID = @i
GO
--See how many plans SQL Server created
SELECT stats.execution_count AS exec_count,
p.size_in_bytes as [size],
[sql].[text] as [plan_text]
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) sql
JOIN sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle

/*
1 49152 DECLARE @i INT set @i = 518 SELECT * FROM Production.WorkOrder WHERE ProductID = @i
1 24576 SELECT * FROM Production.WorkOrder WHERE ProductID=737
1 24576 SELECT * FROM Production.WorkOrder WHERE ProductID=522
*/
As you see, SQL Server created THREE execution plans.
Now let's wrap the query in a stored procedure and see what happens.

CREATE PROCEDURE spTest
@i INT
AS
SELECT *
FROM Production.WorkOrder
WHERE ProductID = @i
GO

EXEC spTest 522
EXEC spTest 737
EXEC spTest 518

--See how many plans SQL Server created
SELECT stats.execution_count AS exec_count,
p.size_in_bytes as [size],
[sql].[text] as [plan_text]
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) sql
JOIN sys.dm_exec_query_stats stats ON stats.plan_handle = p.plan_handle

/*
3 40960 CREATE PROCEDURE spTest @i INT AS SELECT * FROM Production.WorkOrder WHERE ProductID = @i
*/
Wow, we get a single execution plan that SQL Server reused three times.
I also recommend you to read Tony's blog about the subject
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/07/procedure-cache-tuning-sizing-from-1gbyte-to-768kbytes-increase-the-size-of-usable-data-cache.aspx

Tony writes in his blog, and I completely agree with him:
"Surprised? It's one of the reasons DBA's keep bleating on about using stored procedures, it forces the mindset to use the procedure cache more effectively. By parameterising, but preferably using stored procedures we get plan reuse which means a) less compiles thereby reducing CPU load, b) more available pages for the data cache thereby reducing physical disk IO and c) DBA’s are always right ;)."


PS. Besides using stored procedures, we can use sp_executesql with parameters, as below.

EXEC sp_executesql N'SELECT SUM(ProductID) AS ProductTotal
FROM Production.WorkOrder
WHERE ProductID= @ProductID', N'@ProductID INT', 722

EXEC sp_executesql N'SELECT SUM(ProductID) AS ProductTotal
FROM Production.WorkOrder
WHERE ProductID= @ProductID', N'@ProductID INT', 522

.....

GO

Thursday, December 4, 2008

When a constraint remains in a system table even though the table no longer exists

OK, I'm talking about temporary tables, and yes, it can happen to you even when you think the temporary table no longer exists. Consider the script below, where a stored procedure inserts data of the wrong type, which causes a conversion error (error 245). As you probably know, we should never name constraints on local temporary tables (as opposed to permanent tables), because if we run the following statement from two connections we get something like this:
/*
Msg 2714, Level 16, State 4, Line 1
There is already an object named 'PK_tmp1' in the database
*/


USE master
GO
ALTER PROCEDURE spSample
AS
CREATE TABLE #tmp(
col int --PRIMARY KEY NONCLUSTERED--,
CONSTRAINT PK_tmp1 PRIMARY KEY NONCLUSTERED
(
col
)
)
BEGIN TRY
INSERT INTO #tmp SELECT 'A'
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER()
END CATCH

--Usage
USE master
EXEC spSample


USE tempdb
SELECT * FROM sys.objects
WHERE OBJECT_NAME(OBJECT_ID)like '%#tmp%'

Try adding DROP TABLE #tmp to the BEGIN CATCH block; SQL Server then deletes the constraint as well.
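For completeness, here is a sketch of that change (the same procedure as above, with the DROP added in the CATCH block):

```sql
--Revised procedure: dropping the temp table in the CATCH block
--removes the PK_tmp1 constraint from tempdb's system tables as well
ALTER PROCEDURE spSample
AS
CREATE TABLE #tmp(
    col int,
    CONSTRAINT PK_tmp1 PRIMARY KEY NONCLUSTERED (col)
)
BEGIN TRY
    INSERT INTO #tmp SELECT 'A'
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()
    DROP TABLE #tmp --the named constraint disappears with the table
END CATCH
```

Run the usage queries again and the constraint should no longer appear in tempdb's sys.objects.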

Wednesday, November 12, 2008

Can I hide an undesirable database in SSMS?

We have discussed this issue many times, and there is an open request with Microsoft to add this feature in the next release. Recently I was visiting one of our clients and we tried to do something about it.

1) CREATE new SQL login 'John'
2) CREATE a user named 'John' in the master database
3) GRANT CREATE DATABASE to John
4) While impersonating John, create a database called 'demo'
5) REVOKE CREATE DATABASE permission from John
6) REVOKE VIEW ANY DATABASE permission from PUBLIC
7) Register this server as John
8) From the 'John' session, expand the database tree. Now, you should see only master, tempdb, demo
9) GRANT VIEW ANY DATABASE to PUBLIC
10) From the 'John' session, you should see all the databases.

Note, however, that this works only because the user is the owner of the database: an owner can always see their own database even without VIEW ANY DATABASE permission.
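The steps above can be sketched in T-SQL as follows (the login name, password, and database name are just the placeholders from the walkthrough):

```sql
USE master;
--1-3) create the login and user, grant CREATE DATABASE
CREATE LOGIN John WITH PASSWORD = 'Str0ngP@ssword!'; --placeholder password
CREATE USER John FOR LOGIN John;
GRANT CREATE DATABASE TO John;

--4) create the database while impersonating John, so John owns it
EXECUTE AS LOGIN = 'John';
CREATE DATABASE demo;
REVERT;

--5-6) take the permissions away again
REVOKE CREATE DATABASE FROM John;
REVOKE VIEW ANY DATABASE FROM PUBLIC;
--now a session logged in as John sees only master, tempdb and demo

--9) undo: everybody sees every database again
--GRANT VIEW ANY DATABASE TO PUBLIC;
```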

Wednesday, October 22, 2008

Track down DDL Database changes

If you want to know who alters/drops/creates tables/views/stored procedures... I would like to share the following script with you. As you know, DDL triggers (introduced in SQL Server 2005) work much like DML triggers, but the details of the event that fired the trigger are available only in XML format.

First of all, I create a table that will hold the events.

CREATE TABLE [dbo].[DDL_ChangeEvents](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Session_ID] [int] NOT NULL CONSTRAINT [DF_ddl_change_Session_ID] DEFAULT (@@spid),
[Session_IPAddress] [nvarchar](50) NULL,
[Insert_Date] [datetime] NOT NULL CONSTRAINT [DF_ddl_change_Insert_Date] DEFAULT (GETDATE()),
[Username] [nvarchar](100) NOT NULL CONSTRAINT [DF_DDL_change_Username] DEFAULT (CONVERT([nvarchar](100),ORIGINAL_LOGIN(),(0))),
[EventType] [nvarchar](200) NULL,
[objectName] [nvarchar](200) NULL,
[objectType] [nvarchar](200) NULL,
[sql] [nvarchar](max) NULL
) ON [PRIMARY]

It won't help if I get only the SPID of the session, since in many cases all users log in with a single defined login, or even with 'sa'. So I need the IP address of those workstations; hence I added the Session_IPAddress column.
Now, let's create a database trigger to capture the info.

CREATE TRIGGER [trgDataDDLChangeEvent] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @eventdata XML
SET @eventdata = EVENTDATA()
IF @eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(200)')
<> 'CREATE_STATISTICS'
INSERT INTO DDL_ChangeEvents
(
EventType,
ObjectName,
ObjectType,
[sql] ,
Session_IPAddress
)
SELECT @eventdata.value('(/EVENT_INSTANCE/EventType)[1]',
'nvarchar(200)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]',
'nvarchar(200)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectType)[1]',
'nvarchar(200)'),
@eventdata.value('(/EVENT_INSTANCE/TSQLCommand)[1]',
'nvarchar(max)'), client_net_address
FROM sys.dm_exec_connections WHERE session_id=@@SPID
;

Well, I don't bother recording CREATE_STATISTICS events, hence the IF block that skips that event type. I get the IP address from the sys.dm_exec_connections DMV, which has a client_net_address column.
Now create/drop/alter a table (also via SSMS), for example, and query the DDL_ChangeEvents table to see what was captured.

Sunday, October 5, 2008

There is a clever way to rebuild indexes

Hi everybody.
It seems I am seeing more and more inquiries from our clients asking for help solving performance issues related to rebuilding indexes. All of them (or almost all) have been using the Maintenance Plan Rebuild/Reorganize Index Task. We have lots of clients with pretty big databases (>200GB) who have not yet hired a DBA :-). They used to run this task against all tables in all databases; moreover, one client used to run it in the middle of the work day. This leads to locks on tables and decreased performance. I would also warn you not to cancel the task, as SQL Server will roll back the whole transaction and you will wait a long time; just let the task complete. Instead of running the task, I suggested first identifying fragmented indexes on tables that have more than 1000 pages.

DECLARE @RebuildStatement nvarchar(4000)
DECLARE RebuildStatements CURSOR LOCAL FAST_FORWARD
FOR
SELECT 'ALTER INDEX ' + i.name + ' ON ' +
       OBJECT_NAME(i.object_id) + ' REORGANIZE;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') phystat
INNER JOIN sys.indexes i
    ON i.object_id = phystat.object_id
   AND i.index_id = phystat.index_id
WHERE phystat.avg_fragmentation_in_percent > 40
  AND phystat.page_count >= 1000
  AND i.index_id > 0 --skip heaps, which have no index name

OPEN RebuildStatements
WHILE 1 = 1
BEGIN
FETCH NEXT FROM RebuildStatements INTO @RebuildStatement
IF @@FETCH_STATUS <> 0 BREAK
EXEC(@RebuildStatement)
END
CLOSE RebuildStatements
DEALLOCATE RebuildStatements

The SELECT above generates a simple script to REORGANIZE (change it to REBUILD if you prefer) the fragmented indexes, and the cursor EXECUTEs the dynamic SQL. As you probably know, this script requires SQL Server 2005/2008, and do not forget about the really great option of rebuilding indexes ONLINE. For more details please see BOL.
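If you are on Enterprise Edition, the generated statements can take advantage of that ONLINE option; for a single index the statement would look something like this (the index name is illustrative):

```sql
--REBUILD instead of REORGANIZE, without long blocking of readers/writers
--(ONLINE = ON is an Enterprise Edition feature)
ALTER INDEX [IX_WorkOrder_ProductID] ON Production.WorkOrder
REBUILD WITH (ONLINE = ON);
```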

Monday, September 15, 2008

To speak kindly about RedGate

All of us are aware that restoring a database with a different collation may cause a headache. That is what happened to one of our databases, where a developer created the database without paying attention to what kind of data he was going to deal with. OK, we can set the COLLATION even per column, but what if you have lots of tables to alter to the new COLLATION? Well, we could use import/export, an SSIS package, or perhaps write some T-SQL script to do the job; however, I'd like to tell you how easy it is to get a new database with the desired COLLATION by using a RedGate tool.
1) CREATE DATABASE dbname COLLATE 'your desired collation'
2) Open the RedGate SQL Compare tool to move the structure of the source db to the destination db. That's all.

It takes only a few minutes, even though we had a 25GB database. I'd strongly recommend having a look at this great tool (www.red-gate.com).

Monday, September 1, 2008

Computed column is PERSISTED?

Hi everybody.
I'd like to share with you how important it is to define a computed column as PERSISTED.
As you know from the BOL
/*
For columns specified as PERSISTED, the SQL Server 2005 Database Engine physically stores the computed values in the table and updates the values when any other columns on which the computed column depends are updated. By marking a computed column as PERSISTED, you can create indexes on computed columns defined on expressions that are deterministic, but not precise.
*/
I visited a client two days ago who has been experiencing a performance issue with one of their very important queries. One big table contains a computed column that the SELECT statement returns to the client. We saw a very high number of logical reads and TWO Compute Scalar iterators. For obvious reasons we defined the computed column as PERSISTED, and performance increased dramatically. Moreover, create an index on the computed column and see how performance increases even more.


See the demo script below to see the effect.

CREATE TABLE t(c INT NOT NULL identity(1,1) PRIMARY KEY,
c1 AS '00'+cast(c AS VARCHAR(100)))

SET NOCOUNT ON
INSERT INTO t DEFAULT VALUES
GO 100000


SET STATISTICS IO ON
SET STATISTICS PROFILE ON
SELECT c1 FROM t
--Table 't'. Scan count 1, logical reads 1250, physical reads 0, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

--select c1 from t
|--Compute Scalar(DEFINE:([master].[dbo].[t].[c1]=[master].[dbo].[t].[c1]))
|--Compute Scalar(DEFINE:([master].[dbo].[t].[c1]='00'+CONVERT(varchar(100),[master].[dbo].[t].[c],0)))
|--Clustered Index Scan(OBJECT:([master].[dbo].[t].[PK__t__1446FBA6]))



--A computed column cannot be altered in place, so drop it and re-add it as PERSISTED
ALTER TABLE t DROP COLUMN c1
ALTER TABLE t ADD c1 AS '00'+CAST(c AS VARCHAR(100)) PERSISTED

SET STATISTICS IO ON
SET STATISTICS PROFILE ON

SELECT c1 FROM t
--Table 't'. Scan count 0, logical reads 50, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
--select c1 from t
|--Compute Scalar(DEFINE:([master].[dbo].[t].[c1]=[master].[dbo].[t].[c1]))
|--Clustered Index Scan(OBJECT:([master].[dbo].[t].[PK__t__1446FBA6]))

DROP TABLE t

Tuesday, August 19, 2008

String or binary data would be truncated

Hi folks.
I named this article after this pretty famous error message; I'm sure everybody has seen it at least once. This week I visited a client who asked me a very interesting question. They have a table with many columns, most of them defined with the VARCHAR(n) datatype. One of their queries has been failing with the error below.

"Msg 8152, Level 16, State 14, Line 1
String or binary data would be truncated.
The statement has been terminated."

Well, we knew for sure that the user supplied a string that did not fit the column datatype, but the real question is: WHICH of the many columns? As I said, they have more than 50 columns, and it was pretty difficult to identify which column failed.
My point is that it would be nice if this error message included information about which column is failing, and I hope that MS will do something about it.
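In the meantime, one way we narrowed it down was to compare each character column's defined length against the longest value actually arriving in the data. A sketch of the idea (the target table name is hypothetical):

```sql
--List every character column of the target table together with its defined length;
--then, for each column, compare MAX(LEN(...)) of the incoming data (e.g. loaded
--into a staging table with wide columns) against defined_length: any column whose
--data is longer than its definition is the one causing error 8152.
SELECT c.COLUMN_NAME,
       c.DATA_TYPE,
       c.CHARACTER_MAXIMUM_LENGTH AS defined_length
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_NAME = 'TargetTable' --hypothetical table name
  AND c.DATA_TYPE IN ('varchar','nvarchar','char','nchar')
ORDER BY c.ORDINAL_POSITION;
```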

Monday, July 28, 2008

Cannot delete a job which is related to MP?

Hi folks
If you are using SQL Server 2005 (SP) and build Maintenance Plans, especially with added subplans, you have probably seen the following error message
/*
Drop failed for Job ‘jobname’. (Microsoft.SqlServer.Smo)

The DELETE statement conflicted with the REFERENCE constraint “FK_subplan_job_id”. The conflict occurred in database “msdb”, table “dbo.sysmaintplan_subplans”, column ‘job_id’.
The statement has been terminated. (Microsoft SQL Server, Error: 547)
*/

So when you create an MP, SQL Server creates a job and an SSIS package that references the subplan, and inserts data into system tables in the msdb database (sysmaintplan_subplans, sysjobs_view, sysjobschedules).

Interestingly, if you execute a job it makes more inserts, into a log table called sysmaintplan_log. All of these tables are linked through FK and PK relationships. The problem is that when you try to delete a job, you get foreign key errors until you manually remove those entries from the system tables.

Please see the link written by Jonas Kempas http://gudenas.com/2007/04/20/sql-server-2005-delete-maintenance-plan-error/ which explains step by step how to delete these orphaned jobs.

I tried it and it worked just fine.

Wednesday, July 2, 2008

How do we open a large table?

This question was raised by one of the developers while we were working at the client's site. They use SQL Server 2005 (SP2) and remembered that in SQL Server 2000 we could open a table via EM and then specify (All rows, Top rows...); do you remember?

I rarely use SSMS to open/edit table data, and it seems that MS simply removed this option. In SQL Server 2005 we have the TABLESAMPLE clause, which can be used to open a table with lots of rows. We only needed to see what kind of data this specific table has, without opening the entire table or using the TOP clause.

This example returns an approximate percentage of rows and generates a random value for each physical 8-KB page in the table.

SELECT * FROM table
TABLESAMPLE system(5 PERCENT)

I'd really advise you to read BOL about this great feature
ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/8868e8fd-6c42-4171-9eab-a0e38cb1bfd3.htm

Thursday, June 19, 2008

How do you maintain SharePoint databases?

My latest post was about maintaining the msdb database; today I would like to share with you some thoughts about maintaining SharePoint databases.

Last week I visited a client who has been working with SharePoint (MOSS 2007) and started complaining about SQL Server performance. I identified long-running queries, as well as a significant number of deadlocks happening every hour in the databases belonging to the SharePoint product. However, when I recommended adding some indexes to the tables, people almost started crying: they begged me not to, because they had been told that SharePoint databases are self-managed and a DBA should not touch them at all. That looked very strange to me, but it was my first experience with MOSS, so I decided to do some searching on the internet.

I found a couple of documents (some even published by MS) suggesting that SharePoint SQL Server performance can be managed by shrinking and defragging the DB. I also asked some SharePoint people, and they say that accessing the database directly, or changing anything in their databases aside from what's provided out of the box, is not supported unless you do it through the SharePoint API. Hmm... looks strange, doesn't it?

Finally I ended up with a sample script to identify heavily fragmented indexes and run ALTER INDEX index_name ON tablename REORGANIZE; against them.
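A sketch of that script (the same approach as in my rebuild-indexes post above, limited to real indexes on sizable tables):

```sql
--Generate REORGANIZE statements for heavily fragmented indexes
SELECT 'ALTER INDEX ' + i.name + ' ON ' +
       OBJECT_NAME(i.object_id) + ' REORGANIZE;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') phystat
JOIN sys.indexes i
    ON i.object_id = phystat.object_id
   AND i.index_id = phystat.index_id
WHERE phystat.avg_fragmentation_in_percent > 40
  AND phystat.page_count >= 1000
  AND i.index_id > 0 --skip heaps, which have no index name
```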

PS.
I cannot imagine a customer being unwilling to create whatever indexes are necessary to ensure reasonable performance of a production SharePoint system.

Sunday, May 18, 2008

How do you maintain MSDB database?

Last week I visited a client who has pretty big databases and performs BACKUP LOG operations on almost all user databases. Then one of the most critical databases got corrupted, and the DBA was pretty confident that he wouldn't lose any data (as he had log backups) and could bring the database back from backup within 10-12 minutes.

They also have a very well written stored procedure that performs RESTORE DATABASE based on the database name and the number of log files to restore. They ran the stored procedure, and it had been running for almost 5 hours when the DBA canceled the process. What happened? Why did it take so much time? I thought about it and asked him a question: "Have you ever cleared the backup history?" He replied that he hadn't. Then we checked the backupset system table, which contained more than one million rows!!!!

I remembered that SQL Server MVP Geoff N.Hiten wrote a blog post about this issue, and I even posted a comment on it. Please check the following article
http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx

It took only 3 minutes to run the script that creates the indexes, and about 15 minutes to run:

USE msdb
GO
DECLARE @OldestDate DATETIME
SET @OldestDate = GETDATE() - 100
EXEC sp_delete_backuphistory @OldestDate

Now that it is finished, our RESTORE command took only 12 minutes to complete.
I'd like to point out how important it is to clear the backup history (fortunately, in SQL Server 2005 we have a built-in task to do the job), so that at time 'X' you can successfully restore the database you need.

Monday, May 5, 2008

Getting row count of table in SQL Server 2005

It is much easier in SQL Server 2005 to get row count per table.

SELECT
[TableName]=tbl.name,
[RowCount] = SUM
(
CASE
WHEN (pt.index_id < 2) AND (au.type = 1) THEN pt.rows
ELSE 0
END
)
FROM
sys.tables tbl
INNER JOIN sys.partitions pt
ON tbl.object_id = pt.object_id
INNER JOIN sys.allocation_units au
ON pt.partition_id = au.container_id
GROUP BY
tbl.name ORDER BY [RowCount] DESC;

Sunday, May 4, 2008

Getting next value

Hi folks. I'd like to share with you a technique for getting the next value from a table; that is, creating your own sequence mechanism.
As we all know, an Identity property may have gaps, so use the script below to retrieve the next value. We create a table with one row and only one column, which holds the last used sequence value.


CREATE TABLE seq(col int not null);
INSERT INTO seq values(0);
go
SELECT * FROM seq

CREATE PROC spget_nextseq @next_val AS INT OUTPUT
AS
UPDATE seq SET @next_val= col = col + 1;
go

-- usage
DECLARE @i as int;
EXEC spget_nextseq @i output;
SELECT @i;

Note that in an OLTP application where many connections run this script, you may end up with deadlocks. One way to prevent that is to use the TABLOCK lock hint.

ALTER PROC spget_nextseq @next_val AS INT OUTPUT
AS
UPDATE seq SET @next_val= col = col + 1 FROM seq WITH (TABLOCK);
go

Sunday, April 20, 2008

Quick look at IN predicate

There are lots of articles and techniques on the internet about how to deal with delimited parameters. Last Wednesday I visited a client who asked me to write a quick query to return data based on delimited values. Here we go

Use pubs

DECLARE @t VARCHAR(50)
SET @t = ('Bennet,smith')

SELECT *
FROM authors
WHERE ',' + @t + ',' LIKE '%,' + au_lname + ',%'

SELECT * FROM
authors WHERE CHARINDEX(',' + au_lname + ',',','+ @t+',')>0

As you can imagine, performance will be horrible.
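One common alternative is to split the list into a table first, so the final join can use an index on au_lname instead of a non-sargable string comparison. A sketch (a simple CHARINDEX-based splitter):

```sql
DECLARE @t VARCHAR(50), @pos INT, @item VARCHAR(40)
DECLARE @names TABLE (au_lname VARCHAR(40) NOT NULL)
SET @t = 'Bennet,smith'

--split the comma-delimited string into rows
WHILE LEN(@t) > 0
BEGIN
    SET @pos = CHARINDEX(',', @t)
    IF @pos = 0
        SELECT @item = @t, @t = ''
    ELSE
        SELECT @item = LEFT(@t, @pos - 1), @t = SUBSTRING(@t, @pos + 1, 50)
    INSERT INTO @names(au_lname) VALUES (@item)
END

--an equality join can now seek on an index over au_lname
SELECT a.*
FROM authors a
JOIN @names n ON n.au_lname = a.au_lname
```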

Monday, March 17, 2008

SQL Server Agent Jobs duration Report

Hi folks. I would like to share with you the following simple script to report job durations in SQL Server 2005. I manipulated the INTEGER values stored by SQL Server to convert them into DATETIME/CHAR(8) datatypes for presentation.
Thanks to SQL Server MVP Peter Ward, who provided me with the StartTime calculation.

WITH job_duration_view
AS
(
SELECT name,
StartTime = CONVERT(DATETIME, RTRIM(last_run_date)) +
(last_run_time * 9 + last_run_time % 10000 * 6 + last_run_time % 100 * 10 + 25 * last_run_duration) / 216e4 ,
CONVERT(CHAR(8),DATEADD(ss,last_run_duration,CAST(last_run_date AS CHAR(8))),114)
AS duration
FROM msdb.dbo.sysjobservers js
JOIN msdb.dbo.sysjobs j ON j.job_id = js.job_id
WHERE last_run_date >0 AND last_run_time >0
) SELECT name AS job_name,StartTime,
StartTime -'19000101'+Duration AS EndDate ,Duration
FROM job_duration_view

Sunday, March 9, 2008

Change collation in tempdb

What happens if you install a SQL Server instance with a collation that differs from the database collation? We had just started testing our production application and everything seemed to work well; however, one of our stored procedures inserts Hebrew characters into a temporary table, and after some operations the data gets inserted into a real table. Guess what we saw in the database? Right: '????' symbols. Sure, if you do not use a temporary table and insert the data directly into a permanent table, you see the right characters. Someone said we should stop using a temporary table and insert the data into a 'temporary' permanent table instead. Another suggestion was to run ALTER DATABASE tempdb to change the COLLATION, but as we know you cannot run this statement on system databases.
The error is
Msg 3708, Level 16, State 5, Line 1
Cannot alter the database 'tempdb' because it is a system database.

So what is the solution? Well, use the REBUILDDATABASE option of Setup.exe, or re-install the instance. Fortunately, the whole story happened on a development machine, and we did not forget to install the PRODUCTION server with the right collation :-).

I just wanted to note how important it is to choose the 'right' collation when installing a production server.
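If rebuilding the instance is not an option, one workaround sketch for the temporary-table case is to declare the character columns with COLLATE DATABASE_DEFAULT, which resolves to the current database's collation rather than tempdb's:

```sql
--The temp table column now uses the user database collation, not tempdb's
CREATE TABLE #names
(
    name NVARCHAR(100) COLLATE DATABASE_DEFAULT NOT NULL
);
```

This helps when the mismatch shows up as collation conflicts in joins; note that for Hebrew text you also need NVARCHAR (or a Hebrew code page for VARCHAR) so the characters do not turn into '????'.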

Sunday, March 2, 2008

Scheduling database backup in SQL 2005 Express Edition

I recently visited a client who uses SQL Server 2005 Express Edition for his business. As you know, that edition ships without SQL Server Agent (which was included with MSDE). So the question is: how do you schedule a database backup in that case? I would like to thank Jasper Smith, who wrote these great articles

Automating Database maintenance in SQL 2005 Express Edition Part I
http://www.sqldbatips.com/showarticle.asp?ID=27

Automating Database maintenance in SQL 2005 Express Edition Part II
http://www.sqldbatips.com/showarticle.asp?ID=29


CREATE PROCEDURE usp_BackupDatabases
@dbname SYSNAME,
@backup_path VARCHAR(100)
AS
DECLARE @dt VARCHAR(20) ---date of db backup
DECLARE @FileName VARCHAR(50) ---build path and dbname
DECLARE @sql VARCHAR(100) ---build final sql statement

SET @dt =CONVERT(VARCHAR(15),GETDATE(),112)
SELECT @FileName = @backup_path+@dbname + @dt+'.bak'

SET @sql = N'BACKUP DATABASE [' + @dbname + '] TO DISK = ''' + @FileName + ''''
EXEC(@sql)


The above is a simple stored procedure that accepts two parameters and performs a BACKUP of the database. Now that we have created the stored procedure, let's put the call to it into a text file (I use userdbfullbackup.sql) and place the file on the filesystem.
usp_BackupDatabases @dbname='$(DB)',@backup_path='$(BACKUPFOLDER)'


The SQLCMD utility accepts parameters (via the -v switch), so we can easily supply them.

--Usage
EXEC master..xp_cmdshell 'SQLCMD -S URID\SQLSERVERDEV2005 -i"c:\userdbfullbackup.sql" -v DB="Northwind" -v BACKUPFOLDER="c:\"'


But wait: so far we have just written and tested a stored procedure that backs up our databases, so what about scheduling?

Windows Server/XP has the Scheduled Tasks Wizard.
You can refer to the KB article below:
http://support.microsoft.com/default.aspx?scid=kb;en-us;308569&sd=tech

Please see Jasper's explanation (I do not want to repeat it here).
http://www.sqldbatips.com/showarticle.asp?ID=27

Sunday, February 24, 2008

ISNULL changes a column to NOT allow NULLs

Hi folks. Actually, it is a pretty old trick/feature I would like to show you. Let's say we have a table called 'Test1' with one column defined as VARCHAR(10) that allows NULLs. Well, we want to change it to NOT allow NULLs, and I'm also going to change the column's datatype at the same time; just for testing, let's assume that 'Test1' does not contain alphabetic values.


CREATE TABLE Test1
(
Col1 varchar(10) NULL --allows NULLs, as described above
)
INSERT INTO Test1 VALUES('1')

EXEC sp_rename 'Test1', 'Test1_Old'
GO
SELECT ISNULL(CAST(Col1 AS int), 0) AS Col1 ---ISNULL makes Col1 NOT NULL in the new table
INTO Test1
FROM Test1_Old

DROP TABLE Test1_Old

Make sure you do not have duplicates; then you can create a constraint on the column.
/*
ALTER TABLE Test1
ADD CONSTRAINT PK_Test1
PRIMARY KEY (Col1)
*/
GO

Tuesday, February 12, 2008

Percent of completion for some operations in SS2005

Hi folks
I have seen that many businesses today have already upgraded to SQL Server 2005, and this time I would like to share with you some of the new features this product gives us.
As you know, SQL Server 2005 provides lots of dynamic management views (DMVs) that give access to the internal behaviour of processes and commands. If I remember correctly, one of the MVPs has already published something about this feature, but anyway...

Take a look at this script. Run, for example, a BACKUP DATABASE command, and in another session run the script below. It provides you with a start time column, the percent complete, and the estimated completion time (the estimated_completion_time column, in milliseconds). COOL, right?

SELECT TOP 2 start_time,
percent_complete ,estimated_completion_time
FROM sys.dm_exec_requests
ORDER BY start_time desc

Check it out with those commands

DBCC CHECKDB
DBCC SHRINKDATABASE
DBCC SHRINKFILE
BACKUP DATABASE
ROLLBACK

Monday, January 21, 2008

Tables that do not have Clustered Index

I'd like to share with you this script that returns the names of tables that have no clustered index defined on them. You may ask: why do we need to create a clustered index on a table at all? Well, I think that is a subject for another blog post; my experience is that every table should have a clustered index.

--SQL Server 2005 (SP2)

SELECT name AS object_name FROM sys.objects WHERE name NOT IN
(
SELECT o.name AS object_name
FROM sys.indexes i
JOIN sys.objects o
ON i.object_id = o.object_id
WHERE objectproperty(o.object_id,'IsMSShipped') =0
AND I.type=1 ---Clustered Index
) AND objectproperty(sys.objects.object_id,'IsMSShipped') =0 ---Only user objects
AND objectproperty(sys.objects.object_id,'IsTable') =1 ---Only tables
ORDER BY name

Tuesday, January 8, 2008

Columns_Updated() within a trigger

Happy New Year to everyone. Someone once asked me how to tell which columns are affected by an UPDATE statement. As you probably know, in SQL Server 2005 we can do this in a more elegant way, but for SQL Server 2000 I decided to do some testing.

CREATE TABLE Test (c1 INT NOT NULL PRIMARY KEY, c2 CHAR(1),c3 INT)
GO
--Insert some data
INSERT INTO Test VALUES (1,'A',100)
INSERT INTO Test VALUES (2,'B',500)

--Create a trigger
CREATE TRIGGER tr_my_trigger ON Test FOR UPDATE
AS
DECLARE @ColumnID INT
DECLARE @Columns VARCHAR(8000)
SET @Columns = SPACE(0)
SET @ColumnID = 1
WHILE @ColumnID <= (SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Test')
BEGIN
IF (SUBSTRING(COLUMNS_UPDATED(),(@ColumnID - 1) / 8 + 1, 1)) &
POWER(2, (@ColumnID - 1) % 8) =
POWER(2, (@ColumnID - 1) % 8)
SET @Columns = @Columns + CAST(@ColumnID AS VARCHAR) + ','
SET @ColumnID = @ColumnID + 1
END
PRINT 'Updated columns are :' + @Columns
GO

---Test it
UPDATE Test SET c2='D',c3=2000 WHERE c1=1
--Updated columns are :2,3
We see that the second and third columns were affected, numbered according to INFORMATION_SCHEMA.COLUMNS ordered by ORDINAL_POSITION (check it out). Reading those numbers is not comfortable, but with a little effort you can also get the column names from INFORMATION_SCHEMA.COLUMNS.
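With that little effort applied, the ordinal-to-name lookup is a one-line change inside the trigger (a sketch):

```sql
--Inside the trigger, instead of
--    SET @Columns = @Columns + CAST(@ColumnID AS VARCHAR) + ','
--resolve the ordinal against INFORMATION_SCHEMA.COLUMNS:
SELECT @Columns = @Columns + COLUMN_NAME + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Test'
  AND ORDINAL_POSITION = @ColumnID
--the PRINT then shows the column names (c2, c3) instead of ordinals
```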

I suggest you do not run this on a production server, as it may hurt performance.