Sunday, December 20, 2009

Using SQL_VARIANT_PROPERTY() to find info about a data type

BOL says that this function returns the base data type and other information about a sql_variant value. Have you ever looked at what the data type of GETDATE() is, or perhaps DB_ID(), or what the data type of SYSTEM_USER is?

DECLARE @var sql_variant;
SET @var = GETDATE();  -- try SYSTEM_USER or DB_ID() here as well
SELECT SQL_VARIANT_PROPERTY(@var , 'BaseType') as BaseType
, SQL_VARIANT_PROPERTY(@var , 'Precision') as Precision
, SQL_VARIANT_PROPERTY(@var , 'Scale') as Scale
, SQL_VARIANT_PROPERTY(@var , 'TotalBytes') as TotalBytes
, SQL_VARIANT_PROPERTY(@var , 'MaxLength') as MaxLength
, SQL_VARIANT_PROPERTY(@var , 'Collation') as Collation;

I find this very useful, as you can easily return basic type information even for system functions and see how SQL Server interprets them.

Wednesday, December 9, 2009

Think before you use ALTER TABLE ....

Say you have been told to add a computed column to a very huge table in SQL Server. If you have been working with SQL Server for a while, you are probably aware that you are better off using the ALTER TABLE command instead of SSMS, because under some circumstances SSMS will copy the entire table into a temporary one and then rename it, and while that happens any users working on the table will be locked out... So far so good. Now, what if you have been told to add the computed column with the PERSISTED clause, which means SQL Server will physically store the values of the column? Here, stop and think a little, especially if you deal with huge tables: SQL Server is going to update every row in the table with the computed value based on the formula you provide. How long will that take? I have seen adding a PERSISTED column with ALTER TABLE on a table with 230 million rows take 19 hours on a very powerful server. Can you afford that?
So what is the solution? Add a regular permanent column and then run an UPDATE, but do not forget to divide the UPDATE into small batches (of 10,000 rows each, for example), so that you control how many rows are updated per transaction and your LOG file does not blow up.
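The batching idea can be sketched like this (a minimal sketch; the table, column, and formula names are hypothetical):

```sql
-- Hypothetical example: backfill a new column in 10,000-row batches
-- to keep each transaction short and the log file small.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET NewCol = Price * Qty          -- your formula here
    WHERE NewCol IS NULL;             -- only rows not yet touched

    SET @rows = @@ROWCOUNT;           -- 0 when nothing is left to update

    -- In SIMPLE recovery the log can truncate between batches;
    -- in FULL recovery, keep taking log backups while this runs.
END
```

The WHERE NewCol IS NULL filter is what lets each batch pick up where the previous one left off; if NULL is a legal final value for your formula, use a separate flag or key-range boundaries instead.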

Sunday, November 8, 2009

Consistency checking for a 1.2 TB database

I would like to share my recent experience of building consistency checking for a VLDB using DBCC CHECKDB/CHECKTABLE. I was following Paul's great article
to build the same, or almost the same, strategy at the client's site. They have 8 huge tables of around 658,245,225 pages each, and I tested the biggest one with the DBCC CHECKTABLE command. It took 17 hours to complete, which unfortunately was not acceptable for the client. So the final solution (thanks to Paul) is to restore the database on a dev machine and perform the checking there.
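The restore-and-check approach can be sketched like this (database name, backup path, and file locations are hypothetical):

```sql
-- On the dev machine: restore the latest production backup, then run
-- the checks there so the production server is not loaded for hours.
RESTORE DATABASE BigDB
FROM DISK = N'\\backupshare\BigDB_full.bak'
WITH MOVE 'BigDB_Data' TO N'D:\Data\BigDB.mdf',
     MOVE 'BigDB_Log'  TO N'E:\Log\BigDB.ldf',
     REPLACE;

DBCC CHECKDB (BigDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

If the restored copy checks out clean, the backup is consistent, and therefore so was the source database at the time the backup was taken.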

Sunday, October 18, 2009

Import data from AS400 by using SSIS

That was a deviation towards the world of BI. I was helping my co-worker build an SSIS package to import data from an AS400 into SQL Server.
At first I selected the OLE DB Source component to get the data, but started getting odd driver-compatibility errors. Then I read that there is a newish driver called DB2OLEDB, which unfortunately works only on Enterprise/Developer Editions, and I had only Standard...

After some investigation I solved the problem by choosing the DataReader Source component with the ODBC .Net data provider. It probably looks obvious to experienced BI consultants, but for someone like me who rarely uses SSIS, it felt great when my co-worker was so happy to get all the data :-)

Monday, October 5, 2009

Does fragmentation have an impact on performance for heaps?

I have been working on a project that involves very huge tables. Most of the tables are heaps, and to reduce IO we decided to compress them at the PAGE level, because the database is READ-ONLY. Fortunately, in SQL Server 2008 we can issue ALTER TABLE ... REBUILD to rebuild the heaps and remove forwarding pointers; please read Tibor's article on the subject.

Well, after running the scripts we ended up with almost 500 GB of empty space. Usually it is not good to shrink the database, as you gain nothing (it will grow again) and the shrink itself causes fragmentation, which does not help in terms of performance. Shrinking a VLDB is another topic, but what I did NOT know is this: if your queries are doing an INDEX SEEK, SQL Server can get to any row easily no matter how fragmented the table is.
In that case fragmentation has no impact on performance.
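The rebuild step can be sketched like this (the table name is hypothetical):

```sql
-- Rebuild the heap: removes forwarding pointers and applies
-- PAGE compression in one operation (SQL Server 2008+).
ALTER TABLE dbo.BigHeap REBUILD
WITH (DATA_COMPRESSION = PAGE);

-- Check fragmentation and forwarded records before/after the rebuild
-- (index_id 0 = the heap; DETAILED mode reports forwarded records):
SELECT avg_fragmentation_in_percent, forwarded_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.BigHeap'), 0, NULL, 'DETAILED');
```

Running the DMV query before and after is a cheap way to confirm the rebuild actually removed the forwarding pointers.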

If you think about it, it makes perfect sense. Fragmentation is a measure of how closely the logical order matches the physical order. So if you do a scan (following the logical order) on a fragmented table, you can be jumping all over the disk to get the rows you need, instead of accessing them nice and neat and contiguously. A seek, on the other hand, navigates directly to the rows it wants, so the physical order does not matter.

Many thanks to Kalen Delaney.

Wednesday, September 2, 2009

UNION ALL within a view

I have a client who has views that contain UNION ALL over a number of very large tables. The logic is to apply a WHERE condition to the view to get the data. Even with indexes on those tables, SQL Server applies the WHERE condition to each SELECT statement within the view, which may lead to performance problems. To demonstrate, consider the AdventureWorks database and the two tables Sales.SalesOrderDetail and Sales.SalesOrderHeader:

CREATE VIEW v1
AS
SELECT S.SalesOrderDetailID
     , S.SalesOrderID
FROM Sales.SalesOrderDetail S
UNION ALL
SELECT S.SalesOrderID
     , S.CreditCardID FROM
Sales.SalesOrderHeader S;
GO

SELECT TOP 100 * FROM v1 WHERE SalesOrderDetailID > 40000 AND SalesOrderDetailID < 45000;

Table 'SalesOrderDetail'. Scan count 3, logical reads 1359, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'SalesOrderHeader'. Scan count 1, logical reads 34, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

You can see that SQL Server 'touched' both tables, using a Clustered Index Scan on
Sales.SalesOrderDetail and a Clustered Index Seek on Sales.SalesOrderHeader. I have really simplified the logic here; in reality the query performed horribly. How to improve it?

I created a multi-statement table-valued UDF that accepts a parameter:

CREATE FUNCTION dbo.fn_Orders (@SalesOrderID INT)  -- function name is illustrative
RETURNS @orders TABLE (SalesOrderID INT, DetailOrCardID INT)
AS
BEGIN
    INSERT INTO @orders
    SELECT S.SalesOrderID, S.SalesOrderDetailID
    FROM Sales.SalesOrderDetail S
    WHERE SalesOrderID > @SalesOrderID AND SalesOrderID < @SalesOrderID + 5000
    UNION ALL
    SELECT S.SalesOrderID, S.CreditCardID
    FROM Sales.SalesOrderHeader S
    WHERE SalesOrderID > @SalesOrderID AND SalesOrderID < @SalesOrderID + 5000;
    RETURN;
END
Table '#74AE54BC'. Scan count 1, logical reads 12, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

See, the WHERE condition is applied inside the UDF to every statement, since the UDF must receive its parameter before it runs, so each branch of the UNION ALL is filtered. This is another argument for using table-valued UDFs...
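As a hedged aside: an inline table-valued function can achieve the same per-branch filtering without materializing a table variable, and the optimizer can expand it into the calling query like a parameterized view. A minimal sketch (function and column names are illustrative):

```sql
-- Inline TVF: no intermediate table variable, and the optimizer
-- merges the definition into the calling query.
CREATE FUNCTION dbo.fn_OrdersInline (@SalesOrderID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT S.SalesOrderID, S.SalesOrderDetailID AS DetailOrCardID
    FROM Sales.SalesOrderDetail S
    WHERE SalesOrderID > @SalesOrderID AND SalesOrderID < @SalesOrderID + 5000
    UNION ALL
    SELECT S.SalesOrderID, S.CreditCardID
    FROM Sales.SalesOrderHeader S
    WHERE SalesOrderID > @SalesOrderID AND SalesOrderID < @SalesOrderID + 5000
);
GO

SELECT TOP 100 * FROM dbo.fn_OrdersInline(40000);
```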

Sunday, August 9, 2009

SQL Server 2005/2008 on Windows 7

SQL Server MVP John Paul Cook has already mentioned that SQL Server 2008 installs on Windows 7 and works just fine. I would like to add that SQL Server 2005 also works just fine alongside SQL Server 2008.
Having said that, many people have told me that Reporting Services does not work on W7. Well, the phrase 'does not work' does not say much without additional info, so I decided to do some testing and found that SSRS works just fine as well; the problem could be that the SSRS/SQL Server account does not have enough permissions to run SSRS. Moreover, I took old reports which were deployed to our dedicated SSRS server (Windows 2003), redeployed them to Windows 7, and that also worked just fine. I'm really happy with Windows 7, which is faster than Vista and more comfortable than XP.

Wednesday, July 8, 2009

Report on Indexes on FKs

This question was asked in the newsgroups yesterday. I think it is a good exercise, so I took a script written by Aaron Bertrand that gets the column usage from both sides (parent/child) and modified it a little to return the child tables/columns that do not have an index:

;WITH fk_no_indexes AS
(
    SELECT
        [constraint_name] = f.[name],
        [child_table] = OBJECT_NAME(fkc.parent_object_id),
        [child_table_id] = fkc.parent_object_id,
        [child_column] = cc.[name],
        [child_column_id] = cc.column_id,
        [parent_table] = OBJECT_NAME(f.referenced_object_id),
        [parent_column] = pc.[name]
    FROM sys.foreign_keys f
    INNER JOIN
    (
        SELECT c.[object_id], c.column_id, c.[name], ic.index_id
        FROM sys.columns c INNER JOIN sys.index_columns ic
            ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
    ) AS pc
        ON f.referenced_object_id = pc.[object_id]
       AND f.key_index_id = pc.index_id
    INNER JOIN sys.foreign_key_columns fkc
        ON f.[object_id] = fkc.constraint_object_id
       AND pc.[object_id] = fkc.referenced_object_id
       AND fkc.referenced_column_id = pc.column_id
    INNER JOIN sys.columns cc
        ON fkc.parent_object_id = cc.[object_id]
       AND fkc.parent_column_id = cc.column_id
)
SELECT [constraint_name], [child_table], [child_column], [parent_table]
FROM fk_no_indexes
WHERE NOT EXISTS
(
    SELECT * FROM sys.index_columns i
    WHERE i.[object_id] = [child_table_id]
      AND i.column_id = [child_column_id]
);

Monday, June 15, 2009

Have you applied the latest service pack to the client tools?

Well, I have seen a huge number of developers who have not heard about this at all. Yes, I'm serious. Recently I helped tune SQL Server performance for a database at a pretty big and well-respected company with more than 10 developers. One day one of them asked me about a query which returned data without error on his machine but threw an error on the server. If I remember well, it was a conversion error. I checked both queries, and indeed on the dev machine it ran without error.

There is no magic here: as you can imagine, the dev machine had NO service pack installed at all, but the production SQL Server 2000 had the latest service pack (SP4). That's the whole story. That 'little' difference can turn into a big error without testing, since each developer had installed the server and client tools on their own machine. The SQL Server query optimizer is free to move or change expressions with each new service pack or hotfix, so please make sure that you have the latest service pack on your client tools and retest your queries as well.

Thursday, May 14, 2009

How to upload/modify more than one BLOB column in a single statement

Recently I visited a client who has a table with two columns defined as VARBINARY(MAX) to store images. How can we insert/update these columns in a single statement? I have not used blobs much, so I ended up with the following:


---Single INSERT statement

INSERT INTO Blobs (b1, b2)
SELECT
    (SELECT BulkColumn FROM
        Openrowset(Bulk 'N:\OnePicture.JPG', SINGLE_BLOB) AS d1),
    (SELECT BulkColumn FROM
        Openrowset(Bulk 'N:\AnotherPicture.JPG', SINGLE_BLOB) AS d2);

---Single UPDATE statement

UPDATE Blobs
SET b2 = (SELECT BulkColumn FROM
        Openrowset(Bulk 'N:\AnotherPicture.JPG', SINGLE_BLOB) AS d2),
    b1 = (SELECT BulkColumn FROM
        Openrowset(Bulk 'N:\OnePicture.JPG', SINGLE_BLOB) AS d1)
---WHERE condition put here

Wednesday, May 6, 2009

My wife gave birth to our third child on 4 May!

A son, 3 kg 370 grams!

We have replaced ourselves and are now done.

Sunday, April 19, 2009


The ASYNC_NETWORK_IO wait type

Last time I visited our client, someone showed me a process which had been in a suspended state for a long time. Using Adam's great script, I identified a SELECT statement with the ASYNC_NETWORK_IO wait type. Microsoft says that this wait occurs on network writes when the task is blocked behind the network, and that we need to verify the client is processing data from SQL Server. The problem was that the SELECT statement retrieved more than 50,000 rows, the application then manipulated that data in a FOR...NEXT loop, and in the end the client saw only 200 rows. Thanks to SQL Server 2005, we changed the statement to accept a parameter for the TOP clause, and this wait type was gone. We did not see a great improvement in the application (nor did we expect one), but it is another way to identify poorly written queries.
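The parameterized TOP change can be sketched like this (the table, column, and parameter names are hypothetical):

```sql
-- SQL Server 2005+ allows an expression in TOP, so the client can
-- request only the rows it will actually display.
DECLARE @rows int = 200;

SELECT TOP (@rows) *
FROM dbo.Orders            -- hypothetical table
ORDER BY OrderDate DESC;
```

Instead of pulling 50,000 rows across the network and throwing most of them away on the client, only the rows the user will actually see leave the server.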

Wednesday, April 1, 2009

MVP Award for 2009

I've been given the MVP award for another year. I hope you have found this blog and my posts in the public forums useful, and that they have helped you resolve some problems.

Thursday, March 5, 2009

Which shared drives are used by a SQL Server instance?

SQL Server 2005 has a great dynamic management view named sys.dm_io_cluster_shared_drives. If you are running a clustered SQL Server instance and want to know which shared disks it uses, run the following query:

SELECT DriveName
FROM sys.dm_io_cluster_shared_drives
ORDER BY DriveName

Tuesday, March 3, 2009

Running DTS without SQL Server?

Seems like an unreasonable request. Well, we had a client who wanted to launch a DTS package that runs on another computer by clicking an icon on his desktop. I'd like to share the script that a friend of mine created on a workstation that did not have SQL Server installed:

MD "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"

COPY sqlresld.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"

COPY dtsffile.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"
COPY dtsffile.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"
regsvr32.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn\dtsffile.dll"

COPY dtspkg.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"
COPY dtspkg.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"
regsvr32.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn\dtspkg.dll"

COPY dtspump.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"
COPY dtspump.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"
regsvr32.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn\dtspump.dll"

COPY axscphst.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"
COPY axscphst.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"
regsvr32.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn\axscphst.dll"

COPY dtsrun.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn"

COPY dtsrun.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"

COPY custtask.dll "\Program Files\Microsoft SQL Server\80\Tools\Binn"
COPY custtask.rll "\Program Files\Microsoft SQL Server\80\Tools\Binn\Resources\1033"
regsvr32.exe "\Program Files\Microsoft SQL Server\80\Tools\Binn\custtask.dll"

COPY sqlunirl.dll %SYSTEMROOT%\system32

"\Program Files\Microsoft SQL Server\80\Tools\Binn\dtsrun.exe" /FProcessAppSalesCube.dts


Wednesday, February 4, 2009

How to render a report to PDF/Excel format

I had a client who wanted to get his report directly as a PDF file. Previously, he used a Report Viewer component in the application and then exported to the appropriate format.
In this example I render my report to a PDF file located on drive C:\.
The report also accepts one parameter. The key to the module is the ReportExecutionService class, which contains the methods we need to use.

Please refer to the BOL


Imports Microsoft.SqlServer.ReportingServices2005

Private Sub CreatePdfFile()
Dim Res As New Execution.ReportExecutionService
Dim params(0) As Execution.ParameterValue
Res.Credentials = System.Net.CredentialCache.DefaultCredentials
Res.Url = "http://servername/ReportServer/ReportExecution2005.asmx?wsdl"
params(0) = New Execution.ParameterValue
params(0).Name = "ParamName"
params(0).Value = Value

Dim ReportPath As String = "/Reports/ReportName"

Res.LoadReport(ReportPath, Nothing)
Dim Format As String = "PDF"
Dim devInfo As String = Nothing
Dim extension As String = Nothing
Dim mimeType As String = Nothing
Dim encoding As String = Nothing
Dim warnings() As Execution.Warning = Nothing
Dim streamIds() As String = Nothing
Dim result() As Byte = Nothing
Res.SetExecutionParameters(params, "en-us") ' ParameterLanguage

result = Res.Render(Format, devInfo, extension, mimeType, encoding, warnings, streamIds)
IO.File.WriteAllBytes("C:\Report.pdf", result)
End Sub

Thursday, January 8, 2009

Executing multiple script files via SQLCMD

I'm sure most people, whether DBAs or developers, have at least once faced the requirement to execute a bunch of script files. As you know, we can do it with a Script Task in SSIS, but I would like to share an infrequently used feature: the :r command in the SQLCMD utility. Say you have three .sql files that run an UPDATE, an INSERT, and finally a SELECT statement. All you need is a single driver script which executes those .sql files.
Create a file named DML.sql and add the code below:
:r "c:\myUpdate.sql"

:r "c:\myInsert.sql"

:r "c:\MySelect.sql"

Make sure you leave a blank line between the commands. Then execute the driver file, for example via xp_cmdshell:
EXEC master..xp_cmdshell 'SQLCMD -S Server\SQLSERVERDEV2005 -i"c:\DML.sql"'

I recommend reading this article for more examples of how to use the :r command.