Thursday 13 December 2012

SQL IO Latencies

I've spent time recently looking into the performance of virtualized SQL instances on SAN storage. These systems present a myriad of possibilities as to how the exposed drives are configured, i.e. physical disks, LUNs and virtual drives. Smart SAN solutions can move data around according to frequency of use, placing the most accessed data on the fastest storage. Virtual hosts can reallocate resources amongst guests, e.g. if a failure occurs or for a data processing window. The resources available to a VM can therefore change, possibly without the guest OS even being aware.

Anyway, Paul Randal has a script for looking at your I/O throughput as seen by SQL Server.
It's a lot quicker than using Perfmon (Performance Monitor).

How to examine IO subsystem latencies from within SQL Server
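For a quick look at the same numbers without the full script, a minimal sketch along these lines (based on sys.dm_io_virtual_file_stats, not Paul's script itself) returns the average read/write latency per database file:

SELECT
	 DB_NAME(vfs.database_id) AS DatabaseName
	,mf.physical_name
	,vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadLatencyMS
	,vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteLatencyMS
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
INNER JOIN sys.master_files mf
		ON vfs.database_id = mf.database_id
	   AND vfs.file_id = mf.file_id
ORDER BY 3 DESC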

Monday 17 September 2012

Copying large files - Fastcopy

If copying large files, you might want to consider 'how'.
It comes down to Buffered vs. Unbuffered Input/Output (I/O).

Unbuffered copy support is built in to Windows Server 2008 R2 and Windows 7 (e.g. xcopy's /J switch).

To perform unbuffered copies on an older system the following tools are useful.


FastCopy  - Download

Eseutil (Exchange install) - Download


Ref -
How to copy very large files across a slow or unreliable network






Saturday 15 September 2012

Tools - Nanozip

A great compression tool which allows you to control how many processors it uses -

c:\>nz a -p2 nanozip.nz "x:\export\*.*"
NanoZip 0.09 alpha/Win32  (C) 2008-2011 Sami Runsas  www.nanozip.net
Intel(R) Xeon(R) CPU 5110 @ 1.60GHz|21926 MHz|#4|1208/2047 MB
Archive: nanozip.nz
Threads: 4, memory: 512 MB, IO-buffers: 4+1 MB
Compressor #0: nz_optimum1 [251 MB]
Compressor #1: nz_optimum1 [251 MB]
Compressed 56 357 924 728 into 4 336 512 403 in 5h 39m 13.10s, 2704 KB/s
IO-in: 31m 53.80s, 28 MB/s. IO-out: 10.15s, 407 MB/s

Download





Friday 14 September 2012

Finding Default Column Constraints where the columns allow NULLs

Investigating a database, I wrote these queries to find some design inconsistencies.

I plan to make a version for foreign key constraints too (a rough sketch is at the end of this post).

SQL 2000 script

SELECT	  
	  u.name AS OwnerName
	, tab.name AS TableName
	, col.name AS ColumnName
	, col.isnullable
	, con.name AS DefaultName 
	, com.text AS DefaultValue
FROM sysobjects tab
INNER JOIN sysusers u ON tab.uid = u.uid
INNER JOIN syscolumns col ON col.id = tab.id
INNER JOIN sysobjects con ON con.id = col.cdefault 
       AND con.xtype = 'D'
INNER JOIN syscomments com ON com.id = con.id 
WHERE col.isnullable = 1
ORDER BY 1,2

SQL 2005+ script

SELECT 
	 Tab.name AS Tablename
	,Col.name AS Columnname
	,Col.is_nullable
	,Con.name AS DefaultName
	,[Definition] AS DefaultValue
FROM	sys.all_columns Col
INNER JOIN sys.tables Tab 
		ON Col.object_id = Tab.object_id
INNER JOIN sys.default_constraints Con
		ON Col.default_object_id = Con.object_id
WHERE col.is_nullable = 1
ORDER BY 1,2

Similar to the above, most of this can be done from the INFORMATION_SCHEMA views in SQL 2005 (with the exception of the default constraint name).

SELECT
	 TABLE_SCHEMA AS SchemaName
	,TABLE_NAME AS TableName
	,COLUMN_NAME AS ColumnName
	,IS_NULLABLE
	,COLUMN_DEFAULT AS DefaultValue
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_DEFAULT IS NOT NULL
  AND IS_NULLABLE = 'YES'
ORDER BY 1,2,3
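As for the planned foreign key version, something along these lines should be a reasonable starting point on SQL 2005+ (a sketch using sys.foreign_keys and sys.foreign_key_columns, untested here):

SELECT
	 Tab.name AS TableName
	,Col.name AS ColumnName
	,Col.is_nullable
	,FK.name AS ForeignKeyName
FROM sys.foreign_key_columns FKC
INNER JOIN sys.foreign_keys FK
		ON FKC.constraint_object_id = FK.object_id
INNER JOIN sys.tables Tab
		ON FKC.parent_object_id = Tab.object_id
INNER JOIN sys.columns Col
		ON FKC.parent_object_id = Col.object_id
	   AND FKC.parent_column_id = Col.column_id
WHERE Col.is_nullable = 1
ORDER BY 1,2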

Tuesday 7 August 2012

Collation Mismatch : I think it's one of those deja vu things

This error surfaced once again today...

Msg 468, Level 16, State 9, Server SERVER1, Line 6 Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation. 

Here is a quick query for looking at COLLATION and COMPATIBILITY LEVEL differences between a server and the databases hosted on it. I'm looking at compatibility level too because, in my case, I correctly suspected that the databases concerned had been migrated from another server.

SELECT 
  @@SERVERNAME AS ServerName
 ,SERVERPROPERTY('Collation') AS ServerCollation
 ,(10 * (CAST(LEFT(CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), CHARINDEX('.', CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), 1) - 1) AS INTEGER))) AS ServerVersion
 ,[Name] AS DBName
 ,DATABASEPROPERTYEX([Name],'Collation') AS DBCollation
 ,[cmptlevel] AS DBCompatibilityLevel
 ,CASE WHEN SERVERPROPERTY('Collation') <> DATABASEPROPERTYEX([Name],'Collation') THEN 'Mismatch' ELSE 'Match' END AS CollationSettings
 ,CASE WHEN (10 * (CAST(LEFT(CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), CHARINDEX('.', CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), 1) - 1) AS INTEGER))) <> [cmptlevel] THEN 'Mismatch' ELSE 'Match' END AS CompatibilitySettings
FROM sysdatabases

I can actually solve my specific problem by using a COLLATE clause on the join condition (as demonstrated back in 2007).

This is because I'm querying SQL system tables across databases (the master and user databases have different collations, the user databases having been migrated from another server).
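For illustration, the fix looks something like this (dbo.ClientDatabases and its column are made up for the example; COLLATE DATABASE_DEFAULT coerces the master column to the current database's collation so both sides of the join match):

SELECT s.name
FROM master.dbo.sysdatabases s
INNER JOIN dbo.ClientDatabases c	-- hypothetical table in the user database
		ON s.name COLLATE DATABASE_DEFAULT = c.DatabaseName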

I think it's one of those deja vu things - I wrote an article on Collation for SQL Server Club (just don't talk about SQL Server Club).

Wednesday 1 August 2012

Function - msdb.dbo.agent_datetime

How did I miss this function?

For those that ever went through the pain of interpreting the run_date and run_time columns stored in msdb's sysjobhistory table, this function is a godsend. It returns a DATETIME value that you can simply add the job duration to.

It appeared back in SQL 2005, and is demoed by this query ...
SELECT 
 run_date
,run_time
,msdb.dbo.agent_datetime(run_date,run_time) 
FROM msdb.dbo.sysjobhistory
ORDER BY run_date ,run_time
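A handy follow-on (a sketch, so treat as untested): sysjobhistory stores run_duration as an HHMMSS integer too, so the job end time can be derived by converting it to seconds and adding it on:

SELECT
 run_date
,run_time
,run_duration
,DATEADD(SECOND
	, (run_duration / 10000) * 3600
	+ ((run_duration / 100) % 100) * 60
	+  (run_duration % 100)
	, msdb.dbo.agent_datetime(run_date, run_time)) AS run_end_datetime
FROM msdb.dbo.sysjobhistory
ORDER BY run_date ,run_time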

Various methods exist for doing the same in SQL 2000, all involving CAST/CONVERT to manipulate the strings and adding leading zeros where they are missing.
Here is my effort ...
SELECT 
 run_date
,run_time
,STUFF(STUFF(STR(run_date, 8, 0),5,0,'-'),8,0,'-') AS run_date_dateformat
,STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR,run_time),6), 3, 0, ':'), 6, 0, ':') AS run_date_timeformat
,CAST(STUFF(STUFF(STR(run_date, 8, 0),5,0,'-'),8,0,'-') + ' ' + STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR,run_time),6), 3, 0, ':'), 6, 0, ':') AS DATETIME) AS run_datetime_complete
FROM     msdb.dbo.sysjobhistory
ORDER BY run_date ,run_time

Wednesday 13 June 2012

Attaching a database (without a log file - .LDF)

Had to do this with the AdventureWorks 2012 download.
ATTACH_REBUILD_LOG is the option here...

CREATE DATABASE [AdventureWorksDW2012] ON 
( FILENAME = N'E:\SQL2012\AdventureWorksDW2012_Data.mdf' )
 FOR ATTACH_REBUILD_LOG
GO

Thursday 5 April 2012

Reading & Writing to the Registry from SQL Server

Fetching data from the registry - 

DECLARE @path NVARCHAR(4000)

EXECUTE [master].[dbo].[xp_instance_regread]
   N'HKEY_LOCAL_MACHINE'
 , N'Software\Microsoft\MSSQLServer\MSSQLServer'
 , N'BackupDirectory'
 , @path OUTPUT
 , 'no_output'
SELECT @path AS DefaultBackupDirectory

Result - C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup 

Writing data to the registry -

DECLARE @path NVARCHAR(4000)
SET @path ='C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup'

EXECUTE [master].[sys].[xp_instance_regwrite]
    N'HKEY_LOCAL_MACHINE'
  , N'Software\Microsoft\MSSQLServer\MSSQLServer'
  , N'BackupDirectory'
  , N'REG_SZ'
  , @path;

2nd example, using named parameters -

DECLARE @path NVARCHAR(4000)
SET @path ='C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup' 
 
EXECUTE [master].[sys].[xp_instance_regwrite]
  @rootkey = N'HKEY_LOCAL_MACHINE'
 ,@key = N'Software\Microsoft\MSSQLServer\MSSQLServer'
 ,@value_name = N'BackupDirectory'
 ,@type = N'REG_SZ'
 ,@value = @path;

Friday 30 March 2012

Video Notes : Let’s talk about joins (Join internals)

I'm beginning to watch the free 30 minute training videos on Brent Ozar's site (now Brent Ozar PLF).

Today's was entitled Let’s talk about joins by Jeremiah Peschka and served as a refresher on how SQL Server processes joins. Obviously watch it yourself; I'm placing the following notes here to remind myself what they are all about...
 
Nested Loop Join

Reads every row from each table
Fastest for small tables
Performance governed by number of rows, gets slower as row counts increase.
 
Merge Join

Both tables must be sorted (by the join key) for this to occur.
Performance governed by number of pages retrieved.
To optimise, look at indexes, memory & disk speed.
 
Hash Joins

Use hash functions
Hash function applied to the join key (turning variable length keys into fixed length ones)
Internally SQL Server works on small ‘buckets’ of data when comparing rows for joins.
Best for large tables, though tempdb is used if the hash doesn't fit in memory and spills to disk.
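To see each of these operators in action, the physical join type can be forced with a query-level hint and the resulting plans compared. A quick sketch (dbo.Orders and dbo.Customers are made-up tables for illustration):

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders o	-- hypothetical tables
INNER JOIN dbo.Customers c
		ON o.CustomerID = c.CustomerID
OPTION (HASH JOIN)	-- swap for LOOP JOIN or MERGE JOIN and compare the execution plans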

Monday 19 March 2012

Video Notes : How to succeed in database development without really trying

How to succeed in database development without really trying is another video put together by Jeremiah Peschka. Here are my notes -

Database Developer -
‘A database developer recognizes that the database is more than an implementation detail’
Thinking about Data
  • Sets not rows – Perform operations on sets, not row by row (see the sketch after this list)
  • Think like you dress: in layers – use views, procedures and functions to separate logic & functionality from data
  • Normalize – Use a balance – 'Normalise till it hurts, denormalise till it works'
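As an illustration of the 'sets not rows' point, here is the same price adjustment done row by row with a cursor versus in a single set-based statement (dbo.Products is a made-up table):

-- Row by row: the pattern to avoid
DECLARE @id INT
DECLARE product_cur CURSOR FOR
	SELECT ProductID FROM dbo.Products WHERE Discontinued = 0
OPEN product_cur
FETCH NEXT FROM product_cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
	UPDATE dbo.Products SET Price = Price * 1.10 WHERE ProductID = @id
	FETCH NEXT FROM product_cur INTO @id
END
CLOSE product_cur
DEALLOCATE product_cur

-- Set-based equivalent: one statement
UPDATE dbo.Products SET Price = Price * 1.10 WHERE Discontinued = 0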

Specializations

Decide if you want to be a Generalist or a Specialist.
Some people specialise in a Language, Tool, Feature

Database Specializations
  • Performance Tuning – read execution plans, understand what is going on under the covers and turn it into actionable results: I/O, memory, indexing etc.
  • Reporting – reporting queries are written differently for large data volumes
  • Object Relational Modelling – optimizing frameworks and ORM tools - NHibernate etc.
  • Modeling – planning

Core Skills
  • Be able to spot patterns – e.g. row rather than set based processing
  • Be lazy – make sure your SQL updates the fewest rows possible and the database does the least work possible. Use referential integrity, foreign keys and constraints for data integrity, and calculated columns. Make sure business logic is in the ORM.
  • Think about the long term – data choices for storage
  • Understand normalization – Join decisions
  • ETL Basics – Transferring & Processing data
  • Architecture – Application & Data access patterns
 
Hints - How do you get there?

  • Practice
  • Consult DBAs!
  • Reading

Recommended Books


Friday 16 March 2012

LINK : SQL 2012 - New Certification Info


James Serra is first past the post with an easily digestible piece on the SQL 2012 certifications.
I'm happy to see there is an upgrade path ...

SQL Server 2012 : New Certification Info

Friday 2 March 2012

Not what you want to see...


It's bad enough when your client is on SQL 2000, without this error too!

(No, I never did get to the bottom of who 'tampered' or why it was like this)

Wednesday 29 February 2012

Bookmark : Leaking Money

Does your group spend too much or too little? is a great read, demonstrating the 'hidden cost of junk'.

It can equally be applied to software purchases, developing software, developing databases, buying hardware etc...




Thursday 9 February 2012

DOSsing around

Having just finished a command line scripting system, here are the bookmarks/techniques I used.
I can't share it here, but I'm quite proud of the finished result, which allows me to recursively execute folders of .sql files and update a central table recording which patches have been applied (a sketch of such a table is below).
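For reference, the central patch table needn't be anything more complicated than this (a hypothetical sketch, not the actual system's table):

CREATE TABLE dbo.PatchHistory
(
	 PatchID	INT IDENTITY(1,1) NOT NULL PRIMARY KEY
	,ScriptName	NVARCHAR(260) NOT NULL				-- file name of the .sql patch applied
	,AppliedOn	DATETIME NOT NULL DEFAULT GETDATE()
	,AppliedBy	SYSNAME NOT NULL DEFAULT SUSER_SNAME()
)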

DOS Commands & Batch files
 
A little DOS for a poor DBA

DOS String Manipulation

DOS FOR command

  
File name parsing in batch file and more idioms

Loop files in a directory with a batch file


Executing a folder of sql scripts

for /r . %f in (*.sql) do @echo %f

for %i in (*.*) do echo %~ni

NB : You need to change "%i" to "%%i" for use inside a batch file.


Adding SQL ...

osql is the command line tool from SQL 2000. It was replaced by SQLCMD in SQL 2005+ and is scheduled for deprecation, but it is still present.

osql -S{server} -d{database} -U{user} -P{password} -f 65001 -n -i{filename} -b 

See Solace : OSQL & SQLCMD 



Wednesday 1 February 2012

Running XP in Virtualbox

To run Windows XP efficiently on Virtualbox...
  • Use 1 core
  • Disable PAE/NX
  • Disable IO APIC
  • Disable EFI
  • Use SATA storage for the drives (you'll need to install drivers for this)
  • Disable VT-x/AMD-V
  • Enable Nested Paging

(The last two appear dependent on whether the host supports them.)

Ref : XP Speed Tips

Bookmark : Burnout & Balance

2 links from TechRepublic today -

Firstly - 10 things IT pros do that lead to burnout

How many of these can you relate to? I'd bet on at least 3 of them!


An earlier post has some suggestions to balance the situation however -

10 things you can do to keep your IT job from taking over your life

(It doesn't say not to blog past midnight though)

Tuesday 31 January 2012

ORACLE : Sqldeveloper - MSVCR71.DLL was not found

Attempting to start SQLDeveloper on my XP desktop today, I was faced with this error: 'This application failed to start because MSVCR71.dll was not found'.


I had already installed the Java VM, so some searching led me to Michel Belor's post on how to resolve the error. His solution is linked below and involves adding a couple of entries to the registry.

SQLDeveloper : MSVCR71.DLL not found error

Monday 30 January 2012

Windows 7 : Disable Windows Search

Disabling Windows Search may be a little off topic for a SQL blog, but I have found it to be worthwhile when creating a Windows 7 virtual machine. The Search service constantly indexes the drives, which puts quite an overhead on the virtual machine.



Thursday 19 January 2012

Removing the RDP Client Server History List

Is easily accomplished via Regedit.

Delete entries from this key to achieve this >

HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default

Wednesday 4 January 2012

Selectively Clearing the Plan Cache - FLUSHPROCINDB

DBCC FREEPROCCACHE clears the plan cache for the whole instance.
If you have databases from different applications/vendors on the same server, however, you might need to do so for just one database. That is where the (undocumented) DBCC FLUSHPROCINDB comes in.

1) Use DB_ID() to get the database ID for the current database.
2) Use that as a parameter for FLUSHPROCINDB 


SELECT DB_ID()

DBCC FLUSHPROCINDB(10)
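Since FLUSHPROCINDB is undocumented, a documented alternative on SQL 2008+ is to free a database's cached plans individually by plan handle. A rough sketch of finding them (untested here):

-- List the cached plan handles belonging to one database...
SELECT cp.plan_handle, cp.usecounts, cp.objtype
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'dbid'
  AND CAST(pa.value AS INT) = DB_ID()	-- or DB_ID('SomeDatabase')

-- ...then each one can be freed with the documented form:
-- DBCC FREEPROCCACHE (plan_handle)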

Tuesday 3 January 2012

2011

In common with a lot of technical bloggers I have been publicly setting and reviewing goals over the past few years. I find that having long term goals outside of the workplace keeps me focused and increases the breadth of my skill set.

That is all very well and has worked in the past for me. In 2011 however I fell short of achieving all of the tasks I set myself. Anyway, here is my autopsy of my goal list from last year.

A new role

I achieved this goal and started in April 2011. The role has been challenging in terms of the volume of work and has forced me to up my game re: communication skills. Technically however, the technologies are old and simple issues are repeated across geographically separate client sites. In terms of skills I have still been able to utilize a mixture of administration and development.

Blogging

My output to this site has been less than half that of previous years and has occurred in bursts rather than the usual trickle. I found myself constantly returning to old posts and scripts last year, so my efforts have paid dividends. I need to index the site better however, so that will be on this year's list! Rather bizarrely a lot of content has been focused on SQL 2000, which I thought I had long seen the back of! The client is always right (or skint) however...

Community

I did manage to publish more scripts although only on my site (no further SSC contributions this year). As for publishing articles (to other sites) I totally failed on this goal.
In terms of SQL Server events I managed to attend 3 in 2011. In April there was SQLBits 8 in Brighton and September saw me attend (and help out at) SQLBits 9 in Liverpool.
The 3rd event I attended was Gavin Payne's October SQL Server in the Evening event where I made my speaking debut. In line with recent experiences I presented my approach to auditing SQL Server systems. Public speaking wasn't in the plan but I am grateful for the opportunity to conquer a demon.

Learning

I am a little disappointed not to have had the chance for a major project that requires SSAS, SSIS or .NET development so these will remain on my list. In terms of reading, my book backlog is not being helped by the volume of quality SQL content the community is producing. Time has not been on my side of late and my reading list is being joined by a viewing list of awesome free training videos!