Audit databases – Using DDL triggers

Posted by Deepak Kumar | Posted in Scripts, SQL DBA, Uncategorized | Posted on 05-06-2010

As a DBA, do you ever wonder what is going on inside your production database? Who is changing the database schema in production without informing you? You no longer need to worry: you can set up a DDL trigger to record every schema change in a table, including what was changed, when, and by whom. Below are the two scripts you need. You can also build customized e-mail alerts on top of the audit table so you are notified as soon as an event occurs.

First, create a table to store the audit records; in this example it lives in the msdb database. Then create a DDL trigger inside the database you want to audit. The trigger captures all DDL events and saves them to the table. Finally, test the process by creating a few throwaway procedures and tables.

1. Script to create table

USE [msdb] 
GO 
SET ANSI_NULLS ON 
GO 
SET QUOTED_IDENTIFIER ON 
GO 
SET ANSI_PADDING ON 
GO 
CREATE TABLE [dbo].[DatabaseChangeLog]( 
    [LogId] [int] IDENTITY(1,1) NOT NULL, 
    [DatabaseName] [varchar](256) NOT NULL, 
    [EventType] [varchar](50) NOT NULL, 
    [ObjectName] [varchar](256) NOT NULL, 
    [ObjectType] [varchar](25) NOT NULL, 
    [SqlCommand] [varchar](max) NOT NULL, 
    [EventDate] [datetime] NOT NULL CONSTRAINT [DF_DBChangeLog_EventDate]  DEFAULT (getdate()), 
    [LoginName] [varchar](256) NOT NULL 
) ON [PRIMARY] 
GO 
SET ANSI_PADDING OFF 
 


 
2. Script to create database trigger

USE [SQLKnowledge] 
GO 
SET ANSI_NULLS ON 
GO 
SET QUOTED_IDENTIFIER ON 
GO 
CREATE TRIGGER [TrgDBChangeLog] 
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
AS 
SET NOCOUNT ON 
-- Log only if the audit table exists in msdb
IF OBJECT_ID('msdb.dbo.DatabaseChangeLog') IS NOT NULL 
BEGIN 
    DECLARE @data xml 
    SET @data = EVENTDATA()   -- XML document describing the DDL event 
    INSERT INTO msdb.dbo.DatabaseChangeLog (DatabaseName, EventType, ObjectName, ObjectType, SqlCommand, LoginName) 
    VALUES ( 
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)'), 
        @data.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(50)'), 
        @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'), 
        @data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(25)'), 
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'varchar(max)'), 
        @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)') 
    ) 
END 
GO 
SET ANSI_NULLS OFF 
GO 
SET QUOTED_IDENTIFIER OFF 
GO 
ENABLE TRIGGER [TrgDBChangeLog] ON DATABASE 

 


3. Script to test process and check Audit records

-- Test table creation 
USE SQLKnowledge 
GO 
CREATE TABLE test (fname varchar(10), lname varchar(10)) 
GO 

-- Test procedure creation (CREATE PROC must be the only statement in its batch) 
CREATE PROC test1 
AS 
SELECT * FROM msdb.dbo.DatabaseChangeLog 
GO 

-- Check the audit records captured in msdb 
SELECT * FROM msdb.dbo.DatabaseChangeLog 
 

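As mentioned above, you can also build customized e-mail alerts on top of the audit table. Below is a minimal sketch that mails a summary of the DDL changes logged in the last hour; it assumes Database Mail (SQL 2005+) is configured, and the profile name, recipient address, and one-hour window are only placeholder choices. You could schedule it as a SQL Agent job, or call sp_send_dbmail from the trigger itself if you prefer per-event alerts.

-- Alert sketch: mail a summary of DDL changes logged in the last hour 
-- (assumes Database Mail is configured; profile and recipient are placeholders) 
DECLARE @body nvarchar(max) 

SELECT @body = COALESCE(@body + CHAR(13) + CHAR(10), '') 
             + CONVERT(varchar(20), EventDate, 120) + '  ' 
             + LoginName + '  ' + EventType + '  ' + ObjectName 
FROM msdb.dbo.DatabaseChangeLog 
WHERE EventDate > DATEADD(hour, -1, GETDATE()) 

IF @body IS NOT NULL 
    EXEC msdb.dbo.sp_send_dbmail 
         @profile_name = 'DBAAlerts',             -- placeholder profile name 
         @recipients   = 'dba-team@example.com',  -- placeholder address 
         @subject      = 'DDL changes in the last hour', 
         @body         = @body 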

Please leave a comment if you have questions or suggestions.

Database Log Shipping Issues & Considerations!

Posted by Deepak Kumar | Posted in SQL DBA, Uncategorized | Posted on 04-06-2010

While working with SQL Server log shipping, a DBA must consider the following points to keep the setup running smoothly over a long period of time. The log shipping architecture looks like this:

(Diagram: log shipping architecture)

  • Transaction log backup and restore jobs should run continuously and stay in sync. If the backup, copy, or restore jobs fall out of step because of a network, server, or other issue, the log shipping setup will break and you may need to set it up again or restore a fresh full backup. A DBA should add a good number of alerts to be notified as soon as a transaction log backup, file copy, or restore job fails, and act immediately to resolve it (see the monitoring sketch after this list).
  • It is highly recommended to use a similar disk layout (drive letters and sizes) on both servers. If your database log file grows to 100 GB on the primary server, the secondary server needs 100 GB of capacity as well. Similarly, if you add new data or log files to the primary database, the same paths should exist on the secondary server, or you may need to restore the log backup manually with the MOVE option.
  • The primary database can produce huge transaction log backups during ALTER INDEX / DBCC DBREINDEX maintenance, which can be hard to move to the secondary server(s). To mitigate the delay and risk, you may choose to buy third-party products with log shipping features, such as Red Gate or Idera.
  • Use a customized script to transfer SQL Server logins to the secondary server(s).
  • Log shipping failover is not fully automatic; manual DBA intervention is expected to recover the databases during a failure.
  • Log shipping is a good option for SQL Server 2000; if you are on SQL Server 2005 or 2008, consider database mirroring instead, which improves on log shipping and offers a hot standby.
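As noted in the first point, timely alerting matters. Here is a minimal monitoring sketch (SQL 2005/2008), run on the secondary server, that flags databases falling behind; the msdb log shipping monitor table is standard, but the 60-minute threshold is only an example to tune against your own SLA:

-- Monitoring sketch: secondary databases whose last restore is older than 60 minutes 
SELECT  secondary_database, 
        last_copied_file, 
        last_copied_date, 
        last_restored_file, 
        last_restored_date, 
        DATEDIFF(minute, last_restored_date, GETDATE()) AS minutes_behind 
FROM    msdb.dbo.log_shipping_monitor_secondary 
WHERE   DATEDIFF(minute, last_restored_date, GETDATE()) > 60   -- example threshold 

Wire a query like this into a SQL Agent job or alert so copy/restore failures surface before users notice stale data on the secondary.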

Join us on LinkedIn

Posted by Deepak Kumar | Posted in SQL DBA | Posted on 28-05-2010

It gives me immense pleasure to write that our SQLDBA group on LinkedIn has crossed the 10,000-member mark. This is a milestone, as it is the largest group on LinkedIn for Microsoft SQL Server DBAs.

Feel free to join in at: Link

SQLDBA

Created: March 06, 2008 | Professional Organization | Members: 10,800

is a common & shared community for all SQL Server DBAs, developers, designers, consultants, job seekers and independent experts to enhance knowledge & share information on Microsoft SQL Server technologies. Come on and join us; what are you waiting for? Website: http://sqlknowledge.com

How long does a SP stay in the cache?

Posted by Deepak Kumar | Posted in Scripts, SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 14-04-2010

As a SQL Server database administrator, developer, or designer, you must be eager to know how long a stored procedure's execution plan stays in the cache. On what basis, and by what parameters, does Microsoft SQL Server decide how long an execution plan resides in cache? The quick answer is that it is based on the cost factor and the number of references to that object in the cache. Let's see how it works in detail:

Once the execution plan is generated for a stored procedure, it stays in the procedure cache. In SQL Server 2000, the lazy writer keeps scanning for unused plans and throws them out of the cache, but only when space is needed in the cache.

Each query plan and execution context has an ‘associated cost factor’ that indicates how expensive the structure is to compile. These data structures also have an age field. Each time the object is referenced by a connection, the age field is incremented by the compilation cost factor.

For example, if a query plan of SP has a cost factor of 8 and is referenced twice, its age becomes 16. The lazywriter process periodically scans the list of objects in the procedure cache. The lazywriter decrements the age field of each object by 1 on each scan.

The age of your SP Execution plan is decremented to 0 after 16 scans of the procedure cache, unless another user references the plan. The lazywriter process deallocates an object if these conditions are met:

  • The memory manager requires memory and all available memory is currently in use.
  • The age field for the object is 0.
  • The object is not currently referenced by a connection.
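To see the use and reference counts this aging mechanism works from, you can look into the procedure cache directly. Here is a minimal sketch against master.dbo.syscacheobjects (the SQL 2000-era view; SQL 2005+ also exposes the newer sys.dm_exec_cached_plans DMV):

-- Inspect cached stored procedure plans with their use/reference counts 
SELECT  cacheobjtype, 
        objtype, 
        DB_NAME(dbid) AS database_name, 
        usecounts,          -- times the plan has been reused since it was cached 
        refcounts,          -- other cache objects referencing this plan 
        pagesused,          -- 8 KB pages taken by the plan 
        sql                 -- procedure name / statement text 
FROM    master.dbo.syscacheobjects 
WHERE   objtype = 'Proc' 
ORDER BY usecounts DESC 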

In the same way, certain changes in a database can cause an execution plan to become either inefficient or invalid, such as the following:

  • Any structural changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • New distribution statistics generated either explicitly from a statement such as UPDATE STATISTICS or automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.

You can read more in the SQL 2000 Books Online topics “Lazy Writer” and “Freeing and Writing Buffer Pages”, or online at: Link

CHECKPOINT

CHECKPOINT forces all dirty pages for the current database to be written to disk. Dirty pages are data or log pages that were modified after being read into the buffer cache, but whose modifications have not yet been written to disk.

SQL Server Buffer Mgr: Lazy Writes/Sec:

This counter tracks how many times a second the lazy writer process moves dirty pages from the buffer to disk in order to free up buffer space.
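Both checkpoint activity and lazy writes can be read from inside SQL Server as well as from Performance Monitor. Here is a minimal sketch against master.dbo.sysperfinfo (SQL 2000; on 2005+ the same columns appear in sys.dm_os_performance_counters). Note that cntr_value is cumulative for these counters, so sample twice and divide by the interval to get a per-second rate:

-- Buffer Manager counters exposed inside SQL Server 
SELECT  object_name, 
        counter_name, 
        cntr_value 
FROM    master.dbo.sysperfinfo 
WHERE   object_name LIKE '%Buffer Manager%' 
  AND   counter_name IN ('Lazy writes/sec', 'Checkpoint pages/sec') 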

Below are some documented and undocumented DBCC commands available in SQL Server 2000 for working with, and finding out more about, the SQL Server cache.

To monitor the cache:

DBCC SQLPERF (LRUSTATS)
DBCC CACHESTATS
DBCC MEMORYSTATUS
DBCC PROCCACHE

To clean the cache:

DBCC FLUSHPROCINDB
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
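A short usage sketch for the cache-clearing commands above: DBCC FLUSHPROCINDB takes a database ID, so it clears cached plans for a single database rather than the whole server (the database name below is only an example):

-- Clear cached plans for one database only (undocumented) 
DECLARE @dbid int 
SET @dbid = DB_ID('SQLKnowledge')   -- example database name 
DBCC FLUSHPROCINDB (@dbid) 

-- Server-wide alternatives from the list above: 
-- DBCC FREEPROCCACHE      -- drops all plans from the procedure cache 
-- DBCC DROPCLEANBUFFERS   -- drops clean pages from the buffer pool 
--                            (run CHECKPOINT first so dirty pages are flushed) 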

Hopefully I have not put too many new questions in your mind. But if I have, feel free to post your comments!

List of Undocumented DBCC Commands

Posted by Deepak Kumar | Posted in Scripts, SQL DBA, SQL Dev, Uncategorized | Posted on 10-03-2010

DBCC activecursors [(spid)]

DBCC addextendedproc (function_name, dll_name)

DBCC addinstance (objectname, instancename)

DBCC adduserobject (name)

DBCC auditevent (eventclass, eventsubclass, success, loginname, rolename, dbusername, loginid)

DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])

DBCC balancefactor (variance_percent)

DBCC bufcount [(number_of_buffers)]

DBCC buffer ( {‘dbname’ | dbid} [, objid [, number [, printopt={0|1|2} ][, dirty | io | kept | rlock | ioerr | hashed ]]])

DBCC bytes ( startaddress, length )

DBCC cachestats

DBCC callfulltext

DBCC checkalloc [(‘database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, ESTIMATEONLY]]

DBCC checkcatalog [(‘database_name’)] [WITH NO_INFOMSGS]

DBCC checkconstraints [( ‘tab_name’ | tab_id | ‘constraint_name’ | constraint_id )] [WITH ALL_CONSTRAINTS | ALL_ERRORMSGS]

DBCC checkdb [(‘database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, DBCC TABLOCK]

DBCC checkdbts (dbid, newTimestamp)]

DBCC checkfilegroup [( [ {‘filegroup_name’ | filegroup_id} ] [, NOINDEX] )] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkident (‘table_name'[, { NORESEED | {RESEED [, new_reseed_value] } } ] )

DBCC checkprimaryfile ( {‘FileName’} [, opt={0|1|2|3} ])

DBCC checktable (‘table_name'[, {NOINDEX | index_id | REPAIR}]) [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC cleantable (‘database_name’|database_id, ‘table_name’|table_id, [batch_size])

DBCC cacheprofile [( {actionid} [, bucketid])

DBCC clearspacecaches (‘database_name’|database_id, ‘table_name’|table_id, ‘index_name’|index_id)

DBCC collectstats (on | off)

DBCC concurrencyviolation (reset | display | startlog | stoplog)

DBCC config

DBCC cursorstats ([spid [,’clear’]])

DBCC dbinfo [(‘dbname’)]

DBCC dbrecover (dbname [, IgnoreErrors])

DBCC dbreindex (‘table_name’ [, index_name [, fillfactor ]]) [WITH NO_INFOMSGS]

DBCC dbreindexall (db_name/db_id, type_bitmap)

DBCC dbrepair (‘dbname’, DROPDB [, NOINIT])

DBCC dbtable [({‘dbname’ | dbid})]

DBCC debugbreak

DBCC deleteinstance (objectname, instancename)

DBCC des [( {‘dbname’ | dbid} [, {‘objname’ | objid} ])]

DBCC detachdb [( ‘dbname’ )]

DBCC dropcleanbuffers

DBCC dropextendedproc (function_name)

DBCC dropuserobject (‘object_name’)

DBCC dumptrigger ({‘BREAK’, {0 | 1}} | ‘DISPLAY’ | {‘SET’, exception_number} | {‘CLEAR’, exception_number})

DBCC errorlog

DBCC extentinfo [({‘database_name’| dbid | 0} [, {‘table_name’ | table_id} [, {‘index_name’ | index_id | -1}]])]

DBCC fileheader [( {‘dbname’ | dbid} [, fileid])

DBCC fixallocation [({‘ADD’ | ‘REMOVE’}, {‘PAGE’ | ‘SINGLEPAGE’ | ‘EXTENT’ | ‘MIXEDEXTENT’}, filenum, pagenum [, objectid, indid])

DBCC flush (‘data’ | ‘log’, dbid)

DBCC flushprocindb (database)

DBCC free dll_name (FREE)

DBCC freeproccache

DBCC freeze_io (db)

DBCC getvalue (name)

DBCC help (‘dbcc_command’ | ‘?’)

DBCC icecapquery (‘dbname’, stored_proc_name [, #_times_to_icecap  (-1 infinite, 0 turns off)]) Use ‘dbcc icecapquery (printlist)’ to see list of SP’s to profile. Use ‘dbcc icecapquery (icecapall)’ to profile all SP’s.

DBCC incrementinstance (objectname, countername, instancename, value)

DBCC ind ( { ‘dbname’ | dbid }, { ‘objname’ | objid }, { indid | 0 | -1 | -2 } )

DBCC indexdefrag ({dbid | dbname | 0}, {tableid | tablename}, {indid | indname})

DBCC inputbuffer (spid)

DBCC invalidate_textptr (textptr)

DBCC invalidate_textptr_objid (objid)

DBCC iotrace ( { ‘dbname’ | dbid | 0 | -1 }, { fileid | 0 }, bufsize, [ { numIOs | -1 } [, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )

DBCC latch ( address [, ‘owners’] [, ‘stackdumps’])

DBCC lock ([{‘DUMPTABLE’ | ‘DUMPSTATS’ | ‘RESETSTATS’ | ‘HASH’}]|[{‘STALLREPORTTHESHOLD’, stallthreshold}])

DBCC lockobjectschema (‘object_name’)

DBCC log ([dbid[,{0|1|2|3|4}[,[‘lsn’,'[0x]x:y:z’]|[‘numrecs’,num]|[‘xdesid’,’x:y’]|[‘extent’,’x:y’]|[‘pageid’,’x:y’]|[‘objid’,{x,’y’}]|[‘logrecs’,{‘lop’|op}…]|[‘output’,x,[‘filename’,’x’]]…]]])

DBCC loginfo [({‘database_name’ | dbid})]

DBCC matview ({‘PERSIST’ | ‘ENDPERSIST’ | ‘FREE’ | ‘USE’ | ‘ENDUSE’})

DBCC memobjlist [(memory object)]

DBCC memorymap

DBCC memorystatus

DBCC memospy

DBCC memusage ([IDS | NAMES], [Number of rows to output])

DBCC monitorevents (‘sink’ [, ‘filter-expression’])

DBCC newalloc – please use checkalloc instead

DBCC no_textptr (table_id , max_inline)

DBCC opentran [({‘dbname’| dbid})] [WITH TABLERESULTS[,NO_INFOMSGS]]

DBCC outputbuffer (spid)

DBCC page ( {‘dbname’ | dbid}, filenum, pagenum [, printopt={0|1|2|3} ][, cache={0|1} ])

DBCC perflog

DBCC perfmon

DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2}, targetfile, targetpg, order={1|0})

DBCC pintable (database_id, table_id)

DBCC procbuf [({‘dbname’ | dbid}[, {‘objname’ | objid}[, nbufs[, printopt = { 0 | 1 } ]]] )]

DBCC proccache

DBCC prtipage (dbid, objid, indexid [, [{{level, 0} | {filenum, pagenum}}] [,printopt]])

DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]

DBCC readpage ({ dbid, ‘dbname’ }, fileid, pageid, formatstr [, printopt = { 0 | 1} ])

DBCC rebuild_log (dbname [, filename])

DBCC renamecolumn (object_name, old_name, new_name)

DBCC resource

DBCC row_lock (dbid, tableid, set) – Not Needed

DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC ruleon (  rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC setcpuweight (weight)

DBCC setinstance (objectname, countername, instancename, value)

DBCC setioweight (weight)

DBCC show_statistics (‘table_name’, ‘target_name’)

DBCC showcontig (table_id | table_name [, index_id | index_name] [WITH FAST, ALL_INDEXES, TABLERESULTS [,ALL_LEVELS]])

DBCC showdbaffinity

DBCC showfilestats [(file_num)]

DBCC showoffrules

DBCC showonrules

DBCC showtableaffinity (table)

DBCC showtext (‘dbname’, {textpointer | {fileid, pageid, slotid [,option]}})

DBCC showweights

DBCC shrinkdatabase ({dbid | ‘dbname’}, [freespace_percentage [, {NOTRUNCATE | TRUNCATEONLY}]])

DBCC shrinkdb  is no longer supported. Please use shrinkdatabase instead

DBCC shrinkfile ({fileid | ‘filename’}, [compress_size [, {NOTRUNCATE | TRUNCATEONLY | EMPTYFILE}]])

DBCC sqlmgrstats

DBCC sqlperf ({IOSTATS | LRUSTATS | NETSTATS | RASTATS [, CLEAR]} | {THREADS} | {LOGSPACE})

DBCC stackdump [( {uid[, spid[, ecid]} | {threadId, ‘THREADID’}] )]

DBCC tab ( dbid, objid )

DBCC tape_control {‘query’ | ‘release’}[,(‘\\.\tape<n>’)]

DBCC tec [( uid[, spid[, ecid]] )]

DBCC textall [({‘database_name’|database_id}[, ‘FULL’ | FAST] )]

DBCC textalloc ({‘table_name’|table_id}[, ‘FULL’ | FAST])

DBCC thaw_io (db)

DBCC traceoff [( tracenum [, tracenum … ] )]

DBCC traceon [( tracenum [, tracenum … ] )]

DBCC tracestatus (trace# [, …trace#])

DBCC unpintable (dbid, table_id)

DBCC updateusage ({‘database_name’| 0} [, ‘table_name’ [, index_id]]) [WITH [NO_INFOMSGS] [,] COUNT_ROWS]

DBCC upgradedb (db)

DBCC usagegovernor (command, value)

DBCC useplan [(number_of_plan)]

DBCC useroptions

DBCC wakeup (spid)

DBCC writepage ({ dbid, ‘dbname’ }, fileid, pageid, offset, length, data)
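Most of these commands are for internal or support use and should not be run casually on production systems, but a few from the list double as everyday diagnostics. A short usage sketch (the database name and spid are only examples):

-- Everyday examples of a few commands from the list above 
DBCC SQLPERF (LOGSPACE)                            -- log size and percent used, per database 
DBCC OPENTRAN ('SQLKnowledge') WITH TABLERESULTS   -- oldest active transaction, if any 
DBCC INPUTBUFFER (55)                              -- last statement sent by spid 55 
DBCC SHOWCONTIG ('test') WITH FAST, TABLERESULTS   -- quick fragmentation summary for a table 
DBCC USEROPTIONS                                   -- SET options active on this connection 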

Website renovated

Posted by Deepak Kumar | Posted in Uncategorized | Posted on 17-02-2010

Dear site follower,

Due to some technical reasons we had to recreate the whole website from scratch, and the team will be uploading the latest SQL Server related content. Apologies if you cannot find some of the links you had saved for future use. The SQL forum has been discontinued with immediate effect, and all formerly registered users are requested to sign in on the new portal for continued support.

We hope for your continued support as usual, and to keep helping each other.

Thanks again!

© 2010 SQLKnowledge.com