Happy to join SQLKnowledge

Posted by Shashikant | Posted in Scripts, SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 25-12-2010

1

I have joined SQLKnowledge, and alongside Deepak and the other members I will be writing about SQL Server troubleshooting, tips, and tricks. Keep checking for more.

Merry Christmas

SQL 2011 Denali Basic Setup and Configurations

Posted by Deepak Kumar | Posted in SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 22-12-2010

0

In continuation of my earlier post on the Denali CTP1 launch, here are the SQL Server 2011 setup screenshots. To begin, you can download the installation media from the Microsoft link. Before you start the installation, check the system requirements, which are as follows:

System Requirements: Details

  • Supported Operating Systems: Windows 7; Windows Server 2008 R2; Windows Server 2008 Service Pack 2; Windows Vista Service Pack 2
  • 32-bit systems: Intel or compatible 1 GHz or faster processor (2 GHz or faster is recommended)
  • 64-bit systems: 1.4 GHz or faster processor
  • Minimum of 1 GB of RAM (2 GB or more is recommended.)
  • 2.2 GB of available hard disk space

The very first screen you will see is the typical SQL Server installation screen; for this installation I used the new SQL Server stand-alone installation option.

[Setup wizard screenshots: clip_image002 through clip_image040]
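
Once the wizard finishes, a quick check from Management Studio confirms the build and lets you put a couple of basic configurations in place. This is only a minimal sketch; the memory value below is a placeholder, so adjust it to your hardware.

-- Confirm the installed Denali CTP build and edition
SELECT @@VERSION AS VersionString,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition') AS Edition

-- Basic post-install configuration (example value only)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 4096   -- cap SQL Server memory; 4096 MB is just an example
RECONFIGURE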

SQL Server Consolidation & Virtualization Practice

Posted by Deepak Kumar | Posted in SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 22-12-2010

0

 

    Consolidation, by definition, is the process of combining multiple SQL Server databases and servers running on different machines (which could be geographically separated) onto a smaller number of more powerful machines in a central location. In my opinion, for SQL Server it is a process of getting organized, spending only on what you need or use, and of course saving money at the same time. Let's see how consolidation and virtualization work together.

    Since its beginning, Microsoft SQL Server has been a largely self-maintaining, secure-by-default, and self-tuning DBMS that requires very little configuration during installation or later use. In a large enterprise, however, this ease of adoption can lead to uncoordinated installations, wasted hardware and licenses, a lack of standards, and security holes.

Think again! You should consider adopting virtualization and consolidation if you find yourself asking questions like these:

  • Looking at your SQL Server inventory list, have you realized that the number of SQL Servers is going up every month?

  • Looking at server utilization reports, have you realized that your team or vendor overestimated database server hardware requirements and workload, leading to the purchase of excess hardware that your application is never going to use?

  • Does it take months every time to plan and implement SQL Server patching, upgrades, and installations? Is applying SQL Server best practices and security updates/settings complex and out of reach?

Better-together approach: there are three possible scenarios you can consider implementing, each with its own benefits and concerns.

  • One OS / one SQL instance - multiple databases
    This is the best you can get: in simple words, evaluate applications that use only a few databases and move those databases to a shared database server (a backup/restore sketch follows this list). Concerns: shared cache, SA permissions, databases, objects, or logins with similar names, maintenance windows, remote desktop (RDP) access, etc.

  • One OS / multiple SQL instances
    Build a server hosting multiple SQL Server instances for applications that require their own unique collation, SQL version and build, dedicated memory/cache, tempdb, etc. Concerns: SA permissions, remote desktop, etc.

  • One OS / one SQL instance (virtualization)
    For applications with extreme performance needs and a unique set of configurations, build a server with multiple virtual operating systems, each OS running its own SQL instance.
    Software dependency: Microsoft Hyper-V, VMware, HP PolyServe, etc.
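
    As a rough illustration of the first scenario, moving a database onto a shared instance is usually just a backup and restore. This is a minimal sketch; the database name, share path, and logical file names are placeholders for your own environment.

    -- On the source server: take a full backup of the database being consolidated
    BACKUP DATABASE SalesDB TO DISK = N'\\fileshare\sqlmoves\SalesDB.bak' WITH INIT

    -- On the shared (consolidated) instance: restore it, relocating the files
    RESTORE DATABASE SalesDB
    FROM DISK = N'\\fileshare\sqlmoves\SalesDB.bak'
    WITH MOVE 'SalesDB_Data' TO N'D:\SQLData\SalesDB.mdf',
         MOVE 'SalesDB_Log' TO N'E:\SQLLogs\SalesDB_log.ldf'

    -- Remember to re-create and remap the application logins on the new instance afterwards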


      Consolidation & Virtualization benefits

      • Reduced software and operating system licensing costs
      • Reduced server hardware costs; fewer servers required
      • Lower datacenter space, power consumption, and cooling costs (Green IT)
      • Lower monitoring and support costs; fewer resources needed to monitor, control, and patch servers
      • Easier server portability with scale-up and scale-out solutions
      • High availability options (depending on setup)

        Concerns:

        • Single point of failure (but you can implement a good high availability solution to deal with this issue)
        • Takes time and effort to consolidate (but once set up, the savings return year after year)
        • Complex service charge model: if your organization bills the services provided to various business units, you may need to do complex usage-based calculations before billing individual units.

          Milestones to Destination

          • Inventory: Prepare an inventory of the SQL Servers hosted in your environment. You may consider using the Microsoft Assessment and Planning (MAP) toolkit.
          • Hardware sizing: Document the database server resources available, such as CPU, memory, storage, disk I/O, etc.
          • Hardware usage: Identify database server utilization over a period of time and prepare a histogram of peak, low, and average usage. You may use Perfmon or third-party tools like VMware Capacity Planner; see the sketch after this list.
          • Savings: Calculate the server operational cost of the current setup and compare it with the new consolidated and virtualized model.
          • Going ahead: You may want to ask application owners some specific database-related questions:
            • Is it a vendor-supported/provided SQL instance with limitations, or an internal home-grown application? What are the workload and capacity planning guidelines for the future?
            • Is there any significant reason a physical server is required, or why a physical-to-virtual (P2V) migration should not be done?
            • Can databases from this SQL instance be consolidated onto another SQL instance? What is the frequency of database changes/deployments, and what are the downtime requirements?
            • Can the SQL instance be upgraded to the latest SQL Server version and build, as per the virtualization standard in your environment?
            • What high availability options are implemented for the databases and the SQL service?
          • Best practices:

            • It is better to divide the entire SQL Server inventory into multiple smaller sections. For example, create five subsets of 100 servers rather than virtualizing all 500 servers in a single attempt, and apply the lessons learned and best practices to the later subsets.
            • Use a single machine with individual SQL named instances or VMs for production/test/development/staging database needs (depending on the application/environment).
            • Never oversubscribe resources for your server on a virtual platform.
            • Calculate the total server workload in the virtualization model using real application and database workloads across different scenarios and times of day.
            • Carefully choose the virtualization technique and server hardware to implement.
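
            For the hardware usage milestone, one simple approach is to sample a few SQL Server performance counters on a schedule and keep the history in a table. This is only a sketch: the table name and counter choice are examples, it assumes SQL Server 2005 or later for sys.dm_os_performance_counters (on SQL Server 2000 the equivalent view is master.dbo.sysperfinfo), and OS-level CPU and disk counters still come from Perfmon or a capacity planning tool.

            -- Collection table plus a sampling query; schedule the INSERT (e.g. every 5 minutes) with SQL Agent
            CREATE TABLE dbo.UsageSample (
                SampleTime datetime NOT NULL DEFAULT GETDATE(),
                CounterName nvarchar(128) NOT NULL,
                CounterValue bigint NOT NULL)

            INSERT dbo.UsageSample (CounterName, CounterValue)
            SELECT RTRIM(counter_name), cntr_value
            FROM sys.dm_os_performance_counters
            WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy', N'Lazy writes/sec')

            -- Note: the "/sec" counters are cumulative totals; diff consecutive samples to get a rate
            SELECT CounterName, MAX(CounterValue) AS Peak, AVG(CounterValue) AS Average
            FROM dbo.UsageSample
            GROUP BY CounterName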

            Resources: Microsoft, VMware, MS PDF

            Script to find used and free space in Database files

            Posted by Deepak Kumar | Posted in Scripts, SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 02-08-2010

            2

            In the enterprise world, you may be responsible for monitoring 1,000+ databases hosted on hundreds of SQL Server instances. On a lazy afternoon, a low disk space alarm suddenly hits your inbox. What will you do? Storage cannot be added or expanded on the fly. Here is a tested script to find used and free space in SQL Server database data files, which you can then shrink to ease the current situation (a DBCC SHRINKFILE example follows the script).

            -- Find Log file information and save in temp file using dbcc sqlperf logspace
            CREATE TABLE #LogSpace (DatabaseName sysname, LogSize decimal(18,2), LogSpace decimal(18,2), status bit)
            INSERT #LogSpace exec ('dbcc sqlperf(logspace) WITH NO_INFOMSGS') 
             
            -- Find data file information using cursor and dbcc showfilestats
            Declare @DatabaseName varchar(500)
            create table #DBSpace (FileId int, FileGroup int, TotalSpace int,
            Used_Space int, Name1 sysname, NameofFile varchar(900))
            Declare curDB cursor for select name from master..sysdatabases 
            open curDB
            fetch curDB into @DatabaseName
            while @@fetch_status = 0
            begin
                if databasepropertyex(@DatabaseName,'Status') = 'ONLINE'
                begin
                insert into #DBSpace exec ('USE [' + @DatabaseName + ']  DBCC SHOWFILESTATS WITH NO_INFOMSGS')
                end
                fetch curDB into @DatabaseName
            end
            close curDB
            deallocate curDB 
             
            -- Select data in tabular format with proper headings & order by
            select left(sd.name,30) AS 'DatabaseName',
            (ff.[DataFileSpace(MB)])+ls.LogSize as 'DatabaseSize(MB)',ff.[DataFileSpace(MB)],
            ff.[DataFileUsedSpace(MB)], ff.[DataFileFreeSpace(MB)],
            ls.LogSize as 'LogFileSize(MB)', ls.LogSpace as'LogFileSpaceUsedIn(%)'
            from #DBSpace dbs join master..sysdatabases sd on rtrim(sd.filename)=rtrim(dbs.NameofFile)
            join #LogSpace ls on sd.name=ls.DatabaseName
            join
            (select  sf.dbid, sum(dbss.TotalSpace/16) as 'DataFileSpace(MB)', 
            sum(dbss.Used_Space/16) as 'DataFileUsedSpace(MB)', 
            (sum(dbss.TotalSpace/16)- sum(dbss.Used_Space/16)) 
            as 'DataFileFreeSpace(MB)' from #DBSpace dbss
            join master.sys.sysaltfiles sf on rtrim(sf.filename)= rtrim(dbss.NameofFile)
            group by sf.dbid) ff on ff.dbid=sd.dbid
            order by [DataFileFreeSpace(MB)] desc 
             
            -- Drop temporary tables manually
            Drop table #DBSpace
            DROP TABLE #LogSpace 
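
            Once you know which data files have reclaimable space, the shrink itself is one DBCC SHRINKFILE per file. Treat this only as a sketch and a stop-gap: the database name, logical file name, and target size below are example values, and repeated shrinking fragments indexes.

            -- Shrink one data file of the affected database down to a target size in MB
            USE SomeDatabase
            DBCC SHRINKFILE (N'SomeDatabase_Data', 2048)   -- logical file name, target size in MB (example values)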

            How long does a SP stay in the cache?

            Posted by Deepak Kumar | Posted in Scripts, SQL BI/DW/SSRS, SQL DBA, SQL Dev, Uncategorized | Posted on 14-04-2010

            0

            As a SQL Server database administrator, developer, or designer, you must be eager to know how long a stored procedure execution plan stays in the cache. On what basis does Microsoft SQL Server decide how long an execution plan resides in the cache? The quick answer: it is based on the compilation cost factor and the number of references to that object in the cache. Let's see how it works in detail:

            Once the execution plan is generated for a stored procedure, it stays in the procedure cache. In SQL Server 2000, the lazywriter keeps scanning the cache and throws unused plans out only when space is needed.

            Each query plan and execution context has an ‘associated cost factor’ that indicates how expensive the structure is to compile. These data structures also have an age field. Each time the object is referenced by a connection, the age field is incremented by the compilation cost factor.

            For example, if a query plan of SP has a cost factor of 8 and is referenced twice, its age becomes 16. The lazywriter process periodically scans the list of objects in the procedure cache. The lazywriter decrements the age field of each object by 1 on each scan.

            The age of your SP Execution plan is decremented to 0 after 16 scans of the procedure cache, unless another user references the plan. The lazywriter process deallocates an object if these conditions are met:

            • The memory manager requires memory and all available memory is currently in use.
            • The age field for the object is 0.
            • The object is not currently referenced by a connection.
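
            If you want to watch this aging and reuse yourself, SQL Server 2000 exposes the procedure cache through master.dbo.syscacheobjects. A rough query over that view (verify the column names on your build):

            -- Cached plans for stored procedures, most heavily reused first
            SELECT cacheobjtype, objtype, dbid, usecounts, pagesused, sql
            FROM master.dbo.syscacheobjects
            WHERE objtype = 'Proc'
            ORDER BY usecounts DESC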

            In the same way, certain changes in a database can cause an execution plan to become inefficient or invalid, such as:

            • Any structural changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
            • New distribution statistics generated either explicitly from a statement such as UPDATE STATISTICS or automatically.
            • Dropping an index used by the execution plan.
            • An explicit call to sp_recompile.
            • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
            • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.

            You can read more in SQL Server 2000 Books Online, under the "Lazy Writer" topic and "Freeing and Writing Buffer Pages", or online at: Link

            CHECKPOINT

            CHECKPOINT forces all dirty pages for the current database to be written to disk. Dirty pages are data or log pages that were modified after they entered the buffer cache, but whose modifications have not yet been written to disk.

            SQL Server Buffer Mgr: Lazy Writes/Sec:

            This counter tracks how many times a second that the Lazy Writer process is moving dirty pages from the buffer to disk in order to free up buffer space.
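
            On SQL Server 2000 the same counter can be read from T-SQL as well as from Perfmon. Note that "per second" counters are exposed as cumulative totals, so the actual rate is the difference between two readings divided by the elapsed seconds; this is just a quick sketch.

            -- Raw (cumulative) lazy writes counter; sample it twice and diff to get a per-second rate
            SELECT object_name, counter_name, cntr_value
            FROM master.dbo.sysperfinfo
            WHERE counter_name = 'Lazy writes/sec'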

            Below are some documented and undocumented DBCC commands available in SQL Server 2000 for inspecting and managing the SQL Server cache.

            To monitor the cache:

            DBCC SQLPERF (LRUSTATS)
            DBCC CACHESTATS
            DBCC MEMORYSTATUS
            DBCC PROCCACHE

            To clean the cache:

            DBCC FLUSHPROCINDB
            DBCC DROPCLEANBUFFERS
            DBCC FREEPROCCACHE
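
            A word of caution before cleaning the cache on a busy server: flushing it forces a recompile of everything that was cached. The undocumented DBCC FLUSHPROCINDB is the narrower option, since it takes a single database id; the database name below is just an example.

            -- Flush only one database's plans (undocumented), rather than the whole procedure cache
            DECLARE @dbid int
            SELECT @dbid = DB_ID('SomeDatabase')
            DBCC FLUSHPROCINDB (@dbid)

            -- Heavier hammers, best avoided on production systems:
            -- DBCC DROPCLEANBUFFERS
            -- DBCC FREEPROCCACHE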

            Hopefully I have not put too many new questions in your mind. But if I have, feel free to post your comments!

            © 2010 SQLKnowledge.com