brentozarultd / sql-server-first-responder-kit

sp_Blitz, sp_BlitzCache, sp_BlitzFirst, sp_BlitzIndex, and other SQL Server scripts for health checks and performance tuning.

Home Page: http://FirstResponderKit.org

License: Other

TSQL 99.89% PowerShell 0.11%
sql-server sp-blitz sp-blitzcache sp-blitzindex sp-blitzfirst health-checks ms-sql-server microsoft-sql-server first-responder-kit

sql-server-first-responder-kit's Introduction

SQL Server First Responder Kit


Navigation

  • How to Install the Scripts
  • How to Get Support
  • sp_Blitz: Overall Health Check
  • sp_BlitzCache: Find the Most Resource-Intensive Queries
  • sp_BlitzFirst: Real-Time Performance Advice
  • sp_BlitzIndex: Tune Your Indexes
  • sp_BlitzLock: Deadlock Analysis
  • sp_BlitzWho: What Queries are Running Now
  • sp_BlitzAnalysis: Query sp_BlitzFirst output tables
  • sp_BlitzBackups: How Much Data Could You Lose
  • sp_DatabaseRestore: Easier Multi-File Restores
  • Parameters Common to Many of the Stored Procedures
  • License

You're a DBA, sysadmin, or developer who manages Microsoft SQL Servers. It's your fault if they're down or slow. These tools help you understand what's going on in your server.

  • When you want an overall health check, run sp_Blitz.
  • To learn which queries have been using the most resources, run sp_BlitzCache.
  • To analyze which indexes are missing or slowing you down, run sp_BlitzIndex.
  • To find out why the server is slow right now, run sp_BlitzFirst.

To install, download the latest release ZIP, then run the SQL files in the master database. (You can use other databases if you prefer.)

The First Responder Kit runs on:

  • SQL Server on Windows - all versions that Microsoft supports. For end of support dates, check out the "Support Ends" column at https://sqlserverupdates.com.
  • SQL Server on Linux - yes, fully supported except sp_AllNightLog and sp_DatabaseRestore, which require xp_cmdshell, which Microsoft doesn't provide on Linux.
  • Amazon RDS SQL Server - fully supported.
  • Azure SQL DB - not supported. Some of the procedures work, but some don't, and Microsoft has a tendency to change DMVs in Azure without warning, so we don't put any effort into supporting it. If it works, great! If not, any changes to make it work would be on you. See the contributing.md file for how to do that.

If you're stuck with versions of SQL Server that Microsoft no longer supports, like SQL Server 2008, check the Deprecated folder for older versions of the scripts which may work, depending on your versions and compatibility levels.

How to Install the Scripts

For SQL Server, to install all of the scripts at once, open Install-All-Scripts.sql in SSMS or Azure Data Studio, switch to the database where you want to install the procs, and run it. It will install the stored procedures if they don't already exist, or update them to the current version if they do exist.

For Azure SQL DB, use Install-Azure.sql, which will only install the procs that are compatible with Azure SQL DB.

If you hit an error when running Install-All-Scripts, it's likely because you're using an older version of SQL Server that Microsoft no longer supports. In that case, check out the Deprecated folder. That's where we keep old versions of the scripts around as a last-ditch effort - but really, if Microsoft won't support their own old stuff, you shouldn't try to do it either.

We recommend installing these stored procedures in the master database, but any database works; they're all supported. Just be aware that if you install the procs in more than one database, you may not keep every copy up to date, and you can hit issues when you accidentally run an older version.

There are a couple of Install-Core scripts included for legacy purposes, for folks with installers they've built. You can ignore those.
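
Once the scripts are installed, a quick way to confirm what you have is to list the procedures and their modification dates. A minimal sketch, assuming you installed into the default master database and kept the standard procedure names:

SELECT name, create_date, modify_date
FROM master.sys.objects
WHERE type = 'P'
  AND (name LIKE 'sp_Blitz%' OR name LIKE 'sp_AllNightLog%' OR name = 'sp_DatabaseRestore')
ORDER BY name;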

How to Get Support

Everyone here is expected to abide by the Contributor Covenant Code of Conduct.

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. If you need a free invite, hit SQLslack.com. Be patient - it's staffed with volunteers who have day jobs, heh.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the "More Details" URL for any warning you find. We put a lot of work into documentation, and we wouldn't want someone to yell at you to go read the fine manual. After that, when you've still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes us!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you're working with.

Back to top

sp_Blitz: Overall Health Check

Run sp_Blitz daily or weekly for an overall health check. Just run it from SQL Server Management Studio, and you'll get a prioritized list of issues on your server right now.

Output columns include:

  • Priority - 1 is the most urgent, stuff that could get you fired. The warnings get progressively less urgent.
  • FindingsGroup, Findings - describe the problem sp_Blitz found on the server.
  • DatabaseName - the database having the problem. If it's null, it's a server-wide problem.
  • URL - copy/paste this into a browser for more information.
  • Details - not just bland text, but dynamically generated stuff with more info.

Commonly used parameters:

  • @CheckUserDatabaseObjects = 0 - by default, we check inside user databases for things like triggers or heaps. Turn this off (0) to make checks go faster, or ignore stuff you can't fix if you're managing third party databases. If a server has 50+ databases, @CheckUserDatabaseObjects is automatically turned off unless...
  • @BringThePain = 1 - required if you want to run @CheckUserDatabaseObjects = 1 with over 50 databases. It's gonna be slow.
  • @CheckServerInfo = 1 - includes additional rows at priority 250 with server configuration details like service accounts.
  • @IgnorePrioritiesAbove = 50 - if you want a daily bulletin of the most important warnings, set @IgnorePrioritiesAbove = 50 to only get the urgent stuff.
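
For example, a lighter daily health check might combine a few of the parameters above (the values are just illustrative):

EXEC sp_Blitz
    @CheckUserDatabaseObjects = 0,  /* skip the per-database object checks to run faster */
    @CheckServerInfo = 1,           /* add the priority 250 server configuration rows */
    @IgnorePrioritiesAbove = 50;    /* only show the urgent warnings */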

Advanced tips are covered in the sections below: advanced parameters, writing sp_Blitz output to a table, and skipping checks or databases.

Back to top

Advanced sp_Blitz Parameters

In addition to the parameters common to many of the stored procedures, here are the ones specific to sp_Blitz:

  • @Debug - default 0. When set to 1, we print messages about what we're doing. When set to 2, we also print the dynamic queries.

Back to top

Writing sp_Blitz Output to a Table

EXEC sp_Blitz @OutputDatabaseName = 'DBAtools', @OutputSchemaName = 'dbo', @OutputTableName = 'BlitzResults';

Checks for the existence of a table DBAtools.dbo.BlitzResults, creates it if necessary, then adds the output of sp_Blitz into this table. This table is designed to support multiple outputs from multiple servers, so you can track your server's configuration history over time.
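
Once the table exists, you can query it like any other table. A minimal sketch, assuming the table name from the example above (the column list varies between versions, so start with SELECT *):

SELECT TOP (100) *
FROM DBAtools.dbo.BlitzResults;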

Back to top

Skipping Checks or Databases

CREATE TABLE dbo.BlitzChecksToSkip (
    ServerName NVARCHAR(128),
    DatabaseName NVARCHAR(128),
    CheckID INT
);
GO
INSERT INTO dbo.BlitzChecksToSkip (ServerName, DatabaseName, CheckID)
VALUES (NULL, 'SalesDB', 50);
GO
EXEC sp_Blitz @SkipChecksDatabase = 'DBAtools', @SkipChecksSchema = 'dbo', @SkipChecksTable = 'BlitzChecksToSkip';

Checks for the existence of a table named Fred - just kidding, named DBAtools.dbo.BlitzChecksToSkip. The table needs at least the columns shown above (ServerName, DatabaseName, and CheckID). For each row:

  • If the DatabaseName is populated but CheckID is null, then all checks will be skipped for that database
  • If both DatabaseName and CheckID are populated, then that check will be skipped for that database
  • If CheckID is populated but DatabaseName is null, then that check will be skipped for all databases
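
For example, rows covering each of the three cases above might look like this (CheckID 50 is just a placeholder; use the CheckID column from sp_Blitz's output to find the checks you actually want to skip):

INSERT INTO dbo.BlitzChecksToSkip (ServerName, DatabaseName, CheckID)
VALUES (NULL, 'SalesDB', NULL), /* skip every check for SalesDB */
       (NULL, 'SalesDB', 50),   /* skip check 50 for SalesDB only */
       (NULL, NULL, 50);        /* skip check 50 for every database */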

Back to top

sp_BlitzCache: Find the Most Resource-Intensive Queries

sp_BlitzCache looks at your plan cache where SQL Server keeps track of which queries have run recently, and how much impact they've had on the server.

By default, it includes two result sets:

  • The first result set shows your 10 most resource-intensive queries.
  • The second result set explains the contents of the Warnings column - but it only shows the warnings that were produced in the first result set. (It's kinda like the most relevant glossary of execution plan terms.)

Output columns include:

  • Database - the database context where the query ran. Keep in mind that if you fully qualify your object names, the same query might be run from multiple databases.
  • Cost - the Estimated Subtree Cost of the query, what Kendra Little calls "Query Bucks."
  • Query Text - don't copy/paste from here - it's only a quick reference. A better source for the query will show up later on.
  • Warnings - problems we found.
  • Created At - when the plan showed up in the cache.
  • Last Execution - maybe the query only runs at night.
  • Query Plan - click on this, and the graphical plan pops up.

Common sp_BlitzCache Parameters

The @SortOrder parameter lets you pick which top 10 queries you want to examine:

  • reads - logical reads
  • CPU - from total_worker_time in sys.dm_exec_query_stats
  • executions - how many times the query ran since the CreationDate
  • xpm - executions per minute, derived from the CreationDate and LastExecution
  • recent compilations - if you're looking for things that are recompiling a lot
  • memory grant - if you're troubleshooting a RESOURCE_SEMAPHORE issue and want to find queries getting a lot of memory
  • writes - if you wanna find those pesky ETL processes
  • You can also use average or avg for a lot of the sorts, like @SortOrder = 'avg reads'
  • all - sorts by all the different sort order options, and returns a single result set of hot messes. This is a little tricky because:
  • We find the @Top N queries by CPU, then by reads, then writes, duration, executions, memory grant, spills, etc. If you want to set @Top > 10, you also have to set @BringThePain = 1 to make sure you understand that it can be pretty slow.
  • As we work through each pattern, we exclude the results from the prior patterns. So for example, we get the top 10 by CPU, and then when we go to get the top 10 by reads, we exclude queries that were already found in the top 10 by CPU. As a result, the top 10 by reads may not really be the top 10 by reads - because some of those might have been in the top 10 by CPU.
  • To make things even a little more confusing, in the Pattern column of the output, we only specify the first pattern that matched, not all of the patterns that matched. It would be cool if at some point in the future, we turned this into a comma-delimited list of patterns that a query matched, and then we'd be able to get down to a tighter list of top queries. For now, though, this is kinda unscientific.
  • query hash - filters for only queries that have multiple cached plans (even though they may all still be the same plan, just different copies stored.) If you use @SortOrder = 'query hash', you can specify a second sort order with a comma, like 'query hash, reads' in order to find only queries with multiple plans, sorted by the ones doing the most reads. The default second sort is CPU.
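
A few example calls using the sort orders above (there's nothing special about these particular picks):

EXEC sp_BlitzCache @SortOrder = 'reads';             /* top 10 by logical reads */
EXEC sp_BlitzCache @SortOrder = 'avg cpu';           /* top 10 by average CPU per execution */
EXEC sp_BlitzCache @SortOrder = 'query hash, reads'; /* queries with multiple cached plans, sorted by reads */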

Other common parameters include:

  • @Top = 10 - by default, you get 10 plans, but you can ask for more. Just know that the more you get, the slower it goes.
  • @ExportToExcel = 1 - turn this on, and it doesn't return XML fields that would hinder you from copy/pasting the data into Excel.
  • @ExpertMode = 1 - turn this on, and you get more columns with more data. Doesn't take longer to run though.
  • @IgnoreSystemDBs = 0 - set this to 0 if you want to see queries from master, model, and msdb. The default (1) hides those, and also hides queries from databases named dbadmin, dbmaintenance, and dbatools.
  • @MinimumExecutionCount = 0 - in servers like data warehouses where lots of queries only run a few times, you can set a floor number for examination.
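
Combining a few of those, here's a sketch of a wider, Excel-friendly pass (the values are illustrative):

EXEC sp_BlitzCache
    @Top = 25,                  /* more than the default 10 plans, so it runs a little slower */
    @MinimumExecutionCount = 5, /* ignore queries that have barely run */
    @ExportToExcel = 1;         /* drop the XML columns so the grid pastes cleanly into Excel */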

Back to top

Advanced sp_BlitzCache Parameters

In addition to the parameters common to many of the stored procedures, here are the ones specific to sp_BlitzCache:

  • @OnlyQueryHashes - if you want to examine specific query plans, you can pass in a comma-separated list of them in a string.
  • @IgnoreQueryHashes - if you know some queries suck and you don't want to see them, you can pass in a comma-separated list of them.
  • @OnlySqlHandles, @IgnoreSqlHandles - just like the above two params
  • @DatabaseName - if you only want to analyze plans in a single database. However, keep in mind that this is only the database context. A single query that runs in Database1 can join across objects in Database2 and Database3, but we can only know that it ran in Database1.
  • @SlowlySearchPlansFor - lets you search for strings, but will not find all results due to a bug in the way SQL Server removes spaces from XML. If your search string includes spaces, SQL Server may remove those before the search runs, unfortunately.
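
For example, to focus on a couple of specific plans in one database (the hash values below are made up; copy real ones from a prior run's Query Hash column):

EXEC sp_BlitzCache
    @DatabaseName = 'SalesDB',  /* hypothetical database name */
    @OnlyQueryHashes = '0x1AB614B461F4D769,0x1CD8B5F3F5B2D1A0';  /* placeholder hashes */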

sp_BlitzCache Known Issues

  • We skip databases in an Availability Group that require read-only intent. If you wanted to contribute code to enable read-only intent databases to work, look for this phrase in the code: "Checking for Read intent databases to exclude".

Back to top

sp_BlitzFirst: Real-Time Performance Advice

When performance emergencies strike, this should be the first stored proc in the kit you run.

It takes a sample from a bunch of DMVs (wait stats, Perfmon counters, plan cache), waits 5 seconds, and then takes another sample. It examines the differences between the samples, and then gives you a prioritized list of things that might be causing performance issues right now. Examples include:

  • Data or log file growing (or heaven forbid, shrinking)
  • Backup or restore running
  • DBCC operation happening

If no problems are found, it'll tell you that too. That's one of our favorite features because you can have your help desk team run sp_BlitzFirst and read the output to you over the phone. If no problems are found, you can keep right on drinking at the bar. (Ha! Just kidding, you'll still have to close out your tab, but at least you'll feel better about finishing that drink rather than trying to sober up.)

Common sp_BlitzFirst parameters include:

  • @Seconds = 5 by default. You can specify longer samples if you want to track stats during a load test or demo, for example.
  • @ShowSleepingSPIDs = 0 by default. When set to 1, shows long-running sleeping queries that might be blocking others.
  • @ExpertMode = 0 by default. When set to 1, it calls sp_BlitzWho when it starts (to show you what queries are running right now), plus outputs additional result sets for wait stats, Perfmon counters, and file stats during the sample, then finishes with one final execution of sp_BlitzWho to show you what was running at the end of the sample. When set to 2, it does the same as 1, but skips the calls to sp_BlitzWho.
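
For example, to watch a 30-second window during a load test with the extra detail result sets (the values are illustrative):

EXEC sp_BlitzFirst
    @Seconds = 30,          /* a longer sample than the default 5 seconds */
    @ShowSleepingSPIDs = 1, /* surface long-running sleeping sessions that may be blocking others */
    @ExpertMode = 1;        /* adds sp_BlitzWho, wait stats, Perfmon, and file stats result sets */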

Logging sp_BlitzFirst to Tables

You can log sp_BlitzFirst performance data to tables by scheduling an Agent job to run sp_BlitzFirst every 15 minutes with these parameters populated:

  • @OutputDatabaseName = typically 'DBAtools'
  • @OutputSchemaName = 'dbo'
  • @OutputTableName = 'BlitzFirst' - the quick diagnosis result set goes here
  • @OutputTableNameFileStats = 'BlitzFirst_FileStats'
  • @OutputTableNamePerfmonStats = 'BlitzFirst_PerfmonStats'
  • @OutputTableNameWaitStats = 'BlitzFirst_WaitStats'
  • @OutputTableNameBlitzCache = 'BlitzCache'
  • @OutputTableNameBlitzWho = 'BlitzWho'

All of the above OutputTableName parameters are optional: if you don't want to collect all of the stats, you don't have to. Keep in mind that the sp_BlitzCache results will get large, fast, because each execution plan is megabytes in size.
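
As a sketch, the Agent job step could call sp_BlitzFirst like this, assuming a DBAtools database already exists (the table names match the defaults listed above, and the tables are created if they're missing):

EXEC sp_BlitzFirst
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName = 'dbo',
    @OutputTableName = 'BlitzFirst',
    @OutputTableNameFileStats = 'BlitzFirst_FileStats',
    @OutputTableNamePerfmonStats = 'BlitzFirst_PerfmonStats',
    @OutputTableNameWaitStats = 'BlitzFirst_WaitStats',
    @OutputTableNameBlitzCache = 'BlitzCache',
    @OutputTableNameBlitzWho = 'BlitzWho';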

Logging Performance Tuning Activities

You can also log your own activities like tuning queries, adding indexes, or changing configuration settings. To do it, run sp_BlitzFirst with these parameters:

  • @OutputDatabaseName = typically 'DBAtools'
  • @OutputSchemaName = 'dbo'
  • @OutputTableName = 'BlitzFirst' - the quick diagnosis result set goes here
  • @LogMessage = 'Whatever you wanna show in your monitoring tool'

Optionally, you can also pass in:

  • @LogMessagePriority = 1
  • @LogMessageFindingsGroup = 'Logged Message'
  • @LogMessageFinding = 'Logged from sp_BlitzFirst' - you could use other values here to track other data sources like DDL triggers, Agent jobs, ETL jobs
  • @LogMessageURL = 'https://OurHelpDeskSystem/ticket/?12345' - or maybe a Github issue, or Pagerduty alert
  • @LogMessageCheckDate = '2017/10/31 11:00' - in case you need to log a message for a prior date/time, like if you forgot to log the message earlier
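
Putting that together, logging a change you just made might look like this (the message and URL are placeholders):

EXEC sp_BlitzFirst
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName = 'dbo',
    @OutputTableName = 'BlitzFirst',
    @LogMessage = 'Added nonclustered index on dbo.Sales',  /* placeholder message */
    @LogMessageURL = 'https://OurHelpDeskSystem/ticket/?12345';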

Back to top

sp_BlitzIndex: Tune Your Indexes

SQL Server tracks your indexes: how big they are, how often they change, whether they're used to make queries go faster, and which indexes you should consider adding. The results columns are fairly self-explanatory.

By default, sp_BlitzIndex analyzes the indexes of the database you're in (your current context.)

Common parameters include:

  • @DatabaseName - if you want to analyze a specific database
  • @SchemaName, @TableName - if you pass in these, sp_BlitzIndex does a deeper-dive analysis of just one table. You get several result sets back describing more information about the table's current indexes, foreign key relationships, missing indexes, and fields in the table.
  • @GetAllDatabases = 1 - slower, but lets you analyze all the databases at once, up to 50. If you want more than 50 databases, you also have to pass in @BringThePain = 1.
  • @ThresholdMB = 250 - by default, we only analyze objects over 250MB because you're busy.
  • @Mode = 0 (default) - returns high-priority (1-100) advice on the most urgent index issues.
    • @Mode = 4: Diagnose Details - like @Mode 0, but returns even more advice (priorities 1-255) with things you may not be able to fix right away, and things we just want to warn you about.
    • @Mode = 1: Summarize - total numbers of indexes, space used, etc per database.
    • @Mode = 2: Index Usage Details - an inventory of your existing indexes and their usage statistics. Great for copy/pasting into Excel to do slice & dice analysis. This is the only mode that works with the @Output parameters: you can export this data to table on a monthly basis if you need to go back and look to see which indexes were used over time.
    • @Mode = 3: Missing Indexes - an inventory of indexes SQL Server is suggesting. Also great for copy/pasting into Excel for later analysis.
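
A few example calls (the database and table names are placeholders):

EXEC sp_BlitzIndex @DatabaseName = 'SalesDB';             /* default @Mode = 0 advice */
EXEC sp_BlitzIndex @DatabaseName = 'SalesDB', @Mode = 2;  /* index usage inventory */
EXEC sp_BlitzIndex @DatabaseName = 'SalesDB', @SchemaName = 'dbo', @TableName = 'Orders';  /* single-table deep dive */
EXEC sp_BlitzIndex @GetAllDatabases = 1;                  /* every database, up to 50 */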

sp_BlitzIndex focuses on mainstream index types. Other index types have varying amounts of support:

  • Fully supported: rowstore indexes, columnstore indexes, temporal tables.
  • Columnstore indexes: fully supported. Key columns are shown as includes rather than keys since they're not in a specific order.
  • In-Memory OLTP (Hekaton): unsupported. These objects show up in the results, but for more info, you'll want to use sp_BlitzInMemoryOLTP instead.
  • Graph tables: unsupported. These objects show up in the results, but we don't do anything special with 'em, like call out that they're graph tables.
  • Spatial indexes: unsupported. We call out that they're spatial, but we don't do any special handling for them.
  • XML indexes: unsupported. These objects show up in the results, but we don't include the index's columns or sizes.

Back to top

Advanced sp_BlitzIndex Parameters

In addition to the parameters common to many of the stored procedures, here are the ones specific to sp_BlitzIndex:

  • @SkipPartitions = 1 - add this if you want to analyze large partitioned tables. We skip these by default for performance reasons.
  • @SkipStatistics = 0 - right now, by default, we skip statistics analysis because we've had some performance issues on this.
  • @Filter = 0 (default) - 1=No low-usage warnings for objects with 0 reads. 2=Only warn for objects >= 500MB
  • @OutputDatabaseName, @OutputSchemaName, @OutputTableName - these only work for @Mode = 2, index usage detail.
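
For example, archiving the @Mode = 2 usage inventory to a table each month could look like this sketch (the output names are illustrative):

EXEC sp_BlitzIndex
    @GetAllDatabases = 1,
    @Mode = 2,
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName = 'dbo',
    @OutputTableName = 'BlitzIndex_Mode2';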

Back to top

sp_BlitzLock: Deadlock Analysis

Checks either the System Health session or a specific Extended Event session that captures deadlocks and parses out all the XML for you.

Parameters you can use:

  • @Top: Use if you want to limit the number of deadlocks to return. This is ordered by event date ascending.
  • @DatabaseName: If you want to filter to a specific database
  • @StartDate: The date you want to start searching on.
  • @EndDate: The date you want to stop searching on.
  • @ObjectName: If you want to filter to a specific table. The object name has to be fully qualified 'Database.Schema.Table'
  • @StoredProcName: If you want to search for a single stored procedure. Don't specify a schema or database name - just a stored procedure name alone is all you need, and if it exists in any schema (or multiple schemas), we'll find it.
  • @AppName: If you want to filter to a specific application.
  • @HostName: If you want to filter to a specific host.
  • @LoginName: If you want to filter to a specific login.
  • @EventSessionPath: If you want to point this at an XE session rather than the system health session.
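
For example, to look at last week's deadlocks involving one table (the names and dates are placeholders):

EXEC sp_BlitzLock
    @StartDate = '2017-10-01',
    @EndDate = '2017-10-08',
    @DatabaseName = 'SalesDB',
    @ObjectName = 'SalesDB.dbo.Orders';  /* must be fully qualified Database.Schema.Table */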


Back to top

sp_BlitzWho: What Queries are Running Now

This is like sp_who, except it goes into way, way, way more details.

It's designed for query tuners, so it includes things like memory grants, degrees of parallelism, and execution plans.
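
Running it with no parameters is usually all you need:

EXEC sp_BlitzWho;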

Back to top

sp_BlitzAnalysis: Query sp_BlitzFirst output tables

Retrieves data from the output tables where you are storing your sp_BlitzFirst output.

Parameters include:

  • @StartDate: When you want to start seeing data from. NULL sets @StartDate to 1 hour ago.
  • @EndDate: When you want to see data up to. NULL gets an hour of data from @StartDate.
  • @Databasename: Filter results by database name where possible. Default: NULL, which shows all.
  • @Servername: Filter results by server name. Default: @@SERVERNAME.
  • @OutputDatabaseName: The database where we can find your logged sp_BlitzFirst output table data.
  • @OutputSchemaName: The schema the sp_BlitzFirst output tables belong to.
  • @OutputTableNameBlitzFirst: Table name where you are storing sp_BlitzFirst's main output. Defaults to BlitzFirst; set to NULL to skip lookups against this table.
  • @OutputTableNameFileStats: Table name where you are storing sp_BlitzFirst @OutputTableNameFileStats output. Defaults to BlitzFirst_FileStats; set to NULL to skip lookups against this table.
  • @OutputTableNamePerfmonStats: Table name where you are storing sp_BlitzFirst @OutputTableNamePerfmonStats output. Defaults to BlitzFirst_PerfmonStats; set to NULL to skip lookups against this table.
  • @OutputTableNameWaitStats: Table name where you are storing sp_BlitzFirst @OutputTableNameWaitStats output. Defaults to BlitzFirst_WaitStats; set to NULL to skip lookups against this table.
  • @OutputTableNameBlitzCache: Table name where you are storing sp_BlitzFirst @OutputTableNameBlitzCache output. Defaults to BlitzCache; set to NULL to skip lookups against this table.
  • @OutputTableNameBlitzWho: Table name where you are storing sp_BlitzFirst @OutputTableNameBlitzWho output. Defaults to BlitzWho; set to NULL to skip lookups against this table.
  • @MaxBlitzFirstPriority: Max priority to include in the results from your BlitzFirst table. Default: 249.
  • @BlitzCacheSortorder: Controls the results returned from your BlitzCache table; you get a TOP 5 per sort order per CheckDate. Default: 'cpu'. Accepted values: 'all', 'cpu', 'reads', 'writes', 'duration', 'executions', 'memory grant', 'spills'.
  • @WaitStatsTop: Controls the top X waits per CheckDate from your wait stats table. Default: 10.
  • @ReadLatencyThreshold: Sets the threshold in ms to compare against io_stall_read_average_ms in your filestats table. Default: 100.
  • @WriteLatencyThreshold: Sets the threshold in ms to compare against io_stall_write_average_ms in your filestats table. Default: 100.
  • @BringThePain: If you are getting more than 4 hours of data from your BlitzCache table with @BlitzCacheSortorder set to 'all', you will need to set @BringThePain to 1.
  • @Maxdop: Controls the degree of parallelism that the queries within this proc can use if they want to. Default: 1.
  • @Debug: Shows the sp_BlitzAnalysis SQL commands in the Messages tab as they execute.

Example calls:

Get information for the last hour from all sp_BlitzFirst output tables

EXEC sp_BlitzAnalysis 
	@StartDate = NULL,
	@EndDate = NULL,
	@OutputDatabaseName = 'DBAtools',
	@OutputSchemaName = 'dbo',
	@OutputTableNameFileStats = N'BlitzFirst_FileStats',		
	@OutputTableNamePerfmonStats  = N'BlitzFirst_PerfmonStats',		
	@OutputTableNameWaitStats = N'BlitzFirst_WaitStats',		 
	@OutputTableNameBlitzCache = N'BlitzCache',		 
	@OutputTableNameBlitzWho = N'BlitzWho';

Exclude specific tables. For example, let's exclude PerfmonStats by setting its parameter to NULL: no lookup will occur against that table, and a skipped message will appear in the result set.

EXEC sp_BlitzAnalysis 
	@StartDate = NULL,
	@EndDate = NULL,
	@OutputDatabaseName = 'DBAtools',
	@OutputSchemaName = 'Blitz',
	@OutputTableNameFileStats = N'BlitzFirst_FileStats',		
	@OutputTableNamePerfmonStats  = NULL,		
	@OutputTableNameWaitStats = N'BlitzFirst_WaitStats',		 
	@OutputTableNameBlitzCache = N'BlitzCache',		 
	@OutputTableNameBlitzWho = N'BlitzWho';

Known issues: We are likely to be hitting some big tables here, and some of these queries will require scans of the clustered indexes because no nonclustered indexes cover the queries by default. Keep this in mind if you are planning to run this in a production environment!

I have noticed that the Perfmon query can ask for a big memory grant, so be mindful when including this table with large volumes of data:

SELECT 
      [ServerName]
    , [CheckDate]
    , [counter_name]
    , [object_name]
    , [instance_name]
    , [cntr_value]
FROM [dbo].[BlitzFirst_PerfmonStats_Actuals]
WHERE CheckDate BETWEEN @FromDate AND @ToDate
ORDER BY 
      [CheckDate] ASC
    , [counter_name] ASC;

Back to top

sp_BlitzBackups: How Much Data Could You Lose

Checks your backups and reports estimated RPO and RTO based on historical data in msdb, or a centralized location for [msdb].dbo.backupset.

Parameters include:

  • @HoursBack -- How many hours into backup history you want to go. Should be a negative number (we're going back in time, after all). But if you enter a positive number, we'll make it negative for you. You're welcome.
  • @MSDBName -- if you need to prefix dbo.backupset with an alternate database name.
  • @AGName -- If you have more than 1 AG on the server, and you don't know the listener name, specify the name of the AG you want to use the listener for, to push backup data. This may get used during analysis in a future release for filtering.
  • @RestoreSpeedFullMBps, @RestoreSpeedDiffMBps, @RestoreSpeedLogMBps -- if you know your restore speeds, you can input them here to better calculate your worst-case RPO times. Otherwise, we assume that your restore speed will be the same as your backup speed. That isn't likely true - your restore speed will likely be worse - but these numbers already scare the pants off people.
  • @PushBackupHistoryToListener -- Turn this to 1 to skip analysis and use sp_BlitzBackups to push backup data from msdb to a centralized location (more on the mechanics of this below)
  • @WriteBackupsToListenerName -- This is the name of the AG listener, and MUST have a linked server configured pointing to it. Yes, that means you need to create a linked server that points to the AG Listener, with the appropriate permissions to write data.
  • @WriteBackupsToDatabaseName -- This can't be 'msdb' if you're going to use the backup data pushing mechanism. We can't write to your actual msdb tables.
  • @WriteBackupsLastHours -- How many hours in the past you want to move data for. Should be a negative number (we're going back in time, after all). But if you enter a positive number, we'll make it negative for you. You're welcome.
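
The push example below covers the centralization mechanism. For plain analysis of local msdb history, a call can be as simple as this (the hours value is just an example):

EXEC sp_BlitzBackups @HoursBack = 168;  /* look at the last week of backup history */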

An example run of sp_BlitzBackups to push data looks like this:

EXEC sp_BlitzBackups @PushBackupHistoryToListener = 1, -- Turn it on!
                     @WriteBackupsToListenerName = 'AG_LISTENER_NAME', -- Name of AG Listener and Linked Server 
                     @WriteBackupsToDatabaseName = 'FAKE_MSDB_NAME',  -- Fake MSDB name you want to push to. Remember, can't be real MSDB.
                     @WriteBackupsLastHours = -24 -- Hours back in time you want to go

In an effort to not clog your servers up, we've taken some care in batching things as we move data. Inspired by Michael J. Swart's Take Care When Scripting Batches, we only move data in 10 minute intervals.

The reason behind that is, if you have 500 databases, and you're taking log backups every minute, you can have a lot of data to move. A 5000 row batch should move pretty quickly.

Back to top

sp_DatabaseRestore: Easier Multi-File Restores

If you use Ola Hallengren's backup scripts, DatabaseRestore.sql helps you rapidly restore a database to the most recent point in time.

Parameters include:

  • @Database - the database's name, like LogShipMe
  • @RestoreDatabaseName
  • @BackupPathFull - typically a UNC path like '\\FILESERVER\BACKUPS\SQL2016PROD1A\LogShipMe\FULL' that points to where the full backups are stored. Note that if the path doesn't exist, we don't create it, and the query might take 30+ seconds if you specify an invalid server name.
  • @BackupPathDiff, @BackupPathLog - as with the Full, this should be set to the exact path where the differentials and logs are stored. We don't append anything to these parameters.
  • @MoveFiles, @MoveDataDrive, @MoveLogDrive - if you want to restore to somewhere other than your default database locations.
  • @FileNamePrefix - Prefix to add to the names of all restored files. Useful when you need to restore different backups of the same database into the same directory.
  • @RunCheckDB - default 0. When set to 1, we run Ola Hallengren's DatabaseIntegrityCheck stored procedure on this database, and log the results to table. We use that stored proc's default parameters, nothing fancy.
  • @TestRestore - default 0. When set to 1, we delete the database after the restore completes. Used for just testing your restores. Especially useful in combination with @RunCheckDB = 1 because we'll delete the database after running checkdb, but know that we delete the database even if it fails checkdb tests.
  • @RestoreDiff - default 0. When set to 1, we restore the necessary full, differential, and log backups (instead of just full and log) to get to the most recent point in time.
  • @ContinueLogs - default 0. When set to 1, we don't restore a full or differential backup - we only restore the transaction log backups. Good for continuous log restores with tools like sp_AllNightLog.
  • @RunRecovery - default 0. When set to 1, we run RESTORE WITH RECOVERY, putting the database into writable mode, and no additional log backups can be restored.
  • @ExistingDBAction - if the database already exists when we try to restore it, 1 sets the database to single user mode, 2 kills the connections, and 3 kills the connections and then drops the database.
  • @Debug - default 0. When 1, we print out messages of what we're doing in the messages tab of SSMS.
  • @StopAt NVARCHAR(14) - pass in a date time to stop your restores at a time like '20170508201501'. This doesn't use the StopAt parameter for the restore command - it simply stops restoring logs that would have this date/time's contents in it. (For example, if you're taking backups every 15 minutes on the hour, and you pass in 9:05 AM as part of the restore time, the restores would stop at your last log backup that doesn't include 9:05AM's data - but it won't restore right up to 9:05 AM.)
  • @SkipBackupsAlreadyInMsdb - default 0. When set to 1, we check MSDB for the most recently restored backup from this log path, and skip all backup files prior to that. Useful if you're pulling backups from across a slow network and you don't want to wait to check the restore header of each backup.
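
A sketch of a full restore to the most recent point in time, using the parameters above (the paths and names are placeholders):

EXEC sp_DatabaseRestore
    @Database = 'LogShipMe',
    @BackupPathFull = '\\FILESERVER\BACKUPS\SQL2016PROD1A\LogShipMe\FULL',
    @BackupPathDiff = '\\FILESERVER\BACKUPS\SQL2016PROD1A\LogShipMe\DIFF',
    @BackupPathLog = '\\FILESERVER\BACKUPS\SQL2016PROD1A\LogShipMe\LOG',
    @RestoreDiff = 1,  /* use the most recent differential as well as the full and logs */
    @RunRecovery = 1;  /* bring the database online when the restores finish */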

For information about how this works, see Tara Kizer's white paper on Log Shipping 2.0 with Google Compute Engine.

Back to top

Parameters Common to Many of the Stored Procedures

  • @Help = 1 - returns a result set or prints messages explaining the stored procedure's input and output. Make sure to check the Messages tab in SSMS to read it.
  • @ExpertMode = 1 - turns on more details useful for digging deeper into results.
  • @OutputDatabaseName, @OutputSchemaName, @OutputTableName - pass all three of these in, and the stored proc's output will be written to a table. We'll create the table if it doesn't already exist. @OutputServerName will push the data to a linked server as long as you configure the linked server first and enable RPC OUT calls.

To check versions of any of the stored procedures, use their output parameters for Version and VersionDate like this:

DECLARE @VersionOutput VARCHAR(30), @VersionDateOutput DATETIME;
EXEC sp_Blitz 
    @Version = @VersionOutput OUTPUT, 
    @VersionDate = @VersionDateOutput OUTPUT,
    @VersionCheckMode = 1;
SELECT
    @VersionOutput AS Version, 
    @VersionDateOutput AS VersionDate;

Back to top

License

The SQL Server First Responder Kit uses the MIT License.

Back to top

sql-server-first-responder-kit's People

Contributors

adedba, andreasjordan, ant-green, blitzerik, brentozar, codykonior, dalehhirt, davidwiseman, digitalohm, douglane, emanuelemeazzo, erikdarling, erikdarlingdata, gdoddsy, jeffchulg, jesb, johnkness, ktaranov, lowlydba, misterzeus, montro1981, nickpapatonis, peschkaj, pierreletter, pwsqldba, reecegoding, richbenner, rwhoward, seankilleen, shawncrocker


sql-server-first-responder-kit's Issues

Typo in sp_BlitzCache.sql (v2.3 - 2014-06-07)

sp_BlitzCache broken links

First Name: Hendrik
Last Name: Meiffert
Your Email: [email protected]
Company: Leo Burnett
Phone:
Your Comments: Hey All,

thank you for everything you give to us newbies out there. I really enjoy watching your movies. Sometimes the fun is lowered because of bad microphone quality, but the information you give us is enormous. It would have taken me AGES to come to that point.

I am just trying sp_blitzcache for the first time and wanted to report a dead link :
http://brentozar.com/blitzcache/unparameterized-queries

It seemed to be moved for whatever reason, but sp_blitzcache still refers to it.

thank you,

kind regards,

Hendrik

Your Comments: this one is also dead :

http://brentozar.com/blitzcache/compile-timeout/
Newsletter subscription: Yes

sp_Blitz - check for simple databases that aren't so simple

First Name: Richard
Last Name: Douglas
Your Email: [email protected]
Company: Dell Software
Phone: no
Your Comments: Hi Guys,

Just wanted to add something to sp_blitz. Did a quick check of the code and I don't think you are looking for this currently.

William Durkin wrote a blog about a log reuse problem in simple recovery with 2012. To check if your instance is suffering from that problem there's a simple check:

SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
WHERE recovery_model_desc = 'Simple'
  AND log_reuse_wait_desc = 'LOG_BACKUP';

William's blog about it is here - http://williamdurkin.com/2014/06/simple-recovery-log_backup/

Regards,
Rich

Populating #sp_BlitzTraceEvents is too slow

I have surely done something silly in that query that's easy to fix. 60 second sample took 4 minutes to parse.

2014-09-03T17:55:19.717- Creating and starting trace.
Create trace dynamic sql
CREATE EVENT SESSION sp_BlitzTrace ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
ACTION(sqlserver.context_info)
WHERE ([sqlserver].[session_id]=(51))),
ADD EVENT sqlserver.object_created (
ACTION(sqlserver.context_info)
WHERE ([sqlserver].[session_id]=(51))),
ADD EVENT sqlserver.sql_batch_completed(
ACTION(sqlserver.context_info)
WHERE ([sqlserver].[session_id]=(51))),
ADD EVENT sqlserver.rpc_completed (
ACTION(sqlserver.context_info)
WHERE ([sqlserver].[session_id]=(51))),
ADD EVENT sqlserver.sql_statement_recompile (
ACTION(sqlserver.context_info)
WHERE ([sqlserver].[session_id]=(51)))
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 128 MB,
EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY = 1 SECONDS,
MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF)
2014-09-03T17:55:19.723- Waiting for the sample time to complete...
2014-09-03T17:56:19.727- Reporting, stopping, and deleting trace.
2014-09-03T17:56:19.727- Populating #sp_BlitzTraceEvents...
2014-09-03T18:00:23.510- Querying sql_batch_completed, rpc_completed...
2014-09-03T18:00:23.520- Querying parallelism and memory grant...
2014-09-03T18:00:23.523- Querying object_created ...
Stopping and deleting trace.
Resetting context and we're outta here.

Filter cache by database name

Created by Darko Martinovic ([email protected]) at 2014/09/05 12:12:55 +0000:

One instance and many databases, from many software companies. I'm interested in viewing the cache for only one database, so it would be good to add a new parameter, e.g. @filterByDatabaseName. I've made some modifications of my own, but I'd be glad to see how the author handles this.
Second, maybe it's not a bad idea to set the default for @output_schema_name to 'dbo'.
And finally - on line 1416, why not display the full table name?
RAISERROR('Writing results to table.', 0, 1) WITH NOWAIT; -- why not put the full table name, e.g. RAISERROR(N'Writing results to table: %s', 0, 1, @fulltableName) WITH NOWAIT

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6397628-filter-cache-by-database-name

Add a changelog for SQL Server Setup Guide

Created by George Stocker ([email protected]) at 2014/07/28 16:49:14 +0000:

Currently, the SQL Server Setup guide has a date it's been changed, but not a summary of changes since that last version.

The summary of changes is helpful for the following reasons:

1. Tells us if there's any out-of-date or incorrect information(!)
2. Tells us what's changed, so we don't need to reread the whole document or try to retain multiple versions.

Please add a changelog to the guide that details the changes between each version.

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6225318-add-a-changelog-for-sql-server-setup-guide

sp_AskBrent: parameterize the StartSampleTime

To avoid plan cache bloat. Below is from contribution form:

Full Name
Paolo Piponi
Your Title
Systems Architect
By checking this box, I agree to the Brent Ozar Unlimited Code Contributor Licensing Agreement.

I agree!
Contributing to
sp_AskBrent®
Subject
Better parameterization for Query Stats inserts
Contribution Details
Obviously, this code won't run as it is, but the changes should be clear. I've simply replaced the dynamic evaluation with parameters. We have a schedule for AskBrent, and this avoids the plan cache filling up unnecessarily.

/* Populate #QueryStats. SQL 2005 doesn't have query hash or query plan hash. */
IF @@Version LIKE 'Microsoft SQL Server 2005%'
SET @StringToExecute = N'INSERT INTO #QueryStats ([sql_handle], Pass, SampleTime, statement_start_offset, statement_end_offset, plan_generation_num, plan_handle, execution_count, total_worker_time, total_physical_reads, total_logical_writes, total_logical_reads, total_clr_time, total_elapsed_time, creation_time, query_hash, query_plan_hash, Points)
SELECT [sql_handle], 2 AS Pass, GETDATE(), statement_start_offset, statement_end_offset, plan_generation_num, plan_handle, execution_count, total_worker_time, total_physical_reads, total_logical_writes, total_logical_reads, total_clr_time, total_elapsed_time, creation_time, NULL AS query_hash, NULL AS query_plan_hash, 0
FROM sys.dm_exec_query_stats qs
WHERE qs.last_execution_time >= @StartSampleTime ';
ELSE
SET @StringToExecute = N'INSERT INTO #QueryStats ([sql_handle], Pass, SampleTime, statement_start_offset, statement_end_offset, plan_generation_num, plan_handle, execution_count, total_worker_time, total_physical_reads, total_logical_writes, total_logical_reads, total_clr_time, total_elapsed_time, creation_time, query_hash, query_plan_hash, Points)
SELECT [sql_handle], 2 AS Pass, GETDATE(), statement_start_offset, statement_end_offset, plan_generation_num, plan_handle, execution_count, total_worker_time, total_physical_reads, total_logical_writes, total_logical_reads, total_clr_time, total_elapsed_time, creation_time, query_hash, query_plan_hash, 0
FROM sys.dm_exec_query_stats qs
WHERE qs.last_execution_time >= @StartSampleTime ';
EXECUTE sys.sp_executesql
@Statement=@StringToExecute,
@params=N'@StartSampleTime datetime',
@StartSampleTime=@StartSampleTime;

Hello guys, Hope you're doing just fine. In respect to the little cute query in http://www.brentozar.com/blitz/agent-jobs-starting-simultan

Created by Paulo A. Nascimento ([email protected]) at 2014/08/15 15:46:48 +0000:

Hello guys,

Hope you're doing just fine.
In respect to the little cute query in http://www.brentozar.com/blitz/agent-jobs-starting-simultaneously/
I proposed adding "AND j.enabled = 1" to not show false-positives. Just an idea.
At the end the query should look something like this:

SELECT j.name, j.description, a.start_execution_date
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobactivity a ON j.job_id = a.job_id
WHERE a.start_execution_date > DATEADD(dd, -14, GETDATE()) AND j.enabled = 1
AND a.start_execution_date IN (SELECT start_execution_date
FROM msdb.dbo.sysjobactivity
WHERE start_execution_date > DATEADD(dd, -14, GETDATE())
GROUP BY start_execution_date HAVING COUNT(*) > 1)

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6300768-hello-guys-hope-you-re-doing-just-fine-in-respe

Update link for suggestions in comments at head of sp_blitz v35

Created by Anonymous ([email protected]) at 2014/08/01 14:37:24 +0000:

Comments in sp_blitz v35 indicate

... To contribute code and see your name in the change
log, email your improvements & checks to [email protected].

But if I email that address, I get told to go to

http://support.brentozar.com

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6242763-update-link-for-suggestions-in-comments-at-head-of

Two QDS waits to ignore in 2014

These are sleep loops, are showing up in some samples i'm trying to take on sql 2014 and messin' up my screenshots :)

        'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
        'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'

Create sp_BlitzBufferCache

Created by Steve Hood ([email protected]) at 2014/06/16 17:12:08 +0000:

Based on the following query, check to see what's in cache in the current database and compare that to the size of those indexes. This is a great way to find recent large index scans that are causing more IO on the server, and could help find the offending code in the proc cache or recent code in EE/Trace data.

Although this is just a current snapshot, some large scans happen frequently. If an index looks out of place here then you have the opportunity to look into it more and justify the use of your memory. A single run won't give you everything, but it will give you a start.

SELECT cached_MB
, ObjName = name
, index_id
, index_name
, Pct_Of_Cache = cast((cached_mb * 100) / cast(SUM(cached_mb) over () as DEC(20,4)) as DEC(5,2))
, Pct_InRow_Data_In_Cache = cast((100.0 * cached_MB) / (1.0 * Used_InRow_MB) as DEC(5,2))
, Used_MB
, Used_InRow_MB
, Used_LOB_MB
FROM (
SELECT count(1)/128 AS cached_MB
, obj.name
, i.index_id
, index_name = i.name
, i.Used_MB
, i.Used_InRow_MB
, i.Used_LOB_MB
FROM sys.dm_os_buffer_descriptors AS bd with (NOLOCK)
INNER JOIN
(
SELECT name = OBJECT_SCHEMA_NAME(object_id) + '.' + object_name(object_id)
, object_id
, index_id
, allocation_unit_id
FROM sys.allocation_units AS au with (NOLOCK)
INNER JOIN sys.partitions AS p with (NOLOCK)
ON au.container_id = p.hobt_id
AND (au.type = 1 OR au.type = 3)
UNION ALL
SELECT name = OBJECT_SCHEMA_NAME(object_id) + '.' + object_name(object_id)
, object_id
, index_id
, allocation_unit_id
FROM sys.allocation_units AS au with (NOLOCK)
INNER JOIN sys.partitions AS p with (NOLOCK)
ON au.container_id = p.partition_id
AND au.type = 2
) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
INNER JOIN (
SELECT Name = OBJECT_NAME(PS.Object_ID)
, PS.Object_ID
, PS.Index_ID
, Used_MB = SUM(PS.used_page_count) / 128
, Used_InRow_MB = SUM(PS.in_row_used_page_count) / 128
, Used_LOB_MB = SUM(PS.lob_used_page_count) / 128
, Reserved_MB = SUM(PS.reserved_page_count) / 128
, row_count = SUM(Row_Count)
FROM sys.dm_db_partition_stats PS
GROUP BY PS.OBJECT_ID
, PS.Index_ID
) i ON obj.object_id = i.object_id AND obj.index_id = i.index_id
WHERE database_id = db_id()
GROUP BY obj.name
, i.index_id
, i.name
, i.Used_MB
, i.Used_InRow_MB
, i.Used_LOB_MB
HAVING Count(*) > 128
) x
ORDER BY 1 DESC;

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6061048-create-sp-blitzbuffercache

sp_Blitz: case sensitive issues on web page query for partitions

Full Name
Michael James Bluett
Email
[email protected]
Your Title
Database Developer
By checking this box, I agree to the Brent Ozar Unlimited Code Contributor Licensing Agreement.

I agree!
Contributing to
sp_Blitz®
Subject
Fix for case sensitivity issues in the query on http://www.brentozar.com/blitz/partitioned-tables-with-non-aligned-indexes/
Contribution Details
(this is not a fix for sp_Blitz itself, but the query on a page that is linked to from the script)
There are case sensitivity issues in the query on the page http://www.brentozar.com/blitz/partitioned-tables-with-non-aligned-indexes/ (I have fixed DS and OBJECT_NAME which had case issues).

Here is the corrected query:
SELECT
ISNULL(db_name(s.database_id),db_name()) AS DBName
,OBJECT_SCHEMA_NAME(i.object_id,DB_ID()) AS SchemaName
,o.name AS [Object_Name]
,i.name AS Index_name
,i.Type_Desc AS Type_Desc
,ds.name AS DataSpaceName
,ds.type_desc AS DataSpaceTypeDesc
,s.user_seeks
,s.user_scans
,s.user_lookups
,s.user_updates
,s.last_user_seek
,s.last_user_update
FROM sys.objects AS o
JOIN sys.indexes AS i ON o.object_id = i.object_id
JOIN sys.data_spaces ds ON ds.data_space_id = i.data_space_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s ON i.object_id = s.object_id AND i.index_id = s.index_id AND s.database_id = DB_ID()
WHERE o.type = 'u'
AND i.type IN (1, 2)
AND o.object_id in
(
SELECT a.object_id from
(SELECT ob.object_id, ds.type_desc from sys.objects ob JOIN sys.indexes ind on ind.object_id = ob.object_id join sys.data_spaces ds on ds.data_space_id = ind.data_space_id
GROUP BY ob.object_id, ds.type_desc ) a group by a.object_id having COUNT (*) > 1
)
ORDER BY [Object_Name] DESC

sp_Blitz Could Detect MisGuided Plans

This is a database level check. It only works in SQL Server 2008+. (Plan guides apparently exist in 2005 but they were even suckier then.)

Low pri, this feature doesn't get used much, it just might be easy to add and could be a quick win in a rare situation.

If the following query returns > 0, you have a plan guide in the database that's hosed-- and it could be making queries silently fail.

SELECT 
    COUNT(*)
FROM sys.plan_guides
CROSS APPLY fn_validate_plan_guide(plan_guide_id);
GO

For the documentation. To get details on the error, a user can run:

SELECT 
    plan_guide_id, msgnum, severity, state, message, 
    name, create_date, is_disabled, query_text, scope_type_desc, scope_batch, parameters, hints
FROM sys.plan_guides
CROSS APPLY fn_validate_plan_guide(plan_guide_id);
GO

sp_BlitzCache could detect guided plans

I don't think this is super high priority. Or even high priority. But it looks like it'd be easy.

In an execution plan just look for the presence of "PlanGuideName"

The following repro works on SQL14 for StackOverflow. If you needed a repro for adventureworks2012 I could put something together.

DBCC FREEPROCCACHE
GO
use StackOverflow;
GO

IF OBJECT_ID('dbo.AcceptedAnswersByUser') is null
    exec ('create procedure dbo.AcceptedAnswersByUser as return 0')
GO
ALTER PROCEDURE dbo.AcceptedAnswersByUser
    @UserId INT
AS
SELECT 
    (CAST(Count(a.Id) AS float) / (SELECT Count(*) FROM Posts WHERE OwnerUserId = @UserId AND PostTypeId = 2) * 100) AS AcceptedPercentage
FROM dbo.Posts q
JOIN dbo.Posts a ON q.AcceptedAnswerId = a.Id
WHERE
    a.OwnerUserId = @UserId
  AND
    a.PostTypeId = 2
GO


EXEC sp_create_plan_guide 
    @name = N'AcceptedAnswersByUser-Optimize For Value', 
    @stmt = N'SELECT 
    (CAST(Count(a.Id) AS float) / (SELECT Count(*) FROM Posts WHERE OwnerUserId = @UserId AND PostTypeId = 2) * 100) AS AcceptedPercentage
FROM dbo.Posts q
JOIN dbo.Posts a ON q.AcceptedAnswerId = a.Id
WHERE
    a.OwnerUserId = @UserId
  AND
    a.PostTypeId = 2', 
    @type = N'OBJECT', 
    @module_or_batch = N'[dbo].[AcceptedAnswersByUser]', 
    @hints = N'OPTION (OPTIMIZE FOR (@UserId=557499))'
GO

exec dbo.AcceptedAnswersByUser 22656;
GO

EXEC sp_control_plan_guide 'Drop', 'AcceptedAnswersByUser-Optimize For Value';
GO

sp_BlitzCache not grouping by Query Hash

This repro is against the StackOverflow DB. Can be Repro'd against SQL14 in the lab.

use StackOverflow;
GO
DBCC FREEPROCCACHE;
GO
SELECT 
    Score,
    Count(*) AS CommentCount
FROM dbo.Comments
WHERE UserId = 557499
GROUP BY Score
ORDER BY Score DESC;
GO
SELECT 
    Score,
    Count(*) AS CommentCount
FROM dbo.Comments
WHERE UserId = 26837
GROUP BY Score
ORDER BY Score DESC;
GO
SELECT 
    Score,
    Count(*) AS CommentCount
FROM dbo.Comments
WHERE UserId = 22656
GROUP BY Score
ORDER BY Score DESC;
GO
exec sp_BlitzCache @results='Expert';

expected result: one line showing 1 query with 3 executions
Actual result: three lines with 1 execution each, all with Query Hash 0x4046BC7C48BEC501

Fixing this will be mildly complicated because there can be different plans-- so you'll have to just pick one and warn on that. In the example here one throws " Missing Indexes (1), Parallel" whereas the others don't.

They all warn "Multiple Plans" and they all have "# Plans" = 3 and "Distinct Plans"=1, with Executions=1.

The # of plans is right, but the Distinct Plans is not.

sp_AskBrent: Perfmon metrics not working on named instances

First Name: Chris
Last Name: Hurford
Your Email: [email protected]
Company:
Phone:
Your Comments: [dbo].[sp_AskBrent] was not returning perfMon information on my named instance until I updated the insert into #PerfmonCounters section

INSERT INTO #PerfmonCounters ([object_name],[counter_name],[instance_name]) VALUES ('SQLServer:Access Methods','Forwarded Records/sec', NULL)

with

DECLARE @instance SYSNAME

SET @instance = CASE WHEN @@SERVICENAME = 'MSSQLSERVER' THEN 'SQLServer'
ELSE 'MSSQL$' + @@SERVICENAME
END

INSERT INTO #PerfmonCounters ([object_name],[counter_name],[instance_name]) VALUES (@instance + ':Access Methods','Forwarded Records/sec', NULL)

Great tool though, has helped me out of more than one bind.

Thanks,
Chris

Slight logical error in sp_BlitzCache.sql (v2.3 - 2014-06-07)

Created by Michael Bluett ([email protected]) at 2014/07/04 13:35:02 +0000:

It looks like there's a logical error here (the duplicate lines):
IF @output_database_name IS NOT NULL
AND @output_schema_name IS NOT NULL
AND @output_schema_name IS NOT NULL
Should it be:
IF @output_database_name IS NOT NULL
AND @output_schema_name IS NOT NULL
AND @output_table_name IS NOT NULL

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6134458-slight-logical-error-in-sp-blitzcache-sql-v2-3

blitztrace - run in 60 second sample mode by default

by default have sp_blitztrace start the trace, wait 60 seconds, query results, stop the trace, delete the trace.

if users want to just start it or stop it, they have to use a special parameter combo. (I want to leave that in there because it's nice for demo purposes.)

Triggers reference the wrong column

Submitted by Andrew Notarian [email protected]

The XPM option does not work for me, SQL 2012 SP1.  Not sure if this is something that only works on 2014.

/*------------------------
EXEC sp_BlitzCache @sort_order='executions per minute'
------------------------*/
Setting up temporary tables for sp_BlitzCache
Determining SQL Server version.
Creating dynamic SQL based on SQL Server version.
Adding SQL to collect trigger stats.
Collecting execution plan information.
Msg 207, Level 16, State 1, Line 190
Invalid column name 'creation_time'.
Msg 207, Level 16, State 1, Line 190
Invalid column name 'creation_time'.
Msg 207, Level 16, State 1, Line 191
Invalid column name 'creation_time'.
Msg 207, Level 16, State 1, Line 301
Invalid column name 'execution_count'.
Msg 207, Level 16, State 1, Line 302
Invalid column name 'age_minutes'.
Msg 4104, Level 16, State 1, Line 302
The multi-part identifier "qs.creation_time" could not be bound.
Msg 4104, Level 16, State 1, Line 302
The multi-part identifier "qs.last_execution_time" could not be bound.
Msg 207, Level 16, State 1, Line 302
Invalid column name 'age_minutes'.
Msg 4104, Level 16, State 1, Line 302
The multi-part identifier "qs.creation_time" could not be bound.
Msg 4104, Level 16, State 1, Line 302
The multi-part identifier "qs.last_execution_time" could not be bound.
Msg 207, Level 16, State 1, Line 303
Invalid column name 'execution_count'.
Msg 207, Level 16, State 1, Line 303
Invalid column name 'age_minutes'.
Msg 207, Level 16, State 1, Line 303
Invalid column name 'age_minutes'.
Msg 4104, Level 16, State 1, Line 303
The multi-part identifier "qs.creation_time" could not be bound.
Msg 4104, Level 16, State 1, Line 303
The multi-part identifier "qs.last_execution_time" could not be bound.
Computing CPU, duration, read, and write metrics
Checking for query level SQL Server issues.
Scanning individual plan nodes for query issues.
Checking for plan compilation timeouts.
Checking for forced parameterization and cursors.
Populating Warnings column
Building query plan summary data.
Displaying analysis of plan cache.

sp_BlitzCache could detect queries using dirty reads (kind of) and warn if they conflict with iso levels

sp_BlitzCache could look for NOLOCK hints or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED in procedure text.

This isn't perfect because the nolock hints could be hidden in views, or the app could be setting the iso level in a statement (not a proc). But hey, it's something!

Extra points for detecting if RCSI / Snapshot are enabled in a user database and this might be undermining that setting.

Disregard Trivial plans with 'multiple plans' warning in sp_blitzcache

Created by George Stocker ([email protected]) at 2014/08/05 13:28:58 +0000:

Currently, sp_blitzcache triggers a 'multiple plans' warning for Trivial Plans in the cache (SELECT B FROM TABLE where A=1 and C=2 in my case).

Since these are trivial plans, they're not really useful to show in sp_blitzcache (they're not going to be cached anyway), and in multi-tenant situations they can literally fill up the results for sp_blitzcache.

I've tested a fix and I believe this works: if you change line 1289 in v2.3 from:

plan_warnings = CASE WHEN QueryPlan.value('count(//p:Warnings)', 'int') > 0 THEN 1 END,

To:

plan_warnings = CASE WHEN QueryPlan.value('count(//p:Warnings)', 'int') > 0 AND QueryPlan.exist('//p:StmtSimple[@StatementOptmLevel[.="TRIVIAL"]]') = 0 THEN 1 END,

That keeps those trivial statements from triggering a warning in sp_blitzcache.

Uservoice URL: http://brentozarultd.uservoice.com/forums/250742-general/suggestions/6258775-disregard-trivial-plans-with-multiple-plans-warn

sp_Blitz - some default configs may be missing

Hi Brent,

for some config values there are no records in #ConfigurationDefaults.

When running the script below on my SQL Server 2014 I get 6 configs that are not in #ConfigurationDefaults (blocked process threshold (s), common criteria compliance enabled, EKM provider enabled, backup compression default, filestream access level, backup checksum default).

For backup compression default there is another check.
Maybe all the others are new with SQL Server 2014?

select sc.*
from #ConfigurationDefaults cd
right join sys.configurations sc
on cd.name = sc.name
where cd.name is null;

best regards
Tobias Ortmann [email protected]

Have sp_Blitz warn if security bulletin MS14-044 applies

I love how sp_Blitz is warning about the corruption bug for 2012. Similarly would be nice to warn about this security bulletin.

There are two components that this fixes, and one is a T-SQL vulnerability. "A local attacker could exploit this vulnerability by creating a specially crafted T-SQL statement that causes the Microsoft SQL Server to stop responding."

"This security update is rated Important for supported editions of Microsoft SQL Server 2008 Service Pack 3, Microsoft SQL Server 2008 R2 Service Pack 2, and Microsoft SQL Server 2012 Service Pack 1; it is also rated Important for Microsoft SQL Server 2014 for x64-based Systems. "

Good news for one group: SQL Server 2012 users should just go to SQL 2012 SP2 (patched for the corruption bug), that isn't impacted.

https://technet.microsoft.com/en-us/library/security/MS14-044

sp_Blitz - may not be alerting on incorrect server name

From Rebecca Lewis, SQLfingers:

Sweet! Thanks a bunch, Brent. Enjoy the rest of your holiday. I will leave you be now... until tomorrow. :-)

On Mon, May 26, 2014 at 6:02 PM, Brent Ozar Unlimited [email protected] wrote:
And by the way, this is driving me crazy because I had a check for this! Now I gotta find out why it didn't alert you on that, heh. I'll get you a new build tomorrow that better alert you on it and might fix the DT stuff too. Thanks for sending me that output! It really helps.


Tiny glass keyboard
Typos flowing like rivers
As winter snow thaws

On May 26, 2014, at 6:56 PM, Brent Ozar Unlimited [email protected] wrote:

Yeah, there we go. It got renamed in the past but nobody told SQL Server. Here's how to fix it:

http://msdn.microsoft.com/en-us/library/ms143799.aspx


Tiny glass keyboard
Typos flowing like rivers
As winter snow thaws

On May 26, 2014, at 6:47 PM, "[email protected]" [email protected] wrote:

This is the log, yet the @@ServerName is RLG-FL-SQLMR\SCALE.

Message
Server name is 'RLG-NY-SQL-DR\SCALE'. This is an informational message only. No user action is required.

On Mon, May 26, 2014 at 5:43 PM, [email protected] [email protected] wrote:
SOB! Look at that ---

SELECT @@ServerName -- Gives me 'RLG-FL-SQLMR\SCALE'

But that is not what I got with SERVERPROPERTY. See here:

DECLARE @props TABLE (propertyname sysname PRIMARY KEY)

INSERT INTO @props(propertyname)

SELECT 'ComputerNamePhysicalNetBIOS'

UNION

SELECT 'InstanceName'

UNION

SELECT 'MachineName'

UNION

SELECT 'ServerName'

SELECT propertyname, SERVERPROPERTY(propertyname) FROM @props

My results:

ComputerNamePhysicalNetBIOS RLG-NY-SQL-DR
InstanceName SCALE
MachineName RLG-NY-SQL-DR
ServerName RLG-NY-SQL-DR\SCALE

On Mon, May 26, 2014 at 5:38 PM, [email protected] [email protected] wrote:
I pull the SQLServerName, MachineName, InstanceName and netbiosName from SERVERPROPERTY. They are the values below, respectively. I will check again real quick. But they have some crazy DT setup. I have to stop the DT job first, then bring up SQL.

RLG-NY-SQL-DR\SCALE,

RLG-NY-SQL-DR
SCALE

RLG-NY-SQL-DR

On Mon, May 26, 2014 at 5:27 PM, Brent Ozar Unlimited [email protected] wrote:
Ah, I bet the server name is wrong - check @@ServerName versus the server's actual name. That causes all kinds of problems.


Tiny glass keyboard
Typos flowing like rivers
As winter snow thaws

<SQLServerHealthCheck.xlsx>
