Posted on 25-Mar-2022
Configuring SQL Server for Best Performance
Andy Warren
Copyright (c) 2006-2020 Edgewood Solutions, LLC All rights reserved
Welcome
Today we’re talking about how to configure SQL Server for best performance.
There are lots of concrete things we can do to support performance, and we’ll talk about those.
Ultimately, though, it’s the workload that matters, and you may not have a lot of control over it.
Good News
Most of what we’ll be talking about is changeable if you didn’t get it right (or didn’t need it) when you did your install.
Hardware, Software, Configuration, Workload
As we get started it’s useful to think about impact vs ease of change vs risk
- On premises, getting new hardware can be time consuming, which leads most of us to over-spec because we don’t get a do-over
- Most of the configuration is easy to change, but it’s often harder to understand the impact
- Workload tends to be predictable, until it isn’t. We’re often one new query or one not-great plan away from a bad day
Measuring Performance
- Users care about duration, deadlocks, timeouts (and blocking), not magic numbers!
- I’d suggest a good goal is no more than 60% CPU during peak hours with no spike to 100% at all
- PLE (Page Life Expectancy)? Yes, it matters, and I like to see mine at 30 minutes or more, but don’t spend tons to get there (fast disks can save you)
- Disk latency < 1 ms
- Can you get to zero deadlocks, blocks, and timeouts during peak? Takes more than hardware!
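As a rough sketch, the PLE and per-file latency numbers above can be pulled from the DMVs; the thresholds are the speaker’s rules of thumb, not fixed targets:

```sql
-- Page Life Expectancy (in seconds) from the perf counter DMV
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Average read/write latency (ms) per database file since restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id;
```

Both counters are cumulative since the last restart, so trending them over time is more useful than a single reading.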
Hardware
- Many of you will be in VMs or the cloud, but it still comes down to picking the right number of CPUs, enough memory, and enough disk space + IO to meet your needs
- Relatively easy if you’re doing a 3 year upgrade, much harder when you’re guessing for a new application or adding a large customer
- In the cloud you often have to buy more CPUs to get more memory
- Sticking to Standard Edition will save you money, but limits you to 24 cores / 128GB
- Buy for today, but it’s always good to have a clear path to adding more
Hardware Recommendations
- If you insisted on a guess, I’d say 4 to 8 CPUs, 128GB, NVMe, SQL 2019 Standard
- If you have it, look at your CPU/batches per second trends over time
- If you can, try to get to a place where you can scale out, leaving scale up for the rainy days
- Don’t skimp on space. You have data + backups, plus you need room to pull down an old backup and restore it.
- Don’t skimp on IO. Fast storage will smooth out a lot of things.
- Be mindful of cost. Yes, it’s a tough balance to strike!
CPU, Memory, Storage, IO
- It’s tough to get the perfect ratio at the lowest cost. Or at all!
- In general I’d say you size based on CPU needs/estimates, then you build around that
- Remember that high CPU usage could be a sign of real demand, or could be just a symptom (memory paging, recompiles, parallelization, inefficient queries). Wait state analysis is the best way to dig into this, but it’s still not trivial to figure out.
- You need fast storage (local, SAN, or network; NVMe, SSD, etc.)
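A common starting point for the wait state analysis mentioned above is the cumulative wait stats DMV. This is a minimal sketch; the list of benign wait types filtered out here is illustrative, not exhaustive:

```sql
-- Top waits since the last restart, excluding some common benign waits
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'XE_TIMER_EVENT',
                        N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
                        N'BROKER_TO_FLUSH', N'BROKER_TASK_STOP',
                        N'DIRTY_PAGE_POLL')
ORDER BY wait_time_ms DESC;
```

Interpreting the top waits (IO vs CPU vs locking) is where the real work is; the query just tells you where to look.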
Noisy Neighbors & Speed Limits
- While you’re designing you have to think about the neighbors
- Avoid multi instance installs for Prod
- Understand what else runs on the VM host (preferably nothing else)
- Same for SAN storage, it’s fast, but it can still be affected by other users
- In the cloud you could be throttled depending on your choices
What Version & Edition
- Rarely a reason not to install the latest version
- Typically match the edition you were already using
- Enterprise has some nice features, but it’s expensive
- In most of my work online indexing and Availability Groups make it a must-have
- In general, don’t expect Enterprise to be a miracle fix
- Pick it because you know what feature(s) you need and why
Upgrade in Place?
One of “those” discussions
- Upgrade in place is easier, especially with HA/DR factored in
- Clean install on new hardware always makes me feel good, but perf is unlikely to change (due to the clean install, that is)
Install
- Use your internal standard for logical drives, it just makes things easier
- Typically I use a logical drive each for data, logs, and tempdb, even if they’re on one physical drive
- Generally use the recommendations for tempdb files (max of 8 unless your previous config needed more)
- Make sure instant file initialization is checked
- Only install features you need
- This is mostly about security, a little bit about space/memory
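The tempdb file setup above can also be adjusted after install. A sketch, with illustrative file names, paths, and sizes that you would match to your own environment:

```sql
-- Size the existing tempdb data file and set a fixed growth increment
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);

-- Add a second equally sized data file (repeat up to your file count)
-- The T:\TempDB path is an example only
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 8GB, FILEGROWTH = 1GB);
```

Keeping all tempdb data files the same size matters, since the engine allocates proportionally to free space.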
Patch & Scan
- Download and apply the latest cumulative update
- Reboot it
- Run the vulnerability scan now, before you sink a bunch of time into configuration and data moving
No, this doesn’t make it run faster, but don’t skip it
Post Install Config - Not Perf Related
- HA/DR, whatever you’re using
- Add to your monitoring/alerting system
- Email/operators
- Add whatever standard jobs you use
- If you’re replacing a server, bring over all the details
- Linked servers
- Operators
- sys.configurations overrides
- Default locations
- ...etc!
Exclude from AV Scans
Easy to miss: you don’t want your AV solution scanning your data/log files or your backup folder.
Max Memory
Mentioned earlier: I set max server memory to total server memory minus 10GB. You have to leave some for the OS; you can experiment if you want.
For those who see all that memory being used and think “memory leak” it’s easiest to think of SQL Server as a giant data cache. It’s much cheaper to get data from memory compared to any storage.
We don’t need memory = space used for effective performance
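A sketch of the setting above; the 120832 value (118GB expressed in MB, for a server with 128GB of RAM) is an example you’d adjust to your own hardware:

```sql
-- max server memory is an advanced option, so expose those first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 128GB server minus ~10GB for the OS = 118GB = 120832 MB
EXEC sp_configure 'max server memory (MB)', 120832;
RECONFIGURE;
```

The change takes effect immediately, no restart needed.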
Instant File Initialization
Lets us grow data files with no perf penalty. That’s a big deal.
Doesn’t apply to log files.
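On recent builds (roughly SQL Server 2016 SP1 and later) you can confirm the service account actually has the privilege:

```sql
-- 'Y' means instant file initialization is in effect for the engine service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```

If it shows 'N', grant the service account the "Perform volume maintenance tasks" right (or re-run setup with the checkbox checked) and restart the service.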
File Growth
I always enable autogrow, and I always try to grow databases manually
If I had to pick one number, I’d say 1GB increments, but I frequently do 4/8GB. On tiny databases you might go smaller. Not the turbo switch.
This is just to avoid churn/fragmentation/long waits while zeroing out new log file space (good to read/learn about VLFs)
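Both steps as a sketch; the database and logical file names are illustrative:

```sql
-- Set a fixed 1GB autogrowth increment (the safety net)
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 1GB);

-- Pre-grow the file manually ahead of demand (the plan)
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, SIZE = 50GB);
```

Manual growth during a quiet window means autogrow rarely fires during peak, which is the whole point.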
Trace Flags
Start with:
- TF 3226 to suppress successful-backup messages in the error log
- TF 4199 to enable query optimizer hotfixes
Add any you’re already using
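A sketch of enabling them; note that trace flags set this way are lost at restart, so they also belong in the startup parameters:

```sql
-- Enable globally (-1) for the running instance
DBCC TRACEON (3226, -1);
DBCC TRACEON (4199, -1);

-- To persist across restarts, add -T3226 and -T4199 as startup
-- parameters in SQL Server Configuration Manager.
```

`DBCC TRACESTATUS (-1)` will show what’s currently enabled.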
Cost Threshold for Parallelism
This has an old default (5); bump it up to 50 on new systems, or match what you were using before
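The change, as a sketch (50 is the starting point suggested above, not a universal answer):

```sql
-- Cost threshold is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```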
Max DOP
Set to the number of CPUs, up to 8 (or match what you were using before)
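As a sketch, capping at 8 per the guideline above (use your core count if it’s lower):

```sql
-- max degree of parallelism is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```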
Add Your Databases
If you’re on a new version, make sure you change the compatibility level
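Restored databases keep their old compatibility level, so check and raise it explicitly. The database name is illustrative; 150 corresponds to SQL Server 2019:

```sql
-- See where every database currently sits
SELECT name, compatibility_level FROM sys.databases;

-- Raise a restored database to the 2019 level
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 150;
```

Raising the level enables the newer cardinality estimator, which is exactly the kind of change worth watching for plan regressions.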
Monitor
As you move your workload to the new server, you definitely have to monitor (and maybe sweat a little too!)
This is where having a baseline is a huge help. Keep in mind that lower CPU time, lower disk latency, etc. doesn’t mean the user sees any difference.
It’s Slower!
With new hardware/version/config there is always the chance that something will perform differently.
- Finding it can be hard
- Assume it’s config vs workload
- Could be the compatibility level
Good monitoring software, comparing to baselines, wait state analysis, all are part of figuring out why
There is always a bottleneck
If you’re trying for better performance, you’re thinking “new car...better, faster, maybe even cheaper than my 10 year old car” and there is a lot of truth to that
But whether it matters depends on what the bottleneck is/was. I bring this up again because it can be a source of great frustration to spend time and money and the problem remains.
More Reading
https://sqlperformance.com/2015/03/io-subsystem/monitoring-read-write-latency
https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-max-degree-of-parallelism-server-configuration-option?view=sql-server-ver15
https://docs.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-version-15?view=sql-server-ver15
Q&A
Send in those questions! I’ll take as many as I can after the great sponsor demo from Quest.
Summary
- Sizing the hardware is still the biggest decision in terms of perf and cost
- There are some config settings that may matter in your environment, and they’re easy to change (though not always as easy to validate)
- Nothing wrong with using hardware to get you to a better place, but it has limits
- Squeezing the most out of a busy server requires sustained effort from both DBA and developers
Don’t Leave Yet!
Please do connect with me at www.linkedin.com/in/sqlandy