We’ve all had those moments.

It’s late in the day. You are about to head home a few minutes early so you can get your daughter to soccer practice on time. Your phone rings. The voice on the other end of the line mutters something about the server “being slow” and needs you to fix it so the company stops losing money with each passing minute. You hang up and see that the caller has already sent an email to everyone in the company, blaming the database server as the source of the problem and announcing that they have spoken to you and that you will have it fixed in the next five minutes.

You need to fix this, and fast. Your company, and your family, are counting on you to be there for them. You open up SQL Server Management Studio. You want to take action. Everyone is counting on you to get this done, and done right.

I am here to tell you about six mistakes that many novice DBAs make when it comes to fixing performance issues. These mistakes may get you through the next few minutes, but they are band-aid solutions at best, and sometimes they can make performance even worse.

1. Throwing Indexes at the Problem

Wait, don’t indexes make performance better? So wouldn’t more indexes mean more performance? No, not really. When it comes to indexes you can have too much of a good thing. And yet many novice DBAs reach for a new index whenever they come across a query that is not performing as well as expected. With tools such as the Database Engine Tuning Advisor at their fingertips, they go about tuning individual statements and accepting all the default recommendations. And they end up fixing one issue and causing ten others.

Adding extra indexes is not always the right thing to do. Be wary of the easy path, for it is often the wrong path. What you want to do is research. Find out more about the entire workload hitting that table. Find out more about the database design, and see if improvements can be made to existing structures before simply adding new indexes. Remember this phrase: when it comes to adding indexes, just because you can doesn’t mean you should.
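As part of that research, it helps to look at how the indexes you already have are actually used. A sketch along these lines (the table name dbo.Orders is hypothetical) queries the index usage DMV; an index with heavy writes and almost no reads is a candidate for removal, not company:

```sql
-- How are the existing indexes on this table being used?
-- Note: dbo.Orders is a hypothetical table name; substitute your own.
SELECT  i.name AS index_name,
        us.user_seeks,
        us.user_scans,
        us.user_lookups,
        us.user_updates   -- high updates with few reads = an index that costs more than it helps
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON  us.[object_id] = i.[object_id]
       AND us.index_id    = i.index_id
       AND us.database_id = DB_ID()
WHERE   i.[object_id] = OBJECT_ID(N'dbo.Orders');
```

Keep in mind these counters reset when the instance restarts, so judge them against how long the server has been up.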

2. Using Index Hints Everywhere

After creating so many additional indexes, some novice DBAs will then be confused as to why some indexes are not being used when expected. That is when they discover index hints, and they go about forcing the optimizer to choose a specific index. Or maybe they find out about plan guides and try those instead. The method works once, with great success, and before long it becomes their de facto standard to use index hints everywhere. In short, they go through a phase where they feel they are smarter than the optimizer. They will even proudly show you the plan guides that prove it. Apparently query algebra is a hobby for them, right?

If you think you are smarter than the optimizer I am here to tell you that you are not. Let the optimizer do the job it was designed to do and try to understand more about why it made the choice that it did. You think it should have used an index, or a different plan? Then you need to understand more about how and why the optimizer made the choice that it did.
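For the record, this is the pattern in question (the index name here is hypothetical). The hint itself is valid T-SQL; the problem is that it freezes today’s plan choice into the code forever:

```sql
-- The pattern to avoid: forcing a specific index by name.
-- IX_Orders_OrderDate and dbo.Orders are hypothetical names.
SELECT  OrderID, OrderDate
FROM    dbo.Orders WITH (INDEX (IX_Orders_OrderDate))
WHERE   OrderDate >= '2023-01-01';
```

If the hinted plan really is better, first ask why the optimizer disagreed: stale statistics, parameter sniffing, or a missing covering column are far more common causes than an optimizer bug, and each has a cleaner fix than a hint that will outlive the data distribution it was written for.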

3. Updating Statistics Every Hour

We’ve all been there. A query suddenly takes longer to run and the users are upset. In an effort to get things running well again you update the statistics for the table, and suddenly everything is better. A few hours later performance is bad again, and you update the stats to make it go away. You sense a pattern, and before you can say “Bad Idea Jeans” you have a SQL Agent job configured to update the stats every hour in order to keep performance running smoothly.

What could be wrong with such an approach? You have jumped ahead to a solution without understanding the root cause for the need to have frequent updates of statistics. Is the table poorly designed? Is the application deciding to do massive deletes and inserts because they once heard “updates are too costly”? You need to find the root cause before deciding upon a solution, and often that root cause is due to bad parameter sniffing.
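Before scheduling blanket hourly updates, it is worth checking how stale the statistics actually are and how many rows have changed since the last update. A sketch using the stats-properties DMF (dbo.Orders is again a hypothetical table name):

```sql
-- When were the statistics on this table last updated,
-- and how many modifications have happened since?
SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.[rows],
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE   s.[object_id] = OBJECT_ID(N'dbo.Orders');
```

If modification counters are low but performance still swings after each stats update, that points toward parameter sniffing rather than stale statistics, and the fix lives in the query or procedure, not in a SQL Agent job.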

4. Running Profiler in Production

This is never a good idea, and for a seasoned DBA it is usually a last resort. For the novice DBA it is often a frequently used tool, mostly because they haven’t yet seen Profiler bring down a production server. Isn’t it ironic that, in an effort to improve database performance, they end up using a tool that can make performance worse, or even cause an outage?

How can it cause an outage? Did you know that Profiler uses the default temp space for the O/S, which is often the C: drive, the same drive where your O/S lives? Ever see what happens when that drive fills up? You can also store the results of your trace in a table, and if you let the trace run long enough to fill a disk, you will bring all other database activity to a stop. Believe me, it happens, and when it does you don’t want to be sitting in a meeting the next day explaining that you were responsible for the production server being unusable. They hired you to keep things up and available, not to cause outages because you didn’t understand the effects of the tools you were using.
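A lighter-weight alternative is a server-side Extended Events session that captures only what you need and writes to a drive of your choosing instead of the system drive. A minimal sketch, assuming the file path and session name are yours to pick:

```sql
-- Capture statements running longer than 5 seconds to a file target.
-- The session name and file path below are hypothetical; choose a drive
-- with room to spare, not the O/S drive.
CREATE EVENT SESSION [SlowQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 5000000)    -- duration is measured in microseconds
ADD TARGET package0.event_file
    (SET filename = N'E:\XEvents\SlowQueries.xel');

ALTER EVENT SESSION [SlowQueries] ON SERVER STATE = START;
```

Unlike a client-side Profiler trace, the file target buffers asynchronously and you control exactly where the output lands.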

5. Flushing the Procedure Cache

When faced with symptoms of memory pressure and slow query performance, many novice DBAs will look for ways to reduce that pressure as quickly as possible in an effort to restore performance. For example, when faced with a low procedure cache hit ratio, a novice may assume that many of the plans are now invalid and need to be flushed. So they reach for the DBCC FREEPROCCACHE statement to eject all plans from cache. Again, they are jumping ahead to a solution without understanding the real problem.

Similar to the issue with updating statistics frequently, the idea of putting a job in place to flush your plan cache without understanding what is causing your plan cache to become bloated in the first place is not the best idea. Take the time to find out why you are having issues with your plan cache before deciding that flushing it every hour is a viable solution. The end result may be you need to change the method in which calls are being made to your server.
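One common cause of a bloated plan cache is a flood of single-use ad hoc plans. Before flushing anything, a query along these lines measures how big that problem is; if the numbers are large, the ‘optimize for ad hoc workloads’ server option addresses the cause rather than the symptom:

```sql
-- How much of the plan cache is single-use ad hoc plans?
SELECT  COUNT(*) AS plan_count,
        SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM    sys.dm_exec_cached_plans
WHERE   objtype   = 'Adhoc'
  AND   usecounts = 1;

-- If size_mb is a large share of your cache, consider:
-- EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- then RECONFIGURE
```

That setting stores only a small stub for a plan the first time a batch runs, caching the full plan only on reuse, which is a far more targeted fix than an hourly flush.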

6. Not Thinking About Tomorrow

Think of a ship, a plane, or a rocket. Each can only go so far, and so fast. Each has what is called structural integrity, and each has performance thresholds beyond which that structural integrity falls apart. Guess what else has performance thresholds that, when crossed, cause everything to fall apart? Your database.

You need to know what your thresholds are. How many simultaneous connections and inserts/updates/deletes can you sustain before performance starts to degrade? Just how fast can your ship travel before it breaks apart? If you don’t know this number, then you don’t know when the wall is approaching, and you could find yourself running into it at full steam and knocking yourself out.

Think of it this way: you are given a problem to solve, say a query is running slow. You make adjustments and it runs faster. But you never bother to check to see how well it can scale. A month later, the user workload has doubled, and the query is slow again. You dive back in and make some adjustments. Now people are wondering “why can’t they just make things work correctly”? A month later things are bad again and you need to make adjustments but you have no adjustments left to make. You have hit your threshold. There is nowhere else to go. But your end users need more.

Wouldn’t it have been good to have seen this coming sooner? Of course it would. Proper testing and benchmarking is a forgotten art these days. Pick up a brush and learn to paint; it will pay off for you.
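One simple way to start painting is to snapshot cumulative wait statistics on a schedule, so you can see the trend long before you hit the wall. A minimal sketch, where the baseline table name is hypothetical:

```sql
-- A baseline table for periodic wait-stats snapshots
-- (dbo.WaitStatsBaseline is a hypothetical name).
CREATE TABLE dbo.WaitStatsBaseline
(
    capture_time  datetime2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type     nvarchar(60) NOT NULL,
    wait_time_ms  bigint       NOT NULL,
    waiting_tasks bigint       NOT NULL
);

-- Run this on a schedule (e.g. a daily SQL Agent job) to capture the trend.
INSERT INTO dbo.WaitStatsBaseline (wait_type, wait_time_ms, waiting_tasks)
SELECT  wait_type, wait_time_ms, waiting_tasks_count
FROM    sys.dm_os_wait_stats
WHERE   wait_time_ms > 0;
```

Comparing snapshots over time shows you which waits are growing as the workload grows, which is exactly the early warning this section is asking for.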


