SQL Server Indexing Book Giveaway – Part 7

More of the same with the next Expert Performance Indexing for SQL Server book giveaway.  So far, I’ve sent out six copies of the book based on comments on the past six posts (1, 2, 3, 4, 5, and 6).  I’ve made a small change to the title of the series (part instead of week), mainly because getting a kid off to college and summer vacation have been interfering with the rate of posts.  There will still be thirteen posts, though, continuing until all of the books are given away.

To recap from previous posts, Expert Performance Indexing for SQL Server is a book I co-wrote with Grant Fritchey (Blog | @gfritchey) last summer, and it’s a valuable resource for anyone building indexes on SQL Server.  As part of finishing the book, Apress sent me a number of copies to share with people.  I figured the best way to share them is to give them away to people who comment on this blog.

So here’s a topic for leaving a comment this week…

The Question

For part seven, the question will focus on chapter nine, which covers index maintenance.  The topic for discussion in the comments this time is…

Do you think that defragmenting indexes is worthwhile?  Why or why not?  What evidence do you have to support your position?
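
(For anyone gathering evidence: fragmentation can be measured with the sys.dm_db_index_physical_stats DMV. Here’s a minimal sketch; the SAMPLED scan mode and the 1,000-page filter are just example choices, not recommendations.)

    -- Report fragmentation for every index in the current database.
    -- SAMPLED reads a subset of pages; use 'DETAILED' for exact numbers.
    SELECT OBJECT_NAME(ips.object_id)       AS table_name,
           i.name                           AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
    WHERE ips.page_count > 1000   -- small indexes rarely matter
    ORDER BY ips.avg_fragmentation_in_percent DESC;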

If you have a story or opinion to share, leave a comment below. After about a week (once I return from vacation), I’ll select one of the comments and send that author a copy of the book.

4 thoughts on “SQL Server Indexing Book Giveaway – Part 7”

  1. Index defragmentation, as opposed to rebuilding, can, I think, be massively underestimated these days. With online rebuilds possible for a few versions now, the temptation is just to rebuild everything on a rotating basis. But if you have truly massive indexes where only part of them actually needs defragmenting each time, then you may be wasting a vast amount of horsepower.

    For instance, we have created multi-layered re-indexing software (forget good ol’ maintenance plans), built using SQL Agent and PowerShell. It starts by using the read-only AGs to feed in the fragmentation stats. This data is then picked up by a second layer that sifts it (based on some tuneable parameters) to pick out the most urgent indexes and attach parameters to pass on to the next (autonomous) layer – such as the priority, whether to defrag or rebuild, and the target time to complete (calculated from past history). The next layer is the execution layer, which, for each database server, pulls the new “re-indexer workload” off the queue from the previous layer and actually actions it. The defragments are given a lock timeout, and once the target time is reached they are also terminated. Finally, a data-gathering and analysis layer pulls the application execution stats together and compares them against the input-layer stats, which then fine-tunes some of the parameters.

    Anyway, to cut a long description short: after many months of monitoring, tweaking and generally optimising the system, we’ve found it is often better to run a short index defrag (i.e. one that gets killed after ~1–2 minutes) and do it 20–50 times over than to do a full rebuild (which can only be done once). And yes, our big tables and indexes are partitioned, and we do have hundreds of billions of rows in some of the tables, operating 24/7.

    Of course, your first port of call should be Ola Hallengren’s scripts before building a custom re-indexing system. But to summarise: when you get really big, at least consider the option of lots of repeated short index defrags instead of one massive lump of rebuild – a sketch of the idea follows.
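
    A minimal T-SQL sketch of the time-boxed defrag pattern described above (the table and index names are hypothetical; the queueing and killing is done by the SQL Agent/PowerShell layers, which aren’t shown):

        -- REORGANIZE works page by page and is safe to kill: pages already
        -- compacted stay defragmented, so short repeated runs make progress.
        SET LOCK_TIMEOUT 300;  -- milliseconds; give up quickly on blocked pages

        ALTER INDEX IX_BigTable_Key ON dbo.BigTable
            REORGANIZE WITH (LOB_COMPACTION = ON);

        -- The execution layer terminates this session once the target time
        -- (~1-2 minutes) is reached, then re-queues it to run again.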

  2. I generally look at defragmenting indexes as kind of a “set it and forget it” thing: you put it in your maintenance plan and every week it rebuilds your indexes. Of course, there are drawbacks to this approach, as by default all indexes get rebuilt whether they’re fragmented or not. There’s also a tricky balance with fill factor (sketched below): a lower fill factor leaves free space on each page, which cuts down on page splits and fragmentation (so faster inserts), but you end up with slower reads, as there are a lot more pages that need to be read. As usual, Brent Ozar has some of the best commentary on the topic: https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
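
    A minimal sketch of the fill-factor trade-off mentioned above (the 90 percent figure and the object names are examples only, not recommendations):

        -- Leave 10% free space on each leaf page: fewer page splits on
        -- insert, at the cost of roughly 10% more pages to read.
        ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
            REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);
        -- ONLINE = ON requires Enterprise edition; drop it otherwise.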

  3. Defragmenting indexes is extremely worthwhile. Without defragmenting them often, my queries would slow to a crawl, meaning my users would scream at me about the database being slow. Since I do a lot of reads and writes to my databases, I set up a task that rebuilds my indexes weekly (sketched below); I have a large maintenance window, so I can do this, and my users stay happy, which keeps me happy.
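
    A sketch of that kind of weekly task, using the commonly cited thresholds from the SQL Server documentation (reorganize between 5 and 30 percent fragmentation, rebuild above 30 percent); the table and index names are hypothetical:

        DECLARE @frag float;

        -- 'LIMITED' is the cheapest scan mode and still reports fragmentation.
        SELECT @frag = ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
            ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE i.name = N'IX_Orders_CustomerID';

        IF @frag > 30.0
            ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;     -- heavy
        ELSE IF @frag > 5.0
            ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;  -- light
        -- Below 5 percent, the documentation suggests leaving it alone.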

  4. I always start my index recommendations with the MS Index Tuning Wizard. It gives me an estimate of how much faster my query can run, basically an improvement percentage.
