
Thread: [Q] High Transaction load solution?

  1. #1
    Brian Rhee Guest

    [Q] High Transaction load solution?

    Hey guys,

    I originally wrote a post here asking for some info on setting up a cluster. Upon further analysis of the problem with our system, I noted that at particular times we have tremendous amounts of UPDATE, INSERT, DELETE, etc. transactions hitting our database.

    I originally thought SQL clustering could solve this problem, but the time and upkeep required to maintain such a configuration might not be feasible, and more importantly it may not even fix the problem.

    Next week I plan on doing some more specific performance monitoring on the database during normal business activity, but my initial suspicion is that there is a tremendous amount of I/O processing due to the high transaction load, which is slowing down the application.

    I was wondering what you have done to alleviate such problems? One solution I have come up with is to create a Master/Slave SQL Server design where the Slave handles most of the database transactions, and then the Master DB is updated during a low-load period of the day. How does this sound? Any other ideas would be greatly appreciated...

    Thanx

  2. #2
    Kurt Guest

    [Q] High Transaction load solution? (reply)

    Hmm. Sounds like optimization time. For starters, run the Profiler and allow it to gather information about the queries that are being sent to the server, then save the trace file. Next, run the Index Tuning Wizard and feed it the trace file you created with the Profiler. The Index Tuning Wizard is a pretty good tool for finding missing indexes that could help your performance. Keep in mind that the Wizard is nice, but it can only do so much. You'll need to know exactly what demands are being placed on the SQL Server (queries, locks, memory and CPU limitations, etc.).
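    If you'd rather script the capture than babysit the Profiler GUI, a server-side trace does the same job. A minimal sketch for SQL Server 2000 (the output path and max file size are just examples):

    ```sql
    -- Create a server-side trace that writes to a file (path is an example)
    DECLARE @TraceID int
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\workload', 50  -- 50 MB max

    -- Capture completed batches with their SQL text, duration, reads and writes
    DECLARE @on bit
    SET @on = 1
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on   -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @TraceID, 12, 13, @on   -- Duration
    EXEC sp_trace_setevent @TraceID, 12, 16, @on   -- Reads
    EXEC sp_trace_setevent @TraceID, 12, 17, @on   -- Writes

    -- Start the trace; later, stop it with status 0 and close it with status 2
    EXEC sp_trace_setstatus @TraceID, 1
    ```

    The resulting .trc file is exactly what the Index Tuning Wizard wants as input.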

    If you can isolate the tables that are involved in the insert/update action, it isn't a bad idea to move these tables off to another drive altogether; you do this by creating FileGroups. You could have "hot spots" developing due to the type of processing that SQL is being asked to perform. While MS recommends that you don't create FileGroups unless you absolutely have to, they were intended to provide relief for the issues you may be facing. I would agree with MS in shying away from FileGroups, only because they add another layer of complexity to the schema and require additional maintenance.

    I would first look at your indexes. Doing lots of inserts into a table with a clustered index means that the table has to be physically reordered with each insert, for instance (that depends upon the fill factor, of course). Queries themselves can cause gridlock, so look at them too. You could even have locking going on that is either consuming lots of memory and/or blocking other connections.

    Optimizing is definitely an art form. As "smart" as SQL Server is in its "auto-tuning" capability, it can't work miracles. Bad queries get written, unplanned traffic happens (that's usually a good thing), and hardware itself becomes obsolete.
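    To illustrate the FileGroup and fill-factor ideas together, something like this moves a hot table onto its own drive and leaves room on each page for inserts (the database, table, index and file names here are all hypothetical):

    ```sql
    -- Add a filegroup backed by a file on a separate physical drive
    ALTER DATABASE Sales ADD FILEGROUP HotTables
    ALTER DATABASE Sales
    ADD FILE (NAME = 'Sales_Hot', FILENAME = 'E:\data\Sales_Hot.ndf', SIZE = 500MB)
    TO FILEGROUP HotTables

    -- Rebuild the clustered index on the new filegroup; a lower fill factor
    -- leaves free space on each page, which cuts down on page splits
    -- when the table takes heavy inserts
    CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON Orders (OrderDate)
    WITH FILLFACTOR = 70, DROP_EXISTING
    ON HotTables
    ```

    Moving the clustered index moves the table's data with it, which is what relieves the hot spot.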

    I would recommend looking at the sections in BOL about "tuning" and "optimization". My key bit of advice is that you MUST know what is being sent to the server and be able to identify the areas that are causing slowdowns.
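    For the locking side of it, the quickest way to see who is blocking whom is the standard system procs (the SPID passed to sp_lock is hypothetical; pull a real one from the sp_who2 output):

    ```sql
    -- Who is connected and who is blocked (a nonzero BlkBy column means blocked)
    EXEC sp_who2

    -- What locks a particular session is holding
    EXEC sp_lock 52
    ```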

    Hope this helps,

    Kurt


    ------------
    Brian Rhee at 11/10/01 12:34:05 PM
