
Thread: Performance

  1. #1
    JT Guest

    Performance

    We have SQL Server running on a dual-processor Pentium 500 MHz server. Our database is hit by about 300 users, and 200 of those users are doing constant searches through a client table of about 250,000 records, which in turn is linked to a history table containing over 5,000,000 records. That is only the tip of the iceberg: we have many triggers, procedures, updates, and so on running in the background. The database has over 500 tables.

    Keep in mind, these searches can involve all kinds of fields: phone number, company name, fax number, first name, last name, status, wildcard searches, etc. So as you can imagine, the database is being hit with all kinds of funky requests to find records. I will be the first to admit that our developers (the vendor) are not the best code writers, and we have a tough time getting them to optimize something they do not even understand themselves.
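
    To give a concrete idea of the shape of these requests, here is a made-up example (table and column names are invented for illustration, not our real schema). A search with a leading wildcard cannot seek on an index, so SQL Server scans the whole table; only a prefix search can use an index seek:

        -- Hypothetical example of a typical user search. The leading '%'
        -- defeats any index on the column, forcing a scan of the full
        -- 250,000-row client table.
        SELECT ClientID, CompanyName, Phone
        FROM Clients
        WHERE CompanyName LIKE '%acme%'
           OR Phone LIKE '%5551212%'

        -- A prefix search on an indexed column, by contrast, can use an
        -- index seek:
        SELECT ClientID, CompanyName, Phone
        FROM Clients
        WHERE CompanyName LIKE 'acme%'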

    As I speak, our processor utilization is maxing out between 95 and 100 percent. I've done a lot of performance tuning, and all of the problems lie in the searching. We've built, tested, rebuilt, and re-tested each and every index. I even used the Profiler to filter what I could. Performance has improved, but our database is growing at a rate of 10 megs a day (already close to 3 gigs, not that huge). I think I've optimized my indexes as best I can, considering all the fields and combinations users can search on.

    For a database that requires all of these different search criteria, what would be a more optimal server? We are looking to purchase something ASAP. I could really use help from someone in a similar situation. It seems odd, to my mind, that a company of 300 people would need to rely on a quad server (four-processor capability).

    Thanks. JT

  2. #2
    Jeff Putnam Guest

    Performance (reply)

    For an application like this, you might consider full-text indexing. Even doing it in SQL should give you some performance increase over what you're doing now. Alternatively (not good normalization practice, but sometimes application performance takes precedence), you could have a separate table for each searched field, with a pointer back to the associated record in the main table, then slap a clustered index on each of the subtables. It might be a pain to maintain, but it could work. I'd also consider making a separate database exclusively for searching, and updating it nightly via log shipping or snapshot replication. Frankly, any of the above sounds better than what you've got.
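
    As a rough sketch of the separate-table idea (all names invented, and it assumes the main table has an integer key), each searchable field gets its own narrow table clustered on the search value; the same pattern repeats for company name, fax, and so on:

        -- One narrow subtable per searched field; names are hypothetical.
        CREATE TABLE ClientPhoneSearch (
            Phone    varchar(20) NOT NULL,
            ClientID int         NOT NULL  -- pointer back to the main table
        )
        GO

        -- Cluster on the searched value so lookups become index seeks or
        -- short range scans instead of scans of the wide main table.
        CREATE CLUSTERED INDEX CIX_ClientPhoneSearch
            ON ClientPhoneSearch (Phone, ClientID)
        GO

        -- A search then touches only the narrow subtable before joining back:
        SELECT c.*
        FROM ClientPhoneSearch s
        JOIN Clients c ON c.ClientID = s.ClientID
        WHERE s.Phone LIKE '555%'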

    Your vendor wouldn't by chance have a lot of VB programmers who are 'certified' in SQL, by the way? I've seen similar things before where someone feels qualified to architect an enterprise database simply because they know how to build apps.


  3. #3
    JT Guest

    Performance (reply)

    Thanks, Jeff. I have actually considered the idea of separate tables so that I could utilize clustered indexes. The only problem, of course, is that the vendor would have to re-code some of their software to point to the secondary tables. These vendors freak me out. And no, their VB programmers are not SQL certified. It was one of those unfortunate situations where they over-promised and under-delivered.
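
    If we went that route, I imagine the write side could be kept in sync with a trigger on the main table, so only the vendor's search screens would need re-pointing. A rough sketch, again with invented names:

        -- Hypothetical trigger keeping a search subtable in step with the
        -- main table; handles multi-row inserts, updates, and deletes.
        CREATE TRIGGER trg_Clients_SyncPhoneSearch
        ON Clients
        FOR INSERT, UPDATE, DELETE
        AS
        BEGIN
            -- Drop rows for clients that were deleted or updated...
            DELETE s
            FROM ClientPhoneSearch s
            JOIN deleted d ON d.ClientID = s.ClientID

            -- ...then re-add current values for inserted/updated clients.
            INSERT INTO ClientPhoneSearch (Phone, ClientID)
            SELECT i.Phone, i.ClientID
            FROM inserted i
            WHERE i.Phone IS NOT NULL
        END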

    I will continue performance tuning and looking for ways to optimize. Nevertheless, management wants to put a chunk of money into a bigger server, and only ONCE, until we replace the software in a few years. Can you recommend something based on the kinds of things I've described?

    JT


  4. #4
    Jeff Putnam Guest

    Performance (reply)

    You might also read this article by a columnist on this site:

    http://www.swynk.com/friends/achigrik/FileOptimTips.asp

