
Thread: custom mysql logging

  1. #1
    Join Date
    Jan 2006
    Posts
    4

    custom mysql logging

    Hey there,

    So in the interest of cleaning up and optimizing a large perl/mysql system I inherited from a prior developer, I need to find a way to log unindexed queries. Slow queries are easy to identify, but unindexed ones are not (unless, of course, they're slow enough to make the slow log). A full query log is out of the question due to the high traffic on the system... and the slow log isn't enough, as I suspect some quick-but-not-quick-enough unindexed queries, run several times a minute (or more), are seriously degrading the system overall.

    Can anyone tell me if there is a way to customize which queries get logged, or at least set the system to throw a flag to the error log when it hits an unindexed query? I know with more recent MySQL installations, you can force the query optimizer not to run unindexed queries, however this is not an option for the production site.

    Anyway, any insight into MySQL monitoring or custom logging aimed at query optimization would be appreciated - I really need to get this system running more smoothly before we try to scale it out to double or quadruple the user base, and hand-sifting through every possible query just isn't an option...

    thanks for your attention
    barry

  2. #2
    Join Date
    Jan 2006
    Posts
    6
    I don't know a MySQL way to log un-indexed queries, but I was in the same position as you not too long ago. I needed to find out which queries were eating up all the database's cycles and causing locking waits.

    I took a script-centric approach. I created a perl object whose constructor records a Time::HiRes::time and whose destructor writes the elapsed time and $ENV{'SCRIPT_NAME'} to a table in the database. Then I just added a my $object = new timer; at the top of any script I was suspicious of - something along the lines of the sketch below.
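
    In case it helps anyone else, here's a rough sketch of what I mean. The package name (ScriptTimer), the script_timings table, and the connection details are all just placeholders for whatever you actually use:

        package ScriptTimer;
        use strict;
        use warnings;
        use DBI;
        use Time::HiRes ();

        # Constructor: remember when the script started.
        sub new {
            my ($class) = @_;
            return bless { start => Time::HiRes::time() }, $class;
        }

        # Destructor: fires when the object falls out of scope at the end of
        # the script; records elapsed wall-clock time plus the script name.
        sub DESTROY {
            my ($self) = @_;
            my $elapsed = Time::HiRes::time() - $self->{start};
            my $script  = $ENV{SCRIPT_NAME} || $0;

            # Assumes a pre-existing table, e.g.:
            #   CREATE TABLE script_timings (script VARCHAR(255), seconds FLOAT,
            #                                logged_at TIMESTAMP)
            # A fresh connection keeps the sketch self-contained; in practice
            # you'd probably reuse the script's existing handle.
            my $dbh = DBI->connect('DBI:mysql:database=stats;host=localhost',
                                   'statsuser', 'statspass', { PrintError => 1 })
                or return;
            $dbh->do('INSERT INTO script_timings (script, seconds) VALUES (?, ?)',
                     undef, $script, $elapsed);
            $dbh->disconnect;
        }

        1;

    Then the only change to a suspect script is a my $timer = ScriptTimer->new; at the top - as long as $timer lives until the end of the script, the row gets written when it is destroyed.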

    Eventually I had enough numbers to determine which scripts were taking the longest and being accessed most often, and then I went through the problem scripts and optimized all the queries I could find. This accounted for pretty much every circumstance: real-world usage frequency, quick-but-not-quick-enough queries in loops, etc.

  3. #3
    Join Date
    Jan 2006
    Posts
    4
    similar thoughts...

    I had actually thought about wrapping the DBI in an object that calculated execution times per query (by overriding 'execute', 'selectall_xxx' and 'selectrow_xxx' calls) and recorded them in a log along with the query text, to show the frequency and delay of the queries in the system. I quickly realized, though, that that might be too ambitious to try to integrate into a live and active system. Instead I've just been continually dropping my 'slow query' threshold and using the built-in mysql log for some quick analysis - which also forced me to rework a couple of seemingly well-built and well-indexed queries that just *should not* take over 3 seconds (or 2 seconds for that matter) to execute...
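
    For the record, if anyone does want to go the wrapper route without touching every call site, DBI's documented subclassing hook (the RootClass connect attribute) makes it less invasive than a hand-rolled wrapper object. A rough sketch - the Timed::DBI package name, the 0.25 second threshold, and logging via warn to the error log are all just placeholders. Timing execute() also covers the selectall_xxx / selectrow_xxx convenience methods, since they prepare and execute internally:

        package Timed::DBI;
        use strict;
        use warnings;
        use DBI;
        use Time::HiRes ();
        our @ISA = ('DBI');

        package Timed::DBI::db;
        our @ISA = ('DBI::db');

        # Stash the SQL on the statement handle so execute() can log it.
        sub prepare {
            my ($dbh, $sql, @rest) = @_;
            my $sth = $dbh->SUPER::prepare($sql, @rest) or return;
            $sth->{private_timed_sql} = $sql;
            return $sth;
        }

        package Timed::DBI::st;
        our @ISA = ('DBI::st');

        sub execute {
            my ($sth, @binds) = @_;
            my $start   = Time::HiRes::time();
            my $rv      = $sth->SUPER::execute(@binds);
            my $elapsed = Time::HiRes::time() - $start;

            # Arbitrary "quick but not quick enough" threshold; log to the
            # web server's error log rather than back into the busy database.
            if ($elapsed > 0.25) {
                warn sprintf "[query-timer] %.3fs %s\n",
                    $elapsed, $sth->{private_timed_sql} || '(unknown)';
            }
            return $rv;
        }

        1;

    The only change in existing code is then the connect call: my $dbh = DBI->connect($dsn, $user, $pass, { RootClass => 'Timed::DBI' });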

    An overall page timer / query counter is still a good idea though - I'm pretty sure there are still several scripts running over 10 queries to produce a page that really requires no more than 3, which will slow you down even if each query only takes a tenth of a second, especially if they get backed up behind someone else's insert or update.

    As a side note, to definitively answer my own question: if you're using mysql 5 or greater, I believe there is a config option you can use (--log-queries-not-using-indexes) to force any unindexed queries into your slow log. Too bad I'm still using 4.1.
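
    For anyone landing on this thread later, the relevant my.cnf lines on a 5.0-era server look roughly like this (file path and threshold are just examples):

        [mysqld]
        # old-style 5.0 option names; the slow-log variables were renamed in later versions
        log-slow-queries   = /var/log/mysql/mysql-slow.log
        long_query_time    = 2
        log-queries-not-using-indexes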

    thanks for the thoughts though, and I'm always glad to see that not everyone using MySQL has succumbed to the PHP juggernaut. Long live Perl!

    thanx:barry
