Hi

I read an article in Linux Journal, and of particular interest to me was the following statement:

Any lingering doubts I had about Collins' geek side vanished when he explained how he managed to compress a 750 meg database. The database contained 21 pages of financial history for more than 5,000 firms. He needed it to fit on an 80 meg drive. He began by trying the Huffman compression algorithm. It didn't squeeze tight enough. It would take the air out of the text portion of the database just fine, but numbers were more difficult and didn't compress as well. One day when he was driving home, it came to him. He converted the numbers to base-256 and voila, it worked and the entire database now fit easily on the 80-meg drive.
Now I wonder: is this an often-used practice? What are the effects on performance?
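
If I understand the idea correctly, it amounts to something like the sketch below (Python, my own rough interpretation rather than Collins' actual code): instead of storing each number as decimal text at one byte per digit, you pack the value into raw base-256 bytes, which is denser even before any Huffman coding is applied.

```python
def to_base256(n: int) -> bytes:
    """Encode a non-negative integer as big-endian base-256 (raw bytes)."""
    if n == 0:
        return b"\x00"
    out = bytearray()
    while n:
        out.append(n & 0xFF)   # take the lowest byte
        n >>= 8                # shift to the next base-256 digit
    return bytes(reversed(out))

# Example: the decimal string "1234567890" takes 10 bytes as ASCII text,
# but only 4 bytes in base-256.
text = "1234567890"
packed = to_base256(int(text))
print(len(text), len(packed))   # -> 10 4
```

So the numeric columns would shrink to well under half their textual size before the entropy coder even runs, which seems to be where the big win came from.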

I would like to hear what you think about this seemingly interesting solution.

Cheers