It wasn't compressed because the LZW compression algorithm couldn't compress it; the algorithm's output was 166 characters longer than the input. Most likely there wasn't enough repetition of long chains in the source text (repetition of short two- or three-character strings is not enough). The LZW algorithm I presented is much better suited to texts that reuse the same words often.
Also, the LZW algorithm uses a numeric encoding system which is inherently less space-efficient than a true dynamic bit-width code (i.e., 9-, 10-, 11-bit codes, and so on). My encoding system is not nearly as efficient as it could be. Perhaps in the future, when I don't have as much other work to do, I will convert the encoding system to a more appropriate one.
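For illustration, here is a minimal sketch of what fixed bit-width packing looks like. This is Python rather than the library's Lua, and `pack_codes` is a hypothetical helper, not part of LibCompress; it just shows why 9-bit codes beat storing each code in two full bytes:

```python
def pack_codes(codes, width):
    # Pack integer codes into a byte string using a fixed bit
    # width (e.g. 9 bits per code instead of two full bytes).
    bits = 0   # number of bits currently buffered in `acc`
    acc = 0    # bit accumulator
    out = bytearray()
    for code in codes:
        acc = (acc << width) | code
        bits += width
        while bits >= 8:
            bits -= 8
            out.append((acc >> bits) & 0xFF)
    if bits:
        # Flush the final partial byte, padded with zero bits.
        out.append((acc << (8 - bits)) & 0xFF)
    return bytes(out)
```

Eight 9-bit codes pack into 9 bytes instead of the 16 a plain two-bytes-per-code scheme would use; a true dynamic-width coder would additionally grow `width` as the dictionary fills.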
Perhaps we should add an option to the compress/decompress functions to guarantee comms safety, changing all occurrences of \000 and \001 to \001\255 and \001\001 respectively in the compress code, while converting in the reverse direction in decompress. This way, the user of the library doesn't need to worry about this implementation detail.
For those who don't know, I have recently added LibCompress to the SVN.
LibCompress is an LZW-based compression algorithm. It is fairly fast. (Less than half a second to compress 20KB of data on my crappy computer at work, about a tenth of a second to decompress the result.)
I have tried to make the algorithm as efficient as I possibly could. Compression is lossless, and the ratio depends entirely on how much repetition the uncompressed data contains. As long as the input data does not contain "\000" characters, the output will not have any either, making it suitable for use with AceComm-3.0 and other such libraries.
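For anyone unfamiliar with LZW, here is a textbook round-trip sketch in Python. This is not the LibCompress implementation, just the standard algorithm, and it shows why the ratio depends on repetition: savings only appear once repeated substrings have been added to the dictionary.

```python
def lzw_compress(text: str) -> list[int]:
    # Dictionary starts with all 256 single characters; longer
    # strings are added as they are first seen.
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in text:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = c
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> str:
    dictionary = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for code in codes[1:]:
        # The only code possibly missing from the dictionary is
        # the one about to be defined: w + w[0].
        entry = dictionary.get(code, w + w[0])
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)
```

On a repetitive input like "TOBEORNOTTOBEORTOBEORNOT" this emits fewer codes than there are characters; on text with no long repeats it can emit more, which is exactly the failure mode discussed above.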