Seems like you did something wrong. FishEye is instant as far as I have seen, but even if not, the change should have been pushed to files.wowace.com, and it hasn't.
74189 galmok Sat, 17 May 2008, 13:41:34 -0700
LibCompress: Added Huffman compression. Changed LZW format slightly. LibCompress:Compress() will compress using all methods and return the best result. Use LibCompress:Decompress() to uncompress data.
/trunk/LibCompress/LibCompress.lua 74189 (+473 -14)
/trunk/LibCompress/LibCompress.toc 74189 (+1 -1)
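To illustrate the idea in that commit message: a "compress using all methods" wrapper can run every codec, keep the smallest output, and tag it with a one-byte method prefix so decompression knows which codec to reverse. The sketch below only demonstrates that technique; the prefix values and the trivial run-length codec standing in for Huffman/LZW are invented for the example and are not LibCompress's actual wire format.

-- Sketch: try every codec, keep the smallest result, tag it with a
-- one-byte method prefix. Prefixes and codecs are invented for illustration.
local function rleCompress(s)
  local out, i = {}, 1
  while i <= #s do
    local c, run = s:sub(i, i), 1
    while run < 255 and s:sub(i + run, i + run) == c do
      run = run + 1
    end
    out[#out + 1] = string.char(run) .. c -- (count, character) pairs
    i = i + run
  end
  return table.concat(out)
end

local function rleDecompress(s)
  local out = {}
  for i = 1, #s, 2 do
    out[#out + 1] = string.rep(s:sub(i + 1, i + 1), s:byte(i))
  end
  return table.concat(out)
end

local codecs = {
  ["\001"] = { compress = function(s) return s end, decompress = function(s) return s end },
  ["\002"] = { compress = rleCompress, decompress = rleDecompress },
}

local function CompressBest(data)
  local best = "\001" .. data -- worst case: store the data unmodified
  for prefix, codec in pairs(codecs) do
    local candidate = prefix .. codec.compress(data)
    if #candidate < #best then best = candidate end
  end
  return best
end

local function DecompressAny(data)
  local codec = codecs[data:sub(1, 1)]
  if not codec then return nil, "unknown compression method" end
  return codec.decompress(data:sub(2))
end

-- Round trip:
local input = string.rep("x", 500) .. "tail"
assert(DecompressAny(CompressBest(input)) == input)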
Has anyone been using this lately? Is it working well? I'm working on a new add-on which needs to send a fairly large amount of data between users (about 20-30K). This takes a long time to send using AceComm-3.0 due to the message size limit. I've brought in LibCompress, which is able to compress the data to an average of about 65% of the original size. The data is actually a table which has been sent through AceSerialize-3.0. The compression appears to be working, but decompression yields an error on line 522 (in the DecompressHuffman function):
[2008/05/26 20:56:22-118-x4]: memory allocation error: block too big:
<in C code>: ?
QuestAgent-73719\Libs\LibCompress\LibCompress.lua:522: in function <...e\AddOns\QuestAgent\Libs\LibCompress\LibCompress.lua:428>
(tail call): ?:
QuestAgent-73719\QuestAgent.lua:60: in function `?'
CallbackHandler-1.0\CallbackHandler-1.0.lua:146: in function <...edia-3.0\CallbackHandler-1.0\CallbackHandler-1.0.lua:146>
<string>:"safecall Dispatcher[4]":4: in function <[string "safecall Dispatcher[4]"]:4>
<in C code>: ?
<string>:"safecall Dispatcher[4]":13: in function `?'
CallbackHandler-1.0\CallbackHandler-1.0.lua:91: in function `Fire'
AceComm-3.0\AceComm-3.0.lua:180: in function `aceCommReassemblerFunc'
AceComm-3.0\AceComm-3.0.lua:243: in function <...terface\AddOns\Omen\Libs\AceComm-3.0\AceComm-3.0.lua:235>
I tried switching to just CompressLZW instead of Compress, and everything works just fine (but the LZW algorithm compresses it far less than Huffman did). Is anyone still working on this? I can provide more code for testing if needed.
Also: did the Huffman algorithm here ever get updated to escape \000 characters?
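For context, the pipeline described in the post above looks roughly like the following sketch, assuming an Ace3 addon with AceComm-3.0 and AceSerializer-3.0 embedded. The MyAddOn object, the "QAgent" prefix, and the handler wiring are invented for illustration; the Serialize/Deserialize, Compress/Decompress, and SendCommMessage calls follow the libraries' documented APIs.

-- Rough sketch of the serialize -> compress -> send pipeline described above.
local MyAddOn = LibStub("AceAddon-3.0"):NewAddon("MyAddOn", "AceComm-3.0", "AceSerializer-3.0")
local LibCompress = LibStub("LibCompress")

MyAddOn:RegisterComm("QAgent") -- illustrative prefix; routes to OnCommReceived by default

function MyAddOn:SendData(data, target)
  local serialized = self:Serialize(data)                       -- AceSerializer-3.0 mixin
  local compressed = LibCompress:Compress(serialized)
  self:SendCommMessage("QAgent", compressed, "WHISPER", target) -- AceComm-3.0 mixin
end

function MyAddOn:OnCommReceived(prefix, message, distribution, sender)
  local serialized = LibCompress:Decompress(message)            -- the step that errors at line 522
  local ok, data = self:Deserialize(serialized)
  if ok then
    -- use data here
  end
end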
Is anyone still working on this? I can provide more code for testing if needed.
I haven't worked on this since my last SVN commit on it. Sometime in the future I may update the LZW code to use a more space-efficient encoding system.
Also: did the Huffman algorithm here ever get updated to escape \000 characters?
I have no idea, but judging from the SVN log, I don't think so.
Will you be able to send me the data you tried to compress? Send it to galmok@gmail.com if possible. But looking at the error, the problem may not be easy to solve (it's not a Lua error, but a C error).
Line 522 is:
table_insert(uncompressed, symbol)
Basically that means the table uncompressed has gotten too large.
How large may tables be in WoW?
Is the limit based on entries (number of keys) or total data in the table?
If the limit is a certain number of entries, then I can solve it fairly easily, but it will cost a bit of performance in the decompression.
Galmok, I haven't quite gotten to emailing you yet, but I was planning on sending you the data I was compressing. It sounds like you've reproduced the issue.
I'm not sure if there is a limit to the number of keys in the table, but I would think it would be extremely large if there is. Could you perhaps be seeing a stack overflow?
I haven't debugged it yet, but according to your report, the low-level (C) error occurs when trying to add a new entry to the table. The only two things that should be able to cause this error are running out of memory, or WoW's Lua tables having a low limit on the number of keys. My Huffman codec was created, debugged, and stress-tested using an external Lua interpreter, and only the functionality was tested in WoW.
What was the size of the string you compressed (and then tried to decompress)?
/run a=LibStub:GetLibrary("LibCompress")
/run for i=10000,30000,1000 do ChatFrame1:AddMessage(i); r=string.rep("a",i); c=a:CompressHuffman(r); d=a:Decompress(c); end
No errors.
Tried up to a string length of 100000 and still no problem. I can't reproduce the problem. Side note: this uses approximately 3 MB of memory to compress and decompress (the memory is freed again).
I'll repro the issue and send you the string that caused it. However, with the high memory churn and processor usage, I might not use this library at all for this add-on.
I would appreciate the string that causes a WoW C error. Even if you can't use the library for your purpose, I would like to know if there is a bug/flaw in the implementation.
I was thinking about making the compression run in smaller steps and issuing a callback when the data has been compressed. But the problem is that the player may log off before the data has been compressed, and the addon that uses LibCompress would have to store the uncompressed data somewhere in between.
Most of the Huffman compressor and decompressor can be broken into smaller parts (as small as wanted, though it will get much slower).
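A hypothetical sketch of that incremental approach: do a bounded amount of work per OnUpdate frame and fire a callback when the whole input has been consumed. The compressChunk function here is a stand-in for one unit of real codec work; none of these names are LibCompress API.

-- Hypothetical: compress a large string in bounded steps, one per frame,
-- so the client doesn't freeze; call onDone with the result when finished.
local function compressChunk(s)
  return s -- stand-in for one unit of real codec work
end

local function CompressAsync(data, chunkSize, onDone)
  local pieces, pos = {}, 1
  local frame = CreateFrame("Frame")
  frame:SetScript("OnUpdate", function(self)
    local chunk = data:sub(pos, pos + chunkSize - 1)
    pieces[#pieces + 1] = compressChunk(chunk)
    pos = pos + chunkSize
    if pos > #data then
      self:SetScript("OnUpdate", nil) -- stop; all input consumed
      -- caveat from above: if the player logs out mid-run, the caller must
      -- have kept the uncompressed data somewhere
      onDone(table.concat(pieces))
    end
  end)
end

-- Usage: CompressAsync(bigString, 4096, function(result) --[[ send it ]] end)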
Could you please provide me with the lines necessary to load AceSerialize-3.0 and convert the table to the string? Google finds only this thread mentioning AceSerialize...
You can get AceSerializer-3.0 by going to files.wowace.com and downloading the Ace3 package. Bring the library into your add-on by putting it in a Libs folder and adding Libs\AceSerializer-3.0\AceSerializer-3.0.xml to your toc (or to embeds.xml). From there, add the mixins to your add-on object like so:
LibStub("AceSerializer-3.0"):Embed(MyAddOnObject)
After that, it's easy to serialize and deserialize an object:
local serializedData = self:Serialize(someData)
local success, someData = self:Deserialize(serializedData)
Note that your add-on doesn't need to use Ace3 to embed this library as shown.
With AceSerializer-3.0 loaded, I ran:
/run a=LibStub:GetLibrary("LibCompress")
/run b=LibStub:GetLibrary("AceSerializer-3.0")
/run c=b:Serialize(Galmok_Save)
/run d=a:CompressHuffman(c)
/run e=a:Decompress(d)
No errors, and c is equal to e.
Your problem is not directly due to LibCompress. It just triggered a general problem. You could simply have run out of memory, or even have experienced a memory error.
Well, the issue is only triggered by DecompressHuffman. DecompressLZW does not cause it. I can't imagine what sort of "general problem" I could be causing -- my code is only doing exactly what you've done, using the same data. If the issue exists in AceSerialize or LibCompress, that's out of my hands to deal with. I suppose there could be something funky going on even further up the call stack, like in CallbackHandler or AceComm-3.0, which are the other libraries involved here. You probably need to replicate my entire call stack to repro the issue. Well, I don't want to mess with it -- I just won't use LibCompress. I appreciate you looking into it!
After rereading my post, I realized something: AceComm is probably the issue (though not directly). I've been forgetting to deal with / ask about the \000 issue. I bet sending the result of CompressHuffman through the SendAddonMessage API is causing the problem. If so, this should be dealt with in LibCompress -- I am certainly not going to jump through a bunch of extra hoops to further encode the data so it is safe to transmit.
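For reference, the kind of escaping being asked about could look like the sketch below: \000 (which the addon channel cannot carry) is mapped to a two-byte escape sequence, and the escape byte itself is escaped as well. The choice of \255 as the escape byte is arbitrary here and is not what any LibCompress release actually does.

-- Minimal sketch of escaping NUL bytes for SendAddonMessage-safe transport.
-- \255 is an arbitrary escape byte: \000 -> \255\001, \255 -> \255\002.
local function escapeNull(s)
  return (s:gsub("[%z\255]", { ["\000"] = "\255\001", ["\255"] = "\255\002" }))
end

local function unescapeNull(s)
  return (s:gsub("\255([\001\002])", { ["\001"] = "\000", ["\002"] = "\255" }))
end

-- Round trip:
assert(unescapeNull(escapeNull("a\000b\255c")) == "a\000b\255c")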