08-08-13 - Oodle Static LZP for MMO network compression

Follow-up to my post 05-20-13 - Thoughts on Data Compression for MMOs :

So I've tried a few things, and Oodle is now shipping with a static dictionary LZP compressor.

OodleStaticLZP uses a static dictionary and hash table which are const and shared by all network channels; the size is set by the user. There is an adaptive per-channel arithmetic coder so that the match length and literal statistics can adapt to the channel a bit (this was a big win vs. using any kind of static model).
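
To make the shared-vs-per-channel split concrete, here's a minimal sketch of the state layout. The names and the model array sizes are mine, made up for illustration; this is not the actual Oodle API :

#include <cstdint>
#include <vector>

// built once at startup, read-only, shared by every channel on the server
struct LZPStatic
{
    std::vector<uint8_t> dict;       // e.g. 8-32 MB of representative traffic
    std::vector<int32_t> hashTable;  // context hash -> position in dict, -1 = empty
};

// allocated per connection; this is the only per-channel cost
// (the real OodleStaticLZP per-channel state is 182k; these sizes are
// purely illustrative)
struct LZPChannelState
{
    uint16_t matchFlag[1 << 12]; // adaptive P(match) by coding context
    uint16_t matchLen[256];      // adaptive match length model
    uint16_t literal[256];       // adaptive order-0 literal model
};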

What I found from working with MMO developers is that per-channel memory use is one of the most important issues. They want to run lots of connections on the same server, which means the limit for per-channel memory use is something like 512k. Even a zlib encoder at 400k is considered rather large. OodleStaticLZP has 182k of per-channel state.

On the server, a large static dictionary is no problem. They're running 16GB servers with 10,000 connections, they really don't care if the static dictionary is 64MB. However, that same static dictionary also has to be on the client, so the limit on how big a static dictionary you can use really comes from the client side. I suspect that something in the 8MB - 16MB range is reasonable. (and of course you can compress the static dictionary; it's only something like 2-4 MB that you have to distribute and load).

(BTW you don't necessarily need an adaptive compression state for every open channel. If some channels tend to go idle, you could drop their state. When the channel starts up again, grab a fresh state (and send a reset message to the client so it wipes its adaptive state). You could do something like have a few thousand compression states which you cycle in an LRU for an unbounded number of open channels. Of course the problem with that is if you actually get more simultaneous active connections than you have states, you'll be recycling states all the time, which is just the standard cache over-commit problem that causes nasty thrashing, so YMMV etc.)
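
A minimal sketch of that recycling scheme, assuming a fixed pool of coder states; all the names here are hypothetical :

#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

struct CoderState { /* adaptive models; ~182k each in the real thing */ };

class StatePool
{
    // front = most recently used; pairs of (channel id, its coder state)
    std::list<std::pair<uint32_t, CoderState>> lru;
    std::unordered_map<uint32_t, std::list<std::pair<uint32_t, CoderState>>::iterator> byChannel;
    size_t capacity;

public:
    explicit StatePool(size_t cap) : capacity(cap) {}

    // get the channel's coder state; *fresh is set if the state is new,
    // in which case the caller must send a reset message so the client
    // wipes its adaptive state too
    CoderState * acquire(uint32_t channel, bool * fresh)
    {
        auto it = byChannel.find(channel);
        if (it != byChannel.end())
        {
            *fresh = false;
            lru.splice(lru.begin(), lru, it->second); // touch
            return &lru.front().second;
        }
        if (lru.size() >= capacity)
        {
            byChannel.erase(lru.back().first); // idle victim loses its state
            lru.pop_back();
        }
        lru.emplace_front(channel, CoderState{});
        byChannel[channel] = lru.begin();
        *fresh = true;
        return &lru.front().second;
    }
};

Note the reset message goes to the channel that just got a fresh state, not to the evicted one; the evicted channel was idle, and gets its own reset whenever it next wakes up.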

This is all only for downstream traffic (server->client). The amount of upstream traffic is much less, and the packets are tiny, so it's not worth the memory cost of keeping any compression state per channel for the upstream direction. For upstream traffic, I suggest a static huffman encoder with a few different static huffman models : first send a byte selecting the huffman table (or flagging the packet as uncompressed), then send the packet huffman coded.
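
The framing could look something like this; huff_encode is a placeholder, not a real API (a real version would code the packet with the t'th static huffman table) :

#include <cstdint>
#include <vector>

// placeholder : a real version would huffman-code src with the t'th
// static table and return false if it can't beat the raw size
static bool huff_encode(int t, const uint8_t * src, int srcLen,
                        std::vector<uint8_t> * out)
{
    (void)t; (void)src; (void)srcLen; (void)out;
    return false;
}

// try each static table, keep the smallest result, prepend a selector
// byte (0xFF = sent uncompressed)
std::vector<uint8_t> frame_upstream(const uint8_t * src, int srcLen,
                                    int numTables)
{
    std::vector<uint8_t> best(src, src + srcLen);
    uint8_t selector = 0xFF;
    for (int t = 0; t < numTables; t++)
    {
        std::vector<uint8_t> trial;
        if (huff_encode(t, src, srcLen, &trial) && trial.size() < best.size())
        {
            best.swap(trial);
            selector = (uint8_t)t;
        }
    }
    std::vector<uint8_t> packet;
    packet.reserve(best.size() + 1);
    packet.push_back(selector);
    packet.insert(packet.end(), best.begin(), best.end());
    return packet;
}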

I also tried a static dictionary / adaptive statistics LZA (LZA = LZ77+arith), along with a few other options (a static O3 context coder, some static fixed-length string matchers, static longer-word huffman coders), but all of those were much worse than static LZA or LZP. And the static dictionary LZA was itself much worse than the LZP.

I could conjecture that LZP does better on static dictionaries than LZA because LZP works better when the dictionary mismatches the data. The reason is that LZP doesn't even try to code a match unless it finds a context, so it's not wasting code space on matches where they aren't useful. LZ77 is always trying to code matches, and will often find 3-byte matches just by coincidence, but the offsets will be large so they're barely a win vs literals. (With an offset costing roughly log2(offset) bits, a 3-byte match at offset 1M spends ~20 bits on the offset alone, about what the three literals would have cost anyway.)
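
Here's a toy version of the LZP encode loop to show what I mean : no match flag is ever coded at positions where the context hash misses, so a mismatched dictionary just degrades to cheap literals. The coder calls are left as comments since the adaptive arithmetic coder isn't shown; the hash width and context order are arbitrary.

#include <cstdint>

// hash of the 3 bytes preceding p; hashBits picks the table size
static uint32_t ctx_hash(const uint8_t * p, int hashBits)
{
    uint32_t h = p[-1] | ((uint32_t)p[-2] << 8) | ((uint32_t)p[-3] << 16);
    return (h * 2654435761u) >> (32 - hashBits);
}

// returns the number of input bytes covered by matches, just so the toy
// does something observable
int lzp_encode(const uint8_t * dict, int dictLen,
               const int32_t * hashTable, int hashBits,
               const uint8_t * in, int inLen)
{
    int matched = 0;
    for (int i = 0; i < inLen; )
    {
        int32_t pos = (i >= 3) ? hashTable[ctx_hash(in + i, hashBits)] : -1;
        if (pos < 0)
        {
            // context not in the dictionary : code a plain literal,
            // no match flag is spent at all
            /* coder.literal(in[i]); */
            i++;
            continue;
        }
        int len = 0;
        while (i + len < inLen && pos + len < dictLen &&
               dict[pos + len] == in[i + len])
            len++;
        if (len > 0)
        {
            /* coder.match_flag(1); coder.match_len(len); */
            i += len; matched += len;
        }
        else
        {
            /* coder.match_flag(0); coder.literal(in[i]); */
            i++;
        }
    }
    return matched;
}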

But I don't think that's it. I believe the problem with static LZA is simply that for an offset-coded LZ (as I was using), it's crucial to put the most useful data at low offsets. That requires a very cleverly made static dictionary. You can't just put the most common short substrings at the end - you also have to be smart about how those substrings run together, so that their concatenation is itself useful. That would be a very interesting, hard algorithm to work on, but without that work I find that static LZA is just not very good.

There are obvious alternatives to optimizing the LZA dictionary; for example you could take the static dictionary and build a suffix trie. Then instead of sending offsets into the window, forget about the original linear window and just send substring references into the suffix trie directly, a la the ancient Fiala & Green. This removes the ugly need to optimize the ordering of the linear window. But that's a big complex bowl of worms that I don't really want to go into.

Some results on some real packet data from a game developer :


downstream packets only
1605378 packets taking 595654217 bytes total
371.0 bytes per packet average


O0 static huff : 371.0 -> 233.5 average

zlib with Z_SYNC_FLUSH per packet (32k window)
zlib -z3 : 371.0 -> 121.8 average
zlib -z6 : 371.0 -> 111.8 average

OodleLZH has a 128k window
OodleLZH Fast : 371.0 -> 91.2 average

OodleLZNib Fast lznib_sw_bits=19 , lznib_ht_bits=19 : (= 512k window)
371.0 -> 90.6 average

OodleStaticLZP [MB of static dictionary | bits of hash]

LZP [ 4|18] : 371.0 -> 82.8 average
LZP [ 8|19] : 371.0 -> 77.6 average
LZP [16|20] : 371.0 -> 69.8 average
LZP [32|21] : 371.0 -> 59.6 average

Note of course that LZP would also benefit from dictionary optimization. Later occurrences of a context replace earlier ones, so more useful strings should go later in the window. Also, just getting the most useful data into the window at all will help compression. These results are without much effort to optimize the LZP dictionary. Clients can of course use domain-specific knowledge to help make a good dictionary.
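
For reference, a sketch of the table build that makes the ordering matter, reusing the toy ctx_hash from the encode sketch above (again hypothetical names, not the Oodle API) :

#include <cstdint>
#include <vector>

// ctx_hash as in the encode sketch above
static uint32_t ctx_hash(const uint8_t * p, int hashBits)
{
    uint32_t h = p[-1] | ((uint32_t)p[-2] << 8) | ((uint32_t)p[-3] << 16);
    return (h * 2654435761u) >> (32 - hashBits);
}

// walk the dictionary front to back; each slot keeps the LAST position
// seen, so later occurrences of a context replace earlier ones - which
// is why the most useful strings should go late in the dictionary
std::vector<int32_t> build_lzp_hash(const uint8_t * dict, int dictLen,
                                    int hashBits)
{
    std::vector<int32_t> table((size_t)1 << hashBits, -1); // -1 = empty slot
    for (int i = 3; i < dictLen; i++)
        table[ctx_hash(dict + i, hashBits)] = i;           // later wins
    return table;
}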

TODOS : 1. optimization of LZP static dictionary selection. 2. mixed static-dynamic LZP with a small (32k?) per-channel sliding window.
