
Warning: this post is about a failure, so you can skip it altogether 🙂

The perf advent calendar was my attempt to flush out a bunch of stuff, tools and experiments I was doing but never had the time to talk about. Here's another little experiment I made some time ago and forgot about. Let me share before it disappears to nothing with the next computer crash.

I've talked before about base64-encoded data URIs. I mentioned that, according to my tests, base64 encoding adds on average 33% to the file size, but gzipping brings it back, sometimes to less than the original. Then I saw a comment somewhere (reddit? hackernews?) that the content is better left uncompressed before base64-encoding, because it will gzip better after that.

A bit of background on why binary-to-text encodings exist at all: the basic need for a binary-to-text encoding comes from a need to communicate arbitrary binary data over preexisting communications protocols that were designed to carry only English-language human-readable text. Those protocols may only be 7-bit safe (and within that avoid certain ASCII control codes), may require line breaks at certain maximum intervals, and may not maintain whitespace. Thus, only the 94 printable ASCII characters are "safe" to use to convey data. Base64 uses 64 of them and turns every 3 bytes into 4 characters, hence the ~33% overhead mentioned above. Ascii85 is a bit denser: eighty-five is the minimum integral value of n such that n^5 ≥ 256^4, so any sequence of 4 bytes can be encoded as 5 symbols, as long as at least 85 distinct symbols are available. When encoding, each group of 4 bytes is taken as a 32-bit binary number, most significant byte first (Ascii85 uses a big-endian convention). This is converted, by repeatedly dividing by 85 and taking the remainder, into 5 radix-85 digits. (Five radix-85 digits can represent the integers from 0 to 4,437,053,124 inclusive, enough to cover all 4,294,967,296 possible 4-byte sequences.)
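In Python terms, the conversion of one 4-byte group looks roughly like this (a quick sketch, using the "!"-based digit alphabet):

```python
def ascii85_group(four_bytes: bytes) -> str:
    """Encode exactly 4 bytes as 5 Ascii85 characters ('!' represents digit 0)."""
    assert len(four_bytes) == 4
    n = int.from_bytes(four_bytes, "big")  # one 32-bit number, most significant byte first
    digits = []
    for _ in range(5):
        n, remainder = divmod(n, 85)       # repeatedly divide by 85
        digits.append(remainder)           # least significant digit comes out first
    return "".join(chr(33 + d) for d in reversed(digits))

print(ascii85_group(b"Man "))  # "9jqo^" - the textbook Ascii85 example
```

(Python's standard library has base64.a85encode for the real thing, padding rules included.)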

When using data URIs you essentially do this:

- take a PNG (which contains compressed data),
- base64-encode it,
- gzip the result, together with the rest of the CSS.

See how it goes: compress - encode - compress again. Compressing already compressed data doesn't sound like a good idea, so it sounds believable that skipping the first compression might give better results.
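The three stages for a single image boil down to something like this rough, untested sketch (the file name is just a placeholder):

```python
import base64
import gzip
from pathlib import Path

png = Path("icon.png").read_bytes()           # 1. already-compressed PNG data
b64 = base64.b64encode(png).decode("ascii")   # 2. base64: roughly +33% in size

# 3. inline it into a CSS rule and gzip the whole stylesheet,
#    the way a server would before sending it
css = ".icon { background: url(data:image/png;base64," + b64 + ") no-repeat; }"
gz = gzip.compress(css.encode("ascii"))

print("PNG bytes:         ", len(png))
print("base64 characters: ", len(b64))
print("gzipped CSS bytes: ", len(gz))
```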
The PNG format contains its information in "chunks". At the very least there are header (IHDR), data (IDAT) and end (IEND) chunks. There could be other chunks, such as transparency, background and so on, but these three are required. The IDAT data chunk is compressed to save space, but it looks like it doesn't have to be - PNGOut has an option (-s4) to save the data uncompressed.
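If you want to see those chunks for yourself, a minimal walker along these lines (again a sketch, placeholder file name) just follows the length/type/data/CRC layout after the 8-byte PNG signature:

```python
import struct
from pathlib import Path

def list_chunks(path: str) -> None:
    data = Path(path).read_bytes()
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, chunk_type = struct.unpack(">I4s", data[pos:pos + 8])
        print(chunk_type.decode("ascii"), length, "bytes")
        pos += 8 + length + 4  # 8-byte header + data + 4-byte CRC

list_chunks("icon.png")  # typically prints IHDR, one or more IDAT, IEND
```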
This is what I tried: took several compressed PNGs, uncompressed them (with PNGOut's -s4), then encoded both versions with base64, put them in CSS, gzipped the CSS and compared file sizes. I tried to keep the test reasonable and used real-life images - first the images that use base64 encoding in Yahoo! Search results. Then I kept adding more files to grow the size of the resulting CSS - added the Y!Search sprite, Google sprite, Amazon sprite and Wikipedia logo.

[Results table: gzipped CSS size starting from compressed vs. uncompressed images, from the Y!Search images up to "Previous + Amazon sprite + Wikipedia logo"]

Clearly starting with compressed images is better. It looks like the difference becomes smaller as the file sizes increase, so it's possible that for very big files starting with an uncompressed image could be better, but shoving more than 50K of images inline into a CSS file seems to be missing the idea of data URIs. I believe the idea is to use data URIs (instead of sprites) for small decoration images. If an image is over 50K it's better off as a separate request that gets cached, otherwise a small CSS tweak will invalidate the cached images.
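The whole comparison is easy to re-run with something like this rough sketch (made-up file names; the *-s4.png files stand for copies re-saved with PNGOut's -s4, i.e. uncompressed IDAT):

```python
import base64
import gzip
from pathlib import Path

def gzipped_css_size(png_paths) -> int:
    """Inline every image as a base64 data URI in one CSS file, return its gzipped size."""
    rules = []
    for i, path in enumerate(png_paths):
        b64 = base64.b64encode(Path(path).read_bytes()).decode("ascii")
        rules.append(".img%d { background: url(data:image/png;base64,%s); }" % (i, b64))
    return len(gzip.compress("\n".join(rules).encode("ascii")))

print("compressed sources:  ", gzipped_css_size(["sprite.png", "logo.png"]))
print("uncompressed sources:", gzipped_css_size(["sprite-s4.png", "logo-s4.png"]))
```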
The beauty of experimentation is that failures are just as fun as successes.
