Code Legibility vs Compactness

I've lately been working on a few projects which -- sadly -- are just a reinvention of the wheel. A large reason for this is that the code quality of a number of Open Source projects I could have used is very low, or the project libraries are massive in size and harmful to game download size, or they are simply too inefficient for my needs.

One of the recurring themes I've seen lately that truly concerns me is a push to make "small" libraries, where the size is measured in lines of code. This desire to write compact code is entirely antithetical to writing good code or even truly small code.

I've been working at replacing my use of SDL_image for texture loading in OpenGL applications, for a few different reasons. Primarily, loading a single PNG with SDL_image on Windows requires distributing three extra DLLs (SDL_image.dll, libpng.dll, and zlib.dll), and the SDL_image API combined with OpenGL texture format requirements requires making a minimum of three copies of the image data, in addition to whatever copies libpng and zlib are making in their internal buffers. On top of that, SDL_image is slower still because it demand-loads (and then unloads) the image format DLLs for every image loaded. I had the choice of writing a new API around the incredibly obtuse libpng API, writing my own PNG loader, or using one of the several existing small Open Source PNG loaders. I decided to start with the last option, and what I found has led to today's article.

Below are a few chunks of code from LodePNG that stand out as being pretty hard to read, apparently just to condense the line count. Indentation and whitespace are kept intact.

if(chunkLength < 5) { decoder->error = 76; break; }
for(length = 0; length < chunkLength && data[length] != 0; length++) ;
if(length + 2 >= chunkLength) { decoder->error = 75; break; }
key = (char*)malloc(length + 1);
if(!key) { decoder->error = 9941; break; }

for(y = 0; y < passh[i]; y++)
for(x = 0; x < passw[i]; x++)
{

case 2:
if(precon) for(i = 0; i < length; i++) recon[i] = scanline[i] + precon[i]; else for(i = 0; i < length; i++) recon[i] = scanline[i]; break;

Each of these is certainly condensed, but they're not particularly easy to read. Especially when they're scrunched up against the rest of the overly compact code.

The silly thing really is that the size of the code is mostly irrelevant. If a developer is concerned with making his code easy to integrate into an existing codebase, the size of the code in file count can certainly matter. It's a lot easier to drop in a single .c/.cpp file and a single header than it is to drop in 20, especially when Makefile rules or project configuration has to be modified to get the source files to find their respective headers, or if the developer wishes to rename files to fit the project's guidelines.

However, if you've only got a single file, then whether that file is condensed to 500 lines or cleanly written in 5000 lines is not going to affect anything at all other than legibility. It will not affect compilation times, it will not affect the size of the compiled binary, and it will not negatively affect downstream developers' opinions of the file (unless the developer has some kind of code-related megalophobia, I suppose).

I recall around a year ago, when I first wrote libtelnet, that another developer working on a similar library specifically for MUDs derided libtelnet and extolled his own creation based almost solely on line count. His code was, of course, far more compact: it skipped braces that weren't strictly necessary, was light on whitespace, was even lighter on comments, and didn't support as many of the TELNET RFCs. Still, for some reason, an illogical desire to write short code took hold of him, and as a result he wrote lower-quality code for compactness' sake.

This doesn't help anybody, but it certainly hurts. Look again at the first example from LodePNG above. Let's add in some liberal use of whitespace.

if (chunkLength < 5) {
    decoder->error = 76;
    break;
}

for (length = 0; length < chunkLength && data[length] != 0; length++)
    ;

if (length + 2 >= chunkLength) {
    decoder->error = 75;
    break;
}

key = (char*)malloc(length + 1);
if (!key) {
    decoder->error = 9941;
    break;
}

That's a bit better, but it's still not perfect. It is now easier to identify the error handling portions of the code and to see what they're doing. The first thing that stands out to me, though, is the magic-number error codes. The LodePNG author had a good idea: each line of code that triggers an error gets its own error code, to better help downstream developers identify the exact reason a particular PNG failed to load. Unfortunately, the magic numbers aren't super helpful in many cases. Some of them are identical to other numbers in the code, making a quick grep a little harder than it should be. The numbers are not immediately useful to the user in any way, so without looking them up in the codebase, it's impossible to generate meaningful error messages. The numbers may also be duplicated, since there is no mechanism to guarantee that each line gets its own error.

There are a couple of different approaches that could be taken to resolve this problem. The first simply makes the generation of the numbers easier and less error-prone: use the __LINE__ built-in macro to generate error numbers. This guarantees that every error is unique, and it makes it very easy for the developer to find the error in the source code. However, it's still not helpful to downstream users, and it would make error numbers change whenever the LodePNG source was modified.
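As a minimal sketch of this first approach (the struct and macro names here are hypothetical, not LodePNG's actual API), the error-setting macro can simply stamp the current source line into the error field:

```c
#include <assert.h>

/* Hypothetical decoder state; LodePNG's real struct differs. */
struct decoder_state {
    int error;
};

/* Use the current source line as the error number.  Every call site
   gets a distinct value automatically, and grepping for the number
   leads straight to the failing check. */
#define SET_ERROR_LINE(decoder) \
    do { \
        (decoder)->error = __LINE__; \
    } while (0)

/* Example check: reject chunks shorter than the minimum length. */
static int check_length(struct decoder_state *d, unsigned length)
{
    if (length < 6) {
        SET_ERROR_LINE(d); /* error now holds this line's number */
        return 0;
    }
    return 1;
}
```

The downside noted above applies: every edit to the file reshuffles the error numbers, so they are only meaningful against one exact revision of the source.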

The second approach is to generate a list of error codes in the header and to always return one of those codes. This lets developers easily generate error messages for users (this table could even be distributed as part of LodePNG) and it makes the source easier to read. The downside is that there either need to be a large number of error codes, or portions of the source need to reuse error codes which makes finding the exact cause of an error much harder.
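The second approach might look like the following sketch (again, the enum values and the lookup function are my own invention, not LodePNG's real error codes): the header declares the full list of codes, and a small table maps each one to a message.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical error-code enum; LodePNG's real codes are plain
   integers.  Keeping the list in the public header lets client code
   map any returned code to a human-readable message. */
enum lodepng_error {
    LODEPNG_E_NONE = 0,
    LODEPNG_E_BADFORMAT,
    LODEPNG_E_NOMEM,
    LODEPNG_E_COUNT /* number of codes; not itself an error */
};

static const char *const lodepng_error_strings[LODEPNG_E_COUNT] = {
    "no error",
    "malformed PNG data",
    "out of memory"
};

/* Translate an error code into a message suitable for the user. */
static const char *lodepng_strerror(enum lodepng_error code)
{
    if (code < 0 || code >= LODEPNG_E_COUNT)
        return "unknown error";
    return lodepng_error_strings[code];
}
```

Because the table lives alongside the enum, adding a new code without adding its message is easy to catch in review, though nothing in C enforces it automatically.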

The approach that I find most useful is to actually combine the above two approaches. When setting an error condition, set both a broad error code type as well as the source line (and file, for large projects) where the error occurred. If you don't mind adding a little bit of overhead, you can also generate additional string data to store in the error struct, which can be fetched with an API call to the library. Assuming we have updated the decoder struct for this, here's what the LodePNG code snippet could look like:

#define SET_ERROR(decoder, code) \
    do { \
        (decoder)->error_code = (code); \
        (decoder)->error_line = __LINE__; \
    } while (0)

...

if (chunkLength < 5) {
    SET_ERROR(decoder, LODEPNG_E_BADFORMAT);
    break;
}

for (length = 0; length < chunkLength && data[length] != 0; length++)
    ;

if (length + 2 >= chunkLength) {
    SET_ERROR(decoder, LODEPNG_E_BADFORMAT);
    break;
}

key = (char*)malloc(length + 1);
if (!key) {
    SET_ERROR(decoder, LODEPNG_E_NOMEM);
    break;
}

The error routines in the code itself are now more meaningful. Instead of seeing some magic number being set, I can clearly see that the last error handling block is dealing with an out-of-memory situation. Client developers get more useful error messages, and the line number is kept around so that they can use it to look up the exact routine that failed when a particular PNG fails to load.
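On the client side, the combined code-plus-line information could be surfaced roughly like this (a sketch under assumed names; the struct layout and helper functions are illustrative, not LodePNG's actual API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical decoder state carrying both a broad error code and
   the source line where it was raised. */
struct png_decoder {
    int error_code;
    int error_line;
};

/* Map a broad code to a message; in a real library this would use
   the error enum from the public header. */
static const char *error_name(int code)
{
    switch (code) {
    case 1:  return "malformed PNG data";
    case 2:  return "out of memory";
    default: return "unknown error";
    }
}

/* Build a user-facing message that includes both the broad error
   and the source line, so bug reports point at the exact check
   that failed. */
static int format_error(const struct png_decoder *d,
                        char *buf, size_t size)
{
    return snprintf(buf, size, "%s (source line %d)",
                    error_name(d->error_code), d->error_line);
}
```

The user sees a readable message, while the embedded line number gives the library maintainer the exact failure site without any back-and-forth.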

The next problem with the code is the condensed for-loop. It certainly works, and does its job just fine, but those sorts of for loops always look out of place. They look like they're missing something. The empty statement in the for-loop's body looks like a bug. It's better when the semicolon is on its own line (otherwise it's very easy to mistake the following line of code as being part of the for-loop's body), but it's still in poor form. In particular, it's rare for the loop control to be the actual body or intent of a loop. Let's replace it with an identical while loop that makes its intent clearer.

length = 0;
while (length < chunkLength && data[length] != 0) {
    ++length;
}

Notice that I also added braces, even though the while-loop's body has only a single statement. This makes the loop's body very explicit and clear. It's not particularly important whether or not the braces are on the same line as the loop keyword, or on the line below; nor whether or not braces are indented in any particular way. Everyone has their pet coding style. What's important is the clarity of intent with the code. No longer is it a mysterious loop that seemingly does nothing. It is now very easily identified as a loop that's searching for the NUL byte in the data array to find the length of the valid data. Much better.

There's another small change I would make, which is not something everyone will agree with, but which I have found increases legibility. Most developers I've worked with have converted to using the same style in this case. That would be to replace the !key in the final if-statement with a more explicit comparison. My general rule is that only boolean expressions should be used in boolean contexts. All flow control statements expect boolean expressions, and the ! operator also operates on a boolean expression. The fact that C treats any integer or pointer with a value of zero as false and any other value as true certainly exists, but for legibility's sake it is not a good idea to actually use that feature. When testing an integer, it's best to always compare it to zero, and when testing a pointer, it's best to always compare it to NULL (or 0 if you prefer, so long as you're consistent).
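The rule applies to integers and pointers alike. As a hypothetical illustration (not from LodePNG), here is a small string-duplicating routine written with explicit comparisons throughout:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Duplicate a NUL-terminated string, using explicit comparisons:
   the byte is tested against 0 and the pointer against NULL, rather
   than relying on C's implicit truthiness. */
static char *dup_string(const char *src)
{
    size_t length = 0;
    char *copy;

    while (src[length] != 0) { /* not "while (src[length])" */
        ++length;
    }

    copy = (char*)malloc(length + 1);
    if (copy == NULL) {        /* not "if (!copy)" */
        return NULL;
    }

    memcpy(copy, src, length + 1); /* include the NUL byte */
    return copy;
}
```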

key = (char*)malloc(length + 1);
if (key == NULL) {
    SET_ERROR(decoder, LODEPNG_E_NOMEM);
    break;
}

Now we're getting somewhere. The final problem with the code is the lack of comments. Why must chunkLength be greater than or equal to five? Why must the length be less than chunkLength minus two? Why do we allocate an extra byte for key? I'm sure it was abundantly obvious to the developer of LodePNG when he wrote it. It might also be obvious if you've already thoroughly read and familiarized yourself with the code. If you're a developer investigating a bug in a piece of code you haven't worked with before, though, it's going to require you to read the entirety of the code to figure out what all these data structures are, what their valid values are, what algorithm each function is implementing, and then piece together why those algorithms are being used. We could make developers do that, or we could just comment the code liberally and not waste a lot of their time.

There's a second very important reason to comment your own code. When I started analyzing the code to write comments for this article, I immediately found that I couldn't determine the reason why that first if-statement checks that the chunkLength is at least five. The loop following it will function fine even if chunkLength is zero. The specification for this PNG chunk (iTXt) certainly requires more than five bytes worth of data at a minimum. When I read the spec, I see that the shortest possible iTXt chunk (one devoid of any useful data at all) is actually six bytes, not five. This indicates that I probably found a bug. A bug which the author would have been more likely to find himself if he had commented his code with the exact reason he was checking the length against five. The counting bug is repeated in the second length check, where the key should be followed by at least four bytes instead of only two. The bug is fixed and comments are added below.

/* The chunk needs to contain at a minimum a key (which takes a
   minimum of 2 bytes), a compression flag (1 byte), a compression
   method (1 byte), an optional language (minimum 1 byte), an
   optional translated keyword (minimum 1 byte), and the payload
   (which may be empty).  If the chunk is not at least 6 bytes in
   length, it is malformed. */
if (chunkLength < 6) {
    SET_ERROR(decoder, LODEPNG_E_BADFORMAT);
    break;
}

/* Find out how long the key is in the data stream.  The end of
   the key is indicated by a NUL byte. */
length = 0;
while (length < chunkLength && data[length] != 0) {
    ++length;
}

/* The key will be followed by the two compression bytes, the
   optional language and optional translated key strings, and the
   optional payload. */
if (length + 4 >= chunkLength) {
    SET_ERROR(decoder, LODEPNG_E_BADFORMAT);
    break;
}

/* Allocate memory to store the key.  Include room for the NUL
   byte. */
key = (char*)malloc(length + 1);
if (key == NULL) {
    SET_ERROR(decoder, LODEPNG_E_NOMEM);
    break;
}

There we have it. My version of the code is 33 lines long, compared to the original five. There's little doubt that my version is not only more legible, but also better documented. Plus, thanks to the bug fixes, my version is also more robust. Furthermore, excluding the expanded error-setting code in the macro (which adds an additional field to the decoder struct and an additional assignment in the error paths), my version is going to compile just as fast and result in identical binary output. If you're interested in seeing it be exactly identical, replace the SET_ERROR macro with the old-style error assignments (albeit preferably with meaningful macros or an enum for the error codes).

Nothing actually important is sacrificed when making code readable. Even exploding the line count by a factor of six, as in this single example, has no harmful effects in any way, but has demonstrably beneficial effects, such as making it possible for a developer to identify and fix bugs in code he has no prior experience with.

As a further nail in the coffin, I'd like to point out that I've found a lot of functions in the LodePNG code which are wasteful in terms of space, intent, and compiled code size. There are a few instances of one-liner functions that are used only once, and even public API functions that are not used at all internally and have no viable use outside of the internal code. The code was rather organically written (the C version is a conversion of an original C++ version), and while effort was put into compacting code on a line-by-line basis, little effort was put into cleaning up and simplifying the code on a larger scale. The irony is that even as legibility was sacrificed for a smaller line count, the poor code cleanliness kept the line count inflated a fair bit beyond what a concerted effort to compact the code could produce.

Don't fall into the trap of writing compact code. It can be and usually is very harmful to code legibility. On the other hand, code making liberal use of whitespace and comments is easier to understand, easier to debug, easier to analyze for security holes, and is more friendly and helpful to other developers. Do yourself and every other developer a favor and say "No" to compact code.