
03-06-13 - Sympathy for the Library Writer

Over the years of being a coder who was a library-consumer and not a library-writer, I've done my share of griping about annoying APIs, or what I saw as pointless complication or inefficiency. Man, I've been humbled by my own experience trying to write a public library. It is *hard*.

The big problem with libraries is that you don't control how they're used. This is in contrast to game engines. Game engines are not libraries. I've worked on many game engines over the years, including ones that went out to large free user bases (Genesis 3d and Wild Tangent), and they are much much easier than libraries.

The difference is that game engines generally impose an architecture on the user. They force you to use it in a certain way. (this is of course why more advanced developers despise them so much; it sucks to have some 3rd party telling you your code architecture). It's totally acceptable if a game engine only works well when you use it in the approved way, and is really slow if you abuse it, or it could even crash if you use it oddly.

A library has to be flexible about how it's used; it can't impose a system on the user, like a certain threading model, or a certain memory management model, or even an error-handling style.

Personally, when I do IO for games, I make a "tool path" that just uses stdio and is very simple and flexible (it does streaming IO, text parsing, and so on) but isn't shipped with the game, and I make a "game path" that only does large-block async IO of pre-baked data, so you can just point at it. I find that system powerful enough for my use, and it's easy to write and use. It means the "tool path" doesn't have to be particularly fast, and the fast game path doesn't need to support buffered character IO or anything other than big block reads.
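To make that concrete, the split looks something like this (a hypothetical sketch; all the names here are made up for illustration, not from any shipping API) :

/* tool path : simple, flexible, buffered stdio; not shipped with the game */
#include <stdio.h>
#include <stdlib.h>

void * Tool_ReadWholeFile(const char * name, size_t * pSize)
{
    FILE * f = fopen(name, "rb");
    long size;
    void * buf;
    if ( ! f ) return NULL;
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    fseek(f, 0, SEEK_SET);
    buf = malloc((size_t)size);
    if ( buf ) fread(buf, 1, (size_t)size, f);
    fclose(f);
    if ( pSize ) *pSize = (size_t)size;
    return buf;
}

/* game path : only large-block async reads of pre-baked data; declarations
   only here, since the async implementation is platform-specific */
typedef struct GameIORequest GameIORequest; /* opaque */
GameIORequest * Game_ReadBlockAsync(const char * name, void * dest, size_t size);
int             Game_IOIsDone(const GameIORequest * req);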

But I can't force that model on clients, so I have to support all the permutations and I have to make them all decently fast.

A lot of times in the past I've complained about over-complicated APIs that have tons of crazy options that nobody ever needs (look at the IJG jpeg code for example). Well, now I see that often those complicated APIs were made because somebody (probably somebody important) needed those options. Of course as the library provider you can offer the complex interface and also simpler alternatives, but that has its own pitfall: it makes the API bigger and more redundant (as when you offer both OpenFileSimple and OpenFileComplex). In some ways it's better to offer only the complex API and make the user wrap it, reducing the parameter set to what they actually use.
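For example, something like this (hypothetical names, just to illustrate the wrapping) :

/* the one complex entry point the library offers, every option exposed : */
void * Lib_OpenFile(const char * name, int mode, int shareFlags,
                    int cacheHint, int priority, void * allocator);

/* the client wraps it once, baking in the options they actually use : */
static void * my_open_for_read(const char * name)
{
    return Lib_OpenFile(name, /*mode*/ 0, /*shareFlags*/ 0,
                        /*cacheHint*/ 1, /*priority*/ 0, /*allocator*/ NULL);
}

Now the library has one entry point to document and test, and the friendly narrow interface lives in the client's code where it can change freely.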

There's also a sort of "liability" issue with libraries. Not exactly legal liability, but bad-program-behavior liability. Lots of things that would make the library easier to use or faster are naughty to do automatically. For example, Oodle under Vista+ can run faster with elevated privilege, which gives access to some of the insecure file APIs (like extending a file without zeroing it), but it would be naughty for me to do that automatically, so instead I have to add an extra step and make the client specifically ask for it.
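The extra step can be as simple as an opt-in flag at init (a hypothetical sketch, not the real Oodle API) :

typedef struct LibInitOptions
{
    /* if set, the library tries to enable the SE_MANAGE_VOLUME_NAME
       privilege and use SetFileValidData to extend files without
       zero-filling; if the privilege isn't available it quietly falls
       back to the safe path */
    int allowPrivilegedFileAPIs;
} LibInitOptions;

void Lib_Init(const LibInitOptions * options);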

Optimization for me has really become a nightmare. At first I was trying to make every function fast, but it's impossible, there are just too many entry points and too many usage patterns. Now my philosophy is to make certain core functions fast, and then address problems in the bigger high level API as customers see issues. I remember as a game developer always being so pissed that all the GL drivers were specially optimized for Id. I would want to use the API in a slightly different style, and my way would be super slow, not for any good reason but just because it hadn't gotten the optimization loving of the important customer's use case.

I used to also rail about the "unnecessary" argument checking that all the 3d APIs do. It massively slows them down, and I would complain that I had ensured the arguments were good so just fucking pass them through, stop slowing me down with all your validation! But now I see that if you really do that, you will just constantly be crashing people as they pass in broken args. In fact arg validation is often the way that people figure out the API, either because they don't read the docs or because the docs are no good.
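A common compromise is to validate by default but let trusted clients compile it out (a hypothetical sketch) :

#ifndef LIB_VALIDATE_ARGS
#define LIB_VALIDATE_ARGS 1  /* on by default; power users define it to 0 */
#endif

int Lib_Blit(const void * src, void * dst, int width, int height)
{
#if LIB_VALIDATE_ARGS
    if ( src == NULL || dst == NULL || width <= 0 || height <= 0 )
        return -1; /* fail loudly; this is often how users learn the API */
#endif
    /* ... fast path, no checks ... */
    return 0;
}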

(this is not even getting into the issue of API design which is another area where I have been suitably humbled)

ADDENDUM : I guess I should mention the really obvious points that I didn't make.

1. One of the things that makes a public library so hard after release is that you can't refactor. The normal way I make APIs for myself (and for internal teams) is to make an effort at a good API the first time, but it usually sucks, so you rip it out and go through big scourges of find-and-replace. That only works when you control all the code, both library and consumer. It's only after several iterations that the API becomes really nice (and even then it's only nice for that specific use case; it might still suck in the wild).

2. APIs without users almost always suck. When someone goes away in a cave, works on a big new fancy library, and then shows it to the world, it's probably terrible. This is a problem that I think everyone at RAD faces. The code of mine that I really like is stuff that I use over and over, so that I see the flaws, and when I want it to be easier to use I just go fix it.

3. There are two separate issues about what makes an API "good". One is "is it good for the user?" and one is "is it good for the library maintainer?". Often they are the same but not always.

Anyway, the main point of this post is supposed to be : the next time you complain about a bad library design, there may well be valid reasons why it is the way it is; they have to balance a lot of competing goals. And even if they got it wrong, hey it's hard.

10 comments:

Stephan said...

Having to cater to external clients and not being able to control how a library is used can also dramatically increase the testing and documentation effort, at least if you want to do it *right*.

cbloom said...

Yeah.

It's something where clever design (more clever than me) could help you a lot.

You want to make individual components that are simple and well tested and can be put together without breaking each other.

That way you make N components and your testing load is N.

If you instead cram those features together with interactions, you have a 2^N size load.

(a rare case where we can actually say it's "exponentially" larger)

I often get fooled into doing it the wrong way in search of efficiency.

Say for example you want to do something like a PNG compressor. You have any number of pixel formats, filters, and back-end compressors. The bad API is like :

DoPNGlikeThingy( void * pixels, int format, int filter, int compressor );

you've multiplied the API space massively, and now for practical purposes the whole parameter space is too big to test.

Instead you could do it like :

TransformToStandardRGB( void * pixels, int format );
DoFilters( void * pixels, int filter );
DoCompress( void * pixels, int compressor );

Now you can test each piece individually over all its options, and ensure they only interact through the simple well-defined channel of the pixel data.

It's x+y+z tests instead of x*y*z.
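In test-harness terms (a sketch; the TestXxx helpers and NUM_xxx counts are hypothetical) :

/* decomposed : each stage tested over its own options, x+y+z cases */
for (int f = 0; f < NUM_FORMATS; f++)     TestTransformToStandardRGB(f);
for (int f = 0; f < NUM_FILTERS; f++)     TestDoFilters(f);
for (int c = 0; c < NUM_COMPRESSORS; c++) TestDoCompress(c);

/* combined : every interaction must be covered, x*y*z cases */
for (int fmt = 0; fmt < NUM_FORMATS; fmt++)
for (int fil = 0; fil < NUM_FILTERS; fil++)
for (int cmp = 0; cmp < NUM_COMPRESSORS; cmp++)
    TestDoPNGlikeThingy(fmt, fil, cmp);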

The trap that nerds like me fall into is that you can be more efficient if you combine the steps; eg. trying to work on pixel rows one by one to keep them in cache. That efficiency gain is real, but it requires you to tangle up all your systems together and leads to a hopelessly huge testing space.

The reason this is so much worse for a library writer is you don't know which uses the game actually cares about. Maybe the pixels are always just 8-bit RGB and we don't need that flexible pixel format at all. Maybe we only use a few compress modes. Then we can reduce the API and only optimize and test the cases we care about. And we can measure our performance in our final usage scenario which means no futzing around with synthetic testing. Ahh! So much better.

won3d said...

Two Casey-related questions:

Any reflections on that library design talk he gave years ago?

Ever think about using some kind of metaprogramming/code-gen to give you some kind of leverage to deal with x*y*z style configuration spaces?

Stephan said...

Developing a software library for external use is definitely more difficult than developing other kinds of software, both technically and commercially. The surprisingly large effort required to "polish up and package" some internal library into something that you can sell to clients is probably the reason why so few libraries are developed in this way. Arguably, most commercial work on non-internal software libraries is done to support a platform (Windows, OSX/iOS, Intel Processors, NVidia GPUs, some cloud hosting platform, etc) or because companies publish and contribute to open source libraries in the hope that they can gain some karma and profit from others' contributions.

Intuitively, this situation has the feel of a market failure, because you'd think it would be better for the industry at large if there were enough commercial incentive for the best developers to focus on creating high-quality, state-of-the-art libraries for everybody to use, instead of reinventing the wheel internally at some company for the hundredth time. Economically speaking, one could probably make an argument for a market failure based on the transaction costs involved in software library licensing and the positive external effects associated with high-quality software libraries.

I don't know how your company intends to market the library, but maybe it would make sense to sell source code licences. This would allow you to say: "We have optimized and tested this library for use cases X, Y and Z, as demonstrated in the sample code. If you have different requirements, you can adapt the source code and pay us to help you. Also, if the documentation isn't perfect, just look at the highly readable source code."

cbloom said...

"Any reflections on that library design talk he gave years ago?"

Good question. I saw it when it was given but TBH didn't pay too much attention because I wasn't writing APIs at the time. Just had a look back at it now.

I agree with the basics, which is: don't impose systems on the client, work with their systems; don't force a big retained-mode on them; let them use their own systems for IO/memory/etc.

There are some issues where I think he doesn't emphasize the negative enough.

Granularity :

exposing the micro-ops inside your larger operations is nice if clients actually need it. But it has a lot of negatives; it is sort of exposing how your internals work. It's the opposite of opacity and encapsulation.

The ideal for me is to expose the highest-level functions possible, and as few as possible. In an ideal world the library API would always just be

void magicfunc(void);

that does exactly what the user wants. That way I have very little coupling to their code and I can change the internals without breaking them. The more you start revealing the granular internal bits, the more your details are unchangeable.

Redundancy :

Some redundancy is indeed nice for the user, and when I write APIs for myself I like lots of redundancy.

(for example in cblib you can do strlen on a char* or a wchar* or a String, all the file IO routines take all of those types, etc. Redundancy is nice.)

But I'm not so sure the library should be the one offering the nice redundancy; I now sort of think that should be left up to clients to do with their own wrappers.

One problem with redundancy is just that it makes the API bigger, which means more docs, more testing, more maintenance.

But the big problem is that redundancy is confusing. When you have lots of ways to do the same thing, the client doesn't know if one is "best" or where to get in. I'm starting to lean towards orthogonality as the ideal for the library API.

BTW followup note in next comment cuz this is getting too long.

cbloom said...

followup :

I have come to the thinking that every API that you ever use should always be wrapped.

Don't call stdlib directly. Don't call Win32 or whatever OS. Don't call Granny functions. Make your own wrappers.

You can provide yourself nice redundancy, add assertions, fix some stupid naming goofs, and it helps enormously with porting.
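Even something this small pays off (a minimal sketch of the idea; the wrapper is mine, not cblib's) :

#include <assert.h>
#include <stdio.h>

static FILE * my_fopen(const char * name, const char * mode)
{
    FILE * f;
    assert( name != NULL && mode != NULL );
    f = fopen(name, mode);
    /* one central place to add logging, retries, or platform
       differences when porting */
    return f;
}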

When I'm thinking about API design these days, I'm thinking that the API I provide should be sort of hard to use; it should be minimal, orthogonal, as small and clear as possible, and to make it friendlier it should be wrapped on the client side.

I don't think that's actually possible for me at RAD because clients don't think the same way I do, and perhaps more importantly because it creates a barrier to entry which is something that's absolutely crucial to avoid.

Aaron said...

"I don't think that's actually possible for me at RAD because clients don't think the same way I do, and perhaps more importantly because it creates a barrier to entry which is something that's absolutely crucial to avoid."

What if you ship the tight core, but also ship a good example wrapper as part of the product?

Aaron said...

My own statement about 'examples' reminds me of another thing about 'Libraries'.

The examples are *everything*.

No one will read your documentation.

No one will attempt to actually write anything.

They will take your example, copy-paste it into their codebase, and fuck with it until it sorta works.

Examples should be basically the *very best* practice of doing everything. They should never be dumbed down and simple so people can understand them if that conflicts with making them the best way of doing things.

cbloom said...

"What if you ship the tight core, but all ship a good example wrapper as part of the product?"

Yeah, I've been considering that. Make a small PITA official library API, and then have a nice bunch of wrappers on the outside, shipped as client-side example code.

I've sort of started that with some C++-ish wrappers in client-side example code (the API is all pure C) but haven't really made it official practice.

cbloom said...

"The examples are *everything*."

Actually what I'm finding is that customers are all different, and none of them is very comprehensive in their approach. Each person tends to have their one thing that they focus on, and they don't like to use other methods of learning.

That is, some people indeed just go to the examples. But other people seem to be reading docs and don't look at the examples at all. They'll send me questions with broken code snippets that are trying to do something straight out of the examples and I'm like "why don't you just copy-paste from the example that does that" and they didn't look at the examples at all.

I definitely agree with this though :

"Examples should be basically the *very best* practice of doing everything."

A lot of people will just copy-paste the example, so if the example has shitty performance then lots of people will get shitty performance. They'll blame the library, and they'll be right to do so.
