Why autoconf, automake and libtool fail

autoconf

config.guess, config.sub

There was no source from which one could get up-to-date versions of these scripts, which are used to determine the operating system type. This often caused pain: ask the OpenBSD port maintainers about it. (By the way: there is now a canonical source for them, "ftp.gnu.org/gnu/config".)

autoconf takes the wrong approach

The autoconf approach is, in short:

check for headers and define some preprocessor variables based on the result.

check for functions and define some preprocessor variables.

replace functions not already found in the installed libraries. Yes, it works, although some details are discouraging, to put it mildly.

No, it doesn't work well enough. This approach has led to an incredible amount of bad code around the world. Studying the autoconf documentation, one learns what kinds of incompatibilities exist. Using autoconf, one can work around them. But autoconf doesn't provide known working solutions to such problems. Examples: AC_REPLACE_FUNCS, a macro to replace a function missing from the system's libraries, leads to the inclusion of rarely used code in the package - which is a recipe for disaster. On the developer's system the function unportable() is available, on another system it isn't? Oh well, just compile unportable.c there and link it into the programs ...

Yes, this solves a problem. But it's overused, and it's dangerous. In many cases unportable.c doesn't work on the developer's system, so she can't test it. In other cases unportable.c works correctly on only _one_ kind of system, but will be used on others, too. Yes, the frequently used packages _have_ been tested almost everywhere. But what about the less frequently used ones?

Keep in mind that there is no central repository of replacement functions anywhere ... The same is true for AC_CHECK_FUNC(S). In this case there isn't a replacement source file; even worse, there's an #ifdef inside the main sources - unless the programmers are careful to use wrappers, which they often aren't, because compatibility problems are often discovered very late in the testing process (or even after release), and people usually try to make the smallest possible changes.

This is surely nothing that can be avoided completely, but it's something that has to be avoided wherever possible. In both cases you end up with rarely, if ever, used code in your programs. It's not dead code, it's zombie code - one day, somewhere, it will come alive again. There's a solution to this problem, but it is completely different from what's used now: instead of providing bare workarounds, autoconf (or a wrapper around it) ought to provide an abstraction layer above them, and a central repository for such things.

That way a programmer wouldn't use opendir, readdir and closedir directly, but would call them through wrap_opendir, wrap_readdir and wrap_closedir functions. (I'm aware that the GNU C library is this kind of wrapper, but it hasn't been ported to lots of systems, and you can't rip only a few functions out of it.)

autoconf macros are inconsistent

For example: AC_FUNC_FNMATCH checks whether fnmatch is available and usable, and defines HAVE_FNMATCH in that case. AC_FUNC_MEMCMP checks for the availability and usability of memcmp, and adds memcmp.o to LIBOBJS if that's not the case. Other examples exist.

autoconf is not namespace-clean

autoconf doesn't stick to a small set of prefixes for macro names. For example, it defines CLOSEDIR_VOID, STDC_HEADERS, MAJOR_IN_MKDEV and WORDS_BIGENDIAN, in addition to a number of HAVE_ somethings. I really dislike that, and it seems to get worse with every new release.

My absolute favourite macro in this regard is AC_FUNC_GETLOADAVG, which may define any of the following symbols: SVR4, DGUX, UMAX, UMAX4_3, NLIST_STRUCT, NLIST_NAME_UNION, GETLOADAVG_PRIVILEGED.

autoconf is large

I feel uneasy about the sheer size of autoconf. autoconf-2.13.tar.gz is 440 KB in size. Add automake to that (350 KB for version 1.4). Does it _really_ have to be that large? I don't think so.

The size has a meaning - for me it means autoconf is very complicated. It didn't use to be, back in the "good old days", and it accomplished its task then. I really don't see that it does so much more today (I don't mean "so much more for me").

configure is large

Even trivial configure scripts amount to 70 KB in size. Not much?

Compressed with gzip it's still 16 KB. Multiply that by millions of copies and millions of downloads.

No, I don't object to the size as such. It's perfectly OK if you get something for it. But you don't: about half or more of each configure script could be thrown away without any loss. Large parts of it just deal with caching, which wouldn't be needed if configure weren't so slow.

Other parts are the --help output, which looks so good ... but usually doesn't help (try it and find out what to do, without reading README or INSTALL).

Then there is the most bloated command line argument parser I've ever seen in any shell script.

Then there are many, many comments, but they aren't meant to help you see what's going on inside configure; they are documentation for the macro maintainers (some might actually prove useful, but the vast majority don't).

The configure scripts are the utter horror to read. There's a reason for this: configure doesn't use any "advanced" feature of the shell. But I wonder - are shell functions really unportable? And if the answer is yes: do you really expect anything to work on such a system? The problem is that a shell that old is unlikely to handle almost anything else either, for example large here-documents.

The configure scripts are the utter horror to debug. Just try _once_ to debug 4000 lines of automatically generated shell script.

A note to the autoconf maintainers: the road you're on is going to end soon.

autoconf is badly maintained

Let me clarify this first: I don't think badly of the development. What I miss is maintenance of the already released versions. Now, at the end of 2000, almost two years have passed without a maintenance release of autoconf. Nine months have passed since a security problem was found (in the handling of temporary files). More bugs have been found since, of course.

I know that nobody likes to dig into old code, but two years is a bit much.

automake

update all those machines regularly (I'm not really going to do that, I'd rather stick to what's installed, but anyway)

Say I didn't touch a project for two years, and then I need to change something and release a new version. This involves changing the version number in configure.in.

Don't misunderstand me: I don't attribute that to automake; I attribute it to the internal instability of autoconf. Unfortunately you can't have automake without autoconf.

libtool

One problem with libtool is that releases don't happen very often, so libtool is rarely up to date with regard to changes in some operating systems. That makes it difficult to use in packages meant to be really portable (to put it mildly).

A libtool script and a number of support files are distributed with every package making use of libtool, which ensures that any user can compile the package without having to install libtool first. Sounds good? It isn't.

This approach means that you can't easily replace all the old libtool copies on your system.

It also means that every package you try to compile can ship a different version of libtool. And since alpha versions of libtool are often used, it's not unlikely that you will run into one of them.

Another problem is the size of the libtool script. 4200 lines ...

summary

I don't use any of them anymore - autoconf alone doesn't give me much that I don't get with a few additional make rules.