Going back at least to the Romantics and the nineteenth-century Arts and Crafts movement, we have repeatedly been told that local, artisanal and renewable products are inherently more desirable than more distant, large-scale and ‘artificial’ ones.

In practice, though, while small and local might sometimes seem beautiful, bigger and more distant has typically proven better for both people and nature. Natural dyes, which are seemingly rediscovered every generation by idealistic, environmentally minded individuals, are a case in point. Take the following laudatory piece, written by Sarah Bellos in PERC Reports earlier this year. As the Cornell-educated ‘enviropreneur’ sees it, her goal in life is to replace ‘synthetic dyes derived from imported, non-renewable raw materials, such as petroleum and coal tar, with a more sustainable and renewable plant dye source’. Yet she still needs to overcome some major hurdles, including ‘achieving uniform colours from sampling to production’ and increasing local supplies of dye crops. What Bellos seemingly doesn’t realise is that these are exactly the problems that prompted the development of synthetic dyes in the first place. This raises the obvious question: if natural dyes were so great, why were they so completely displaced by synthetic alternatives?

A brief history of natural dyestuff production (1)

From ancient times up to the middle of the nineteenth century, all dyes (or dyestuffs) were made from a variety of natural sources, ranging from plants (roots, berries, flower heads or leaves), minerals and trees (especially barks) to lichens, insects, molluscs and guano (desiccated bird excrement). For example, most red dyes were derived from plants such as madder, beetroot, cranberry, safflower and orchil, with a few more valuable ones coming from insects such as cochineal (from Central America and, for a time, the southern British North American colonies) and kermes (from Southern Europe and the Middle East). Yellows were extracted from Persian berries, weld, dyers’ broom and saffron. Despite its seeming abundance in nature, green could only be obtained by double-dyeing with yellow (fustic or annatto) and blue (indigo).

While dyers historically settled on the large-scale use of about a dozen main sources, plants were by far the most important. Among these, common madder and true indigo, while not the most valuable per unit, stood out in terms of volume produced and overall importance. The roots of common madder (Rubia tinctorum), historically the most important source of vegetable dyes, contain 28 colouring matters, including red (alizarin), orange (rubiacin), purple (purpurin) and yellow (xanthine). From at least 2000 BC, common madder was cultivated on a large scale in various locations. By the middle of the nineteenth century, madder was grown the world over for local consumption, while perhaps as much as 90 per cent of the world madder market relied on the work of growers in the Dutch province of Zeeland and the French regions of Alsace and Provence.

Until the beginning of the nineteenth century, dyes were extracted from madder roots by drying, beating and pounding them to remove the rough parts, in the process yielding both the cruder ‘umbro’, ‘munch’ or ‘bunch’ madder and the more expensive ‘crop’ or ‘grade’ madders, which were later matured in casks for two to three years. Despite what would now be touted as its ‘organic’ nature and ‘human scale’, madder production and use often had significant environmental impacts. Reminiscing on advances made in previous decades, the Scottish chemist Lyon Playfair observed in 1852 that the ‘large quantities of spent [or used] madder constantly accumulating were found exceedingly inconvenient. It was not valuable enough for the manure heap, and the rivers became polluted in carrying away the waste material.’ (2) Wrestling with the problem, some creative chemists eventually noticed that one third of the colouring matter was thrown away in the process. In time, a simple treatment with a hot acid was devised that rendered it available once again as a dye. Following this breakthrough, ‘waste heaps [were] now sources of wealth, and the dyer no longer [poisoned] the rivers with spent madder, but carefully [collected] it in order that the chemist may make it again fit for his use’ (3).

Indigo dyes date from before 3000 BC and were produced from leguminous plants of the roughly 800-species Indigofera genus. While dyeing matter had been extracted from a few of these plants, the tropical and sub-tropical ‘true indigo’ (Indigofera tinctoria), usually thought to have originated in India but long cultivated over much of the Indian subcontinent, South-East Asia, the Middle East and Africa, was historically the most important and valuable variety. Arab traders eventually introduced it to the Mediterranean region during the eleventh century where, after centuries of political resistance, it eventually displaced woad (Isatis tinctoria) as the most commonly used blue dye plant in Europe.
Indigo plantations based on both true indigo and sometimes other local varieties were then created from the second half of the sixteenth century onwards in the Americas, in areas ranging from Southern and Central America and the West Indies to Louisiana and the Southern British North American colonies.

Like madder production, the manufacture of indigo dye was also a messy business. According to one 1775 Florida account: ‘The stench of the work vats, where the indigo plants were putrefied, was so offensive and deleterious, that the “work” was usually located at least one quarter of a mile away from human dwellings. The odour from the rotting weeds drew flies and other insects by the thousands, greatly increasing the chances of the spread of diseases. Animals and poultry on an indigo plantation likewise suffered, and it was all but impossible to keep livestock on, or near, the indigo manufacturing site.’ (4) In the following decades, a combination of factors, including high export duties and increased international competition from more lucrative alternatives, enticed New World producers to switch to other crops, such as sugar, cotton and coffee. By the late eighteenth century, Indian regions (originally Bengal and later Bihar) had emerged as the dominant indigo producers, at first under the leadership of the British East India Company (which itself followed earlier initiatives by other European traders and entrepreneurs) and later under independent British capitalists, planters and traders. At one point India accounted for nine-tenths of world trade in indigo, with the remainder produced in Java, the Philippines, Central America, Venezuela, Brazil and China.

The rise of synthetic dyes

The numerous synthetic dyestuffs that hit world markets in the late nineteenth century were fashioned from an abundant and cheap waste product of coal-gas manufacture: coal tar, a substance once referred to as the ‘abomination and nuisance of the gas works’ (5). As Playfair put it: ‘[Coal tar was] once the most inconvenient of waste materials. It could not be thrown away into rivers, for it polluted them foully. It could not be buried in the earth because it destroyed vegetation all around. In fact nothing could be done with it except to burn or to mix it with coal as fuel.’ (6)

The first commercially successful, large-scale breakthrough in synthetic dye production occurred in 1856, when a British teenage chemist by the name of William Henry Perkin (1838-1907) obtained a brilliant purplish substance while oxidising some crude aniline. After much developmental work, Perkin and his associates offered the world ‘mauve’ in 1859. This new dyestuff triggered much chemical research into the preparation of aniline-based dyes and, in the process, launched the modern organic chemical industry and prompted the development of coal-tar based products ranging from explosives, medicines and perfumes to flavouring materials, sweeteners, disinfectants, antitoxins and tracing and photographic agents. The supremacy of British dye manufacturers, however, was short-lived. From the 1870s, German producers began to take over this branch of manufacturing and soon achieved almost complete dominance, accounting for nearly 90 per cent of the world production of dyestuffs by 1913. Under the direction of Heinrich Caro (1834-1910) of the Badische Anilin- und Soda-Fabrik company (better known as BASF), the development of large-scale manufacturing techniques for the production of synthetic alizarin and other colouring substances quickly put madder producers out of business. The road to synthetic indigo, however, proved much more arduous because of the absence of a suitable ‘carbon skeleton’ in the coal-tar hydrocarbon molecules. In time, though, Adolf von Baeyer (1835-1917) succeeded in producing synthetic indigo in three different ways from various coal-tar derivatives. After much investment and developmental work, scientists and engineers working for BASF were able to market naphthalene-based synthetic indigo in 1897.

The triumph of synthetic dyes

Synthetic dyes quickly put their natural competitors out of business, despite protectionist government policies. This was not achieved through some ‘Big Coal’ conspiracy, but because the advantages of synthetic dyes were too obvious to overlook. Most significantly, they offered a much greater range of colours (along the lines of a 10-to-one ratio by the turn of the twentieth century) and they were much cheaper. For example, the value of the alizarin used in the world in 1880 was around $8,000,000, while manufacturing the same amount of dye from madder roots would then have cost nearly $28,000,000 (7). This price difference was explained by the abundance and reliability of the coal-tar supply, and by the fact that the preparation of synthetic dyes was much less labour- and input-intensive than the cultivation and extraction of colouring matter from plants.

Synthetic dyes were also of better quality, because plants often exhibited weather-related changes of colour and contained many impurities. Growers and natural-dye makers also had much more incentive to adulterate their products because of their raw materials’ lack of uniformity. Plant supplies were less reliable, too, as they required a significant growing period and could only be harvested periodically. They were also regularly prone to failure or lower yields because of diseases, insects or bad weather. In some cases, dye crops might not be planted at all if other crops proved more lucrative.