Libre Graphics Meeting (LGM) always features talks that provide status updates from application projects, as well as presentations from artists and users about their own graphics work. But the event is also a rare opportunity to hear about the state of the art in the various technology areas that buttress application code itself. At LGM 2015 in Toronto, one such talk was color-management consultant Chris Murphy's status report on the state of color management in free software. Although users can already count on Krita, GIMP, Scribus, and other applications to handle the necessary color transformations, color management remains a field that presents new challenges to developers.

Color management is, historically, one of LGM's biggest success stories. The various applications that make up the core of the free-software creative toolkit are all color-managed now—users who care to configure their hardware and software can expect an image to look correct on all of their displays, regardless of which applications are used to edit it (and in which order), and to be free of surprises when printed (whether professionally or at home). That accomplishment is thanks, in large part, to collaborations that took place in and around previous LGMs.

So when Murphy stood up to begin the session, he started with a joke. "Everything works great. Next talk!"

Though he was kidding, Murphy continued, in a sense everything is great in free software—especially where Linux is concerned. Most applications use the LittleCMS library to transform color pixels from one space to another. The ArgyllCMS project provides good tools for creating accurate color profiles. There are two actively maintained systems for managing color profiles in a desktop environment: colord and Oyranos. And there is even open hardware available for profiling displays: Richard Hughes's ColorHug (which we looked at in 2012).
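For those who have not used it, the LittleCMS workflow is small: open two ICC profiles, build a transform between them, and push pixels through it. Below is a minimal sketch using Pillow's ImageCms module, a Python wrapper around LittleCMS; the image and profile file names are placeholders, not anything shown in the talk.

    # Minimal LittleCMS-based color transform via Pillow's ImageCms.
    # The file paths below are hypothetical examples.
    from PIL import Image, ImageCms

    im = Image.open("photo.jpg").convert("RGB")
    src = ImageCms.getOpenProfile("sRGB.icc")        # source space
    dst = ImageCms.getOpenProfile("my-display.icc")  # destination space

    # Build a transform between the two profiles with the perceptual
    # rendering intent, then apply it to the image's pixels.
    transform = ImageCms.buildTransform(
        src, dst, "RGB", "RGB",
        renderingIntent=ImageCms.Intent.PERCEPTUAL)
    ImageCms.applyTransform(im, transform).save("photo-managed.jpg")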

The situation on Linux contrasts with the state of affairs on Windows and Mac OS X. The Windows color-management library is so buggy that it is disabled by default. Turning it on for professional work, he said, requires "a dance with a dog, a pig, and a pony under a full moon." But the good news, he added, is that no one ever reports bugs about it if they can't use it. On Macs, the situation is reversed: the pro-level color-management features cannot be disabled, so they generate a constant stream of bug reports.

In a lightning talk later in the day, Murphy added a few words about iOS and Android, which he said had simply slipped his mind during the main talk. iOS, he said, has a color-management API, "but I don't think it works. No one uses it." As far as he is aware, there is a single app that leverages it: a proprietary tool from X-Rite; even then, the app is largely inconsequential since it does not make any of its features accessible to other apps. Android is much better; device displays can be profiled and tested. He recommended users start with the color profile testing tools at the color.org web site.

The basic underpinnings of color management in Linux and free software are good, he said; the shortcomings, at present, are found primarily in the user interfaces. The interfaces for activating and tweaking color-management settings vary from application to application and, perhaps more importantly, so do the applications' default settings. Specifically, he highlighted that some parts of the color-management pipeline might be turned on for printing with Ghostscript but turned off for on-screen viewing—which can lead to differences between print output and the screen.

Whichever software a user encounters trouble with, however, Murphy urged them to report bugs. "If you experience strange problems and can't figure out what's going on, write to the OpenICC list. CC me, and I'll try to reproduce it." Many users encounter color bugs, he said, but rarely report them. "I recommend being prolific with your complaints. That's how things get fixed."

Although the overall picture is in good shape for free-software users, Murphy did point out several places where there is new work to watch, and a few areas of concern. One thing the color-science community is still working on, he said, is standardizing black-point compensation: the process of properly accounting for the difference between the darkest blacks that two different devices can produce. The darkest level producible by a digital projector in a well-lit room, for example, is still quite bright in the absolute sense.

There is a draft ISO standard [PDF] addressing how to compensate for black-point differences; developers will want to watch its progress. There are still open questions, such as how the ISO black-point compensation specification should be used in combination with standards from other organizations.
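In the absence of a finished standard, the approach in wide use (implemented in LittleCMS, among others) is a linear rescaling of XYZ values that maps the source black point onto the destination black point while leaving the white point fixed. A rough sketch of that idea, assuming all values are relative to the same D50 white point:

    def black_point_compensate(xyz, src_black, dst_black, white):
        # Linearly rescale an XYZ color so that src_black maps to
        # dst_black while the shared white point maps to itself.
        # This mirrors the common linear-scaling approach that the
        # draft ISO standard is trying to pin down.
        out = []
        for c, bs, bd, w in zip(xyz, src_black, dst_black, white):
            scale = (w - bd) / (w - bs)   # compress toward the new black
            offset = w * (1.0 - scale)    # keep the white point fixed
            out.append(scale * c + offset)
        return tuple(out)

    # Example: remap a mid-gray for a projector whose effective black
    # is much brighter than the source device's black (the values are
    # illustrative XYZ triples, not measured data).
    white = (0.9642, 1.0000, 0.8249)                     # D50
    print(black_point_compensate((0.20, 0.21, 0.17),
                                 (0.002, 0.002, 0.002),  # source black
                                 (0.010, 0.010, 0.008),  # projector black
                                 white))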

Another new development is the recent effort by one of those other organizations—the International Color Consortium (ICC)—to work more openly with the technology community at large. In the early 20th century, the scientists who did pioneering color work published everything in the open, Murphy said. In more recent years, though, international standards bodies and technology companies (such as those that make up the ICC) have done most of the new science and specification writing. Too many scientists get hired away to work on proprietary applications, he said, rather than creating open standards or open-source software.

But the ICC is trying to engage more with the public; its ICC Labs project has published a new (version 4) sRGB profile that should provide better color rendering when printing highly colorful images. Software support still needs to catch up, however. There is support for viewing images using the new sRGB profile in several applications (such as Firefox), but output profiles to translate images into printer color spaces have yet to appear.

There is a significant unsolved problem in color management, however, which Murphy called "the elephant in the room that's about to sneeze and cause a lot of chaos." That problem is optical brightening agents (OBAs), which he introduced as "what laundry detergent, toothpaste, and printer paper have in common." OBAs are fluorescent additives used to make objects appear whiter to the human eye; they absorb radiation in the ultraviolet spectrum and radiate it back out in the visible spectrum—usually in the blues and greens.

OBAs are a clever trick for creating whiter whites, but they wreak havoc with color specifications. They are difficult to measure (and, thus, to adjust for), their performance characteristics vary depending on the light in the viewing room, and they degrade over time. OBAs are one reason why printer paper turns yellow after two years, he said.

It is bad enough that OBAs are in new desktop-printer paper, since they make proofing difficult (for proper proofing, the desktop-printer paper should behave the same as the paper used by the commercial print shop). But as paper stock incorporates more and more recycled content, which he called an undeniably good change overall, it also picks up recycled OBAs in unpredictable amounts and from various sources. Thus, even papers sold as OBA-free may contain some level of OBAs.

Murphy ended the session by noting that the United Nations had declared 2015 the "International Year of Light," a designation intended to promote scientific study. As a result, a number of color-science organizations were conducting programs and workshops that may interest users and developers concerned about color management. The International Commission on Illumination (CIE), for instance, is running a series of Open Lab Days around the globe.

Not to be outdone, Murphy ran his own workshops at LGM apart from his talk: one was a BoF about color management, the other a hands-on session helping users configure a fully color-managed workflow. For those who could not be at LGM, the good news is how many pieces of the color-management puzzle are already in the correct places. But, as the new challenges Murphy outlined reveal, few targets in the software development field sit still for long, color included.

[The author would like to thank Libre Graphics Meeting for assistance with travel to Toronto.]


At Libre Graphics Meeting 2015 in Toronto, Hong Phuc Dang presented an update on the state of various projects from the free-software and open-hardware worlds that deal with garment design and manufacturing, as well as with textiles in general. The scope of the topic is rather large; it encompasses everything from Arduino-driven knitting machines to producing one-off garments for cosplayers to developing software for fashion designers. Thus, there are a great many small projects active in different areas, with the potential to grow into a full-fledged community.

Dang credited Susan Spencer's presentation at LGM 2013 with jump-starting her interest in free software for working with garments and textiles. After that session, she started researching the current state of affairs—talking to fashion designers and students around Asia and Europe, as well as to developers and people in the garment-production business.

In brief, she said, she learned that the fashion industry is, and long has been, slow to adopt new technology. The modern sewing machine is virtually identical in function to the earliest Singer models from the 1850s. Newer machines are faster, and some can be computer-controlled, but they do not offer much else in the way of new capabilities. One of the key reasons for this is that garment manufacturing revolves around notoriously cheap labor. When labor is so inexpensive, producers have no incentive to pay more for newer equipment.

This locks fashion producers into a "race to the bottom" price war, she said, leaving little room to invest in new technology. As a result, the software used even by the largest producers is of low quality. Several designers told Dang that they used CAD drafting software to work on their designs because they cannot find anything else usable in their price range. What software is available is, naturally, proprietary and locked to closed data formats.

At the same time, she said, there are other problems plaguing the industry that also have an impact on technology. As more garment production moves to third-world countries to save costs, first-world communities begin to lose their collective traditional knowledge. Mass production also means that consumers have grown used to generic, one-size-fits-all garments as the norm, even though technology should allow for fast and easy customization—or perhaps even direct collaboration between the designer and the consumer. And mass production generates significant amounts of waste and environmental pollution.

The drawbacks to mass production of garments are reminiscent of the types of problems that the "maker" movement has already tackled in a number of engineering disciplines. Dang believes free software, open hardware, and open data formats can overcome many of these drawbacks, so she has been working to foster connections within the community. Her community-building project is called Fashiontec, and it includes a GitHub organization in addition to the main site.

There are several active free-software projects worth looking at, she said. Design and patternmaking are the purview of Tau Meta Tau Physica, Valentina, and several independent Processing-based efforts. A related project is BodyApps, a 3D body-measurement system developed by members of the Fashiontec community.

While the patternmaking projects focus on cutting and sewing material, there are also several knitting applications in development. Dang cited Knitic and All Yarns Are Beautiful (AYAB) as among the best; a more complete list is available at the Fashiontec GitHub site. Also related is Embroidermodder, an open-source application that can control several programmable embroidery machines.

Most of these knitting projects focus on supporting commercially available hardware devices. On the open-hardware side, there are several projects dedicated to building knitting machines. The best-known of these is OpenKnit, which uses an Arduino to drive a home-built machine that includes a number of 3D-printed specialty parts. There is also an open-hardware embroidery machine built and documented by members of the OpenBuilds project. Some Fashiontec members are also working on reverse engineering a circular knitting machine.

Last but not least, the Fashiontec community has been working to define an open file format that can facilitate data sharing between applications. Called the Human Definition Format (HDF), it is a container format modeled after the OASIS Open Document Format (ODF); it contains structured XML and binary images, and can already be used with Valentina.
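The talk did not go into the format's internals, but the ODF comparison suggests the general shape: a zip container holding structured XML alongside binary assets. Purely as an illustration of that pattern (the member names here are hypothetical, not taken from the HDF specification):

    # Illustration of reading an ODF-style zip container; the member
    # names "measurements.xml" and "images/" are hypothetical and not
    # taken from the actual HDF specification.
    import zipfile
    import xml.etree.ElementTree as ET

    def read_container(path):
        with zipfile.ZipFile(path) as zf:
            root = ET.fromstring(zf.read("measurements.xml"))
            images = {name: zf.read(name)
                      for name in zf.namelist()
                      if name.startswith("images/")}
        return root, images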

Together, these projects constitute an active development scene, but Dang ended her session with a reminder that more is still needed. There are many more hardware devices that need to be "liberated" through reverse-engineering so that they can be used with free software. Individuals still face obstacles to setting up their own maker-style businesses. Some of those obstacles are quite large—such as how to compete with the global-scale distribution channels available to mass-production companies. Dang said she is still researching approaches to that problem.

Other challenges are smaller, such as the difficulty of building custom hardware (such as the open-hardware knitting machine). Here, Dang said that the Fashiontec community is trying to reach out more to the maker movement—hacker spaces in particular, which she said could all benefit from adding a sewing or knitting machine to their stable of 3D printers and laser cutters.

Over the coming year, Fashiontec will have a presence at a number of events, including MeshCon in Berlin this October, FOSDEM, FOSSASIA, and several other free-software conferences. Dang closed by saying that anyone with an interest in textiles, knitting, or garment production is welcome to join the community.

[The author would like to thank Libre Graphics Meeting for assistance with travel to Toronto.]


It has been a Linux container bonanza in San Francisco recently, with a series of events and announcements from multiple startups and cloud hosts; everyone, it seems, is fighting for a piece of what they hope will be a new multi-billion-dollar market. Container Camp was held on April 17, CoreOS Fest on May 5 and 6, and DockerCon is to come near the end of June. While there is a lot of hype, the current container gold rush has yielded more than a few benefits for users — and driven technological development so rapid that it is hard to keep up.

CoreOS Fest demonstrated just how trendy containers are in the startup world right now. The event sold out at 300 attendees, despite being planned within the last six months and held in an ill-suited venue called The Village in San Francisco's Tenderloin; I suspect that DockerCon will be even bigger. Based on responses to speakers' questions, the audience was almost entirely made up of system administrators and dedicated DevOps staff.

Among the latest developments in the container world are new funding, a new appc committee, the release of CoreOS, Inc.'s Tectonic platform, Kubernetes, new tools and techniques for databases on containers, systemd integration, Project Calico, Sysdig, and more. Over this series of three articles, we're going to be exploring some of the developments in the world of Linux containers. But first, some Silicon Valley politics.

Note to forestall confusion: For the rest of this article, "Docker" and "CoreOS" refer to the respective open-source projects and related software, and "Docker, Inc." and "CoreOS, Inc." refer to the companies.

The orchestration gold rush and CoreOS vs. Docker

CoreOS, Inc. was Docker, Inc.'s strongest partner, but split with it only six months before CoreOS Fest, when it launched the competing container platform rkt (formerly known as Rocket). The separation between the two companies seems to have become a divorce, as competition between them for users and capital has heated up. Docker, Inc. received $95 million in Series D funding on April 14; that same week, CoreOS, Inc. raised $12 million, notably including an investment from Google Ventures.

The conference made it obvious that it's a strange separation, though. Probably 80% of the people in the room at the keynote were Docker users, and most of the technologies introduced are compatible with Docker. Yet few people on stage ever said the word "Docker"; one speaker even went so far as to use the phrase "the D word" instead of saying the name.

Much of this competition centers on orchestration: the suite of tools required to deploy, manage, and network the large numbers of containers that make up a container-based software infrastructure. The idea is that, while Linux containers on their own are useful as a development platform, making them the basis for an entire software stack requires several orchestration tools: container schedulers that deploy groups of containers to physical servers, cluster information stores for container data sharing and coordination, software-defined networking to connect containers, and resource-management and monitoring tools.

All of the companies in the container space seem to have decided that orchestration is where they can differentiate their products, and that it is therefore the primary way to exert influence and generate revenue. It's not just Docker, Inc. and CoreOS, Inc. in this field: Red Hat's Project Atomic, Ubuntu's Snappy Core, Joyent's Triton, and the Apache Mesos project are all strong contenders for the future of container orchestration. Notably, Microsoft has announced that Windows Containers, which will bring container deployment to users of Windows Server and the .NET stack, will be available in 2016.

Perhaps because of this intense competition, there was a much stronger emphasis on container security at CoreOS Fest than there had been at prior CoreOS meetups. Weak access controls and a lack of other security measures have been among CoreOS, Inc.'s main criticisms of the Docker project since before the split in December.

The new appc committee

This security focus was evident in the App Container (appc) specification panel. The specification was created, and had its 0.1 release, in December. It describes the required properties of an Application Container Image (ACI); rkt is CoreOS's implementation of that specification, as explained in an earlier article.
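At the heart of the spec is the image manifest: a JSON document packed into the ACI alongside the application's filesystem tree. As a rough illustration, a 0.5-era manifest looks something like the following (the field names follow the draft spec, but since the spec was still in flux, treat the details as approximate):

    # Sketch of an appc image manifest, the JSON metadata that tells a
    # runtime such as rkt how to run the image.  Field names follow
    # the 0.5-era draft spec; details should be treated as approximate.
    import json

    manifest = {
        "acKind": "ImageManifest",
        "acVersion": "0.5.1",
        "name": "example.com/hello",          # DNS-like image name
        "labels": [
            {"name": "os", "value": "linux"},
            {"name": "arch", "value": "amd64"},
        ],
        "app": {
            "exec": ["/usr/bin/hello"],       # command run on start
            "user": "0",                      # uid the app runs as
            "group": "0",
        },
    }

    # The manifest is stored as JSON at the root of the ACI tarball.
    print(json.dumps(manifest, indent=2))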

Before discussing any new features, CoreOS, Inc. CEO Alex Polvi cautioned the audience that the committee was still working on the security part of the specification; "sometimes it takes a while to get these things right", he said. He then introduced the members of the panel, who are also the committee in charge of the new "appc specification community": Vincent Batts of Red Hat, Tim Hockin of Google, Charles Aylward of Twitter, along with Brandon Philips and Jonathan Boulle of CoreOS, Inc. Ken Robertson of Apcera was also on the panel, although he is not a member of the committee.

That was one of the two big announcements of the morning: CoreOS has created a governance document and turned over the appc specification project to a committee of "maintainers", the majority of whom do not work for CoreOS, Inc. While this is not a foundation or other incorporated body, the move seems intended to make appc a real, independent specification. It was also a demonstration of partner support for the spec. "[appc] should feel like the HTML 5 standard. Shared standards plus competition creates better product", Polvi said.

To start out, each of the panelists explained their company's interest in the appc specification.

Apcera was working on its own closed-source container technology when the appc project was announced, and quickly worked to bring that technology in line with the draft specification. "When we saw the rkt announcements, we thought 'damn, now we have to build an abstraction'", said Robertson. He also announced the release of Kurma, Apcera's bootable container infrastructure that is compatible with appc-compliant containers.

Twitter already had a lot of infrastructure, and Docker didn't fit in with what it had, Aylward said; rkt and appc allowed the company to pick and choose what it implemented. Hockin noted that Google is looking to create an open-source platform that mirrors how its large-scale, proprietary container platform works, and has formed a tight partnership with CoreOS to support it. "Coming from Google, I'm interested in building the cathedral. But before you can build the cathedral, you need to pour the foundation", he said.

Batts was more equivocal, saying that Red Hat's interest in appc is in supporting standards and user choice. Since Red Hat's Project Atomic is also closely aligned with Docker, Red Hat's fairly neutral stance makes sense. He explained it as "finding commonalities and working with them which drives everything else forward."

Once corporate politics were out of the way, the panelists discussed the state of the spec and current development. They started with some of the major challenges and feature requests, such as making encryption work with service discovery, the need for a better ACI validator, and the need to lock down more system calls inside the container for better security. The main challenge, however, is that parts of the 0.5 specification are still vaguely described, which frequently forces the rkt team to halt work while the specification is hammered out.

"You can write a spec, but without an implementation, you don't know that you can build it. So implementation and spec need to go hand-in-hand", said Aylward.

The committee agreed on the main goal of the project: for ACI to be the reference format for container images, and for developers to build ACI images first, then use them to create whatever other packages are needed. There was less agreement on other matters. For example, while CoreOS, Inc. is devoted to systemd for container bootstrapping and initialization, Google is not using systemd. Hockin also disagreed with the other committee members on how much container isolation could be part of the spec. He believes that, eventually, by separating the general "spec" from the "os-spec", appc can encompass a full application binary interface (ABI) in order to provide full isolation for container runtimes. "It's pretty well understood that containers are not a security barrier. This is something that needs to evolve from the inside out", Hockin said.

Tectonic

The other major announcement for the conference was CoreOS, Inc.'s launch of the Tectonic platform, which is the full CoreOS, Inc. suite of tools. That includes CoreOS Linux, the container deployment tool fleet, the clustered data store etcd, the flannel virtual networking system, and the image repository Quay.io, all combined with Google's Kubernetes project (see below). The idea is to present a single, user-friendly integrated platform for large-scale container orchestration. Polvi called it "Google's infrastructure for everyone else, or GIFEE".

Tectonic is proprietary, commercial software that CoreOS, Inc. plans to sell to customers who want a fully integrated stack with a nice GUI and are willing to pay for it. While all the tools used are available as open source — except for the GUI — doing your own orchestration is difficult due to the newness of the tools and the complex ways in which they interact.

CoreOS's fleet and flannel may seem to have overlapping and conflicting functionality with Kubernetes, but in Tectonic they are complementary. According to Kelsey Hightower of CoreOS, Inc., fleet is used in Tectonic to bootstrap and monitor Kubernetes, which can otherwise require a lot of hand configuration. Flannel supplies an overlay networking system that supports Kubernetes' service discovery features.

As a demonstration, Intel, Supermicro, and data-center vendor Redapt announced a joint venture to make preconfigured Tectonic stacks available. At the conference, they showed off a quarter-rack of servers running the beta version of Tectonic as a "plug and play" container infrastructure that was ready to go. It is also possible to run the Tectonic beta on top of Amazon EC2.

Kubernetes

The only project logo as pervasive at CoreOS Fest as CoreOS's own was the Kubernetes ship's wheel. Brendan Burns, head of the Kubernetes project at Google, explained what Kubernetes is, how it works, and how it relates to CoreOS and containers.

He started by separating operations into four layers: application ops, cluster ops, kernel ops, and hardware ops. Kubernetes operates at the level of cluster ops, synchronizing servers into a "unified compute substrate", in order to decouple application requirements from specific knowledge of the hardware, in the same way that a public cloud does.

Developers interact with Kubernetes through its API server, which supports both a command-line interface and a JSON-based web API. All of its data is stored in etcd. Like the configuration-management system Puppet, Kubernetes uses a declarative approach: users specify the state the system should be in, and Kubernetes reconciles the actual state with the desired state. An example of such a declaration would be "exactly three Redis servers should be running", which would cause Kubernetes to either stop or start containers until that declaration was true.
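As a concrete sketch, the Redis example above would be expressed as a ReplicationController object posted to the API server. The JSON below follows the 2015-era v1 API; consider it illustrative rather than a tested deployment.

    # Sketch of "exactly three Redis servers should be running" as a
    # Kubernetes v1 ReplicationController.  Kubernetes starts or stops
    # pods until the actual state matches this declaration.
    import requests

    redis_rc = {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": "redis"},
        "spec": {
            "replicas": 3,                    # the desired state
            "selector": {"app": "redis"},     # pods this controller owns
            "template": {                     # template for new replicas
                "metadata": {"labels": {"app": "redis"}},
                "spec": {"containers": [
                    {"name": "redis", "image": "redis:2.8",
                     "ports": [{"containerPort": 6379}]},
                ]},
            },
        },
    }

    # Post it to an API server listening on the (insecure) local port.
    requests.post("http://127.0.0.1:8080/api/v1/namespaces/default"
                  "/replicationcontrollers", json=redis_rc)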

Deploying containers to servers in order to provide requested services is known as "scheduling". The "atomic unit of scheduling" in Kubernetes is the "pod": a group of containers, networking, and data volumes. This allows Kubernetes to schedule services made up of multiple components that must be placed on the same physical server, such as a database and its file storage.

The other big feature of Kubernetes is service discovery, which lets application developers use service proxies to talk to services without knowing where those containers are on the network. This proxy network is driven by "labels" attached to each pod and container that show the services that they provide. In this model, multiple pods supplying the same service are treated as fungible units — Kubernetes will load-balance among them.
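A service, in turn, is just another declarative object whose label selector picks out the backing pods. Continuing the hypothetical Redis example from above:

    # Sketch of label-driven service discovery: this v1 Service
    # forwards traffic to whichever pods carry the "app: redis" label,
    # load-balancing across them as fungible backends.
    redis_service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "redis"},
        "spec": {
            "selector": {"app": "redis"},     # match pods by label
            "ports": [{"port": 6379, "targetPort": 6379}],
        },
    }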

Compared with competing orchestration frameworks such as CoreOS's own fleet, Apache Mesos, or Docker, Inc.'s Swarm and Machine, Kubernetes feels more feature-complete and mature in simple trials at my company. Since it's a de facto port of Google's own, in-production orchestration software, this should not be surprising.

The only tool from the CoreOS stack that Kubernetes actually requires is etcd, although flannel can be used to support Kubernetes service discovery with virtual networking. The etcd store can share metadata for Docker and rkt containers equally well. However, given the close alliance between Google and CoreOS, Inc., further integration with CoreOS tools seems likely.
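That neutrality toward container runtimes is easy to see in etcd's interface: it is a plain HTTP key-value store (the v2 API, at the time of writing), so any host that can speak HTTP can share the same key space. A quick sketch, assuming etcd is listening on its default port:

    # Sharing a piece of metadata through etcd's v2 HTTP API.  Any
    # node in the cluster -- Docker host, rkt host, or Kubernetes
    # component -- sees the same keys.
    import requests

    BASE = "http://127.0.0.1:2379/v2/keys"

    # Publish a key...
    requests.put(BASE + "/services/web/endpoint",
                 data={"value": "10.0.1.7:8080"})

    # ...and read it back from anywhere else in the cluster.
    resp = requests.get(BASE + "/services/web/endpoint")
    print(resp.json()["node"]["value"])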

Next up

The pace of new tools, companies, techniques, and practices in the Linux container world has been extremely rapid, and it is only through events like CoreOS Fest that I have been able to keep pace. The alliances between companies and open-source projects are shifting constantly, in a way that we haven't seen since the early days of mobile Linux.

In the next part of this series, we'll be covering systemd and CoreOS, the rise of Go as the language of container tooling, and the new projects Calico and Sysdig. We will conclude with an article about the issues with, and solutions for, storing persistent data on container infrastructures, including PostgreSQL Governor, CockroachDB, etcd, and the Raft consensus algorithm.


It has been a while since the last update on the status of LWN. There are a few changes coming to the LWN site, so this seems like a good time for a summary of the various bits of metanews that have built up.

Perhaps the biggest upcoming change is that we are getting closer to switching over to the new responsive site design by default. Readers who have not yet done so can test out the new design by setting the appropriate preference in the account area. Those who have tried it may wish to give it another look; things have changed significantly in the last few weeks. This feature is no longer limited to subscribers; one does, though, need to be logged in to be able to change to the new design. A few small glitches remain, but most of the big problems have been ironed out — as far as we know.

Once the default changeover happens, it will still be possible to use the older design by changing the same preference value. We will keep that code around for now, but, it must be said, we are unlikely to use it ourselves or to put a lot of maintenance effort into it. Eventually the older mode is likely to fade away unless a strong reason to keep it surfaces.

One problem that came up during the work on this project was the difficulty of finding a spot for the text advertisement that traditionally runs in the left column. As it happens, nobody has bought such an ad in 2015, and only two were sold in 2014. We therefore conclude that LWN text ads are something less than a compelling offering at this point. So, support for text ads has been removed.

In its place, a feature that has been quietly added to the new design is the ability to use Google fonts to render LWN pages. This feature is currently experimental and might be removed in the future. Google fonts are disabled by default, but can be turned on in the preferences page. Note that doing so will cause the fonts to be downloaded from Google's servers if they are not already in your browser's cache. Google's privacy promises regarding fonts seem pretty solid, but we remain reluctant to turn them on by default; reader opinions on the matter would be of interest.

In general terms, LWN is currently running on a solid financial footing and won't be going away anytime soon. It is worth noting, though, that individual subscriptions have been nearly level for a few years now (group subscriptions are up a bit). We have also seen a bit of a tendency for subscribers to drop down to the lower subscription levels. A larger subscription base would enable us to hire more staff (something that your editor, currently dealing with workers compensation insurance issues, would appreciate) and expand our coverage.

So we would like to thank all of our subscribers, and to encourage other readers to subscribe to LWN. That is, in the end, the only thing that keeps this site on the net. We have been at it for 17 years now, but, sometimes, it feels like we're just getting started. Much of interest is going to happen in the free software world in the coming years, and we'll be there to report on it.
