The Kubuntu distribution—a KDE-focused version of Ubuntu—has been one of the more successful Ubuntu "flavors". It celebrated its tenth anniversary earlier this year, which makes it roughly six months younger than its parent. Some recent events have left some wondering about the future of the distribution—or at least its future under the Ubuntu umbrella.

Kubuntu founder Jonathan Riddell (who penned our retrospective linked above) apparently ran afoul of the Ubuntu Community Council (UCC) due to his persistence (some might say belligerence) in pursuing two questions. One of those was about Canonical's policy for redistributing binaries from the Ubuntu web site, while the other was about the accounting for some donations that had been made by users when downloading Ubuntu code. He had put these questions before the council at some point, seemingly a year or more back.

On May 20, the UCC evidently determined that Riddell's behavior had reached a point where it crossed a line of some kind and asked that he "step down as a Kubuntu Council member and from all Ubuntu leadership roles for the next 12 months". The reasons given for that request were a handful of disagreements over his interaction with and reaction to the UCC and its members. The UCC summarized the problems as follows:

In the last few weeks we have come to the conclusion that you have lost perspective on the bigger project and more importantly Ubuntu’s ideals. It became harder and harder to work with you, up to the point where several people involved in the conversations felt hurt and burned out. Whatever problems you saw and wanted to see resolved could have been dealt with in a much different way. No matter how much you felt was going wrong, the people you are talking to are still human beings.

These emails were sent in private to the participants and the Kubuntu Council (KC). For most of the following week, it would seem that KC members, especially Scott Kitterman, tried to convince the UCC that it had made a mistake and to get a dialog going to resolve the issue. When that failed, Kitterman released a batch of those emails, which was, he said, in keeping with the spirit of the Ubuntu governance goals: "Decisions regarding the Ubuntu distribution and community are taken in a fair and transparent fashion."

The message from the UCC to Riddell talks about problems over the past year, but particularly calls out the past month. Two threads started by Riddell in the ubuntu-community-team mailing list would appear to be at least part of the flash point for the council. A May 1 post reiterates Riddell's position on Canonical's intellectual property rights policy and, in particular, its requirement that redistribution of binary packages requires recompiling the source code.

The second thread started with a post from Riddell about accounting for donations that were made on the download page at ubuntu.com. From October 2012 to June 2013, users could choose to allocate their donation toward particular initiatives, one of which was to provide "better support for flavours like Kubuntu, Xubuntu, Lubuntu". Riddell wondered how much money was collected and where it went. In both cases, and throughout both threads, he expressed frustration and unhappiness with the UCC and its actions—often in strong words. For example:

I'm at a loss on how to proceed on this matter as the community council seems unconcerned at the problem and Canonical is happy to carry on perpetrating the myth. I'd welcome any suggestions.

This is not a complex issue or one that is beyond that capabilities of the Ubuntu community to understand. Canonical are directly claiming that Ubuntu packages can not be freely shared. It is a very direct restriction of basic open source freedoms. The Ubuntu Community Council should be making a strong statement that the claim is untrue but they seem to be unwilling which makes me wonder how else can the Ubuntu community stand up for its own basic policies.

It turns out that both issues have been worked on by the UCC, though neither has been completed to Riddell's satisfaction—which is why he keeps bringing them up. But the council feels that it has done what it can. The IP rights statement is currently being discussed between Canonical's lawyers and those at the Software Freedom Law Center on behalf of the Free Software Foundation. According to Benjamin Kerensa, that discussion has been going on for several years but may be coming to a close soon. As far as the donations issue goes, UCC member Charles Profitt said that the council looked into it, recognized that Canonical had failed to account for that money properly, but "that Canonical did not violate the communities trust". So Riddell is beating two dead horses, at least from the perspective of the council.

There are a vast number of opinions of the council's actions, Riddell's actions, what each should have done, and what it means for Ubuntu and Kubuntu going forward. What is clear is that the council got fed up with Riddell continuing to stir up the issues (others might characterize it as "badgering the council") and decided to try to put a stop to it. But that action seems to have raised problems of its own, entirely separate from how one feels about Riddell's behavior. It has the look of a potential "constitutional crisis", though that term may not make any sense in Ubuntu's benevolent dictatorship governance model.

Riddell responded to the UCC request that he step away from leadership roles in Ubuntu for twelve months by rejecting it "because I disagree entirely with the accusations against me". But that led to a note from UCC member (and Ubuntu's self-appointed benevolent dictator for life, or SABDFL) Mark Shuttleworth, who clearly stood behind the UCC decision and, effectively, rejected Riddell's ability to reject the request:

It is therefore not a question of whether or not you accept the CC request to step down. This is a statement from the CC that we no longer recognise you as the leader of the Kubuntu community. This decision has been rather painful, but is judged necessary. Whether or not you agree with the position, it is the final position of the CC.

Since then, Riddell has quibbled with the characterization of him as a Kubuntu leader, which seems a bit silly given the history. In addition, though, the Kubuntu Council reaffirmed his position on that body. That would seem to leave the two councils at loggerheads. Given that the SABDFL sits on one of them, it is fairly clear which will "win", but what does that mean for Kubuntu going forward?

As numerous people have pointed out, there is nothing really tying Kubuntu to Ubuntu other than its name. The Kubuntu trademark is owned by Canonical and is unlikely to be allowed to be used on a distribution based directly on, say, Debian. Certainly Canonical provides a great deal of infrastructure for the project and Kubuntu has made its place within the Ubuntu family (and the Ubuntu community, for that matter). From a technical perspective, though, all of those things are solvable.

Both Ubuntu and Kubuntu (and KDE, the project that provides Kubuntu's main distinguishing feature) are quite community-oriented, however. So it is not a simple technical question by any means. On the other hand, given the defiance of the UCC's edict by the Kubuntu Council (which was just recently elected by Kubuntu members), it is hard to see how the current community continues as Kubuntu unless the UCC relents. Already, Kitterman has indicated that he may put his energy elsewhere, for example.

If the Ubuntu community, its council, and its SABDFL want to continue to have a Kubuntu going forward, they should all carefully consider their next steps. Likewise, Kubuntu developers should carefully consider their options before taking any, potentially rash, steps. There would still seem to be room for reconciliation, but someone will have to take that first step.

One interesting result of all of this is that more details about the donations have been released. Of a bit less than $150,000 donated, $47,000 was dedicated to flavors, though how it was spent will never be known. If nothing else, getting that information is something of a win for Riddell.


Increasingly, users want to containerize their entire infrastructure, which means not just web servers, queues, and proxies, but database servers, file storage, and caches as well. In fact, one of the first questions from the audience at the recent CoreOS Fest during the application container (appc) specification panel was: "What about stateful containers?" These services require persistent data, or "state", that can't be casually discarded. As Tim Hockin of Google said: "Ideally everything is stateless. But there has to be a turtle at the bottom that holds the state."

Storing persistent data for containers has been a chronic issue since Docker was introduced, because the initial design for Docker simply didn't deal with the concept of data that needs to outlive the container runtime and that can't be easily moved from one machine to another. Docker's only concession to the need for state in version 1.0 was to allow "volumes": external filesystems that the container could access, but that were otherwise unmanaged by the Docker system. The conventional wisdom was not to put your data into containers.

According to the panel, the appc spec may take this a step further by specifying that the entire container image and all of its initial files be immutable, except for specific configured directories or mounts. The idea is that any data that administrators care about needs to be put in specific, network-mounted directories, anyway. Managing these directories will then be left to orchestration frameworks.

A reader would be forgiven for thinking that both Docker and CoreOS had decided to ignore the issue of persistent data for the time being. Certainly a lot of developers in the "world of containers" seem to think so and are working to fill that gap. Those developers described some of their projects to deal with persistent data at ContainerCamp and CoreOS Fest. One solution is to make distributed data storage trustworthy, so that administrators don't have to worry about data being persistent on any specific node.

The Raft consensus algorithm

A reliable distributed data store needs to have some way of ensuring that the data in it will eventually, if not immediately, become consistent across all nodes. While this is easy on a single-node database, a multi-node database where any of the individual nodes is allowed to fail requires more complex logic to determine how to make writes consistent. This is called "consensus", which is an agreement on shared state across multiple nodes. Diego Ongaro, developer of Raft, LogCabin, and RAMCloud, described to the audience how it works.

The Paxos consensus algorithm was introduced by Leslie Lamport in 1989, and for over two decades was the first and last word in consensus. But it had some problems, the chief of which was extreme complexity. "Maybe five people really, truly understood every part of Paxos," Ongaro said. Students and programmers alike found it difficult or impossible to write tests that validated whether the protocols described by Paxos were correctly implemented. This meant that, despite being based on the same algorithm, different Paxos implementations were radically different from each other, and could not be proven to have correctly implemented the algorithm, he said.

To solve this, Ongaro and Professor John Ousterhout at Stanford created a new consensus algorithm that they designed to be simple, testable, and easy to explain to developers. Ongaro's PhD thesis described the algorithm [PDF], called Raft; the name refers to the desire to "escape the island of Paxos." Ongaro described their reasoning: "At every design choice, we asked ourselves: what's easier to explain?"

Judging by the number of projects that implement Raft, chief among them CoreOS's etcd, they have been successful in their goal of comprehensibility. Ongaro explained the core workings of Raft in a half-hour session, and demonstrated it using a Raft model written entirely in JavaScript.

Each node in the Raft cluster has a consensus module and a state machine. The state machine stores the data you care about, including a serialized log of state changes to that data. The consensus module makes sure that that log is consistent with the log of every other node in the cluster, which requires all state changes (writes) to be initiated by a "leader" node. With each write, the leader sends out a message to all nodes confirming the write, and only makes the write permanent if a majority of all nodes (called a "quorum") confirms. This is a form of "two-phase commit", which has long been a component of distributed databases.
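The quorum rule in that description can be illustrated with a toy model (hypothetical class and names, not the etcd or LogCabin implementation): a leader proposes a log entry, counts acknowledgments from the cluster, and commits the entry only once a majority has confirmed it.

```python
# Toy sketch of Raft-style quorum commit (illustration only,
# not a real consensus implementation).
class ToyLeader:
    def __init__(self, cluster_size):
        self.cluster_size = cluster_size
        self.log = []          # committed (durable) entries
        self.pending = []      # entries still awaiting quorum

    def propose(self, entry, acks):
        """Commit `entry` only if a majority of all nodes in the
        cluster (the "quorum") acknowledged replicating it."""
        quorum = self.cluster_size // 2 + 1
        if acks >= quorum:
            self.log.append(entry)
            return True        # the write is now permanent
        self.pending.append(entry)
        return False           # not enough confirmations yet

leader = ToyLeader(cluster_size=5)
assert leader.propose("x=1", acks=3) is True   # 3 of 5 is a majority
assert leader.propose("x=2", acks=2) is False  # 2 of 5 is not
```

Note that the majority requirement is what makes the scheme safe: any two quorums in the same cluster must overlap in at least one node, so a committed entry cannot silently be lost.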

Each node in the cluster also has a countdown clock that waits a random but significant time for a leader to send a message. If the current leader is unavailable, the node that counts down first sends out a message requesting a leader election. If a quorum confirms the leader election message, then the sender becomes the new leader. Log messages sent out by the new leader have a new value for the "term" field in the message that indicates which node was leader when the write happened, preventing log conflicts.
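The election rule can be sketched the same way (again a hypothetical toy, just mirroring the prose): each node waits a random timeout, the first to expire becomes the candidate, and it wins a new, higher term only if a quorum votes for it.

```python
import random

# Toy sketch of Raft-style leader election (illustration only): the
# node whose randomized countdown expires first requests votes; it
# becomes leader for a new, higher term if a majority agrees.
def elect(node_ids, current_term, votes_for_candidate):
    quorum = len(node_ids) // 2 + 1
    # The candidate is the node with the shortest random timeout.
    timeouts = {n: random.uniform(0.15, 0.30) for n in node_ids}
    candidate = min(timeouts, key=timeouts.get)
    if votes_for_candidate >= quorum:
        return candidate, current_term + 1   # new leader, new term
    return None, current_term                # failed; nodes retry

nodes = ["a", "b", "c", "d", "e"]
leader, term = elect(nodes, current_term=7, votes_for_candidate=3)
assert leader in nodes and term == 8
```

The randomized timeouts exist to make split votes unlikely: if two nodes counted down in lockstep, they could repeatedly split the vote and stall the election.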

There is obviously more to Raft than that, such as how missing and extraneous entries are dealt with, but Ongaro was able to explain the core design in less than half an hour. This simplicity means that many developers have been able to produce software using Raft, including projects like etcd, CockroachDB, Consul, and libraries for Python, C, Go, Erlang, Java, and Scala.

[ Update: Josh Berkus updates some of the information about Raft in a comment below. ]

Etcd and Intel

Nic Weaver, who works on software-defined infrastructure (SDI) at Intel, spoke about what the company has been doing to improve etcd. Intel has a strong interest in both Docker and CoreOS because it is trying to help users scale to larger numbers of machines per administrator. Cloud hosting has allowed companies to scale to numbers of services where configuration management by itself isn't adequate, and Intel sees containers as a way to scale further.

As such, Intel has tasked Weaver's team with helping to improve container infrastructure software. In addition to releasing the Tectonic cluster server stack with Supermicro as mentioned in the first article of this series, it also put some work into the software. The component Intel decided to start with was etcd — looking at what it would take to build a really large etcd cluster. As container infrastructures managed with tools based on etcd grow to thousands of containers, the number of required etcd nodes grows and the number of writes to etcd grows even faster.

The problem that the team observed with etcd was that the more nodes you had in a cluster, the slower it would get. This is because, per the Raft algorithm, a write would require more than 50% of the etcd nodes to sync to disk and return success. So even a few nodes with chronic slow storage issues can hold up the cluster. If disk sync was not required, it would eliminate one source of cluster slowdown, but would put the entire cluster at risk of corruption in the event of a data center power loss.

Their solution to this was to make use of a facility added to Xeon processors called "asynchronous DRAM self-refresh" (ADR). This is a small designated area of RAM that is preserved when there is a crash and restored on system restart. It was created to support dedicated storage devices. There is a Linux API for ADR, though, so applications like etcd can use it.

Modifying etcd to use the ADR buffer as the write buffer to its logs was a success. Write time went from 25% to 2% of overall time in the cluster, and it was able to double throughput to 10,000 writes per second. This patch will soon be submitted to the etcd project.

CockroachDB

One of the natural steps to take with the Raft consensus algorithm is to go beyond the etcd key-value store and to build a full-service database around it. While etcd is adequate for configuration information, it lacks many of the features users want in application databases, such as transactions and support for complex requests. Spencer Kimball of Cockroach Labs explained how his team was doing so. The new database is called CockroachDB, because it is intended to be, in his words, "impossible to stamp out. You kill it, and it pops up again somewhere else."

CockroachDB is designed to be similar to Google's Megastore, a project Kimball was quite familiar with from his time at Google. The idea is to support consistency and availability across the whole cluster, including support for transactions. The project is planning to add a SQL-compatible layer on top of the distributed key-value store, as Google's Spanner project did. By having transactions as well as both SQL and key-value modes, it can enable most of the common uses of databases. "We want users to build apps, not workarounds," said Kimball.

The database is deployed as a set of containers, distributed across servers. The key-value address space for the database is partitioned into "ranges", and each range is copied to a subset of the available nodes, usually three or five. Kimball calls this cluster of individual Raft consensus groups "MultiRaft". This allows the entire cluster to contain more data than is present on any individual node, helping to scale the database.
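The "ranges" scheme can be sketched in a few lines (hypothetical boundary keys and node names; CockroachDB's actual placement logic is far more involved): the sorted key space is split at boundary keys, and each resulting range is assigned its own small replica set — its own consensus group in the "MultiRaft" design.

```python
import bisect

# Toy sketch of range-partitioned replication: the key space is split
# at boundary keys, and each range has its own replica set (each of
# which would run its own Raft group in the "MultiRaft" design).
boundaries = ["g", "p"]                 # splits keys into 3 ranges
replica_sets = [
    ["node1", "node2", "node3"],        # range 1: keys <  "g"
    ["node2", "node4", "node5"],        # range 2: "g" <= keys < "p"
    ["node1", "node3", "node5"],        # range 3: keys >= "p"
]

def replicas_for(key):
    """Locate the replica set that stores `key` via binary search."""
    return replica_sets[bisect.bisect_right(boundaries, key)]

assert replicas_for("apple") == ["node1", "node2", "node3"]
assert replicas_for("melon") == ["node2", "node4", "node5"]
assert replicas_for("zebra") == ["node1", "node3", "node5"]
```

Because no single node holds every range, the cluster's capacity grows with the number of nodes rather than being capped by the smallest one.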

Each node runs in "serializable" transaction mode by default, which means that all transactions must be replayable in log order. If a transaction has a serialization failure, it is rolled back on the originating node. This permits distributed transactions without unnecessary locking.

From this, it sounds like CockroachDB might be the answer to everyone's distributed infrastructure data persistence issues. It has one major fault though: the project isn't yet close to a stable release, and many of the planned features, such as SQL support, haven't been written yet. So while it may solve many persistence issues in the future, there are other solutions for right now.

High-availability PostgreSQL

Since highly distributed databases aren't yet ready for production use, developers are taking existing popular databases and making them fault-tolerant and container-friendly. Two such projects build up high-availability PostgreSQL: Flocker from ClusterHQ and Governor from Compose.io.

Luke Marsden, CTO of ClusterHQ, presented Flocker at ContainerCamp. Flocker is a data volume management tool designed to help host databases in containers. It uses volume management that supports migrating database containers from one physical machine to another. This means that orchestration frameworks can redeploy database containers in almost the same way they would stateless services, which has been one of the challenges to containerizing databases.

Flocker is able to support migrating containers between physical machines by making use of ZFS on Linux. Flocker creates Docker volumes on specially managed ZFS directories, allowing the user to move and copy those volumes by using exportable ZFS snapshots. Operations are performed via a simple declarative command-line interface.

Flocker is designed as a plugin for Docker. One challenge for the Flocker team is that Docker doesn't currently support plugins. The team created a plugin infrastructure called Powerstrip, but that tool has yet to be accepted into mainstream Docker. Until it is, the Flocker project can't provide a unified management interface.

If Flocker solves the container migration problem, then the Governor project from Compose, presented by Chris Winslett at CoreOS Fest, aims to solve the availability problem. Governor is an orchestration prototype for a self-managing replicated PostgreSQL cluster, and is a simplified version of the Compose infrastructure.

Compose is a Software-as-a-Service (SaaS) hosting company, which means that the services it offers need to be entirely automated. In order for Compose to deploy PostgreSQL, it needed to support automatic database replica deployment and failover. Since users have full database access, Compose also needed a solution that didn't require making any changes to the PostgreSQL code or to users' databases.

One of the things Winslett figured out quickly was that PostgreSQL could not be the canonical store of its own availability and replication state, because the master and all replicas would have identical information. This led to implementing a solution based on Consul, a distributed high-availability information service. However, Consul requires 40GB of virtual memory for each data node, which wasn't practical for tiny cloud server nodes. Winslett abandoned Consul for the much simpler etcd and, in the process, substantially simplified the failover logic.

Governor works by having the governor daemon control PostgreSQL in each container. Governor queries etcd to find out who the current master is and replicates from it on startup. If there is no master, it attempts to seize the leader key on etcd, and etcd ensures that only one requester can win that contest. Whichever node gets the leader key becomes the new master and the other nodes start replicating from it. Since the leader key has a time-to-live (TTL), if the master fails, a new leader election will follow shortly, ensuring that there will quickly be a new master.
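The leader-key pattern Winslett described can be modeled with a toy in-memory stand-in for etcd (hypothetical names; this is not the etcd API): creating the key succeeds for exactly one requester at a time, and the key expires after its TTL so that a replacement election can happen when the master dies.

```python
import time

# Toy in-memory stand-in for an etcd-style atomic "create if absent"
# with a TTL, as used for Governor's leader key (illustration only).
class ToyStore:
    def __init__(self):
        self.data = {}   # key -> (holder, expiry_timestamp)

    def acquire_leader(self, key, node, ttl, now=None):
        """Attempt to seize the leader key; only one node can win."""
        now = time.time() if now is None else now
        current = self.data.get(key)
        if current is not None and current[1] > now:
            return False                     # a live leader holds it
        self.data[key] = (node, now + ttl)   # seize the key with a TTL
        return True

store = ToyStore()
assert store.acquire_leader("pg/leader", "node-a", ttl=30, now=100) is True
assert store.acquire_leader("pg/leader", "node-b", ttl=30, now=110) is False
# Once the TTL lapses, a replacement election succeeds:
assert store.acquire_leader("pg/leader", "node-b", ttl=30, now=140) is True
```

In the real system the acting master must also keep refreshing the key before the TTL expires; forgetting a refresh is equivalent to resigning.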

This means that Compose can treat PostgreSQL almost the same as it treats the multi-master, non-relational databases it supports, like MongoDB and Elasticsearch. In the new system, PostgreSQL nodes can be configured in etcd, and then deployed using container orchestration systems, without hands-on administration or handling those containers differently.

Conclusion

Of course, there are many more projects and presentations than the ones mentioned above. For example, Sean McCord spoke at CoreOS Fest about using the distributed filesystem Ceph as a block device inside Docker containers, as well as running Ceph with each node in a container. While this approach is fairly rudimentary right now, it offers another option for containers that need to run services that depend on large file storage. Cloudconfig, CoreOS's new boot-from-a-container tool, was also introduced by Alex Crawford.

As the Linux container ecosystem moves from test instances and web servers into databases and file storage, we can expect to continue to see new approaches to solving these kinds of problems, increasing scale, and integrating with other tools. If anything is clear from CoreOS Fest and ContainerCamp, it's that Linux container technology is still young and we can expect many more new projects and dramatic changes in approaches over the next year.


In 2013, we reported that SourceForge.net had started to redirect the download links clicked on by some users, providing those users with an installer program that bundled in not just the software the user had requested, but a set of side-loaded "utilities" as well. The practice raised the ire of many in the community, even though it was an optional service that SourceForge offered to project owners. Matters may have changed recently, however, as the GIMP project discovered that "GIMP for Windows" downloads had suddenly become side-loading installers—and that the project could no longer access the SourceForge account that was used to distribute them.

As a refresher, the SourceForge side-loading installer was rolled out in 2013 as a program called DevShare, an optional service made available to SourceForge projects. The program replaced the generic installer package that a project uploaded for users with a customized installer that bundled in several smaller programs provided by SourceForge revenue partners. These side-loaded programs had a generally negative reputation—as "adware," "annoyware," or simply junk that consumed the user's computing resources uninvited.

But there were, at least initially, some clearly defined limits. DevShare only provided side-loading installers for Windows downloads, and project owners were told that they would have full control over what their program's installers contained. Nevertheless, a lot of projects found DevShare unacceptable and some of them—including GIMP—decided to move their project infrastructure off of SourceForge entirely. Perhaps notably, SourceForge responded to GIMP's departure in a blog post, highlighting the opt-in nature and transparency of DevShare—even reassuring the community that "we will NEVER bundle offers with any project without the developers consent."

Since late 2013, GIMP has hosted its downloads for all platforms (including Windows) at the gimp.org site. Up to that point, Jernej Simončič had been the maintainer of the GIMP for Windows project account at SourceForge, which the GIMP team had used to release Windows-specific installers. After the migration away from SourceForge, the GIMP for Windows account went dormant.

So it was a surprise that, on May 26, GIMP user "Ofnuts" first sent an email to the GIMP developers' list reporting that the GIMP for Windows page was now serving up DevShare side-loading installers. Ofnuts noted that the SourceForge project page was still the target of many links (which, of course, makes it rank high in search results), and suggested that it would be better to break those links than to have users download a side-loading installer.

Simončič then replied that he could no longer access the GIMP for Windows account, "apparently due to inactivity, although they haven't done anything like that with a few other inactive projects I'm a member of" and that SourceForge had not replied to his request that it stop distributing the unauthorized installer. Jehan Pagès noted that the SourceForge page included several packages posted after the GIMP team had left SourceForge, and that "this is clearly an impersonation of the official GIMP team. The GPL license allows anyone to do forks of GIMP, or do alternative packages. But that does not give them the right to pretend to be the official upstream."

The GIMP team then posted a notice on Google Plus, accusing SourceForge of hijacking the GIMP for Windows account, and warning users to download releases only from gimp.org itself. It also added an announcement to the GIMP home page warning users against downloading the SourceForge packages. The chain of events was quickly picked up by Hacker News, Reddit, and other online discussion forums.

A little more investigating revealed that administrative access for the GIMP for Windows project had been removed from Simončič's account entirely, replaced by the sf-editor1 account. In the Hacker News discussion, "makomk" reported that GIMP was not alone in this regard. In fact, makomk said, SourceForge seems to have adopted a new policy of "taking over the project pages of projects that've moved off Sourceforge and running the pages themselves as mirrors (apparently with added extras in the installers)." This program is described by SourceForge as the SourceForge Open Source Mirror Directory.

Furthermore, the sf-editor1 account is now listed as the administrator of well over 100 open-source projects, some of which are certainly mirrors of projects (such as Firefox) that have never been hosted at SourceForge in the past. Others (such as VLC) are former SourceForge-hosted projects that have been abandoned and turned into mirrors. For some projects with lengthy histories, it is hard to say for sure whether or not the project ever had a SourceForge account at some point in the past. Exactly which projects are affected by the side-loading installer behavior is not yet clear.

For its part, SourceForge has since posted a reply on its blog, saying that the GIMP for Windows project "was actually abandoned over 18 months ago, and SourceForge has stepped-in to keep this project current." The post also claims that it changed the status of the project "to clearly delineate it as a mirror, and change administrative control of the project to clearly delineate that it is editorially curated by SourceForge." It goes on to say that SourceForge has not heard from GIMP for Windows's author:

Since our change to mirror GIMP-Win, we have received no requests by the original author to resume use of this project. We welcome further discussion about how SourceForge can best serve the GIMP-Win author.

That statement would certainly seem to contradict Simončič's account of recent events (Simončič said in the GIMP IRC channel that he first contacted the company about the issue on May 16). It is also debatable whether or not the current SourceForge project page adequately communicates that GIMP for Windows is a mirror. There are, for example, no links on the page that take the user to gimp.org—only links to the main SourceForge Open Source Mirror Directory page.

Regardless of whether or not the SourceForge project page looks like a mirror, though, the central problem remains that it has been replacing GIMP's official Windows builds with something else, and not informing users of that fact. By late in the day on May 27, several GIMP team members (such as Michael Schumacher) were reporting that the installers offered on the GIMP for Windows page no longer included the problematic side-loaded bundles. But the GIMP team has still not heard back from SourceForge representatives.

In the mailing-list discussion, Joao S. O. Bueno suggested that the team should take the matter to the GNOME Foundation for assistance. I spoke briefly to some members of the GIMP development team who said that, as of now, there is no plan to pursue any legal resolution to the situation—but that this is as much a pragmatic decision as anything else. Right now, the team just wants to "kick up a bit of a fuss" and quickly inform the public of what is going on. Requesting any formal legal advice would take much longer.

At the moment, the GIMP team appears to be winning the public-relations battle, so "kicking up a fuss" may prove to be the winning strategy. Nevertheless, there are still a lot of unanswered questions from this series of events, not the least of which is how many other open-source projects in SourceForge's mirror directory are still delivering Windows installers that are side-loaded with unrequested software add-ons—without the consent of the project teams. Given that the site performs OS detection and geolocation before redirecting a download request to a specific installer file, it can be a bit difficult to say for sure which projects' downloads are being affected—but the development community is certainly taking a close look.
