In his lecture at the open source conference FOSDEM in Brussels, Red Hat container expert Daniel Riek summarized the development of Unix and Linux as a computing platform: an evolution from mainframes and classic Linux distributions on bare metal to virtual machines … and finally the ongoing container revolution. He predicted a painful time in the near future for advocates of the old Unix school: “This container stuff is getting worse than systemd when it comes to forthcoming forks and the acceptance of the gray beards.”

Riek should know what he’s talking about. For several years, he headed the department at Red Hat responsible for integrating and connecting the various software components of the company’s server strategy. He has been working since 2013 to make Red Hat Enterprise Linux (RHEL) fit for the brave new world of containers, and he was instrumental in launching the company’s Atomic Host strategy. Since the end of 2017, he has been working in the CTO office on visions for the use of Red Hat’s OpenShift and Kubernetes platforms in the field of AI and machine learning.

There is no way around containers

The container revolution, as painful as it may be for the gray-bearded old admins, is the only right step for Linux to keep up with the increasing complexity of software stacks of all kinds, says Riek. Since everything in our lives now depends on software, software is becoming ever more important, which leads to more and more companies entering the business of software development or software services. This is also made possible by the transformation of the market away from owning hardware and towards the cloud. Many companies want to develop their own software, but increasingly can no longer afford to employ experts for every level of their own stack – from the Linux kernel to the application framework.

“That’s why people go to the cloud,” said Riek. You only have to be an expert in your own field; the rest of the stack is provided by the cloud provider – whether in Amazon’s public cloud or in a private or hybrid cloud from Red Hat. This in turn changes how users perceive the software itself. In the cloud, your own Linux installation is no longer a pet that you lovingly take care of. “We now have Linux distributions as a service,” says Riek. In order to meet these new requirements and still deliver feature and security updates for the software stacks in use in a timely manner, containers are inevitable.

No time for dependency hell

Linux admins no longer have time to fight through dependency hell or lovingly maintain their VMs; the future belongs to containers, says Riek. He explains it this way: software that is packaged in containers on immutable Atomic Host installations no longer needs to be maintained by hand. If something changes, there is simply a new container from the company whose services you have subscribed to. The different parts of the stack are each packed into their own containers and managed with programs such as Kubernetes. Riek sees Kubernetes as “a kind of systemd for clusters”: many small Linux installations in containers, which all run together in a kind of large meta-Linux and together form the whole software stack. Each individual container is interchangeable.
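The interchangeability Riek describes can be sketched with a minimal Kubernetes Deployment manifest. This is an illustrative example only, with hypothetical names (the service name, registry URL, and version tags are placeholders, not anything from Riek’s talk):

```yaml
# Hypothetical Deployment: three identical replicas of a containerized service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        # The vendor's container image; nothing inside it is patched in place.
        image: registry.example.com/example-service:1.2.3
        ports:
        - containerPort: 8080
```

When the vendor ships an update, the admin does not log in and patch anything. Changing the image tag (say, to `:1.2.4`) and re-applying the manifest lets Kubernetes roll out new containers and retire the old ones, which is exactly the “new container instead of maintenance” model described above.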

“This is how containers solve the problem of complex stacks,” says Riek at the end of his FOSDEM lecture: an open system where it doesn’t matter what hardware it runs on. This brave new world is already commonplace for Riek and his Red Hat colleagues. Nevertheless, it is hardly surprising that long-established Debian or Slackware gray beards are struggling a bit to keep up.



(FAB)







