One of the more significant additions to Mirantis OpenStack 8.0 is integration of OpenStack Ironic into Fuel, so we thought it would be a good idea to talk about what Ironic is and how it can give you bare metal performance with virtualization convenience.

What is OpenStack Ironic?

The short answer to that question is that OpenStack Ironic is a set of projects that perform bare metal provisioning and related activities. But what does that actually mean?

It helps to get a little bit of context. OpenStack is an open-source, scalable platform for building public and private clouds. It works mostly as IaaS (Infrastructure-as-a-Service), consisting of services such as Compute (Nova), Networking (Neutron), Storage (Cinder) and others, even down to the Platform as a Service level with projects such as Murano, the OpenStack Application Catalog project.

All of this enables the implementation of most customer use cases on top of a virtualization solution. To support virtualization, OpenStack supports several hypervisors and container runtimes: KVM, Xen, QEMU, Hyper-V, VMware, LXC, and Docker. Among other benefits, virtualization enables the IaaS piece of the puzzle, enabling users to self-provision virtual machines, essentially creating their own servers from a user interface or the command line.

In some cases, however, a virtualized environment is inappropriate, and a user would prefer to have an actual, physical, bare metal server. In order to make that possible in a self-service way, OpenStack needs to support bare metal provisioning.

That’s where Ironic comes in.

Bare metal provisioning in OpenStack means a customer can use hardware directly, deploying the workload (image) onto a real physical machine instead of a virtualized instance on a hypervisor.

To make this happen, Nova includes a virtualization driver that makes calls to Ironic to launch a bare metal node. With this Ironic virt driver, users of the OpenStack Compute API can launch a bare metal server instance in the same way that they can currently launch a virtual machine (VM) instance.
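To see why the same Compute API call can produce either a VM or a bare metal server, consider this toy sketch of Nova's pluggable virt-driver model. The class and method names here are simplified illustrations, not the actual Nova source:

```python
# Toy illustration of Nova's virt-driver abstraction: the Compute API path
# is identical for every driver; only the configured backend differs.

class VirtDriver:
    """Minimal stand-in for Nova's virt driver interface."""
    def spawn(self, instance):
        raise NotImplementedError

class LibvirtDriver(VirtDriver):
    def spawn(self, instance):
        return f"VM '{instance}' booted on a hypervisor"

class IronicDriver(VirtDriver):
    """Instead of talking to a hypervisor, forward the request to Ironic."""
    def spawn(self, instance):
        # In the real driver this is an ironic-api call that selects a
        # registered bare metal node and deploys the image onto it.
        return f"bare metal node provisioned for '{instance}'"

def boot_server(driver: VirtDriver, name: str) -> str:
    # The user-facing request is the same either way.
    return driver.spawn(name)

print(boot_server(LibvirtDriver(), "web-1"))
print(boot_server(IronicDriver(), "db-1"))
```

The point of the sketch is that from the user's perspective, "boot me a server" does not change; swapping in the Ironic driver changes what fulfills the request.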

Ironic Architecture

You can see the main components of the Ironic architecture in the community user guide’s basic and concise visual representation below.

Figure 1: Ironic architecture

Each type of Ironic interaction with actual hardware, such as power, boot, deploy, console, and so on, is wrapped with a driver. Ironic defines a few interfaces that can be implemented in each driver. The Ironic architecture also enables vendors to add their specific logic as Vendor Extensions to their specific hardware Driver.
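The per-interaction driver model described above can be sketched in a few lines of Python. The interface and class names below are condensed illustrations of the idea, not the actual Ironic code:

```python
# Condensed sketch of Ironic's pluggable driver model: each hardware
# interaction (power, deploy, and so on) is a small interface, and a
# driver bundles concrete implementations, plus an optional vendor
# extension for hardware-specific extras.
from abc import ABC, abstractmethod

class PowerInterface(ABC):
    @abstractmethod
    def set_power_state(self, node, state): ...

class DeployInterface(ABC):
    @abstractmethod
    def deploy(self, node): ...

class IPMIPower(PowerInterface):
    def set_power_state(self, node, state):
        return f"ipmitool: {node} -> {state}"

class PXEDeploy(DeployInterface):
    def deploy(self, node):
        return f"PXE-booted deploy ramdisk on {node}"

class Driver:
    """A driver is a bundle of interface implementations."""
    def __init__(self, power, deploy, vendor=None):
        self.power, self.deploy, self.vendor = power, deploy, vendor

# Mixing and matching implementations yields different drivers.
pxe_ipmitool = Driver(power=IPMIPower(), deploy=PXEDeploy())
print(pxe_ipmitool.power.set_power_state("node-1", "power on"))
print(pxe_ipmitool.deploy.deploy("node-1"))
```

This is why vendors can add support for their hardware by implementing only the interfaces that differ, while reusing generic pieces such as PXE deploy.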

Out of the box, Ironic supports a set of drivers that enable the Ironic project to be tested in CI (Continuous Integration) using gates (CI jobs that execute a specific set of tests on each commit):

agent_ssh is a testing driver that combines the ironic-python-agent deploy mechanism with SSH-based power control to enable upstream CI.

pxe_ssh is a testing driver that combines the PXE deploy mechanism with SSH-based power control to enable upstream CI.

Using the IPMI standard, Ironic can manage most of the available hardware:

pxe_ipmitool is the recommended default driver for Ironic in the Icehouse and Juno releases.

agent_ipmitool uses the ironic-python-agent project for advanced provisioning functionality not available with the PXE driver. It is recommended for Kilo and later releases.

This pluggable architecture provides opportunities to write third party tools and plugins. For example, let’s look at some of the various tools and plugins that already exist in the Ironic ecosystem. Major vendors that already have support for their hardware platforms include HP, Dell, Cray, Fujitsu, and IBM.

Cray, pxe_snmp (maintainer: Stig Telfer): This driver uses SNMP instead of IPMI for power management of power distribution units.

Fujitsu, agent_irmc (maintainer: Naohiro Tamura): This driver enables Virtual Media deploy with IPA (Ironic Python Agent) and power control via the ServerView Common Command Interface (SCCI).

Fujitsu, iscsi_irmc (maintainer: Naohiro Tamura): This driver enables Virtual Media deploy with an image built by Diskimage Builder and power control via the ServerView Common Command Interface (SCCI).

HP, iscsi_ilo (maintainer: Ramakrishnan G): The iscsi_ilo driver from HP utilizes HP’s integrated lights-out management facilities in conjunction with an iSCSI-based deploy mechanism. It does not use PXE.

HPE, iscsi_pxe_oneview and agent_pxe_oneview (maintainers: Thiago Paiva Brito, Sinval Neto, Lilia Sampaio): The HPE OneView drivers for Ironic enable users of OneView to use Ironic as a bare metal provider for their managed physical hardware. Both drivers implement the core interfaces of an Ironic driver and use python-oneviewclient to communicate between Ironic and OneView through OneView’s REST API.

IBM, pxe_ipminative (maintainer: Ling Gao): pxe_ipminative is like the pxe_ipmitool driver, but substitutes IBM’s ‘pyghmi’ library, a native Python IPMI utility, for the generic ipmitool package.

Ironic Components

The Ironic ecosystem consists of a set of projects:

The ironic project itself is responsible for provisioning an operating system on bare metal resource nodes. It has two components: ironic-api and ironic-conductor.

python-ironicclient is a Python client program.

ironic-python-agent is an agent (a small program) launched inside the bootstrap image. It prepares a node for deployment and downloads the target system image.

ironic-inspector helps with hardware introspection. It currently works only with known, already-registered nodes; the Mirantis team is working on adding discovery capabilities to the ironic-inspector project.

bifrost is a set of Ansible playbooks to install and run Ironic independently of other OpenStack components.

ironic-webclient is an Angular.js-based plugin for Horizon.

ironic-lib is a library that provides some common utility code.

pyghmi is an alternative implementation of ipmitool. Many operators are also looking forward to the new Redfish specification.
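To make the division of labor among these components concrete, here is a minimal node record of the kind ironic-api accepts at enrollment time. The driver_info field names follow the ipmitool driver family; the address, credentials, and hardware figures are placeholders:

```python
# Illustrative node-enrollment payload (placeholder values).
node = {
    "driver": "agent_ipmitool",
    "driver_info": {
        # ironic-conductor uses these to reach the node's BMC over IPMI.
        "ipmi_address": "192.0.2.10",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
    "properties": {
        # The scheduler matches flavors against these hardware properties.
        "cpus": 8,
        "memory_mb": 32768,
        "local_gb": 400,
    },
}

print(node["driver"])
```

ironic-api stores the record, ironic-conductor drives the BMC with the driver_info, and ironic-python-agent runs on the node itself during deploy.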

Using Ironic with Fuel

Mirantis is working on a project, Fuel Agent (FA), which provisions machines before the OpenStack Fuel deployment tool installs the Controller, Compute, and other node roles. This tool has a slightly different feature set than the Ironic Python Agent (IPA).

An obvious goal was to ensure that any hardware supported by Mirantis OpenStack/Fuel would be supported by the integrated Ironic project. Also, a high priority customer request we’ve received is support for arbitrary partitioning of bare metal nodes during provisioning.

To accomplish both of these tasks, the team decided to customize deployment and replace IPA with FA, which required development of a specific Ironic deploy driver. You can see a comparison of the features of both tools in the table below.

Recently, the new Bareon project, a fork and further evolution of FA, has begun development, with quite a promising roadmap.

The Integration

So how did we integrate Ironic with Mirantis OpenStack? Figure 2 shows a deployment diagram of the components; deployment roles (Controller, Ironic) can be assigned during cluster creation in the Fuel UI.

Figure 2: Ironic Deployment Diagram

When it comes to actually deploying Ironic, the whole procedure is optional; when deploying a cluster, you can enable Ironic with a single check box, as seen in Figure 3.

Figure 3: Choosing to install Ironic.

In Mirantis OpenStack 8.0, bare metal provisioning uses a special dedicated network, so if you choose Ironic for deployment, you must configure it on both the L2 and L3 levels, as in Figure 4 and Figure 5.

Figure 4: Setting the L2 configuration for Ironic in the Fuel UI

Figure 5: Setting the L3 configuration for Ironic in the Fuel UI

Now that you have a general idea of what Ironic is, let’s look at why it matters.

Why should you use Ironic in Mirantis OpenStack?

So why go through all of this trouble to make bare metal servers available, when VMs have been doing just fine? As it happens, there are a lot of areas where people want to use bare metal servers instead of virtualized or containerized servers. They include:

Mission-critical legacy applications that aren’t designed for cloud architectures

Real-time and “near real-time” systems

HPC (High-Performance computing)

BigData and related Data Science and Machine Learning projects

Tasks accessing devices and resources that cannot be virtualized

By introducing bare metal capabilities in OpenStack in general, Ironic brings the advantages of both bare metal and virtualization: performance and manageability.

Similarly, using Ironic in Mirantis OpenStack in particular brings the best of both worlds: the hardened, battle-tested Mirantis OpenStack distribution, and the actively developed, community-supported Ironic project.

Challenges, limitations, future functionality, and current endeavours

One of the challenges of working with a community project such as Ironic is that we have to coordinate the functionality we need with what is currently available. For example, the current community version of Ironic does not support multi-tenancy, auto-discovery, or integration with inventory management systems. Let’s look at the current status of these issues, and where they stand in relation to Mirantis OpenStack.

Multi-tenancy support

The current upstream version of Ironic lives in a single tenant, which means that all bare metal nodes are on the same network, even if they’re owned by different tenants. That means that a user who has access to one bare metal node has network connectivity to all of the other bare metal nodes in the environment. Access separation is all-or-nothing.

In order to enable multi-tenancy, Ironic must support the same network isolation level that VMs support. To make that happen, VLAN, VxLAN, and other isolation technologies will need to come to bare metal instances. To do that, we need to provide the requisite connectivity information to a Neutron ML2 plugin via the local link connection (LLC) field. The connectivity information allows drivers to configure the ToR (Top of Rack) switch for the bare metal nodes.
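The connectivity information in question can be pictured as a small record per bare metal port. The field names below follow Ironic's local link connection structure; the values are placeholders:

```python
# Sketch of the link-level information Ironic stores per port and hands
# to the Neutron ML2 plugin so a mechanism driver can configure the ToR
# switch for the bare metal node. Values are placeholders.
local_link_connection = {
    "switch_id": "00:1e:67:aa:bb:cc",  # identifier (MAC) of the ToR switch
    "port_id": "Gig0/1",               # physical switch port the node is cabled to
    "switch_info": "tor-switch-3",     # free-form hint for the ML2 driver
}

# With this record, an ML2 mechanism driver can, for example, move the
# switch port onto the tenant's VLAN when the instance is bound to a
# tenant network, isolating the node from other tenants' nodes.
print(local_link_connection["port_id"])
```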

As part of this effort, our team managed to avoid vendor lock-in for network switches by creating the upcoming Generic Switch ML2 plugin. The plugin is available in a GitHub repo and has been submitted for inclusion in the official Ironic community codebase.

Auto-Discovery

This venture is intended to complement the introspection ability of Ironic Inspector with a discovery component, so that completely new nodes can be discovered and then inspected.

Integration with Inventory management systems

Every modern data center should have a single source of truth: an inventory management system. We did a small survey of OpenStack operators and discovered a gap between Google Sheets/MS Excel “accounting” and complex, feature-rich commercial products. Our team is filling this gap by integrating OpenStack with such open source systems. As a proof of concept (PoC), we have implemented a “glue layer”: CSV import from any given URL.
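The glue-layer idea can be sketched in a few lines: parse a CSV inventory export and turn each row into a node-enrollment dict. The column names (hostname, bmc_ip) are illustrative assumptions, not the columns the actual PoC uses, and in practice the CSV text would be fetched from the given URL rather than inlined:

```python
# Hedged sketch of a CSV "glue layer": inventory rows in, enrollment-ready
# node dicts out. Column names are assumptions for illustration.
import csv
import io

def parse_inventory(csv_text):
    """Turn a CSV inventory dump into node dicts ready for enrollment."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        {
            "name": row["hostname"],
            "driver": "agent_ipmitool",
            "driver_info": {"ipmi_address": row["bmc_ip"]},
        }
        for row in rows
    ]

sample = "hostname,bmc_ip\ncompute-1,192.0.2.21\ncompute-2,192.0.2.22\n"
nodes = parse_inventory(sample)
print(len(nodes), nodes[0]["driver_info"]["ipmi_address"])
# prints: 2 192.0.2.21
```

Each resulting dict could then be handed to ironic-api for enrollment, turning a spreadsheet export into registered nodes.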

Additional challenges

In addition to these issues, Ironic currently supports limited scenarios with DVR (Distributed Virtual Routing) in Neutron, and the change from IPA to FA includes tradeoffs in functionality. Both of these will improve in time.

Resources