Hyperledger Fabric has some rough edges when it comes to running on Kubernetes. For example, the way peers build and run chaincode isn’t the best fit for Kubernetes, but we can use Docker-in-Docker to avoid most of the issues. Likewise, there are ways around the other issues as well. Today, we’ll look at how to set up Fabric on Kubernetes so we can develop chaincode and client applications.

Overview of Fabric setup

Hyperledger Fabric is a modular system that can be configured with different components. Since we’re just interested in a development environment today, there are a few simplifications we’ll start with:

Disable TLS on Fabric components. Everything runs in a single Kubernetes cluster that we control, so we don’t need TLS.

Solo orderer service. We don’t need high availability for development.

Leave out CA and MSP services for now. Whether you need these for development depends on your use case. They’re relatively simple to set up.

In summary, the development environment consists of a solo orderer and a set of peers, all configured to use a static local MSP. Once the environment is set up, we can deploy clients to set up channels, install and run chaincode, etc. Here’s an outline of the setup steps:

Generate configuration artifacts for the network. e.g. Use cryptogen to generate the static MSP files. Use configtxgen to generate the orderer’s genesis block.

Upload artifacts to Kubernetes as ConfigMaps and Secrets. e.g. Create a tarball with the MSP files for a peer. Use kubectl create secret --from-file to upload the tarball.

Deploy the workload (orderer, peers, and clients) on Kubernetes.

Each step is simple to automate, and none of them requires changes to Fabric or to your Kubernetes cluster. If you don’t have a Kubernetes cluster, creating one on your own machine is as simple as downloading Minikube and running minikube start.

Generating configuration artifacts

If you’ve tried fabric-samples before, you’re probably familiar with cryptogen and configtxgen. The official docs for this step are pretty straightforward.

Write a crypto-config.yaml for the network. Use it (and cryptogen) to create crypto-config for all components of the network, including the orderer, peers, and users (for clients).

Write an orderer.yaml and configtx.yaml for the network. Use them (and configtxgen) to create a genesis block for the orderer.
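As a sketch, the generation step might look like the commands below. The profile name TwoOrgsOrdererGenesis is an assumption borrowed from fabric-samples; it must match whatever profile your own configtx.yaml defines.

```shell
# Generate static MSP material for every orderer, peer, and user
# defined in crypto-config.yaml.
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

# Generate the orderer genesis block. configtxgen reads configtx.yaml
# from FABRIC_CFG_PATH; the profile name here is an assumption.
export FABRIC_CFG_PATH=$PWD
configtxgen -profile TwoOrgsOrdererGenesis \
    -outputBlock ./channel-artifacts/genesis.block
```

Both binaries ship with the Fabric release tarball, so this is easy to run from a CI job or a Makefile target.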

Uploading configuration artifacts

When you run a Fabric network on your own machine using docker-compose, you can directly mount local directories into the orderer and peer containers. Look at the volumes fields in fabric-samples’ docker-compose-base.yaml:

volumes:

- ../channel-artifacts/genesis.block:/var/.../orderer.genesis.block

- ../crypto-config/.../msp:/var/hyperledger/orderer/msp

- ../crypto-config/.../tls:/var/hyperledger/orderer/tls

The part on the left of each colon is the local path on your machine. The part on the right is the mount path, i.e. where the orderer container sees the files. This approach doesn’t work if you’re running Fabric on a Kubernetes cluster, because Kubernetes can’t access your local filesystem.

Fortunately, the Kubernetes API has another solution for configuration artifacts:

ConfigMap

Secret

Both resources can store arbitrary data, and they can be injected into a container as either environment variables or files. Here’s how to upload the orderer genesis block to a Secret named orderer-genesis-block:

kubectl create secret generic orderer-genesis-block --from-file=../channel-artifacts/genesis.block

This command uploads a generic-type Secret with a single entry, genesis.block . Later, when we deploy the orderer, we can mount the Secret orderer-genesis-block into the orderer’s filesystem.
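As a sketch of that mount, the relevant fragment of the orderer Deployment might look like the following. The mount path matches the docker-compose example above, and ORDERER_GENERAL_GENESISFILE is the standard orderer setting for locating the genesis block; the container name is illustrative.

```yaml
# Fragment of the orderer Deployment's Pod spec (sketch).
spec:
  containers:
    - name: orderer
      image: hyperledger/fabric-orderer
      env:
        # Tell the orderer to bootstrap from the mounted file.
        - name: ORDERER_GENERAL_GENESISMETHOD
          value: file
        - name: ORDERER_GENERAL_GENESISFILE
          value: /var/hyperledger/orderer/orderer.genesis.block
      volumeMounts:
        - name: genesis-block
          mountPath: /var/hyperledger/orderer
  volumes:
    - name: genesis-block
      secret:
        secretName: orderer-genesis-block
        items:
          # Rename the Secret entry to the filename the env var expects.
          - key: genesis.block
            path: orderer.genesis.block
```

The items mapping is optional; without it, the file would simply appear under its Secret key name, genesis.block.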

Uploading crypto-config

The process is a little more involved for the MSP artifacts, because cryptogen creates an entire directory structure. There are a few more steps, but the general idea is the same:

tar -zcf orderer.example.com.msp.tar.gz -C ../crypto-config/.../orderer.example.com msp

kubectl create secret generic ...

Mount the Secret into the orderer Pod.

Use an InitContainer to unpack orderer.example.com.msp.tar.gz so the orderer container can use the MSP artifacts.

The main difference here is that the orderer container can’t directly use the file stored in the Secret, so an InitContainer has to unpack the tarball first. Another approach is to create a composite Secret that contains multiple files rather than a single tarball — then an InitContainer might not be needed.
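An InitContainer for this might look roughly like the sketch below: the Secret holding the tarball is mounted read-only, and an emptyDir volume is shared with the main container. The Secret name orderer-msp is illustrative, standing in for whatever name you passed to kubectl create secret.

```yaml
# Fragment of the orderer Pod spec (sketch): unpack the MSP tarball
# before the orderer container starts.
spec:
  initContainers:
    - name: unpack-msp
      image: busybox
      command:
        - sh
        - -c
        - tar -zxf /secret/orderer.example.com.msp.tar.gz -C /msp
      volumeMounts:
        - name: msp-tarball   # the Secret, mounted read-only
          mountPath: /secret
        - name: msp           # shared with the orderer container
          mountPath: /msp
  containers:
    - name: orderer
      image: hyperledger/fabric-orderer
      volumeMounts:
        - name: msp
          mountPath: /var/hyperledger/orderer/msp
          subPath: msp        # the tarball contains a top-level msp/ dir
  volumes:
    - name: msp-tarball
      secret:
        secretName: orderer-msp   # illustrative name
    - name: msp
      emptyDir: {}
```

Because emptyDir volumes live for the lifetime of the Pod, the unpacked MSP files are available to the orderer container as soon as the InitContainer finishes.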

Deploying the workload

Deploying the orderer and peers to Kubernetes is primarily a matter of making sure all resources (Services, Deployments, Secrets, ConfigMaps) are configured correctly. The basic architecture looks like this:

Kubernetes Service and Deployment for the orderer

Kubernetes Service and Deployment for each peer

Kubernetes Secrets and ConfigMaps for Fabric configuration artifacts

Each Deployment is the actual running instance(s) of a service. Each Service is a cluster-internal DNS name for the underlying Deployment. For example, when a peer needs to connect to the orderer, it uses the orderer’s Service name. Likewise, when a client needs to connect to a peer, it uses the peer’s Service name.
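For example, a minimal Service for the orderer might look like this sketch. The name orderer and port 7050 are the conventional Fabric defaults but are assumptions here; peers and clients would then reach the orderer at orderer:7050 inside the cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orderer        # cluster-internal DNS name: orderer:7050
spec:
  selector:
    app: orderer       # must match the orderer Deployment's Pod labels
  ports:
    - port: 7050
      targetPort: 7050
```

Each peer gets an analogous Service, typically on port 7051.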

There are a lot of things to get right in the Kubernetes config for our Fabric development environment. Here’s a checklist of concerns:

Environment variables are set for orderer, peer, and client containers.

Docker-in-Docker sidecar containers are set up for peer and client containers.
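One way to set up the Docker-in-Docker sidecar is to run docker:dind alongside the peer and point the peer at it over localhost. The fragment below is a sketch: the dind container must run privileged, and the plain TCP port assumes an unauthenticated daemon that only listens inside the Pod.

```yaml
# Fragment of a peer Deployment's Pod spec (sketch).
spec:
  containers:
    - name: peer
      image: hyperledger/fabric-peer
      env:
        # Containers in a Pod share a network namespace, so the peer
        # reaches the sidecar's Docker daemon on localhost.
        - name: CORE_VM_ENDPOINT
          value: tcp://localhost:2375
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true    # required for Docker-in-Docker
      env:
        # Disable TLS on the daemon; it never leaves the Pod.
        - name: DOCKER_TLS_CERTDIR
          value: ""
```

This keeps chaincode builds off the node’s own Docker daemon, which is exactly the rough edge mentioned at the top of this post.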

The correct Secrets and ConfigMaps are attached to the orderer, peer, and client Deployments.

InitContainers unpack tarballed secrets and configs and initialize the right filesystem structure for the orderer, peers, and clients.

Orderer and peer Service names match the orderer, peer, and client containers’ environment variables.
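Concretely, the peer’s environment has to agree with the Service names. A sketch of the relevant env entries, assuming a Service named peer0 and an MSP ID of Org1MSP (both illustrative):

```yaml
# Fragment of the peer container's env (sketch): the addresses here
# must resolve to the corresponding Kubernetes Services.
env:
  - name: CORE_PEER_ID
    value: peer0
  - name: CORE_PEER_ADDRESS
    value: peer0:7051        # matches the peer's own Service name
  - name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
    value: peer0:7051
  - name: CORE_PEER_LOCALMSPID
    value: Org1MSP           # must match the MSP ID in configtx.yaml
```

If a Service is renamed, every env entry that embeds it has to change in lockstep, which is a good argument for templating these manifests.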

Once you have the right Service and Deployment configurations, deploying them is a single kubectl create -f command.

Conclusion

Running Hyperledger Fabric on Kubernetes boils down to making sure you’re deploying the right resource configurations. There are a lot of details to figure out the first time, but afterwards, the process is very simple. Good luck, and as always, leave your thoughts and questions in the comments section!