
Setting up Hyperledger Fabric in multiple VMs

The documentation for Hyperledger Fabric's first-network setup is pretty straightforward. But in that setup, all the participants of the network run inside Docker containers on a single machine: the orderer service and the four peer nodes of the two organizations, each in its own container.



However, in the real world, the orderer is definitely going to run on a separate machine, whether it is the solo orderer or the Kafka-based orderer, and each peer of an organization will run on an independent machine. Hence it is essential for any beginner to get Fabric's first network running at least on multiple VMs, if not on independent machines.



(Figure: Representation of Hyperledger Fabric in multiple VMs)



We will now explain the steps involved in setting up Fabric’s first network running on five separate VMs with Vagrant on a single host machine.

The VMs for participating nodes

The nodes are named as follows:



hyper0.local (the orderer service)

hyper1.local (the anchor peer for org 1)

hyper2.local (the second peer for org 1)

hyper3.local (the anchor peer for org 2)

hyper4.local (the second peer for org 2)

Additional VM for DNS:

We also had a dnsmasq-based DNS server running on an additional VM, whose address we pass to all the Docker containers so that they can resolve all the VM hosts by hostname alone. Just add all the hostname -> IP mappings to the /etc/hosts file on the dnsmasq server and restart the dnsmasq service.
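For example, the mappings on the dnsmasq VM might look like this (a sketch; the IP addresses below are placeholders, so substitute the ones Vagrant assigned to your VMs):

```shell
# Append the hostname -> IP mappings on the dnsmasq VM. The IPs are
# hypothetical examples; use the addresses of your own Vagrant VMs.
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.33.10  hyper0.local
192.168.33.11  hyper1.local
192.168.33.12  hyper2.local
192.168.33.13  hyper3.local
192.168.33.14  hyper4.local
EOF
sudo systemctl restart dnsmasq   # dnsmasq re-reads /etc/hosts on restart
```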



And to avoid making things too complicated at this early stage, we will proceed with the following preconditions:



All the required certificates for the participating nodes are generated on the hyper0.local machine itself, and the certificates required by each node are distributed manually. Ideally, each organization would run its own CA service, generate its own certificates for its peers, and pass only the public keys to the orderer service.

The default cryptogen utility will be used to generate certificates for now. Fabric CA will come into the picture in the next iteration.

As a first step, we clone the fabric-samples project from https://github.com/hyperledger/fabric-samples on all the VMs and install the prerequisites listed at https://hyperledger-fabric.readthedocs.io/en/release/prereqs.html separately on each machine.

Set up for generating digital certificates:

The crypto-config.yaml file under the first-network folder needs to be modified to generate the certificates needed for each machine. This change is made only on the hyper0 machine, since we are going to generate the certificates only from the orderer.



OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - CommonName: hyper0.local

PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    Specs:
      - CommonName: hyper1.local
      - CommonName: hyper2.local
  # ---------------------------------------------------------------------------
  # Org2
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    Specs:
      - CommonName: hyper3.local
      - CommonName: hyper4.local

The above spec has the definitions needed to generate the certificates for all five nodes.



The channel and peer configuration lives in configtx.yaml. Here we need to specify the host addresses for the orderer and all the anchor peers.



Under the &Org1 section:

  AnchorPeers:
    - Host: hyper1.local
      Port: 7051

And under &Org2:

  AnchorPeers:
    - Host: hyper3.local
      Port: 7051

And for the orderer node:

Orderer: &OrdererDefaults
  Addresses:
    - hyper0.local:7050

This is pretty much enough to generate all the necessary certificates.



Generating and issuing the actual certificates:

Now, when you issue a simple ./byfn.sh generate command, accepting all defaults, the script generates the certificates needed by all the machines and puts them in corresponding folders under the crypto-config folder.



Now a manual step is needed: take the folders needed by each node and place them under the crypto-config folder on that node's VM.



Also make sure that:



You copy and paste, not cut and paste, because all the nodes' public keys are still needed on the orderer node, which generates the genesis block in our next step.

You do not copy one node's certificates to any other node.
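The copy step can be sketched as follows, run on hyper0.local. The vagrant user and the destination path are assumptions about your environment, and the folder names follow the crypto-config spec above; the script only prints the scp commands so you can review them before running anything:

```shell
# Print the scp commands that would push each peer's certificate folder to
# its VM. Review the output, then run the commands (or pipe them to `sh`).
# The `vagrant` user and target path are assumptions about your setup.
ORG1=crypto-config/peerOrganizations/org1.example.com/peers
ORG2=crypto-config/peerOrganizations/org2.example.com/peers
for pair in "hyper1.local:$ORG1/hyper1.local" \
            "hyper2.local:$ORG1/hyper2.local" \
            "hyper3.local:$ORG2/hyper3.local" \
            "hyper4.local:$ORG2/hyper4.local"; do
  host=${pair%%:*}   # VM to copy to
  dir=${pair#*:}     # that peer's certificate folder
  echo "scp -r $dir vagrant@$host:~/fabric-samples/first-network/$dir"
done
```

Note that scp copies, so the originals stay on the orderer, satisfying the first caveat above.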

The next step is to bring up the first network by starting the services individually on all the nodes. But before that, we need to make some modifications to docker-compose-cli.yaml, which is the entry point for the network bootstrap.

Setting up Docker-compose files:

The stock setup places the services for all five nodes, plus an additional cli service that kickstarts the script.sh file, in a single Docker Compose file. Since we have split the services across individual nodes, we make sure that the docker-compose-cli file on every node contains only the services needed on that machine. For instance, the orderer machine will have:



services:
  hyper0.local:
    container_name: hyper0.local
    environment:
      - GODEBUG=netdns=go
    extends:
      file: base/docker-compose-base.yaml
      service: hyper0.local
    dns: <dns IP needed>
    networks:
      - byfn

and not the other nodes' services. Similarly, trim the file on each peer node.



Note the additional line we have included under environment, GODEBUG=netdns=go, which forces Go programs to use the pure Go resolver; the cgo resolver wasn't playing well with the dnsmasq-based DNS server we were using.



Apart from this, the cli service must be present in the compose files of all five nodes, since that is the service that actually kickstarts the network on each node.



The base service definitions lie in base/docker-compose-base.yaml. Here we just need to rename the service and container names to match our node hostnames: rename orderer.example.com to hyper0.local, peer0.org1.example.com to hyper1.local, and so on. A simple text search and replace should work.
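The search and replace can be scripted, for example like this (a sketch using GNU sed's in-place editing; back the file up first):

```shell
# Replace the sample hostnames with ours in a given file (GNU sed, in-place).
# Only the full node names are touched, so domain paths such as
# org1.example.com in certificate locations are left alone.
rename_hosts() {
  sed -i \
    -e 's/orderer\.example\.com/hyper0.local/g' \
    -e 's/peer0\.org1\.example\.com/hyper1.local/g' \
    -e 's/peer1\.org1\.example\.com/hyper2.local/g' \
    -e 's/peer0\.org2\.example\.com/hyper3.local/g' \
    -e 's/peer1\.org2\.example\.com/hyper4.local/g' \
    "$1"
}

# Usage, from the first-network folder:
# rename_hosts base/docker-compose-base.yaml
```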

Setting up the script files:

Now comes the final file, which does the actual job for us: scripts/script.sh. This file is present on all the VMs, but not all of its functions are used on every machine. For instance, the createChannel function is purposeful only on hyper0.local (the orderer), while chaincode query is, for now, purposeful on every node except the orderer. But to keep things simple, we are not worrying about code duplication for now. To make sure the correct environment variables are exported, just do a simple text search and replace, as we did earlier for docker-compose-base.yaml.



Now, our next step is to make sure the script.sh on each VM does only what that machine is intended for. This is done by simply commenting out the function calls that are not needed. For example, on hyper0.local, comment out all the function calls except createChannel, joinChannel and updateAnchorPeers; on the other machines, comment out exactly those three calls and keep the rest. Now our entire setup is ready to be tested.



Starting up the network:



Before the orderer machine bootstraps the network, we need to make sure that the other peer machines are listening on the network on their intended ports.



From the orderer machine, run ./byfn.sh -m up to create the channel and issue the join command to the other peer nodes. Once that is done, we can bring up the docker-compose-cli.yaml file directly on the other peers to test chaincode installation, instantiation and querying.
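Putting the ordering together, the startup sequence can be sketched as follows (hypothetical invocations; the exact flags depend on your byfn.sh defaults and compose file names):

```shell
# 1. On each peer VM (hyper1 .. hyper4) first, so the peers are listening
#    on their intended ports before the orderer bootstraps the network:
docker-compose -f docker-compose-cli.yaml up -d

# 2. Then, on hyper0.local, bootstrap the network -- the trimmed script.sh
#    there creates the channel, joins it, and updates the anchor peers:
./byfn.sh -m up
```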