When moving to the cloud, one of the first questions (cost aside) is: how can I predict my IO performance? This is because nobody knows which hardware is used under the hood when you create a compute instance.

Oracle Cloud console

Processor performance is easy to determine by looking at the CPU information; for example, an instance created with the VM.Standard.E2.1 shape will have two cores of type:

ubuntu@node1:~$ cat /proc/cpuinfo |grep "model name"
model name : AMD EPYC 7551 32-Core Processor
model name : AMD EPYC 7551 32-Core Processor
ubuntu@node1:~$ cat /proc/cpuinfo |grep "cpu MHz"
cpu MHz : 1996.243
cpu MHz : 1996.243

But what about disks? For boot or attached iSCSI volumes, nobody knows what lies behind a paravirtualized disk, which exposes no hardware information. See:

root@node1:/srv/nfs4# hdparm -i /dev/sda
/dev/sda:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument
root@node1:/srv/nfs4# hdparm -i /dev/sdb
/dev/sdb:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
HDIO_GET_IDENTITY failed: Invalid argument

With the above in mind, some time ago I decided to test the performance of Oracle Cloud Infrastructure for a Docker Swarm cluster in two possible scenarios: NFS shared storage, and distributed storage using CIO from Storidge. Both are options for storing the persistent data of my Docker instances.

For the test I used cloud instances similar to those in my previous post Deploy Docker Swarm at Oracle Cloud with Oracle Linux 7, but this time running Ubuntu to ease the installation of Storidge: four nodes for distributed storage and one node serving as NFS server, in all cases using the higher IO (storage/latency optimized) block storage, which may well be SSD under the hood.

A very first raw test using dd

A very simple IO test can be done with dd; we can check the boot partition and the block storage partition (ext4 formatted). Here is the boot disk:

root@de866e:/tmp# dd if=/dev/zero of=test4.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 13.9244 s, 77.1 MB/s
root@de866e:/tmp# dd if=test4.img of=/dev/null oflag=dsync
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 2.06966 s, 519 MB/s

and here the data disk, on the storage/latency optimized volume:

root@de866e:/srv/nfs4# dd if=/dev/zero of=test4.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.70058 s, 631 MB/s
root@de866e:/srv/nfs4# dd if=test4.img of=/dev/null oflag=dsync
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 2.06904 s, 519 MB/s

Note the difference: the boot disk seems to be optimized for read performance (519 MB/s, equal to the other disk) but has low write performance (77 MB/s), while the storage/latency optimized volume has much better write performance (631 MB/s). The dsync flag in the dd command avoids kernel caching and forces an immediate sync to disk.
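As a quick sanity check, the rate dd reports is simply bytes divided by elapsed seconds, in decimal megabytes. A small parser (a hypothetical helper, not part of the original test setup) can recompute it from the summary line:

```python
import re

def dd_throughput(summary):
    """Recompute MB/s from a dd summary line such as
    '1073741824 bytes (1.1 GB) copied, 13.9244 s, 77.1 MB/s'."""
    m = re.match(r"(\d+) bytes .*copied, ([\d.]+) s", summary)
    nbytes, secs = int(m.group(1)), float(m.group(2))
    return nbytes / secs / 1e6  # dd uses decimal MB (10^6 bytes)

# Boot disk write from the test above
print(round(dd_throughput("1073741824 bytes (1.1 GB) copied, 13.9244 s, 77.1 MB/s"), 1))  # 77.1
# Block volume write from the test above
print(round(dd_throughput("1073741824 bytes (1.1 GB) copied, 1.70058 s, 631 MB/s")))      # 631
```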

But what does this mean compared to regular servers? Here is a comparison with a sample server using an Intel i7 CPU and SATA SSD disks (6 Gbit/s maximum transfer). First test, using a non-SSD/non-RAID SATA disk:

root@vmsvr10-slab:/tmp# dd if=/dev/zero of=test4.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1,1 GB) copied, 6,99693 s, 153 MB/s
root@vmsvr10-slab:/tmp# dd if=test4.img of=/dev/null oflag=dsync
2097152+0 records in
2097152+0 records out
1073741824 bytes (1,1 GB) copied, 1,29753 s, 828 MB/s

Second test, using two SSD disks (KINGSTON 480.1 GB) in RAID0 mode:

root@vmsvr10-slab:/home/VMs# dd if=/dev/zero of=test4.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1,1 GB) copied, 2,02704 s, 530 MB/s
root@vmsvr10-slab:/home/VMs# dd if=test4.img of=/dev/null oflag=dsync
2097152+0 records in
2097152+0 records out
1073741824 bytes (1,1 GB) copied, 1,29408 s, 830 MB/s

By using SSDs in a RAID0 configuration, regular hardware achieves a similar write performance. But that is a local disk on a bare-metal server, not affected by network latency; in a cloud environment you have a virtual machine, and the disks are usually located in another compartment, so network latency applies. This is good news for your cloud deployment planning: no matter where your block storage is located, you get better write performance than a regular bare-metal server.
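Putting the sequential write figures measured above side by side makes the comparison concrete (numbers are from the dd tests in this post; the labels are mine):

```python
# Sequential write throughput measured with dd (oflag=dsync), in MB/s
writes = {
    "OCI boot disk": 77.1,
    "OCI block volume (storage/latency)": 631,
    "bare metal SATA disk": 153,
    "bare metal 2x SSD RAID0": 530,
}
baseline = writes["OCI block volume (storage/latency)"]
for name, mbps in writes.items():
    print(f"{name}: {mbps} MB/s ({mbps / baseline:.0%} of the cloud block volume)")
```

Even the RAID0 SSD pair stays slightly below the cloud block volume on writes (530 vs 631 MB/s), despite having no network in the path.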

NFS versus local storage test

As mentioned above, the idea is to use one node of the cluster as a storage server, exporting the storage/latency optimized disk directory as an NFSv4 export and mounting it as a volume in a Docker instance. For this test I used the Oracle Orion tool packaged as a Docker image, as described in my previous post Estimating IO throughput at your cloud deployment. First, test one client using a local mount, as a reference for comparing the performance of a remote mount later:

root@a37709:/srv/nfs4# docker run -ti --rm -v local:/home --name test-local oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
ORION: ORacle IO Numbers -- Version 12.2.0.1.0
firsttest_20180913_0034
Calibration will take approximately 9 minutes.
Using a large value for -cache_size may take longer.
Maximum Large MBPS=867.73 @ Small=0 and Large=2
Maximum Small IOPS=12679 @ Small=5 and Large=0
Small Read Latency: avg=392.499 us, min=198.667 us, max=38147.308 us, std dev=425.804 us @ Small=5 and Large=0
Minimum Small Latency=392.499 usecs @ Small=5 and Large=0
Small Read Latency: avg=392.499 us, min=198.667 us, max=38147.308 us, std dev=425.804 us @ Small=5 and Large=0
Small Read Latency Histogram @ Small=5 and Large=0

With a Docker volume mounted from local storage we get a read performance of 867 MBPS and around 12679 IOPS, with most of the IO requests in the 256–512 us range, as shown in the histogram.

Now test an NFS mount from a remote node:

root@a4f1ee:~# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.6,rw,intr,hard,timeo=600,wsize=32768,rsize=32768,tcp --opt device=:/srv/nfs4 nfs_test
root@a37709:~# docker run -ti --rm -v nfs_test:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
ORION: ORacle IO Numbers -- Version 12.2.0.1.0
firsttest_20181004_1102
Calibration will take approximately 16 minutes.
Using a large value for -cache_size may take longer.
Maximum Large MBPS=427.37 @ Small=0 and Large=3
Maximum Small IOPS=13185 @ Small=10 and Large=0
Small Read Latency: avg=756.598 us, min=288.397 us, max=14976.572 us, std dev=511.523 us @ Small=10 and Large=0
Minimum Small Latency=480.544 usecs @ Small=3 and Large=0
Small Read Latency: avg=480.544 us, min=255.911 us, max=22302.739 us, std dev=367.564 us @ Small=3 and Large=0
Small Read Latency Histogram @ Small=3 and Large=0

Read performance decreases to 427 MBPS, while the read latency histogram and the IOPS show values similar to the local test. The drop in read throughput is surely due to the overhead of the NFS protocol stack.
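For reference, the Docker volume defined above is equivalent to a plain NFSv4 mount on the host. A sketch, assuming the same server address and export path used in this setup (the mount point /mnt/nfs_test is a name I chose for illustration; the intr option from the volume definition is omitted because it has been a no-op in modern Linux kernels):

```shell
# Host-level equivalent of the nfs_test Docker volume
# (assumes 192.168.0.6 exports /srv/nfs4, as in this setup)
sudo mkdir -p /mnt/nfs_test
sudo mount -t nfs4 -o rw,hard,timeo=600,wsize=32768,rsize=32768,tcp \
    192.168.0.6:/srv/nfs4 /mnt/nfs_test
```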

NFS with two clients in parallel

Obviously, we can infer that two clients trying to access the same NFS storage server will hit a bottleneck at the server's network port or disk pipeline. Here are the results:

root@d54224:~# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.6,rw,intr,hard,timeo=600,wsize=32768,rsize=32768,tcp --opt device=:/srv/nfs4 nfs_test
root@a4f1ee:~# docker volume create --driver local --opt type=nfs --opt o=addr=192.168.0.6,rw,intr,hard,timeo=600,wsize=32768,rsize=32768,tcp --opt device=:/srv/nfs4 nfs_test

docker run -ti --rm -v nfs_test:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest1 -hugenotneeded
Maximum Large MBPS=330.20 @ Small=0 and Large=2
Maximum Small IOPS=8181 @ Small=5 and Large=0

docker run -ti --rm -v nfs_test:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest2 -hugenotneeded
Maximum Large MBPS=328.63 @ Small=0 and Large=2
Maximum Small IOPS=8618 @ Small=5 and Large=0

and here latency histograms:

read latency histograms client1/client2

Not a surprise: the sum of MBPS (330.20+328.63=658.83) is higher than one NFS client but lower than local storage, while each client's IOPS is similar and the aggregate (8181+8618=16799) is higher than the local storage result.

Distributed storage using Storidge CIO

Storidge distributed storage is a great option for implementing persistent storage for Docker containers. Following the guide Install cio you can get a four-node installation, but before starting with that guide you must replace the instance kernel with linux-image-aws (sorry Oracle guys, but Storidge's team does not include the Oracle kernel among the installer choices):

root@node1:~# apt-get install linux-image-aws
...
root@node1:~# apt-get remove linux-image-*oracle
...
root@node1:~# update-grub; reboot

Alternatively, you could use a CentOS 7 image without any change. Once the Storidge cluster was up and running, I tested it using a high-IO profile defined as follows:

root@a37709:~# cio cat SUPERIO
---
capacity: 20
directory: /cio/volumes
iops:
  min: 1000
  max: 15000
level: 2
local: no
provision: thin
type: ssd
service:
  compression: no
  dedupe: no
  encryption:
    enabled: no
  replication:
    enabled: no
    destination: none
    interval: 120
    type: synchronous
  snapshot:
    enabled: no
    interval: 60
    max: 10

Note the iops (min/max) and level parameters, which mean practically no limit on IOPS, and one replica.

Storidge with one client

Let's test one client with a volume stored in our cluster-wide storage:

root@a37709:~# docker volume create --driver cio --name highio1 --opt profile=SUPERIO
root@a37709:~# docker run -ti --rm -v highio1:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
ORION: ORacle IO Numbers -- Version 12.2.0.1.0
firsttest_20181004_1236
Calibration will take approximately 9 minutes.
Using a large value for -cache_size may take longer.
Maximum Large MBPS=297.31 @ Small=0 and Large=1
Maximum Small IOPS=11863 @ Small=5 and Large=0
Small Read Latency: avg=419.660 us, min=185.907 us, max=15414.395 us, std dev=448.024 us @ Small=5 and Large=0
Minimum Small Latency=419.660 usecs @ Small=5 and Large=0
Small Read Latency: avg=419.660 us, min=185.907 us, max=15414.395 us, std dev=448.024 us @ Small=5 and Large=0

Read throughput decreases a lot, to 297.31 MBPS, compared with 427.37 MBPS over NFS (which is about 44% faster). In terms of IOPS there is only around a 10% difference. As for latency, this is the histogram:

Small Read Latency Histogram

It looks similar, with most response times in the 256–512 us range, but more stable than the others.

Storidge with three clients

Let's push harder with more clients, since this is distributed storage:

root@a37709:~# docker run -ti --rm -v highio1:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
Maximum Large MBPS=296.15 @ Small=0 and Large=1
Maximum Small IOPS=11508 @ Small=5 and Large=0

root@d54224:~# docker run -ti --rm -v highio2:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
Maximum Large MBPS=314.14 @ Small=0 and Large=2
Maximum Small IOPS=9795 @ Small=5 and Large=0

root@b8ef30:~# docker run -ti --rm -v highio3:/home --name test-high oracle/orion-official:12.2.0.1 /usr/lib/oracle/12.2/client64/bin/orion -run simple -testname firsttest -hugenotneeded
Maximum Large MBPS=292.21 @ Small=0 and Large=1
Maximum Small IOPS=11061 @ Small=5 and Large=0

Now the situation is different: I get an aggregated throughput of 902.50 MBPS (296.15+314.14+292.21), more than double the single-client NFS result and higher than local storage (867.73). Why? Because Storidge is a distributed implementation, so you can aggregate the local IO bandwidth and network ports of several nodes. The same happens with IOPS: 32364 in aggregate, compared with only 12679 for local storage. So clearly a distributed storage solution is a good choice for your persistent Docker Swarm services. Here are the latency histograms:
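The aggregation argument can be checked with simple arithmetic over the Maximum Large MBPS figures from the Orion runs in this post (single-client values for local and NFS, per-client values summed for the parallel runs):

```python
# Maximum Large MBPS per scenario, from the Orion runs above
mbps = {
    "local volume (1 client)": [867.73],
    "NFS (1 client)": [427.37],
    "NFS (2 clients)": [330.20, 328.63],
    "Storidge CIO (3 clients)": [296.15, 314.14, 292.21],
}
for name, per_client in mbps.items():
    print(f"{name}: aggregate {sum(per_client):.2f} MBPS")
```

Only the distributed setup beats the local-volume baseline, because each additional client brings its own node's disk and network bandwidth into the aggregate.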

three client latency histogram

Next post: BeeGFS distributed storage at Oracle Cloud.