96-core ARM supercomputer using the NanoPi-Fire3

After the interest in my cluster of Raspberry Pi 3s last year, I was keen to try building clusters with some of the other excellent SBCs now on the market. FriendlyARM in China very generously sent me 12 of their latest NanoPi-Fire3 64-bit ARM boards, each with an eight-core ARM A53 SoC running at 1.4GHz and gigabit Ethernet.

Jump to section:

- Software to run on a cluster?
- NanoPi-Fire3 vs Raspberry Pi 3
- Benchmarks
- Case design in 3D
- Laser-cutting the case
- Design changes from the Pi 3 cluster
- Server status lights with MQTT
- Power, temperature & cooling
- Building the Fire3 Cluster
- Bill of materials
- Clusters of other SBCs

The completed cluster measures 146.4 (w) x 151 (h) x 216mm (d) and weighs 1.67kg (5.6 x 5.9 x 8.3", 59oz)

Software to run on a cluster?


Clusters are often used for computationally intensive tasks (medical research, simulating weather, AI/deep learning, cryptocurrency mining) and/or high-availability services (using redundant nodes in case of hardware failures). This cluster is undoubtedly slow in terms of modern supercomputers, but a small portable cluster is ideal for teaching or developing distributed software that can then be ported to much more powerful HPC systems.

I'm planning on writing a couple of articles to showcase this cluster in action:

- Cryptocurrency mining on an ARM supercomputer (coming Q3'18)
- Deep Learning AI on an ARM supercomputer (coming Q4'18)

Docker Swarm or Kubernetes look to be excellent options for controlling the cluster, although I haven't tried them yet.
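As a rough sketch of how Docker Swarm might be set up on a cluster like this (untested here; the IP address is a placeholder for whatever the controller node uses):

```shell
# On the controller node: initialise the swarm and print the worker join token
docker swarm init --advertise-addr 192.168.1.10
docker swarm join-token worker

# On each of the other 11 nodes, run the join command printed above, e.g.:
# docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377

# Back on the controller: check that all 12 nodes are visible
docker node ls
```

From there, services can be scaled across the 96 cores with `docker service create --replicas`.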

NanoPi-Fire3 vs Raspberry Pi 3

The NanoPi-Fire3 board is a considerable advance over the Raspberry Pi 3 both in terms of performance and features, in a smaller form factor, while still costing about the same:

| Model | NanoPi-Fire3 | Raspberry Pi 3 Model B |
|---|---|---|
| SoC | 8-core ARM A53 (S5P6818) @ 1.4GHz | 4-core ARM A53 (BCM2837) @ 1.2GHz |
| Memory | 1GB DDR3 | 1GB DDR2 |
| GPU | Mali-400 MP4 @ 500MHz? | Broadcom VideoCore IV @ 400MHz? |
| Network | 1000Mbps | 100Mbps |
| WiFi | no | 802.11bgn |
| Bluetooth | no | 4.1 + BLE |
| Storage | microSD card | microSD card |
| USB spare | 1 fitted, 1 microUSB | 4 fitted |
| Video | Micro HDMI 1.4a, RGB-LCD | HDMI, DSI |
| Camera ports | DVP | CSI |
| Audio | no | 3.5mm |
| Size | 75 x 40mm | 85 x 56mm |
| Power | 1.2 → 3.6W (2A max, microUSB) | 1.2 → 2.1W (2.5A max, microUSB) |
| Launched | Q4 2017 | Q1 2016 |
| Price (UK) | £34.30¹ | £33.59 |

¹ US$35 Fire3 + $5 shipping + 20% VAT + 0% import duty = £34.30

Benchmarks

Processor (Multi-core)


Most modern computers have multiple CPU cores on a single chip, which means they can run 2 or more jobs simultaneously. These might be different applications (e.g., a web server handling 3 different pages, a database) or a single processor-intensive task that is split into multiple threads for maximum speed (e.g., a ray tracer, file compression, etc.). This Linpack test uses all the cores it can find, effectively testing the overall CPU performance at floating point operations.

Linpack TPP v1.4.1 (Linear Equation Solver)

» MFLOPS, More Is Better

(interactive benchmark chart)

A single Fire3 board, with double the number of cores, a higher clock speed and faster memory, is dramatically faster than the Pi 3 on this benchmark.

60,000 MFLOPS isn’t all that fast by current performance standards, but back in 2000 the 12-node Fire3 cluster would have made it into the Top250 fastest supercomputers in the world (!) Even the 5-node Fire3 cluster is considerably faster than the same-sized Pi 3 cluster, which can be explained by the extra CPU cores, faster memory, and the much faster networking for node-to-node communication.

The 16-core Cray C90 supercomputer launched in 1992 could do 10780 MFLOPS, but cost US$30.5 million (£16.4 million), weighed 12 US tons (10900 kg) and needed a 495 kW power supply (!)

hpcc

# Setup on each node
apt install hpcc
swapoff -a
adduser mpiuser

# Controller node setup
su - mpiuser
cp /usr/share/doc/hpcc/examples/_hpccinf.txt hpccinf.txt

# Edit default hpccinf.txt so that NB=80, N=18560, P=8 and Q=12 (P x Q = 96 cores)
sed -i "8s/.*/80\tNBs/; 6s/.*/18560\tNs/; 11s/.*/8\tPs/; 12s/.*/12\tQs/" hpccinf.txt

# Generate & copy SSH keys across cluster, so controller can run benchmark on all nodes
# (use the hostnames or IP addresses for your nodes)
ssh-keygen -t rsa
nodes=('controller' 'b1' 'b2' 'b3' 'b4' 'b5' 't1' 't2' 't3' 't4' 't5' 't6')
for i in "${nodes[@]}"
do
  ssh-copy-id "fire3-$i"
  echo "fire3-$i slots=8" >> mycluster
done

# Run the benchmark across all 96 cores
mpirun -hostfile mycluster --mca plm_rsh_no_tree_spawn 1 hpcc

# Extract the headline results
grep -F -e HPL_Tflops -e PTRANS_GBs -e MPIRandomAccess_GUPs -e MPIFFT_Gflops \
  -e StarSTREAM_Triad -e StarDGEMM_Gflops -e CommWorldProcs \
  -e RandomlyOrderedRingBandwidth_GBytes -e RandomlyOrderedRingLatency_usec hpccoutf.txt

Tuning a cluster to get the highest possible scores is an art all of its own, with compiler optimisations, customised math libraries, etc. However these scores are from the standard hpcc binary package in Ubuntu 16.04.4, using the default configuration.

Graphics

Modern computers often have multi-core GPUs, either in a separate graphics card or built into the main SoC itself. Both the Fire3 and Pi 3 have quad-core GPUs. These are dedicated to processing large blocks of data in parallel, as required for computer graphics. More recently they have also been used for specialised computation like cryptocurrency mining.

glmark2-es2 2014.03 (OpenGL ES 2.0)

» Score, More Is Better

(interactive benchmark chart)

A single Fire3 board is noticeably faster than the Pi 3 on this benchmark. The cluster scores are simply scaled based on the number of nodes.

As with the CPU performance above, there are many options for tuning graphics performance by compiling with custom drivers, etc. However this test simply uses the standard glmark2-es2 binary package in Ubuntu 16.04.4, with the default “out of the box” configuration. It is run using:

sudo apt install glmark2-es2
glmark2-es2 --off-screen

The legacy OpenGL renderer for the Pi 3 is quite poor, but if you switch to the (currently experimental) Mesa VC4 renderer using raspi-config then you should get similar performance to the Fire3.

Most ARM SBCs use old GPU designs which give very modest performance compared to recent flagship smartphones, let alone compared to desktop PCs with expensive high-end graphics cards and huge power supplies. The Mali-400 MP4 GPU in the Fire3 dates from 2008, while the Broadcom VideoCore-IV in the Pi 3 is from 2010. There are some recently-announced SBCs like PINE64’s RockPro64 with newer, more powerful GPUs (Mali-T860 MP4), while the Samsung Galaxy S9 phone uses the latest generation Mali-G72 MP18.

Networking

This tests the real data transfer speed using iPerf between 2 boards connected to a 100/1000Mbps Ethernet switch.

iPerf v2.0.5 (TCP, 1000Mbps Ethernet, board-to-board)

» Mbits/sec, More Is Better

(interactive benchmark chart)

sudo apt install iperf

# On node1
iperf -s -V

# On node2
iperf -c node1 -i 1 -t 20 -V

With default settings, the 1000Mbps interface on the Fire3 gives a HUGE boost in network performance over the 100Mbps interface on the Pi 3.

If you are looking for extra networking performance on your Raspberry Pi (older than the Pi 3 model B+) then adding a USB-gigabit Ethernet adaptor instead of using the onboard 100Mbps interface can give a useful speed increase, although because it is limited by USB2 it is still far slower than a true 1000Mbps interface. The newest Pi 3 model B+ has this integrated, so you get the improved network speeds without needing a separate USB adaptor.

Cluster performance/watt

To give a score for performance (“speed”) per watt (cluster electrical power), I used the Linpack benchmark results above to give MFLOPS. This measures the multicore floating point performance of a system, and is commonly used for ranking computer systems.
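As a worked example, taking round figures from elsewhere in this article (roughly 60,000 MFLOPS for the 12-node cluster at 55W total load), the arithmetic is simply:

```shell
# MFLOPS/W = Linpack MFLOPS divided by measured cluster watts
awk 'BEGIN { printf "%.0f MFLOPS/W\n", 60000 / 55 }'
# prints: 1091 MFLOPS/W
```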

Performance Per Watt

» MFLOPS/W, More Is Better

(interactive benchmark chart)

The 5-node Fire3 cluster is remarkably more power-efficient than the same-sized Pi 3 cluster, despite using more total power when at 100% load.

Watts were measured at 100% load using a mains energy monitor for the entire cluster, including network switches, fans and power supply. WiFi, bluetooth, HDMI, etc. were all left as system defaults.

The Cray C90 supercomputer mentioned above could only manage 0.02 MFLOPS/W back in 1992.

Case design in 3D

I modified my original Raspberry Pi cluster design using the free version of SketchUp and built rough 3D templates of the NanoPi-Fire3s, network switches, sockets, etc. I didn’t bother to include ventilation slots/grids in the 3D model. The case is exactly the same size as my 5-node clusters: the challenge was to fit 12 boards, 2x fans, 2x Ethernet switches and all the cables inside!

Laser-cutting the case

I used the free Inkscape application for 2D design, ready for exporting to the laser cutter. Each colour is a different pass of the laser, at different power/speed levels, so the green lines are cut first to make holes for ports/screws/ventilation, pink are extra cuts to help extract delicate parts, orange is text/lines that are etched and finally blue cuts the outside of each panel.

Download files for laser cutting on one 600x400x3mm sheet (although I used a mixture of clear and opaque black for different panels):



The optional smallest piece is a diffuser for the (very bright!) LEDs panel, which you could cut from frosted acrylic, or just buy the official Pimoroni diffuser for £3.

Read more about the laser cutting and the screwless case-clipping system in my original article.

Design Changes from the Pi 3 cluster

Although I kept the case exactly the same size as my cluster of Raspberry Pi 3s, I made plenty of changes and improvements:

- Horizontal mounting rails – I kept the horizontal mounting rail design, but the Fire3 has M3 holes, which are easier to find parts for than the M2.5 holes on the Pi. The holes are also closer together because the overall board is quite a bit smaller than the Pi. Screwing the plastic nuts onto horizontal rails is a bit tedious, and I’d like to 3D print a C-shaped ‘clip’ that holds the boards in place along each rail, or perhaps use tight elastic washers?
- External PSU vs Internal USB hub – I swapped out the internal USB hub power supply for a fanless AC-DC power ‘brick’ that sits outside the case. This gives more space inside the case (for the larger number of Fire3 boards, and two fans), and should help with dissipating the heat from the power supply. Each Fire3 can draw up to a maximum of 2A, but will actually use much less in this cluster, without extra USB and GPIO accessories. Need even more power? Mainstream computer PSUs have a regulated +5V output that can be tapped, but (most) are large and use noisy fans.
- 2x microUSB daisy-chains vs 12x separate microUSB cables – With no suitable commercial cables available I ended up making my own “daisy-chain” power cables, using short lengths of a thicker wire (11A rating) with 12x angled microUSB connectors soldered to short spurs, to give a neat result that takes up very little space inside the case... more
- Dual case fans vs Single – I was sure that the high-performance Fire3 boards would need significant active cooling, so I designed the case with room for two ultra-quiet 92mm fans: the rear fan to suck cool air in and the front to blow hot air out.
- Gelid Solutions Silent 9 fan vs Nanoxia Deep Silence fan – I was very happy with the performance of the Nanoxia fan (and their excellent customer service) but wanted to try out a cheaper option. The Gelid rubber grommets are thicker than the Nanoxia ones, so I increased the case mounting hole diameters by 0.5mm.
- Direct 5V power for fans vs 5V from GPIO pin – Previous clusters have powered their fan from a GPIO pin of one of the nodes; however, given that this cluster would potentially have 2 fans running at 12V, I connected the step-up/boost converter with a direct line from the main case input instead.
- Few ventilation slots vs Many – Rather than laser-cutting dozens of ventilation slots all over the case (which takes a while), I only cut ventilation grills on the front & rear case panels, adjacent to the fans. The theory being that this might give a better airflow path through the case and out again?
- Case USB ports – While perfectly functional, I never really liked the combined twin USB port (with long cables that didn’t bend well) on my original cluster, so for this update I used two separate USB ports with short cables and up-angled male plugs, which gives more room inside the case.
- No internal shelf vs Shelf – Not needing a shelf to attach the USB hub to simplifies the design, and also means the case can be cut from a single 600x400mm sheet of acrylic. Removing the shelf would have reduced the rigidity of the case, but screwing the horizontal mounting rails to the side panels keeps it secure.
- Flat LAN cables vs Round – I loved the rainbow LAN cables in my RPi3 build, but it was a tight fit bending them in the case. These flat cables bend far more easily, which is even more important with so many nodes squeezed into the case. I originally tried 25cm cables, but they were far too long; swapping them for 15cm cables gave more free space inside the case.
- Blue LAN cables vs Boring grey – The blue really brings out the colour of their PCBs... plus the FriendlyARM logo is blue+green.
- Gigabit vs 10Gigabit switch – The Fire3 network ports are each 1000Mbps (10x faster than the Pi), so using at least a 1000Mbps switch is a no-brainer really. A 10Gbps switch would minimise that bottleneck (e.g., if ten or more Fire3s were saturating their link to the outside network), however these are still expensive at £200+, and too large to fit inside this case. The NETGEAR GS110MX switch looks promising.
- 4mm PCB spacers vs 6mm – Lowering the Ethernet switch PCBs gave a little more room for easier cabling and better airflow.
- Micro HDMI vs HDMI – The Fire3 boards have a Micro HDMI socket, so I used the shortest (50cm) Micro HDMI → HDMI cable I could source. A shorter cable with a separate HDMI → Micro HDMI adaptor was another option, but these are bulky and may have fouled one of the LAN ports.
- Black perspex panels vs Clear – To ‘hide’ the two fans, while leaving all the electronics visible from the sides or above. The black front panel also focuses the eye on the Unicorn LEDs panel.
- Unicorn pHAT LEDs panel vs just node LEDs – Because there are so many nodes in the cluster, I wanted the front of the case to include a visual “health display” showing the CPU speed, temperature, disk & network activity for each node... more

You can also read about some of the design choices on my original Pi cluster.

Server status lights with MQTT

I used the excellent Unicorn pHAT (32x RGB LEDs) from Pimoroni to give my cluster a colourful “health display” showing the CPU load, temperature, disk & network activity for each node. These low-cost boards would normally plug straight onto the headers of a Raspberry Pi, but require a little hackery to work on anything else. The rpi_ws281x library by Jeremy Garff uses some very clever low-level PWM/DMA code that is specific to the Raspberry Pi, so instead I've modified the library to use a single SPI pin to control the LEDs, which should work on almost any hardware.

The Unicorn pHAT is connected with just 3 jumper wires, to +5V, GND and SPI0 MOSI (pin 19) to the controller node. I describe how this works in detail, with full source code, in my article Do Raspberry Pi HATs work on other 'Pi' boards? (coming soon). The LEDs are very bright point sources and look much better with the light ‘spread out’ by a diffuser that attaches to the outside of the case with 2-4 M2.5 screws. You could cut your own diffuser from a frosted acrylic, or Pimoroni sell one with screws for £3.

To monitor the state of the cluster I'm running the lightweight Mosquitto MQTT broker (server) on the controller node, with each node publishing its current CPU speed, temperature, network activity, etc. to the broker once a second.
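A minimal sketch of the kind of publishing loop each node could run, using the standard mosquitto_pub client (the broker hostname and topic layout here are my illustration, not the actual scripts):

```shell
# Publish this node's SoC temperature to the MQTT broker once a second
# ('fire3-controller' and the 'cluster/<node>/temp' topic are assumptions)
while true
do
  temp=$(cat /sys/devices/virtual/thermal/thermal_zone0/temp)
  mosquitto_pub -h fire3-controller -t "cluster/$(hostname)/temp" -m "$temp"
  sleep 1
done
```

The controller can then watch every node at once with `mosquitto_sub -h fire3-controller -t 'cluster/#' -v`.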

Power, temperature & cooling

At idle, the entire system of twelve Fire3s, two network switches & dual 7V fans sips a mere 24W, and at 100% load it still only uses 55W in total.

Do I need the heatsinks? With double the number of cores, the Fire3 SoC can generate far more heat than the Pi 3, so a heatsink is essential. Luckily FriendlyARM include a substantial heatsink with thermal paste that securely clips onto the Fire3 board. It is much bigger than other aftermarket SBC heatsinks I've seen, and does a good job of reducing the temperature: however you will still want to use a fan.

The power adaptor only supplies up to 75W (1.1A per Fire3), so external USB devices (e.g. hard drives) would likely need a separate supply. Using:

cat /sys/devices/virtual/thermal/thermal_zone0/temp

to measure the SoC core temperature, the cluster idles at 39°C with cooling from both 12V fans.
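Note that the sysfs file reports the temperature in millidegrees Celsius, so a raw reading needs dividing by 1000, e.g.:

```shell
# thermal_zone0 reports millidegrees C; 39000 here stands in for an idle reading
echo 39000 | awk '{ printf "%.1f C\n", $1 / 1000 }'
# prints: 39.0 C
```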

At 100% load and both 12V fans, using:

sysbench --test=cpu --cpu-max-prime=20000000 --num-threads=8 run &

the SoC core temperatures reached a stable 58°C. Without fans the SoCs will quickly reach 80°C and automatically throttle down their clock speed, to avoid overheating. They can safely run long-term at that temperature, but you don’t get maximum performance.

Exactly the same case design should work with NanoPi Fire2s and Fire2As which run cooler than the Fire3, and so would only need a single fan. For cooling a single Fire3 you could use a much smaller fan, perhaps 40-60mm.

Unusually for ARM SBCs, the Fire3 boards include an ultra-low power (~5µA) sleep mode, which suggested the possibility of powering down individual nodes when not required, then waking them on demand. Unfortunately there is no Ethernet Wake-on-LAN support, just an inflexible “wake after X minutes” setting. There is however a PWR toggle header on the boards, which could be soldered and wired for remote wake from the GPIO pins of a controller node?
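If a node's PWR header were wired to a controller GPIO, simulating a button press from Linux could look something like this sketch using the legacy sysfs GPIO interface (the pin number is entirely hypothetical, and I haven't tried this):

```shell
# Hypothetical: a node's PWR header wired to GPIO 18 on the controller.
# Export the pin, pulse it high briefly to simulate a press, then release it.
echo 18 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio18/direction
echo 1 > /sys/class/gpio/gpio18/value
sleep 0.2
echo 0 > /sys/class/gpio/gpio18/value
echo 18 > /sys/class/gpio/unexport
```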

Silent cooling:

To cool down the cluster I fitted dual 92mm fans inside the case. I used (effectively) silent fans recommended by Quietpc.com: the Gelid Silent 9 (£5.40 each).

At 5V I have to get my ear within 50-75mm (2-3") to hear even the slightest whisper from the fans, and the supplied rubber grommets definitely do a good job of isolating the case from any small vibrations. However at 12V the fans are quite audible (20dBA) in an otherwise silent room, so I was looking for the voltage that would provide enough cooling while keeping the fans silent. I used a step-up/boost converter to adjust the speed of the fan(s) by controlling their voltage between 5V and 12V.

| Description | Heatsinks? | Idle | 100% load | Performance |
|---|---|---|---|---|
| Case, rear 12V fan, 1500 rpm | yes | 42°C | 66°C | OK |
| Case, rear 9V fan, ? rpm | yes | 44°C | 71°C | OK |
| Case, rear 7V fan, ? rpm | yes | 46°C | 75°C | throttles |
| Case, both 12V fans, 1500 rpm | yes | 39°C | 58°C | OK |
| Case, both 7V fans, ? rpm | yes | 40°C | 65°C | OK |
| Case, both 5V fans, ? rpm | yes | 46°C | 77°C | throttles |

(temperatures shown are averages across the different nodes, so an average of 71°C (158°F) actually had two of the boards close to throttling.)

I was surprised that the 2nd fan didn't make more difference, and in the end it was a choice between a single fan running at 9V vs. two fans at 7V, with the two-fan solution running a little cooler and quieter. I'm guessing that a 2nd fan would have more effect with a larger case volume and/or with a less straightforward airflow path?

Power cables, episodes I→V:

I. Fire3s are powered via microUSB just like the Pi, but I couldn’t source a 12-port/15A USB hub. I considered a 6-port hub with 6x 2-way microUSB splitter cables, or even two separate 6-port USB hubs. These didn’t provide enough power for 12 nodes, or took up too much space inside the case, respectively.

II. With an external AC-DC power ‘brick’ I tried using some off-the-shelf 8-way and 6-way splitter cables designed for CCTV cameras, combined with right-angle microUSB→DC jack adaptors, but these took up a lot of space (bad for airflow) and weren’t rated for the current, which led to a drop in the voltage going to each Fire3 board.

III. Could I use the steel mounting rails as conductors for 5V+GND?! Not quite as mad as it sounds: each rod has a low resistance of only 0.5Ω and should be electrically isolated from the node PCBs. But I couldn’t think of a way to make a solid connection from each node to the rods that would still make it easy to unplug them in the event of needing to replace a node, etc.

IV. A New Hope? Instead of soldering 12x custom microUSB cables, was there another way to power the nodes? The Fire3 boards have unpopulated 5V+GND points on their PCBs, such as the UART header. It would be easier & cheaper to solder a 2-pin header to each node and provide power with pre-made DuPont 2-pin connectors instead of microUSB. However this daring plan was scuppered when I realised that once the cluster is assembled, there isn’t enough space between each node to plug or unplug the 2-pin connectors... plus ideally I’d prefer to avoid soldering each board.

V. I ended up making my own “daisy-chain” power cables using two lengths of 2-core 0.5mm thinwall wire (11A rating, 6 nodes per length) and soldered microUSB angled connectors on short spurs, to give a neat result that takes up very little space inside the case and allows for excellent airflow. The two daisy-chains exit the case via separate DC sockets, to limit the maximum current through a single DC barrel jack. It also means you can power just the top or bottom rails of nodes if you wish.

Building the Fire3 Cluster

The trickiest part of this build turned out to be finding a good solution for powering the 12 nodes, 2 Ethernet switches and 2 fans. I was keen to avoid lots of soldering and making custom cables... Both Ethernet switches are also powered from the main 5V input, via soldered DC barrel connectors.

The build process is similar to that of my 40-core ARM cluster using the NanoPC-T3, but with more nodes and an extra network switch & fan. The Fire3 boards are spaced every 20mm along the M3 threaded rods, and secured with 8x nuts per board.

For neatness I hot-glued the 5V-to-12V step-up/boost converter PCB to the back case panel, and added header pins so the fans can be easily (dis)connected. Some cables were routed out of the way using small zip-ties.

The Pimoroni LEDs display was connected to a single controller node via 3 GPIO pins... more.

Bill of materials

Most of these parts were sourced from individual sellers on AliExpress or eBay, which of course racks up the postage charges. If there were enough demand, it would be cheaper to bulk buy the parts and have a kit with everything you need to build the cluster.

| Item | Price |
|---|---|
| Edimax ES-5800G V3 Gigabit Ethernet Switch (2 pack) | £19.96 |
| Flat 15cm Cat6 LAN cables (12 pack) | £6.79 |
| M3 steel screws 12mm (8 from a 10 pack) | £1.45 |
| M3 brass female standoff 4mm (8 from a 50 pack) | £0.99 |
| 5.5/2.1mm DC connector (2 from a 5 pack) | £1.49 |
| 1m jumper wire red+black | n/a |
| 1m 2-core 0.5mm thinwall (11A) DC power wire | £0.99 |
| Angled microUSB solder type connector (12 from a 20 pack) | £1.63 |
| 5.5/2.1mm chassis mount DC socket (2 from a 10 pack) | £0.65 |
| 10A terminal block (4 from a 12 strip) | £1.29 |
| 100W PSU (5V @ 20A) fanless power supply, 5.5/2.1mm plug + UK plug | £13.51 |
| RJ45 male to female screw mount (2 pack) | £1.74 |
| M3 steel screws 8mm (4 from a 5 pack) | £1.25 |
| M3 steel threaded bar 150mm inc. washers+nuts (8 pack) | £9.20 |
| M3 nylon hex nuts (120 from a 150 pack) | £1.73 |
| 50cm Micro HDMI male to HDMI female panel mount | £2.19 |
| 25cm USB female panel mount to up-angled male plug (2 pack) | £2.38 |
| 3mm extruded clear perspex 600x400mm | £5.32 |
| Mini 5V-to-12V step-up/boost converter | £2.04 |
| Laser cutting charge | n/a |
| Gelid Silent 9 92mm case fan (2 pack) | £11.65 |
| Polyurethane rubber feet (4 from a 10 pack) | £1.75 |
| Unicorn pHAT 32x RGB LEDs panel | £10.00 |
| M2.5 black screws 10mm (2-4 from a 20 pack) | £1.02 |
| Small zip-ties (10 pack) | n/a |
| Subtotal inc P&P | £97.73 |
| NanoPi-Fire3 at US$35/each (12 pack)¹ | £383.38 |
| SanDisk Industrial class 10 8Gb microSDHC card (12 pack) | £62.16 |
| Total inc P&P | £543.27 |

¹ The NanoPi-Fire3 is duty free to import into the UK, and only costs US$29 to ship 12 boards from China, but then there is UK VAT at 20%, bringing the total to £383.38.

Clusters of other Single Board Computers

So far I’ve also built clusters using several other single board computers. The 5-node clusters mostly share the same components, including the acrylic case panels: only the 2 side panels are unique because the boards are different sizes.