Implementing the use case

As outlined earlier, IOTA is under active development. By joining the Discord chat you can even watch the devs exchange information on their PRs on GitHub. The second-layer protocol MAM is also being developed further towards MAM+.

That means that the library I am using below might already be partly outdated by the time you read this.

Transmitting a message on a public MAM channel

On the RPi

pi@raspberrypi:~/tmmiot $

mkdir IOTA

pi@raspberrypi:~/tmmiot/IOTA $

git clone https://github.com/jhab82/tmmiot-IOTA-agent.git

pi@raspberrypi:~/tmmiot/IOTA/tmmiot-IOTA-agent $

npm install

Before we transmit the sensor data, let's briefly just send a message and understand the different options MAM provides.

The above code initializes the MAM object that is necessary to create the MAM channel (for those who like to deep-dive into the underlying construct of Merkle trees, go for it; there are short and long explanations out there).

We need the IOTA JavaScript library, const IOTA = require('iota.lib.js'), which allows us to create transactions and outsource work to an IOTA server (full node). These servers are distributed, mostly Virtual Private Servers (VPS) on AWS, Google Cloud or Azure, running the IOTA Reference Implementation (IRI). These so-called full nodes are the backbone of the current IOTA network.

IOTA was designed with the mesh-network environment of the future IoT in mind. The current network tries to approximate that with measures like manual peering and computational PoW.

As we have not installed a full node ourselves, we use publicly available full nodes and their exposed API to attach our message to the tangle. And as we are still in development, we of course use the development tangle provided by the IOTA Foundation: https://nodes.devnet.thetangle.org

Further we initialize our MAM object with the IOTA provider, Mam.init(iota, undefined, 1), define it to use the lowest security level and change it to be public. A MAM channel can be

public (meaning everyone with the address, which we call the root address here, can read the stream),

restricted (readable only with the root address and a password, the side-key, chosen by the publisher) and

private (readable only by the seed holder).

That is already quite a bit of deep diving into the words and phrases used by the IOTA ecosystem. Don't worry though, we have not even scratched the surface of this complex undertaking 😄 but the onboarding of data is very quick.

In the code we have an async publish function which translates our payload into trytes and attaches it to the tangle via Mam.attach. The full node does the heavy lifting here: it finds the two tips to confirm (gTTA) and performs PoW on them. On the RPi the transaction gets constructed (and, if private or restricted, signed as well) and then sent off to be broadcast.
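As a hedged sketch of this flow, based on the mam.client.js API (Mam.create / Mam.attach): the client and iota objects are passed in as parameters here so the flow can be read in isolation; in the real agent they are the required library modules.

```javascript
// Sketch of the publish flow: payload -> trytes -> MAM message -> tangle.
// Mam and iota are injected so this function can run against any objects
// exposing the mam.client.js / iota.lib.js shapes used in this article.
async function publish(Mam, iota, mamState, data) {
  const trytes = iota.utils.toTrytes(JSON.stringify(data)); // payload to trytes
  const message = Mam.create(mamState, trytes);             // build the MAM message, advancing the state
  await Mam.attach(message.payload, message.address);       // full node does gTTA, PoW and broadcast
  return { state: message.state, root: message.root };      // keep the new state for the next message
}
```

The returned state matters: publishing the next message on the same channel only works if we carry the updated mamState forward.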

pi@raspberrypi:~/tmmiot/IOTA/tmmiot-IOTA-agent $

node mam_min_publish.js

Feedback of the channel-id / root / address

RTPSMMSRXHS9TKHVNJ9B9KPVJSVOET9JRJVPVTZJTVCYZVKXVIWBLWIKAQCGNKBVJMEEA9GWVQEGGTOHKLLTWDD9WD

It takes two IOTA transactions for our one MAM message. If you increase the payload beyond the limits of a single IOTA transaction (according to iota.org one transaction can store 2187 trytes of message) or raise the security setting, you need more transactions per MAM message.

The tangle explorer shows us that we have created two transactions which got immediately confirmed.

In IOTA every transaction needs to validate two other transactions (those two transactions to confirm are found by tip selection, the getTransactionsToApprove (gTTA) function). Zero-value transactions don't need any validation themselves though: once broadcast to the tangle they get accepted. Value transactions, by contrast, only get confirmed when validated by others (basically making sure value is neither created nor deleted, and excluding double spends).

In order to protect the network from spam, the IOTA network requires some Proof of Work (PoW) for every attached transaction. PoW needs some computational power, and for now we outsource this to the publicly available full node (which luckily offers this via its API for free).

With the public MAM decoder we get the decoded message and the channel ID, which is the root address, plus the next channel ID (next root address) where the next message will be stored.

The above screenshot shows us that we have stored our data successfully and that we can fetch it with the provided root address (channel ID).

What is different from the other implementations

Contrary to both other IoT platforms, we did not have to authorize or register anything, set up an account, or hand over credit card details for future billing. Awesome, you might think, I'll store all my data here. Unfortunately you can't, at least not forever and not that easily. In order to keep the full nodes' databases reasonably small, the database needs to get pruned from time to time (IOTA calls that a snapshot): only the balances of the IOTA addresses get summed up, and the database starts all over again, of course with sound consensus on the state of all non-zero balances. So the data is not immutable at all? Sure it is, until the node gets pruned, unless you have a permanode which keeps all data and lets you query it (most likely for a fee, as in thetangle.business).

In contrast to the other IoT platforms, you won't have a central authority in this network which could view, alter or delete your data. It is not Google or Siemens, I mean the administrator, who authorizes your access. I am sure there are approaches with sound rights management and encryption to limit access in centrally organized permissioned systems, but they are all prone to fail once we move towards the machine economy.

Another feature for the paranoid data owner is IOTA's claimed quantum-resistant cryptographic architecture. That sounds very futuristic, and I am honestly by no means knowledgeable enough to discuss post-quantum cryptography with you. What I understood, though, is that most public-key algorithms could be broken by sufficiently powerful future quantum computers, while hash-based one-time signature (OTS) architectures like IOTA's Winternitz scheme are not that easily brute-forced even by very efficient quantum computing.

To sum it up, we have

created a MAM encoded message

attached this message via remote PoW to the tangle

received the message via a public tangle explorer (providing the channel-ID / root-address)

decoded the message via the public web interface

When executing our above program again, a new channel ID will be generated, which leaves our messages unrelated to each other. But we would like to be able to stop our stream and resume on the same channel when starting over. Let's do this next.

Resume publishing on a created channel

In order to resume publishing on a channel we need to store the next root address and some other information (including the seed) from the state of the MAM object (mamState).

In order to always be able to restream the entire channel, we also need to store the first root address. I have rewritten the code above to do just this. Now we can resume publishing on this public channel, and we would be able to resume it even after our RPi breaks down.

Next we restrict the channel from being viewed by others: we need to change the MAM object to be encrypted with a chosen password (side-key).

Change the channel to restricted mode

There is very little to change in the code except creating a key and transforming it to trytes:

const key = iota.utils.toTrytes("tmmiot-iota-sideky");
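Under the hood, toTrytes maps each ASCII character to two trytes from the 27-character tryte alphabet. A self-contained re-implementation for illustration only (use the library function in real code):

```javascript
// The 27-character tryte alphabet used by IOTA
const TRYTE_ALPHABET = '9ABCDEFGHIJKLMNOPQRSTUVWXYZ';

// Re-implementation of iota.utils.toTrytes for ASCII input:
// each character code c becomes two trytes, alphabet[c % 27] and alphabet[floor(c / 27)]
function asciiToTrytes(input) {
  let trytes = '';
  for (const char of input) {
    const code = char.charCodeAt(0);
    trytes += TRYTE_ALPHABET[code % 27];
    trytes += TRYTE_ALPHABET[(code / 27) | 0];
  }
  return trytes;
}

// asciiToTrytes('tmmiot-iota-sideky')
//   → 'HDADADXCCDHDRAXCCDHDPCRAGDXCSCTCZCMD'
// which matches the side_key you will see in mamChannelRoot.json below
```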

Then we need to change the Mode of our mamState:

mamState = Mam.changeMode(mamState, "restricted", key);

That's it. From here we can't use the tangle explorer, but another explorer lets us view our restricted channel. Reviewing the mamChannelRoot.json, which is now created with every new channel, we have all the information needed to request the stream.

{"subscribed":[],"channel":{"side_key":"HDADADXCCDHDRAXCCDHDPCRAGDXCSCTCZCMD","mode":"restricted","next_root":"DKMVKBRGITNGLVEZPOGASILFTIRK9KFJHBKJESQKFIYMGOJLFQKWKFMGIPASCNNTBGR9ITJRQGYJSJILS","security":1,"start":1,"count":1,"next_count":1,"index":0},"seed":"UJHR9SDKXUOBNVHDGDXKPUBCABGPXDWAATCKITOTQRQSJJCBXONMZVDVKZDPPYXNZYLTQMHHJ9SISIDJF"}

When we resume the feed we get further entries.

Adding the sensor data to the agent

We just add the packages like we did for the other agents in Google Cloud and MindSphere.

const rpiDhtSensor = require('rpi-dht-sensor'); // DHT sensor com package

var dht = new rpiDhtSensor.DHT22(4); // on GPIO 4

const Sensor = require('sds011-client'); // PM sensor com package

const sensor = new Sensor("/dev/ttyUSB0"); // use your system path of the SDS011 sensor

sensor.setReportingMode('query'); // set the PM sensor to query mode

sensor.setWorkingPeriod(3); // set the interval to 3 min.

Then we query our PM sensor, and a reading gets published to our channel every 3 minutes.
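A sketch of what each published payload could look like. buildPayload and the field names are my own; the readout shape follows rpi-dht-sensor's DHT22 read, and the PM values are assumed to arrive as plain numbers from the SDS011 query:

```javascript
// Hypothetical helper: bundle one round of sensor readings into the object
// we JSON-encode and publish to the MAM channel.
function buildPayload(readout, pm25, pm10) {
  return {
    time: new Date().toISOString(),   // timestamp of the reading
    temperature: readout.temperature, // °C from the DHT22
    humidity: readout.humidity,       // % relative humidity from the DHT22
    pm25,                             // µg/m³ fine particulates from the SDS011
    pm10,                             // µg/m³ coarse particulates from the SDS011
  };
}

// every 3 minutes: read both sensors, build the payload, publish it, e.g.
// setInterval(() => publish(buildPayload(dht.read(), lastPm25, lastPm10)), 3 * 60 * 1000);
```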

Wonderful: we can run this with nohup, leave the room and rest assured knowing that our sensor data is secured and immutably stored in the tangle.

Just to remember: our RPi is not doing a lot of work here. It fetches the sensor data every 3 minutes, creates a MAM message from it and leaves all the hard work to the full node, which finds our transactions to confirm, does the PoW and attaches the message to the tangle.

Probably the devnet full nodes are load-balanced Virtual Private Servers (VPS) providing us with an entry point to the devnet tangle. But it could just as well be a hardware-optimized cluster providing a very eco-friendly, efficient service.

Visualization of data

Compared to the Google Cloud implementation in Part 2, we have not yet seen an old-school database which can be queried with SQL to get all the necessary information for visualizing various metrics. With MindSphere the onboarded data was not yet queried via the available APIs or the GUI; we just saw a predefined visualization of our own sensor data. Both IoT platforms are not ready to just onboard new environmental sensors, as they are fully permissioned: an administrator would need to authorize them (as shown). Of course we would need to automate the onboarding process, but for now it's all manually permissioned.

With IOTA everyone could already onboard their data individually to the same tangle with the above guidance. But in order to gather the data from all sensors (100, 1,000 or 10,000 of them) we would need to think a bit further.

Of course we could implement a marketplace which stores the individual root addresses and their side-keys (the owner potentially being paid a bit of cash, or better, IOTA tokens), but that is maybe for later chapters (similar to data.iota.org). For now we assume we know all the root addresses and their side-keys.

Next I built a simple Google Charts application able to fetch the data from the tangle (by providing your root key), similar to the Firebase app we queried for our data, but writing the variables to a chart instead of listing them.

Visualization of the MAM stream

I have written a very rough visualization of a public MAM stream.

In a database you can query, say, the latest 10 entries of a filtered object (e.g. a device). With MAM you need to walk from the root to the end of the stream, as the next root of each message is derived from the current root.
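This sequential walk can be sketched as follows, assuming the Mam.fetchSingle(root, mode, sideKey) shape from mam.client.js (resolving to the payload plus the next root); the function is injected here so the walk itself stays library-agnostic:

```javascript
// Walk a MAM stream from its first root, collecting payloads in order.
// Reading is inherently sequential: each message reveals the next root.
async function walkStream(fetchSingle, firstRoot, mode, sideKey, limit = 1000) {
  const payloads = [];
  let root = firstRoot;
  while (root && payloads.length < limit) {
    const message = await fetchSingle(root, mode, sideKey);
    if (!message || !message.payload) break; // reached the end of the stream so far
    payloads.push(message.payload);          // still tryte-encoded at this point
    root = message.nextRoot;                 // derived from the current root
  }
  return payloads;
}
```

This is exactly why a "latest 10 entries" query is expensive: without a stored recent root you always pay for the full walk from the beginning.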

I have streamed my sensor data to the tangle for one day and now have 480 payloads immutably stored in this distributed ledger. In order to visualize this and continuously update the stream, I need to store my root addresses somewhere; otherwise I would have to walk the MAM stream from beginning to end every time, which is computationally heavy.

There are already other MAM implementations in the making, like RAAM. Furthermore, as mentioned, there is a MAM+ project on the foundation's agenda which might come with additional functionality.

But here it's the current MAM implementation, or should I say, I hope it's the current one. In order to reduce the computational effort we only derive the next message and store the next root temporarily. The user can then manually trigger decoding of the next message from the stream.

I have switched back to a public MAM stream (only because I don't know how to securely ask for the side-key in this backend implementation). I should have implemented it all client-side (maybe next).

Code snippet from the backend function for fetching a single MAM message and the router.

https://iota-mam-vis-app-dot-tell-me-more-iot.appspot.com/