Behind the scenes: Building an event notification system for Neo smart contracts

Albert Acebrón · Mar 27

While it’s currently possible to emit events from Neo smart contracts, we found that it’s not easy for applications to listen for these events, as all the current solutions require running your own infrastructure.

Because of this, we took it upon ourselves to build a system, open to everyone, that can deliver push notifications for smart contract events, newly finalised blocks, and transactions broadcast to the network as they enter the mempool.

In this post I will walk through the journey of building this system, detailing the problems we ran into and how we solved them. If you just want to give the system a go or see it in action, you can visit this page, which displays the notification feeds in real time, check the source code on GitHub, or simply open a websocket connection to one of the following endpoints:

# Events triggered in all smart contract executions
wss://pubsub.main.neologin.io/event

# Events triggered by a specific contract
wss://pubsub.main.neologin.io/event?contract=0xfb84b...bf43

# New blocks that have been included in the blockchain
wss://pubsub.main.neologin.io/block

# Transactions that were just broadcast to the network (still not confirmed!)
wss://pubsub.main.neologin.io/mempool/tx
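As a sketch of how a client might consume these feeds, here is a minimal Python example. The helper names are ours, not part of the service, the third-party `websockets` package is an assumption, and the message format is whatever JSON the server sends:

```python
import asyncio
import json

BASE = "wss://pubsub.main.neologin.io"

def channel_url(channel, contract=None):
    """Build the URL for one of the channels listed above.

    channel:  "event", "block" or "mempool/tx"
    contract: optional contract script hash, to filter events by contract
    """
    url = f"{BASE}/{channel}"
    if contract is not None:
        url += f"?contract={contract}"
    return url

async def listen(channel, contract=None):
    # Requires the third-party `websockets` package (an assumption here).
    import websockets
    async with websockets.connect(channel_url(channel, contract)) as ws:
        async for raw in ws:  # print each pushed message as it arrives
            print(json.dumps(json.loads(raw), indent=2))

# Example usage (commented out so the module imports cleanly):
# asyncio.run(listen("block"))  # stream newly finalised blocks
```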

How it all started

Following the “thou shalt not reinvent the wheel” mantra, we started by forking a project O3 had started about two years ago, which created subscription channels for events happening at the P2P protocol layer. Essentially, it let subscribers listen for events such as blocks, transactions or consensus messages being relayed across the network: it posed as a normal P2P node, emulating the P2P protocol and connecting to other nodes, and rebroadcast the messages it received from the P2P network to its subscribers through websockets.

But after fixing its dependencies and bringing it back to life, we hit our first problem: connections with nodes were dropped for no apparent reason, a behaviour the code clearly didn’t anticipate, as it contained no safeguards against it. Digging into Neo’s source code revealed the cause: at some point after the project was originally written, a change landed in Neo that makes a node start a timer every time it receives a message from a peer, and, if that timer expires, the peer gets kicked from the connection pool.

That change was introduced to stop inactive nodes from occupying slots in the connection pool, which is deliberately limited (usually to a maximum of 60 connections) to prevent attacks that try to exhaust the node’s memory by opening lots of connections. We quickly worked around it by making our dummy node send ping messages on a schedule, constantly resetting the timer and keeping our connections open indefinitely.
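The workaround can be sketched as a small keep-alive loop. The names and interval below are illustrative, not Neo's; the real code writes an actual P2P ping message to the peer socket:

```python
import threading

# Send a ping on a fixed schedule so the remote node's inactivity timer
# is constantly reset and never expires.
PING_INTERVAL = 30  # seconds; must be comfortably below the peer's timeout

class KeepAlive:
    def __init__(self, send_ping, interval=PING_INTERVAL):
        self._send_ping = send_ping  # callable that writes a ping to the peer
        self._interval = interval
        self._stopped = threading.Event()

    def start(self):
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        # Event.wait returns False on timeout, so this pings once per
        # interval until stop() is called.
        while not self._stopped.wait(self._interval):
            self._send_ping()

    def stop(self):
        self._stopped.set()
```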

Adding events into the mix

Now, once we had our flashy new system working, it became glaringly obvious that raw P2P messages weren’t very useful on their own, as most of them carried unconfirmed data. Any dApp that relied on these messages for its protocol would expose itself to double-spending attacks: by their very nature, the transactions we were relaying had not yet been included in the blockchain, so it was entirely possible for someone to broadcast a conflicting transaction and get it into a block, orphaning the original transaction we had relayed.

Therefore, we set out on the path of obtaining finalised data: events that had already been made part of the blockchain and which a dApp could rely on. Some work had already been done in that space, with neo-python allowing the collection of events triggered inside contracts, and a C# plugin by hal0x2328 that dumped those same emitted events onto a Redis queue.

An interesting detail we came across here is that, internally, neo-cli records all emitted events identically, including those emitted in transactions that end up failing. This means that anyone naively collecting these events would let an attacker trigger an event while avoiding its effects, making it possible to pretend they had sent a NEP-5 token when in reality the transaction had failed and no transfer had taken place.
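A collector therefore has to filter on each execution's VM state and discard events from failed runs. A minimal sketch, with field names that are illustrative rather than the exact neo-cli schema:

```python
def safe_notifications(executions):
    """Yield notifications only from successful (HALT) executions.

    `executions` is assumed to be a list of dicts like
    {"vmstate": "HALT", "notifications": [...]}; FAULTed executions
    (failed transactions) are skipped so their events are never relayed.
    """
    for execution in executions:
        if execution.get("vmstate") == "HALT":
            yield from execution.get("notifications", [])
```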

Now, going back to our story, we integrated that plugin into our system, made our server relay the events collected by the node over websockets, and set out on the final step: deployment!

Ship it!

In order to make the system scale horizontally and handle traffic spikes, we went for a structure with a single Neo node that collects information and relays it to a group of servers running on Heroku, which handle all the connections from clients. This lets us leverage Heroku’s infrastructure, as Heroku will spawn, kill and load-balance the servers to make sure all demand is met.
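Each relay server essentially performs a fan-out: the single upstream feed from the Neo node is rebroadcast to every connected client. A minimal sketch under our own names (the real servers do this over websockets):

```python
class Broadcaster:
    """Fan one upstream message stream out to many subscribers."""

    def __init__(self):
        self._clients = set()

    def register(self, send):
        # `send` is a callable that delivers one message to one client,
        # e.g. a bound websocket send method.
        self._clients.add(send)

    def unregister(self, send):
        self._clients.discard(send)

    def publish(self, message):
        # Copy the set so clients can disconnect while we iterate.
        for send in list(self._clients):
            send(message)
```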

It is important to note that, in this structure, the Neo node could become a bottleneck if an extremely large number of clients started opening connections, causing many new servers to spawn and connect to the node and potentially overwhelming it. Nevertheless, this would require a huge number of client connections, and at that point I believe most of the network’s nodes would be down anyway, so I don’t think it’s a real concern.

Finally, as a parting thought, I personally believe that interoperability between tools is of the utmost importance in this space, so we’ve integrated this system into CoZ’s monitor and we are in the process of making it part of the neon-js library.