When we talked about Dapps in Part 1 and Part 2, one might think we can just get rid of “crappy old” backend services and centralized software. Well, that is not completely the case. Blockchain does an impressive job of safely transmitting value, securing the critical logic of the application, and even authenticating users. But it is still somewhat slow and, what is worse, ridiculously expensive as data storage.

A demo of the Dapp is available at this address.

All source code is freely available in this GitHub repository.

Join us at the Solidity development community to learn more.

Goal of the backend service

We want to design a backend that addresses the blockchain’s shortcomings. The main points we want to achieve with our service:

Get data from the blockchain and store it in an easier-to-access database.

Perform data analysis and data manipulation.

Be designed for data recovery from the blockchain.

Use an event-based design that works with multiple blockchain data inputs (RPC and WebSocket).

Be robust against transactions that get rejected or removed because of an uncle block.

The blockchain will be the data generator and decision maker. It directly processes user-made transactions and stores historical transaction data. The backend service will parse blockchain-created data and present it to the web. That way we provide a faster and much better user experience.

Backend implementation

For the backend service I’m using Node.js with MongoDB and the Express framework. Parsing data is done with the web3 library. I’m using the Infura RPC provider, so I don’t have to run my own Ethereum node. One drawback is that I don’t get a WebSocket and am only using RPC.

Solidity contracts are deployed to the Ropsten test network at address 0x70e5044cE689132d8ECf6EE3433AF796F8E46575.

Blockchain data storage

There are two ways to store data on the blockchain:

Contract storage is accessible from within the contract. These are state variables, and every change of state costs gas, lots of it. One storage unit is 32 bytes, and it generally costs 20,000 gas to change a value (less if the storage slot has been set before and is just being modified). Use https://ethgasstation.info/ to see current gas prices.

Log events are triggered (emitted) from within a transaction and are permanent, irreversible storage. They are not accessible from within the contract, but we can access them with our backend service by querying an Ethereum node. They are a much cheaper form of storage, costing 375 gas per log plus 8 gas per byte in the log.

In contract storage you store the latest state of the data, which the contract has to know about in order to correctly process new transactions. The rest of the data, which is not needed within the contract but is needed for the Dapp to work, should be emitted through Log events.
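Using the figures above, a quick back-of-the-envelope comparison shows why events are the cheaper home for historical data. Note that the LOG opcodes also charge 375 gas per indexed topic, which this sketch leaves out:

```javascript
// Rough gas comparison between a storage write and a log, using the numbers
// quoted above. Indexed-topic surcharges are omitted, so treat the log cost
// as a lower bound.
const storageWrite = 20000        // gas to set one 32-byte storage slot
const logWrite = 375 + 8 * 32     // gas to log 32 bytes of data: 631
console.log(Math.round(storageWrite / logWrite))  // roughly 32x cheaper
```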

We have already defined Log events in the smart contracts.

/// @notice Event propagated on each pool creation
event LogPoolCreated(address indexed _pool, string _name, uint256 _rate, uint256 _deadline);

/// @notice Event propagated on every executed transaction
event LogTransfer(address indexed _from, address indexed _to, uint256 _value);

/// @notice Event propagated on each new deposit to the pool
event LogIssue(address indexed _member, uint256 _value);

/// @notice Event propagated when a new address holds the most tokens
event LogNewShark(address indexed _shark, uint256 _value);

We will be able to parse data for each pool created, each deposit to the pool, each transfer and each shark change.

Parsing events

You can use Etherscan to see events emitted from the contract. This is for one of my test contracts. We see that it works: for each transaction, events are emitted. But how do we parse them on our backend?

const contractInstance = new web3.eth.Contract(contract.abi, contractAddress)
const events = await contractInstance.getPastEvents('allEvents', { fromBlock: fromBlock })

const promises = _.map(events, (obj) => {
  return handleEvent(obj)
})

await Promise.all(promises)

This is code from one of my parsers using the web3 library. First you need a contract instance, specified by the contract’s ABI and its address on the blockchain. The contract ABI is in a JSON file generated by Truffle. The address is also printed to the console when deploying with Truffle.
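The handleEvent function referenced in the snippet isn’t shown in the excerpt. A minimal sketch might look like the following; it maps events to plain records, where the real version would write to MongoDB, and the record shapes are assumptions:

```javascript
// Sketch of a handleEvent dispatcher for the four contract events. The real
// parser would persist these records to MongoDB; here each event is just
// mapped to a plain object (the record shapes are assumptions).
function handleEvent (event) {
  const v = event.returnValues
  switch (event.event) {
    case 'LogPoolCreated':
      return { type: 'pool', address: v._pool, name: v._name, rate: v._rate, deadline: v._deadline }
    case 'LogTransfer':
      return { type: 'transfer', from: v._from, to: v._to, value: v._value }
    case 'LogIssue':
      return { type: 'issue', member: v._member, value: v._value }
    case 'LogNewShark':
      return { type: 'shark', shark: v._shark, value: v._value }
    default:
      return null // unknown event, nothing to store
  }
}
```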

My backend queries the Ethereum node every 30 s for every contract with events (I hope that is not too many requests). Unfortunately I haven’t found a better solution with an RPC node yet. A WebSocket emits events in real time, but Infura doesn’t provide one yet, and running an Ethereum node on my own server is not as convenient. You could do that by running a geth, Parity, or Mist node.
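One 30-second polling cycle could be sketched like this. The store helpers and the injected dependencies are assumptions, not web3 APIs; persisting the last parsed block (e.g. in MongoDB) lets parsing resume where it left off after a restart:

```javascript
// Sketch of one polling cycle. Dependencies are injected so the loop is easy
// to test; `store` (last-parsed-block bookkeeping) and `handleEvent` stand in
// for your own persistence code and are assumptions.
async function pollOnce ({ contract, store, handleEvent, getBlockNumber }) {
  const fromBlock = (await store.getLastParsedBlock()) + 1
  const events = await contract.getPastEvents('allEvents', { fromBlock })
  await Promise.all(events.map(handleEvent))
  await store.saveLastParsedBlock(await getBlockNumber())
}

// In the real service, run it on a timer:
// setInterval(() => pollOnce(deps).catch(console.error), 30 * 1000)
```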

Designed for failure

This is the response for one of the Log events.

{

"address": "0xfc4F758BbB89F3570b7D2C2645922C9AEF0C5d2d",

"blockNumber": 3135323,

"transactionHash": "0x3d4f41e2fedecaa7d424b183ef367fc312f58f41dbac2ad92f654b594774d1fa",

"transactionIndex": 19,

"blockHash": "0x56dc21ea05f4dbc932cc786ec16c29da63f843d6ea8db4271fd8e224f34e5f66",

"logIndex": 8,

"removed": false,

"id": "log_cb1c6ac7",

"returnValues": {

"0": "0xaD5f3827284e60fbdA8836266919d4c376fca352",

"1": "0xb3D9B300bFaeafaE61f97C8b375E4Cc72c2Cabc3",

"2": "100000000000000000",

"_from": "0xaD5f3827284e60fbdA8836266919d4c376fca352",

"_to": "0xb3D9B300bFaeafaE61f97C8b375E4Cc72c2Cabc3",

"_value": "100000000000000000"

},

"event": "LogTransfer",

"signature": "0x0a85107a334eae0d22d21cdf13af0f8e8125039ec60baaa843d2c4c5b0680174"

}


You can check the documentation to see what exactly each field means. What I want to point out here is that receiving the event doesn’t make it final. It may not yet be included in a block, in which case the blockHash field is null. And even if it is included, it can be removed later because of an uncle block; in that case the removed field changes to true.

Exchanges usually require 12 block confirmations or even more to make sure a transaction is irreversible. Read Vitalik’s post for more info. It is important that you are able to update entries in your database even for events that have already been parsed, and that you detect removed events within at least the latest 12 blocks.
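One way to make the parser re-org aware, sketched under the assumption that parsed events live in a MongoDB collection: key each record by its (transactionHash, logIndex) pair and re-scan at least the last 12 blocks on every pass, so a removed event updates its existing record instead of leaving a stale entry.

```javascript
// Re-org-aware bookkeeping. An event's identity is its transactionHash plus
// logIndex; re-scanning the last CONFIRMATIONS blocks lets us flip `removed`
// on events dropped by an uncle block.
const CONFIRMATIONS = 12

function eventKey (event) {
  return `${event.transactionHash}:${event.logIndex}`
}

function rescanFromBlock (latestBlock) {
  return Math.max(0, latestBlock - CONFIRMATIONS)
}

// With MongoDB (collection name is an assumption), upsert by that key so a
// re-parsed event updates in place rather than duplicating:
// db.collection('events').updateOne(
//   { key: eventKey(event) },
//   { $set: { ...event, key: eventKey(event) } },
//   { upsert: true }
// )
```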

Backend API

Serving data to the end user is done through an Express REST API. You can see the routes in the router.js file.

const express = require('express')



// Middleware

const pagination = require('./pagination')



// Controllers

const poolController = require('./pool/controller')

const fishTokenController = require('./fishToken/controller')



const apiRoutes = express.Router()

const v1Routes = express.Router()



// Set v1 routes as subgroup/middleware to apiRoutes

apiRoutes.use('/v1', v1Routes)



v1Routes.post('/pool/list', pagination, poolController.getPools)

v1Routes.get('/pool/:id', poolController.getPoolById)

v1Routes.get('/token/:id', fishTokenController.getTokenById)



module.exports = apiRoutes

For now there are only a few routes. I will add more later, once I know what I need for the React web app.
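As an illustration of what sits behind one of these routes, here is a sketch of a controller for POST /v1/pool/list. The Pool model and the req.pagination shape set by the middleware are assumptions, not code from the repository:

```javascript
// Sketch of a controller factory for the pool list route. The injected Pool
// model stands in for a Mongoose model (hence the find().skip().limit()
// chain); req.pagination is assumed to be set by the pagination middleware.
function makeGetPools (Pool) {
  return async function getPools (req, res) {
    const { skip = 0, limit = 20 } = req.pagination || {}
    const pools = await Pool.find({}).skip(skip).limit(limit)
    res.json(pools)
  }
}

module.exports = { makeGetPools }
```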

Conclusion

In this part we’ve created a backend service that stores events emitted from the blockchain and serves them to the end user through a REST API. Doing all that directly on the blockchain is highly impractical and super expensive. That’s why we need a backend service to support our Dapp.

Think about just serving a list of all the available Pools. We would have to get every LogPoolCreated event ever emitted and parse the data in the client app. How about creating a search filter for that, or serving only Pools whose deadline hasn’t expired yet?

On the backend service we can do all of that, since storing data is cheap and manipulating it is way easier.
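For example, serving only unexpired pools becomes a one-line database filter. This is a sketch; the collection layout and the deadline being stored as a unix timestamp are assumptions:

```javascript
// Builds a MongoDB filter for pools whose deadline is still in the future.
// Assumes each parsed LogPoolCreated event stored `deadline` as a unix
// timestamp in seconds.
function activePoolFilter (nowSeconds) {
  return { deadline: { $gt: nowSeconds } }
}

// Usage: db.collection('pools').find(activePoolFilter(Math.floor(Date.now() / 1000)))
```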

Demo of the Dapp is available on this address.

All source code is freely available in this GitHub repository.

Parts of the series: