I’ve been doing some work with the Ethereum blockchain recently, and the idea I worked on required writing a relatively large array of integer values into a smart contract’s storage. Now, most tutorials warn you not to store a lot of data on the blockchain, because it’s very costly. But how much is “a lot”, and at what point does the price become too high for practical purposes? I needed to find out, since we really needed this data on-chain.

For a quick approximation, let’s look at a widespread type of contract: the ERC20 token. At the very least, it stores a mapping from token holders’ addresses to their balances. An address is a 20-byte value and a balance is usually 32 bytes, so we have at least 52 bytes per token holder, which in reality translates to 64 bytes, since storage in the EVM is parceled into 32-byte slots. A somewhat popular token can easily have 10000 or more holders, so we can estimate that it stores about 625 KB of information. That’s quite a lot, actually – more than enough for our purposes, since by our estimates we only needed to store maybe tens of kilobytes!
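As a sanity check on that estimate, here is the arithmetic in a few lines of Python (the 10000-holder figure is just the ballpark assumed above):

```python
# Back-of-the-envelope estimate of ERC20 balance-mapping storage.
ADDRESS_BYTES = 20
BALANCE_BYTES = 32
SLOT_BYTES = 32

raw_per_holder = ADDRESS_BYTES + BALANCE_BYTES        # 52 bytes of payload
# Storage is parceled into 32-byte slots, so round up to whole slots.
slots_per_holder = -(-raw_per_holder // SLOT_BYTES)   # ceiling division -> 2 slots
stored_per_holder = slots_per_holder * SLOT_BYTES     # 64 bytes

holders = 10_000
total_kb = holders * stored_per_holder / 1024
print(raw_per_holder, stored_per_holder, total_kb)    # 52 64 625.0
```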

Naive approach

So, let’s go ahead and pass that data to the EVM and write it to storage:



uint8[] m_test;

function test(uint8[] data) public {
    m_test = data;
}

Surprise! This function spends gas like there’s no tomorrow. Just trying to save 100 one-byte values with it uses up 814033 gas – a whopping 8140 gas per byte stored!

Okay, let’s take a step back here. What’s the theoretical minimum cost of storing data in the Ethereum blockchain? First, remember that storage is divided into 256-bit (32-byte) slots. The EVM can only read or write a full slot, so it makes sense to pack our data into 32-byte buffers and write each one in a single operation, because the EVM opcode that writes to storage – SSTORE – alone costs 20000 gas per slot! So, if we ignore all other costs, the minimum we can pay is 625 gas per byte. A far cry from the ~8140 we observed. Let’s dig in and find out who’s spending our gas and what we can do about it!
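The 625-gas floor, and the gap between it and the observed cost, are easy to verify numerically:

```python
SSTORE_NEW_SLOT = 20_000   # gas charged by SSTORE for the first write to an empty slot
SLOT_BYTES = 32

min_gas_per_byte = SSTORE_NEW_SLOT / SLOT_BYTES
print(min_gas_per_byte)                 # 625.0

observed_per_byte = 814_033 / 100       # from the uint8[] experiment above
print(round(observed_per_byte / min_gas_per_byte, 1))   # ~13x the theoretical minimum
```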

Our first instinct should be to check the assembly the Solidity compiler generates for this line of code (m_test = data), since it’s the only one we have. That instinct is correct: a look at it reveals the horrifying fact that the compiler generated a monstrosity that can’t quite be understood at a glance. It does all kinds of things, some of them very expensive. Worst of all, it calls SSTORE way, way too often! What IS happening here?

Well, several things. Storing 8-bit values is actually the worst thing you can do in the EVM; you lose performance at every corner. First of all, when you pass an array of 8-bit values to a function, each element is expanded to 256 bits, padded with zeroes. So, on the cost of call data size alone, you’re already overpaying 32 times! Nice! However, as an astute observer might have noticed, the cost per byte is about 13 times the minimum, not 32, which, while not a big consolation, suggests that the contract’s storage, at least, is not wasted the same way the call data is. And indeed, it isn’t: arrays in Solidity are stored tightly packed, stuffing as many values into every 32-byte slot as possible. This, however, raises the question of how the conversion from unpacked call data to packed storage happens, and the answer is “in the least effective way I can imagine”.
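To see the 32× call data blow-up concretely, here is a minimal Python sketch of how the ABI encodes a dynamic uint8[] argument: a 32-byte length word, then one zero-padded 32-byte word per element (the 4-byte selector and the head offset word are left out for brevity):

```python
def encode_uint8_array(values):
    """Sketch of the ABI tail encoding for a dynamic uint8[]:
    a 32-byte length word, then each element padded to 32 bytes."""
    out = len(values).to_bytes(32, "big")
    for v in values:
        out += v.to_bytes(32, "big")   # 1 byte of payload, 31 bytes of zero padding
    return out

data = encode_uint8_array(list(range(100)))
print(len(data))   # (1 + 100) * 32 = 3232 bytes of call data for 100 payload bytes
```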

Written out in pseudo-code, our single innocent line of code expands to a very dreary bit of programming:

for(i = 0; i < data.length; ++i) {
    // Read a byte from call data, converting 256-bit value to 8-bit one
    uint8 from_value = uint8(data[i]);
    // Read a 32-byte word from storage - this involves some computations which we abstract here
    uint256 to_value = get_storage_data_at_offset(m_test, i);
    // Write byte to a proper position inside the 32-byte word (this also isn't very straightforward)
    add_byte_to_value(to_value, i % 32, from_value);
    // Write the 32-byte word back to storage
    set_storage_data_at_offset(m_test, i, to_value);
}

This code is not affected by turning on optimization (at least in version 0.4.24 of the compiler), and as you can see, it calls SSTORE (as part of set_storage_data_at_offset) 32 times more often than needed. Its only saving grace is that calling SSTORE on the same location a second time costs not 20000 but 5000 gas. So, every 32 bytes cost us at least 20000 + 5000*31 = 175000 gas, or about 5470 gas per byte. The rest of the cost comes from the constant re-reading of the storage (in get_storage_data_at_offset), which isn’t very cheap either, and from the rest of the computations involved.
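Plugging in the published SSTORE prices (20000 gas for a fresh slot, 5000 for rewriting a slot that is already non-zero) gives the per-slot figure:

```python
SSTORE_NEW = 20_000     # gas: writing a zero slot for the first time
SSTORE_UPDATE = 5_000   # gas: rewriting a slot that is already non-zero

# One byte-sized write per SSTORE means each 32-byte slot is written
# once as a fresh slot and then rewritten 31 more times.
gas_per_slot = SSTORE_NEW + 31 * SSTORE_UPDATE
print(gas_per_slot, round(gas_per_slot / 32))   # 175000 5469
```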

Solution for 8-bit integers

There is not much we can do about the compiler, so the takeaway is that you should never pass uint8 arrays into smart contract functions or store them via simple assignment.

Without further suspense, I will share the solution right now: use bytes. As in, the “bytes” type, in both the function parameter declaration and the storage variable declaration.

bytes m_test;

function test(bytes data) public {
    m_test = data;
}

This costs just 129914 gas for 100 values, or about 1300 gas per byte – roughly six times less than working with uint8! Before I discovered that, I tried to write a JULIA (Solidity macro-assembly) function to do an efficient assignment, but due to some problems which I will discuss further on, it turned out slightly less efficient than using the “bytes” type, costing 153023 gas per 100 values, or about 1530 per byte.

Interestingly enough, the more data you store at once, the lower the cost per byte becomes, suggesting that a significant part of the cost is fixed. For example, when storing 3000 values, the cost of the “bytes” approach falls to around 900 gas per byte.

More general solution

All’s well that ends well, right? But our problems don’t end here, because sometimes we might want to store something other than the 8-bit values that fit so nicely into the “bytes” type. I mean, we can encode anything into a bytes buffer, but accessing it from the contract will not be convenient, and can even cost too much in byte-wrangling. So we do need a function to convert bytes into (u)intSOMETHING[] after all. Writing such a function was very instructive and gave me a much better understanding of the EVM and Solidity, and I’ll try to reproduce my findings here, because I found no other single place that explained everything I needed well enough.

First of all, let’s discuss the way Solidity stores arrays. Arrays are purely a Solidity construct – the EVM knows nothing about them and just has abstract storage, which, conceptually, is a very large number (2^256) of 32-byte slots. It doesn’t store all the empty slots, of course, so in reality this is a map of non-empty slots, where the key is a 32-byte integer value. This is what the SSTORE and SLOAD opcodes take as input.
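Conceptually, that storage model is just a sparse dictionary; a toy Python model (the class and method names are mine, not EVM terms) makes the point:

```python
# Toy model of EVM storage: a sparse map from 32-byte keys to 32-byte words.
class Storage:
    def __init__(self):
        self.slots = {}                   # only non-empty slots are kept

    def sstore(self, key, word):
        self.slots[key] = word

    def sload(self, key):
        return self.slots.get(key, 0)     # untouched slots read as zero

s = Storage()
s.sstore(3, 0xDEADBEEF)
print(s.sload(3), s.sload(4))             # 3735928559 0
```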

To provide array-like storage, Solidity uses this trick: first, the array’s “main” slot is allocated somewhere at the beginning of the storage, just as if it were a uint256, which means it gets a whole storage slot to itself. There, the array’s length is stored. But since the size of the array’s data is unknown (we’re talking about dynamic arrays here), the Solidity authors needed somewhere safe to store it without overlapping anything else. Well, actually, that’s impossible – in theory, you can create two arrays of length > 2^128 and they will overlap – but since in practice you probably won’t, they came up with this simple trick: take the Keccak-256 (SHA3) hash of the array’s main slot number and use it as the starting address for the array’s data. From there on, elements are stored sequentially, occupying neighboring slots as needed.
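The addressing scheme can be sketched in Python. One caveat: Ethereum uses the pre-standard Keccak-256, while Python’s hashlib.sha3_256 is the finalized SHA-3 with different padding, so the stand-in below produces different hash values than the EVM would – but the shape of the computation (hash the main slot, then offset sequentially) is the same:

```python
import hashlib

def array_data_slot(main_slot: int, index: int) -> int:
    """Sketch of Solidity's layout for a dynamic array at `main_slot`:
    element `index` lives at hash(main_slot) + index.
    NOTE: hashlib.sha3_256 stands in for the EVM's Keccak-256 here;
    the two differ in padding, so actual slot numbers will differ."""
    digest = hashlib.sha3_256(main_slot.to_bytes(32, "big")).digest()
    base = int.from_bytes(digest, "big")
    return (base + index) % 2**256

slot0 = array_data_slot(5, 0)
slot1 = array_data_slot(5, 1)
print(slot1 - slot0)   # 1 -- elements occupy consecutive slots
```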

So, in theory, all we need is to find the array’s data location and copy our bytes there, slot by slot. As long as we can fit more than one array element into every slot, and the elements fill every slot fully (i.e. they are uint8, uint16, uint32, uint64, uint128 or their signed counterparts), this method is going to be more efficient than the naive m_test = data; approach. However, if you try to store anything bigger, don’t bother with hand-written assembly.
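For element widths that divide 32 evenly, the packing itself is simple. Here is a Python sketch of how four 64-bit values share one 32-byte word, with element 0 in the low-order bytes, matching Solidity’s packing order:

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def pack_int64s(values):
    """Sketch: pack 64-bit values four to a 32-byte storage word,
    the way Solidity packs array elements tightly.
    Solidity puts the first element into the low-order bytes of the slot."""
    words = []
    for i in range(0, len(values), 4):
        word = 0
        for j, v in enumerate(values[i:i + 4]):
            word |= (v & MASK64) << (64 * j)
        words.append(word)
    return words

words = pack_int64s([1, 2, 3, 4, 5])
print(len(words))   # 2 words for 5 values, instead of 5 full slots
```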

However, one problem remains. If you try to do it the straightforward way, you’re going to end up with your array’s byte order reversed. I think this happens because Solidity’s array indexing code expects bytes to be stuffed into a word in big-endian order. You could, of course, reverse your bytes before sending them to the contract, but I chose to do it in Solidity, to simplify the API.

With all that in mind, here’s the function I finally came up with for 64-bit signed integers (it can easily be adapted for other sizes):

function assign_int64_storage_from_bytes(int64[] storage to, bytes memory from) internal {
    // Resize the destination array. Since we're writing the code for int64, we use 8 bytes per value.
    to.length = from.length / 8;
    // Compute the base location of the array's data by taking the hash of its position (slot)
    uint256 addr;
    bytes32 base;
    assembly {
        // keccak256 works on memory, so we have to save the number of the array's slot
        // to a memory variable
        mstore(addr, to_slot)
        base := keccak256(addr, 32)
    }
    uint i = 0;
    for(uint offset = 0; offset < from.length; offset += 32) {
        // Load a 32-byte word from the source array.
        // Don't forget to skip the first 32 bytes - in memory arrays, the array's length is located
        // just before the data!
        uint256 tmp;
        assembly {
            tmp := mload(add(from, add(offset, 32)))
        }
        // Reverse the byte order. I guess you can do it much more optimally, but this is more understandable.
        for(uint b = 0; b < 16; ++b) {
            uint shift = b*8;
            uint shift2 = (256 - (b+1)*8);
            uint low = (tmp & (0xFF << shift)) >> shift;
            uint high = (tmp & (0xFF << shift2)) >> shift2;
            tmp = tmp & ~((0xFF << shift) | (0xFF << shift2));
            tmp = tmp | (low << shift2) | (high << shift);
        }
        // Store the word in storage
        assembly {
            sstore(add(base, i), tmp)
        }
        i += 1;
    }
}
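To convince myself that the swap loop really performs a full 32-byte reversal, here is the same inner loop transcribed into Python (arbitrary-precision integers make the masking behave like uint256):

```python
def reverse_word_bytes(tmp: int) -> int:
    """Python rendering of the byte-swap loop from the Solidity function above:
    swap byte position b with byte position 31-b for b in 0..15,
    which reverses all 32 bytes of the word."""
    for b in range(16):
        shift = b * 8
        shift2 = 256 - (b + 1) * 8
        low = (tmp & (0xFF << shift)) >> shift
        high = (tmp & (0xFF << shift2)) >> shift2
        tmp &= ~((0xFF << shift) | (0xFF << shift2))
        tmp |= (low << shift2) | (high << shift)
    return tmp

word = int.from_bytes(bytes(range(32)), "big")
out = reverse_word_bytes(word)
print(out.to_bytes(32, "big") == bytes(reversed(range(32))))   # True
```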

With 64-bit integers we don’t lose as much performance to a simple assignment as we do with 8-bit ones, but this solution still outperforms the naive one by enough to consider using it. When saving 100 64-bit integers, a naive assignment uses 1003225 gas (10032 per integer, 1254 per byte), while the function above uses 718466 gas (7184 per integer, 898 per byte).

It must also be noted that the per-block gas limit puts a hard cap on how much data we can save in one go. What’s worse, appending data to an already filled array is going to be an unpleasant exercise, unless its last storage slot was filled completely (in which case it’s the same function, just with a different starting offset). With the current (as of September 2018) gas limit per block hovering around 6000000, this means we can save about 6 KB of data in one go – and actually probably less than that, because we also need to pay for the transaction’s input data itself.
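The capacity ceiling follows from simple division; both inputs are the approximate figures quoted above:

```python
BLOCK_GAS_LIMIT = 6_000_000   # approximate per-block limit, September 2018
GAS_PER_BYTE = 900            # best observed rate with the "bytes" approach

max_bytes = BLOCK_GAS_LIMIT // GAS_PER_BYTE
print(max_bytes, round(max_bytes / 1024, 1))   # 6666 bytes, ~6.5 KB upper bound
```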

I have to wonder, though, whether a more effective solution is possible with better knowledge of the EVM. If you have any ideas on how to improve it, please share them in the comments!