Lists are a big part of mobile development

Lists are the heart and soul of mobile apps. Many apps spend most of their time displaying lists — the Facebook app with a list of posts in your feed, Messenger with lists of conversations, Gmail listing emails, Instagram listing photos, Twitter listing tweets…

As your lists become increasingly complex, with larger sources of data, thousands of rows and rich memory-hungry media — they also become harder to implement.

On one hand, you want to keep your app fast. Scrolling at 60 FPS has become the golden standard of native UX. On the other hand, you want to keep a low memory footprint. Mobile devices are not known for their abundance of resources. It appears that winning both of these fronts is not always a simple task.

Searching for the perfect list view implementation

It’s a common rule of thumb in software engineering that you can’t optimize in advance for every scenario. Let’s borrow from a different field — there is no single perfect database to hold your data. You’re probably familiar with SQL databases that excel in some use-cases, and NoSQL databases that excel in others. Since you probably won’t be implementing your own DB, your job as a software architect is often to choose the right tool for the job.

The same rule holds for list views. You probably won’t find a single list view implementation that will win in every use-case — while keeping both FPS high and memory consumption low.

Two types of lists

Roughly speaking, we can characterize two types of use-cases for lists in mobile:

Nearly identical rows with a very large data-source

A good example is a contact directory. Every contact row probably looks the same and has the same structure. We want to let users browse through many rows quickly until they find what they’re looking for.

High variation between rows and a smaller data-source

A good example is a chat conversation thread. Every row here is different, and includes a variable amount of text. Some hold media. Users will typically read messages progressively and not browse through the whole thread.

The benefit of splitting the world into different use-cases is that we can offer different optimization techniques for each one.

The stock React Native list view

React Native comes bundled with an excellent stock ListView implementation. It employs some very clever optimizations, like lazily loading rows as the user scrolls to them, reducing the number of row re-renders to a minimum and rendering rows in different event-loop cycles.
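To get a feel for one of these optimizations, here is a minimal plain-JavaScript sketch (not the actual React Native source) of the idea behind the `rowHasChanged` comparator used by `ListView.DataSource`: when new data arrives, only rows whose data actually changed are marked dirty and re-rendered.

```javascript
// The comparator the app supplies — here, simple reference inequality.
const rowHasChanged = (r1, r2) => r1 !== r2;

// Mark a row dirty if it is new, or if its data changed since last render.
function dirtyRows(oldRows, newRows) {
  return newRows.map((row, i) =>
    i >= oldRows.length || rowHasChanged(oldRows[i], row)
  );
}

const prev = ['alice', 'bob', 'carol'];
const next = ['alice', 'bobby', 'carol', 'dave'];
console.log(dirtyRows(prev, next)); // → [ false, true, false, true ]
```

Only the second and fourth rows would be re-rendered here; untouched rows keep their previously rendered output.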

Another interesting property of the stock ListView is that it’s fully implemented in JavaScript on top of the native ScrollView component that ships with React Native. If you come to React Native from a native development background, either on iOS or Android, this fact probably strikes you as odd. At the foundation of the native SDKs of both operating systems sit time-tested native list view implementations — UITableView for iOS and ListView for Android. It’s interesting that the React Native team decided not to rely on either of them.

There are probably many reasons why this came to be, but if I had to guess, I would say it has to do with the use-cases we mentioned earlier. The iOS UITableView and the Android ListView use similar optimization techniques that perform very well under the first use-case: nearly identical rows with a very large data-source. The stock React Native ListView is simply optimized for the second.

The flagship of lists in the Facebook ecosystem is the Facebook feed. The Facebook app had been implemented natively on iOS and Android long before React Native existed. The initial implementation of the feed probably did rely on the native UITableView on iOS and ListView on Android, and as you can imagine, did not perform as well as expected. The feed is a classic example of the second use-case. There’s high variation between rows because each post is different, with varying amounts of content, different media and structure. Users read through the feed progressively and normally don’t browse through thousands of rows in a single sitting.

Aren’t we supposed to talk about recycling?

If your use-case falls under the second use-case: High variation between rows and a smaller data-source — you should probably stick with the stock ListView implementation. If your use-case falls under the first use-case and you’re unhappy with how the stock implementation performs, it might be a good idea to experiment with alternatives.

Reminder, the first use-case was: Nearly identical rows with a very large data-source. The main optimization technique that has proven itself useful in this scenario is recycling rows.

Since our data-source is potentially very large, we obviously can’t hold all the rows in memory at the same time. To keep memory consumption at a minimum, we would only hold in memory rows that are currently visible on screen. As the user scrolls, rows that are no longer visible will be freed, and new rows that become visible will be allocated.
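The bookkeeping this requires can be sketched in a few lines. Assuming fixed-height rows for simplicity, we can compute which rows intersect the viewport for a given scroll offset — only these need to be held in memory:

```javascript
// Compute the inclusive range of row indices visible in the viewport.
// Assumes every row has the same fixed height.
function visibleRange(scrollOffset, viewportHeight, rowHeight, totalRows) {
  const first = Math.max(0, Math.floor(scrollOffset / rowHeight));
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollOffset + viewportHeight) / rowHeight) - 1
  );
  return { first, last };
}

// A 568pt-tall viewport over 10,000 rows of 64pt each, scrolled to 3,200pt:
console.log(visibleRange(3200, 568, 64, 10000)); // → { first: 50, last: 58 }
```

Out of 10,000 rows, only 9 are alive at this scroll position. Variable-height rows complicate the math but not the principle.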

The difficulty with constantly freeing and allocating rows as the user scrolls is that these operations are CPU-intensive. This naive approach would probably prevent us from reaching our 60 FPS target. This is where the current use-case comes to our aid: since the rows are nearly identical, instead of freeing a row that went off-screen, we can repurpose it for a new row. We simply replace the data it displays with the new row’s data, avoiding new allocations altogether.
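The repurposing idea can be sketched as a small recycling pool. The names here (`RowPool`, `bind`) are illustrative, not a real React Native API — the point is that scrolling through many data rows only ever touches a fixed, screen-sized set of view objects:

```javascript
// A fixed pool of row "views", sized to cover the screen. As the user
// scrolls, an off-screen view is rebound to the data of the row that is
// entering the screen — no new view is ever allocated.
class RowPool {
  constructor(size) {
    this.views = Array.from({ length: size }, () => ({ data: null }));
  }
  bind(viewIndex, rowData) {
    // Repurpose the existing view object: new data, no new allocation.
    this.views[viewIndex].data = rowData;
  }
}

const pool = new RowPool(3); // 3 views are enough to fill the screen
// Scrolling through 6 data rows reuses the same 3 view objects:
for (let row = 0; row < 6; row++) {
  pool.bind(row % 3, `row #${row}`);
}
console.log(pool.views.map(v => v.data)); // → [ 'row #3', 'row #4', 'row #5' ]
```

This is, in spirit, what UITableView’s cell reuse and Android’s view recycling do natively: the allocation cost is paid once per on-screen slot, not once per data row.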