This is the fifth article in my series exploring some of the technical challenges I encountered while writing my Agenda View Windows Phone app. This is the final entry about grouped data in the ListView control, but there will be more on other aspects of the app.

As with most of this series, the techniques here are not limited to phone apps. They will work in Windows Store apps of any kind—Windows Phone, ordinary Windows, or Universal.

My app presents appointments in the user’s calendar as a scrolling list, and this list could be extremely long—you can scroll years into your future appointments, if your diary is populated that far. This sort of ‘infinite scroll’ is popular in mobile apps, so it’s not surprising that the ListView control has support for exactly this sort of usage, enabling you to fetch data gradually, rather than attempting to load it all up front.

If a data source implements both ISupportIncrementalLoading and INotifyCollectionChanged, the ListView will use the source’s HasMoreItems property to discover whether there is more as-yet-unfetched data, and if so, it will call LoadMoreItemsAsync to ask the source to fetch it. It uses these members only when the user scrolls, meaning that you load data only when it is needed. The source raises collection change notifications in the usual way to indicate when it has fetched new data.
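For comparison, here is roughly what an ungrouped source supporting this mechanism looks like. This is a minimal sketch of the standard pattern, not code from my app; the fetch delegate stands in for whatever actually supplies the data (e.g., the calendar API).

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Runtime.InteropServices.WindowsRuntime;
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.UI.Xaml.Data;

// ObservableCollection<T> already supplies INotifyCollectionChanged, so we
// only need to add ISupportIncrementalLoading for the ListView to drive
// incremental fetching (in ungrouped mode).
public class IncrementalCollection<T> :
    ObservableCollection<T>, ISupportIncrementalLoading
{
    private readonly Func<int, Task<IList<T>>> _fetchBatchAsync;
    private bool _hasMore = true;

    public IncrementalCollection(Func<int, Task<IList<T>>> fetchBatchAsync)
    {
        _fetchBatchAsync = fetchBatchAsync;
    }

    // The ListView inspects this before deciding whether to ask for more.
    public bool HasMoreItems
    {
        get { return _hasMore; }
    }

    // The ListView calls this as the user scrolls towards the end.
    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return AsyncInfo.Run(async cancel =>
        {
            IList<T> batch = await _fetchBatchAsync((int) count);
            foreach (T item in batch)
            {
                Add(item); // raises the usual change notifications
            }
            _hasMore = batch.Count > 0;
            return new LoadMoreItemsResult { Count = (uint) batch.Count };
        });
    }
}
```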

Unfortunately, this doesn’t work if you use the grouping features of ListView: it simply ignores a source’s ISupportIncrementalLoading implementation when operating in grouped mode. Now as you saw in the previous blog, my app doesn’t actually use grouping—in the end I found it necessary to flatten the data completely to avoid some problems. However, I am hoping that the problem in question is in fact a bug in the ListView, and that one day I’ll be able to go back to using grouped data (because that would bring back the sticky headers). In any case, I had already solved the incremental loading problem before I discovered that I wasn’t going to be able to use the ListView in grouped mode.

Roll Your Own Incremental Loading

Fortunately, it’s not particularly hard to implement incremental loading yourself. The basic requirement is to discover when items scroll into view. You can use this information to ensure that you always have at least some minimum number of data items ahead of that point pre-loaded, ready to scroll into view. If the number of loaded-but-not-yet-seen items drops below some threshold, you load some more.

So how do you discover when an item has scrolled into view? I use an extremely low-tech approach: I wait for data binding to read one of the bound properties of a view model. Here is a slightly simplified version of one of the view models from my app:

public class AgendaDayGroupViewModel : ObservableCollection<ListItemViewModel>
{
    private readonly string _dayText;
    private Action<AgendaDayGroupViewModel> _onFirstView;

    public AgendaDayGroupViewModel(
        DateTime date,
        Action<AgendaDayGroupViewModel> onFirstView)
    {
        Date = date;
        _onFirstView = onFirstView;
        _dayText = date == DateTime.Now.Date
            ? TimeAndDateStrings.TodayGroupHeading
            : date.ToString("D").ToUpperInvariant();
    }

    public DateTime Date { get; private set; }

    public string DayText
    {
        get
        {
            Action<AgendaDayGroupViewModel> cb = _onFirstView;
            if (cb != null)
            {
                _onFirstView = null;
                cb(this);
            }
            return _dayText;
        }
    }
}

I’ve removed the parts that aren’t directly relevant to the example, but the code that handles incremental loading is exactly what the real app uses.

This is a view model representing a group—you can see that it derives from a collection class, so it has a collection of items as well as properties of its own. Back when I had flattened my view models to a single level of grouping, but before I flattened them completely, I had this structure:

The code above represents the items labelled as ‘Day’ in that figure—so it’s a group of all the appointments in a particular day. The DayText property shows the heading, e.g. “TODAY” or “FRIDAY, JULY 18, 2014”. (And even though I now flatten my source entirely, I do that with a wrapper as you saw last time. So this day group class still exists, even though the ListView doesn’t work with the grouped structure directly.)

Anyway, the important code is inside the DayText property’s get accessor. The very first time this property is read, it invokes the callback that was passed when the view model was constructed. And here’s the code that gets called:

public void OnDayGroupViewed(AgendaDayGroupViewModel groupJustViewed)
{
    int position = DayGroupViewModels.IndexOf(groupJustViewed);
    int targetPosition = position + 10;
    int indexToFetchBeyond = Math.Min(
        targetPosition, DayGroupViewModels.Count - 1);
    AgendaDayGroupViewModel groupVmToFetchBeyond =
        DayGroupViewModels[indexToFetchBeyond];
    int undershoot = targetPosition - indexToFetchBeyond;
    _ensureDataAvailableAfterRequests.OnNext(
        groupVmToFetchBeyond.Date.AddDays(undershoot));
}

The rough idea here is to ensure that, if we haven’t already, we start fetching data for a couple of screens ahead of where we are now. (You can see that it looks 10 items ahead; that happens to correspond roughly to two screens of data in my app.) However, things get a little messy because the API I’m using to fetch calendar data requires me to specify a date, so I need an approximation of the date that will be about two screens of data ahead. This obviously varies according to how many appointments you have, so we work it out as far as we can from the data we already have (e.g., perhaps we have enough data to know the date that is 5 items ahead) and then just bump the date by one day per remaining item. (And this is just a “must have data at least this far ahead” date; the code that fetches the data then goes on to read ahead by another 20 items beyond that point, so we’ll have another four screens of data beyond whatever date we end up picking. So it doesn’t greatly matter in practice that this part is a little approximate.)

The final line of the code above calls the OnNext method of a Subject<DateTime>, which is an observable source of notifications. (This is part of the Reactive Extensions framework.) A separate piece of code observes these notifications, compares each one with the date of the latest data to have been fetched, and also with any fetch currently in progress, and works out whether it needs to fetch any more batches of data.
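The observing side might look something like this. The field and method names here are illustrative, not the app’s real code; FetchRangeAsync stands in for whatever actually reads batches of appointments from the calendar API.

```csharp
using System;
using System.Reactive.Subjects;
using System.Threading.Tasks;

// Sketch of the code that consumes the Subject<DateTime> notifications.
public class AppointmentFetcher
{
    private readonly Subject<DateTime> _ensureDataAvailableAfterRequests =
        new Subject<DateTime>();
    private DateTime _latestDateFetched;
    private DateTime? _dateBeingFetched;

    public AppointmentFetcher()
    {
        _ensureDataAvailableAfterRequests.Subscribe(requiredDate =>
        {
            // Start a fetch only if neither the data we already hold nor
            // a fetch currently in progress covers the requested date.
            bool covered = requiredDate <= _latestDateFetched
                || (_dateBeingFetched.HasValue
                    && requiredDate <= _dateBeingFetched.Value);
            if (!covered)
            {
                _dateBeingFetched = requiredDate;
                Task ignored = FetchRangeAsync(_latestDateFetched, requiredDate);
            }
        });
    }

    private async Task FetchRangeAsync(DateTime from, DateTime upTo)
    {
        // ...call the calendar API in batches until upTo is covered...
        await Task.Yield();
        _latestDateFetched = upTo;
        _dateBeingFetched = null;
    }
}
```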

And that’s pretty much it. To summarize: our group view model invokes a callback when one of its properties is read for the first time, which turns out to be a reliable indicator that XAML has just instantiated a header template representing the group and has processed the data bindings in that template. This in turn is a reasonably reliable indicator that the group is about to come into view. We then work out what date will be approximately two screens ahead of that, and if we don’t already have data at least that far ahead (and aren’t in the process of fetching data that will cover that range), we begin to fetch it.

There is one additional complication: the application periodically reloads all appointments to check whether anything has changed since last time. If such a refresh is in progress at the point at which we decide we need to fetch more data, that extra fetch just gets tagged on to the end of the refresh process.
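In code, that tagging-on amounts to something like the following sketch. All of the names here are hypothetical—in particular, FetchAppointmentsUpToAsync is a stand-in for the app’s real fetch logic.

```csharp
using System;
using System.Threading.Tasks;

// _refreshInProgress would hold the refresh's Task while a periodic reload
// is running, and null otherwise.
private Task _refreshInProgress;

private async Task EnsureDataAvailableAsync(DateTime requiredDate)
{
    if (_refreshInProgress != null)
    {
        // A full reload is already running; let it finish first, so the
        // extra fetch is effectively appended to the refresh process.
        await _refreshInProgress;
    }
    await FetchAppointmentsUpToAsync(requiredDate);
}
```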

Most of the time this ensures that appointment data is available before the relevant part of the list scrolls into view. However, if you scroll fast enough, you can get ahead of the data fetching. And also, the initial fetch can be a little sluggish. (Both issues are much worse on slower phones. On the Lumia 930 I use as my main phone, the calendar APIs supply data almost immediately, but on the Lumia 620 I use for development and testing, it can take well over a second for the calendar API to return even a batch of just 20 appointments.) So we need to handle the case where we don’t yet have the appointments for the current scroll position.

Indicating Progress

You may have missed it, but I’ve already shown how I handle progress indications for data that we’re fetching but which is not yet available. In my previous entry, I showed the code that flattens my grouped data into a linear collection. The FlatteningObservableCollection class I showed includes an EndPlaceholder property. Whatever you put in there will appear as the final item in the flattened list. (So even if the underlying data source is completely empty, as it will be when the app first runs, and has not yet received any calendar data, the flattened collection will still contain one item—this end placeholder.)

When we have not yet hit the end of the user’s calendar, this end placeholder is a distinct view model type called LoadingDataItemViewModel. And if the calendar API tells us that the user has no more appointments, we replace it with an instance of another distinct type, NoMoreAppointmentsItemViewModel. (And if we discover that the user’s calendar has grown during one of the app’s periodic refreshes, we will once again put the LoadingDataItemViewModel in there while we fetch the additional appointments.)
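The swap itself is trivial. This is an illustrative sketch, not the app’s actual code—the surrounding method and the _flattenedItems field are my own names, but the two placeholder types and the EndPlaceholder property are the ones described above.

```csharp
// Swap the end placeholder depending on whether the calendar API has
// reported that we have reached the end of the user's appointments.
private void UpdateEndPlaceholder(bool reachedEndOfCalendar)
{
    _flattenedItems.EndPlaceholder = reachedEndOfCalendar
        ? (object) new NoMoreAppointmentsItemViewModel()
        : new LoadingDataItemViewModel();
}
```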

I then use the template selector technique I showed in the third entry in this series to show dedicated templates for these two types. For the loading one, I show a message and a progress bar. And for the ‘no more’ one, I just show a message indicating that there are no more appointments.
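As a reminder of the shape of that technique, a selector handling these placeholder types looks roughly like this. The class and template property names are illustrative; the template properties would be set in XAML to the relevant DataTemplate resources.

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// Sketch of the template selector approach from the third post in this
// series, applied to the two placeholder view model types.
public class AgendaItemTemplateSelector : DataTemplateSelector
{
    public DataTemplate LoadingTemplate { get; set; }
    public DataTemplate NoMoreAppointmentsTemplate { get; set; }
    public DataTemplate AppointmentTemplate { get; set; }

    protected override DataTemplate SelectTemplateCore(
        object item, DependencyObject container)
    {
        if (item is LoadingDataItemViewModel)
        {
            return LoadingTemplate;
        }
        if (item is NoMoreAppointmentsItemViewModel)
        {
            return NoMoreAppointmentsTemplate;
        }
        return AppointmentTemplate;
    }
}
```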

Tweaking Performance

The final piece of the puzzle was to experiment with a couple of variables that affect performance: the batch size in which we fetch data from the calendar API, and the extent to which we try to keep ahead of the user’s current scroll position.

When fetching appointments from the calendar, you tell the API how many items you’d like it to return. The time this takes can be described as C + N×P, where C is a constant per-call overhead, N is the number of items, and P is the time per item. Both C and P are surprisingly large. (On my Lumia 620, C seemed to be about 0.8 seconds, and P something like 0.02 seconds.) This has two important consequences: first, fetching even a single appointment causes a delay long enough to annoy users; second, the difference between a batch of 20 and a batch of 40 appointments is enough to turn a slightly annoying delay into a really annoying one. (This matters less on high-end phones, by the way, because everything’s a lot faster on those.)
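Plugging those measurements into the model makes the trade-off concrete:

```csharp
// Estimated fetch time on a Lumia 620, using the measured constants:
// t = C + N * P, with C ≈ 0.8s of per-call overhead and P ≈ 0.02s per item.
double EstimateFetchSeconds(int batchSize)
{
    const double C = 0.8;   // fixed per-call overhead (seconds)
    const double P = 0.02;  // per-appointment cost (seconds)
    return C + batchSize * P;
}

// EstimateFetchSeconds(1)  ≈ 0.82s — even one item pays almost the full cost
// EstimateFetchSeconds(20) ≈ 1.2s
// EstimateFetchSeconds(40) ≈ 1.6s
```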

This results in a trade-off: smaller batches improve initial responsiveness, and will also reduce the delay in the case where the user scrolls fast enough to overtake the app, but larger batches will support a higher sustained rate of scrolling.

The decision of how far ahead to read is related to batch size: the further ahead you read, the larger the batch size you can get away with, because for a given speed of scrolling, you have more time to fetch the next batch before the user catches up with you.

Now it would have been possible to implement a strategy in which we begin with small reads, in order to fill the screen nice and quickly, and then increase the batch size once we’re ahead, to be able to sustain a higher overall scroll rate. However, with some experimentation, I found that it was possible to achieve satisfactory performance even on a low-end phone without resorting to this complexity.

In the end, I found that kicking off reads about two screens ahead of the current position, and fetching 20 appointments at a time produced a reasonably fast initial load, while making it genuinely difficult (although not impossible) to scroll faster than the app could keep up with on my Lumia 620. (Weirdly enough, the two guesses I plucked out of thin air to begin with—keep at least 10 items ahead, and fetch in batches of 20—turned out to be better than the alternatives I tried.)

Conclusion

So there we have it. Discover how far the user has scrolled with the low-tech technique of watching for when data binding reads one of your view model’s properties. Always stay some way ahead of the user’s current position. And experiment with the distance by which you fetch ahead, and the size of the batches you fetch to see what works best for your particular data source’s performance characteristics.