#perfmatters

At Yelp, we’ve been working hard to improve the performance of our search results on mobile devices (iOS and Android). We know that performance matters from a numbers perspective, but another reason we’ve decided to invest more in performance came from recent user studies commissioned by Yelp. We noticed that most users “grunted or made noises when waiting for search to load”. We took this as a sign that something needed to be done!

In our quest to make search faster on the Android mobile app client, we broke down performance into two different categories: perceived search performance and scroll performance.

1. Perceived Search Performance

Perceived search performance is the time it takes between when a user presses the search button to when they see the first search result rendered on screen. This is made up of three separate time spans illustrated in the following diagram:

For each search request on the Android app, we measure the time it takes the request to be resolved by the back-end and parsed on the client. We record this timing in the search_request timing metric, represented in red above.

We also measure the total amount of time it takes from when the request is sent to when the results are rendered on the device with a custom metric that we refer to as the search_results_loaded timing metric. This metric is associated with the same request id as the search_request timing metrics.

We can then subtract the search_request timing metrics from the search_results_loaded timing metrics to get the total amount of time spent rendering search results on the device. This is represented by the second blue section and it’s where we’ll be focusing our attention for perceived search performance.
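To make the arithmetic concrete, here’s a minimal sketch of that subtraction in plain Java. The class and method names (`TimingMetric`, `renderingTimes`) are hypothetical; the real metrics are recorded by our metrics pipeline and joined on the shared request id.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RenderingTime {
    // Hypothetical timing record keyed by request id.
    public static class TimingMetric {
        public final String requestId;
        public final long millis;

        public TimingMetric(String requestId, long millis) {
            this.requestId = requestId;
            this.millis = millis;
        }
    }

    // Rendering time = search_results_loaded - search_request,
    // joined on the request id both metrics share.
    public static Map<String, Long> renderingTimes(
            List<TimingMetric> searchRequest, List<TimingMetric> resultsLoaded) {
        Map<String, Long> requestById = new HashMap<>();
        for (TimingMetric m : searchRequest) {
            requestById.put(m.requestId, m.millis);
        }
        Map<String, Long> rendering = new HashMap<>();
        for (TimingMetric m : resultsLoaded) {
            Long requestMillis = requestById.get(m.requestId);
            if (requestMillis != null) {
                rendering.put(m.requestId, m.millis - requestMillis);
            }
        }
        return rendering;
    }
}
```

For example, a request that resolved and parsed in 500ms with results fully rendered at 690ms spent 190ms rendering on the device.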

Over the past few months, we’ve managed to reduce the perceived search performance timing from 350ms to 190ms at the 50th percentile (and there’s still room for improvement).

2. Scroll Performance

This is the performance associated with how quickly the device can render each new frame as the user scrolls through a list of businesses. Most device screens refresh 60 times per second (60Hz). To get the smoothest possible animation and scroll performance, we need to be able to render 60 frames per second to match that refresh rate. That gives us 16 milliseconds to do all the calculations needed to get pixels drawn on the screen, which is not a lot of time. If we miss this 16 millisecond mark, then scrolling starts to look janky as the display is refreshing but has no new frames to show the user. This delay creates a disconnect between the user’s smoothly scrolling finger on the screen and the pixels underneath, and overall makes the app feel slow.
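As a back-of-the-envelope sketch (plain Java, no Android APIs), the frame budget and the “janky frame” condition look like this:

```java
public class FrameBudget {
    // At a given refresh rate, each frame must render within 1000/Hz ms;
    // at 60Hz that is roughly 16.7ms.
    public static double frameBudgetMillis(double refreshRateHz) {
        return 1000.0 / refreshRateHz;
    }

    // A frame is "janky" if it took longer than the budget, meaning the
    // display refreshed with no new frame to show the user.
    public static boolean isJanky(double frameDurationMillis, double refreshRateHz) {
        return frameDurationMillis > frameBudgetMillis(refreshRateHz);
    }
}
```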

In our performance work we’ve been able to significantly reduce jank and improve the scroll performance of our search results leading to a better user experience:

The Performance Improvement Lifecycle

Both perceived search performance and scroll performance can be improved using a common set of steps, even if they require different types of technical optimizations. These are steps anyone can take on any platform to improve performance and make sure the improvements are durable.

First, you can’t improve if you don’t know where you currently stand, so it’s important to record an initial assessment of performance. Second, once you have a baseline measurement for the feature you’re attempting to improve, you can take actions to move that metric toward your goal. Finally, after the improvements have been made, you want to make sure there are no regressions, so it’s important to monitor your changes. Because there is never really a limit to how much you can improve performance (only diminishing returns), this is an iterative process. This cycle of performance improvement can be defined as a series of steps:

Measure → Improve → Monitor → Repeat

We followed these steps in our performance work to great effect. In this post, we’ll go over how to implement the “Measure” step using the tools available to us as Android developers. In two subsequent posts, we’ll explain how we improved the performance of our search results and how we implemented a monitoring system to ensure that we don’t regress.

Step 1: Measuring Performance

One of the advantages of developing for the Android platform is the wide range of tools available for profiling and measuring app performance. Here’s a brief overview of the tools that can be used to profile your app:

- Debug GPU information on screen as bars: great for visually determining where performance bottlenecks are occurring
- Gfxinfo sysdump: a more detailed format for analyzing performance bottlenecks as a whole; outputs results as a text file through adb
- FrameMetrics API: an extremely detailed and programmable API for analyzing performance and identifying bottlenecks
- Debug GPU Overdraw: a visual tool to see where the GPU is doing duplicate work
- Systrace: a great tool for identifying when best practices are not being followed and pointing toward relevant documentation; a good place to start for identifying high-level performance issues
- Traceview: a detailed CPU profiling tool that can help identify slow code across different threads; useful for digging into code performance at a lower level than Systrace
- Android Studio CPU Profiler: very similar to Traceview but with a nicer UI and call charts

Choose Your Metrics Wisely

It’s important to choose key metrics and see how they change during the improvement stage of the performance lifecycle. These key metrics might vary based on the feature that you’re working on, so it’s important to understand what you want to measure and to ensure that it aligns with your performance goal. In our case, we wanted to improve perceived search performance and scroll performance. For this we chose to measure two simple metrics that aligned with our goals:

Initial Search Rendering Performance

This was the amount of time spent on the client rendering the results (the second blue section of the graph below). We were able to accurately measure this using the search_request and search_results_loaded timing metrics mentioned earlier. The metrics are recorded and aggregated in near real-time through Splunk on every search request and provide us with powerful insight into the performance of our search requests and rendering times. We found the baseline for our search results rendering to be 350ms at the 50th percentile.

search_results_loaded - search_request = search results rendering

Scroll Performance

This was measured by examining the percentage of total frames that took longer than 16ms to render as we scroll down the list of search results. In our initial measurements, we used gfxinfo sysdump and manual testing to determine a baseline. We later moved to the FrameMetrics API and automated testing (outlined in the monitoring section) to get an accurate count of the frames dropped while scrolling. As our baseline for scroll performance, we found that 33% of all frames rendered when scrolling down the search page were dropped. That means a third of our frames were taking longer than 16ms to render. Yikes!
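A minimal sketch of that percentage calculation (plain Java; in practice the frame durations come from gfxinfo or the FrameMetrics API, and `droppedFramePercent` is a hypothetical helper):

```java
public class ScrollJank {
    // Percentage of frames that exceeded the frame budget (16ms at 60Hz).
    // Frame durations would come from gfxinfo or FrameMetrics in practice.
    public static double droppedFramePercent(double[] frameDurationsMs, double budgetMs) {
        if (frameDurationsMs.length == 0) {
            return 0.0;
        }
        int dropped = 0;
        for (double duration : frameDurationsMs) {
            if (duration > budgetMs) {
                dropped++;
            }
        }
        return 100.0 * dropped / frameDurationsMs.length;
    }
}
```

With two of four sample frames over budget, this reports 50% dropped frames; our baseline measurement of 33% corresponds to one in three frames blowing the budget.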

Testing on Older Devices

The Yelp app is installed on a wide variety of Android devices, from top-of-the-line phones to phones that were released many years ago. In establishing our baseline for performance, we wanted to be confident that the great majority of our users’ devices can run the app smoothly and have a solid user experience.

Most developers run their application in an emulator on a high-end computer; if not, they likely have a test device released in the past 1-2 years. That’s great for feature development (you want the app to load your changes as quickly as possible), but for performance testing you can miss out on a lot. Issues that might not affect your brand-new phone might bring an older phone to its knees, so testing performance on an older device can help identify bottlenecks. By fixing performance issues on older devices, you’re also ensuring newer devices will run more smoothly and that your entire user base will have a great experience.

In our case, we tested on a Nexus 5 running Android N. As of this writing, the Nexus 5 was released almost five years ago, which is a lifetime in smartphone years. Its scroll performance was noticeably worse than that of the newer Galaxy S8: on the Nexus 5, as we slowly scrolled down the list of search results, each time a new view scrolled into the viewport we could see a noticeable dip in performance. The same impact was barely noticeable on the S8, so we used the slower Nexus 5 to test our incremental improvements to scroll performance.

No User Left Behind

We also measured the performance of older devices by monitoring metrics at scale. We looked at performance numbers not only at the 50th percentile, which shows us the performance for half of our users, but also at the 90th percentile, which gave us an idea of what the experience might be like for users on older devices. From there, we filtered out poor network performance as a factor in these slower timings by using the search_request and search_results_loaded timing metrics mentioned earlier. Doing this, we determined that time spent rendering on the client at the 90th percentile was 656ms, which our performance work reduced to 394ms: a huge improvement for those with less powerful devices.
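For illustration, a nearest-rank percentile over a set of recorded timings can be sketched like this (the aggregation actually happens in Splunk; the `Percentiles` class here is hypothetical):

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile: sort the timings, then take the value at
    // rank ceil(p/100 * n). p=50 gives the median, p=90 the 90th percentile.
    public static long percentile(long[] timingsMs, double p) {
        if (timingsMs.length == 0 || p < 0 || p > 100) {
            throw new IllegalArgumentException("need timings and 0 <= p <= 100");
        }
        long[] sorted = timingsMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        rank = Math.max(1, Math.min(rank, sorted.length));
        return sorted[rank - 1];
    }
}
```

The gap between the two ranks is exactly why we track both: the 50th percentile describes the typical user, while the 90th surfaces the slower tail where older devices live.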

Conclusion

Now that we’ve completed step 1 of the improvement lifecycle by establishing a baseline on which we can improve, the next step is to actually implement the changes that will move us towards our end goal. But where do we start? What changes can we make to improve client-side performance? You might already have a few ideas in mind, but we’ll cover how we improved our performance in the next blog post!

Want to build next-generation Android application infrastructure? We're hiring! Become an Android developer at Yelp.
