If you enjoy talks from 360 AnDev, please support the conference via Patreon!

Find out about the APIs and tools for finding, tracking, analyzing, debugging, and fixing performance problems in your Android applications.

Intro

Welcome to a talk about Finding and Fixing Performance Problems. We’ll do a use case and case study, use all these tools to find the problems, and then fix them and show you how that works. But we didn’t quite get all the way there, so it’s really more about “finding the problems”. We’re going to walk through a lot of the tools that we use on a daily basis when there are performance problems, analyze what’s going on, and show you how that works from the inside.

I’m Chet Haase, I’m on the Android UI Toolkit team, and I’m here with Romain Guy from the Android Graphics team. We have dealt with performance problems a lot over the years, so we’ll show you how that works. We could also have called this talk “Fantastic Tools and Where to Use Them”.

We are going to cover some of the developer options, but then a lot of other more host side tools as well. I wrote a long explanation about developer tools. It was an analysis of the twitter client called Falcon Pro. It’s a more detailed version of what we’re going to talk about here.

We thought that instead of writing a demo app and sort of coming up with this fake problem it’d kind of be nice to use a real app. We like the idea of taking an existing application and showing the real world performance issues that you might encounter. We went on the Play Store, and we started looking at a lot of apps.

The Play Store

We’ll look at the Play Store and see what we can find. There’s an app on the Play Store called Play Store which seemed to demonstrate the kind of jank that we were looking for. We’re going to scroll down on the main page and then back up, then we’re going to go into a detail view, scroll down and up, then back to the main page and down and up. What you’re looking for is the times in an application’s life where you’re not running at 60 frames a second: where basically there’s a noticeable pause while it skips at least one frame, often more.

In the case of the Play Store we were really looking at when it’s not running at 60 frames per second.

Every time we start to scroll, there’s a huge jank; the UI thread is blocked for up to 300 milliseconds, every time you go to a new screen. If you’re on the home page, you scroll, it janks. You go to a detail page, you scroll, it janks. You go back to the home page, you scroll, it janks. It feels like it’s not doing any caching, but we’ll see that’s not true, it’s doing a lot of caching.

To be fair, it’s a really complicated app. There is a ton of media coming in and a lot of information they’re pinging the server for. They’re downloading bitmaps, they’re doing all kinds of stuff to get the information they need to display, and they’re doing some nice stuff. They have placeholder images, so you see a little gray rectangle instead of janking as they load every single bitmap. But clearly there’s a lot going on there, and maybe some of it is something they could take a look at, performance-wise, and fix.

Overdraw

What will we do to analyze that?

One of the areas we look at is overdraw. If you go into developer options, you can open the option that says Debug GPU overdraw. Click to show the overdraw areas, and that’s going to paint the screen in lots of pretty colors.

We talk about overdraw when we draw a pixel on top of another pixel that we’ve already drawn. When you don’t see any color, that means we’ve drawn that part of the screen only once. When it becomes blue we have an overdraw of 1X, meaning we’ve drawn that pixel twice. A 2X overdraw is green. Light red means 3X overdraw, and then dark red is 4X overdraw or more. We stop there because I ran out of colors that worked, honestly. It’s really hard to find colors that work in most applications, trust me.

Overdraw is very important for many reasons. One of them is usually when you have a lot of overdraw it means your hierarchy of views is probably more complicated than it needs to be. It means you’re doing more inflations, using more memory, we’re doing more traversals in the framework. Everything gets a little bit slower. That alone is a good reason to optimize everything.

The point is, when we draw pixels on top of each other, we have to do blending. When there’s transparency involved, which is the case for almost everything that you see on screen, the GPU has to read back the pixel that’s already on screen, read the new pixel you want to write from memory, blend them together, and write the output. It’s very costly in terms of computation but also bandwidth.

One thing to remember is that the number of cores has gone up over the past few years, the speed of each core has gone up, the instruction sets of the CPUs have gotten a lot better, and some of the instructions are ridiculously powerful. They can do many things at once. The GPUs have gotten a lot better. We can run many more instructions than we could just a few years ago.

What hasn’t changed at all, since about 2011, is the amount of bandwidth that you have in the phone. It’s about 20 to 30 gigabytes per second. To give you an idea, the GPUs we have on desktops today are at several hundred gigabytes per second. We are an order of magnitude or two behind desktops, and it’s completely flat; it’s not going up.

I hope it’s going to go up, but the point is when you have overdraw you’re wasting that bandwidth. We’re going to run out and we have more and more pixels on screen. Our displays now are 1080p or they’re 2K, and there are 4K phones out there. I wouldn’t be surprised if there are 8K phones that are coming or that exist already. We’re adding more pixels but we’re not adding more bandwidth to draw those pixels. You have to be extremely careful with overdraw.
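To make that concrete, here is a back-of-the-envelope sketch using the rough numbers above (a 1080p screen and the 20 to 30 GB/s figure). These are illustrative ballpark figures, not measurements:

```java
// Rough overdraw bandwidth math using ballpark numbers from the talk:
// a 1080x1920 screen, 4 bytes per pixel, 60 frames per second.
public class OverdrawMath {
    static final long WIDTH = 1080, HEIGHT = 1920;
    static final long BYTES_PER_PIXEL = 4;
    static final long FPS = 60;

    // Bytes written to memory per second for a given average overdraw
    // factor. Blending also has to *read* the destination pixel back,
    // so real traffic is even higher; this only counts the writes.
    static long writeBandwidthPerSecond(double overdraw) {
        return (long) (WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * overdraw);
    }

    public static void main(String[] args) {
        System.out.println(writeBandwidthPerSecond(1.0)); // ~0.5 GB/s
        System.out.println(writeBandwidthPerSecond(4.0)); // ~2 GB/s, a
        // visible slice of a 20-30 GB/s budget shared with everything
        // else on the device
    }
}
```

And this is only the window content; the status bar, navigation bar, and compositor all draw into the same budget.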


Some overdraw is to be expected. An easy case you can see is the search bar up at the top, which is light red, with dark red for the text. It’s not like you’re going to draw the text without drawing the background underneath it. You’re going to overdraw in some areas; that’s totally understandable.

What we were looking for is problematic areas where we’re overdrawing where you really shouldn’t be. There’s a couple of interesting things here.

Before we get back to the Play Store: we noticed when we were analyzing this yesterday that the status bar up at the top, with the icons, is already red. Something’s going on there. Some of the icons are different. Most of the icons are blue, so there’s a 1X overdraw, and that’s fine; you’re drawing an icon over a background.

Compare wifi and battery. Wifi does not have extra overdraw, probably because they have individual icons that represent the different states it can be in. When they draw the state of wifi, it is just a single image. When they draw battery, my understanding from looking at this is that they’re drawing the outline of a battery and then filling it in according to some measure of the charge.

Instead of having 100 different states of the battery icon, they just have one, represented by the actual floating point state of the charge. That means another overdraw. Fine, that’s not a problem, but it’s just sort of interesting to see where these things are at.

Hierarchyviewer

The one part that we were curious about in the actual Play Store app was the blue in the background. This is kind of an indicator. It’s basically saying that everything there starts with a 1X overdraw. This typically happens when you’re drawing a window background, which is cool. If you don’t draw a window background you can get some artifacts, so that’s something to avoid. Removing it was a nice hack back in the day, but really, don’t do that anymore.

If you have that background you’re apparently drawing on top of it as well. We were sort of curious where that came from. We used hierarchyviewer to take a look at that. Hierarchyviewer is awesome for looking at stuff. There is a replacement for this in Android Studio, called Layout Inspector. It doesn’t have all of the same functionality, but that’s coming online more over time and it’ll probably be better and easier to use.

You have the views represented by the tree on the left. Up in the right are all the views, and then you have this blueprint diagram of the layout of all the views in the lower right. You can click on the views and see more information about it.

If we deep dive here, we can see that we’re drawing a white background basically the size of the screen, the size of the window. We have a window background, and then they really wanted to make sure it was white, so they created their own background to draw right on top of it. That means we’re drawing the window background and then drawing theirs on top of it; you never see the window’s background.

The other interesting thing I noticed here seemed okay and totally reasonable at first: there’s a frame layout with a background. But actually, this is a frame layout with no children. It is there only to hold the background. It is a child of another frame layout, and that parent frame layout could’ve drawn the background. I’m not sure why they have a child in there just to hold the background. Maybe there’s good logic for this, maybe there are other states that I don’t understand, but we’re just sort of poking at this from the outside.

The Play Store has a few views in its hierarchy. The overdraw is an indication of that, but we were a little surprised when we saw the actual hierarchy. Hierarchyviewer has this option where you can export the tree of views as a PNG. The first time I did that, I actually printed it out (I think it was like 30 sheets of paper that represented Gmail back then) and I taped it on the walls in the office just to shame them.

The point here is that hierarchyviewer could not export that file because it was running out of memory. I had to hack hierarchyviewer to request 4 gigabytes of heap on the desktop to be able to export the PNG. Once we got the PNG, Apple Preview could not render it, and Google Slides would not let us upload it because it was too large. We had to use Photoshop to downsize it.

The problem is it’s not only about performance, but it’s also for you, because it probably is extremely difficult to maintain and debug when you have that many views, no matter how good you are. Do yourself a favor, have fewer views.

gfxinfo

We wondered how many views are actually in the app. There are a couple of different ways to find that out. One is a command line tool called gfxinfo, which gives you a lot of different information; you run it with `adb shell dumpsys gfxinfo`. You can call it without a package argument and it’ll give you information on everything going on in the system, or you can give it a package argument and it’ll give you information that’s specific to that application.

We said okay, gfxinfo on the Play Store app, and it gave us all this stuff. Just to give you a broad overview of some of the information you’ll see in there: this is what we call jank stats or frame stats. I’m going to talk about that a little bit later when we talk about the FrameMetrics APIs. It’s basically the information on what’s going on with your rendering. How long are you taking to render your frames, and how often do you go above the line that you don’t want to go above? How often are you janking because you’re not running at 60 frames per second? This is good information to keep an eye on.
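Those frame stats summarize frame times as a janky-frame count plus percentiles (90th/95th/99th). Here is a minimal sketch of that kind of summarization, assuming nearest-rank percentiles; it is an illustration of the idea, not the actual dumpsys implementation (which uses histograms):

```java
import java.util.Arrays;

// Toy version of the jank/frame stats summary: count frames over the
// 60 fps budget and report a percentile of the frame durations.
public class FrameStats {
    static final double FRAME_BUDGET_MS = 1000.0 / 60.0; // ~16.7 ms

    static int countJankyFrames(double[] frameDurationsMs) {
        int janky = 0;
        for (double d : frameDurationsMs) {
            if (d > FRAME_BUDGET_MS) janky++;
        }
        return janky;
    }

    // Nearest-rank percentile over the recorded frame durations.
    static double percentile(double[] frameDurationsMs, int pct) {
        double[] sorted = frameDurationsMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(rank, 0)];
    }

    public static void main(String[] args) {
        double[] frames = {8.1, 15.9, 42.0, 12.3, 300.0};
        System.out.println(countJankyFrames(frames)); // 2
        System.out.println(percentile(frames, 90));   // 300.0
    }
}
```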

Next is the memory information. There are caches; this is the state of the caches that are integral to the rendering pipeline. If those caches are saturated, or if you see that some of them are full most of the time, that means you’re thrashing the caches. What that means for the rendering pipeline is that every time you thrash a cache we have to do a lot of management: we have to go find items to delete to be able to insert the new items that you want to draw on screen. This information can be particularly useful.

Of the ones listed, the FontRenderer you don’t have to worry about too much unless you’re using tons of emoji. The one that matters more is the PathCache. That’s used for drawing anything that’s vector based. If you use a lot of animated vector drawables and you have large vector drawables, or if you use a library like Lottie that uses a lot of vectors, you might want to take a look at this.

The TextDropShadowCache is also interesting if you set a shadow layer on a TextView, or if you use a Paint and set a shadow layer on it to draw text. Those drop shadows are extremely expensive to compute because we have to compute them for a single line of text at a time; the result is very specific to that drawText call, and we can’t reuse it for any other piece of text. You might want to keep an eye on it. The rest is not that important for your applications. The texture caches are an indication of how many bitmaps you have active at any point.

The thing we’re interested in is at the very bottom. We give you the total number of ViewRootImpls. You might not know what ViewRootImpl is. If you’re curious, go into the source code of Android, open ViewRootImpl.java, and look for a method called performTraversals; it’s like three or four methods of several hundred lines each.

It has hundreds of booleans that are more or less equivalent to each other but not quite. That’s what drives everything on Android: layout, input, animations, and drawing. It’s one of the most important pieces of the system and also one of the hardest to maintain and understand, of course.

Every window has a ViewRootImpl, so that’s basically the number of windows you have on the screen. Dialogs, activities, and anything else that creates a window will each get one of those. What we care about is the fact that we have 450 views. That’s the giant diagram from earlier. If you open the Play Store and look at a screen, and if you were to ask me to use 450 views to recreate the same screen, I would find that difficult because at some point I would just run out of ideas on how to waste views. Apparently they are more creative than I am.

A view is a fairly expensive object on Android. It’s not just about computation; it’s also about size in memory. Every view comes with text and drawables and all that stuff, but even if you ignore all the data that’s associated with a view, the data structure of the view itself is not that cheap: it’s around half a kilobyte to a couple of kilobytes per view. That doesn’t sound like much, but then we have hidden structures like the DisplayList, which is where we record the canvas commands. We have half a megabyte of just drawing commands stored in memory for all those views. At least some of those views are not required, and we’re wasting memory.
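As a sketch of how quickly that adds up, using the ballpark per-view sizes quoted above (rough figures, not measurements):

```java
// Rough memory estimate for bare View objects, ignoring bitmaps,
// text, and other attached data. Per-view size is the ballpark
// half-a-kilobyte-to-a-couple-of-kilobytes quoted in the talk.
public class ViewMemoryEstimate {
    static long estimateBytes(int viewCount, long bytesPerView) {
        return (long) viewCount * bytesPerView;
    }

    public static void main(String[] args) {
        // 450 views at ~2 KB each: close to a megabyte before you
        // count any display lists or bitmaps.
        System.out.println(estimateBytes(450, 2048)); // 921600
    }
}
```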

These views are obviously not all visible but they are all attached. They may be invisible or gone in the tree, but they are in the tree, we saw them in the tree. That’s what hierarchyviewer is good at.

meminfo

We thought that Play Store was bad but it turns out it’s actually way worse than we thought, because there is this other tool called meminfo. It works the same way as gfxinfo.

If you call meminfo without an argument it’s going to print the memory information for every process in the system. If you specify the name of your package you get a dump. We’re not going to try to explain the breakdown of everything but there’s a very interesting section here.

Number of objects.

We see that now we have 722 views, but gfxinfo said there were only 450. The difference is that gfxinfo only prints information about what the UI Toolkit knows about: the views that are in the window that we’re going to process to draw. This number is what the ART runtime (or Dalvik on older versions of Android) knows about that class.

There’s a method in the system that you can call to say, “tell me how many instances I have of a particular class.” That’s what we do here. We just asked the runtime how many instances of View exist in RAM right now.

The answer is 722.

That means that the Play Store is caching views somewhere. It’s trying to hide them from the UI Toolkit so we don’t know about them, but they are there somewhere. They’re taking a huge amount of memory and, given the performance, maybe they’re not being used in the best way. This is actually a pretty low number. When we were playing with the Play Store we saw that number go up to around 4,000. That’s a big waste.

Profile GPU Rendering

Something is going on with the number of views; it’d be nice to check that out, but there are other things we can use to find out more about performance. We can profile GPU rendering. We go into developer options, select Profile GPU rendering, and choose “On screen as bars.” This shows you a pretty output.

The foreground activity is going to show all its bars down at the bottom, and the thing that you want to keep in mind is the horizontal green line. That is the 16 millisecond line, and it’s problematic whenever you see the histogram bars pop above it.

That means it didn’t run at 60 frames per second for that frame. Doing it occasionally, especially when you’re popping between activities, is expected. Doing it frequently is bad. Doing it constantly is horrible.

It would be nice to see what’s going on with the Play Store app. We’re going to do the same scrolling and going into the detail activity thing as we did before and then we’ll see what GPU profiling says. It shows some incredibly huge spikes and a lot of spikes that take it just a little bit above and then one that is way above the line.

You would think that if it’s bringing in content for that new scrolling area, maybe that’s causing some of it but even when we go back to the activity and it’s bringing in some of the same content we’re still losing a lot there. We’re probably missing two, three, four frames at a time really frequently there. It’d be kind of nice to chase that down, profile that and see what’s going on.

There are different ways to profile, and lots of different tools that you can use, with more coming online all the time. In Android Studio 3 (we’re using the canary build), there are profiling tools that they just released which are pretty awesome. They’re very different profiling tools. There is network down at the bottom, memory, and CPU. CPU is what we’re interested in here, but it’s interesting to see some of the other memory stuff going on.

When you enable the profiler, you can have a real time timeline of your application or any other process you can select on the system. You can actually dig deeper. By clicking any one of those timelines you can get more information specific to CPU or memory or network.

Let’s take a look at some of the memory information. If we click on the memory timeline we can see several things. The interesting thing is it shows the whole sequence of scrolling, going into the detail screen, and coming back out, and obviously we’re allocating more memory along the way; but it never gets deallocated, so what’s going on with that memory?

We’re obviously allocating memory. We’re grabbing bitmaps, we’re creating all these things to show you the content, to be expected, but if they’re going away, why is the heap still this large? That would be kind of nice to know.

We are deallocating memory, but we’re allocating enough that the total doesn’t change. We also saw a fair amount of GC and allocation activity, so we ran the allocation tracker.

If you select a region in the CPU Profiler, you get information that is akin to what you used to get with TraceView but way better. TraceView originally was an instrumented profiling tool that would show you the call graph at all times; all of the methods that you’re going through so you can figure out where the hotspots are.

The problem with TraceView in its original form was that there was so much overhead in instrumenting and running that instrumentation that it would actually skew the results. I myself have optimized code that I shouldn’t have wasted time on because TraceView told me it was a problem. The reality was it wasn’t a problem at all, but I got skewed results from it.

It’s interesting to see the full call graph, but it’s giving you information that may not be relevant. There was a sampling version of TraceView that would instead just sample occasionally. It doesn’t give you the full call graph history of everything, but it does give you a better idea of the hotspots.

With the new profiler, you can choose between sampling and instrumented. Most of the time you’ll want sampling. Instrumentation can be useful if you have a lot of very tiny method calls that might be missed when we’re sampling and you really want to know how many of those calls you’re making; but most of the time, stick with sampling.

Once you have this view, you will see there’s a blue area over the CPU timeline. That’s a selection you can just drag to restrict the amount of data you want to see. Just under the timeline, you can see every single thread in the application, and whenever you see one of those green bars it means that thread is running.

This can be extremely useful if your application has a lot of threads: if you’re doing a lot of work in the background, you want to make sure they’re running correctly and not running against each other. Finally, at the bottom, there are four different tabs.

By default you get a flame chart. You read it from the bottom up: at the bottom is the root of the application, and every method call goes up from there. It’s an easy way for you to see where you’re spending most of the time in the selection we made at the top. The colors are also interesting. The colors in the flame chart are based on a hash of the package names.

All the green bars are application code in this example, and everything that’s orange is the framework, the platform, anything that’s android.something. It’s an easy way for you to quickly identify whether you’re abusing the platform or whether your own code is at fault and needs optimization.

The other tabs include a call chart, which is a different way of reading this flame chart. There are also top down and bottom up views that show you percentages of execution time, similar to what TraceView used to do.

Looking at the app code here, we’re not using some special version of the app, so it has apparently been obfuscated; or maybe they just really like short method names. The last bit is interesting, because one of the expensive things going on there is layout code that they have.

There is an important combo box. By default you have wall clock time. That’s how much time the execution took for the user: the perceived time. You might want to change that to CPU time if you don’t want to take sleeps into account, for instance, if you really want to know how fast the code executes when it’s actually running. I like that it’s still called wall clock time when nobody really uses wall clocks anymore. They should call it pocket time.

Finally, if you’re using all these tools and you really want to get all the information you can, you’re going to end up in Systrace.

Systrace

Systrace can be intimidating. I know a lot of people tried once and they see the giant screen full of data and they recoil in fear and they close it and never reopen it again. It is intimidating, but it’s actually a very simple tool.

There’s not that much you can do in terms of interactions inside Systrace. It shows you a lot of data, but as application developers most of it you can completely ignore.

We grabbed the Systrace of Play Store. I’m just going to walk you through some of the keys that you can use in the tool to make your life easier and some of the things that are important for you as app developers. The rest you can just blissfully ignore. Actually, most of the data in Systrace I don’t even know what it is, but I don’t care.

There’s a Python script in the SDK. You just call `systrace.py` and then you can specify a list of tags when you invoke it.

If you don’t specify anything, which is actually our recommendation, we’re going to grab a set of tags that make sense for inspecting your application. Then it’s going to wait for your input. You just play with the application, and when you’re done, you press enter. It’s going to dump an HTML file, and you can open that file in your browser.

The reason we like to output an HTML file is that it’s extremely easy to share. Most of the performance bug reports that we have internally contain one of those HTML files. You can just attach it to a bug or send it by email; you don’t need a special viewer, the HTML file is the viewer.

As I mentioned, the trace is a little intimidating; there’s a lot of data in there. The part you care about most is the alerts line at the top, and we’re going to look at it.

There’s the kernel section. That section shows all the CPUs in the device and what they are doing at any time. You can see whether your process is running, or another process is running. You can see what thread is running. You can also expand that section to get a lot more information. For example, entries with names like “dsi oppl pixel clk src” are clocks, I’m sure, but I have no idea what they are. You can completely ignore them.

What’s interesting when you expand it is that, for every CPU, you can see not just what the CPU is doing but also the clock frequency, so you can know whether the CPU is running at full speed or not. That is important because the profiler you get in Android Studio shows you a percentage of utilization. If your code is using 100 percent of the CPU, it matters whether the CPU is running at 300 megahertz or two gigahertz: 100% utilization at 300 megahertz is fine; at two gigahertz, not so fine.

The other thing about the CPU profiler is I believe it shows you the overall system utilization which means you may be pegging one CPU out of four and it’s going to show a low utilization in CPU monitor. You kind of want to sanity check that occasionally.

When you click one of the clock frequencies at the bottom, you can see the value. Wherever I clicked, we were running at 1.3 GHz on that particular CPU. That can be interesting, but most of the time you probably want to keep that section closed. Now, let’s zoom in to see more detail.

If you’ve ever played a first person shooter on PC, you know how to use Systrace. W, A, S, and D let you move around in Systrace. You also use the mouse to choose where you’re looking.

When I press W I’m just zooming in. Now we can see what the CPUs are doing. For example, one thread here was the Play Store while the other CPUs were doing other things. After the kernel section, you have the section for your application. In this case we have com.android.vending, that’s the Play Store. There you’re going to see a lot of stuff.

There are all the threads and a lot of colored blocks. What’s interesting in Systrace is that we have two types of timelines: counters and events. Events are small tags that we put in the code to let us know what the application is doing at a certain time. The counters are exactly what the name says; they are just counters. We can see, for instance, in the animator section all the animations that are running. You can see that the Play Store is running a lot of animations.

Some of the counters are also interesting if you’re into that kind of stuff, but they’re not going to be that useful to you as a developer. For instance, there’s a texture counter. If you look on the left, there’s hwui; that’s the beautiful name of the UI renderer in the platform, and that counter just shows us how many textures we have in memory at that point. You can ignore it, but it can be interesting if you’re creating a lot of bitmaps.

If you scroll down, you have the UI thread and the Render thread. Those show you exactly what your application is doing. The UI thread is where most of your code runs and the Render thread is where we do the rendering. The Render thread appeared in Lollipop, it does not exist on previous versions of Android.

If we want to identify what the Play Store is doing wrong usually all you want to do is look at that UI thread and identify blocks that look suspicious. You can see that sometimes you have those extremely irregular patterns. For example, if we have a lot of thin lines, that means we’re probably running at 60 frames per second, everything’s going great. Then, all of a sudden there’s this huge block of stuff that the Play Store is doing.

Choreographer

Choreographer is the tool inside the system that tells the UI Toolkit to start rendering a frame. When the screen is ready to draw, that thing wakes up and tells the UI Toolkit okay, now it’s time to measure and layout and draw.

Underneath we see traversal; that’s what I mentioned earlier in ViewRootImpl, that horrible class. That’s where we start doing all the work for the UI Toolkit. Then we’re going into the layout. You can see we were probably opening one of the screens during layout for some reason. That’s probably a recycler view or list view of some kind.

The Play Store is inflating a lot of views. If you click one of those events and look at the bottom, you can get a lot more information about what’s going on. Here we can see that inflating that one list item took eight ms in wall time and five ms in CPU time. That’s a lot.

Every list item on screen is going to take 5 to 10 ms, and that’s how you end up with a layout that took 340 ms. Those are the big janks that you see. What’s extremely interesting is that for every event, when you zoom in, just above the events you can see little blocks of color. They tell you whether or not the code is actually running. When it’s green, the thread is being scheduled on the CPU; when it’s red, you’re blocked; and sometimes there’s no color at all, which means you’re not being scheduled.
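A quick sanity check on those numbers: at 60 fps every frame has a budget of roughly 16.7 ms, so a single 340 ms layout pass swallows about 20 consecutive frames. Sketching that arithmetic:

```java
// How a per-item inflation cost turns into a multi-frame stall.
public class InflationCost {
    static final double FRAME_BUDGET_MS = 1000.0 / 60.0; // ~16.7 ms

    // Whole frames lost while the UI thread is blocked.
    static int framesDropped(double blockingMs) {
        return (int) (blockingMs / FRAME_BUDGET_MS);
    }

    public static void main(String[] args) {
        // ~40 list items at ~8 ms each lands in the ballpark of the
        // 340 ms layout pass seen in the trace.
        System.out.println(40 * 8); // 320
        System.out.println(framesDropped(340.0)); // 20
    }
}
```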

Here clearly what we’re seeing is that the Play Store is doing way too much in terms of inflation. That goes back to them having too many views, and maybe views that are too complicated, or maybe custom views that are expensive to inflate.

We can find other things. Let’s look at the Render thread and at what happens when we actually draw something. For a frame, the part you care about is the beginning of DrawFrame. There’s this section called prepareTree, where we receive all the drawing commands and do some bookkeeping; we prepare the textures and things like that.

In particular here we see uploads. That’s something we have to do when we see a new bitmap or the content of a bitmap has changed: we have to upload it to the GPU. That’s an expensive operation. You can see that the Play Store is doing a few of them that look fairly big. We have a total of 12 ms spent just preparing textures, which is a lot.

The reason is, if I click on one of those uploads you can actually see the size of the texture. This one is 1080 by 500. 1080 is the width of the screen, and for some reason the Play Store is uploading about half a dozen or ten textures that are each really large, but nowhere on screen have we seen those large textures. I have no idea what they are, and someone who understands the code of their own app should be able to figure that out, but clearly there’s a problem there.

Maybe they’re preparing bitmaps for later but it’s blocking the execution of the current frame, or they are just trying to draw bitmaps that are really big and then they’re asking us to scale them down onscreen, which is also a bad idea.
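To put those uploads in perspective, here is a rough cost sketch assuming uncompressed 32-bit textures (4 bytes per pixel); the 1080x500 size is the one from the trace, and the count of ten is the rough number mentioned above:

```java
// Bytes pushed to the GPU for uncompressed 32-bit textures.
public class TextureUpload {
    static long textureBytes(int width, int height) {
        return (long) width * height * 4; // ARGB_8888: 4 bytes/pixel
    }

    public static void main(String[] args) {
        // One 1080x500 texture is ~2.1 MB...
        System.out.println(textureBytes(1080, 500)); // 2160000
        // ...and ten of them is ~21 MB uploaded inside a single frame,
        // on top of everything else the frame has to do.
        System.out.println(10 * textureBytes(1080, 500)); // 21600000
    }
}
```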

Basically that’s the kind of stuff you can obtain from Systrace. If you scroll at the top, I mentioned the alerts line. It’s very interesting because it has all these little warning icons, and when you click on them we can give you information about what you might be doing wrong.

For instance, if I click one, it says we’re doing expensive bitmap uploads: we uploaded 2.7 million pixels. I’m not sure that number is all that useful, but you can see we were uploading a lot of textures. Those are probably fine, because they look like icons. There was another one that was interesting.

Another example is an alert that says inefficient View alpha usage. They have a custom view called DocImageView, and they’re calling setAlpha() on it, which is something you do a lot. Unfortunately, on Android, setting the alpha on a view is sometimes super efficient and sometimes really slow.

The views that are efficient are typically TextViews without a background, or more generally any view that doesn’t have overlapping rendering. If you don’t have a background, you’re good.

Image views, most of the time, should be fast; I don’t know why this one, DocImageView, is slow. We gave a Google I/O talk where we explained alpha, why we have to handle it this way, and why it’s expensive. If you see this warning in the alerts section, take a look at your setAlpha() calls and see if you can do better. Maybe you need a hardware layer, or maybe you should find a different way of doing it. We don’t have time to go into detail here, but look at those little warnings, because they can give you a lot of interesting information. They usually link to a documentation page with more about why something is bad and what you can do about it.

There are other tips down in the app area as well that are a little more specific to what’s going on in particular frames. If a frame is red, take a look: red means we missed a frame, so we’re not running at 60 frames per second, and sometimes clicking on it gives you more information about what happened.

FrameMetrics

I want to talk really quickly about FrameMetrics, mostly so that you understand what they are. Think about how we can track performance in our applications, analyze that data offline and, maybe more importantly, dashboard it, so that you know when you’re introducing performance problems in your application. You can think of FrameMetrics as gfxinfo for your app. We saw earlier the information gfxinfo gives you on the command line, and you can get that same information on the app side as well.

The bars basically represent work that your code could be causing. If you’re doing layout and layout is taking a long time, you want to deal with that. If you’re spending a long time in draw, you’re probably trying to draw too much, and we’re running a lot of code on every frame to do that for you. For input handling, maybe we’re just calling you to tell you about an event, but if you use that as a trigger to go ping some web service, that would be silly. Swap buffers and command issue you can’t affect directly, but indirectly they tell you how much you’re asking the GPU to do on your behalf.
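As a rough sketch of how you might reason about those per-stage bars, here is a plain-Java example (the stage names and durations are made up for illustration; this is not a platform API). It sums the stages against the ~16.67 ms budget that 60 fps allows and reports which stage dominated a janky frame:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FrameBudget {
    // 60 fps leaves roughly 16.67 ms per frame.
    static final double BUDGET_MS = 1000.0 / 60.0;

    // Returns the name of the stage that consumed the most time,
    // or null if the frame fit within the 60 fps budget.
    static String worstStageIfJanky(Map<String, Double> stageMs) {
        double total = 0;
        String worst = null;
        double worstMs = -1;
        for (Map.Entry<String, Double> e : stageMs.entrySet()) {
            total += e.getValue();
            if (e.getValue() > worstMs) {
                worstMs = e.getValue();
                worst = e.getKey();
            }
        }
        return total > BUDGET_MS ? worst : null;
    }

    public static void main(String[] args) {
        Map<String, Double> frame = new LinkedHashMap<>();
        frame.put("input", 0.5);
        frame.put("animation", 0.3);
        frame.put("layout", 12.0);   // a suspiciously long layout pass
        frame.put("draw", 2.0);
        frame.put("sync", 1.0);
        frame.put("commandIssue", 2.5);
        frame.put("swapBuffers", 1.0);
        // Total is 19.3 ms, over budget, and layout is the culprit.
        System.out.println(worstStageIfJanky(frame)); // layout
    }
}
```

The point is simply that the budget is shared: a 12 ms layout pass leaves under 5 ms for everything else, including the GPU-side stages you only control indirectly.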

Command issue is where we send the OpenGL commands to the GPU, and if we’re waiting a long time in swap buffers, it means the GPU is still processing commands and can’t get back to us immediately because it’s doing a lot of work. The information used to create those bars comes from the frame stats underneath.

We have this sort of illegible data that you can get by running that dumpsys command with the framestats argument; it gives you the timestamps for all of those stages: input handling, animation, the stuff that we saw, and the draw. We tell you exactly when each thing happened, and you can copy this data into a spreadsheet if you’d like, or use something else to aggregate it instead.
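If you want to process that raw output yourself, each frame is one CSV row of nanosecond timestamps. A word of caution: the exact column layout varies across platform versions, so this plain-Java sketch only assumes that the row starts with a Flags field followed by IntendedVsync, and ends with FrameCompleted (which matches the N-era format as I understand it):

```java
public class FrameStats {
    // Given one CSV row from the framestats output, compute the total
    // frame duration in milliseconds. Column 1 is IntendedVsync and the
    // last column is FrameCompleted, both nanosecond timestamps
    // (assumption; verify against your platform version's header row).
    static double frameDurationMs(String csvRow) {
        String[] cols = csvRow.split(",");
        long intendedVsync = Long.parseLong(cols[1].trim());
        long frameCompleted = Long.parseLong(cols[cols.length - 1].trim());
        return (frameCompleted - intendedVsync) / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Abbreviated sample row: Flags, IntendedVsync, ..., FrameCompleted.
        String row = "0,1000000000,1000100000,1016700000";
        System.out.println(frameDurationMs(row)); // 16.7
    }
}
```

A frame that takes longer than about 16.67 ms from its intended vsync to completion is a missed frame, which is exactly what the red bars are showing you.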

Gfxinfo gives you the same information, but aggregated into these nice histogram buckets. It tells you how many frames fell into each duration bucket, and what the percentiles were.
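The aggregation itself is straightforward to reproduce. Here is a hedged sketch in plain Java of the same idea: bucket frame durations by whole milliseconds and walk the buckets to find a percentile (the bucketing granularity and the percentile definition are my own simplifications, not the exact gfxinfo implementation):

```java
import java.util.Map;
import java.util.TreeMap;

public class JankHistogram {
    // Frame durations bucketed by whole milliseconds, like the
    // histogram that gfxinfo prints.
    final TreeMap<Integer, Integer> buckets = new TreeMap<>();
    int totalFrames = 0;

    void addFrame(double durationMs) {
        int bucket = (int) Math.ceil(durationMs);
        buckets.merge(bucket, 1, Integer::sum);
        totalFrames++;
    }

    // Smallest bucket such that at least `pct` percent of frames
    // completed within that duration.
    int percentile(double pct) {
        int needed = (int) Math.ceil(totalFrames * pct / 100.0);
        int seen = 0;
        for (Map.Entry<Integer, Integer> e : buckets.entrySet()) {
            seen += e.getValue();
            if (seen >= needed) return e.getKey();
        }
        return buckets.isEmpty() ? 0 : buckets.lastKey();
    }

    public static void main(String[] args) {
        JankHistogram h = new JankHistogram();
        double[] frames = {5, 6, 6, 7, 8, 9, 10, 12, 15, 40};
        for (double f : frames) h.addFrame(f);
        System.out.println(h.percentile(90)); // 15
        System.out.println(h.percentile(99)); // 40
    }
}
```

Notice how the single 40 ms frame barely moves the 90th percentile but dominates the 99th; that is why the high percentiles are the ones worth putting on a dashboard.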

Those dumpsys commands that we just showed for the histogram and the frame stats can be run in automation. That’s what we actually do at Google: we create dashboards for every build on our build servers, running specific UI scenarios for several of our apps.

For instance, we launch Gmail, we scroll the app, and then we gather the frame data. Then we have a beautiful dashboard, and it’s extremely useful because every time a new build brings performance down, we know immediately. This is something I highly recommend: automate that.

Performance is really hard to fix after the fact. If you wait six months to do your profiling and then notice that things are slower than they were six months ago, good luck finding the reason. If you do it on every build, and each build contains anywhere from one to a few dozen changes, you’ll be able to quickly identify the root cause of the issue, or at least revert it until you have a solution. Use this in continuous integration if you can.
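A CI gate along those lines can be very small. This is a hypothetical sketch in plain Java (the metric choice, names, and 10% tolerance are made up; pick whatever percentile and threshold make sense for your app):

```java
public class PerfGate {
    // Fail the build if the new P90 frame time regressed by more than
    // `tolerancePct` percent over the recorded baseline. The names and
    // the tolerance are illustrative, not a standard.
    static boolean regressed(double baselineP90Ms, double currentP90Ms,
                             double tolerancePct) {
        return currentP90Ms > baselineP90Ms * (1 + tolerancePct / 100.0);
    }

    public static void main(String[] args) {
        // Baseline P90 of 12 ms with 10% tolerance allows up to 13.2 ms.
        System.out.println(regressed(12.0, 12.5, 10)); // false: within budget
        System.out.println(regressed(12.0, 14.0, 10)); // true: flag the build
    }
}
```

The tolerance matters because frame times are noisy run to run; a small allowance keeps the gate from crying wolf while still catching the regressions big enough to care about.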

You can add a listener and get information on every frame. You can do something with it, aggregate it yourself, or upload it and analyze it offline. Better yet, there’s a new API in the Support Library, in the version that came out around the I/O timeframe. It’s a no-op prior to Nougat, because the platform functionality wasn’t there, but you can use it in your app to get this information in a much better way: you can get the same histogram data that we saw in gfxinfo, directly in your app.

You create a FrameMetricsAggregator, add whatever activities you want to it, and then, whenever you want, get the metrics. The metrics are basically arrays of information for the various durations you asked for with flags when you created the aggregator. Then you do whatever you want with that data.
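Once you have the aggregated histogram out of the aggregator, the interesting number is usually the fraction of janky frames. On device the data comes back as Android’s SparseIntArray of duration-to-count pairs; to keep the arithmetic visible, this sketch simulates that histogram with a plain Map, and the 16 ms cutoff is my own simplification of the 60 fps deadline:

```java
import java.util.Map;

public class JankPercent {
    // Percentage of frames that missed the 60 fps deadline, computed
    // from a duration-ms -> frame-count histogram (a plain Map here,
    // standing in for the platform's SparseIntArray).
    static double jankPercent(Map<Integer, Integer> histogram) {
        int total = 0, janky = 0;
        for (Map.Entry<Integer, Integer> e : histogram.entrySet()) {
            total += e.getValue();
            if (e.getKey() > 16) janky += e.getValue();
        }
        return total == 0 ? 0 : 100.0 * janky / total;
    }

    public static void main(String[] args) {
        // 100 frames total: 90 within budget, 10 over it.
        Map<Integer, Integer> totalDuration =
                Map.of(8, 50, 12, 30, 16, 10, 24, 8, 48, 2);
        System.out.println(jankPercent(totalDuration)); // 10.0
    }
}
```

That single percentage is the kind of number that dashboards well: one line per build, and any jump points you straight at the offending change.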

I should point out that we already aggregate this data and upload it to the Play Console dashboard. That started around the I/O timeframe as well. There was a talk by the Play team about this and other information. You can actually see aggregated jank stats for your application, which is great: you can see that when you came out with a particular release, your jank stats got worse.

If you aggregate the data on your own, or at least track it on your own internal dashboards, using the approach Romain talked about or the APIs we just covered, you can get a much more fine-grained view of what’s going on and why.

Conclusion

There were two performance talks at I/O. If you haven’t seen them yet, they’re probably worth checking out, especially if you want to know more about Systrace. Tim Murray gave a real deep dive into using Systrace, with lots of tips and tricks. Then Chris Craik and I gave an overview of various performance tools, APIs, and use cases. You might want to check out that one as well.

That is our talk, thanks for coming.