So a little while ago @xanoxis asked me about how we do level-of-detail and other optimizations in PA to help with the performance. It's a really good question, and one I decided justified doing one of my technical rambles. So... for those of you who like this stuff, here goes.

Alright then. So - one of the most important tools in any graphics programmer's toolset is the art of doing less. Not in the sense of being lazy (although I'm pretty good at that, I'd be the first to admit). Rather, in the sense of efficiency.

When you're playing a game (well, a 3D game), what you see on-screen gives the illusion of being in a rich 3D virtual world. There's a lot of stuff in front of you. You move forward. More stuff. You turn around. More stuff. It's (usually) persistent stuff - as in you can go back to a place and see the same things that were there before.

But it's all a lie! (We graphics programmers lie, as well as being lazy. OK, maybe not...) In most 3D games, if you don't see it on screen, it's not there. Off-camera is a soul-crushingly empty void. (Maybe I'm being a bit dramatic with that description...)

To draw absolutely everything that's in a scene, in pretty much any game with compelling content, would tax even the most potent gaming system. In PA, you have thousands of units, multiple planets, thousands (or tens of thousands) of lights, particles... you know. Stuff. There's no way we could get interactive frame-rates if we tried to just throw all of that at the GPU every frame.

Instead, we try to be smart about what we draw. We want to do as little as possible, after all. The less work you do, the less time it takes.

So every frame, we start with an empty scene. Then we add as little as possible to it.

For example, if something is behind the camera, we can't see it. So we don't draw it.
It's fairly simple to test for that specific situation: since everything in our game has a 3D coordinate in space, you can test the position relative to the camera, and if the "z" component is negative then it's behind you and you can skip it.

That only cuts down a fraction of stuff, though. What about stuff that's outside of our field of view, but still in front of the camera? Or stuff that's too far away?

To handle all of that, we use a technique called "frustum culling".

I stole this image from here: http://www.lighthouse3d.com/tutorials/view-frustum-culling/ - which, by the way, is a more in-depth tutorial on this technique.

Anyway, that's a frustum. Effectively, it's the truncated pyramid shape made by taking the screen (your monitor) and stretching it out to some far distance (in our case, around 5,000km in game space... and that far out, that square is really, really big). So you have six sides to this shape, each a "plane" (for the non-math-geeks out there, a flat surface...).

An object is visible if it's on the inside of all six of these flat surfaces. So, we do a really quick (optimized) check between an object's bounding volume (we use spheres, since it's super-trivial to test a sphere against a plane) and each of the six sides. If we're outside of one of them, the object is invisible and we skip it.

In PA, we use this for pretty much everything you can see in-game. Trees, rocks and other surface features are grouped into clusters (based on a latitude/longitude grid), and the bounding sphere of each cluster is tested and added to the frame's rendering if it's visible. Each particle system calculates its approximate bounding sphere (each frame), and is ignored if it falls off-camera. Each unit, building, and planet part: the same. Lights - same deal. We don't render a light if it's invisible.

This technique speeds things up quite a bit. It helps on the GPU (less geometry to process), and on the CPU (fewer state changes, fewer draw calls).
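If you want a feel for how cheap that sphere-vs-plane test is, here's a minimal sketch in C++. The struct names and the plane convention (unit normal pointing toward the inside of the frustum) are my choices for illustration, not PA's actual code - but the core comparison is the same: one dot product and one compare per plane. Note that the near plane of the frustum also handles the "behind the camera" case from earlier, for free.

```cpp
#include <array>
#include <cassert>

// A frustum plane: dot(n, p) + d >= 0 for points on the inside.
struct Vec3 { float x, y, z; };

struct Plane {
    Vec3  n;  // unit normal, pointing toward the inside of the frustum
    float d;  // offset: dot(n, p) + d == 0 exactly on the plane
};

struct Sphere { Vec3 center; float radius; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the sphere is at least partially inside all six planes.
bool sphereInFrustum(const Sphere& s, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        // If the center is further outside this plane than the radius,
        // the whole sphere is outside the frustum: cull it.
        if (dot(p.n, s.center) + p.d < -s.radius)
            return false;
    }
    return true; // inside, or straddling a plane edge - draw it
}
```

Six dot products, worst case, per object - and usually fewer, since you bail out on the first plane that rejects. That's why bounding spheres are such a popular culling volume.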
But it's not enough.

Another trick we do is something called "level of detail". There are whole books on this subject, so I won't try to give a super in-depth description. But to summarize, the concept behind level of detail, or LOD, is this: when something is very far away, we can no longer see fine details. A 1000-polygon mesh that's 1km away is going to show up as a few pixels; we don't need 1000 polygons for that.

In PA, we LOD our features (trees, rocks, and so forth) using a system called "impostors". I might have waffled about these a bit before - I forget. But anyway, we take each feature (tree, rock, etc.) and render 3 orthogonal views of it into a texture. Then, if the feature is beyond a certain distance threshold, we render 3 flat billboards with that texture instead of the full model. This cuts down our polygon count on most planets by a factor of 2 or 3. It helps.

We also LOD units, by turning off animation updates if they are too far away, and we stop rendering them completely if they are occluded by their strategic icon. We do some math that figures out how big on-screen the unit would be (in pixels), and if that's less than the icon size, we skip it. This saves us a fair bit of time, too.

Right now we don't have a more complex LOD system on units, since it hasn't proven to be a big bottleneck. I might revisit this, though.

Anyway, that's how we make stuff go faster by doing less. Now, go blow stuff up...
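For the curious, the "how big on-screen would this be" math is a one-liner under perspective projection. This C++ sketch shows the idea; the function names and the icon size used below are my assumptions for illustration, not PA's actual values. For a vertical field of view fovY and a viewport screenHeight pixels tall, a sphere of radius r at distance d covers roughly (r / d) * screenHeight / (2 * tan(fovY / 2)) pixels.

```cpp
#include <cassert>
#include <cmath>

// Approximate on-screen radius, in pixels, of a bounding sphere of `radius`
// at view-space distance `distance`, under a perspective projection with
// vertical field of view `fovYRadians` and a viewport `screenHeight` tall.
float projectedRadiusPixels(float radius, float distance,
                            float fovYRadians, float screenHeight) {
    // Scale factor converting (size / distance) into pixels.
    const float pixelScale = screenHeight / (2.0f * std::tan(fovYRadians * 0.5f));
    return (radius / distance) * pixelScale;
}

// True if the unit's on-screen size is smaller than its strategic icon,
// meaning the icon fully covers it and we can skip rendering the mesh.
bool occludedByIcon(float radius, float distance,
                    float fovYRadians, float screenHeight,
                    float iconSizePixels) {
    return projectedRadiusPixels(radius, distance, fovYRadians, screenHeight)
           < iconSizePixels;
}
```

The nice property of this test is that it's resolution- and zoom-aware: zoom out and more units collapse into their icons; zoom in (or play at a higher resolution) and the real meshes come back automatically.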