If you are in any way involved with anything interactive, it is fairly safe to say that you’ve heard all the talk about the current “Flat Design” trend, and how it is a great many things. Sadly, though, many of the opinions being presented as facts fail to explain why this particular approach has become so pervasive in modern user interfaces.

Because of all the misinformation, some designers are convinced that “Flat” should be written off as a trend in the same way that flared jeans and flannel shirts were trends, like some User Interface flavor-of-the-month.

Okay, even though Flat is a trend, it isn’t that kind of trend.

To explain, it may be best that we go over what Flat isn’t, and why common assumptions about design trends often miss the forest for the trees:

Is it a direct reaction to skeuomorphism?

There’s an argument that supports that, sure. But a reaction that has no tangible advantages over what it is supposedly rejecting wouldn’t succeed for long, and so I think the answer here would be: “yes, but only a little”.

Why? Well, let’s start with what skeuomorphism is. To describe it briefly: it’s a way of replicating certain elements and attributes of an object (often a physical one), and then applying them to another object or medium.

Now as far as user interfaces are concerned, this is generally done by employing graphics that mimic real-world objects, such as the paper textures found in notebook-type applications.

The problem is that when it comes to understanding what skeuomorphism really means, most people stop at this point, and miss out on the fact that the concept can exist without wood grains and airline leather.

This is why a little part of me dies every time I see someone post about their flat, “non-skeuomorphic” icons, completely missing the irony of it all.

People tend to forget that skeuomorphism goes beyond borrowed details and textures; even minimalist or non-visual representations, such as sound, can retain skeuomorphic attributes. As an example, let’s say your application employs a Flat aesthetic, but your solution to hierarchy is folder or container-based. Well, guess what? You’ve just used skeuomorphism, because folders and containers are concepts borrowed from real life.

One of these folders may be viewed as Flat, but both of them are still skeuomorphs.

What this means is that all your Flat icons are (likely) still skeuomorphs, just with minimal details. And there is nothing wrong with that. After all, one of the main reasons behind skeuomorphism’s mass adoption was its effectiveness in helping ordinary people grasp how digital interfaces worked. And so actively trying to make a design “non-skeuomorphic” is, quite frankly, silly.

If nearly all interfaces are still inherently skeuomorphic, why has the industry gone on a texture diet?

That’s not exactly true. The industry has simply become better at figuring out what is appropriate. For example: if a user fires up a theatrically-heavy game such as Metal Gear Solid or StarCraft, there is a good chance they would appreciate an appropriately rich graphical user interface. This is why art direction matters, regardless of the product.

StarCraft’s immersive and texture-heavy user interface design. Image sourced from Blizzard Entertainment.

But the question remains: “why is Flat popular?” Or more importantly: “how did it get so popular?”; questions that prompt some people to start barking: “it’s a trendy hipster reaction to gaudy over-the-top graphical interfaces!”

No. At least, not completely.

Is it a reaction? You betcha. But it’s a reaction to diversity.

Flat is the child of necessity, not fashion.

You see, while rich graphics can be beautiful and immersive, they often don’t scale well. This is due to the simple fact that texture-rich, raster graphics are not rendered natively. (Read: raster vs. vector.)

Which means that in today’s world of countless screen size variations, graphics that aren’t natively scalable become a constant race against compatibility. And it’s a race that can quickly become quite expensive in more ways than one, particularly in agile environments. Imagine having to “sculpt” every icon and graphic unit for multiple devices during a sprint. Sound fun? Didn’t think so.

Multiple devices, multiple resolutions, and multiple ratios all require multiple instances of raster graphics.
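To make that cost concrete, here is a minimal back-of-the-envelope sketch. The densities, sizes, and icon count below are illustrative assumptions, not figures from any real project; the point is only how quickly raster exports multiply compared to a single scalable source per icon:

```python
# Every raster icon must be exported once per combination of pixel density
# and target size, while one scalable (vector or natively drawn) asset
# covers every combination. All numbers here are hypothetical.
densities = ["1x", "2x", "3x"]   # assumed device pixel densities
sizes_px = [24, 32, 48, 64]      # assumed icon sizes in the design spec
icon_count = 40                  # assumed number of icons in the app

raster_exports = icon_count * len(densities) * len(sizes_px)
scalable_assets = icon_count     # one source file per icon

print(f"Raster exports needed:   {raster_exports}")   # 480
print(f"Scalable assets needed:  {scalable_assets}")  # 40
```

Every new screen size or density added to the matrix multiplies the raster column again, while the scalable column stays flat. That, more than fashion, is the economic pressure behind the texture diet.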

A few years ago, compatibility was not a major issue, since most people experienced interaction through their laptops and desktops — devices with fairly predictable screen sizes and ratios, which meant that interface designers had little reason to worry about scaling, pixel densities, or portrait modes. But times have changed.

But why Flat? Simple: a Flat aesthetic can be rendered natively on most devices. Because of this, the user interface can scale with less overhead than one based on rasterized graphics.

And so while most of us still expect rich, immersive experiences when content is delivered through our television screens (a benefit brought about by standardized screen ratios), what we expect from our personal devices tends to lean more towards convenience and compatibility. Hence, user interfaces that scale are no longer just one of many desired features — they are now a critical core element of great user experiences.

This is why the current flat/simple/minimal design trend is a reaction to real issues that required real solutions, and not because the world suddenly decided that denim textures aren’t going to be cool for a while.

Consistency and compatibility often result in a more convenient experience for the user.

Besides, this isn’t something that should surprise us. When new technology and market demands change the way things are done, widespread adoption of the “new way” often leads to a number of adjustments. In the case of user interfaces, the increase in device diversity discouraged the use of graphics that don’t scale, and encouraged the use of those that do.

And thus “Flat” became a trend.

The moral of the story? Before you write off an approach as the next baggy jeans, take the time to learn more about it, and about what may be behind its adoption. The insight may be worth the effort.

What’s Next?

The future is always difficult to predict, but just as the current trend was the byproduct of several factors coming together, the next shift will also be the result of changes brought on by other market trends (the recent rise of affordable 3D printing, for example) or by breakthroughs in technology, like gesture-based interaction.

In fact, we may already be seeing the baby steps of an upcoming trend, one that can be described as either “3D” or “Spatial” (credit goes to Quora’s David Cole for the latter). Interestingly enough, some of the bigger players are already exploring interfaces that embody Spatial attributes. These include Microsoft and Windows 8's “Semantic Zoom”, as well as Apple and iOS7's zoom and parallax effects.

Subtle zooming. The bars highlight how iOS7 uses the zoom animation to illustrate hierarchy.

And before any of you start tweeting “Spatial! 3D! A reaction to Flat!”, despite all the arguments listed above, remember that if the Spatial approach were to become a trend, it too would just be another natural evolution, brought about by the ubiquity of touch-based smart devices, as well as the falling cost of capable hardware.

However, worrying about current and future trends and letting them influence decisions would be missing the point. Flat? Rich? Minimal? Spatial? Scaling? Non-scaling? These are not questions that should be answered by emulating popular styles in design galleries, or by following an approach just for the sake of doing so. The best solutions come from understanding and addressing the needs and wants of the task at hand, and then applying the most appropriate approach.

Remember, great design happens when something is all that it needs to be.

So do what’s right, not what’s trendy.