Let’s look at each of the Mac design principles in turn, and their complementary Anti-Mac and Post-Mac variations.

Metaphors

Anti-Mac principle Reality | Post-Mac principle Simulacra

Metaphors play an important role in user interface design. They bootstrap understanding by connecting the UI we’re using to an object, an experience, or an idea that we’re familiar with from some other place — often “real life,” but sometimes other software. iTunes evokes a car stereo; Evernote evokes a planner or notebook; Amazon loosely evokes shopping in a department store. Other online shopping sites evoke Amazon.

Les Mis. Theatre Aspen. Not Paris. http://www.theatreaspen.org/news/aspen-daily-news-epic-in-the-tent/

Metaphors in user interface are like sets in theatre. They convince us to believe that the thing we’re looking at is like something else. A metaphor changes our expectations; they’re colored by it. When software says it has a “library,” I’m convinced that it has some organized collection of stuff. Of course, I don’t expect to be borrowing from it like at the public library. And I don’t expect iTunes to be as limited as a car stereo. My Amazon shopping cart is infinitely big. The metaphor is just a bridge; lots of great designs start with a metaphor to aid understanding, and extend it to realize the capacity of people and technology.

Benefits. The Mac was lauded for its ease of use partly because it relied on easy-to-learn metaphors instead of forcing people to learn how the computer worked. There was a desktop and icons for folders and files, instead of a command prompt and directories and files. You could click on a few files, drag them into a folder, and put that folder somewhere you wanted on the desktop. Doing that with a command prompt was harder and not so easily explained. In this way, metaphors kick-start learning.

In 1984 especially, this use of metaphor provided a familiar veneer to “wrap” around idiomatic computer systems that were unfamiliar.

Drawbacks. There was a brief time at Google when lots of folks were agog over skeuomorphism — literally representing a source metaphor visually in the user interface. (Think of the iBooks bookshelf faithfully recreated on your iPhone, wood grain and all.) Infatuation with skeuomorphic interfaces comes in waves — just as they were popular recently because of a few shining examples from Apple, they were also all the rage in the mid-1990s. G and N wield the skeuomorphic Magic Cap interface in a legitimate critique of overly strict use of metaphors.

The literal, skeuomorphic desktop of Magic Cap. Do you compose an email by opening the drawer with the envelope on it, clicking the Out box, or picking up the postcard?

Reliance on metaphor can cause problems. First, G and N point out that there are often mismatches between the source (i.e. real-world metaphor) and target (i.e. software interpretation) domains. The real-world library demands that I have a library card, and I may only borrow items from it. I own my iTunes library, don’t need a membership card, and I don’t exactly “borrow” items from it. Features may be missing from either the source or target domain, or there may be things that are present in both domains but work differently. These mismatches can be sources of misunderstanding and confusion. Think of the ways in which Google Docs or Microsoft Word are different from typewriters, or the ways in which Instagram is different from a Polaroid camera.

Second, strict use of metaphors — especially in a skeuomorphic way — can introduce clumsy and unnecessary indirectness to point-and-click interaction. For example, imagine the clumsy interaction of having to open a virtual drawer on a virtual desk, to pull out a calculator, to put it on the desk, then hit the “On” button, all before you could calculate a tip.

Anti-Mac: Reality

G and N propose that “we need to develop new interface paradigms based on the structure of computer systems and the tasks users really have to perform, rather than paradigms that enshrine outmoded technology.” They provocatively suggest that interfaces should be “based on” the structure of the computer system itself — how the computer works under the hood. In general, modeling on tasks users really have to perform makes sense. But interfaces based on the structure of the computer system make less sense these days.

Alan Cooper, a noted voice on interface design, suggests three conceptual “models” that can be used to describe a given piece of software: an implementation model — how the thing actually works; a mental model — how we think the thing works; and a manifest or represented model — how the thing presents itself to us. G and N suggest that products should present themselves to us in a way that more closely mirrors the implementation model. They premise their Anti-Mac principle on this forecast: “the next generation of users will make their learning investments with computers, and it is counterproductive to give them interfaces based on awkward imitations of obsolete technologies.” Yes, generations since 1996 grew up with computers, but they don’t know more about how computers work — because they grew up with magic apps on good-enough devices that didn’t require them to learn the implementation models.

Post-Mac: Simulacra

As the technologies we use day-to-day have become more intricately connected with our real and social lives, a newish model has emerged. Metaphors still abound, and usefully so. But interfaces aren’t modeled on the Anti-Mac design principle of reality, of being more closely tied to the mechanics of underlying technology. Instead, we find more of what I term simulacra.

If I were to write a statement about this principle, it would be that much of interface design today approximates systems or relationships in the real world, even if only crudely. Facebook’s model is a simulacrum of my real social relationships, and a Facebook Event is a kind of simulacrum-extension of a real-world social event. Their product relies heavily on approximation of real life, though their user interface itself employs mainly tried-and-true standard controls and desktop and Web user interface elements. (Read on to see how this principle is synergistic with the “representation of meaning” principle.)

Ironically, since computers have become connected devices, they have become more personal, because they now reflect deeper integration with us as people. The personal computer of 1986 or 1996 is actually less personal than what we use today. The principle of simulacrum is a reflection of technology’s new capacity to “be aware” of our selves, relationships, and many parts of our lives.

In 1984, metaphors (the desktop) were used to make the unfamiliar technology (file systems) accessible. In 1996, G and N propose interfaces that reflect the underlying technology and its capacities. In 2015, the more dominant model is obscuring the technology altogether, and centering the software around simulacra of everyday life — with a healthy dose of metaphor, still.

Many of the things we want technology to do simply don’t relate to how computers work — things like having a conversation with friends, buying a pair of shoes, or driving somewhere (think of GPS directions). It would be a mistake to assume that we just want more powerful user interfaces to manipulate bits on a computer. Some software is about that, but not all.

(As an aside, it’s worth noting that a good deal of what technology companies do today is “model” our behavior as consumers, along with all manner of systems and networks and phenomena. Simulacra — even if they’re reductionist or flawed — are on trend.)

Direct manipulation

Anti-Mac principle Delegation | Post-Mac principle Both

Under direct manipulation, users see visual representations of objects on screen — like files, messages, videos, business cards, pages, words, and so forth — and act on and interact “physically” with those representations. In other words, you use the interactive vocabulary — point, click, drag, type — on those representations. Drag and drop is a classic example of direct manipulation in practice. It’s hard to imagine a graphic designer composing a magazine layout without direct manipulation.

Benefits. Like pushing buttons, picking up objects, or otherwise manipulating things in the real world, you can see the thing you’re working with and what happens when you interact with it.

Drawbacks. Some of the richest actions we might want a computer to take on our behalf would be exceedingly difficult to describe by direct manipulation. Imagine this graphics app task: You want to duplicate a star one thousand times, rotating each copy by a slightly different amount around an axis at the tippy-top point of the star, in a spiral, out from the middle, growing each star by a slight and random amount each time. There’s almost no way to do that efficiently with direct manipulation. Scripting — describing and delegating — the task is easier.
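To make the contrast concrete, here’s roughly what the delegated version might look like as a script. This is a minimal sketch in Python with invented names — spiral_star_copies and its transform fields are illustrative, not any real graphics app’s API — and a real app would apply each transform to a duplicated star shape. The point is that a dozen lines of description replace a thousand fiddly gestures.

```python
import math
import random

def spiral_star_copies(n=1000):
    """Describe n star copies: each rotated a bit more about the star's top
    point, spiraling outward from the middle, growing by a slight and random
    amount each time."""
    angle_step = 360.0 / n
    transforms = []
    for i in range(n):
        angle = i * angle_step
        transforms.append({
            "rotation_deg": angle,                                  # slightly different rotation per copy
            "offset": (i * 0.5 * math.cos(math.radians(angle)),    # spiral outward...
                       i * 0.5 * math.sin(math.radians(angle))),   # ...from the middle
            "scale": 1.0 + i * 0.001 + random.uniform(0.0, 0.002), # slight, random growth
        })
    return transforms

# Print the first few transforms to show the idea; a graphics app would
# apply each one to a duplicate of the star.
for t in spiral_star_copies()[:3]:
    print(t)
```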

G and N present another example: Software installation. Some software installation involves moving thousands of files into all kinds of special places; that’s a lot of work with direct manipulation. So they point out that software installers had become mainstream by 1996. Amusingly, in the past few years, software installation has often become as simple as dragging and dropping a single icon, especially on contemporary Macs. But their point still stands: Dragging and dropping thousands of files to all the right places would be awfully tedious.

Not much. (Yet.)

Anti-Mac: Delegation

G and N’s alternate principle is all about telling the computer — ideally with a non-natural but relatively accessible language — what to do. That works well for the graphics software case, in which it’s easier to describe what we want with precise language. It also covers the installer case: effectively, when we click “Install” we’re bypassing direct manipulation and “telling” the computer to do something for us. The ideal case is the computer from Star Trek: The Next Generation. You can ask her virtually anything, and she’ll do the work for you.

However, G and N acknowledge a few barriers to true delegation. Foremost among them is that true natural language processing requires true artificial intelligence (AI). We’re nowhere near that yet. Command lines — like the widely used Unix command line — are powerful, but depend on strict, idiosyncratic vocabulary and syntax and require a great deal of learning and practice. They mention scripting languages — which did flourish briefly around 1996 — as potential solutions, as well as interfaces similar to text-based games that could “negotiate” with the user to arrive at a mutually understandable instruction for the computer.

The proposed design solutions rely on (at least) two premises: That people need computers to do complicated things that are hard to describe with direct manipulation, and that they’re willing to learn non-natural language-based “scripting-like” solutions. It turns out that people don’t want to learn those languages (and, implicitly, more about how the computer works); they want the computers to do more for them — ironically punting the full resolution of delegation to the AI-complete future. Luckily, a few magic apps are simple enough for us to have something like delegation today — for instance, I can basically ask Google Maps to give me driving directions.

Post-Mac: Direct UIs for delegated services

(Delegation for products and services, direct manipulation for UI.)

This is an interesting case because it illustrates a gap between UI design and product or service (or, if you want, experience) design. In a sense, products perform “delegation;” the more magical an app is, the more it does behind the scenes. Think of those driving directions, or searching on Google. In these cases, the UI itself is subordinate to the magic of the service.

When it comes to the UI, direct manipulation still rules. Scripting languages are a way of solving the problems direct manipulation handles poorly — like how to get a thousand files into the right places, or how to manipulate a bunch of musical notes. And most of the things that people need help with — or want — from technology are not UI problems per se. (At least not in 2015.) They may be design problems, but not strictly UI problems. So, there are lots of “one-button” apps that do magical things. And they are examples of delegation, though not with an increased role of language like G and N proposed. They’re similar to “install” buttons.

As for direct manipulation’s continued place: Along with the merger of technology with routine activities of life and increasing magic happening behind the scenes, we got these fancy new touchscreen doodads.

Smartphone and tablet apps are overwhelmingly networked, single-purpose programs that practically cry out for direct manipulation. What could be more natural than tapping, pinching, or smooshing the thing directly on the surface of our iPad? In this case, the technology wants to be used in certain ways. On your average tablet, there is only direct manipulation.

In other words, the services in the background are more sophisticated and are indeed delegated operations — calling a Lyft, posting a photo to several social networks at once, or getting driving directions — but the UIs you use to set those things in motion remain (and in fact are best, especially with touch screens) actions of direct manipulation.

In terms of delegation to replace direct manipulation in UIs: Scripting languages mostly failed to gain traction, and negotiation-like UIs, including Apple’s Automator, which do perform the kind of delegation G and N refer to, are at most bit actors. Some cool services do this for a living though; If This Then That (IFTTT) is all about delegation. And I’ve seen some apps on the horizon that will execute on this promise. But it’s not the norm in 2015.

See and point

Anti-Mac principle Describe and command | Post-Mac principle See and point

This principle is closely tied to direct manipulation. It more or less states that people can only (and should only) interact with things that are visible on screen, and use the mouse to point a cursor at those things and do whatever they want with them (manipulate them directly). The things on screen might be objects — files, folders, or shapes in a graphics program — or they might be menus. But the principle states that people choose from whatever’s visible on screen.

In the Apple Human Interface Guidelines, the principle is written as “See-and-point (instead of remember-and-type).” The guidelines point out that users can “rely on recognition, not recall.” They don’t have to remember arcane commands, or keep in mind things that aren’t visible on the screen. They simply need to look at the screen, where every available object and activity is visually represented.

Benefits. A see-and-point environment is predictable; there are no hidden agents or objects at work. The interactive vocabulary can be as straightforward as point, click, drag, type (with a single-button mouse), which makes this kind of interface easy to learn. Everything you can do is plainly visible. These interfaces work well for novices.

Drawbacks. As G and N point out, there are lots of things that can’t be represented on a given screen — especially in a connected, Internet world. Those actions, or those objects, can’t always be displayed in a pure see-and-point UI. Plus our displays are only so big: We can’t put it all on one screen! If our interactive vocabulary is limited to a combination of the nouns on screen, and the verbs that our single-button mouse and keyboard offer, we are indeed working with a limited language.

Anti-Mac: Describe and command

G and N’s primary critique of see-and-point is that there are things we might want to tell the computer to do, or to refer to, that can’t be represented on screen. That is strictly true, but it’s not always a practical concern — especially in the world of novice users spending lots of time with all kinds of little, single-purpose apps.

There are cases in which it’s obviously useful to refer to something that isn’t “on screen,” such as when I search Google for [tacofino hours]. (Tacofino is a restaurant in Vancouver, not an object on my screen.) Excessive incoming email necessitated vocabulary to describe filters. Huge music libraries demanded “smart playlists” and UI to describe (in the abstract) the kinds of arbitrary playlists we wanted.
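A smart playlist is a nice, small instance of describe-and-command: instead of pointing at individual tracks, you describe the set you want and the library keeps it current. Here’s a minimal sketch in Python — the Track fields and the rule are invented for illustration, not iTunes’ actual smart-playlist format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Track:
    title: str
    genre: str
    rating: int   # 1-5 stars
    added: date

def smart_playlist(library, rule):
    """A 'smart playlist' is a description (the rule), not a hand-picked list."""
    return [track for track in library if rule(track)]

def recent_jazz_favorites(track):
    # Describe the tracks we want in the abstract: jazz, highly rated, recently added.
    return track.genre == "Jazz" and track.rating >= 4 and track.added >= date(2015, 1, 1)

library = [
    Track("So What", "Jazz", 5, date(2015, 3, 2)),
    Track("Some Rock Song", "Rock", 3, date(2014, 6, 1)),
]
print([t.title for t in smart_playlist(library, recent_jazz_favorites)])  # ['So What']
```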

Post-Mac: See and point

As someone who can kind of find his way around a command line and a scripting language, I absolutely see that both can provide — in principle — richer interactive vocabularies and more “power” than see-and-point UIs. But they just don’t apply so often in the Post-Mac environment of novice users expecting lots of magic apps.

If you’re paying attention, you’ll note that Siri and Google’s equivalent — and perhaps Google itself — are describe-and-command UIs. I use Siri all the time to set up reminders, but not much else. She can’t do a lot more for me than that — yet. I engage in a process of “negotiation” with her from time to time, but it’s usually fruitless. Until it’s clearer that I can say almost anything I want to Siri, she’ll always play second fiddle to see-and-point. But, like Google Search, which is effectively a describe-and-command UI, that future interface will be extremely powerful.

For now, in 2015, the most powerful and magical services like GPS, online shopping, online banking, video chatting, and so forth, have simple see-and-point UIs to facilitate the things we want. And these UIs are increasingly mobile-first, making them even more straightforward and inclined to “fit” the small touch-screen and primarily occasional use by novices. Consider Facebook, one of the most powerful applications developed in the past decade, and a revolutionary service. Its UI is almost entirely see-and-point, with little describe-and-command in use at all. (People don’t search too often, besides typing the names of friends.)

(It’s worth noting that see and point, combined with rich-cue modes, is probably the predominant UI pattern for mobile interfaces.)

Consistency

Anti-Mac principle Diversity | Post-Mac principle Consistency

The Mac design principle of consistency is about applications being “consistent within themselves” and “consistent with one another.” What that means is that UIs should strive for a kind of regularity that helps people learn how things work. That is, one way to do things, one way of referring to things. There are exceptions, but UIs are not usually creative expressions; artistic but unnecessary variation can make apps more difficult to learn. Consistency can be achieved by using standard controls, by adopting platform patterns, and by rigorous attention to any intentional variation during the design process.

Benefits: Once a user learns how something works in your app, they know how it will always work. And if your app is like other apps, once they learn how it works in your app, they’ll know how it works in other apps. As Apple said in the 1986 edition of the Human Interface Guidelines, “this benefits the typical user, who usually divides working time among several applications, and it benefits every software developer because the user learning how to use a new application builds on prior experiences with the same elements in other applications.” The guidelines helped developers make Mac apps that looked and felt like Mac apps.

Drawbacks: Pure, unadulterated consistency is impossible. G and N critique it vaguely by saying that it’s hard to apply, because of “conflicting things with which you can be consistent.” Any designer with experience creating software will tell you that there are certainly tradeoffs involved in achieving sufficient consistency, but plenty of heuristics for doing so. If your user has time to learn the app and is going to spend all day in it to earn a living, then by all means deviate from the norm. But if you’re just one of a couple of dozen apps they use every day, you have to be more careful.

A pen that looks like a shoe! (by nevR-sleep on DeviantArt)

G and N trot out a silly example about pens: Two kinds of pens look different, but they’re still pens, and we can tell them apart. Well, that’s about the level of consistency that software needs to aim for, too. Don’t make your pen look like a shoe (credit to G and N for that example).

Anti-Mac: Diversity

Unfortunately, G and N didn’t articulate a clear alternative to consistency in their thought experiment. They do say that “it is the rich and fine-grained representation of objects in the real world that allows for pens or books to have a wide variety of appearances and still be easily recognizable,” and go on, “as representations of objects in the computer interface become richer and more fine-grained, the need for complete consistency will drop.” Without more detail, it’s hard to know what the Anti-Mac design principle of diversity is meant to dictate. The Wild West of UI?

Post-Mac: Consistency

This conclusion is self-evident, especially in the Post-Mac environment. People benefit more from the Mac design principle of consistency than they would from diversity that impedes learning and knowledge transfer from app to app. Diversity is fine when married with visual and interactive unity — ensuring that things look, feel, and work mostly the same. Consistency is still a valuable guiding principle, and it doesn’t preclude creativity — you just need to break it deliberately.

Networks like Twitter, Facebook, Reddit, and others have all contributed to the generation and adoption of standards for sharing content, authorizing identity online, and so forth. UIs and products depend heavily on “fitting in” to models people already understand; deviation can be confusing and costly. People can just jet off to the next, more consistent and easier-to-learn app.

WYSIWYG (What You See is What You Get)

Anti-Mac principle Represent meaning | Post-Mac principle Represent meaning

WYSIWYG — pronounced whizzy-wig — effectively states that what you see on screen should be a faithful representation of exactly what you’ll see when you print it. The Human Interface Guidelines say that “there should be no secrets from the user, no abstract commands that promise future results.” It’s a directive primarily about parity between on-screen and printed display, and it’s intimately connected with the direct manipulation and see-and-point design principles: WYSIWYG, and you can manipulate WYS directly to SWYG. It corrects a frustrating experience common in its contemporaries, like DOS computers running WordPerfect; there you’d enter a mode to tell WordPerfect to make a word bold, but you wouldn’t see it bold on screen — only later, when you printed the document. In other words, it was an abstract command, promising a future result.

Good old WordPerfect. WYS is definitely not WYG.

Benefits. Well, what you see is what you get. No surprises.

Drawbacks. None, really. Who can take issue with the faithful-print-representation goal? G and N interpret the design principle to be more limiting, that no thing on screen should “be” anything more than it appears to be. (Out the window go metaphors like the trash can.) Of course, any object on a computer screen can be something other than what it appears to be. A word can be a word, or it can be a link. It might even be a word that “contains” a whole other document. If the word is someone’s name, it might be a kind of representation of that person. These rich semantics are lost when what you see on screen is limited to literally representing just one state or one slice of the deeper underlying object.

Anti-Mac: Represent meaning

G and N propose an alternative to WYSIWYG that actually subsumes it. Their Anti-Mac design principle suggests that semantically rich objects should be the atomic basis of interaction. In other words, things on screen should be more than they appear to be. If I’m looking at a miniature “business card” representing a contact in my address book, it should be more than just a picture of a business card: I should have access to the underlying richer data, probably about a person or a company or both. And I should be able to “use” the business card to do things that I might want to do with the underlying person or company, like address an email, connect two people from different companies together, whatever. The object underlying a given on-screen representation could be drawn and interacted with in all kinds of ways; WYSIWYG is still possible because one of the most appropriate representations of a Word document, for example, is the view of what it’ll look like when it’s printed. But that’s not the only view.
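In code, representing meaning amounts to keeping one rich underlying object and rendering it through partial views. Here’s a minimal sketch in Python — the Contact class, its fields, and its views are invented for illustration, not any particular address book’s data model.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """The underlying, semantically rich object. On-screen 'business cards',
    email 'To:' chips, and the like are all partial views of it."""
    name: str
    company: str
    email: str
    phone: str

    # Each view is one on-screen representation of the same underlying meaning.
    def as_business_card(self) -> str:
        return f"{self.name}\n{self.company}\n{self.phone}"

    def as_email_recipient(self) -> str:
        return f"{self.name} <{self.email}>"

alex = Contact("Alex Rivera", "Acme Co.", "alex@example.com", "555-0100")
print(alex.as_business_card())     # what a business-card view would draw
print(alex.as_email_recipient())   # what an email 'To:' field would draw
```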

René Magritte.

Post-Mac: Represent meaning

This has become the dominant design principle, even though sometimes it’s a bear to make it happen. (There are a million and one ways to encode the rich semantics of any given thing, and it’s hard to settle on the best way.)

So much meaning, not just pins on a map.

In the Post-Mac environment of simulacra, almost everything on screen is a partial and virtual representation of some deeper, meaningful thing. My “home” location pin on Google Maps isn’t just a metaphor or a picture of a pin; it’s just one context-sensitive instance of a richer object — a meaningful Google Maps-internal notion of the place where I live. I can tap it to reveal more about it, and it plays a substantial role when I’m interacting with Google Maps. By default, the app shows me how long it’ll take me to drive from my home to that place I looked up. Similarly, in most places you see a friend’s name in Facebook, you can bet it’s not just the letters of their name — it’s a little textual representation of them that you can click or otherwise interact with.

In interfaces predicated on simulacra of the real world, the atomic bits of the UI are only successful if they represent meaning. A prescription-management app is only valuable if a prescription object in the UI is somehow a virtual version of my prescription; ditto for an item in my Amazon shopping cart. The list goes on.

Bonus! If you build software on this principle, you can more easily design for progressive disclosure. Reveal only a little meaning at a time, as needed.