We are living in an era in which everything changes rapidly. Our current tools will be superseded by many better ones over the next few decades. We do not have truly precise sensors or clever voice assistants yet, but they will certainly be part of our lives in the future. We can sense that many things are changing, but we do not yet know how. Some people will invent new, disruptive ways of communicating and interacting, and new interfaces will be born out of those interactions.

Some people among us define the future by crossing boundaries. They can look at problems from completely different angles and match the right interfaces with the right controls. Some options are already on the table: voice, sensors, or maybe even the mind. We do not know the exact solutions yet, but designers should prepare now by thinking about how to design interfaces for tomorrow's human-computer interaction.

Warming up for the future

Think about it: only two decades ago, we had no real idea of touch controls; there were only mice and some dumb pocket computers. Before the iPad, Microsoft had the idea of a 'tablet computer', but it did not ship the right experience with the right UI. Those devices were neither attractive nor suited to fingers. Steve Jobs knew how to fix that, and he had the right recipe: no unnecessary physical buttons and, crucially, new software with a brand-new, sophisticated user interface designed purely for touch. If Steve Jobs had not defined touch as we know it today, we might not have found a proper approach for another decade.

Bill Gates (2002)

Making a device with a mixed UI, combining touch control and a physical keyboard, is also hard. For example, the Metro/Modern UI of Windows 8 created confusion because it relied on a different mental model than previous Windows operating systems. Windows mixed touch-oriented design with the traditional desktop, and that did not go well: a serious number of people could not adapt to the new UI. Eventually, Microsoft separated the views (mobile and desktop), redefined the interfaces, and sought common ground between mouse/keyboard-controlled and touch-controlled devices.

But new UIs will certainly be shaped differently. Actions such as saving a photo, editing a document, or sending a message can now be triggered by voice commands as well as buttons. Actions may also be driven purely by gestures, tracked by sensors around us. We may still use our keyboards when we need precision, but gestures and voice commands will become the main tools for tasks like skipping to the next song. We already use well-defined gestures: we swipe right to go back instead of pressing a back button, and we pinch the screen to magnify items instead of tapping plus and minus buttons, as in the sketch below.
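As a rough illustration of that shift, here is a minimal UIKit sketch in which "go back" and "zoom" are bound to gestures instead of buttons. The view controller and method names are hypothetical, invented for this example.

import UIKit

// Hypothetical view controller: the same actions a back button and
// plus/minus buttons would provide, bound to gestures instead.
class PhotoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Swipe right to go back, replacing a back button.
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(goBack))
        swipe.direction = .right
        view.addGestureRecognizer(swipe)

        // Pinch to magnify, replacing plus/minus buttons.
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(zoom(_:)))
        view.addGestureRecognizer(pinch)
    }

    @objc private func goBack() {
        navigationController?.popViewController(animated: true)
    }

    @objc private func zoom(_ recognizer: UIPinchGestureRecognizer) {
        guard let target = recognizer.view else { return }
        target.transform = target.transform.scaledBy(x: recognizer.scale, y: recognizer.scale)
        recognizer.scale = 1 // reset so each update scales incrementally
    }
}

The point is less the code than the mental model: the affordance lives in the user's hands, not in on-screen chrome.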

Just Setting New Rules

Fitts's law describes the relationship between movement time, the distance to a target, and the target's size. Thanks to this equation, we can understand the mechanics behind the differences between TV interfaces and smartphone interfaces: we make buttons larger on the TV screen, while on smartphones we try not to make touch targets smaller than 44 × 44 points.
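In its widely used Shannon formulation (one of several variants in the literature), the law can be written as:

T = a + b \log_2\left(\frac{D}{W} + 1\right)

where T is the time to reach the target, D is the distance to it, W is its width along the axis of motion, and a and b are constants fitted empirically for each device. A wider target lowers the logarithmic index of difficulty, which is why ten-foot TV interfaces use large focusable tiles and why touch targets much below 44 points become measurably slower and more error-prone to hit.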

We need further models that describe what human-computer interaction will look like once input methods change. UI designers should carve new experiences and rules out of these newly emerging controls. Human-computer interaction is changing along with the technologies around us.

Apple TV user interface (selectable items are larger because of the viewing distance)

Think about the voice controls of the Amazon Echo, Google Home, and Apple HomePod. They have no traditional screens to interact with; instead, they take voice commands. How can designers design an interface when there is no visible interface? This is where things get complicated. First, UX is no longer considered only for screens but also for voice, and this experience may depend on AI, which brings voice-specific experience requirements of its own. Second, screens will remain our main way of seeing things even as interaction styles diverge, and UIs will have to adapt to these changes. Your tablet, smart TV, and computer UIs are already totally different; since the ways they are controlled differ, their interfaces should be adapted accordingly.
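To make "designing with no screen" concrete, here is a minimal sketch of a rule-based voice-intent matcher. The intents, phrases, and responses are entirely hypothetical; real assistants use far richer natural-language models, but the design question is the same: the "interface" is the grammar of phrases the device accepts and the spoken feedback it returns.

// Hypothetical intents for a screenless speaker; names are illustrative.
enum Intent {
    case playMusic, pauseMusic, nextTrack, unknown
}

// Map a transcribed utterance to an intent with simple keyword rules.
func intent(for utterance: String) -> Intent {
    let text = utterance.lowercased()
    if text.contains("play") { return .playMusic }
    if text.contains("pause") || text.contains("stop") { return .pauseMusic }
    if text.contains("next") || text.contains("skip") { return .nextTrack }
    return .unknown
}

// With no screen, spoken responses carry all the feedback a UI would
// normally show: confirmation, state, and error recovery.
func respond(to utterance: String) -> String {
    switch intent(for: utterance) {
    case .playMusic:  return "Playing your music."
    case .pauseMusic: return "Paused."
    case .nextTrack:  return "Skipping to the next song."
    case .unknown:    return "Sorry, I didn't catch that."
    }
}

Notice how the unknown case matters as much as the happy paths: with no screen, error recovery has to be designed into the conversation itself.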

Exploring the new

Like voice, gestures also provide different experiences. Sensors are tiny now and can track every movement easily; how will our UIs keep up with these changes? Do we still need cursors and pointers? Should there be a button to pan the screen, or will a gesture do it? These developments pave the way for us to build a different future. User interfaces will change rapidly and emerge in different forms. Screen shape, distance from the user, and interaction method (voice, keyboard, gesture, etc.) each require a different kind of design thinking.

Reimagining the possibilities gives us the power to create the future. And the future can only be shaped by design, because only design can answer questions that have never been asked before by drawing on human psychology and the mind. UIs will be reshaped with every iteration that technology provides, and at some point, design exploration will give us the right answer.