Unveiled at CES 2019, the Royole FlexPai is the first commercially available flexible smartphone/tablet; image courtesy of Royole

Is the future of smartphones flexible?

In past articles for Inexhibit, I have investigated the history of smartphones and how the “cellphone” has evolved into a sort of handheld computer, though with some limits.

At the same time, I concluded that the usefulness of today’s smartphones is drastically limited by two physical factors. On one hand, displays are still too small, and enlarging them further is difficult and raises a number of practical issues.

On the other hand, interaction and input methods are primitive and strongly limit the use of smartphones as anything other than simple entertainment and web-surfing devices, unfit even for writing a relatively complex text or editing a photo.

I suppose manufacturers are well aware that these limits will undermine the commercial success of their smartphones; indeed, this is already happening.

Look at Apple, for example.

Besides a rather useless increase in computational power and screen resolution, the strategy in Cupertino has been to progressively enlarge the phone display, from the original 3.5” to the 6.5” of the iPhone XS Max. Going further without making the device impractical seems a hard job. Indeed, the iPad mini is equipped with a 7.9” display, just an inch and a half larger than the iPhone XS Max’s, and I doubt many would consider it practical as a smartphone. Therefore, if Apple’s commercial strategy has been, so far, to introduce larger (and more expensive) smartphones, that strategy is becoming increasingly ineffective, as the disappointing sales of the iPhone X seem to suggest.

Moreover, we haven’t seen a substantial evolution of smartphone interaction systems since 2007. These days, many are betting on the future of voice-assisted systems and virtual assistants such as Siri, Cortana, and Google Assistant.

I am not sure, but I suspect that, beyond an initial wave of interest, they won’t be as successful as many expect. If humans use their hands, and not speech, to perform complex interactions with machines, it is because hands are a more practical and effective tool. The difference is like that between whistling the Well-Tempered Clavier and playing it on… a clavier. Let’s admit there’s no comparison.

The problem of displays

To solve the problem of display size, we need to equip a smartphone with a display larger than the device’s “basic” physical size. This means we’ll have a small and agile device when we need it (to answer a phone call, to view a map quickly, to share a photo with our WhatsApp group) which can transform into a true laptop on request (to edit that photo with Photoshop, to write an article from a train, to start a multi-chat from a hotel room, to play Grand Theft Auto on a decent screen).

Three ways to do that come to my mind.

One is a projective display, which means embedding a micro-video-projector into the device and projecting the screen onto a surface. Technically, it’s totally feasible; yet the power consumption is gargantuan, and we have to find an appropriate surface, which is often unavailable.

Another way is to use a head-mounted display, such as Google Glass or Microsoft’s HoloLens; yet customers don’t seem to appreciate smart glasses much so far, judging them too uncomfortable and fatiguing.

The third way is the most promising and, not by chance, the one many tech companies are developing today: flexible displays.

Head-mounted displays and smart glasses, such as Google Glass, could potentially be turned into “virtually” large screen smartphones; image courtesy of X Development LLC.

Flexible displays

Technological research on thin, rollable, and foldable displays is not that new.

Early experiments were carried out at Xerox PARC in the early 1970s, leading to the first monochromatic flexible display, called Gyricon, in 1974.

A prototype of the Gyricon flexible e-paper developed at the Xerox PARC in the 1970s

In the early 2000s, a team at Queen’s University led by Roel Vertegaal started developing computer and smartphone prototypes featuring flexible touch-screen displays in combination with what they called an “Organic User Interface”.

The idea is that a flexible device also allows for new types of natural human-machine interaction; for example, the user can scroll pages by delicately bending an edge of the screen, the device can draw the user’s attention by automatically deforming its shape, the user can copy documents or “expand” the graphics window across two or more devices by simply putting them side by side, and so on. In 2013, the team, in collaboration with Plastic Logic and Intel, presented both a tablet, called PaperTab, and the MorePhone smartphone which implemented such techniques in working prototypes.
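As a rough illustration of how one of those gestures could work, here is a minimal sketch in Python of mapping a corner-bend reading to page scrolling; the sensor reading, threshold, and function name are hypothetical, not taken from the Queen’s University prototypes:

```python
# Hypothetical sketch: translating a flexible display's bend angle
# into page-scrolling commands. Values are invented for illustration.

BEND_THRESHOLD_DEG = 15.0   # minimum bend before a gesture is recognized

def bend_to_scroll(bend_angle_deg: float) -> int:
    """Translate a corner-bend reading into a number of pages to turn.

    Positive angles (bending the edge toward the user) page forward,
    negative angles page backward; a gentle flex is ignored as noise.
    """
    if abs(bend_angle_deg) < BEND_THRESHOLD_DEG:
        return 0
    # Turn one extra page for every additional 20 degrees of bend.
    pages = 1 + int((abs(bend_angle_deg) - BEND_THRESHOLD_DEG) // 20)
    return pages if bend_angle_deg > 0 else -pages

# Example readings from a (hypothetical) flex sensor:
print(bend_to_scroll(10.0))   # 0  -> too gentle, ignored
print(bend_to_scroll(25.0))   # 1  -> one page forward
print(bend_to_scroll(-60.0))  # -3 -> three pages backward
```

The point of such a mapping is that the deformation itself becomes the input channel, so no on-screen widget is needed.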

The MorePhone flexible smartphone, developed by Roel Vertegaal at Queen’s University in Canada in 2013. Image courtesy of Queen’s University Human Media Lab.

The Organic User Interface concept is based on natural gestures for human-machine interaction; for example, by putting two flexible tablets side by side, the “master” GUI automatically “extends” across both devices, doubling its size.

Yet, the most interesting technology currently available is arguably the OLED display.

By its nature, OLED technology can “spread” a display over a thin, flexible substrate and therefore make the entire device flexible, as long as you protect the display from water (the organic compounds that form OLEDs are very sensitive to moisture) and solve the problem of the device’s “rigid” parts, such as the battery (though flexible batteries are under development).

Nokia was the first manufacturer to present a flexible smartphone, in 2011; yet others are working on it as well, including Sony, the Flexible Display Center at Arizona State University, the Chinese company Royole, and Samsung.

Since 2010, the South Korean company has presented several flexible/bendable smartphone prototypes, and the release of the Galaxy F model (F means flexible, of course) is scheduled for 2019.

Though not totally new, the Galaxy F concept is interesting because, rather than being a “bendable” phone (whose usefulness would be questionable, IMHO), the device is a clamshell-like smartphone whose display grows from 4.58” to 7.3” simply by “opening” it (a design already featured in the Nintendo DS handheld game console, some years ago).

In perspective, this opens up new possibilities: for example, we could have a vest-pocket, multiply-folded device which transforms into a large portable computer with a 15”/20” panoramic display; this way, smartphones, notebooks, and desktop computers would converge into a single device.

An apparently trivial problem is that, in that design, a single display seems not to be enough. If we look at the Samsung prototype, we’ll see that it actually has TWO displays. One is a folding screen placed inside the “booklet” which makes the device a sort of tablet when open. The other is a smaller external display which works when the Galaxy F is used as a “traditional” smartphone.

Yet, I suppose a single triple-folding display could easily solve the problem.

A prototype of the Samsung Galaxy F folding smartphone; image courtesy of Samsung

The Morph concept flexible smartphone developed by Nokia in 2008

The prototype of a flexible battery created by Panasonic.

The problem of interaction

If the display is large but my capability to interact with the system is limited, I’ll remain a largely passive user. Complex systems require interaction more powerful than newborn-like, two-finger hand gestures. The key is, once again, to make the surface through which the user controls the smartphone larger than the physical size of the “idle” device. A first method is to make the flexible display, or a part of it, a touchscreen with full-size digital keyboards and touchpads. Another is to create virtual keyboards, mice, joysticks, and touchpads by means of a stereoscopic vision system which “sees” the user’s hands and translates their movements in space into complex commands.

This technology is not new and has been adopted for years in video game console peripherals, such as Microsoft Kinect, and in motion sensing assistive systems for people with disabilities (I remember that, in 2002, I developed a similar system by combining a webcam, a gesture recognition software for physically-impaired people, an interactive 3D virtual environment made in Director, and some coding in Lingo; I guess I still have it, somewhere).
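To give an idea of how such a vision system could turn tracked fingertip positions into commands, here is a minimal sketch in Python; the coordinate conventions, key dimensions, and function name are assumptions made for illustration, not a real motion-tracking API:

```python
# Hypothetical sketch: mapping a tracked fingertip position (in
# keyboard-local centimeters) to a key press on a virtual keyboard.
# Dimensions and thresholds are invented for illustration.

KEY_SIZE_CM = 2.0          # each virtual key is a 2 cm square
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
PRESS_PLANE_Z_CM = 1.0     # fingertip closer than 1 cm to the surface = press

def fingertip_to_key(x_cm: float, y_cm: float, z_cm: float):
    """Return the virtual key pressed at (x, y, z), or None.

    x grows rightward along a row, y downward across rows, and z is
    the fingertip's height above the (possibly imaginary) surface.
    """
    if z_cm > PRESS_PLANE_Z_CM:
        return None                      # hovering, not pressing
    row = int(y_cm // KEY_SIZE_CM)
    col = int(x_cm // KEY_SIZE_CM)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None                          # outside the keyboard area

# A fingertip 0.5 cm above the surface, 12 cm right, 1 cm down:
print(fingertip_to_key(12.0, 1.0, 0.5))  # 'u'
```

A real system would of course add smoothing, per-finger tracking, and debouncing, but the core idea, quantizing a tracked 3D position onto a virtual control surface, is this simple.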

Furthermore, by adding touchless haptics – based on ultrasound, for example – we can get tactile feedback which lets the user operate such immaterial peripherals in a more effective and “physical” way.

Many companies are currently working on this; for example, at IDF 2016, Intel showed the prototype of an “immaterial” piano combining a holographic display, its RealSense motion-tracking technology, and a touchless haptic feedback system.

The touchless haptic “air piano” presented by Intel in 2016

Yet, what’s the point of expanding our smartphones’ capabilities so much?

Besides the commercial reasons previously mentioned, the problem is that smartphones are still hybrids. They are no longer phones, and not yet computers. They have the computational power of a desktop computer, yet we are using only a fraction of it. Think about it: apart from web-based ones, hardly any applications for desktop computers are also available for smartphones, not even the most basic word processor.

This means that a smartphone is not that “powerful computer you could slip into your back pocket” many envisaged in the 2010s, but just a communication device which is growing increasingly larger, more expensive, and more fragile.

Indeed, people frequently own three computers: a smartphone to communicate and screw around on the Internet, a notebook to work with when traveling, and a desktop PC in the office. Some years ago, many predicted the imminent end of laptops, completely replaced by smartphones and tablets.

That scenario never materialized, because, to date, smartphones and tablets simply can’t do what a laptop does.

In conclusion, the evolution of smartphones seems to follow a precise path.

From 1985 to 2007, it was a quest for miniaturization; from 2007 to about 2020, devices grew bigger and bigger in order to accommodate increasingly larger touch-screen displays, yet they retained a solid physicality; after 2020, smartphones will “dematerialize” to the point where they coincide with their progressively lightweight and immaterial display and interaction system.

There are still many issues, admittedly.

Making a display thin and flexible doesn’t automatically make the entire device equally thin and flexible; there are a lot of serious technical problems in between.

Furthermore, we have no information about the reliability and commercial reception of products which don’t yet exist; some people buy an iPhone because its shiny glass-and-stainless-steel chassis looks great.

Yet, despite their price, smartphones are not luxury wristwatches; they are short-lived high-tech devices which need to be constantly upgraded and “evolved” into new models if their manufacturers are to avoid a disruptive commercial failure.

Flexible or not, I am pretty sure that the future of smartphones is made of hot immaterial electromagnetic waves rather than of cold metal.