Who’s who in depth sensing technology

PARIS — Since Apple rolled out the iPhone X late last year, opinions among industry observers have split over 3D sensing technology. Is 3D all the rage among smartphone users? Is it the next must-have feature for every smartphone on the planet?

The answer: not yet. The consensus among smartphone vendors today is that this is still an early phase in the 3D imaging “era.”

The industry verdict on 3D sensing [inside smartphones] varies widely — from “why bother?” to “it’s the future.” The case against 3D sensing comes from those convinced that “in-display fingerprint sensors suffice” and that “digital photography trumps 3D sensing” for smartphone camera fans. Some interpret the lack of iPhone X competitors with full-blown 3D sensing technology as a lack of market interest; others disagree. Rather, it is because “Apple has been ahead of its time,” explained Pierre Cambou, principal analyst for imaging, technology & market, at Yole Développement.

Moreover, contributing to the dearth of smartphones with structured-light-based 3D sensing technology — now stretching more than eight months — is a limited supply of key components such as vertical-cavity surface-emitting lasers (VCSELs), which Apple has already locked down, several industry insiders pointed out.

Chinese are coming

Meanwhile, of course, China is horning in on the 3D conversation.

The 3D sensing argument took a positive turn when Chinese smartphone OEMs including Xiaomi, Oppo, and Vivo unveiled their plans for 3D sensing over the last few months. In late May, Xiaomi announced the launch of its Mi 8 Explorer Edition, touted as “the world’s first Android smartphone that supports 3D facial recognition.”

In June, Oppo announced its Find X with a 6.42-inch AMOLED display. The 3D-sensing-enabled phone is deemed a direct competitor to the Mi 8, with a higher price tag of around $750.

In June, Vivo said that it will be using a new 3D sensing technology for its facial authentication system. Unlike Apple’s iPhone X, which uses structured light technology for 3D sensing, Vivo will deploy time-of-flight (ToF) technology for 3D.

Xiaomi Mi 8 Explorer Edition comes with Face ID. (Source: Xiaomi)

The industry has been scrutinizing these Chinese OEMs’ 3D sensing tech partners. Xiaomi is working with Israel-based Mantis Vision, with a 3D technology based on structured light. Oppo has partnered with Orbbec 3D Technology International Inc. (Shenzhen, China), also using structured light. Vivo, using a ToF technology, has reportedly teamed up with PMD Technologies AG (Siegen, Germany). Yole’s Cambou, however, added that Oppo might be collaborating with Sony instead.

Doubling down on 3D

Against this backdrop, we asked Yole Développement, which recently published a report entitled “3D Imaging & Sensing, 2018 edition,” to identify technologies used in a variety of systems, key players in the 3D sensing ecosystem, and market size and forecast for 3D technologies.

Cambou, co-author of Yole’s report, is one of the analysts doubling down on 3D sensing. He told EE Times, “Yole is confident that 3D is here to stay.”

He noted that the complexity and high cost of the structured-light 3D sensing module that Apple chose for its TrueDepth camera partly explain why it’s taking Android phones almost eight months or even longer to catch up with Apple.

While Apple’s TrueDepth had by 2017 established the trend for 3D front-facing cameras, Yole acknowledged that the wave [for 3D sensing adoption] “has started on the conservative side in terms of volume.”

3D in every new iPhone — front and back?

The press is rife with speculation about whether Apple’s new iPhone models, scheduled for announcement this fall, will also feature 3D front-facing cameras.

In the iPhone X, Apple used 3D sensing technology only for the front-facing camera. A bigger question now is whether 3D is also going to turn around and face rearward.

Although Cambou is sure about front-facing 3D, he remains skeptical of use cases for 3D in rear-facing cameras. Pointing out a lack of momentum for VR and AR, he explained that neither the augmented-reality sales pitch nor augmented gaming is yet proven on the market. He suspects that 3D in rear cameras may remain “a niche feature in the future.”

But 3D sensing suppliers for Apple appear to think otherwise. They’re bullish on the prospect of 3D sensing as “a universal interface,” acknowledged Cambou. Recent quarterly financial calls held by STMicroelectronics and ams revealed that “they are almost overly confident” that 3D sensing will go inside both the front and rear cameras of smartphones, observed Cambou.

So what percentage of smartphones will have 3D cameras? What’s the penetration ratio? Yole predicts that the 1.4% penetration ratio of 2017 will grow to 55% in 2023.

What about revenue?

Yole’s forecast is coming on strong. An estimated $2 billion for 3D in 2017 (including 3D used in everything from consumer, automotive, and medical to commercial, scientific, defense, and space) will reach $18 billion in 2023. Contributing to this revenue figure, according to Cambou, are “higher average selling price (ASP)” than previously predicted and 3D adoption growth among consumer devices.

In fact, of all the market segments that deploy 3D, Yole predicts that the consumer segment will drive 3D sensing technology at a compound annual growth rate (CAGR) of 82% ($375 million in 2017 to $13.8 billion in 2023). Yole projects the automotive market at a CAGR of 35% ($391 million in 2017 to $2.4 billion in 2023).
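As a sanity check on those growth rates, the implied CAGR can be recomputed directly from the endpoint revenues. A minimal sketch, using only the dollar figures quoted above:

```python
def cagr(start, end, years):
    """Compound annual growth rate from `start` to `end` value over `years` years."""
    return (end / start) ** (1 / years) - 1

# Yole's 2017 -> 2023 forecasts span six years
consumer = cagr(375e6, 13.8e9, 6)    # ~0.82, i.e. ~82%
automotive = cagr(391e6, 2.4e9, 6)   # ~0.35, i.e. ~35%
print(f"consumer CAGR: {consumer:.0%}, automotive CAGR: {automotive:.0%}")
```

Both results line up with the 82% and 35% figures Yole reports, confirming the forecast windows are 2017 through 2023.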

Putting 3D in context

Clearly, 3D sensing has allowed Apple to replace fingerprint sensors with Face ID. As Yole noted, applications for 3D sensing in the iPhone X have enabled “easy unlocking,” “security via facial recognition,” “gaming” (i.e., animated avatars), and “morphing” including real-time filters (AR, face-swap) on Facebook, Snapchat, and Instagram.

But if you keep thinking of 3D only in the context of smartphones, you might be missing out on the big picture of what 3D is bringing to the market.

“Think how Amazon’s Echo changed the man-machine interface” by connecting voice to AI and elevating speech as a primary UI, said Cambou. Similarly, 3D sensing technology makes biometric identification, such as face recognition, “foolproof,” he added, as 3D also opens the door to AI.

Smartphones, indeed, make 3D sensing more widespread. But 3D sensing isn’t just a smartphone feature. It has far-reaching social and political ramifications, cautioned Cambou.

He referred to a recent blog post — entitled “Facial recognition technology: The need for public regulation and corporate responsibility” — by Microsoft president Brad Smith.

In his post, Smith wrote:

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate, or even arrest for a crime.

Let that sink in a little. The impact of 3D is much bigger than a simple debate over whose smartphone comes with a 3D feature or whose 3D the phone uses.

3D technology ecosystem

From an engineering standpoint, though, most interesting is the unfolding of different approaches in depth-sensing technologies in mobile devices. Different actors in the global ecosystem are all competing for design wins in the coming era of 3D sensing.

Just to recap, structured light and ToF are two different methods available for depth sensing today. A structured-light system performs 3D scanning by projecting a known pattern onto an object; when the light hits the object, the pattern is distorted, and the system infers depth by analyzing that deformation.
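Underlying that deformation analysis is simple triangulation between the projector and the camera: a projected dot displaced from its calibrated reference position maps to a depth. A toy sketch of the core relation — the baseline, focal length, and disparity values below are illustrative assumptions, not figures from any shipping module:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulated depth for a structured-light dot.

    A dot displaced by `disparity_px` pixels from its reference position,
    with projector-camera baseline `baseline_m` and focal length `focal_px`,
    sits at depth Z = b * f / d.
    """
    return baseline_m * focal_px / disparity_px

# Example: 50 mm projector-camera baseline, 1400 px focal length,
# dot shifted 70 px from its calibrated reference position
z = depth_from_disparity(0.05, 1400, 70)  # -> 1.0 m
```

Note the inverse relation: nearer objects shift the pattern more, which is why structured light resolves close-range depth (like a face at arm's length) so well.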

ToF, meanwhile, works by illuminating the scene with a modulated light source and observing the reflected light. The phase shift between the illumination and the reflection is measured and translated into distance.
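That phase-to-distance conversion can be sketched in a few lines. This assumes a continuous-wave ToF sensor; the 20 MHz modulation frequency below is an illustrative value, not a spec from any vendor named in this article:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the measured phase shift of a continuous-wave ToF signal.

    The reflection lags the emitted signal by `phase_shift_rad`; since the
    light covers the round trip 2*d, depth is d = c * phi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Example: 20 MHz modulation, pi/2 phase shift -> ~1.87 m
d = tof_distance(math.pi / 2, 20e6)
```

One design consequence: the phase wraps every 2*pi, so a single modulation frequency has an unambiguous range of c / (2 * f_mod) — about 7.5 m at 20 MHz — which is why practical ToF cameras often combine multiple modulation frequencies.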

While Apple has chosen the structured-light approach for its front-side depth camera as a starting point for the 3D imaging era, Yole is convinced that the front 3D module “could evolve toward ToF technology in the future.” In Cambou’s opinion, the ToF approach will show “more reliability in direct sunlight and lower computation need.”

In the table above, Yole laid out three different 3D imaging and sensing technologies ranging from stereo vision to structured light and ToF. Among these, the least technically mature is ToF. But those involved in ToF solutions include major players such as PMD, Sony, ST, and Infineon. The structured light team boasts Mantis Vision, ams and ST, Intel, Himax, and Namuga. ST plays in both camps.

Yole suspects that Apple will not soon switch from structured light to ToF. Cambou believes that Apple will wait until it rolls out new hardware in two to four years. Meanwhile, he added that ST, a major contributor to the iPhone X’s structured-light-based 3D technology, “is very much on its way” to launching ToF-based 3D technology.

Digital photography vs. 3D

One of the quandaries Apple faces today is whether its big bet on 3D could end up hurting the company by sacrificing traditional digital photography quality.

Huawei, for example, is making a big investment in digital photography, said Cambou. Comparing the active sensor area devoted to traditional photography, he calculated that Apple uses only 52 mm², while Samsung has committed 91 mm² and Huawei 112 mm².

In the following pages, EE Times, with the help of Yole, offers a Who’s Who of key players in the 3D sensing ecosystem — particularly in 3D software/computing and 3D system design.

3D Software and Computing

(in alphabetical order)

1. Faceshift (now part of Apple)

In late 2015, Apple quietly snatched up Faceshift, a startup born out of EPFL, the technical university in Lausanne, Switzerland. The Swiss startup developed technology to create animated avatars capable of capturing a person’s facial expressions in real time.

Apple iPhone X offers Animoji (Source: Apple)

Sound familiar? Indeed, it was Faceshift’s face-tracking technology that became the foundation for the iPhone X’s Animoji.

2. Movidius (an Intel company)

Movidius, now an Intel company, rolled out earlier this year its Myriad X Vision Processing Unit (VPU), billed as “the industry’s first system-on-chip shipping with a dedicated neural compute engine for hardware acceleration of deep-learning inference at the edge.”

Long before it was acquired, Movidius had worked on 3D. Its chips were initially used as an engine for 3D rendering. But the Movidius distinction came when it partnered with Google on Tango, a project of Google’s Advanced Technology and Projects group.

Project Tango (Source: Google)

Project Tango’s goal was to use computer vision that enables mobile devices to detect their position in 3D relative to their surroundings without GPS or other external signals. Movidius, in Project Tango, offered a chip used for computer vision in positioning and motion tracking.

Movidius’ ultra-low-power vision chip became a harbinger of 3D sensing on mobile devices years ahead of its market. Most vision-processing platforms available then (in the early 2010s), such as the PrimeSense chip in Microsoft’s original Kinect, drew much more power than Movidius’ vision processor. It should be noted that PrimeSense was acquired by Apple in late 2013 for $360 million.

3. SoftKinetic (now Sony Depthsensing Solution)

In October 2015, Sony acquired SoftKinetic, a Belgium-based maker of 3D sensing computer vision technologies. SoftKinetic’s technologies included Microsoft Kinect-style depth cameras, CMOS depth chips, and gesture-tracking middleware.

In what appears to be the strongest indication that the Japanese giant is dead serious about becoming a key player in 3D CMOS time-of-flight sensors and cameras, Sony renamed SoftKinetic as Sony Depthsensing Solution late last year.

Sony’s DepthSense technology and gesture recognition software are designed into Sony’s new entertainment robot, “aibo.” (Source: Sony)

SoftKinetic, according to Sony, has consistently delivered advanced solutions in 3D sensing and processing through the development of CMOS 3D sensors, 3D camera reference designs, SDKs, algorithms, and applications for gesture recognition, object scanning, automotive control, and AR/VR.

The DepthSense camera module and software developed by the Brussels-based team has already been designed into Sony’s new entertainment robot called “aibo.”

4. Inuitive

Inuitive Ltd. is an Israel-based fabless chip company focused on 3D imaging.

Last fall, the company introduced NU4000, a multi-core vision processor that supports 3D imaging, deep learning, and computer vision processing for augmented reality and virtual reality, drones, robots, and many other applications.

Its new processor, the company claims, will enable “high-quality depth sensing, SLAM on-chip, computer vision, and deep-learning (CNN) capabilities — all in affordable form factor and minimized power consumption.”

3D System Design

(in alphabetical order)

1. LIPS

LIPSedge M3 is a versatile ToF depth camera specifically designed for embedded systems to generate real-time 3D data. (Source: LIPS)

LIPS Corp., founded in 2013 in Taiwan, is a leader in 3D depth cameras and total 3D solutions, with patented camera designs and software algorithms.

LIPS designs, builds, and customizes 3D depth cameras. It also creates recognition middleware and solutions to meet OEMs’ various applications, including robotic vacuum cleaners, AR/VR, home robots, ADAS, and factory automation.

2. Mantis Vision

Mantis Vision styles itself as a key enabler for 3D vision, plotting a future for its technology to allow consumers to take 3D pictures on smartphones or scan an object using a tablet, then send it to a 3D printer.

The company has what it claims as “proprietary coded structured-light technology.”

The Mantis Vision pattern utilizes a unique code (U.S. Patent 8,208,719), which allows a smaller footprint to uniquely identify many more points than standard methods. The company claims that this unique technology provides higher resolution and accuracy at a smaller minimum object size. (Source: Mantis Vision)

In announcing its partnership with Xiaomi, Mantis Vision CEO Gur Arie Bittan noted that his team overcame many challenges to make what Mantis Vision calls the most cost-effective 3D structure-light camera module on the market by using its internal IPs.

He noted that the company was able to, among other things, shrink the optical stack from centimeters to millimeters, incorporate VCSEL lasers when they were still a nascent technology, meet OEMs’ strict power-consumption targets, and comply with eye-safety regulations. He said Mantis Vision was also able to build a camera bracket module that synchronizes with existing RGB cameras, define and prepare for mass-production calibration, and invent efficient pattern-decoding algorithms and pipelines running on a single Arm core.

3. Orbbec

Known as Orbbec 3D Technology International Inc., Orbbec was founded by Howard Yuanhao Huang, who graduated from Peking University and earned his Ph.D. at the MIT SMART Center. He is known for his research on laser speckle interferometry, digital speckle correlation, projected structured light, and computer vision. He has been researching and developing 3D scanning technology since 2001.

According to Orbbec, the company “has spent the last three years getting 3D right” — perfecting the company’s Astra family of 3D cameras and designing its Orbbec Persee 3D camera with a built-in, fully functioning computer.

Astra Mini comes in two versions. The short-range version of Astra Mini S has a 0.35- to 1-meter tracking range, compared to Astra Mini long-range cameras that have a 0.6- to 5-meter range. (Source: Orbbec)

Orbbec claims that its Astra 3D cameras are “better than other cameras on the market today,” offering “the highest depth resolution, superior range, exceptional accuracy, and lowest latency.”

Orbbec is identified as a partner to Oppo, which uses structured light-based 3D sensing technology.

4. PMD

PMD Technologies AG (Siegen, Germany), founded in 2002, develops CMOS semiconductor 3D ToF components while offering engineering support in the field of digital 3D imaging.

In particular, the company takes pride in its leading ToF technology — integrated on a chip, which PMD says is “small, scalable, and robust.”

Leica designed a dedicated optical lens for PMD’s new 3D depth sensing imager for mobile devices and the corresponding camera module. (Source: PMD)

PMD also offers support for OEMs’ camera development with reference designs — both standard and customized. PMD claims that its years of experience in ToF camera design help system designers accelerate their hardware development process, as the company’s scalable technology “will fit in with one’s own specific camera module requirements.”

Infineon Technologies AG (Munich, Germany) has been the long-term partner of PMD. Together they developed a ToF image sensor for use in 3D face ID. Infineon’s Real3 (or IRS238xC) enables a camera module for integration in smartphones with a footprint of less than 12mm by 8mm, including the receiving optics and VCSEL illumination.

PMD is seen as one of the tech companies working with Chinese smartphone vendor Vivo for smartphones featuring ToF-based 3D technology.

— Junko Yoshida, Global Co-Editor-In-Chief, AspenCore Media, Chief International Correspondent, EE Times