Apple: 3D Sensing Marches On


Summary

A rear-facing 3D sensor for iPhone.

The difference between time of flight and structured light.

BMO Capital capitulates in the face of Nvidia success.

What will the next Nvidia GPU be called?

Rethink Technology business briefs for November 14, 2017.

A rear-facing 3D sensor for iPhone

Source: Apple

Back in August, I wrote an article (exclusive for subscribers) called Investing In 3D Sensing For Mobile Devices. A key element of the investment thesis was that 3D sensing was going to spread well beyond the limited application that Apple (AAPL) had implemented in iPhone X.

Apple's front-facing TrueDepth sensor is mainly used for Face ID, although it can also drive Animoji, facial animations that track the user's expression in real time. There is a broader set of applications surrounding augmented or mixed reality that 3D sensing could support, but only if the sensor faces the rear.

A rear-facing 3D sensor could allow augmented reality (AR) animations to interact more realistically with physical objects. The 3D sensor would be used to map those objects and provide that information to the AR app. Currently, Apple's implementation of AR simply places the animation on a flat, level surface.

A rear-facing 3D sensor would also pave the way for AR glasses, or smartglasses, something Apple is reportedly working on. Indeed, the use of 3D sensing in AR glasses has already been demonstrated in Microsoft's (MSFT) ungainly though innovative HoloLens system.

And, there are other potential uses for such a sensor, as demonstrated in Google's (NASDAQ:GOOG) (NASDAQ:GOOGL) Project Tango. Tango was Google's attempt to create a smartphone that was more “aware” of its surroundings.

So, I thought that 3D sensing would spread, not only to the rear of the smartphone but to many other mixed reality and AR devices, including various headsets and glasses. Today, Bloomberg reports that Apple is working on a rear-facing 3D sensor for iPhone, to be introduced in 2019. The report states that the sensor will improve augmented reality apps.

The difference between time of flight and structured light

What I found interesting about the report is the claim that Apple is evaluating a different approach for the rear sensor than the one currently used in the iPhone X TrueDepth system. The TrueDepth system uses what's called a structured light approach, pioneered by the Israeli company PrimeSense, which Apple acquired in 2013.

PrimeSense developed the depth-sensing technology behind Microsoft's original Kinect sensor, based on the structured light approach. The approach projects a pattern of dots and observes the angular shift of the individual dots when they land on a non-flat surface. From those shifts, the three-dimensional shape of the surface can be deduced.
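
To make the geometry concrete, here's a minimal sketch of that triangulation in Python. The focal length and baseline below are illustrative placeholders, not Apple's actual TrueDepth parameters; the point is simply that depth falls out of how far each dot appears shifted in the camera image.

# Minimal sketch of structured-light triangulation: a dot projected from an
# emitter offset from the camera appears shifted in the camera image, and the
# size of that shift (the disparity) encodes depth. Values are illustrative.

def depth_from_disparity(disparity_px, focal_length_px=1400.0, baseline_m=0.025):
    """Depth in meters of a projected dot, given its shift in the image (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A dot shifted by 70 pixels would sit at 1400 * 0.025 / 70 = 0.5 m from the camera.
print(depth_from_disparity(70.0))  # 0.5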

There's an alternative approach that's more difficult to implement, called time of flight. In time of flight, a surface or physical space is scanned with a laser. The system measures the time it takes for the laser light to reach the object and return. Since light travels at a (more or less) constant speed through the air, simple arithmetic can be used to determine distance to the object.
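
That arithmetic is simple enough to show in a few lines of Python. The 20-nanosecond round trip below is just an example value, not a measurement from any real sensor.

# Minimal sketch of the time of flight arithmetic: the pulse travels to the
# object and back, so the distance is half the round trip at the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_s):
    """Distance in meters to the object, given the measured round-trip time in seconds."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_s / 2.0

# A return detected 20 nanoseconds after the pulse left puts the object at about 3 m.
print(distance_from_round_trip(20e-9))  # ~3.0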

Here's a useful illustration of the different sensing methods from Business Insider:

Time of flight is what has been used for most LIDAR applications. Its main disadvantage is that it typically can only measure distance to one particular point in space at a time. Thus, most LIDAR systems use some form of mechanical scanner. The LIDAR systems currently being used on self-driving cars usually feature these scanners, visible as “hockey pucks” or rotating drums on the roof of the vehicle.

The Bloomberg article claims that Apple is evaluating a time of flight approach. Obviously, a mechanical scanner wouldn't be acceptable for iPhone. Instead, there's a newer technology called Flash LIDAR that dispenses with the scanner: it uses a special camera, similar to a video camera, that measures time of flight for each individual pixel.

Such sensors are still developmental and suffer from several limitations. Each pixel needs its own time of flight measurement circuitry, which includes a very fast clock, a triggering circuit, and a pulse-detection circuit. It's because of the challenges of Flash LIDAR that Apple went with the structured light approach for the TrueDepth sensor.
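
To illustrate what such a sensor produces, here's a hedged sketch in Python (with NumPy): a frame of per-pixel round-trip times converts to a depth map in one step. The frame size and timing values are made up for illustration.

# Sketch of Flash LIDAR output: every pixel records its own round-trip time,
# and the whole frame converts to per-pixel depth at once. The resolution and
# timing values below are illustrative, not from any real sensor.
import numpy as np

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def depth_map_from_times(round_trip_times_s):
    """Per-pixel depth in meters from an array of per-pixel round-trip times (seconds)."""
    return SPEED_OF_LIGHT_M_PER_S * np.asarray(round_trip_times_s) / 2.0

# A VGA-sized frame of simulated return times between 13 and 27 nanoseconds
# corresponds to depths of roughly 2 to 4 meters.
times_s = np.random.uniform(13e-9, 27e-9, size=(480, 640))
print(depth_map_from_times(times_s).mean())  # about 3 m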

There has been some confusion on this point. For instance, Paulo Santos recently stated that the TrueDepth sensor of iPhone X is based on time of flight. This is incorrect.

The limitations of Flash LIDAR are gradually being overcome, but they're still pretty significant. The resolution of the sensors tends to be very low, on the order of VGA (640 x 480).

And there's the problem of providing suitable laser illumination. Light propagates at about one foot per nanosecond, so most LIDAR and Flash LIDAR systems use very short 1-2 nanosecond laser pulses. Producing pulses of sufficient intensity and uniformity to be detected by the Flash LIDAR sensor is also challenging.
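
The back-of-the-envelope numbers behind that constraint, with a 10-foot target distance chosen purely as an example:

# Rough arithmetic behind the timing constraints: light covers just under a
# foot per nanosecond in air, so pulses are short and returns come back fast.
FEET_PER_NANOSECOND = 0.98  # approximate; the exact figure is about 0.9836 ft/ns

pulse_ns = 2.0             # pulse width
object_distance_ft = 10.0  # example target distance

pulse_length_ft = pulse_ns * FEET_PER_NANOSECOND                # ~2 ft of light in flight
round_trip_ns = 2.0 * object_distance_ft / FEET_PER_NANOSECOND  # ~20 ns until the return

print(pulse_length_ft, round_trip_ns)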

Apple is almost certainly evaluating time of flight, but I doubt that it has reached a decision, as the Bloomberg article seems to suggest. Structured light is still perfectly usable in room-sized spaces, which is why it's used for Kinect and almost certainly for HoloLens.

Bloomberg suggests that Apple would adopt time of flight in order to get around the manufacturing difficulties of the current TrueDepth sensor. I doubt that the reported manufacturing difficulties for the TrueDepth sensor would outweigh the greater complexity of Flash LIDAR. Apple is simply doing what it should do, evaluating various alternative technologies. My guess is that structured light will still win based on cost and relative maturity.

Apple is part of the Rethink Technology Portfolio and is a recommended buy.

BMO Capital capitulates in the face of Nvidia success

Nvidia (NVDA) has been a personal favorite of the Rethink Technology Portfolio and continues to help fuel its exceptional total return of 66.68%. Nvidia has not always been a favorite of analysts, however.

BMO Capital analyst Ambrish Srivastava has been bearish since April of this year, when he noted, based on supply chain data, that GPU shipments would be down sequentially in Nvidia's fiscal 2018 Q1, which ended April 30. And he was right. Nvidia's GPU business was down 16% sequentially, and the gaming market segment was down 24% sequentially.

Many attributed this to competitive pressure from Advanced Micro Devices (AMD), but it was more the pressure of expectations than reality. When the consumer RX Vega cards were finally released in August, they fell short of Nvidia's year-old Pascal architecture in both energy efficiency and absolute performance.

By Nvidia's just-reported fiscal 2018 Q3, the gaming market segment was up 32% sequentially and 25% y/y. I called Nvidia unstoppable in my earnings coverage.

Apparently, BMO has realized that Nvidia is unstoppable as well. In a note to clients, reported by CNBC, Srivastava wrote:

We have been reluctant to change our view, but now recognize that our underperform call did not work out. Our negative stance to date was based largely on our view that the gaming business would see a marked deceleration in CY17 vs. CY16. However, the diversity in the business with wins at Nintendo, and help from the cryptocurrency market, has enabled the business to sustain at a higher level than what we were modeling. . .

Nvidia has done a very good job in capitalizing on the demand for AI/ML, and primarily on the training side, where the company really has no competition. While we are ourselves believers in the secular trend to heterogeneous computing, we have been wrong on the tailwind for Nvidia's business in this market. . .

Additionally, Nvidia has also demonstrated a far higher amount of leverage in its model than we had been anticipating. The recently reported quarter was an example of the earnings power in the model.

It still feels a little like damning with faint praise, doesn't it? Coming at this point, the admission is painfully obvious and necessary.

Nvidia is part of the Rethink Technology Portfolio and is a recommended buy.

What will the next Nvidia GPU be called?

One of the hallmarks of Nvidia's recent earnings conference calls has been reticence about the nature, timing, and architecture of Nvidia's next consumer GPU. The fiscal 2018 Q3 conference call was no exception. However, we know that something has to be on the way, and it might be an all-new, non-Volta architecture built on a newer TSMC (TSM) process node such as 10 nm.

The German site heise.de reports that the next consumer GPU architecture will be called Ampere and will be announced at Nvidia's GPU Technology Conference in 2018. This could simply be a good guess. I'd be curious to hear from readers about this. Feel free to leave a suggestion for the next Nvidia GPU name.

Disclosure: I am/we are long AAPL, NVDA, TSM.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
