
Editor's Note: This article was originally published on June 10, 2014.

As we move into an increasingly connected world, where the number of devices we own and use continues to rise and the activities we're trying to track and control continue to expand, there's at least one obvious challenge confronting both the industry and us. Where will we actually see the information and content we want?

While that may seem like a bit of an odd or even naïve question, I think it could become a very important one. Plus, I believe the answer will have a number of important implications for the development of new technologies and new devices, particularly in burgeoning areas like wearables and home automation.

Of course, the obvious answer to the question would be on the screen of whatever new device we purchase for that particular tracking or controlling application. After all, display technologies continue to improve and expand at a rapid rate. Plus, as many modern device categories evolve, it's often the screens themselves that are both the main hardware attraction and the center of our attention.

But I’m starting to wonder how far we can really take that argument. Does every new device we purchase really need to have its own dedicated screen? I’m sure my old friends and colleagues in the display industry won’t be happy to hear this, but I think we could soon reach a point of diminishing returns when it comes to adding big, beautiful displays to all our new devices.

To put it more practically, do I need to put a great display on a wearable device, a home automation gateway or any of a number of other interesting Internet of Things (IoT) devices that I expect we'll see introduced over the next several years? My take is no, probably not.

Of course, some might argue there isn't so much a limit on the number of screens we need as on the number of devices themselves. But as much as I would like to think that we'll see an increasing degree of device consolidation and a desire among consumers to reduce the number of devices they own and use, I see absolutely no evidence to suggest that possibility. In fact, the number of devices per person just continues to increase.

So, what does this mean? I believe it means we need, and will start to see, more developments that leverage the incredible set of screens we already have. Between big-screen HDTVs, higher-resolution PC monitors, notebooks, tablets and smartphones, a large percentage of people in the U.S. already have access to a relatively wide choice of screen sizes, almost all of which have HD-level resolutions (if not higher).

The challenge is that you can't easily connect to and/or "take over" those screens from other devices. Most people are unlikely to want to run cables from newer devices -- especially ones that are likely to be small -- so the only realistic option is a wireless connection. To do that, you need some kind of intelligence in both the sending and receiving devices, as well as agreed-upon standards.

For years, there were several competing wireless video standards, some from the CE industry and others from the PC industry, but most of them have now fallen by the wayside. Two of the last survivors -- Intel's (NASDAQ:INTC) WiDi technology and the Wi-Fi Alliance's Miracast -- essentially merged into a single standard as of this time last year, allowing a wide range of devices, from Windows-based PCs to Android-based smartphones and tablets, to connect to a select set of Miracast- or WiDi-enabled TVs. (Unfortunately, backwards compatibility with legacy devices, including early implementations of either Miracast or WiDi, isn't always great.)

As with many areas, Apple (NASDAQ:AAPL) has its own standard for wireless video connections, called AirPlay. For iOS devices in particular, AirPlay enables applications like sending video from an iPad to an Apple TV plugged into a larger TV.
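To give a sense of how little work this asks of the sending device, here is a minimal Swift sketch (assuming a UIKit iOS app and a placeholder video URL of my own choosing) using AVFoundation's AVPlayer, which exposes flags that let the system hand a video stream off to an AirPlay receiver such as an Apple TV:

```swift
import UIKit
import AVFoundation
import AVKit

// Minimal sketch: play a video in a way that lets iOS route it to an
// AirPlay screen. The video URL below is a placeholder, not a real asset.
final class PlayerViewController: UIViewController {
    private let player = AVPlayer(url: URL(string: "https://example.com/video.m3u8")!)

    override func viewDidLoad() {
        super.viewDidLoad()

        // Opt in to letting the system send this player's video to an
        // external AirPlay screen when the user selects one.
        player.allowsExternalPlayback = true
        // Prefer true external playback (rather than mirroring) while an
        // external screen is active.
        player.usesExternalPlaybackWhileExternalScreenIsActive = true

        // Standard system player UI; its transport controls include the
        // AirPlay route picker when a compatible receiver is nearby.
        let playerController = AVPlayerViewController()
        playerController.player = player
        addChild(playerController)
        playerController.view.frame = view.bounds
        view.addSubview(playerController.view)
        playerController.didMove(toParent: self)

        player.play()
    }
}
```

The notable design point is that the app doesn't manage the wireless link itself; it simply marks the content as routable and lets the operating system and the receiver negotiate the connection.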

In the emerging worlds of wearables and other IoT applications, however, it's not clear how connections between those devices and their likely screen targets -- smartphones and tablets -- are going to work. Right now, many wearables and IoT devices function as dedicated accessories to host devices but, in several cases, the range of host OSes supported is very limited.
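For a sense of what that accessory model looks like from the host side, here is a rough Swift sketch (the wearable and the surrounding setup are hypothetical; the heart-rate service UUID is the standard Bluetooth one) in which a phone app uses CoreBluetooth to find and connect to a nearby screen-less device:

```swift
import CoreBluetooth

// Rough sketch of the "smartphone as host" model: the phone scans for a
// nearby BLE wearable, connects, and becomes the place where the
// accessory's data (and any rich UI) actually lives.
final class WearableHost: NSObject, CBCentralManagerDelegate {
    private let heartRateService = CBUUID(string: "180D") // standard heart-rate service
    private var central: CBCentralManager!
    private var wearable: CBPeripheral?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Called when Bluetooth powers on; start looking for accessories.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [heartRateService], options: nil)
    }

    // A matching wearable was found; keep a reference and connect.
    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        wearable = peripheral
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didConnect peripheral: CBPeripheral) {
        print("Connected to accessory: \(peripheral.name ?? "unknown")")
        // From here the host app would discover services and characteristics
        // and render the wearable's data on the phone's own screen.
    }
}
```

The wearable itself needs no display at all: the phone discovers it, pulls its data and provides the screen.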

The problem that vendors face is essentially a philosophical question about the nature of each device: if it includes a reasonably sized screen, it's a standalone device; if it doesn't, it's essentially an accessory. While it's tempting to think that every device should be able to function as a standalone master device, I think consumers could tire of too many "masters." Instead, accessorizing a few primary devices, particularly the smartphone, could prove to be a more fruitful path.

As vendors start to offer a wider range of devices and consumers try to integrate these new options into their existing device collections, the need for broader and better adoption of screen-sharing technologies will quickly become evident. In fact, I'd argue that easier screen sharing could actually speed the adoption of new device categories: if consumers feel a new device works with the equipment they already own, they'll be more inclined to buy it.

The problem now is that few vendors are spending much time or effort on screen sharing. But if the Internet of Things is truly to take hold, the ability of screen-less devices to leverage existing displays will be a critical enabling technology. So, let's hope we start to see more screen sharing soon.

Disclosure: None.

Disclaimer: Some of the author's clients are vendors in the tech industry.

Source: Screen Overload To Drive Screen-Less Devices