Seeking Alpha

Alex Daley's  Instablog

  • "Why Your Health Care Is So Darn Expensive"…

    By Alex Daley and Doug Hornig,
    Senior Editors, Casey Extraordinary Technology

    The cellphone in your pocket is NASA-smart. Yet it costs just a couple hundred dollars.

    So why are rising technical capabilities driving prices drastically lower everywhere... except on your medical bill?

    The answer may surprise you…

    Continual microchip technology breakthroughs mean you can now do more on a phone bought for $200 than you ever could have thought of doing on a $2,000 computer just a decade ago.

    In fact, it has more computing power than all of NASA had back in 1969 - the year it put two astronauts on the moon.

    Video games, which consume enormous amounts of computing power to simulate 3D worlds, demand more of it than the mainframes of the previous decade could deliver.

    The $300 Sony PlayStation in the kids' room has the power of a 1997 military supercomputer that cost millions of dollars.

    Warp-speed progress.

    So just think what computers can do to help doctors cure you when you're sick.

    Indeed, computers do keep us healthier and living longer.

    Illnesses are diagnosed faster. Computer scans catch killer diseases earlier, giving the patient a better survival rate than ever in history.

    New treatments are being created at an astonishing rate. All kinds of conditions that would have killed you a decade ago now are controlled and even cured, thanks to new technology.

    But with all these advances in technology, shouldn't medical care - just like the mobile phone and video games - be getting cheaper?

    Yet here we are, still paying through the nose for every powder, pill, and potion. And it seems like nothing ever gets cheaper when it comes to medical treatment.

    It Must Be Price Fixing
    We're Just Making Doctors and Big Pharma Rich, Right?

    Or are we?

    It seems that the increase in cost is not because doctors are making a lot more money than before, as you'll see in a moment. (It might surprise you).

    "The bottom line is that you are paying for extending your life and curing diseases that until recently would likely have killed you."

    A longer life has a bigger price ticket.

    But there's more to it than that.

    Are We Really Living That Much Longer?

    Doctors cure your ills and repair the damage you do to yourself by accidents, old age, abuse, and general wear and tear.

    It is easy to dismiss the days of people's lives spanning a mere three decades as prehistoric... but it wasn't really that long ago.

    Consider that according to data compiled by the World Health Organization, the average global lifespan as recently as the year 1900 was just 30 years.

    If you were lucky enough to be born in the richest few countries on Earth at the time, the number still rarely crossed 50.

    However, it was just about that time that public health came into its own, with major efforts from both the private and public sectors.

    In 1913, the Rockefeller Foundation was looking for diseases that might be controlled or perhaps even eradicated in the space of a few years or a couple of decades.

    Curing the Six Killer Diseases of Childhood

    The result of this concerted public-health push included the eradication of smallpox and the near-elimination of leprosy and other debilitating or deadly diseases.

    It also included vaccines against the six killer diseases of childhood: tetanus, polio, measles, diphtheria, tuberculosis, and whooping cough.

    A simple graph illustrates the dramatic change.


    In the US, the average lifespan is now 78.2 years, according to the World Bank. In many countries in the world, it is well over 80.

    But the story isn't so simple. Like all averages, this one is skewed by the extremes.

    For instance, in the early part of the 1900s, the data point that weighed most heavily on average lifespans was child mortality.

    Back then families were much larger, and parents routinely expected some of their children to die.


    But the flip side, as can be seen in the graph, is that for anyone lucky enough to survive childhood at the turn of the last century, life expectancy was not that much lower than it is today.

    It seems that for all of our advances in medicine,
    we only live about 20 to 30% longer.

    Not only is the increase quite small - relative, say, to the explosion in computing power over the same period - but the amount of money we spend to add another year or two to the average lifespan is also on the rise.

    If we exclude high child mortality, then, we are not living all that much longer today than we once were. So where does all the money we spend actually go?

    We can get a glimpse of the answer in the following graph.

    Intuitively, one would think that there should be a relationship between the economic well-being of a country and the life expectancy of its citizens. And, as you would imagine, there is a strong correlation between wealth and health.


    The important takeaway from this graph is the flattening of the curve along the top.

    What it means is that many countries with lower GDPs - those with less to spend on health care - can still attain life expectancies in the 65-75 year range.

    Pushing the boundaries beyond that (as the richer countries do) clearly requires much greater resources. By implication, that means spending more to do it.

    Here's something else we've discovered:

    The cost of battling the diseases of adulthood rises dramatically with age.

    How much so?

    Well, per capita lifetime healthcare expenditure in the US is $316,600 - a third higher for females ($361,200) than for males ($268,700).

    But two-fifths of that difference is attributable solely to women's longer life expectancy.

    For everyone, nearly one-third of lifetime expenditures is incurred during middle age, and nearly half during the senior years. And for survivors to age 85, more than one-third of their lifetime expenditures will accrue in just the years they have left.
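As a rough sketch (in Python, using only the figures just quoted), here is the arithmetic behind the male/female gap and its longevity component; the 40% share is the "two-fifths" from the text:

```python
# Lifetime healthcare expenditure figures quoted in the article (US, per capita).
female, male = 361_200, 268_700

diff = female - male                  # dollar gap between the sexes
ratio = diff / male                   # ~34%, i.e., "a third higher"
longevity_share = 0.4 * diff          # the two-fifths owed to longer female lifespans

print(f"Female-male gap: ${diff:,}, i.e., {ratio:.0%} higher")
print(f"Portion owed to longer life expectancy: ${longevity_share:,.0f}")
```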

    Technologies and the cutting-edge companies that create them help to drive these costs down, while creating a profitable business for themselves and their investors.

    To take a simple example, the MRI machine invented by Raymond V. Damadian may seem expensive on the surface, but it accomplishes things that previously required a much heavier investment in time and diverse professional expertise.

    Or consider a company like NxStage Medical - one the team at Casey Extraordinary Technology has been following closely, and profiting handsomely from, for quite some time.

    Its business has revolutionized the delivery of renal care. NxStage's home-based units save a ton of money compared with the traditional, thrice-weekly visits to a special dialysis clinic.

    Innovations like these from NxStage are changing the way patients receive care and the way a company produces income for its shareholders.

    But the problem remains that overall, medical costs continue to rise faster than improved technology can serve as a countervailing force.

    There are three easily identifiable reasons for this:

    • Diminishing marginal returns
    • Rising costs of non-technology inputs
    • Increased quality of life

    The Law of Decelerating Returns

    Technology in most arenas is a field of rapidly increasing marginal return on investment, i.e., accelerating change.

    In other words, things don't just get continually better or cheaper; they tend to get better or cheaper at a faster rate over time.

    There is a simple concept in finance, hard sciences, and any sufficiently quantitative field - anywhere that numbers dictate behavior - called "economies of scale": unit costs change as output grows. The extra profit earned on each additional unit is the "marginal return."

    Imagine for a moment that you are a manufacturer:

    • Once you've paid off the cost of your factory or equipment - your "fixed" costs - say you make a widget that sells for $10, with a cost per unit of $5.
    • If you make and sell a thousand units, you make $5,000 profit. That is the marginal return.
    • But now imagine that if you make 100,000 units, your cost per unit drops to $4 - you have more negotiating power over your raw materials suppliers, you can run your staff with less slack, etc.
    • And if you can make 200,000, your cost per unit drops by another dollar, to $3.

      That means you have "accelerating marginal returns" - we're used to that in technology.
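The widget arithmetic above can be sketched in a few lines of Python (the prices and volume tiers are the article's hypotheticals, not real data):

```python
def profit(units, price, unit_cost):
    """Profit ignoring fixed costs: per-unit margin times volume."""
    return units * (price - unit_cost)

# Unit cost falls as volume grows - the "accelerating marginal returns"
# of the widget example. All numbers are the article's hypotheticals.
PRICE = 10.0
tiers = [(1_000, 5.0), (100_000, 4.0), (200_000, 3.0)]

for units, cost in tiers:
    print(f"{units:>7} units: ${PRICE - cost:.0f}/unit margin, "
          f"${profit(units, PRICE, cost):,.0f} profit")
```

Note that profit grows far faster than volume: a 200x increase in units yields a 280x increase in profit, because the margin widens along the way.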

    Every year Intel is able to lower the cost of the processing power it sells on the market.

    Making the chips becomes easier with time and scale, although there is fierce competition from companies like ARM Holdings and Taiwan Semiconductor to take some share.

    Fortunately for Intel, a chip is a commodity product: it's the same for nearly all consumers, and the market is global. (Not to mention the minimal need for highly trained service practitioners, the lack of spoilage, and the other nice benefits of dealing in circuit-board technology.)

    Of course medicine is not quite the same - at least not in the most important instances.

    No doctor treats a wide range of diseases. They're forced to specialize.

    • Moreover, they usually only see patients from within a certain physical radius.
    • Additionally, they must undergo never-ending education and certification.
    • They often practice in expensive buildings.
    • They require complex equipment used only by a handful of fellow specialists.

    Ultimately, there are few places to find so-called economies of scale.

    Treatment Difficulties

    The simple fact is that, in our self-centered zeal to live to the age of 80+, we have made a trade-off.

    We've left behind the diseases of youth - diseases that mostly strike once, resulting either in death or fading chances of a long life - but they've been replaced by a host of new, chronic diseases. Diseases of age. Diseases of environment. And diseases of design.

    These are the challenges companies like NxStage are dealing with every day. It's a time-intensive endeavor, but successful medicines are big winners for investors... sometimes very big.

    The illnesses we fall prey to these days as a result of living longer - conditions such as diabetes, ischemic heart disease, and cancer - are all much more complicated than their predecessors.

    First, none is caused by a single, easily identifiable agent. There's no virus to isolate and eradicate. There's no pathogen sample to convert to a vaccine.

    These are diseases born of the complexity of our bodies - and of the challenge of understanding what our bodies can fight off as they grow older than our predecessors could ever have hoped.


    These conditions cost considerably more to treat than the traditional infectious disease does. More labor is involved. More time. And available drug treatments rarely cure in a few doses, if ever.

    So, chronic conditions breed chronic costs.

    Of course they do; that's their nature.

    Keeping someone with lung cancer alive for twice as long as would have been the case 30 years ago is a great feat, but it comes at considerable additional cost in terms of the time devoted by the many healthcare professionals involved.

    And that means troubling questions like this must be asked: If every patient can live twice as long, but it takes twice as many net people-hours to care for them, has there been a net gain for society?

    The Driving Force

    Our medical progress has been won through a major increase in net costs per person.

    In 1987, US per capita spending on health care was $2,051. That's $3,873 in 2009 dollars.

    But in 2009, actual spending amounted to $7,960 per capita. Why?
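A quick sketch (in Python, using only the article's figures) separates how much of that jump is general inflation and how much is real growth in spending:

```python
# Per-capita US health spending figures quoted in the article.
nominal_1987 = 2_051                  # actual 1987 spending
real_1987_in_2009_dollars = 3_873     # the same spending restated in 2009 dollars
actual_2009 = 7_960                   # actual 2009 spending

cpi_factor = real_1987_in_2009_dollars / nominal_1987   # general inflation, ~1.9x
real_growth = actual_2009 / real_1987_in_2009_dollars   # growth beyond inflation, ~2.1x

print(f"General inflation, 1987-2009: {cpi_factor:.2f}x")
print(f"Real (inflation-adjusted) growth in spending: {real_growth:.2f}x")
```

In other words, even after stripping out inflation, per-capita spending roughly doubled over those 22 years.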

    Some of that is attributable, pure and simple, to rising costs that have outpaced inflation.

    In 1986, the average pharmacist made $31,600, or $66,260 in 2012 dollars. Today, the real average salary is $115,181 - nearly double.

    On the other hand, it's not universal.

    Radiologists, for example, have seen their salaries drop from an inflation-adjusted $425,000+ to $386,000 in the same period.

    Also, costs for surgeries and diagnostics are not a clear-cut contributor.

    Data are hard to compile as costs vary greatly:

    • California recently saw charges for appendectomies in the range of $1,500 to $180,000.
    • In Dallas, getting an MRI at one center can be more than 50% more expensive than another across town.

    Most indications seem to point to lower, not higher, real costs over time for most common conditions.

    Average hospital stays after appendectomies have fallen from 4.8 days to just 2.3 in the past 25 years, for instance. That's thanks largely to insurance requirements, as well as better sutures, pain medicines, and surgical equipment.

    As hard as procedural costs are to compare, the outcomes are much more clear-cut.

    In cancer, the improvement has been significant in some cases and less dramatic in others.

    For those diagnosed with cancer in 1975-'77, the five-year survival rate was 49.1% (and only 41.9% for males).

    For those diagnosed between 2001 and 2007, five-year survival increased to 67.4% for both sexes and jumped to 68.1% for men.

    Even if you're diagnosed when over age 65, you have a 58.4% probability of living another five years.

    Prognoses, however, vary widely with disease specifics.

    If you contract pancreatic cancer, for instance, your prospects are the grimmest.

    It's likely to be terminal very quickly. Among the most recently diagnosed cohort, a meager 5.6% survived for five years. That's more than double the rate from 30 years ago, but small comfort.

    Liver cancer sufferers' five-year survival rate has more than quadrupled, but only from 3.4 to 15%.

    Lung cancer is also still a near-certain killer. In the 2001-'07 group, a meager 16.3% survived for five years, only a slight tick up from the 12.3% rate of 30 years ago.

    Brain cancer is quite lethal as well, with only 34.8% surviving for five years today - more than 50% better than the 22.4% rate of 30 years ago, but not great.

    On the other side of the ledger, breast-cancer victims are doing very well.

    90% survive for at least five years if diagnosed after 2001, vs. 75% in 1975-'77.

    And prostate-cancer treatments have been the most spectacularly successful. Five-year survival is fully 99.9% for those diagnosed in the past ten years, vs. only 68.3% in 1975-'77.
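The relative improvements quoted above - liver "more than quadrupled," brain "more than 50% better" - can be checked directly from the figures in the text (a sketch; the numbers are the article's, not fresh data):

```python
# Five-year survival rates quoted in the article, in percent: (1975-77, 2001-07).
survival = {
    "all cancers": (49.1, 67.4),
    "liver":       (3.4, 15.0),
    "lung":        (12.3, 16.3),
    "brain":       (22.4, 34.8),
    "breast":      (75.0, 90.0),
    "prostate":    (68.3, 99.9),
}

for site, (old, new) in survival.items():
    # Improvement factor: how many times better the recent cohort fared.
    print(f"{site:>11}: {old:.1f}% -> {new:.1f}%  ({new / old:.2f}x)")
```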

    Longer survival rates are, of course, impossible to document in recently diagnosed patients, since we're not there yet. But to give you some idea, here are the 20-year survival rates for the above cancers, taken from the NCI's 1973-'98 database:

    Pancreas, 2.7%; liver, 7.6%; lung, 6.5%; brain, 26.1%; breast, 65%; and prostate, 81.1%.

    These are big steps forward, no question. They enhance not only the length but the quality of life, as well.

    However, with each rising year of average age, we increase our medical expenses rapidly.

    When we eradicated the big childhood killers, we solved most of the easy problems.

    As a result we all live longer - and we all live to face the much more complicated, and much more expensive to treat, diseases of age.

    At that point, it isn't lifestyle changes that are keeping us alive - it's machines and doctors and medicines doing a lot of the heavy lifting in order to grant us those precious extra days.

    It's the Hippocratic Oath writ large:

    Physician, thou shalt do whatever it takes to prolong life, no matter the price.

    All of that costs money, and lots of it.

    So we are not dying of the most dreaded ailments as quickly as we once were.

    But that's not due to much in the way of real advances in curing the major chronic illnesses of our time - heart disease, diabetes, cancer, and AIDS.

    The truth is that we've primarily extended the amount of time we can live with them.

    Mexico's health minister, Dr. Julio Frenk, noted the irony here when he said, "In health, we are always victims of our own successes." We are living longer... and we're costing a lot more in the process.

    Doug Hornig

    Senior Editor

    Doug Hornig is the editor of Casey Daily Resource Plus and a frequent contributor to both Casey Research's BIG GOLD, the go-to information source for gold investors, and Casey Daily Dispatch, a simple and fast way to keep up with the ever-changing investing landscape.

    Doug is not just an investment writer, however. Doug is an Edgar Award nominee, a finalist for the Virginia Prize in both fiction and poetry, and the winner of several open literary competitions, including the 2000 Virginia Governor's Screenwriting contest.

    Doug has authored ten books, done investigative journalism for Virginia's leading newspaper, and written articles for Business Week, The Writer, Playboy, Whole Earth Review, and other national publications.

    Doug lives on 30 mountainous acres in a county that has 14,000 residents and a single stop light.

    Alex Daley

    Chief Technology Investment Strategist

    Alex Daley is the senior editor of the technology investor's friend, Casey Extraordinary Technology.

    In his varied career, he's worked as a senior research executive, a software developer, project manager, senior IT executive, and technology marketer.

    He's an industry insider of the highest order, having been involved in numerous startups as an advisor to venture capital companies. He's a trusted advisor to the CEOs and strategic planners of some of the world's largest tech companies, and he's a successful angel investor in his own right, with a long history of spectacular investment successes.

    Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

    Jul 19, 4:04 PM
  • Is The Generational Divide In Technology Widening?

    My son doesn't know how to use a mouse.

    He doesn't even know what one is. As far as he's concerned, it's a furry animal he's only seen in books and running around the floor of the Newark airport.

    While I've known this for some time, it recently moved from the back of my mind to front and center following a brief car trip a few days ago. From the back seat, my eldest son - who for some inexplicable reason loves to watch the instructions tick by on the screen of the GPS unit sitting on the dashboard - asked me to program the unit to give us directions home. I politely declined, pointing out that I couldn't be messing around with the screen while I was driving. He followed up with that well-known youthful naïveté that borders on soul-piercing in its effectiveness at pointing out shortcomings in ourselves and our world, asking:

    "Why can't you just tell it where you want to go? Like the Xbox."

    "I don't know, son…"

    Unhappy with the answer he'd received, he steered the conversation toward endless questions about computers versus video games versus the car's GPS. In all the hubbub of explaining the differences between them to my eldest - especially how we interact with them - my youngest, despite spending many an hour on particularly snowy Vermont days upstairs in the home office playing Curious George games on the computer, piped in abruptly:

    "Dad! Why would your computer have a mouse in it?! You're just making that up!"

    A lot has been made over the past few years of so-called "digital natives" - children who were born and raised in the age of the computer. Kids like Mark Zuckerberg, who was born in 1984, eight years after the release of Apple's first computer. For all of his life (and mine; if I'm being honest, I'm not that much older than Zuck), computers have been part of human existence.

    We were both part of the first digital generation. But still, even then the computer was something distinct from everyday life. It was culturally defining. It was epochal, some might even say. But it was by no means universally prevalent.

    One of my fondest childhood memories is of the arrival of a fresh, new Tandy 1000 SL one Christmas morning. I remember it well. 50-odd lb., 13-inch CRT monitor. Big honking base "CPU." Keyboard… and no mouse. That came later, with the next-generation model.

    The arrival of the Tandy was the moment we went from a family without a computer to a family with one. I look back at it, and I like to think it compares to the arrival of the first television set in many households in the 1940s and 1950s. The family gathered around the set to watch Ed Sullivan, neighbors aglow with jealousy.

    We were by no means the first on the block to own a computer. Still, at that stage many a family didn't yet have one. Maybe Dad or Mom used one at work, and of course, our small suburban school had a little "lab" of them, used mainly to teach typing. So I'd used them before, on occasion, but the power of the device - and my borderline addiction to it - was not apparent until it invaded the home. Or at least my home (there is probably a good "nature vs. nurture" debate in the evolution of the computer geek).

    But it wasn't much like the arrival of the television at all. I only found out years later that my mother had to work hard to convince my father that it was more than his choice description - "a $3,000 pet rock." We never huddled around the phosphorescent glow of the screen as a family. My brother ignored it nearly entirely (maybe that was only because I hogged it?). The same was true for the majority of children in the original group of digital natives: the computer was part of, but mostly peripheral to, the average child's life. It was only a select set who really made it a part of their everyday lives.

    Yet today that seems to have changed, simply by the ubiquity thrust upon the current generation of children. Computers, in the looser definition of the word that includes smart phones, tablets, traditional PCs, interactive video players, GPS devices, game consoles, and a host of other consumer and business tools and toys, are everywhere. Our car stereos become speakerphones and road maps on demand. We even use touchscreens to order lunch at deli chains and burrito joints. Living a day free of interaction with a computer is now much more difficult than it was for the first digital cohort.

    Quantifying the impact of this generational shift is difficult, if not downright impossible. But we can garner quite a bit of insight from the anecdotal experiences we have with our own children today. I am a father of two wonderful sons, ages four and six. As you can imagine, given my geekish tendencies and a career centered on being up to date on some of the most advanced technologies around, my home is replete with the latest gadgets - everything from the run-of-the-mill consumer electronics to quad copters and 3D printers.

    Amongst the most prized of those gadgets (from the perspective of my six-year-old at least) are the video game consoles. We have a Wii, a Playstation, and most notably for this story, an Xbox with the Kinect attachment, hooked into the surprisingly modest-sized television in the living room. (I've never been a prime time TV or sports addict, so a TV bigger than the 37-inch LCD is one of those things I occasionally think of grabbing, but never bother to.)

    For those of you unfamiliar with it, the Xbox with Kinect allows you to eschew the traditional joystick and instead use gestures to control the game. No controllers. No remotes. Just stand in front of the three-eyed digital camera contraption, and it senses where your head, hands, feet, etc., are and where they are moving. The resulting paths of motion are translated by the console into swipes that slide content along the screen, kicks that send virtual soccer balls flying, and (thanks to some munificent math by game designers) Olympic-record-breaking long jumps you'd never be capable of in real life.

    The Kinect is not the only sensing device on the market. The video below highlights another one, the Leap Motion:

    This video shows the power of these devices firsthand. Like the Kinect, like the multi-touch screens of the iPhone, iPad, Androids, and other devices, the Leap Motion captures far more than just the location of a single dot. Instead it maps a wide variety of motions onto a map of intended actions. It attempts to allow for natural gestures to become the language in which we communicate with our computers.

    It's not uncommon these days for kids to experience computing without the traditional tethers of keyboard and mouse, or even remote controls and game controllers. These novel, unwired interfaces are not only coming to market, they are on the verge of becoming ubiquitous.

    Take another keyboard- and mouse-free device, for instance: the iPad. Just a little over two years after its introduction, the touchscreen-centric iPad is the number-one selling non-phone personal computer in the world. It outsells - in sheer volume of units shipped - the total of all computers shipped by any one of the top PC makers in the world: HP, Dell, Lenovo, etc. That's all their many models of desktops and laptops rolled into one.

    Or consider the iPhone, which is the number-one selling phone in the world, hands down. Its next closest competitors virtually all sport touchscreens as well. Now, 50% of all phones sold in the US are smartphones, and virtually all are powered by touchscreen interfaces.

    But iPad- and iPhone-like touch-based devices are just supplements for most households in the developed world (in developing nations, like most in Africa and large swaths of Asia, increasing numbers of households count their smartphones as the first and sole computing device). In the West, the touchscreen-centric devices add to their owners' computers, but still rarely wholesale replace them. The Western world, even at home, is still dominated by Windows and traditional Macs.

    However, recently Microsoft announced that its Windows 8 desktop operating system, due out this fall, will be fully touch enabled. In other words, its interface will look much more like the iPhone than it does the traditional Windows interface hundreds of millions of people know today. In fact, it will look exactly like the interface on the new Windows phones, dubbed "Metro."


    The sleek, tile-based interface is meant to work on touchscreens of every size, from phones to wall-sized displays - all at the tips of your fingers. The multi-touch revolution is literally remaking the computer as we know it. And more and more often, users - children especially - will be able to simply eschew the mouse and even the keyboard.

    That's because it's not just touch that Microsoft is eyeing. The same gesture and voice technologies that control the Xbox will be brought to Windows 8 as well.

    The company already produces a Kinect for Windows, and hackers have been busy connecting the device to older versions of Windows and to a whole host of other devices, including robots.

    With devices like the Leap Motion following the Kinect, gestures may someday become as common as the touchscreen is today.

    You'll be able to use your machine's microphone to control it as well. Microsoft already brought speech recognition to cars with Ford and Fiat's infotainment systems, and now it plans to make it ubiquitous in every device it touches.

    We've just begun what promises to be a wholesale revolution in the way we interact with computers - as big as or bigger than the introduction of the mouse and the graphical user interface. Already, the first crop of these devices is beginning to change the entire way we think about interacting with computers, from top to bottom.

    First, it's not that we have "a" computer; we now have multiple computers. And they carry names like "phone," "tablet," and "Xbox." With each, we touch the screens, talk to them, wave at them, and expect them to understand what we're doing. Increasingly, they even interact back with us through speech or by navigating our physical world.

    By the time my sons reach 8 and 10 - I was 10 when I received my Tandy, which came standard with a 256-color video graphics setup that I thought was pretty awesome at the time - the term "click here" will have about as much personal relevance to them as "turning" the channel or "dialing" the telephone.

    Even the fact that I had to "sit down at the keyboard" to type up this message is only a half-truth. I've been bitten by the speech recognition bug, and the majority of what you read here was spoken aloud to my computer, which did the typing for me, whilst I paced around my office.

    For me, that's still novel. But for my sons, who have known nothing different in their short lives, gestures and voice controls and touchscreens are so common that they now expect as much from every new device they encounter. To them, it makes no sense that they cannot just talk to the GPS (something which, now that it's been pointed out to me, seems equally preposterous given its position inside the car where inevitably both of my hands will be otherwise occupied at the 10 and 2 positions on the steering wheel).

    The touchscreen is for them - sons of a geek - the lowest common denominator. Everything does that. Speech? Gestures? Why not?

    User interface expectations are built very early on. Painted on the blank canvas that is a screen, they often come to be based on metaphors we know from our previous lives. Once we're comfortable with the way things work, it takes a pretty large benefit to change our behaviors. (If that were not the case, the iPad's onscreen keypad would have used the Dvorak layout, which has been proven time and again more efficient for typists than QWERTY - a layout invented to minimize jamming, and thus repairs, in mechanical typewriters. Like the metric system for most Americans, Dvorak just isn't enough better to make switching worth considering.)

    It is likely for this exact reason that, despite my penchant for gadgets, we still live in an iPad-free household. It's because Dad (i.e., me this time) doesn't like the thing. I find it terribly constrained. I cannot bear to type on the screen. There's no easy way to position the screen to a good angle. But most of all, I hate not having a file system where I can download a presentation and leaf through it, making small changes, adding slides, etc. The idea that a computer doesn't contain folders and files is as foreign to me as the lack of voice control in the car's GPS system is to my sons.

    Luckily, as one of the technological one-percenters from my own, original digital-age group, adjusting is easier for me than for most.

    I almost never thumb in a message on my Android phone. I rely instead on the excellent voice recognition built in (I only wish there was a button on the phone to hold to put it into voice mode, like on the iPhone).

    I use the Kinect voice controls regularly... so much so that given the choice between hopping around the nice "Metro" interface of the Xbox with my voice commands and trying to surf through cable channels, I end up watching "reruns" (another of those archaeologically rooted technical terms) on Netflix, via the Xbox, every single time. (Bonus: I never have to find that darned remote again!)

    My youngest son, sneaking upstairs for some additional fun with Curious George's online games, has (largely unnoticed by me until now) made the same choice with the computer in my office. He's elected to use the giant touchscreen I installed up there - as a geeky thing for me to explore and mostly never use - as his sole input device. To him, the mouse on the desk might as well be the furry little creature, as it has just as little to do with the computer as its mammalian namesake.

    No, for my two young sons, their Tandy moment will not involve a black screen with a blinking cursor. They may not have a single Tandy moment at all; perhaps they have already had many smaller ones. Maybe, just maybe, they will never know what it's like to see a colossal leap in technology step into their lives seemingly overnight. After all, for them, computing is already an immersive experience - one where you interact with dozens of devices, each purpose-built for its task, each designed to work around you, rather than forcing you to bend to its quirky and limited means of interaction.

    While members of my generation were the original "digital natives," things will look much different through the eyes of our own children. What we expect of computers has changed in a seeming flash. But the geek in me knows, deep down, that the most inclined members of their generation - like me, Zuck, and millions of others in the prior age cohort - will grow so frustrated with the limitations of what today's adults dreamt up that they, too, will work to throw it all out and replace it with something further still, inspired not by Star Trek - whose vision of the computer interface wasn't much beyond what's in the Xbox and iPad - but maybe by Ready Player One… or even Harry Potter.

    The implications of this trend loom large for investors as well. The new paradigm for computing is about natural interaction, and any company that ignores it will ultimately limit its market going forward. PCs ate the mainframe. The BlackBerry displaced the simple cellphone. The iPhone wiped out the BlackBerry. The Xbox trounced the Wii. What will the next major shift in the interface bring? Time will tell, but our experience thus far suggests the mouse will play a lesser role, and our hands, voices, and maybe even just our minds will play a much larger one.

    I'm excitedly awaiting the arrival on my doorstep of a novel "learning" thermostat (yes, I'm that kind of geek). Just adjust the temperature by turning the dial as you go in and out, as you wake and get ready to sleep, and it learns your patterns, creating a constantly adapting program to both make you comfortable and save energy. It adjusts to weekends - it knows what date and time it is. The weather - it knows where you live. When you aren't home - it has motion sensors. Cool stuff.
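
    The adaptation the thermostat performs can be pictured with a toy model. This is purely an illustrative sketch - the class, the hourly schedule, and the smoothing factor are all my own hypothetical simplifications, not how any real device works: each manual adjustment nudges that hour's stored setpoint via an exponential moving average, so the schedule gradually adopts the household's habits.

    ```python
    # Toy "learning" thermostat: each manual adjustment blends into the
    # stored setpoint for that hour of the day, so repeated habits
    # (e.g., turning it down every night) reshape the schedule over time.
    # All names and numbers here are hypothetical illustrations.

    class LearningThermostat:
        def __init__(self, default_temp: float = 20.0, alpha: float = 0.3):
            self.alpha = alpha                   # how quickly habits override the default
            self.schedule = [default_temp] * 24  # one setpoint per hour of the day

        def manual_adjust(self, hour: int, temp: float) -> None:
            # Exponential moving average: old setpoint drifts toward the new one.
            old = self.schedule[hour]
            self.schedule[hour] = (1 - self.alpha) * old + self.alpha * temp

        def setpoint(self, hour: int) -> float:
            return self.schedule[hour]

    t = LearningThermostat()
    for _ in range(10):              # user turns it down to 17° at 11pm, night after night
        t.manual_adjust(23, 17.0)
    print(round(t.setpoint(23), 1))  # prints 17.1 - the schedule has learned the habit
    ```

    A real device layers weather data, occupancy sensing, and calendar awareness on top of this kind of habit tracking, but the core idea - let repeated manual corrections rewrite the program - is the same.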

    But when it arrives, I am sure my son will ask why I have to "turn the dial" in the first place. Why can't I just tell it to make it cooler? Why not, indeed…

    As amazing as these advances are, they all are driven by the brilliant individuals whose visionary dreams guide their work. To be in on the companies most likely to survive the stiff competition in tech, an investor must understand this and keep up with the ever-shifting front lines of the tech wars.

    Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

    Jun 07 7:09 PM
  • Biometrics – Sci-Fi Becomes Reality

    For many years technology prognosticators have warned about the coming onslaught of “biometrics”: a fingerprint instead of one’s credit card at the ATM to draw cash, or a retinal scan at the border to verify one’s identity against one’s passport. Yet with decades of research and development behind the technologies, very few widespread uses of biometrics have found their way into our lives.

    That is starting to change, however – and if the latest technology is any indication, you can probably expect a lot more biometrics in your life real soon.

    When we think of technology, we often dream of the whiz-bang new capabilities it has brought to our lives, from ATMs to DVRs to smartphones. But for a technology to go mainstream, first and foremost it generally also has to reduce someone’s “pain” – whether that means saving businesses money or letting an individual conveniently catch his or her favorite program.

    Ultimately, the end user of a piece of technology has to like the outcome before it will really catch on in a big way. Just because banks prefer to save money with ATMs doesn’t mean customers will prefer them over live tellers. But put them in places where one can’t put a bank – like convenience stores and malls – and suddenly they benefit both parties. That’s a recipe for widespread proliferation.

    This is the problem biometrics has suffered for decades: end users usually get little to no benefit, yet they incur a significant perceived (and possibly actual) risk by using the system and giving up a digital copy of highly personal, identifying information.

    Give a criminal a record of your fingerprint, and he will find a way to submit that record to the system without actually needing your finger. Advances both in technology, as well as in security practices, have made that scenario less likely. Connections between sensors and computers can be made virtually hack-proof using secure communications techniques that are resistant to spoofing and man-in-the-middle attacks. Data is encrypted from end to end. Much more thought is put into security of new technology now than a decade ago, thanks to the high-profile breaches we have all become so accustomed to hearing about these days.
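
    One of the simplest building blocks behind such spoofing-resistant links is a challenge-response exchange over a shared secret. The sketch below is my own minimal illustration, not any vendor’s actual protocol - the key, message format, and function names are all hypothetical. The server issues a fresh random nonce per request, so a captured signature cannot be replayed later:

    ```python
    # Minimal sketch of a replay-resistant sensor-to-server exchange using a
    # pre-shared key and a per-request challenge (nonce). A real deployment
    # would add TLS and hardware key storage; this only illustrates the idea.
    import hmac, hashlib, secrets

    SHARED_KEY = b"pre-shared-sensor-key"  # hypothetical; provisioned at install time

    def server_issue_challenge() -> bytes:
        return secrets.token_bytes(16)  # fresh random nonce defeats replay attacks

    def sensor_sign_reading(challenge: bytes, reading: bytes) -> bytes:
        # The sensor binds its reading to this specific challenge.
        return hmac.new(SHARED_KEY, challenge + reading, hashlib.sha256).digest()

    def server_verify(challenge: bytes, reading: bytes, tag: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, challenge + reading, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)  # constant-time comparison

    challenge = server_issue_challenge()
    reading = b"palm-vein-template"
    tag = sensor_sign_reading(challenge, reading)
    assert server_verify(challenge, reading, tag)          # genuine reading accepted
    assert not server_verify(server_issue_challenge(), reading, tag)  # replayed tag rejected
    ```

    Because every session uses a new nonce, simply recording and resubmitting an old fingerprint or palm-print transmission no longer works - the attacker would need the key itself.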

    But even if biometric systems have become relatively secure, up to this point they have been generally unreliable. For many decades, the primary pursuits of biometric researchers have been the two ends of the spectrum, from the seemingly simplest and most readily available – the fingerprint – to the figurative holy grail – the retina scan. Both have suffered from major, insurmountable problems.

    Fingerprints, despite the reputation they’ve gained from television crime dramas, are simply not that unique. The systems used to measure them digitally are now cheap and widely available, but they produce large error rates when matching against big databases of samples, because they are inherently imprecise, sampling just a few points on the print. Making them more precise drives the cost up quickly for only a limited improvement in match rates. Fingerprints just make bad identifiers. The machines also usually require you to physically touch the sensor, which adds wear and makes them prone to breakage.
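
    It’s worth seeing why database size is the killer. Even a tiny per-comparison false-match rate compounds when a sample is checked against every record. The figure of 1 in 100,000 below is a hypothetical illustration, not a measured property of any particular sensor:

    ```python
    # Illustrative only: how a small per-comparison false-match rate (FMR)
    # compounds when one sample is searched against a large database.
    # The FMR of 1e-5 is a hypothetical figure for demonstration.

    def prob_any_false_match(fmr: float, db_size: int) -> float:
        """Chance of at least one false match in db_size independent comparisons."""
        return 1 - (1 - fmr) ** db_size

    for n in (1_000, 100_000, 1_000_000):
        print(f"{n:>9} records -> {prob_any_false_match(1e-5, n):.1%} chance of a false match")
    ```

    At a thousand records the odds of a false hit are around one percent; at a hundred thousand records they climb past sixty percent. That is why a sensor that feels reliable at an office door fails badly as a national-scale identifier.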

    Retinal scans – the kind we’ve all seen in the movies – are much more precise. They read the pattern of tissue in the eye, which is extraordinarily distinctive and doesn’t suffer from the precision problems fingerprints have long been known to have. Nor does the user touch the actual sensor, since the scan uses light to take its measurements; that means less wear and tear and more reliability. But it also requires the user to keep the eyeball relatively still (a very hard thing for some people to do), and usually to put one’s head into a machine that makes it easier for the hardware to see and recognize the eye.

    In one word: uncomfortable.

    Researchers have posited that technology could be developed to make retinal scans work from across the room – in a matter of milliseconds – just by having a user look at a focal point (like a camera lens at the DMV). However, the reality has proven more complicated than the theory (as it usually does), and no such system has ever been demonstrated to work in the field, at low cost, and at production scale.

    That’s where the local hospital comes in – if you live in New York City, that is. The NYU Langone Medical Center has recently launched a new biometrics-based registration system for patients. At check-in to the medical center, one is asked to present a hand for a quick palm scan.

    The system they’ve employed, provided by HT Systems, uses infrared light to read the pattern of veins in a palm, rendering the branching blood vessels within the hand as an image.

    It might not look like much, but that image of the blood-flow pattern has proven about 100 times more reliable than fingerprints, according to testing done by NYU Langone before choosing the system. It is less intimidating than a retinal scan, much cheaper to implement, and altogether more practical.

    The vein pattern is matched up against the medical records database, and if a patient already has a record he is checked in immediately, with no forms to fill out. The electronic health record (which launched in tandem with the biometric check-in system) is then accessible to the doctors and nurses who need it throughout the hospital system.  

    The impetus behind the system, from NYU’s perspective, was to avoid costly mistakes when registering patients. Their database has over 125,000 records with matching names. With paper forms, mistakes happen: the wrong patient is checked in, and the wrong medical records are presented to the doctors and nurses. In the best of scenarios, this causes confusion and delay. In the worst case, serious injury or death can result when the wrong medication is given or a similar mix-up occurs. Such errors expose the hospital to serious liability, so its motivation for a system that can reduce human error, save time, and limit liability is obvious.

    But what about patients? Have they reacted well to the system, which has been in use for just over a week now?

    Despite what one may at first think, patients have apparently taken to the system with very little pushback. The reason most cited is that elderly patients especially find it much easier than trying to read and fill out forms every time they arrive. Instead, one just presents a palm and is ready to go.

    A handful of patients each day have refused to use the system, but according to NYU representatives the primary concern is not privacy but “radiation.” The system employs infrared light to do its scanning, not x-rays or other dangerous forms of radiation, so at least those concerns are unwarranted.

    In press releases and interviews online, representatives from NYU Langone insist the system is secure. And one can imagine few reasons an attacker would want to access or manipulate the data, beyond deliberately feeding incorrect information into the system. The palm print alone is not sufficient to access any medical records – that still requires a secure login by hospital staff. For now, it is only a record locator.

    However, if the same technology is eventually employed by banks or credit card companies – possibly as a better alternative to ATM PIN codes – suddenly the data output by these systems will be much more valuable. We can only hope that hospitals, banks, and the companies who help them implement these systems use best practices for security and stick to multifactor authentication (e.g., something you have, and something you know) and secure communications. Even then – as recent incidents with hacked credit card terminals at Aldi, Michaels, and other national chains have proven – every complex system is only as secure as its weakest point. In this case, we may be reliant on hospitals to secure our data – something they have proven to not do well so far, with hospitals around the country guilty of losing patient records from clinical trials, epidemiology research studies, and various other programs.

    The palm-scan technology has almost all the earmarks of a potentially mainstream technology. The systems are cheap to produce, reliable, and accurate. They save their buyers money by reducing complex mistakes or fraud. And hospital patients are – so far – seeing a direct benefit from the use of the system. One question now remains: If the biometric onslaught is finally about to begin, how will it affect our security? Only time will tell.

    For now, we welcome the convenience of the new technology at the doctor’s office, but will remain skeptical of using it beyond that. We certainly don’t intend to give it up at the local grocery store to pay for the junk food that is going to send us to the hospital with a heart attack soon enough...

    [Did you know that technology has become the single largest sector of the American economy? Even so, investing wisely in tech stocks isn’t easy. That’s why Casey Research hired a high-tech mercenary (Alex Daley) to run their technology letter - Casey Extraordinary Technology. Read on to learn more about Alex’s incredible background in high-tech and how you can get a risk-free, three-month trial subscription to his technology investment newsletter, Casey Extraordinary Technology.]

    Jul 06 12:24 PM