Alex Daley's Instablog

Alex Daley is the senior editor of Casey's Extraordinary Technology. In his varied career, he's worked as a senior research executive, a software developer, project manager, senior IT executive, and technology marketer.

My company: Casey Research
  • Breaking Down A Biotech Winner

    Traditional cancer treatment options are little more than a crude mix of "slash, burn, and poison" - that is surgery, radiation, and chemotherapy. There are radical new treatments in labs and trials all over the world that promise to throw out this trifecta; no other disease has received more of the research interest and funding that have defined modern biotechnology over the past three decades.

    I'm not going to tell you about any of those here. Sure, many of them will be wildly successful and make many investors fabulously wealthy over the next few decades. But most will fail. And those that don't will take a long time to turn a profit for investors.

    Yet, there is one small company whose unique twist on cancer treatment is proving to be a major upgrade. We profiled this company in a recent edition of Casey Extraordinary Technology, and it turned in a gain of over 167% for subscribers in just six months' time. It may yet make investors billions more.

    You see, in recent years chemotherapy has become the core treatment for most cancerous malignancies. And while these toxic cocktails of chemicals have proven effective at destroying cancerous cells, they also have one problem. A big one.

    Chemo, being essentially a poison, doesn't just attack cancerous cells - it attacks a broad range of healthy cells too. As a result, the treatment can sometimes be as harmful as the cancer itself in the short run. The side effects are awful, and its use can quickly erode patients' health. Some have even described chemo as a "cure that's worse than the disease."

    This sad state of affairs for the world's second most-prevalent chronic disease is why the cancer-research arena has been exploding over the past few years with the goal of developing more targeted, less-toxic therapies - in other words, to do a better job killing cancer cells while leaving healthy cells alone.

    That's exactly what Lawrenceville, New Jersey-based Celsion Corp. (NASDAQ:CLSN) has the technology to do. And chances are the company is on to one of the biggest cancer-treatment breakthroughs in decades.

    How It Works

    Our story starts with liposomes. These nanosized artificial vesicles are made from the same materials as our cell membranes: natural phospholipids (a version of the chemicals that make up everything from fat to earwax) and cholesterol.

    Not long after their discovery in the 1960s, scientists began experimenting with liposomes as a means of encapsulating drugs, especially cancer drugs. Why? Something called the "enhanced permeability and retention" (EPR) effect. This is a property of certain sizes of molecules - for example, liposomes, nanoparticles, and macromolecular drugs - which tend to accumulate in tumor tissue much more than they do in normal tissues. It's a useful feature for a cancer drug.

    Thus, they offer a potential way to combat the two biggest drawbacks of traditional chemotherapeutics: systemic toxicity and low bioavailability at the tumor site. In other words, the drugs now employed are themselves toxic to normal cells, and they tend to get largely used up before they even reach the tumor site.

    Early attempts to encapsulate drugs inside liposomes did an okay job of dealing with the toxicity issue, but bioavailability at the tumor site was still limited. Our immune system saw to that. Just like virtually anything else artificial we put into our bodies, traditional liposomes were seen as invaders. Thus, they were rapidly cleared by the mononuclear phagocyte system, the part of the immune system centered around the spleen (yes, we do use it) that destroys viruses, fungi, and other foreign invaders.

    However, a breakthrough arrived when scientists came up with a new way to sneak these artificial compounds into the body undetected by our defenses. The process gave us what are called "PEGylated" liposomes, made by covalently attaching polyethylene glycol polymer chains. The effect of attaching these little plastic chains to the end of the liposome was to create a "stealth" liposome-encapsulated drug that was hardly noticed by the immune system.

    Problem solved, right? Well, not exactly. A lot of hard work went into getting drugs into liposomes to reduce toxicity, then a bunch more into stopping our immune system from kicking in. But there was still yet another problem. The drug-release rates of these stealth liposomes were generally so low that tumor cells barely got a dose. Scientists had made them so stealthy that they even skated right by cancer cells, usually failing to kill off the tumors.

    After decades of experimenting with liposome-encapsulated cancer drugs, scientists still had not been able to safely deliver therapeutic concentrations of the chemotherapy drugs to all tumor cells.

    They had to devise a way to induce drug release when and where it would be more effective.

    The next big idea came in more recent years, as scientists devised temperature-sensitive liposomes. Heat them and they pop, releasing the drugs just when you need them to. From stealth to non-stealth in a matter of seconds, and right on target.

    Scientists were able to make these work, but unfortunately only at temperatures that essentially cooked patients from the inside - sort of defeating the purpose of keeping the chemo at bay to reduce collateral damage. The liposomes failed to perform at tolerable levels of heat or time: after fifteen minutes of baking, still only 40% or so of the drug was released, and it took temperatures up to 112° Fahrenheit to get there. That might not sound like much, but it was enough to be intensely painful and damaging as well.

    That's where Celsion came in. It designed and developed a novel form of these temperature-sensitive chemo sacks - the first of their kind to work effectively and safely - otherwise known as a lysolipid thermally sensitive liposome (LTSL).

    Celsion's liposomes are engineered to release their contents between 39-42° C, or 102.2-107.6° F (thus, another translation of LTSL has become "low-temperature sensitive liposome"). And they release the contents at an extremely fast rate, to boot.

    A Better Way to Use Chemo

    These unique properties of Celsion's LTSL technology make it vastly superior to previous liposome technology for a number of reasons.

    • For starters, the temperature range is much more tolerable to patients and won't injure normal tissue.
    • Second, the temperature range takes advantage of the natural effect mild hyperthermia has on tumor vasculature. Numerous studies have shown that temperatures between 39-43° C increase blood flow and vascular permeability (or leakiness) of a tumor, which is ideal for drug delivery since the cancer-killing chemicals have easy access to all areas of the tumor. These effects are not seen at temperatures below this threshold, and temperatures above it tend to result in hemorrhage, which may reduce or halt blood flow, hampering drug delivery. It's the Goldilocks Effect: The in-between range is perfect.
    • Third, Celsion's LTSL technology promotes an accelerated release of the drug when and where it will be most effective. That allows for direct targeting of organ-specific tumors.

    Celsion's LTSL technology has shown that it's capable of delivering drugs to the tumor site at concentrations up to 30 times greater than those achievable with chemotherapeutics alone, and three to five times greater than those of more traditional liposome-encapsulated drug-delivery systems.

    The company's first drug under development is ThermoDox, which uses its breakthrough LTSL technology to encapsulate doxorubicin, a widely used chemotherapeutic agent that is already approved to treat a wide range of cancers.

    Currently, ThermoDox is undergoing a pivotal Phase III global clinical trial - denoted the "HEAT study" - for the treatment of primary liver cancer (hepatocellular carcinoma, or HCC), in combination with radiofrequency ablation (RFA).

    RFA uses high-frequency radio waves to generate a high temperature that is applied with a probe placed directly in the tumor, which by itself kills tumor cells in the immediate vicinity of the probe. Cells on the outer margins of larger tumors may survive, however, because temperatures in the surrounding area are not high enough to destroy them. But the temperatures are high enough to activate Celsion's LTSL technology. Thus, the heat from the radio-frequency device thermally activates the liposomes in ThermoDox in and around the periphery of the tumor, releasing the encapsulated doxorubicin to kill remaining viable cancer cells throughout the region, all the way to the tumor margin.

    ThermoDox is also undergoing a Phase I/II clinical trial for the treatment of recurrent chest wall (RCW) breast cancer (known as the "DIGNITY study"), and a Phase II clinical trial for the treatment of colorectal liver metastases (the "ABLATE study"). But most of the drug's (and hence the company's) value is tied up in the HEAT study.

    The HEAT trial is a pivotal 700-patient global Phase III study being conducted at 79 clinical sites under a special protocol assessment (SPA) agreement with the FDA. The FDA has designated the HEAT study as a fast-track development program, which provides for expedited regulatory review; and it has granted orphan-drug status to ThermoDox for the treatment of HCC, providing seven years of market exclusivity following FDA approval. Furthermore, other major regulatory agencies, including the European Medicines Agency (EMA) and China's equivalent, have all agreed to use the results of the HEAT study as an acceptable basis to approve ThermoDox.

    The primary endpoint for the HEAT study is progression-free survival - living longer with no cancer growth. There's a secondary confirmatory endpoint of overall survival, too. Both the oncology and investing communities are eagerly awaiting the results, which are due any day now.

    So then, why are we on the sidelines now, right when the big news is due to hit? That all goes back to why Celsion was such a good investment to begin with, and what it can tell us about finding other big wins in the technology stock market.

    A Winner in the Making

    When we're looking for a strong pick in the biotechnology, pharmaceuticals, and medical devices fields - once we have established the quality of the technology itself and ensured it will likely work as expected - there is a simple set of tests we apply to ensure that we've found a stock that can deliver significant, near-term upside. The most critical of these are:

    • The technology must provide a distinct competitive advantage over the current standard of care and be superior to any competitors' efforts that will come to market before or shortly after our subject's does. In other words, it must improve outcomes by improving patients' length or quality of life (i.e., a cure for a disease, or a maintenance medication with fewer side effects), or lower costs while maintaining quality of care (i.e., a generic drug). A therapy that does both is all the better.
    • The market must be measurable and addressable. There must be some way to say specifically how many patients would benefit from a therapy, and to ensure that those patients have providers caring for them that would make efficient distribution of the therapy possible. For instance, a successful treatment for Parkinson's disease might be applicable to hundreds of thousands of patients, with little competition from other treatments, whereas a treatment for Von Hippel-Lindau (VHL) might only reach hundreds. If the goal is to recover years of research investment and profit above and beyond that, then market size matters, as do current and future competitors that might limit your reach within a treatment area.
    • Payers should be easily convinced to cover the new therapy at profitable rates. In the modern world of health care, failure of a treatment to garner coverage from government medical programs like Medicare and the UK's National Health Service, and from private insurance companies (which generally cooperate closely to decide how to classify and whether to cover a treatment), is usually a game-ender. Payers have a responsibility not just to patients but to their shareholders or taxpayers to stay financially solvent. This means that if a therapy does not provide a compelling cost/benefit ratio, then it won't be covered. For instance, if you release a new painkiller that is only as effective as Tylenol and costs $1,000 per dose, you're obviously not going to see support.
    • There must be a clear path to market in the short term, or another catalyst to propel the stock upward. An investment in a great technology does not always make for a great investment. You have to consider the quality of the management team and the structure of the company, including its ability to pay the bills and get to market without defaulting or diluting you out of your position. And of course, time. The biggest and most frequent mistake investors make in technology is assuming that it is smooth and short sailing from concept to market. Reality is much harsher than that, and in biotechnology and pharmaceuticals in particular - with a tough regulatory gamut to run - the timeline to take a new technology to market can be anywhere from a decade to thirty, forty, or even fifty years.

    Liposomes are a perfect example of that. Twenty years ago, I probably could have told you a story about a technology that was very similar to what was laid out above. It would be compelling and enticing to investors of all stripes - a breakthrough technology with the promise to revolutionize cancer care by making chemo less toxic and more effective at the same time. Yet had you invested in that promise alone, chances are you'd be completely wiped out by now, or maybe - just maybe - still waiting for a return.

    That is why we invest in proof, not promises. So, how does Celsion stack up against our four main proof points?

    Time to market: When we first recommended Celsion, it was in Phase III pivotal trials. This is the last major stage of human testing usually required before a company can submit an FDA New Drug Application and apply to market the product.

    The process of bringing a drug to market, even once a specific compound has been identified and proven to work in vitro (in the lab), is perilous. Many things can go wrong along the way. If you look at investing in a company whose drugs are just entering Phase I clinical trials, for instance, it is still unclear whether the therapy is effective in vivo (in the human body). This is a critical stumbling block for many companies, whose promising compounds immediately prove less effective or more dangerous than testing suggested. Even if Phase I goes well, it can take up to a decade and sometimes longer to get from there to market with a drug. And then, even Phase II trials often leave treatments five or more years from market - though there are exceptions in cases where a therapy proves very effective or a disease has few other treatment options. But shortcuts are rare, and investors have to consider the time and expense (which leads to fundraising and ultimately dilutes your return) of getting from A to Z.

    In this regard, Celsion made a uniquely great investment. When we first recommended the company, it was in the midst of a pivotal Phase III trial and looked to be about a year or so away from its first commercialization. (Though, speaking to the length of these trials, this one had been started back in 2008.)

    With many of the most high-profile companies in the industry - those working on vogue treatment areas and conditions, like hepatitis C treatments of late - the large banks bid up stocks to high levels when they get this close to market, content to squeeze just a few percentage points out at the end. They have to be conservative, since they're investing large amounts of other people's money. However, biotechnology is such a fragmented space - with far more companies than Wall Street can possibly cover in depth - that coming across a gem like Celsion late in the game with a potentially big win is not as uncommon as you'd think. The "efficient market" hypothesis fails to account for the fact that no one can know everything, including every stock. And Celsion had gone all but unnoticed for some time.

    Payer acceptability: Celsion has the benefit of developing a 2.0-style product, an improvement over something that already exists. RFA is already in relatively widespread use and has proven effective enough that most every insurance and benefits provider will cover it. Even the early generations of LTSL, while not quite as safe or effective as desired, were enough of a benefit to gather pretty solid support from payers.

    Celsion, through its clinical trial process, has proven its unique blend is safer, better tolerated by patients, and much more effective than its predecessors. Thus, payer support at a reasonable price is a pretty sure bet.

    Market size: When we originally recommended Celsion, we stated that the company was sitting on a multibillion-dollar opportunity. And we stand by that statement. However, just because something is eventually worth that amount does not mean it's bankable today as a short-term investment. So we try to keep our analysis narrowly focused on what can be directly counted on and measured. In Celsion's case, that's the Phase III treatment, ThermoDox, and the one area in which it is being studied: primary liver cancer (HCC). Even just in this narrow band, however, we see the market opportunity for Celsion as in excess of $1 billion.

    HCC is one of the most deadly forms of cancer. It currently ranks as the fifth most-common solid tumor cancer, and it's quickly moving up. With the fastest rate of growth among all cancer types, HCC projects to be the most prevalent form of cancer by 2020. The incidence of primary liver cancer is nearly 30,000 cases per year in the US, and approximately 40,000 cases per year in Europe. But the situation worldwide is far worse, with approximately 750,000 new cases of HCC per year, due to the high prevalence of hepatitis B and C in developing countries.

    If caught early, the standard first-line treatment for primary liver cancer is surgical resection of the tumor. Early-stage liver cancer generally has few symptoms, however, so when the disease is finally detected, the tumor is usually too large for surgery. Thus, at least 80% of patients are ineligible for surgery or transplantation by the time they are diagnosed. And there are few nonsurgical therapeutic treatment options available, as radiation and chemotherapy are largely ineffective.

    RFA has emerged as the standard of care for non-resectable liver tumors, but it has limitations. The treatment becomes less effective for larger tumors, as local recurrence rates after RFA directly correlate to the size of the tumor. (As noted earlier, RFA often fails at the margins.) ThermoDox promises the ability to reduce the recurrence rate in HCC patients when used in combination with RFA. If it proves itself in Phase III, there's no doubt the drug will be broadly adopted throughout the world once it is approved.

    A quick look at the numbers: According to the most recent data from the National Cancer Institute, the incidence rates of HCC per 100,000 people in the three major markets are 4 in the US, 5 in Europe, and approximately 27 in China. Based on these incidence rates, the total addressable market in these three regions (which we will conservatively assume to be the total addressable worldwide population for the time being) is approximately 400,000 patients per year (12,000 in the US, 40,000 in Europe, and 351,000 in China).

    Assuming that 50% of HCC patients are eligible for nonsurgical invasive therapy such as RFA, approximately 200,000 patients worldwide would be eligible for ThermoDox. Further assuming an annual cost of treatment for ThermoDox of $20,000 in the US, $15,000 in Europe, and $5,000 in China, in line with similar treatments of the same variety, we estimate that the market potential of ThermoDox could be up to $1.3 billion. Not to mention the countless thousands of lives saved. (And that's before the rest of the developing world comes online.)
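
    For readers who want to check the arithmetic, here is a minimal sketch of the market-size math above, written in Python. The regional population figures are our own assumptions, chosen to be consistent with the patient counts stated earlier; the incidence rates, eligibility share, and treatment costs come straight from the text.

        # Back-of-the-envelope sketch of the ThermoDox market-size estimate.
        # Populations are assumptions consistent with the patient counts above.
        regions = {
            #          population, cases/100k, eligible share, annual cost ($)
            "US":     (300e6,  4, 0.5, 20_000),
            "Europe": (800e6,  5, 0.5, 15_000),
            "China":  (1.3e9, 27, 0.5,  5_000),
        }

        total_cases = total_eligible = total_revenue = 0
        for name, (pop, per_100k, eligible_share, cost) in regions.items():
            cases = pop * per_100k / 100_000   # new HCC cases per year
            eligible = cases * eligible_share  # candidates for RFA + ThermoDox
            total_cases += cases
            total_eligible += eligible
            total_revenue += eligible * cost
            print(f"{name}: {cases:,.0f} cases/yr, {eligible:,.0f} eligible")

        # Prints roughly 403,000 cases, 201,500 eligible, and $1.30B in potential
        # annual sales - the ~400,000 / ~200,000 / ~$1.3 billion cited above.
        print(f"Total: {total_cases:,.0f} cases, {total_eligible:,.0f} eligible, "
              f"${total_revenue/1e9:.2f}B")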

    Of course, this is an estimate of ThermoDox's potential assuming 100% market penetration - something that simply never happens. While we expect ThermoDox in combination with RFA to become the standard of care for primary liver cancer, a more reasonable expectation for maximum market penetration after a six-year ramp-up to peak sales (from an expected approval in 2013) is probably 40%.

    Improving outcomes or lowering costs: This is exactly what the Phase III trial was intended to prove: efficacy beyond a shadow of a doubt. Given preliminary data and earlier trial results, it was already a pretty sure thing, so in our model, we assumed about a 70% chance of success (to be on the conservative side, as always - it's better to be right by a mile than to miss by an inch).

    Once we incorporate that probability of success into our model, we come to a probability-weighted peak sales figure in 2019 of approximately $365,000,000 annually.

    The average price-to-sales ratio among the big players in biotech these days is about 5. If we apply a sales multiple of 3 (i.e., just 60% of the average) to Celsion's probability-weighted peak sales for ThermoDox in 2019, we come up with a value for the company of nearly $1.1 billion, which would equate to about $33 per share if it did not issue any new stock between now and then - that's more than 17 times where the stock was trading when we recommended a buy.
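
    To make the whole modeling chain explicit, here is a small sketch stringing together the assumptions above: the $1.3-billion market potential, the 40% peak penetration, the 70% odds of trial success, and the 3x sales multiple. The share count is our own assumption, implied by the roughly $33-per-share figure rather than stated in the text.

        # Sketch of the probability-weighted valuation chain described above.
        market_potential = 1.3e9   # max annual ThermoDox sales at 100% penetration
        peak_penetration = 0.40    # assumed peak market share after the ramp-up
        p_success        = 0.70    # modeled odds the Phase III trial succeeds

        weighted_peak_sales = market_potential * peak_penetration * p_success
        print(f"Probability-weighted 2019 peak sales: "
              f"${weighted_peak_sales/1e6:.0f}M")               # ~$364M

        sales_multiple = 3         # 60% of the ~5x big-biotech average
        company_value = weighted_peak_sales * sales_multiple
        print(f"Implied company value: ${company_value/1e9:.2f}B")  # ~$1.09B

        shares_outstanding = 33e6  # assumption implied by the ~$33/share target
        print(f"Implied share price: "
              f"${company_value / shares_outstanding:.0f}")     # ~$33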

    And remember, these numbers are only for ThermoDox under the HCC indication.

    Our Move to the Sidelines

    With final data from the current Phase III pivotal trial expected to come in within the next few weeks, Celsion's stock has ballooned in value from the $2 range to $7.50 or so. Now, that's a far cry from the $33 price we mentioned above, but remember, that's a target for 2019. And it doesn't allow for a whole range of things that could go wrong.

    Chief among those concerns is that the Phase III data come in more poorly than expected. Even just a small variance in efficacy or a simple question about safety can knock a few hundred million dollars off those sales figures. Or it can push trials back a year or two, delaying returns and sending short-term-minded investors, like those who have recently bid up CLSN shares, retreating to the hills for the time being.

    Further downfield there is sure to be competition as well, and of course we may get those miraculous chemo-free treatments mentioned up front.

    In short, we don't have a crystal ball and can't tell you what the world will look like in 2019. If you believe yours is clear, ask yourself if you thought touchscreen phones and tablets would outsell traditional computers by 3 to 1 globally in 2012. If not, you might want to give the crystal a polish.

    To be clear, the value of Celsion in the near term hinges on a binary event - the results of the ongoing HEAT trial. We are of the opinion that CLSN represents one of the best opportunities we've come across since we started this letter, and that the probability of a successful trial is high. Nevertheless, there is substantial downside if the trial is unsuccessful. And it could take years to recover, if ever, should any concerns raised result in a delay.

    We'd already advised subscribers to take a free ride early on in our coverage of the stock, taking all of the original investment risk off the table. However, even with that protection, the short-term potential is still more heavily weighted to the downside. Thus, we booked our profits and stepped to the sidelines on this one.

    Celsion continues to be a model, even at today's prices, for a great biotech investment with significant upside potential. But we're content to wait for the market to hand us another, similar opportunity.

    The pages of Casey Extraordinary Technology are filled with investments just like Celsion - up-and-coming technology companies the market has yet to discover. With 2012 coming to a close, the service's track record for the year is a remarkable 9 winners out of 9 closed positions, with an average gain of 61%. Get in on it now: subscribe today and save 25% off the regular price - as always, backed by our unconditional money-back guarantee.

    Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

    Dec 11, 2:47 PM
  • How Dangerous Is Genetically Modified Food?

    Last month, a group of Australian scientists published a warning to the citizens of the country and of the world who collectively gobble up some $34 billion annually of its agricultural exports. The warning concerned the safety of a new type of wheat.

    As Australia's number-one export, a $6-billion annual industry, and the most-consumed grain locally, wheat is of the utmost importance to the country. A serious safety risk from wheat - a mad wheat disease of sorts - would have disastrous effects for the country and for its customers.

    Which is why the alarm bells are being rung over a new variety of wheat being ushered toward production by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia. In a sense, the crop is little different than the wide variety of modern genetically modified foods. A sequence of the plant's genes has been turned off to change the wheat's natural behavior a bit, to make it more commercially viable (hardier, higher yielding, slower decaying, etc.).

    Franken-Wheat?

    What's really different this time - and what has Professor Jack Heinemann of the University of Canterbury, NZ, and Associate Professor Judy Carman, a biochemist at Flinders University in Australia, holding press conferences to garner attention to the subject - is the technique employed to effectuate the genetic change. It doesn't modify the genes of the wheat plants in question; instead, a specialized gene blocker interferes with the natural action of the genes.

    The process at issue, dubbed RNA interference, or RNAi for short, has been a hotbed of research activity ever since the Nobel Prize-winning 1998 research paper that described the process. It is one of a number of so-called "antisense" technologies that help suppress natural genetic expression and provide a mechanism for suppressing undesirable genetic behaviors.

    RNAi's appeal is simple: it can potentially provide a temporary, reversible off switch for genes. Unlike most other genetic modification techniques, it doesn't require making permanent changes to the underlying genome of the target. Instead, specialized siRNAs - short RNA molecules that act as chemical gene-message blockers, based on the same mechanism our own bodies use to temporarily turn genes on and off as needed - are delivered into the target organism and block the messages cells use to express a particular gene. When those messages meet their chemical opposites, they turn inert. And when all of the siRNA is used up, the effect wears off.

    The new wheat is in early-stage field trials (i.e., it's been planted to grow somewhere, but has not yet been tested for human consumption), part of a multi-year process on its way to potential approval, not unlike the rigorous process many drugs go through. The researchers responsible are using RNAi to turn down the grain's starch production. They are targeting the wheat branching enzyme which, if suppressed, would result in a much lower starch level for the wheat.

    The result would be a grain with a lower glycemic index - i.e., healthier wheat.

    This is a noble goal. However, Professors Heinemann and Carman warn, there's a risk that the gene silencing done to these plants might make its way into humans and wreak havoc on our bodies. In their press conference and subsequent papers, they describe the possibility that the siRNA molecules - which are pretty hardy little chemicals and not easily gotten rid of - could wind up interacting with our RNA.

    If their theories prove true, the results might be as bad as mimicking glycogen storage disease IV, a super-rare genetic disorder which almost always leads to early childhood death.

    "Franken-Wheat Causes Massive Deaths from Liver Failure!"

    Now that is potentially headline-grabbing stuff. Unfortunately, much of it is mere speculation at this point, albeit rooted in scientific expertise on the subject.

    What they've produced is a series of opinion papers - not scientific research or empirical data proving that what they suspect might happen actually does. They point to the possibilities that could unfold if a number of criteria are met:

    • If the siRNAs remain in the wheat in transferrable form, in large quantities, when the grain makes it to your plate. And…
    • If the siRNA molecules interfere with the somewhat different but largely similar human branching enzyme as well.

    Then the result might be symptoms similar to such a condition, on some scale or another, anywhere from completely unnoticeable to highly impactful.

    They further postulate that if the same effect is seen in animals, it could result in devastating ecological impact. Dead bugs and dead wild animals.

    Luckily for us, as potential consumers of these foods, all of these are easily testable theories. And this is precisely the type of data the lengthy approval process is meant to look at.

    Opinion papers like this - while not to be confused with conclusions resulting from solid research - are a critically important part of the scientific process, challenging researchers to provide hard data on areas that other experts suspect could be overlooked. Professors Carman and Heinemann provide a very important public good in challenging the strength of the due-diligence process for RNAi's use in agriculture, an incomplete subject we continue to discover more about every day.

    However, we'll have to wait until the data come back on this particular experiment - among thousands of similar ones being conducted at government labs, universities, and in the research facilities of commercial agribusinesses like Monsanto and Cargill - to know if this wheat variety would in fact result in a dietary apocalypse.

    That's a notion many anti-genetically modified organism (GMO) pundits seem to have latched onto following the press conference the professors held. But if the history of modern agriculture can teach us anything, it's that far more aggressive forms of GMO foods appear to have had a huge net positive effect on the global economy and our lives. Not only have they not killed us, in many ways GMO foods have been responsible for the massive increases in public health and quality of life around the world.

    The Roots of the GMO Food Debate

    The debate over genetically modified (GM) food is a heated one. Few contest that we are working in somewhat murky waters when it comes to genetically modified anything, human or plant alike. At issue, really, is the question of whether we are prepared to use the technologies we've discovered.

    In other words, are we the equivalent of a herd of monkeys armed with bazookas, unable to comprehend the sheer destructive power we possess yet perfectly capable of pulling the trigger?

    Or do we simply face the same type of daunting intellectual challenge as those who discovered fire, electricity, or even penicillin, at a time when the tools to fully understand how they worked had not yet been conceived of?

    In all of those cases, we were able to probe, study, and learn the mysteries of these incredible discoveries over time. Sure, there were certainly costly mistakes along the way. But we were also able to make great use of them to advance civilization long before we fully understood how they worked at a scientific level.

    Much is the same in the study and practical use of GM foods.

    While the fundamentals of DNA have been well understood for decades, we are still in the process of uncovering many of the inner workings of what is arguably the single most advanced form of programming humans have ever encountered. It is still very much a rapidly evolving science to this day.

    For example, in the 1990s, an idea known simply as "gene therapy" - really a generalized term for a host of new-at-the-time experimental techniques that share the simple characteristic of permanently modifying the genetic make-up of an organism - was all the rage in medical study. Two decades on, it's hardly ever spoken of. That's because the great majority of attempted disease therapies from genetic modification failed, with many resulting in terrible side effects and even death for the patients who underwent the treatments. Its use in the early days, of course, was limited almost exclusively to some of the world's most debilitating, genetically rooted diseases. Still - whether in their zeal to use a fledgling tool to cure a dreadful malady or in selfish, hurried desire to be recognized among the pioneers of what they thought would be the very future of medicine - doctors chose to move forward at a dangerous pace with gene therapy.

    In one famous case, and somewhat typical of the times, University of Pennsylvania physicians enrolled a sick 18-year-old boy with a liver mutation into a trial for a gene therapy that was known to have resulted in the deaths of some of the monkeys it had just been tested on. The treatment resulted in the young man's death a few days later, and the lengthy investigation that followed resulted in serious accusations of what can only be called "cowboy medicine."

    Not one of science's prouder moments, to be sure. But could GM foods be following the same dangerous path?

    After all, the first GM foods made their way to market during the same time period. The 1980s saw large-scale genetic-science research and experimentation from agricultural companies, producing everything from antibiotic-resistant tobacco to pesticide-hardy corn. After much debate and study, in 1994 the FDA gave approval to the first GM food to be sold in the United States: the ironically named Flavr Savr tomato, with its delayed ripening genes which made it an ideal candidate for sitting for days or weeks on grocery store shelves.

    Ever since, there has been a seeming rush of modified foods into the marketplace.

    Modern GM foods include soybeans, corn, cotton, canola, sugar beets, and a number of squash and greens varieties, as well as products made from them. One of the most prevalent modifications is to make plants glyphosate-resistant, or in common terms, "Roundup Ready." This yields varieties that are able to stand up to much heavier doses of the herbicide Roundup, which is used to keep weeds and other pest plants from damaging large monoculture fields, thereby reducing costs and lowering risks.

    In total it is estimated that modern GM crops have grown to become a $12 billion annual business since their commercialization in 1994, according to the International Service for the Acquisition of Agri-biotech Applications (ISAAA). Over 15 million farms around the world are reported to have grown GM crops, and their popularity increases every year.

    They've brought huge improvements in shelf life, pathogen and other stress resistance, and even added nutritional benefits. For instance, golden rice - which was the first approved crop with an entirely new genetic pathway added artificially - provides beta-carotene to a large population of people around the world who otherwise struggle to find enough in their diets.

    However, the horticulturalists' race to the genetic table in the past few decades - what could accurately be described as the transgenic generation of research - has by no means been our first experiment with the genetic manipulation of food. In fact, if anything, it is a more deliberate, well-studied, and careful advance than those that came before it.

    A VERY Brief History of Genetically Modified Food

    Some proponents of GMO foods are quick to point out that humans have been modifying foods at the genetic level since the dawn of agriculture itself. We crossbreed plants with each other to produce hybrids (can I interest you in a boysenberry?). And of course, we select our crops for breeding from those with the most desirable traits, effectively encouraging genetic mutations that would have otherwise resulted in natural failure, if not helped along by human hands. Corn as we know it, for example, would never have survived in nature without our help in breeding it.

    Using that as a justification for genetic meddling, however, is like saying we know that NASCAR drivers don't need seatbelts because kids have been building soapbox racers without them for years. Nature, had the mix not been near ideal to begin with, would have prevented such crossbreeding. Despite Hollywood's desires, one can't simply crossbreed a human and a fly, or even a bee and a mosquito, for that matter - their genetics are too different to naturally mix. And even if it did somehow occur, if it did not make for a hardier result, then natural selection would have quickly kicked in.

    No, I am talking about real, scientific genetic mucking - the kind we imagined would result in the destruction of the world from giant killer tomatoes or man-eating cockroaches in our B-grade science-fiction films. Radiation mutants.

    Enterprising agrarians have been blasting plants with radiation of all sorts ever since we started messing around with atomic science at the dawn of the 20th century. In the 1920s, just when Einstein and Fermi were getting in their grooves, Dr. Lewis Stadler at the University of Missouri was busy blasting barley seeds with X-rays - research that would usher in a frenzy of mutation breeding to follow.

    With the advent of nuclear technology from the war effort, X-rays expanded into atomic radiation, with the use of gamma rays leading the pack. The United States even actively encouraged the practice for decades, through a program dubbed "Atoms for Peace" that proliferated nuclear technology throughout various parts of the private sector in a hope that it would improve the lives of many. And it did.

    Today, thousands of agricultural varieties we take for granted - including, according to a 2007 New York Times feature on the practice, "rice, wheat, barley, pears, peas, cotton, peppermint, sunflowers, peanuts, grapefruit, sesame, bananas, cassava and sorghum" - are a direct result of mutation breeding. They would not be classified as GM foods, in the sense that we did not use modern transgenic techniques to make them, but they are genetically altered nonetheless, to the same or greater degree than most modern GMO strains.

    Unlike modern GM foods - which are often closely protected by patents and armies of lawyers to ensure the inventing companies reap maximum profits from their use - the overwhelming majority of the original generations of radiation-mutated plant varieties came out of academic and government-sponsored research, and thus were provided free and clear for farmers to use without restriction.

    With the chemical revolution of the mid-20th century, radiation-based mutations were followed by the use of chemical agents like the methyl sulfate family of mutagens. And after that, the crudest forms of organic genetic manipulation came into use, such as the use of transposons, highly repetitive strands of DNA discovered in 1948 that can be used like biological duct tape to cover whole sections of the genome.

    These modified crops stood up better to pests, lessened famines, reduced reliance on pesticides, and most of all enabled farmers to increase their effective yields. Coupled with the development of commercial machinery like tractors and harvesters, the rise of mutagenic breeding resulted in an agricultural revolution of a magnitude few truly appreciate. In the late 1800s, the overwhelming majority of global populations lived in rural areas, and most people spent their lives in agrarian pursuits. From subsistence farmers to small commercial operations, the majority of the population of every country, the US included, was employed in agriculture.

    Today, less than 2% of the American population (legal and illegal combined) works in farming of any kind. Yet we have more than enough food to feed all of our people, and a surplus to export to more densely populated nations like China and India.

    The result is that a sizable percentage of the world's plant crops today - the ones on top of which much of the modern-era GMO experiments are done - are already genetic mutants. Hence the slippery slope that serves as the foundation of the resistance from regulators over the labeling of GM food products. Where do you draw the line on what to label? And frankly, how do you even know for sure, following the Wild-West days of blasting everything that could grow with some form or another of radiation, what plants are truly virgin DNA?

    The world's public is largely unaware that many of the foods they eat today - far more than those targeted by anti-GMO protestors and labeling advocates - are genetically modified. Yet we don't seem to be dying off in large numbers, like the anti-RNAi researchers project will happen. In fact, global lifespans have increased dramatically across the board in the last century.

    The Rise of Careful

    The science of GM food has advanced considerably since the dark ages of the 1920s. Previous versions of mutation breeding were akin to trying to fix a pair of eyeglasses with a sledgehammer - messy and imprecise, with rare positive results. And the outputs of those experiments were often foisted upon a public without any knowledge or understanding of what they were consuming.

    Modern-day GM foods are produced with a much more precise toolset, which means less unintended collateral damage. Of course it also opens up a veritable Pandora's box of new possibilities (glow-in-the-dark corn, anyone?) and with it a whole host of potential new risks. Like any sufficiently powerful technology, such as the radiation and harsh chemicals used in prior generations of mutation breeding, without careful control over its use, the results can be devastating. This fact is only outweighed by the massive improvements over the prior, messier generation of techniques.

    And thus, regulatory regimes from the FDA to CSIRO to the European Food Safety Authority (EFSA) are taking increasing steps to ensure that GM foods are thoroughly tested long before they come to market. In many ways, the tests are far more rigorous than those that prescription drugs undergo, since the target population is not sick and in need of urgent care, and cannot be asked to tolerate side effects. This is why a great many of the proposed GM foods of the last 20 years, including the controversial "suicide seeds" meant to protect the intellectual property of large GM seed producers like Monsanto (which bought out Calgene, the inventor of that Flavr Savr tomato, and is now the 800-lb. gorilla of the GM food business), were never allowed to market.

    Still, with the 15 years from 1996 to 2011 seeing a 96-fold increase in the amount of land dedicated to growing GM crops, and with the incalculable success of the generations of pre-transgenic mutants before them, scientists and corporations are in a mad sprint to find the next billion-dollar GM blockbuster.

    In doing so they are seeking tools that make the discovery of such breakthroughs faster and more reliable. With RNAi, they may just have found one such tool. If it holds true to its laboratory promises, its benefits will be obvious from all sides.

    Unlike previous generations of GMO, RNAi-treated crops do not need to be permanently modified. This means that mutations which outlive their usefulness, like resistance to a plague which is eradicated, do not need to live on forever. This allows companies to be more responsive, and potentially provides a big relief to consumers concerned about the implications of eating foods with permanent genetic modifications.

    The simple science of creating RNAi molecules is also attractive to the people who develop these new agricultural products: once a messenger RNA is identified, there is a precise formula to tell you exactly how to shut it off, potentially saving millions or even billions of dollars that would otherwise be spent in the research lab trying to figure out how to affect a particular genetic process.
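
    To illustrate just how "precise" that formula is, here is a toy sketch in Python: the guide strand of an siRNA is essentially the reverse complement of a short stretch of the target messenger RNA. The target sequence below is made up for illustration, and real siRNA design layers many additional rules on top (strand length, GC content, thermodynamic asymmetry, off-target screening).

        # Toy sketch: derive an siRNA guide strand as the reverse complement
        # of a ~21-nucleotide stretch of a target mRNA. Illustrative only;
        # real design pipelines apply many additional selection rules.
        COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

        def sirna_guide(mrna_stretch: str) -> str:
            """Return the antisense (guide) strand for a target mRNA stretch."""
            return "".join(COMPLEMENT[base] for base in reversed(mrna_stretch.upper()))

        target = "AUGGCUUCAGGAUUCCUGAAA"   # hypothetical target sequence
        print(sirna_guide(target))         # -> UUUCAGGAAUCCUGAAGCCAU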

    And with the temporary nature of the technique, both the farmers and the Monsantos of the world can breathe easily over the huge intellectual-property questions of how to deal with genetically altered seeds. Not to mention the questions of natural spread of strains between farms that might not want GMO crops in their midst. Instead of needing to engineer in complex genetic functions to ensure progeny don't pass down enhancements for free and that black markets in GMO seeds don't flourish, the economic equation becomes as simple as fertilizer: use it or don't.

    While RNAi is not a panacea for GMO scientists - it serves as an off switch, but cannot add new traits nor even turn on dormant ones - the dawn of antisense techniques is likely to mean an even further acceleration of the science of genetic meddling in agriculture. Its tools are more precise even than many of the most recent permanent genetic-modification methods. And the temporary nature of the technique - the ability to apply it selectively as needed versus breeding it directly into plants which may not benefit from the change decades on - is sure to please farmers, and maybe even consumers as well.

    That is, unless the scientists in Australia are proven correct, and the siRNAs used in experiments today make their way into humans and affect the same genetic functions in us as they do in the plants. The science behind their assertions still needs a great deal of testing. Much of their assertion defies the basic understanding of how siRNA molecules are delivered - an incredibly difficult and delicate process that has been the subject of hundreds of millions of dollars of research thus far, and still remains, thanks to our incredible immune systems, a daunting challenge in front of one of the most promising forms of medicine (and now of farming too).

    Still, their perspective is important food for thought... and likely fuel for much more debate to come. After all, even if you must label your products as containing GMO-derived ingredients, does that apply if you just treated an otherwise normal plant with a temporary, consumable, genetic on or off switch? In theory, the plant which ends up on your plate is once again genetically no different than the one which would have been on your plate had no siRNAs been used during its formative stages.

    One thing is sure: the GMO food train left the station nearly a century ago and is now a very big business that will continue to grow and to innovate, using RNAi and other techniques to come.

    Technology is the largest sector of the US economy right now - but that doesn't make selecting the best investments any easier. Not only must a new development get regulatory approval, it has to cross "the chasm"... the dangerous zone between early adopters picking it up and the mainstream accepting it. Learn how to choose the tech most likely to achieve this, and you'll be on your way to windfall gains.

    Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

    Nov 12, 1:20 PM
  • Are Visa's Days Numbered?

    "Your credit card may soon be worthless."

    That's the notion being promoted by many in the investment industry these days. They are referring to a new technology that is supposedly Visa's worst nightmare and a threat to the status quo of the credit-card industry worth billions. And they are positioning one small company as the holder of the secret keys to cash in on what is promised to be a multibillion-dollar shift in the way we pay for everything from a candy bar to an oil change. But is it really true? Will this technology really turn the credit-card industry on its head?

    Doubtful. That's because all the popular analysis on the nascent new technology ignores the fundamental underpinnings of all successful technologies. In fact, it ignores basic economics.

    The supposedly revolutionary technology involves a specialized chip in your cellphone that communicates wirelessly with other devices in very close proximity, in order to pass data between the two. This "Near Field Communication" (NFC) chip could pass the equivalent of digital business cards between devices - especially smartphones, where it makes the most sense to be deployed - by simply holding them up to each other. Your phone or tablet could be used to identify you when entering a secure business or even your home, as well as to connect to a rental car's speakerphone or access the local Starbucks Wi-Fi, just by waving it near a device that activates the link. Pass two devices within a few centimeters of each other, and voilà, data is moved between them.

    Thus, the theory goes, you can simply wave your phone by a payment terminal at the gas station, grocery store, or other location where credit cards are accepted.

    This weekend, I encountered a situation where NFC would have been useful. I was boarding a United Airlines plane, and the passenger in front of me went to swipe her virtual boarding pass across the bar-code reader, only to have it not work. In the time she'd been waiting in line, her screen had timed out. Thus, she was trying to swipe a blank screen over a bar-code reader. Near Field Communication would have alleviated that problem and made it so we were not all held up while the well-meaning woman tried to type in her phone's passcode and bring the browser back up (only to find she had to refresh the page and re-enter her ticket number… leaving us all waiting while the gate agent assisted her).

    So NFC could be a major improvement for the ticketing and access control industry, given that it works wirelessly and can be completely non-interactive when desired.

    But does that make it a logical choice for contactless payment, the promised multibillion-dollar opportunity? Many seem to think so, and not just our fellow stock pickers. Germany, Austria, Finland, and New Zealand are among the nations that have trialed NFC ticketing for public transport, allowing users to swipe their phones or NFC-enabled payment cards to grab a ride and get billed later. It's basically EZ Pass without the car, and like EZ Pass, it works well in some closely defined scenarios.

    But outside of those circumstances, things get messier. Markets dislike messy. So, in order for a new technology to hit the mainstream, it is critically important that one of the players in the value chain today has an overwhelming reason to support the technology. Is that the case here?

    Today, the overwhelming majority of electronic payments - I'm talking 99% plus - are processed by the credit- and debit-card providers of the world. That's Visa - the 800-lb. gorilla of the industry - MasterCard, and American Express, along with lesser known names like Novus, Star, Maestro, and others from the debit/ATM world.

    These companies already have well-established networks of equipment, merchants who accept their cards, and businesses and consumers who use them.

    NFC promises to bring two changes to the industry:

    1. Introduce new players into the ecosystem with an incentive for adoption, in the form of whoever supplies the software operating the NFC device the consumer carries.
    2. Help secure transactions by only submitting the credit-card information over encrypted channels, allowing interactive security controls, and limiting the range of the devices.

    But the consequences of these changes may not be as straightforward as they first seem.

    Is NFC Another PayPal?

    The first of those two changes is most unwelcome for the industry's leaders. However, it's one that they've faced in the past, with PayPal and other direct-payment providers. When PayPal began life, it was meant to facilitate direct consumer-to-consumer transfers of money. I sell a trinket on eBay, you buy it, and PayPal moves the money from your bank account to mine for a small fee. Essentially, that's the same business Visa is in.

    Luckily for Visa, eBay soon experienced a large rise in fraud. The credit-card companies saw a massive opportunity there to put their considerable financial assets and longtime experience dealing with fraudsters to use, and chose to increase the amount of indemnity they provided customers against fraud. They marketed this heavily and drove demand for users to use credit cards instead of PayPal-type accounts online.

    At the same time, PayPal's potential users also frequently griped about not wanting to sign up for an account and provide a bunch of banking info just to buy something. So PayPal caved to pressure from sellers on eBay who were losing business to the requirement and made it easy for buyers to just check out with a credit card. While checking accounts are still the default on its website, most of PayPal's business today is as a plain old credit-card merchant-services provider - behind the scenes and unseen, processing credit and debit cards.

    It also meant that the fees to sellers included both credit-card fees and PayPal fees on most transactions, a double dip that put a serious dent into PayPal's attractiveness to merchants. Other than on sites like eBay (which basically forced sellers to use PayPal), there was little reason for broad adoption.

    This cut seriously into the account growth at PayPal and reduced its threat as an end run around Visa.

    Instead, PayPal's success outside of eBay came from making it simple for someone to become a merchant. Signing up for a PayPal account was faster and easier than getting set up with a traditional merchant account. For Visa and company, that was a win, as they had a firm that would get paid to reach a market filled with merchants too small to be reached individually. PayPal transformed from an end run around the payment providers into the world's most popular payment gateway, funneling the overwhelming majority of that traffic right through to the credit-card providers. Threat averted.

    But providers of NFC-equipped phones, such as Google, are gearing up to provide services similar to PayPal. Google's Wallet software lets a user store multiple cards in a single account and choose the appropriate one to use when checking out. One of the potential options for payment is Google's own Checkout service, which works just like PayPal and does it for less than typical charge-card fees - something merchants could get excited about much the way they did with cheaper debit-card payments. Just swipe your phone and choose the card on your screen (or vice versa), and you are using Google's services to pay and get paid, whether that service is from Visa or directly through Google.

    This sudden entry into the payment chain by big, powerful, connected companies like Google or Apple (which has yet to load NFC into any of its phones, but has long been rumored to be looking at such an option) - with their preexisting relationships with hundreds of millions of global consumers - is a far more pressing threat than PayPal ever was. These companies don't face the "cold start" problem on registrations, have rabidly loyal fan bases, and reach into nearly every home and business in the industrialized world.

    On the other hand, they also don't have the expertise to combat fraud, putting them into a potentially risky situation if they fail to do their job effectively.

    The phone companies - the other potential middlemen who could put NFC in the hands of millions of consumers if they thought they could extract value from it - have seen this movie before:

    • Long-distance "slamming"
    • 900 numbers
    • TXT subscription services

    Many times in the past, telephone network operators have tried to insert themselves into the billing relationships of their clients and other parties, and nearly every time it has ended in disaster. Fraud was rampant. Client complaints skyrocketed, driving up the cost of keeping customers happy and problem-free. Overall, it was a series of giant headaches, and one would assume not only that the carriers learned their lesson, but that Apple and Google can learn it from them as well.

    Payment-card providers have every incentive in the world to keep another middleman - especially one with big influence over consumers - out of the equation. They'll do that with a combination of marketing, lobbying, and threatening behind the scenes, doing everything in their power to stop the technology from taking off unless they control its use and dictate its terms. And those middlemen may just find themselves treading treacherous paths that their business models have not prepared them for.

    Wireless Security

    The other argument for NFC is a technical one. Proponents say NFC will be far more secure than the traditional magnetic-stripe card. They point to the widespread adoption of chip-and-PIN technology in Canada, Europe, and Latin America as evidence both that magnetic-stripe cards face real fraud problems and that better technology can overcome them.

    NFC is pitched as something far more convenient than the traditional credit card, without the security holes associated with previous wireless payment technologies. Thanks to the interaction between the NFC device and its host (a phone, in most cases), layers of security can be added - passwords, PIN codes, and other protections - before the card data is ever sent to the merchant's payment terminal.

    Unlike previous wireless technology, the data stream between the merchant and the consumer is encrypted for extra security.
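    To make those layers concrete, here is a minimal sketch in Python - every class and method name in it is hypothetical, invented for illustration rather than taken from any vendor's actual API - of a wallet that refuses to release card data until a local PIN check passes and an encrypted channel exists:

```python
# Hypothetical sketch of layered NFC payment security -- illustration only.
from dataclasses import dataclass

@dataclass
class StoredCard:
    label: str   # e.g., "AMEX ...6057"
    pan: str     # card number, held in the phone's secure element

class DemoTerminal:
    """Stand-in for a merchant terminal that can negotiate encryption."""
    def open_encrypted_channel(self):
        class Channel:
            def send(self, data):
                print("encrypted payload sent:", "*" * len(data))
        return Channel()

class NFCWallet:
    def __init__(self, cards, pin):
        self._cards, self._pin = cards, pin

    def pay(self, terminal, entered_pin, card_index=0):
        # Layer 1: local user verification; nothing leaves the phone yet.
        if entered_pin != self._pin:
            raise PermissionError("PIN check failed; no data transmitted")
        # Layer 2: negotiate an encrypted channel with the terminal.
        channel = terminal.open_encrypted_channel()
        # Layer 3: only now does the chosen card number go out.
        channel.send(self._cards[card_index].pan)

wallet = NFCWallet([StoredCard("AMEX ...6057", "370000000000002")], pin="4321")
wallet.pay(DemoTerminal(), entered_pin="4321")
```

    The point of the ordering is that a stolen or skimmed transmission is useless without first defeating the checks that sit in front of it.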

    However, all this security becomes moot when a device simply transmits that data to an unsecured machine or network. Over the past few years, the overwhelming majority of credit-card theft has happened at the system level. Hackers have done everything from cloning cards (something NFC theoretically could prevent - more on that below) to hacking readers that broadcast card info wirelessly, breaking into websites, telephone networks, and corporate intranets to steal card numbers, and even hacking the networks of large credit-card-transaction aggregators to steal thousands or millions of cards at a time. NFC does nothing about that last category, a far more pressing and larger-ticket source of fraud than consumer-level security issues are today.

    With consumers educated to protect their cards and indemnified against damages, and lower-level credit-card fraud a well-understood and easily policed threat, Visa and company have little reason to get behind NFC, even if it does offer some minor security benefits.

    This is why only a single bank, Citibank, and a single card company, the distant third-place MasterCard, have signed on to use the technology. And even they are hedging their bets. "NFC may become really important in the future," says Ed Olebe, head of PayPass Wallet services for MasterCard. But "we are waiting to see how the industry works out its issues."

    Enter the Merchants

    Even if credit-card companies don't want to see that middleman, won't their direct customers - the merchants who accept their cards - be willing to jump on the bandwagon and push the card providers to support it?

    Probably not. To add NFC, they would need to upgrade their payment terminals. For small businesses leasing those terminals, the account provider would have to foot the bill for the new technology - and with it, take on more technical-support calls for any issues that crop up from adopting NFC, a far more complex technology stack than a magnetic-card reader, or even a traditional radio-frequency card. That dynamic has always made small retailers slow to adopt new payment technology.

    For larger vendors - like market-leading retailers and restaurants - the ultimate decision point is checkout time. A busy Starbucks or crowded Target will quickly lose money from frustrated customers who walk away from long lines. So they do whatever they can to optimize the checkout process. Checks are discouraged heavily, as they are the slowest. Cash isn't too bad. But cards - especially now that they have lobbied card companies to do away with signatures on small purchases - are king.

    NFC potentially complicates the payment chain - especially if advanced security features are used.

    The point is that the payment providers and most merchants have little real incentive to support this new technology. Their businesses are simpler and secure enough already.

    Nor are mobile phone companies in the United States apt to jump into this space. Visa and company will firmly protect their turf, squeezing margins from any attempt by mobile providers to insert themselves into the existing payment chain. The AT&Ts and Verizons, with their teams of lawyers, accountants, and economists, will recognize the magnitude of the challenge and are likely to take a pass.

    Consumer Demand

    All of this leaves only consumer demand to drive adoption. If there is enough of it, carriers will have no choice but to accept the technology, and payment-services providers will have to support it. Sure, carriers will drag their feet. And payment providers may even fight it, by making it still less convenient than the current system. But if the technology is well designed, widely implemented, and serves a real need for the consumer, it will prevail in the market, no matter the business model behind it.

    Is there any reason why consumers would demand the use of NFC? We are adopting smartphones at an astounding rate, after all. More than 50% of all phones sold in the US are smartphones now, up from low single-digit percentages just five years ago. That's a clear sign that a simple fear of technology is not holding back consumer demand.

    And we sure picked up on other convenient technologies quickly as well... like Wi-Fi.

    The reason, of course, that we adopted these new devices and wireless capabilities is that they directly eliminated a pain point or provided an entirely new convenience.

    Before Wi-Fi, laptops were mostly tethered to a wall. You had to wire your laptop to an Ethernet port to get access, rendering your portable computer a digital ball and chain. No sitting out on the porch to work from home on a sunny day.

    Before the smartphone or tablet, if you wanted to get an email, to see what your friends were up to online, or just to find some news or videos to pass the time, you were stuck flipping open a five-pound laptop. Now it's all in the palm of our hands.

    Consumers are quick to embrace any technology that makes their lives easier. Credit and debit cards do, and they now account for nearly 50% of all consumer transactions per person per month.


    Does NFC similarly provide a benefit or solve a problem?

    Maybe taking a look at its failed predecessor - RFID - will give us some insight into how high the bar to supplant the credit card actually is. These small Radio-Frequency ID chips - which can transmit a signal from an otherwise inert little device (like a plastic card) when passed near enough to a reader - were the last hot new invention that was supposed to end the supremacy of the magnetic-stripe credit card.

    RFID was to free you from the terrible, awful, painful hassle of removing a credit card from your wallet. Heck, they didn't even have to be cards: companies like Mobil created keychain versions and gave them useful-sounding names like "Speedpass."

    However, there was a simple logistical problem: most people carry more than one card. If you wanted to simply swipe your wallet or purse by a machine, or even walk through an EZ Pass-for-people-style gateway, how would it tell which card to use? Interference problems from multiple cards notwithstanding, if a reader could get a clear list of all the cards, then it would still have to prompt a customer for their choice of cards...

    Quick, which of your cards is AMEX ...6057 versus AMEX ...7221?

    Again, you have customers reaching into their wallets to avoid confusion and adding to checkout delays. Visit any Target or Starbucks during prime hours and tell me if you think another 30 seconds per transaction would be no big deal.

    Then pile on the security problems. In theory, anyone with a relatively cheap reader could pass by you on a sidewalk and read your card(s) from the outside. RFID would have all but put pickpockets out of business... at least, the less technically savvy ones.

    In the end, the answer was to make the devices so low-power that they had to be held right up against the reader to work - which still didn't solve the wallet issue, since you still had to get out a specific card.

    Needless to say, after one simple look at the problems, merchants weren't exactly beating down the door to install costly new RFID readers in place of their current equipment. It was no better at speeding customers through checkout during busy times, and it was no more convenient. Customers weren't exactly complaining about the failures of their credit cards, and with all the potential problems coming from RFID, the bandwagon for consumers, for merchants, or for payment providers never really filled up.

    So let's tally up the scorecard for RFID:

    • New equipment for merchants
    • Less secure than the magnetic stripe
    • No faster or more convenient at checkout

    Enter NFC to save the day!

    First, let's tackle those pesky duplicate reads. Few people carry more than one cellphone. Even if they did, chances are they are not going to turn on the NFC features of both, and load them up with payment details. A problem affecting 99% of customers just became one affecting less than 1%.

    Then there is security. Unlike RFID, NFC transmissions can be encrypted. The two devices can talk from at most 20 centimeters apart (a deliberately short range intended to limit accidental cross-signals and make eavesdropping harder). They establish their connection in a few milliseconds, then set up an encrypted channel before exchanging any sensitive information, like your credit-card number - further reducing the likelihood of that data being intercepted.
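    As a rough sketch of that ordering - an assumed flow for illustration, not the NFC specification itself - nothing sensitive moves until the devices are in range and a session key has been agreed:

```python
import secrets

def handshake():
    # Stand-in for the real key agreement, which completes in milliseconds.
    return secrets.token_bytes(16)

def encrypt(payload: bytes, key: bytes) -> bytes:
    # Placeholder XOR only; a real implementation would use a vetted cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def nfc_exchange(distance_cm: float, card_data: bytes):
    if distance_cm > 20:          # out of range: no link is established at all
        return None
    session_key = handshake()     # connection setup comes first...
    return encrypt(card_data, session_key)   # ...card data only goes out encrypted

print(nfc_exchange(4.0, b"370000000000002"))   # in range: encrypted bytes
print(nfc_exchange(35.0, b"370000000000002"))  # too far apart: nothing sent
```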

    In addition, the NFC Forum, the industry trade group pushing the technology (similar to the Wi-Fi Alliance and the Bluetooth working group, each of which helped popularize its respective protocol and create huge businesses in the process), made sure NFC is fully compatible with old RFID tags and reader infrastructure, to accommodate the small number of merchants already invested in RFID readers, like McDonald's.

    Thanks to the fact that NFC hardware can store multiple payment methods in its secure vault, the choice of payment you get with a good old-fashioned wallet is restored. And it does this trick with just one device, prompting you on screen to pick your account (or simply defaulting to a standard payment method until you choose otherwise).
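    A sketch of that selection logic might look like the following - hypothetical code, but it shows why one device can stand in for a wallet full of cards:

```python
def choose_card(cards, default, user_choice=None):
    """Return the on-screen pick if the user made one, else the stored default."""
    return user_choice if user_choice in cards else default

cards = ["AMEX ...6057", "AMEX ...7221", "Visa ...1184"]
print(choose_card(cards, default="Visa ...1184"))                              # default path
print(choose_card(cards, default="Visa ...1184", user_choice="AMEX ...6057"))  # explicit pick
```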

    So what's not to like about NFC then? Well, let me ask you a quick question: Has your cellphone battery ever died? Yeah, that's what I thought.

    After I was done waiting in that United line, I was greeted on the plane by another interesting situation courtesy of the cellphone boarding passes. Once seated, I noted a man being asked again and again by the people around me to leave his seat. He'd sit down, the next person on would tell him it was their seat, and he'd move back a row. Again and again. After some discussion, it turned out that his cellphone battery had gotten him only as far as the door to the plane and then died. He had no idea what seat he was supposed to be in. Upon asking the flight attendant for help, she told him that he would have to wait until everyone was boarded and she could get a copy of the manifest. In the meantime, he was to sit down "anywhere" until everyone else was on board.

    NFC is intended to address this problem, of course. The NFC electronics in the phone are powered not by the phone's battery but by the reader itself, wirelessly. In this regard it works much like RFID. In fact, any NFC reader can read a standards-compliant RFID tag too. Problem solved, right?

    For the ticketing industry, maybe. But for payments, not so much. More than just transmitting a device ID, the NFC system has to transmit credit-card data. If a phone is dead, that means the NFC system will have to make a choice: transmit the data RFID-style, to any application that requests it without prompting the user; or stop functioning. That's either a pretty big security hole or a pain-in-the-butt inconvenience.
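    In code, that dilemma reduces to a policy flag - again a hypothetical sketch, not how any shipping handset actually behaves:

```python
def respond_to_reader(request, phone_is_on, policy="refuse"):
    if phone_is_on:
        # Normal path: the user sees a prompt and confirms on screen.
        return f"user-approved reply to {request}"
    if policy == "rfid_style":
        # Dead phone, option 1: answer ANY reader, no prompt -- the security hole.
        return f"unauthenticated card data for {request}"
    # Dead phone, option 2: stay silent -- the dead-battery inconvenience.
    return None

print(respond_to_reader("payment request", phone_is_on=False))                       # None
print(respond_to_reader("payment request", phone_is_on=False, policy="rfid_style"))  # leaks data
```

    Neither branch of the dead-phone case is attractive for payments, which is exactly the author's point.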

    Further, have you ever had a hard time getting a program on your computer or phone to work reliably? Or fumbled to find the notification or popup for an application that needs your attention? Now imagine the kind older woman in front of you in line at Kohl's selecting her payment method on the touchscreen of her iPhone, tapping in the PIN code for that specific card, then swiping her phone across the reader in the time provided. Sure, the workflow can be altered a little: enter the PIN after swiping, maybe, or enter it on the payment terminal the way debit cards work today - just don't move the phone out of range while you do.

    The practical implementation of NFC leaves a lot of open questions about security, about complexity, about who ultimately controls the experience - credit-card provider, hardware maker, mobile-network operator, or merchant - and about who handles all the support calls that will result.

    After all of that, the result is no faster or more convenient than a simple magnetic-stripe card reader. And it introduces the complexities of battery life and other mobile-phone-specific issues into a process that otherwise works great as is.

    Credit cards took off because they were infinitely safer, more convenient, and faster than cash - all valuable benefits in these harried lives we lead. The debit card made small leaps from there, in cost for merchants and in convenience for the many Americans who have no credit or prefer to use it more cautiously.

    But NFC - just like RFID before it - provides no real improvement in any part of the credit/debit-card value chain. Almost no one is demonstrably better off for adopting NFC, so the chances are very low that it will find wide success in the payments industry. There may be other reasons it achieves massive-scale adoption - another "killer app," as they say - and that would change the equation down the line. But until then, Visa has little to nothing to fear from NFC. And investors hopping on the supposedly multibillion-dollar bandwagon should think long and hard about whether their investment is likely to succeed in the long run.

    Like many other "revolutionary" technologies before it, NFC just may be a solution in search of a problem. That's precisely the wrong approach to creating a tech breakthrough... but it's one overeager or ill-informed tech investors may fall for repeatedly. But there are big tech breakthroughs on the brink of going mainstream, especially in health care. Learn about one development now, and get in on a great opportunity for expert guidance at prices you likely won't see again. Get all the details now, while this offer is good.

    Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
