
Quick & Dirty Primer On Artificial Intelligence / Machine Learning

Dec. 31, 2020 9:09 AM ET | TSLA, NVDA, GOOG, MSFT, CRM, META, SQ
Ron P's Blog

Summary

  • A glossary and brief background on machine learning algorithms, providing much-needed clarity for investors who are not trained in computer science.
  • The highlighted statistics section provides figures and elaborations of the total addressable market (TAM) for business opportunities created by the application of deep learning.
  • Significant players in the artificial intelligence / machine learning value chain will be: TSLA, NVDA, GOOG, MSFT, CRM, FB, SQ.

CONTENTS:

  1. What is Artificial Intelligence – AI?
    • Glossary
    • 5 Schools Of Thought In Machine Learning
  2. Highlighted Statistics
  3. Elaboration On Highlighted Statistics
  4. Long Term Investment Opportunities
  5. Conclusion

What is Artificial Intelligence – AI?

For many of us, the phrase “artificial intelligence” became an overused cliché a long time ago. AI is everywhere in tech right now, said to be powering everything from your TV to your smartphone to your toothbrush, but never before have the words themselves meant less. It’s better, then, to talk about “machine learning” (ML) rather than AI. Machine learning is a subfield of artificial intelligence, and one that encompasses pretty much all the technology having the biggest impact on the world right now (including what’s called deep learning). As a phrase, it doesn’t have the mystique of “AI,” but it’s more helpful in explaining what the technology does, and it allows readers to correctly gauge the significance of each new AI-related development.

Before we delve further into the rabbit hole, we need to familiarize ourselves with several key terms so that the reader doesn’t have to keep running Google searches – which is the point of this article being a quick primer on artificial intelligence and machine learning:

  1. Algorithm
    • A sequence of instructions telling a computer how to solve a particular problem.
    • Computers are made of billions of tiny switches called transistors, and algorithms turn those switches on and off billions of times per second, which generates “1”s (for “ON”) and “0”s (for “OFF”) – like the Morse Code, but in binary form.
    • The alternating ON/OFF state of the transistors generates the binary code which can be translated by a computer into texts and numbers or other appropriate output.
  2. Machine Learner
    • In this article, the type of algorithm we focus on is called a “machine learner”, which is tasked with teaching itself to do a specific task better over time, provided it acquires larger and larger data sets.
    • It is an algorithm that writes other algorithms – updated or modified versions of itself, called iterations. This is how a machine learner “learns”.
  3. Artificial Narrow Intelligence (ANI)
    • In a nutshell, a machine intelligence that is very good at a particular task or a narrow set of tasks, comparable to a human savant.
    • Has essentially no common sense.
    • Real-world examples include Apple’s Siri, Tesla’s autonomous driving AI, YouTube’s recommendation algorithm, Facebook’s moderation algorithm, and many more.
  4. Artificial General Intelligence (AGI)
    • In layman’s terms, a machine intelligence that has common sense.
    • Not particularly exceptional at any one task, much like a regular, non-savant human being.
    • According to Wikipedia, it is a hypothetical machine intelligence that has the capacity to understand or learn any intellectual task that a human being can.
    • Currently no real-world examples.
  5. Supervised learning
    • Uses a labeled dataset to train the machine learner algorithm – also known as learning “guided by humans”.
    • Once sufficiently trained, the machine learner should theoretically be able to adjust itself to the task it is assigned to do.
  6. Unsupervised learning
    • Revolves around dimensionality reduction – see below.
    • Does not use labeled training data; instead it lets the algorithm learn by itself from vast amounts of data – also known as learning “not guided by humans”. Contemporary machine learners have not yet fully mastered unsupervised learning.
    • Uses a variety of techniques, which include but are not limited to:
      1. Reinforcement learning
      2. Relational learning
      3. Chunking
      4. Clustering
      5. Principal Component Analysis and Isomap
  7. Perceptron
    • A device or piece of software that receives input data and reacts by producing an output. These early devices were a formal depiction of how a brain cell – a neuron – works (a minimal sketch follows this glossary).
    • Invented by Frank Rosenblatt, a Cornell psychologist, in the late 1950s. Rosenblatt’s perceptron was a physical device, because software of the day was painfully slow. Contemporary perceptrons and multilayer perceptrons are all built virtually, in software.
  8. Multilayer perceptron
    • In a nutshell, a system of interconnected perceptrons, and the basic building block of a neural network. A modern variant, the "autoencoder", is a key component of modern deep learning neural nets.
  9. Dimensionality reduction
    • Reducing massive amounts of raw data down to a handful of the most informative dimensions. In layman’s terms, dimensionality reduction cuts the complexity of the machine learner’s task by way of generalization.
  10. Overfitting
    • The opposite of drawing a best-fit line through a scatterplot: the machine learner draws lines that fit all of its training data exactly, leaving it useless when faced with real-world data outside its training sample.
    • To paraphrase Carveth Read / John Maynard Keynes, a properly developed machine learner strives to be roughly right rather than precisely wrong (a toy illustration follows this glossary).
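Here is the minimal perceptron sketch promised in the glossary above. It is my own illustration, not code from Domingos' book or from ARK: a single artificial neuron that learns the logical AND function by nudging its weights whenever it answers incorrectly.

    import numpy as np

    # Rosenblatt-style perceptron learning the logical AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
    y = np.array([0, 0, 0, 1])                       # desired outputs (AND)

    w = np.zeros(2)   # weights
    b = 0.0           # bias (threshold)

    for _ in range(10):                         # a few passes over the data
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # "fire" if weighted input exceeds the threshold
            error = target - pred
            w += 0.1 * error * xi               # nudge weights toward the correct answer
            b += 0.1 * error

    print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]

Stacking many such units into layers yields the multilayer perceptron and, ultimately, the deep neural networks discussed below.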
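And to make overfitting (the last term above) concrete, a second toy sketch of my own: a degree-9 polynomial threads through ten noisy training points almost exactly, yet the plain best-fit line typically generalizes better to fresh data drawn from the true relationship.

    import numpy as np

    # Toy overfitting demo: noisy data generated from the true relationship y = 2x.
    rng = np.random.default_rng(1)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(scale=0.2, size=10)   # noisy training sample
    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test                                       # the true relationship

    line = np.polyfit(x_train, y_train, deg=1)     # "roughly right" best-fit line
    wiggle = np.polyfit(x_train, y_train, deg=9)   # fits every training point exactly

    def mse(coeffs, x, y):
        return np.mean((np.polyval(coeffs, x) - y) ** 2)

    print("train MSE - line:", round(mse(line, x_train, y_train), 4),
          " degree-9:", round(mse(wiggle, x_train, y_train), 4))
    print("test MSE  - line:", round(mse(line, x_test, y_test), 4),
          " degree-9:", round(mse(wiggle, x_test, y_test), 4))

The degree-9 fit wins on the training sample but usually loses badly on the test data – exactly the "precisely wrong" behavior described above.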

In my research, I found that the book "The Master Algorithm" by Pedro Domingos serves as an adequate guide to the world of AI and ML for outsiders to the field of computer science. This primer owes much of its existence to Mr. Domingos’ material. Domingos, being an educator, classifies knowledge within the field of machine learning into 5 schools of thought, each with starkly different ideologies and approaches to the subject matter, much like the different schools of thought in the field of economics. He elucidates in detail the approach each school takes toward achieving unsupervised machine learning, with the ultimate aim of creating the "Master Algorithm" – a machine learner that can teach itself to learn better, without human supervision. In other words, the specific task assigned to the Master Algorithm is to learn more efficient and effective ways of learning. Each school of thought has a different algorithmic candidate that it claims should be the Master Algorithm – the DNA of the world's first Artificial General Intelligence (AGI).

The aforementioned 5 schools of thought are:

  1. Symbolists
    • Take ideas from philosophy, psychology, and logic.
    • Their candidate for the Master Algorithm is Inverse Deduction. Inverse deduction figures out what knowledge is missing in order to make a deduction go through, and then makes the missing link as general as possible, in order to apply to as many yet to be identified datasets as possible.
  2. Connectionists
    • Inspired by neuroscience and physics, they want to reverse engineer the brain.
    • Their candidate algorithm is Backpropagation – a technique that trains deep networks by applying calculus to labeled data sets (a toy example follows this list). This is the engine of deep learning, supported by a more recent invention – the autoencoder, which is in essence a multilayer perceptron. In fact, deep learning is a modern name for an old technology – artificial neural networks – computer programs loosely inspired by the structure of the biological brain.
  3. Evolutionaries
    • Inspired by the evolution of life on the only planet known to have produced intelligent life, they want to simulate evolution on the computer, drawing on genetics and evolutionary biology. They use rules called "classifier systems"; the approach is also called "structured learning".
    • Their candidate algorithm is Genetic Programming.
  4. Bayesians
    • They believe that learning is a form of probabilistic inference and have their roots in statistics.
    • Their candidate algorithm is Bayesian Inference.
  5. Analogizers
    • They believe learning is done by extrapolating from similarity judgments – by analogy – and are influenced by psychology and mathematical optimization.
    • Their candidate algorithm is the Support Vector Machine, which figures out which key experiences (out of the noisy dataset) to remember and how to combine them to make new predictions.
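Since backpropagation is the engine behind the deep learning thesis in the rest of this article, here is the toy example promised above. It is a sketch of my own, not code from any of the sources cited below: a tiny two-layer network trained with backpropagation to learn XOR, a problem a single perceptron cannot solve.

    import numpy as np

    # Toy backpropagation demo: a two-layer network learning XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error back through each layer (the chain rule)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent weight updates
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))   # should approach [[0.], [1.], [1.], [0.]] once trained

Real deep learning runs the same idea at vastly larger scale, with modern frameworks handling the calculus automatically.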

Deep learning, the aforementioned technique from the Connectionist school, combined the deep neural net, the autoencoder, and the backpropagation algorithm to become the first machine learner to achieve a degree of unsupervised learning, and with it the frontrunner in the race to AGI. But perhaps more significantly for investors in particular, it is also the machine learner with the most viable modern commercial applications. By integrating the best features of the algorithms from the other four schools of thought, the Master Algorithm is within reach. Thanks to this breakthrough, computer vision, voice recognition, speech synthesis, machine translation, game playing, drug discovery, and robotics are setting game-changing performance records.

Highlighted Statistics[1]:

  • $17 trillion in market capitalization creation from deep learning companies by 2036.
  • $6 trillion in revenue from autonomous on-demand transportation by 2027.
  • $6 billion in revenue for deep learning processors in the data center by 2022.
  • $16 billion addressable market for diagnostic radiology.
  • $100-$170 billion in savings and profit from improved credit scoring.
  • $12 trillion in real GDP growth in the US from automation by 2035.

Elaboration on highlighted statistics:

$17 trillion in market capitalization creation from deep learning companies by 2036

In making this estimate, the analysts at ARK Invest identified companies that emerged and flourished because of the internet, while excluding names like Apple, Qualcomm, and Microsoft that benefited from the internet but were founded on different core technologies. Based on these criteria, 12 US stocks emerged as direct beneficiaries, representing 8.6% of the S&P 500, or $1.7 trillion in market capitalization. In other words, the estimated market capitalization ADDED by the internet is roughly US$1.7 trillion, from the birth of the first internet companies (Yahoo in 1994) through June 2017, ARK's time of writing. The companies ARK's analysts used – all born out of the advancement of the internet – are GOOG, AMZN, FB, CSCO, NFLX, CRM, Yahoo, EBAY, AKAM, JNPR, VRSN, and FFIV.

When scaled to global equity markets, ARK's analysts believe the share of stocks born out of the internet is approximately 6.6% (down from 8.6%), reflecting the lower share of internet-born companies globally versus the US. The S&P 500 CAGR from 3 Jan 1994 to 26 Nov 2020 was 8.21% p.a., but ARK's analysts estimated the global equity market CAGR at 6.9% p.a. I heartily agree with using the more conservative estimate to maximize my margin of safety. At 6.9% p.a. appreciation, global market capitalization would approach roughly $264 trillion two decades from now. If deep learning-based firms were to grow to the same share of the market that internet-born stocks reached over roughly the same time frame, that would imply a potential $17.4 trillion in market capitalization by the mid-2030s (math: $264 trillion * 0.066 = $17.4 trillion).
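A quick back-of-envelope check of that math in Python. The roughly $70 trillion starting global market capitalization is my own assumption for illustration, not a figure from the article; the 6.9% growth rate and 6.6% share are ARK's estimates as quoted above.

    # Compounding check for ARK's $17 trillion deep learning market-cap estimate.
    start_cap = 70e12            # assumed global equity market cap (USD), circa 2017
    growth = 0.069               # ARK's estimated global equity CAGR
    years = 20                   # roughly two decades

    future_cap = start_cap * (1 + growth) ** years
    deep_learning_share = 0.066  # ARK's internet-born share, applied to deep learning
    print(f"global market cap in ~20 years: ${future_cap / 1e12:.0f} trillion")                        # ~$266 trillion
    print(f"deep learning slice at 6.6%:    ${future_cap * deep_learning_share / 1e12:.1f} trillion")  # ~$17.5 trillion

Under these assumptions the result lands close to ARK's $264 trillion and $17.4 trillion figures.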

In ARK's view, this estimate is quite conservative, for two reasons. First, it counts only market capitalization creation by new companies; however, deep learning could generate as much value for existing enterprises as for new ones. Second, this estimate assumes deep learning grows at the same rate as the global equity market even though technology is improving at an accelerating rate: deep learning should grow much faster during its first twenty years than the global equity market will.

$6 trillion in revenue from autonomous on-demand transportation by 2027

ARK analysts believe that deep learning is a fundamental requirement for level 4 or higher autonomous driving[2]. Deep learning solves the two key problems facing autonomous driving: sensing and path planning. Neural nets allow a computer to segment the world into drivable and non-drivable paths, detect obstacles, interpret road signs, and respond to traffic lights[3].

While self-driving systems had yet to reach the level required for full autonomy, ARK argued that the observed rate of progress from Google and others suggested self-driving technology would be available by the end of the last decade[4]. This has proven inaccurate, partly due to the covid disruption to travel. However, in parts of Phoenix, Arizona, self-driving car company Waymo – a fully owned subsidiary of Alphabet – announced in October 2020 that its autonomous vehicles are now available to the general public (or at least to paying customers)[5].

ARK Invest's analysts estimated the cost per mile of electric autonomous taxis at half that of the average private ICE car, which runs about $0.70 per mile, thanks to economies of scale from higher utilization rates (they obviously did not account for covid). They did not show the math behind this estimate, so I cannot accept the figure at face value. However, I will give them this: autonomous taxis should be an order of magnitude cheaper and more convenient than traditional taxis or even ride-hailing services, because of the savings in wages or commissions paid to the human driver – we just cannot reliably estimate the TAM.

$6 billion in revenue for deep learning processors (accelerators) in the data center by 2022

Deep learning could shift the focus of the microprocessor industry from general application performance to neural net performance - thus shifting the focus from CPUs to GPUs (already well under way). Do take note that the GPU is not the only processor capable of accelerating deep learning.

FPGAs are another class of processors that offer high performance. Although difficult to program and slower than GPUs, FPGAs can provide power-efficient inference acceleration. Microsoft claims that, prospectively, almost all of its servers will be coupled with an FPGA to accelerate AI and other internet workloads. Meanwhile, in the cloud, Amazon's EC2 F1 instances provide FPGA-based acceleration on demand. More recently, AMD's acquisition of XLNX opens the gate (pun not intended) for the chipmaker to become a more serious data center contender.

Yet another way to accelerate deep learning is with a custom chip. A processor designed specifically for deep learning could be an order of magnitude faster than today's GPUs, as it would swap high-precision graphics execution units for lower-precision deep learning units and remove fixed-function hardware to save space and power. Companies such as Google, Intel, and Graphcore are following this route. Google's chip, the Tensor Processing Unit (TPU), has already been deployed in Google's data centers[6]. The TPU powered AlphaGo, the deep learning program that defeated world Go champion Lee Sedol. Google claims that the TPU enables performance an order of magnitude higher than GPUs and FPGAs. Intel's acquisition of deep learning startup Nervana[7] hints at its intention to develop ASICs of its own. Note that GOOG's TPU is a type of ASIC (application-specific integrated circuit).

Deep learning is unique enough as a workload to justify a new architecture and to support the daunting cost of ongoing chip development. With space and power limited on client devices, deep learning eventually could reside on-chip as part of an acceleration block, similar to the way graphics is handled today.

Deep learning has two functions:

  1. Training in which the neural net learns how to do a task by ingesting large amounts of data, and
  2. Inference in which the trained neural net does actual work.

Training currently makes up the majority of revenue since accelerators are a must-have for efficient training. In contrast, inference can be run on standard servers. Training should grow to a $3 billion business thanks to continued investment by hyperscale vendors (the companies mentioned in the summary at the top of this article), the increased availability of GPU based servers in the cloud, and the adoption of deep learning by non-internet industries, particularly automotive where the technology will be key for autonomous vehicles.

To arrive at the $6 billion figure for accelerator revenue, ARK Invest made the following estimates:

  • Deep learning will become one of the top workloads at data centers over the next five years.
  • Public cloud servers with accelerators will make up 40% of public cloud server revenue by 2022.
  • Training and inference accelerators will roughly be equal in revenue by 2022.
  • Accelerators will account for 65% of the cost of training servers.
  • Accelerators will account for 25% of the cost of inference servers.

ARK estimates that up to 75% of the value of deep learning servers accrues to the GPU at present. While they have not been deployed widely for inference, GPUs and other accelerators could account for 25% of the value of future deep learning inference servers. ARK estimates that deep learning accelerator revenue will grow 70% annually from $400 million in 2016 to $6 billion by 2022. At that time, according to ARK's research, roughly half of accelerator revenue will be for training, and half for inference.

It is interesting to note that these estimates focus only on deep learning chips in the data center. The level of deployment in client devices, such as computers, smartphones, cars, cameras, and IoT devices, could be orders of magnitude higher, albeit at lower unit revenue. Client devices will probably use single-chip solutions, with some portion of the chip dedicated to deep learning operations. Their wide adoption, however, will create another source of demand for training and a virtuous cycle of data center and end-user adoption.

$16 billion addressable market for diagnostic radiology

From revenues of roughly $1 billion today, medical software companies and imaging device manufacturers could average 20-35% growth per year as deep learning enhances their productivity and creates new products and services over the next ten to fifteen years.

Diagnostic radiology is essential to modern health care; yet the visual interpretation of medical images is a laborious and error-prone process. Historically, the average diagnosis error rate among radiologists is around 30%, according to studies dating from 1949 to 1992 (Source: “Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential,” NCBI, 3/8/2007). Intelligent software powered by deep learning has the potential to change the status quo. Early results are promising: the latest deep learning systems already outperform radiologists and existing algorithms in a variety of diagnostic tasks, as evidenced below:

  1. Enlitic, a San Francisco based startup, says its deep learning based diagnostic system can detect lung cancer nodules 50% more accurately than a panel of radiologists when benchmarked in an NIH-funded lung image data set. Enlitic’s system detects extremity bone fractures, say on the wrist, with 97% accuracy compared to 85% accuracy of radiologists and 71% accuracy of previous computer vision algorithms.[8]
  2. Harvard Medical School built a deep learning system that detects breast cancer with 97% accuracy compared to 96% accuracy of a radiologist. When the radiologist was aided by the diagnostic system, accuracy improved to 99%.[9]
  3. McMaster University’s deep learning system achieves 98-99% accuracy in detecting Alzheimer’s disease in magnetic resonance images. Previous computer vision algorithms achieve only 84% accuracy.[10]

Achieved in a relatively short amount of time, these super-human results have the potential to shake up the traditional computer aided diagnosis (CAD) market. Currently, the CAD market is dominated by companies such as Siemens, Philips, Hologic, and iCad. Global revenues totaled roughly $1 billion in 2016 and, according to Grand View Research, will grow at a compounded annual rate of 11% to $1.9 billion by 2022.[11]

ARK Invest's estimate is based on 34,000 radiologists in the US reviewing 20,000 cases per year[12]. Given that radiologists pay up to $2 per case for existing Picture Archiving and Communication Systems (PACS), a better-than-human diagnostic system could be priced at $10/case. Assuming full adoption, the US market alone would be worth $6.8 billion. Normalizing the figure to incorporate the rest of the world's health care spending brings the total addressable market worldwide to $16.3 billion. But bear in mind that both market incumbents and new entrants are well positioned to attack this market.
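The arithmetic behind that estimate is simple enough to verify in a few lines of Python; the roughly 2.4x world multiple below is implied by ARK's own US and global figures rather than stated explicitly.

    # ARK's diagnostic radiology TAM arithmetic, as described above.
    radiologists_us = 34_000
    cases_per_year = 20_000
    price_per_case = 10              # USD per case for a better-than-human system

    us_tam = radiologists_us * cases_per_year * price_per_case
    world_multiple = 16.3 / 6.8      # implied scaling from US to global health care spending
    print(f"US TAM:     ${us_tam / 1e9:.1f} billion")                    # $6.8 billion
    print(f"Global TAM: ${us_tam * world_multiple / 1e9:.1f} billion")   # ~$16.3 billion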

While deep learning based diagnostics offer great promise, deployment will be a gradual process, especially because CAD software is regulated by government health agencies in the US, EU, and China. After regulatory approval, integrating with hospital IT systems, training doctors, and obtaining insurance reimbursements will take more time and effort. Once these obstacles are overcome, radiology could become far more automated, accurate, and accessible.

$100-$170 billion in savings and profit from improved credit scoring

With its vast asset base and large data sets, the financial services industry is in an excellent position to benefit from machine learning and deep learning. In this section, we include both classes of algorithms to reflect the latest benchmark results. These algorithms hold particular promise in assessing the relative risk of prospective borrowers more efficiently and accurately. If applied broadly to the US consumer credit industry, ARK estimates that machine learning could improve the lifetime profits associated with new and revolving loans each year by up to $170 billion.

To calculate this estimate, ARK considered three categories of consumer debt in the US: home loans, revolving consumer credit, and non-revolving consumer credit. According to the Mortgage Bankers Association, home loans totaled $1.9 trillion in 2016, while revolving consumer credit (primarily credit cards) and non-revolving consumer credit were $980 billion and $180 billion, respectively. This roughly $3 trillion presents financial companies with significant opportunities and risks, whether in assigning credit ratings to new loans or re-rating revolving ones[13].

After reviewing 41 machine learning algorithms in a comprehensive survey published in 2015, ARK Invest's analysts concluded that the machine learning algorithms significantly outperformed the commonly used logistic regression classifiers[14]. They reviewed both individual algorithms and ensembles; an ensemble combines several individual algorithms in one system to enhance accuracy and reduce bias. Artificial neural nets, or deep learning algorithms, improved the lifetime profitability of a loan by 3.4% compared to a logistic regression. The Hill-Climbing and Random Forest ensembles were even more productive, at 4.8% and 5.7%, respectively. Applying these improvement rates to the $3 trillion of US consumer debt issued in 2016 would impact profitability dramatically: depending on the algorithm used, total lifetime profitability on these loans could increase by $100-$170 billion.

The math:

0.034*$3 trillion < increased profitability < 0.057*$3 trillion

= $102 billion < increased profitability < $171 billion.
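For completeness, the same arithmetic computed for each algorithm mentioned above:

    # Profit-uplift arithmetic from the credit-scoring survey figures quoted above.
    consumer_debt = 3e12   # USD of US consumer debt issued in 2016
    uplift = {
        "deep learning (neural nets)": 0.034,
        "Hill-Climbing ensemble": 0.048,
        "Random Forest ensemble": 0.057,
    }
    for name, rate in uplift.items():
        print(f"{name}: ${rate * consumer_debt / 1e9:.0f} billion in added lifetime profit")
    # Prints roughly $102, $144 and $171 billion - the $100-$170 billion range cited above.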

$12 trillion in real GDP growth in the US from automation by 2035

ARK estimates that the cost of industrial robots, currently roughly $100,000, will fall by half over the next ten years. Concurrently, a new breed of robots designed for co-operative use with humans will cost on the order of $30,000. Retail assistant robots like SoftBank’s Pepper cost about $10,000 when service fees are included. Leveraging components from the consumer electronics industry, such as cameras, processors, and sensors, should drive costs closer to those of consumer products[15][16].

Industrial robots are not designed from a user-centric point of view. They require precise programming using industrial control systems in which each task must be broken down into a series of movements in six dimensions. New tasks must be programmed explicitly: the robot has no ability to learn from experience. Deep learning can transform robots into learning machines. Instead of being precisely programmed, robots learn from a combination of data and experience, allowing them to take on a wide variety of tasks. According to Preferred Networks, a robotics company built around deep learning, a human programmer must work for several days to teach a robot a new task; using deep learning, a robot learns the same task in about eight hours. When eight robots jointly learn the task, training time drops to one hour. Thus deep learning provides a roughly 5x increase in training speed and, via parallel learning, offers firms the ability to compound that improvement as they devote more hardware to the task[17].

In the Amazon Picking Challenge, a robotics competition, robots attempt to pick random items from a shelf and place them in a box. In 2015, the winning robot was able to pick 30 items per hour. In 2016, picking performance more than tripled to 100 items per hour, with the top two teams using deep learning as the core algorithm for vision and grasp. Were improvements to continue at that rate, robotics-based item picking would exceed human picking in two years.
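As a rough sanity check on that projection, the sketch below simply extrapolates the 2015-2016 improvement rate; the ~400 items per hour human benchmark is my own assumption for illustration, not a figure from the article or from ARK.

    # Extrapolating the Amazon Picking Challenge winning rates (2015: 30/hr, 2016: 100/hr).
    rate_2015, rate_2016 = 30, 100
    human_rate = 400                            # assumed human picker throughput (items/hour)
    yearly_multiple = rate_2016 / rate_2015     # ~3.3x improvement per year

    rate, year = rate_2016, 2016
    while rate < human_rate:
        year += 1
        rate *= yearly_multiple
    print(year, round(rate))   # under these assumptions, robots pass the human rate by 2018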

ARK’s research shows that industrial robots will remain the workhorse of high-capacity manufacturing, but their unit volumes will be dwarfed by the newer, nimbler robots, just as mainframe computers were swamped by workstation units, followed by PCs and then smartphones. As a result, robot unit volume shipments could soar 10 to 100-fold. Like drones, many of the new robots will bear little resemblance to the robots that dominate the market today. Smartphones don’t look much like mainframes either.

Long Term Investment Opportunities:

From an investor perspective, deep learning is only five years old and particularly compelling given its low revenue base, large addressable market, and high growth rate. The technology industry is rallying around deep learning because of its step-function increase in performance and broad-based applications. Google, Facebook, Baidu, Microsoft, NVIDIA, Intel, IBM, OpenAI, and various startups have made deep learning a central focus.

Andrew Ng, formerly Baidu’s Chief Scientist, has called AI the new electricity[18]. We believe its ultimate potential is analogous to the internet. Much like the internet, deep learning will have broad and deep ramifications. In the early days of the internet, for example, analysts anticipated ecommerce and media portals, but could not fathom the existence and power of cloud computing, social media, or sharing economy platforms such as AirBnB. In the same way, deep learning could create new use cases and companies that are impossible to foresee today. Specifically:

  • Like the internet, deep learning is relevant for every industry, not just for the computing industry. Deep learning software is a core differentiator for retail, automotive, health care, agriculture, defense, and many other industries.
  • Like the internet, deep learning endows computers with previously unimaginable capabilities. The internet made it possible to search for information, communicate via social media, and shop online. Deep learning enables computers to understand photos, translate language, diagnose diseases, forecast crops, and drive cars.
  • Like the internet, deep learning is an open technology that anyone can use to build new applications. While deep learning used to rely on computer scientists and specialized hardware, it can now be done on a laptop with a few lines of Python code (see the sketch after this list).
  • Like the internet, deep learning should be highly disruptive, perhaps far more disruptive than the internet itself. The internet has been disruptive to media, advertising, retail, and enterprise software. Deep learning could change the manufacturing, automotive, health care, and finance industries dramatically. Interestingly, those industries have been sheltered to some extent from technological disruption to date.
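As a taste of how low the barrier to entry has become, here is a small, self-contained sketch using scikit-learn's bundled digits dataset and a small neural network. It is illustrative only, not a production recipe, and the specific layer sizes are arbitrary choices of mine.

    # "Deep learning on a laptop in a few lines of Python" - a minimal sketch.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)                   # 8x8 images of handwritten digits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    net.fit(X_tr, y_tr)                                    # train the network
    print(f"test accuracy: {net.score(X_te, y_te):.2f}")   # typically well above 0.9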

Conclusion

AI and automation are unstoppable secular macro-trends and will disrupt our way of life like no other force in history. With the investment opportunities already discussed in detail, it is appropriate that we conclude with some expert opinions on the question of when true AI (AGI) will evolve, as well as the breakthroughs that are still required.

Kai-Fu Lee, a venture capitalist and former AI researcher, describes the current moment as the “age of implementation” — one where the technology starts “spilling out of the lab and into the world.” Benedict Evans, another VC strategist, compares machine learning to relational databases, a type of enterprise software that made fortunes in the ‘90s and revolutionized whole industries. The point both these people are making is that we’re now at the point where AI is going to get normal fast. “Eventually, pretty much everything will have [machine learning] somewhere inside and no-one will care,” says Evans.

Martin Ford, who interviewed 23 AI researchers for his book "Architects of Intelligence: The Truth about AI from the People Building It", noted that all his interviewees cited the limitations of current AI systems and mentioned key skills they have yet to master. These include transfer learning, where knowledge in one domain is applied to another, and unsupervised learning, where systems learn without human direction. The vast majority of machine learning methods currently rely on data that has been labeled by humans, which is a serious bottleneck for development. Interviewees also stressed the sheer impossibility of making predictions in a field like artificial intelligence, where research has come in fits and starts and where key technologies have only reached their full potential decades after they were first discovered.

Stuart Russell, a professor at the University of California, Berkeley who wrote one of the foundational textbooks on AI, begs to differ. He said that the sort of breakthroughs needed to create AGI have “nothing to do with bigger datasets or faster machines,” so they can’t be easily mapped out. “I always tell the story of what happened in nuclear physics,” Russell said in his interview. “The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later. In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive.”


[1] White Papers Listing - ARK Invest

[2] Source: "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" (J3016B), SAE International

[3] Source: "The Three Pillars of Autonomous Driving," Amnon Shashua, 6/20/2016, YouTube

[4] Source: "DMV Autonomous Vehicle Disengagement Reports," 2016, California DMV

[5] Source: https://theconversation.com/robot-take-the-wheel-waymo-has-launched-a-self-driving-taxi-service-147908

[6] Source: "Google supercharges machine learning tasks with TPU custom chip," 5/18/2016

[7] Source: "Intel is paying more than $400 million to buy deep-learning startup Nervana Systems," Recode, 8/9/2016, https://www.recode.net/2016/8/9/12413600/intel-buys-nervana--350-million

[8] Source: "Enlitic and Capitol Health Announce Global Partnership," 10/27/2015, http://www.enlitic.com/press-release-10272015.html

[9] Source: "Deep Learning Drops Error Rate for Breast Cancer Diagnoses by 85%," NVIDIA Blog, 9/19/2016

[10] Source: "DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI," Sarraf et al., 8/22/2016

[11] Source: "Computer Aided Detection Market Worth $1.9 Billion By 2022," Grand View Research, 8/2016

[12] Source: "How Many Radiologists? It Depends on Who You Ask!" Health Policy Institute, 4/14/2015

[13] Source: The Mortgage Bankers Association, Forecasts and Commentary

[14] Source: "Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research," 5/2015

[15] Source: "How much do industrial robots cost?" RobotWorx

[16] Source: "Robots rub shoulders with human buddies," Financial Times, 3/19/2015, https://www.ft.com/content/ed7be188-cd50-11e4-a15a-00144feab7de#axzz46Ygbi71C

[17] Source: "Zero to Expert in Eight Hours: These Robots Can Learn For Themselves," Bloomberg, 3/12/2015, https://www.bloomberg.com/news/articles/2015-12-03/zero-to-expert-in-eight-hours-these-robots-can-learn-for-themselves

[18] Source: "Andrew Ng: Why AI is the new electricity," Stanford News, 2017, http://news.stanford.edu/thedish/2017/03/14/andrew-ngwhy-ai-is-the-new-electricity/

Analyst's Disclosure: I am/we are long SQ, NVDA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.

This article's purpose is stated in its title: a primer for non-computer-science readers, such as finance professionals, to quickly get acquainted with AI/ML concepts in order to make informed investment decisions in the field. As this is a top-down approach, investors will still need to perform adequate due diligence, including but not limited to company-level fundamental analysis and technical analysis, with proper financial management according to each investor's risk tolerance.

Seeking Alpha's Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.
