It is still early days for AI (chips).
AI is growing fast and will drive unprecedented demand for compute over the next decade.
Intel grew AI revenue >20% YoY in 2019 to over $3.5 billion, with an over $30 billion data center and edge opportunity ahead.
As the leader in AI, Intel is well-positioned to capitalize from edge to cloud with its comprehensive silicon portfolio.
The market is not valuing Intel as a growth company. Hence, AI is a key catalyst for upside.
Editor's note: Seeking Alpha is proud to welcome Arne Verheyde as a new contributor.
Thesis and Market Opportunity
The investment thesis for Intel (INTC) is relatively straightforward: demand for compute, storage and networking will continue to increase over the next decade, driving increased demand for Intel's silicon. As Intel showed at its investor meeting in May, digital data is being created at a 25% CAGR, while compute demand is growing at a 50% CAGR, storage at 30% and network demand at 25% (slide below). This is opening up a new $30 billion or greater artificial intelligence opportunity over the next several years, per Tractica. The stock market is not valuing this opportunity, as Intel has faced a several-quarter data center digestion period and continues to trade at a very low P/E compared to its peers.
AI, among its many applications, allows for new insights into existing data as well as for the processing of new data. In numbers: Intel expects AI compute demand to grow by 64x over the next two years. The data center AI silicon market was estimated by Intel to be $4 billion in 2018, and to grow at an over 20% CAGR to $8-10 billion in 2022. The market will be split between inference (applying deep learning models) and training (creating deep learning models), slightly in favor of inference. This growth is driven by increasing adoption, which is still in its early stages, as well as by a rapid increase in model sizes (the largest AI models have been doubling in size every 3.5 months).
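As a quick sanity check on these growth figures, the implied annual rates can be worked out directly. This is an illustrative back-of-the-envelope calculation, not Intel's or Tractica's own math:

```python
# Back-of-the-envelope checks on the growth figures cited above.

# 64x compute demand growth over two years implies an annual multiplier of:
annual_compute = 64 ** (1 / 2)          # 8.0x per year

# Model sizes doubling every 3.5 months implies a yearly growth factor of:
annual_model_growth = 2 ** (12 / 3.5)   # ~10.8x per year

# A $4 billion market in 2018 compounding at ~20% for four years:
market_2022 = 4e9 * 1.20 ** 4           # ~$8.3 billion, within the $8-10 billion range

print(round(annual_compute, 1), round(annual_model_growth, 1), round(market_2022 / 1e9, 1))
```

In other words, the 64x compute figure and the 3.5-month model doubling are roughly consistent with each other: both imply order-of-magnitude annual growth.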
Qualcomm even puts the data center inference market alone at $17 billion by 2025. Taking this value together with the training market, AI could drive an over $30 billion silicon market opportunity in the data center over the next few years. This is larger than Intel's $23.0 billion 2018 data center business. Put another way, Intel's data center group has the potential to double in size over the next five years from artificial intelligence alone. Intel's other growth drivers in the data center are cloud computing and (5G) networking.
With its comprehensive AI product portfolio, Intel should be positioned to capitalize on this opportunity.
CPU Performance: Xeon Scalable
Intel started restructuring itself in 2016 to become a data-centric company, at the dawn of AI. The company bought Nervana (data center training), Movidius (edge inference) and recently Habana (data center training and inference) for their dedicated AI accelerators.
Those are just now beginning to ship, but Intel hasn't stood still: Xeon processors are still the standard on the inference side and are likely to capture a meaningful portion of the $17 billion 2025 inference market that Qualcomm envisions.
Intel has increased the AI capabilities of its standard Xeon Scalable processors through extensive software optimizations and, most recently with Cascade Lake, through dedicated DLBoost instructions for AI. Further DLBoost capabilities are expected with Cooper Lake (+60% performance) in the first half of 2020 and Sapphire Rapids in 2021.
For some performance context: when Intel launched the first-generation Xeon Scalable processors in mid-2017, it claimed they would offer an over 2x improvement (over Broadwell-EP) in deep learning training and inference performance. This was due to the higher core count and the new AVX-512 compute units, which are twice as wide as the 256-bit AVX2 units they replaced. Intel claimed that a Skylake-SP server was 113x faster in training than a three-year-old Ivy Bridge-EP server (2x28 vs. 2x12 cores), which is surely an impressive gain. I estimate the gains from hardware alone are ~10x, meaning that the other ~10x was enabled by Intel's software optimization efforts.
Since then, Intel has achieved another 1.4x increase in training performance and another 5.4x improvement in inference performance through software optimizations (such as adding INT8 support to its deep neural network library).
Cascade Lake's DLBoost (which has broad ecosystem support, according to Intel) added native INT8 (8-bit integer number representation) instructions for another ~3x improvement in performance in just one generation. This brought Cascade Lake's early 2019 total performance improvement over Skylake (at its introduction) to 14x in less than two years. Add another 2x performance for the 56-core Cascade Lake-AP (which is two 28-core chips in a single package).
Although Cascade Lake-AP does not belong to the regular Xeon Platinum line of processors and uses a different socket, which makes it quite limited, Intel plans to double the core count of the mainstream SP series to 56 with Cooper Lake in the first half of 2020, for a 30x performance improvement in less than three years. Intel has also teased that it is targeting a 10x improvement over Cascade Lake with Sapphire Rapids in 2021.
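The generational gains above compound multiplicatively, which is how a few modest-sounding factors add up to a double-digit total. A minimal sketch of that bookkeeping, using the factors cited above (the decomposition into independent multipliers is my own simplification of Intel's claims):

```python
# Compounding the per-generation inference speedups cited above, vs. Skylake-SP
# at launch. The individual factors are from Intel's claims; treating them as
# independent multipliers is a simplification.
factors = {
    "software optimizations (e.g. INT8 support in the DNN library)": 5.4,
    "Cascade Lake DLBoost (native INT8 instructions)": 3.0,
}

total = 1.0
for source, multiplier in factors.items():
    total *= multiplier

print(f"~{total:.0f}x over Skylake-SP")  # ~16x, in the ballpark of Intel's 14x figure
# Doubling the core count again (56-core Cooper Lake) would roughly double this
# total, consistent with the ~30x figure in less than three years.
```

The small gap between this ~16x product and Intel's stated 14x suggests the individual factors are rounded and not perfectly independent, but the compounding logic is the point.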
If anything, the Xeon processors have been quickly increasing their AI performance through software optimizations, higher core counts and specific hardware optimizations, which makes them a solid platform for AI inference. These large generational performance increases are also likely to drive fast adoption and replacement cycles, even as dedicated NPUs come to market.
This is especially true because inference is, by nature, a workload that is not always as easily separated from other work as neural network training is. This is why Intel, for example, added two Ice Lake cores to its homegrown Nervana NNP-I for inference, which recently started shipping. While there is definitely a market for dedicated accelerators, AI performance within real-world workloads is a key value proposition of the Xeons, also relative to Epyc.
AMD (AMD) has no credible AI solution yet. Although AMD delivered a 4x improvement in floating-point performance with Epyc Rome over Naples (2x performance per core and 2x more cores), it is stuck with the older AVX2 units for the time being, and it has nothing akin to DLBoost for INT8 support. This theoretically puts Epyc at a 6x performance-per-core disadvantage, which its current 2.3x core count advantage cannot compensate for.
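That claimed deficit is simple arithmetic on the two multipliers. An illustrative calculation using the figures above, which are themselves theoretical peak numbers rather than measured benchmarks:

```python
# Theoretical AI throughput comparison from the multipliers cited above.
per_core_disadvantage = 6.0   # AVX2-only, no INT8 DLBoost, vs. AVX-512 + DLBoost
core_count_advantage = 2.3    # e.g. 64-core Epyc Rome vs. 28-core Cascade Lake-SP

# Net theoretical deficit: per-core gap divided by the core count advantage.
net_deficit = per_core_disadvantage / core_count_advantage
print(f"Epyc trails by ~{net_deficit:.1f}x on this theoretical metric")  # ~2.6x
```

So even granting AMD its full core count lead, the per-core instruction-set gap leaves Epyc at a roughly 2.6x theoretical disadvantage on INT8 inference throughput.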
As mentioned, Cooper Lake's core count increase to 56 will eliminate AMD's core count advantage, and I expect this to preemptively offset Zen 3's floating-point improvements in Milan, with roughly half a year earlier time to market. Summed up, this should maintain the status quo until Sapphire Rapids in 2021 and Granite Rapids in 2022 extend Intel's CPU leadership in AI.
For all this performance talk, Intel said it would generate over $3.5 billion in revenue in 2019 from AI (across all businesses, including IoTG and Mobileye), up from over $1 billion in 2017 and up over 20% over 2018. For the data center specifically, Intel said it had generated over $1.7 billion in revenue in 2018.
Intel's strategy is to offer dedicated Nervana NNP-T (training), NNP-I (inference) and Movidius VPU (edge) chips, and to infuse AI across the remainder of its portfolio. For the time being, Intel's Xeon processors lead the way in revenue for inference workloads, as the dedicated Nervana NNP-I and Habana Goya chips are just starting to ship. The 10nm Agilex FPGA also has optimizations for AI inference and training.
For the compute-intensive training part, Intel intends to offer dedicated solutions to compete against Nvidia's (NVDA) AI-infused GPUs. The main selling points of the Nervana NNP-T are high hardware utilization and very high multi-node scaling efficiency via a glueless fabric, as high as 95%. The Habana Gaudi also scales much more efficiently than Nvidia's parts via its Ethernet-based interconnect.
Intel is also developing its own AI-infused GPUs: the Ponte Vecchio GPU will feature a matrix engine akin to Nvidia's tensor cores. It will be Intel's 7nm lead product in the fourth quarter of 2021, which means it should enjoy a nice time-to-market advantage over Nvidia's 5nm GPUs, as Nvidia still hasn't moved to 7nm more than two years after that process came to market. By that cadence, Nvidia's 5nm GPUs would be expected some time in 2022. (Intel's 7nm process technology will be similar to TSMC's 5nm, according to Intel.)
One point investors should perhaps be aware of is that, with the recent Habana acquisition, Intel now has two overlapping series of chips for AI inference and training. I naturally expect Intel to streamline its portfolio there over time.
The importance of edge chips can't be overstated, either, as the AI edge opportunity is estimated to be 3x larger than the data center opportunity, at 75% of the $40 billion 2022 AI silicon market, according to the Tractica data Intel presented (slide below).
Movidius recently announced its next-generation Keem Bay chip, which shows leadership performance according to Intel's data, such as performance similar to Nvidia's larger Xavier at one-fifth the power consumption.
Mobileye recently announced the EyeQ6, slated for 2023, with roughly 5x the performance of the EyeQ5, which Mobileye recently deployed in its self-driving fleet. (Mobileye will combine as many as nine EyeQ5 chips in its commercial self-driving system in 2022, intended for robotaxi applications.)
On the software side, Intel hasn't just been working on performance optimizations, but has also worked on providing developer tools, such as its OpenVINO toolkit for computer vision edge inference. Intel calls it "the fastest growing tool in Intel history" with 150,000 developers.
Intel has plenty of competitors, including Nvidia (V100 and T4 chips for training and inference), Qualcomm (QCOM) (Cloud AI 100 for inference), Huawei (Ascend 310 and 910 for edge and data center), Google (TPU for inference) and Baidu (BIDU) (Kunlun for the data center). Intel intends to defend its established position by increasing the AI performance of its Xeons each generation while developing its own line of dedicated AI accelerators alongside the Movidius chips for the edge.
As the market for dedicated inference accelerators is completely new, it is open to anyone, but Intel's existing data center presence should help it acquire customers. Its NNP-I has shown leadership performance per watt, and the Goya leadership performance.
The data center training market is untapped potential for Intel. Intel has three product lines currently in development to take share from Nvidia: the NNP-T and Gaudi neural network processors, and its upcoming Ponte Vecchio discrete GPU. This is a $3 billion market today in which Intel has no real market share.
While some may think Intel is spreading itself thin, Intel has its own alternative for any company that may want to compete in this space: it can match AMD and Nvidia GPUs with its own AI-accelerated GPUs, and it can match the dedicated neural processing units (NPUs) on the market with its own neural network processors (NNPs).
I expect Ponte Vecchio (or perhaps one of the Nervana/Habana NPUs) in 2021-22 to put significant pressure on Nvidia's data center business. This could be an important catalyst once investors recognize this pressure on Nvidia's business, and could lead investor sentiment to swing from AMD and Nvidia towards Intel.
While no investment is free of risk, Intel is on track to report 2019 as its fourth consecutive year of growth. In the second half of 2019, memory and the data center both improved materially. Intel is also putting capacity in place to eliminate the ongoing supply issues.
For AI, while the market may not grow as fast as third parties predict, Intel's strong AI revenue growth from over $1 billion to over $3.5 billion shows that the early stage of that growth is materializing.
In terms of products, Intel's diversified portfolio gives it a good chance to compete in the market.
One issue, however, could be the 7nm process. While Moore's Law predicts two-year process technology cycles, Intel's 10nm was delayed by three years and will likely never reach the volumes 14nm currently does. Nevertheless, Intel has repeatedly reiterated that 7nm (Ponte Vecchio) is on track for the fourth quarter of 2021, and has committed to the Aurora supercomputer on that node in 2021.
In terms of financials and stock performance, AI should drive strong data center and IoT performance for years to come. Of course, negative news such as supply issues, process delays and AMD's performance could prove headwinds, as usual.
Intel has been climbing to multi-year highs in recent months, but is still valued at a forward P/E of just 13, half that of TSMC, for instance, which recently overtook Intel in market cap. Intel also trades at significant P/E discounts to Qualcomm, Nvidia and AMD.
Analysts are currently forecasting Intel to earn $73.6 billion in revenue in 2021, compared to $71.0 billion in 2019. This compares to Intel's expectation of $76-78 billion in revenue in 2021 and its goal to deliver $85 billion in revenue and $6 EPS by 2022 or 2023.
This indicates that the market is not valuing Intel as a growth company, despite the significant near- and longer-term opportunities it has to grow its top and bottom lines, as it has in the last few years. Meanwhile, the company is also significantly improving its efficiency, reducing spending from 36% of revenue in 2015 to 30% in 2018 and a targeted 25% in 2021, and improving its FCF-to-earnings ratio from 66% in 2018 to over 80% by 2021 (source: Investor Meeting, May 2019).
In the most recent quarter, Intel already surprised the market with an $800 million revenue beat, driven by a better than expected data center market after a three-quarter digestion period and the supply shortages it faced. Intel has also faced headwinds from multiple years of 10nm delays. These should improve significantly as Intel gets back to a two-year cadence with 7nm in late 2021 and 5nm in late 2023, which in turn will strengthen its product portfolio and drive additional demand and competitiveness against Nvidia.
AI is and will continue to be a major driver for additional compute demand over the next decade, and will help to fuel Intel's growth. With over $3.5 billion in revenue in 2019 (including IoTG and Mobileye), Intel is a larger player in AI than Nvidia. I believe the market currently does not value this.
Intel is already benefiting from AI with Xeon Scalable, which has a large performance lead over AMD in AI. Meanwhile, its dedicated inference chips have shown leadership performance and performance per watt against Nvidia's T4. The data center inference market is estimated to be worth $17 billion by 2025, compared with Intel's $1.7 billion in data center AI revenue in 2018 and current analyst expectations of just $2.6 billion in revenue growth across all of Intel's businesses over the next two years.
Intel also has no real presence yet in training, which represents an almost equally large opportunity, but its NNP-T and Gaudi accelerators are competitive with Nvidia's and have started shipping. The Ponte Vecchio GPU, as Intel's lead 7nm product, is a bold statement and seems on track to beat Nvidia's 5nm GPUs to market by a respectable margin. While AI should lead Intel to outperform current market expectations, I expect that if Intel beats Nvidia at what is seen as Nvidia's own market, investor sentiment will swing further in Intel's favor.
This positions Intel to remain at least one of the leaders in AI as it gets widely deployed, and to capitalize on the explosive AI silicon demand expected over the next few years and the coming decade. As this market is largely new and TAM-expansive, it should be viewed as a positive catalyst for Intel.
The market does not seem to have recognized or valued these opportunities yet (beyond the recent revenue beat): analysts are expecting much lower revenue growth than Intel itself, and the market is pricing Intel very cheaply compared to most of its peers, which trade at P/E ratios sometimes well over 20 versus Intel's 13.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.