In a previous article, I presented an analysis of the following slide, shown at Intel's (NASDAQ:INTC) fall 2013 investor meeting, proclaiming Intel's scaling advantage over TSMC (NYSE:TSM) beginning at the 14 nm node. The TSMC data plotted on the slide is based on an assessment of remarks made during the TSMC keynote at the 2012 ARM TechCon. Intel's message is that while TSMC will have trouble scaling below 20 nm, Intel will not. As a consequence, Intel forecasts a 35% scaling advantage at 14 nm FinFET vs. TSMC's 16 nm FinFET, and the advantage grows to ~45% at 10 nm. In that article, I pointed out that if Intel is able to ramp its 10 nm process before TSMC does, products made on Intel's 10 nm process will be competing against those made on TSMC's 16 nm process, and Intel's scaling advantage will be more like 64%.
(source: Intel 2013 Investor Meeting)
TSMC's Rebuttal Of Intel's Scaling Advantage
A reader reminded me in the comments that TSMC made a rebuttal of Intel's claim in its recent conference call. In this rebuttal (summarized here), TSMC stated that Intel's forecast was based on outdated data. They went on to state that while the Intel forecast shows TSMC's scaling actually getting marginally worse at 16 nm FinFET, TSMC is now predicting ~15% smaller chips at 16 nm followed by "industry leading performance and density" at 10 nm. Here is an excerpt from the TSMC conference call:
We take the approach of significantly using the FinFET transistor to improve the transistor performance on top of the similar back-end technology of our 20-nanometer. Therefore, we leverage the volume experience in the volume production this year to be able to immediately go down to the 16 volume production next year, within 1 year, and this transistor performance and innovative layout methodology can improve the chip size by about 15%. This is because the driving of the transistor is much stronger so that you don't need such a big area to deliver the same driving circuitry.
And for the 10-nanometer, we haven't announced it, but we did communicate with many of our customers that, that will be the aggressive scaling of technology we're doing. And so in the summary, our 10 FinFET technology will be qualified by the end of 2015. 10 FinFET transistor will be our third-generation FinFET transistor. This technology will come with industry's leading performance and density. So I want to leave this slot by 16-FinFET scaling is much better than Intel's set but still a little bit behind. However, the real competition is between our customers' product and Intel's product or Samsung's product.
This call was accompanied by the following plot showing TSMC's version of the Intel slide shown above.
(source: SemiWiki.com)
What's Wrong With This Plot?
Several things about this plot piqued my interest. First, why do the lines not actually connect the centers of the data points? It makes the figure look more like a drawing than a plot of actual data. Second, why does the TSMC 10 nm data point sit exactly where the Intel data point sits? Either this is a huge coincidence, or Intel and TSMC will be using exactly the same process, or TSMC decided to make the points match to tell a good story. Finally, how does a 15% improvement from the TSMC 20 nm to 16 nm nodes make up much more than half of the 35% advantage Intel predicts at the 14/16 nm node? 15% is less than half of 35%, especially on a log scale. This final observation, combined with the first two, led me to suspect that the TSMC figure isn't a plot of actual data. Instead, it is just a picture with dots connected by lines on coordinate axes to make it look quantitative.
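The log-scale point is easy to check. On a logarithmic axis, the vertical distance between two values is proportional to the log of their ratio, so we can compute directly what fraction of a 35% gap a 15% shrink should close (a quick sketch in Python):

```python
import math

# On a log-scaled axis, the distance between two values x1 > x2 is
# proportional to log10(x1 / x2).
step_15 = math.log10(1 / (1 - 0.15))  # distance spanned by a 15% shrink
step_35 = math.log10(1 / (1 - 0.35))  # distance spanned by a 35% gap

# A 15% shrink closes only ~38% of a 35% gap on a log axis.
print(step_15 / step_35)
```

In other words, a 15% improvement should visibly close well under half of the gap, not most of it as the TSMC figure suggests.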
TSMC said that Intel's forecast was based on outdated data. This suggests that TSMC has more recent data. However, if my suspicion was correct, the figure TSMC showed investors to refute Intel's scaling advantage claim is not a plot of this data.
Suspicion isn't good enough for me. So, I decided to do some quantification to see if I could get some clarity.
Unpacking the Logarithmic Data On Intel's Slide
Even if the TSMC plot is not quantitative, Intel's data could be made up too. So I began my analysis with the assumption that Intel's figure isn't showing real data either. To test this assumption, my first task was to convert the log scale data in the Intel plot to a linear scale with arbitrary units. It turned out that this was not an easy task since there are no values given on the vertical axis. Nor is there a solid baseline on a log-scaled plot like there is a zero on a linear-scaled plot. However, Intel did give us one piece of data, namely the 35% scaling advantage of Intel's 14 nm FinFET node over TSMC's 16 nm FinFET node. That was all I needed to calculate the values of each data point plotted in arbitrary units. That means my values differ from the actual values plotted by Intel by a scalar multiplicative factor. The process I undertook to find these values was as follows. If you don't care to know the mathematics, skip to the data table and plot below.
First I extracted the pixel values of all the data on the Intel plot. Then I recognized that the vertical pixel value for each data point would be related to the actual data value (scale) purported to be plotted via
a = b log(x) + c,
where a is the vertical-pixel value, x is the actual data value, b is a calibration factor, and c is an offset that serves to normalize the data to any arbitrary units one chooses. The calibration factor, b, can be found using the 35% scaling advantage claimed by Intel at 14/16 nm. Mathematically this statement is
xI14 = (1 - 0.35) xT16,
where xI14 is the Intel scale at 14 nm and xT16 is the TSMC scale at 16 nm. Using the two equations above, the pixel values corresponding to xI14 and xT16, that is, aI14 and aT16 respectively, and a lot of algebra, the calibration factor is found to be
b = (aT16 - aI14)/log(1/(1-0.35)).
I selected an offset, c, so that the largest data value would be Intel's scale at 32 nm in arbitrary units. I transformed all the pixel values from the Intel figure and plotted them. This plot and the transformed data itself are shown below.
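For readers who want to reproduce the conversion, here is a minimal Python sketch of the procedure. The function names are mine, and any pixel values you feed in would come from your own measurement of the figure, not from data I am reproducing here:

```python
import math

def calibrate_b(a_i14, a_t16, advantage=0.35):
    """Calibration factor b from Intel's claimed 14/16 nm scaling advantage.

    a_i14 and a_t16 are the vertical pixel values of the Intel 14 nm and
    TSMC 16 nm data points: b = (a_T16 - a_I14) / log(1 / (1 - 0.35)).
    """
    return (a_t16 - a_i14) / math.log10(1 / (1 - advantage))

def pixel_to_scale(a, b, c):
    """Invert a = b * log10(x) + c to recover the scale x in arbitrary units."""
    return 10 ** ((a - c) / b)
```

Note that b comes out negative when the pixel origin is at the top of the image, as it is in most image formats; the inversion handles that automatically, and the offset c just sets the arbitrary units.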
Table 1: Scale of Intel and TSMC processes in arbitrary units from 32/28 nm to 10 nm according to Intel's internal analysis.
Figure 1: A plot of Intel and TSMC process scale as a function of node using data from Intel's internal analysis.
Now, with this data in hand, I was ready to test whether the Intel plot is quantitative or qualitative. To do this, I calculated the scaling advantage between Intel and TSMC at 10 nm and got 47.1%. Taking into account the small error introduced when extracting pixel values from the original graph, this value is sufficiently close to the ~45% advantage claimed by Intel that I feel confident concluding that Intel's figure is a quantitatively correct plot of values calculated by Intel. In other words, Intel's figure is a plot of real data, not just dots drawn on a set of coordinate axes and connected by lines.
Back To The TSMC Rebuttal
Next, I recreated the TSMC plot using the information given in the TSMC conference call. In the call TSMC claimed a ~15% scaling improvement from 20 nm to 16 nm FinFET. This is followed by "industry leading performance and density" at 10 nm. Let's assume that this means density parity with Intel at 10 nm like the TSMC plot showed. Using these parameters and the data plotted above, the TSMC plot should have looked like this.
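Constructing the rebuttal curve from the conference call takes only two steps. A sketch of that construction, using placeholder arbitrary-unit values rather than the actual numbers from Table 1:

```python
# Placeholder arbitrary-unit values -- NOT the actual Table 1 numbers.
scale = {"TSMC 20nm": 1.00, "Intel 10nm": 0.33}

# Rebuttal point 1: 16 nm FinFET chips are ~15% smaller than 20 nm.
scale["TSMC 16nm (rebuttal)"] = (1 - 0.15) * scale["TSMC 20nm"]

# Rebuttal point 2: density parity with Intel at 10 nm, as drawn in
# the TSMC figure.
scale["TSMC 10nm (rebuttal)"] = scale["Intel 10nm"]

# Node-to-node improvement TSMC would then need from 16 nm to 10 nm
# to reach that parity point.
required = 1 - scale["TSMC 10nm (rebuttal)"] / scale["TSMC 16nm (rebuttal)"]
print(required)
```

With the real Table 1 values in place of the placeholders, this is exactly the calculation behind the rebuttal curve and the node-to-node improvement figures discussed later in this article.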
Figure 2: A plot of Intel and TSMC process scale as a function of node using data from Intel's internal analysis. Also shown is data calculated from remarks made during a TSMC rebuttal to Intel's claims.
Take note of the position of the TSMC 16 nm rebuttal data point in my plot relative to the position of this "data point" in the TSMC-generated figure. They are different. There are other quantitative discrepancies too. My conclusion is that TSMC's figure is just a drawing, not a plot of actual data values. It may be qualitatively correct, but it definitely isn't quantitatively correct. This raises the question: if Intel's data is outdated and TSMC has more recent data to refute Intel's scaling advantage claim, why is TSMC not sharing this data with investors?
I contacted TSMC to share my analysis and to ask whether the figure shown in rebuttal was a plot of actual data or just a qualitative drawing. TSMC confirmed that the figure is just qualitative and it must be so since "no one has the actual data." Below is the TSMC reply to my inquiry.
Ms. Elizabeth Sun, Director, TSMC Corporate Communication Division
Thank you for your message. Regarding your question about the area scaling chart, the plot is qualitative since no one has the actual data. That said, the 15% area scaling from 20nm to 16FF is made possible by optimizing our 16FF rules to provide smaller cells which can reduce chip area. Meanwhile, I would like to point out one dimension that has not been captured in this graph, and that will be "timing" for the nodes. The reason the area gain on 16FF vs. 20 SoC is smaller compared to other node-to-node transition is because we have decided to leverage our 20SoC product ramp experience and make 16FF ready sooner. Please note that the timing difference between our 20SoC risk production and 16FF risk production is only one year (instead of the typical 2 year gap). However, we will be very aggressive on 10nm in density and performance, which is captured on the graph. Currently, in our roadmap, 10nm will follow 16FF in 2 years. I hope the above clarification is helpful.
Based on my quantification of Intel's scaling-advantage slide and remarks made in a TSMC rebuttal, I discovered that the figure on Intel's slide is likely a plot of real data while the figure presented in TSMC's rebuttal is not. In fact, TSMC confirmed that their figure is just a qualitatively composed drawing. During the conference call, TSMC countered Intel's claim by stating that Intel's data was outdated. Whether or not Intel's data ultimately proves correct, my analysis shows that it is real. On the other hand, my investigation concludes that TSMC didn't actually refute Intel's claims with real data of their own. TSMC told a good story, but in light of these facts, I have trouble believing it.
Epilogue: Matching Intel at 10 nm, Questionable Math, And Other Interesting Observations
Even though it is just qualitative, let's assume for the moment that the TSMC rebuttal is actual data and consider the implications. In the following table, I again show the scale (in arbitrary units) of the 32/28 nm to 10 nm TSMC and Intel nodes. I also show the scale of the TSMC nodes based on remarks made during TSMC's conference call.
| Node (INTC/TSMC) | TSMC | INTC | TSMC (REBUTTAL) |
Table 2: Scale of Intel and TSMC processes in arbitrary units from 32/28 nm to 10 nm according to Intel's internal analysis. Also shown is TSMC data calculated from remarks made during a TSMC rebuttal to Intel's claims.
Next, I show the node-to-node improvements in scaling these data represent.
| Node (INTC/TSMC) | TSMC | INTC | TSMC (REBUTTAL) |
Table 3: Node-to-node scaling improvements for Intel and TSMC processes from 32/28 nm to 10 nm according to Intel's internal analysis. Also shown is TSMC data calculated from remarks made during a TSMC rebuttal to Intel's claims.
One important thing to consider from these numbers is that if TSMC is forecasting process density parity with Intel at 10 nm, then their own "data" suggests they would have to show a 62% improvement from their 16 nm process. This would be the largest node-to-node improvement exhibited by either company since at least 32/28 nm. Is such an improvement actually realistic? Share your thoughts below.
Another implication of my analysis is that even if TSMC sees a 15% improvement going from 20 nm planar to 16 nm FinFET, Intel's data suggests Intel will still have a 22% scaling advantage at 14/16 nm. And if Intel gets to 10 nm before TSMC gets off of 16 nm, Intel will still enjoy a 62% scaling advantage. So even if the TSMC rebuttal represented actual forecast data, Intel's data suggests it will still have a large scaling advantage over the next couple of years.
Finally, a rough analysis of the plot shown in the TSMC rebuttal indicates that the 16 nm data point is about 15% closer to the horizontal axis than the one preceding it. Perhaps the person making the plot thought this was the correct way to show a 15% decrease in scale from 20 nm. However, reducing a log-scaled position by 15% is equivalent to raising the actual data value to the (1 - 0.15) power. Doing so is mathematically nonsensical, since the proportional change produced by raising a number to a power depends on the number itself.
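To make the error concrete, here is a quick sketch contrasting a genuine 15% reduction with what moving the plotted point 15% toward the axis actually implies (the starting value of 100 is just an example):

```python
# An example scale value in arbitrary units.
x = 100.0

# A genuine 15% smaller chip:
correct = (1 - 0.15) * x  # 85.0

# Moving the plotted point (proportional to log10(x)) 15% closer to
# the axis instead: 0.85 * log10(x) = log10(x ** 0.85)
drawn = x ** 0.85  # ~50.1, roughly a 50% drop

# The distortion depends on the starting value: for x = 10 the same
# move gives 10 ** 0.85 (~7.08), only about a 29% drop.
print(correct, drawn)
```

So the same "15% on the page" corresponds to a ~50% shrink at one value and a ~29% shrink at another, which is exactly why the construction is meaningless as a representation of data.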
Disclosure: I am long INTC. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.