"...but it is inevitable that Intel's GPU will soon get good enough to obsolete all discrete GPUs from AMD and NVDA."
The premise of the article was that Intel will eventually integrate all functions necessary for I/O, display, communications, etc. onto a single chip.
I would like to take a moment to demonstrate why I think that claim is unlikely to hold at any point in the near term, specifically with regard to graphics.
Intel's Best Integrated Graphics are 'Adequate', but Not Good
Intel's integrated graphics have been getting significantly better; I will not question that. The graph above shows roughly an 87% increase in performance between their 3rd and 4th Generation Core CPUs.
3DMark06 is an older test used to measure graphics performance, but it does make sense to use in this frame of reference since Intel is looking back to 2006 for comparison.
So you can see above that Intel does indeed increase graphics performance in accordance with their PR slide, for a small and expensive subset of their 4th Generation Core CPUs (source: Anandtech).
The majority of the CPUs sold will most likely use the less expensive HD4600 integrated graphics part or below. Looking at these cheaper parts, we see a more modest increase in performance between generations (HD4000 to HD4600 is roughly a 30% increase). Also note that an Advanced Micro Devices (AMD) mobile APU has as much raw graphical horsepower as Intel's 3rd Generation iGPU in this test.
Looking at real world performance for the expensive integrated GPUs and picking a couple benchmarks at "random":
I chose the first two benchmarks because they are favorable situations for Intel's best integrated GPUs, and the last benchmark to illustrate a point at the end of the article. The best GPUs are also the biggest, at roughly 260 mm^2 for the entire GPU portion of the CPU. I recommend reading the articles on Anandtech.com comparing HD5000 vs. HD4000 and HD4600 vs. AMD's equivalent solution.
In contrast to these expensive integrated solutions, Lenovo has a laptop sporting two GT 650M discrete GPUs for under $1,000. OEMs probably don't pay anywhere near these prices for these solutions, but my bet is you will be hard-pressed to find an HD5200-equipped laptop at this price, and its graphics performance will probably be worse.
My final point regarding Intel's top-tier graphics offerings is this: the best case for Intel is that some newer titles are barely playable at less than 1080p resolution. This is my speculation, but other than Apple's MacBook Pro I am not seeing a very large market for the expensive Iris Pro-equipped CPUs. They are barely good enough for gaming, so the added money is a waste for the consumer who actually cares about graphics performance. HD5000 or lower will drive higher-resolution displays, but none of these parts hit the high-water mark for playing games more than casually.
Intel Loses the Performance/Price War
The flagship desktop APU from AMD is currently the "Richland" based A10-6800K, which can be had for around $135-$150. It has a die size of ~250 mm^2, is built on a 32nm process, and is a refresh of a processor that launched last year. The GPU portion of the die is roughly 42% of the total area, or ~100 mm^2.
Intel's top tier mainstream desktop CPU is the i7-4770k. It has a die size of roughly 177 mm^2, and costs approximately $340. The GPU portion is ~90 mm^2 (see the Anand articles linked to previously).
When comparing these two chips, Anand stated, "Despite Haswell's arrival on the desktop, AMD is in no trouble at all from a graphics perspective."
Given that the AMD APU offers roughly 30% more GPU horsepower (based on Anand's review) at less than half the price of Intel's HD4600-equipped i7, AMD is at a distinct advantage. The mobile benchmarks for Intel's HD 4400 solution compared to AMD's A10-5750M mobile APU also demonstrate an advantage for AMD.
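To make the price/performance gap concrete, here is a quick back-of-the-envelope calculation using the figures above. The prices and the ~30% performance delta are this article's numbers (street prices and Anand's review), and normalizing Intel's GPU performance to 1.0 is my own simplifying assumption:

```python
# Rough GPU performance-per-dollar comparison, using this article's
# figures: ~$140 street price for the A10-6800K, ~$340 for the i7-4770K,
# and Anand's ~30% GPU performance advantage for the AMD part.
amd_price = 140.0      # A10-6800K, roughly $135-$150
intel_price = 340.0    # i7-4770K

intel_gpu_perf = 1.0   # normalize HD4600 performance to 1.0 (assumption)
amd_gpu_perf = 1.3     # ~30% faster per Anand's review

amd_perf_per_dollar = amd_gpu_perf / amd_price
intel_perf_per_dollar = intel_gpu_perf / intel_price

# AMD delivers roughly 3x the GPU performance per dollar in this comparison.
print(round(amd_perf_per_dollar / intel_perf_per_dollar, 1))  # ~3.2
```

The exact ratio moves around with street pricing, but even generous assumptions for Intel leave AMD with a multiple-fold advantage in graphics per dollar.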
For the amount of die real estate spent on graphics, AMD has the advantage over Intel despite a greater-than-one-node process disadvantage, and this is with an aging GPU architecture (VLIW4) that is being replaced by 28 nm GCN cores.
Intel's top integrated graphics solutions have adequate performance for light gaming, and are surprisingly good at GPU compute. But they're very expensive. Most consumers that care about graphics would be better served by a discrete solution from AMD or Nvidia (NVDA), or an APU from AMD in the foreseeable future. The screenshots below are captured from Valve's Steam website. Steam is a massive online gaming service that collects data on user hardware to come up with graphs like the ones below. I have read of some potential issues in data collection, so the data below, in my opinion, is best used to look at trends.
Intel has managed to capture a portion of the graphics market, and its share is undoubtedly larger among non-gamers. Intel owns ~85% of the total PC market, but looking specifically at gaming, it appears less dominant.
I have estimated before that 2 GCN cores (GCN cores are repeatable blocks of IP that AMD scales and integrates into GPUs and APUs) occupy roughly 24 mm^2.
The integrated GPUs on the AMD parts I have used for demonstrations throughout this article are built on an architecture that is older, larger, less efficient, and slower than these GCN cores. Later this year or early next year (if the leaks turn out to be true), AMD will update its APU lineup with higher-performing CPU cores and an integrated GPU built on the updated GCN cores.
These new APUs will most likely feature around 8 GCN cores, given that AMD shoots for around 40% of die area dedicated to the GPU and an overall die size of ~200 mm^2. A Radeon HD 7750 also uses 8 GCN cores.
In Battlefield 3, Intel's Iris Pro was unable to break 30 FPS, and HD4600 managed around 15 FPS. A GPU with 8 GCN cores pushes 17.7 FPS with AF (anisotropic filtering) set to high; for non-technical readers, higher AF settings consume more GPU resources and lower the frame rate. Note that the AMD chip will feature a new memory architecture, and no official specs have been released for Kaveri yet, so these estimates are ballpark figures.
For comparison, AMD's A4-5000 low-power APU renders Skyrim playable, but just barely. This low-power APU uses a single-channel memory controller, which imposes roughly an automatic 25% performance hit, and it has only 2 GCN cores.
With AF set to 0, Kaveri could possibly match the performance of Iris Pro parts, and would have a substantial lead on HD4600. The GPU portion of Kaveri should be roughly ~96 mm^2, whereas the HD5200 uses almost 3x that much space.
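The die-area estimate above works out as follows. The GCN core size is my earlier estimate, the core count is from leaks, and no official Kaveri specs exist, so this is a sketch rather than a spec:

```python
# Estimated GPU die area for Kaveri vs. Iris Pro (HD5200), using this
# article's estimates -- no official Kaveri specifications exist yet.
gcn_pair_area_mm2 = 24.0   # ~24 mm^2 per 2 GCN cores (my earlier estimate)
kaveri_gcn_cores = 8       # rumored core count

kaveri_gpu_area = (gcn_pair_area_mm2 / 2) * kaveri_gcn_cores
iris_pro_gpu_area = 260.0  # ~260 mm^2 GPU portion of the CPU (from above)

print(kaveri_gpu_area)                                # 96.0 mm^2
print(round(kaveri_gpu_area / iris_pro_gpu_area, 2))  # ~0.37 of Iris Pro's area
```

In other words, Kaveri's GPU would occupy a bit over a third of the silicon Intel spends on HD5200 for potentially comparable performance.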
This is significant because Intel's parts are built on a 22 nm node. AMD, building on a process one node behind, has the potential to offer performance equal to Intel's most expensive solution using about 35% of the die area. This is based on the best assumptions I can make at this time, and I will revisit the analysis when Kaveri launches.
For these reasons, I am not worried about Intel making discrete GPUs obsolete at any point in the next few years. Monitor technology is continually improving, and 4K displays will likely be the norm in a few years. The HD5200 GPU is incapable of playing newer games at less than 1080p resolution now, and even if performance doubles, 4K displays are ~8.3 megapixels compared to ~2 megapixels for 1080p.
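The pixel-count gap is straightforward arithmetic (assuming the common 3840x2160 "4K UHD" resolution):

```python
# Pixel counts: 4K UHD vs. 1080p. A 4K panel has ~4x the pixels,
# so GPU workload at native resolution scales roughly 4x as well.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

print(round(pixels_1080p / 1e6, 1))  # ~2.1 megapixels
print(round(pixels_4k / 1e6, 1))     # ~8.3 megapixels
print(pixels_4k // pixels_1080p)     # 4x the pixels to render
```

So even a doubling of integrated GPU performance leaves Intel well short of the roughly 4x workload increase that 4K gaming implies.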
The biggest problem for Nvidia is that it lacks AMD's key advantage: Nvidia does not build chips with integrated graphics capable of running x86-based computers. As for AMD, its current 32nm APUs with outdated GPU technology more than hold their own against Intel's GPUs, which are built at a 1.5-node advantage. Intel should move to 14nm next year, so AMD may end up competing against 14nm Intel silicon with 28nm silicon, roughly the same 1.5-node disadvantage at which AMD currently outperforms Intel given equal die area spent on graphics.
Intel will likely show a large improvement in shifting from 22nm to 14nm, given that moving its integrated GPUs from 32nm to 22nm yielded roughly a 40% performance bump. 14nm arrives next year, with a product refresh on 14nm most likely in 2015, so it will probably be 2016 before we see 10nm. Given that 1080p gaming is pretty much out of the question for Intel's most expensive integrated GPUs, unless Intel radically improves its GPU technology I do not see a collapse of the discrete GPU within the next three years, as long as the video game market itself doesn't collapse.
Additional disclosure: I actively trade my AMD position. I may add to or liquidate my position at any time.