Nvidia (NASDAQ:NVDA) has recently released the GTX 1060, and the accompanying reviews show that Pascal is competitive in the mid-to-low-end market too, providing a $249 product that is generally faster than AMD's (NASDAQ:AMD) $239 RX 480.
Obviously, the results depend on which API (Application Programming Interface) is used (OpenGL, Vulkan, DX12, DX11), which game is benchmarked and whether the game is AMD- or Nvidia-biased. Ultimately, the results still depend on driver optimization, even with DX12, as I will show later in the article.
What I want to focus on is the power efficiency story behind the Pascal architecture and TSMC's process. The GTX 1060 is the perfect example to analyze why Nvidia has made a very sensible move and has looked far ahead in the mobile sector. Not to mention that the early release of the new Titan X (August 2nd) sets a new absolute performance bar for the consumer video card market, which could produce a halo effect on lower-tier products.
GTX 1060 Reviews
First of all, the GTX 1060 reveals itself as a good product with a meaningful but not astonishing performance advantage over the AMD RX 480. Obviously, this kind of advantage does not translate into any considerable real-world playability gain, apart from a very few games such as Anno 2205, Battlefield 4 or Project Cars.
Considering Tom's Hardware's review, we can see that the GTX 1060 leads by an average of +10% over the RX 480. The GTX 1060 generally extends its lead under DX11, but things get different and more competitive under DX12:
- Ashes of the Singularity - the RX 480 beats the GTX 1060 by 4% on Tom's Hardware, but the difference narrows to 1% on Hardware Upgrade and Guru3D, the two cards are on par on Gamers Nexus, and the GTX 1060 leads by 2% on Ars Technica;
- Hitman - the RX 480 beats the GTX 1060 by 17% on Guru3D, while the lead decreases to 13% on Hardware Upgrade and collapses to only 3% on Ars Technica. This is clearly an AMD-biased game; in fact, AMD always performs well in it even under DX11, and previous Hitman games showed similar behavior;
- Rise of the Tomb Raider - the GTX 1060 leads by 19% on Guru3D, by 16% on Hardware Upgrade and by 30% on Ars Technica. ROTR is clearly an Nvidia-biased game, and the results do not change much between DX11 and DX12;
- Total War: Warhammer - the RX 480 leads by 3% on Guru3D, and the two cards are on par on Hardware Upgrade.
All of this is to say that there is effective average performance parity between these two video cards in DX12 games, or a very slight advantage for the RX 480 (Hitman is clearly AMD-biased given its similar DX11 and DX12 behavior, ROTR is clearly Nvidia-biased, and the other two games show too narrow a difference to matter). DX11 games, on the other hand, are consistently in Nvidia's favor, and they still make up the vast majority of titles, but DX12 will gradually become the main focus over the coming months and years. The price factor will obviously be very important in determining which product is the better value, even though the $10 (or €10) price difference is negligible.
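The parity claim can be illustrated with a back-of-the-envelope average over the per-site deltas listed above (a rough sketch: positive numbers mean an RX 480 lead, and the two titles I flag as vendor-biased are excluded from the final figure):

```python
# DX12 deltas from the review round-up above, in percent.
# Positive = RX 480 leads, negative = GTX 1060 leads.
deltas = {
    "Ashes of the Singularity": [4, 1, 1, 0, -2],  # Tom's, HW Upgrade, Guru3D, Gamers Nexus, Ars
    "Hitman": [17, 13, 3],                         # AMD-biased title
    "Rise of the Tomb Raider": [-19, -16, -30],    # Nvidia-biased title
    "Total War: Warhammer": [3, 0],
}

per_game = {game: sum(v) / len(v) for game, v in deltas.items()}

# Averaging only the two neutral titles shows rough parity,
# with a very slight RX 480 edge (around +1%).
neutral = ["Ashes of the Singularity", "Total War: Warhammer"]
avg = sum(per_game[g] for g in neutral) / len(neutral)
print(f"Average neutral-title DX12 delta: {avg:+.1f}% toward the RX 480")
```

Including the two biased titles would merely let Hitman and ROTR cancel each other out, which is why I treat the neutral titles as the fairer sample.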
Nvidia Drivers DX12 Considerations
Analyzing the various reviews since the GTX 1080 release, and the following ones covering the GTX 1070, the RX 480 and the GTX 1060, I noticed some peculiar things worth underlining about Nvidia's drivers: for example, the GTX 1080 improves a lot in Ashes of the Singularity when switching from DX11 to DX12, but the gain is smaller for the GTX 1070 and slightly negative for the GTX 1060. At the same time, the GTX 1070 and GTX 1080 still lose a lot of performance under DX12 in Total War: Warhammer, while the GTX 1080 gains a good 20% in DOOM under Vulkan, a gain from which the GTX 1060 does not benefit at all.
(If you look at Total War: Warhammer, you will see that a Fury X scores 73 fps in DX11 and 96 fps in DX12, while the GTX 1070 scores 117 fps in DX11 and 92 fps in DX12. This is caused by very poor DX11 optimization for the GCN architecture and poor DX12 optimization for the Pascal/Maxwell architectures. It does not happen in Ashes of the Singularity, however, where some optimization has been carried out and dynamic load balancing is probably enabled.)
It is clear that dynamic load balancing is working better day by day on the GTX 1080, and if you recall my previous articles, you will remember that this is a driver-enabled feature whose performance is essentially driver-dependent.
This underlines a couple of things. The negative: Pascal is still extremely driver-dependent, and its asynchronous compute performance, in the form of dynamic load balancing, is still limited. The positive: Pascal still has good performance potential, especially considering that the GTX 1070 and GTX 1060 are very likely to improve in the coming weeks.
Not to mention that the GTX 1080 gains up to 40% under DX12 in Ashes of the Singularity's heavy-batches mode, meaning that Nvidia is working hard on this aspect and that its async compute/dynamic load balancing delivers when the drivers are well crafted.
But what is really interesting is Tom's Hardware's analysis of the GTX 1060's power consumption compared to the RX 480's, because the outcome has very important implications for the mobile market. Let's start with the analysis.
Tom's Hardware uses Metro Last Light as an example of a good compromise between power consumption and performance, with no real clear bias toward Nvidia or AMD. The results are the following (with power-save mode enabled):
- 1080p - to generate 130 fps, the GTX 1060 consumes 61W while the RX 480 consumes 139W. Therefore, the GTX 1060 is more power efficient by +128% at this performance level.
- 1440p - to generate 90 fps, the GTX 1060 consumes 62W while the RX 480 consumes 146W. Therefore, the GTX 1060 is more power efficient by +135% at this performance level.
- 2160p - at this resolution, the GTX 1060 scores 50.3 fps while consuming only 62W, while the RX 480 scores only 48.6 fps while consuming 164W. Here an equal-performance comparison is not possible, since the frame rates differ, but the efficiency gap is obviously even wider in favor of the Nvidia product.
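As a quick sanity check on the percentages above (a minimal sketch using only the fps and wattage figures quoted from Tom's Hardware), efficiency can be expressed as frames per watt:

```python
# Metro Last Light figures quoted above: (fps, watts) per card.
results = {
    "1080p": {"GTX 1060": (130, 61), "RX 480": (130, 139)},
    "1440p": {"GTX 1060": (90, 62), "RX 480": (90, 146)},
    "2160p": {"GTX 1060": (50.3, 62), "RX 480": (48.6, 164)},
}

for res, cards in results.items():
    # Efficiency = frames per watt; advantage = relative gap in percent.
    eff = {name: fps / watts for name, (fps, watts) in cards.items()}
    advantage = (eff["GTX 1060"] / eff["RX 480"] - 1) * 100
    print(f"{res}: GTX 1060 efficiency advantage = +{advantage:.0f}%")
```

This reproduces the +128% and +135% figures, and shows that even at 2160p, where the frame rates differ, a fps-per-watt comparison is straightforward and comes out at roughly +174% in the GTX 1060's favor.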
Now, the absolute performance of these video cards differs from what we see here, but the power efficiency gap cannot be disregarded. And it becomes fundamental for the mobile sector.
It must be underlined that the GTX 1060 (without power-save settings) generally consumes 89W in 1080p games and 110W in 1440p games. This means the card will only have to lower its clock slightly to meet the expected 65W TDP of the GTX 1060 Mobile, thanks to the reduced chip voltage required at lower frequencies.
Looking at the previous Tom's Hardware chart, we see that moving from the typical 1080p power consumption (89W) down to 65W costs around 10% of performance (and this card is not designed to run at lower frequencies and voltages, so these results are, if anything, a little pessimistic). In fact, the first leaked benchmarks show a GTX 1060 Mobile scoring around 10295 points in 3DMark Fire Strike, while the desktop GTX 1060 scores roughly 11225 points: -8% performance in exchange for -28% power consumption looks like a very good compromise to me. The GTX 1060 simply has to run at around 1500 MHz to meet the 65W TDP for the mobile version.
This essentially means that Nvidia is able to bring a full-potential GTX 1060 to the mobile market, and we can expect similar behavior from the GTX 1070 (85W) and the GTX 1080 (125W). The AMD solution, by contrast, starts from a poor performance/power ratio, since its 140-150W performance cannot even match Nvidia's 60W performance in certain scenarios (obviously, things will be different in an AMD-biased game such as Hitman, and the same goes for Nvidia-biased games such as Project Cars or Anno 2205). This does not surprise me, as I have already shown in my previous articles.

A hot and/or power-hungry video card is generally not a problem inside a desktop case: there is enough space to dissipate the hot air, PSUs are built to withstand high power requirements, and video cards can employ large heat sinks to dissipate the generated heat. In such a situation, AMD can partially counterbalance its lower efficiency with higher power consumption to obtain similar performance. On the mobile side, instead, the constraints are heavy: power consumption and heat generation must be greatly reduced, power spikes must be contained, and the available space is nothing compared to a desktop environment. In such a scenario, the Pascal architecture looks favored.
Nvidia has decided to take this route, conscious that the mobile market is outgrowing the desktop one and that the mobile discrete video card market is absorbing the desktop discrete video card market.
Nvidia has also paper-launched its new flagship video card for the consumer market, the new Titan X. This card was initially expected at the end of 2016 or in 1Q 2017, but Nvidia has decided to bring its release forward, perhaps because it intends to reserve HBM 2.0 modules for further video cards and/or only for professional products and the future 2Q 2017 Volta products. This choice has obviously pushed Nvidia to employ GDDR5X modules in lieu of HBM 2.0 modules.
By the way, this card provides tremendous performance and computational power with its 11 TFLOPS, 3584 CUDA cores and 480 GB/s of bandwidth. It may be the first truly 4K-capable video card on the market, and I personally expect it to hold that position even against Vega in 1Q 2017, since I do not expect Vega 10 to go beyond 4096 cores at a maximum frequency of 1350 MHz (for a projected maximum computational power of around 11 TFLOPS at a 275-300W TDP). AMD may still push GCN further and release a Vega 11 with 6144 GCN cores, but such a GPU would have to set its frequency at 1000 MHz or below to stay under a 300W TDP (personally, I am more inclined toward the idea of a Vega 11 with 4608 GCN cores), and Vega 10 would have to lower its frequencies so that the two products' performance would not overlap. In addition, the die area would be very large and very difficult to achieve. Taking for granted that the new architectures do not bring any big IPC improvement, and considering how the GTX 1080 performs against the Fury X (8.6 TFLOPS, and the GTX 1080 generally never hits thermal throttling or TDP-limited frequency drops), such a claim seems sensible. Surely, this Titan X looks to be a real beast.
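The TFLOPS figures above follow from the standard FP32 throughput formula, 2 FLOPs (one fused multiply-add) per core per clock. Note that the ~1531 MHz boost clock for the Titan X is my assumption for this sketch, not a figure stated above:

```python
def fp32_tflops(cores, clock_mhz):
    # FP32 throughput: 2 FLOPs (fused multiply-add) per core per clock.
    return 2 * cores * clock_mhz * 1e6 / 1e12

# Titan X: 3584 CUDA cores at an assumed ~1531 MHz boost clock.
titan_x = fp32_tflops(3584, 1531)
# Projected Vega 10: 4096 cores at the 1350 MHz maximum assumed above.
vega_10 = fp32_tflops(4096, 1350)
print(f"Titan X: {titan_x:.1f} TFLOPS, projected Vega 10: {vega_10:.1f} TFLOPS")
```

Both come out near 11 TFLOPS, which is why I expect the two flagships to sit in the same raw-compute class, with real-world performance decided by architecture and drivers.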
In any case, at that point Nvidia would release the new Volta architecture by May 2017, meaning that AMD would face renewed competition against a new architecture. It will be interesting to see the relative performance next year.
Anyway, the real battle between AMD and Nvidia will be fought over prices and availability.
In Europe, the GTX 1060 already looks available in Italy, Germany, France and other countries, starting from €270-275 (the official price suggested by Nvidia is €279), while the RX 480 is generally available starting from €260-265. The price difference therefore looks very narrow at the moment, in line with the prices announced by Nvidia and AMD. At this point, both cards are likely to sell well, and the sales driver will probably be both actual and projected DX11/DX12 performance and the flagship halo factor (part of the customer base may read the lack of a flagship proposal from AMD as a negative signal about the Polaris architecture; the same goes for the huge performance gap between the RX 480 and the GTX 1080/1070 at similar power consumption). It must also be remembered that the GTX 1060 carries a slightly higher price but employs fewer GDDR5 memory modules, meaning that Nvidia is very likely to earn higher margins in this market.
But what is very important is that Pascal has a power efficiency advantage in the mobile sector. AMD is expected to employ Polaris 11 for the M480X and M485X, while Polaris 10 will be employed for the M490X and M495X (the latter would be the flagship solution with a 125W TDP). This means AMD's top mobile solution would roughly match the performance of an RX 470, while Nvidia's mobile solutions are expected to perform very similarly to their desktop counterparts. The mobile GTX 1080 is expected to employ GDDR5 modules and slightly lower frequencies; the GTX 1070 is expected to do the same but with 128 more CUDA cores, while the GTX 1060 will only reduce its GPU frequencies a little.
It is clear that Nvidia is trying to provide very similar performance in both the desktop and mobile markets, and this could be a very effective marketing move. Gaming notebooks are growing despite the general PC decline: notebook sales are growing in the US and Eastern Europe, and even Western Europe is looking better, which is good news for this sector and for hardware vendors.
I expect Nvidia to thrive in the mobile market and to benefit from its Pascal architecture, which looks really attractive and efficient for mobile. AMD, at least with Polaris, may be competitive only in the low end of that market, unless it still has some secret optimization in the works.
If you recall, I previously advised selling Nvidia shares in order to take a new long position after a possible retracement of the stock, since I considered the chance that Polaris could turn out to be a competitive and efficient GPU. However, Polaris has partially fallen short of the leaks and expectations, and it has also brought some power issues to light. As a result, Nvidia's stock price did not decline, and the company remains solid on its GP100 cards, its new compact servers, the Drive PX2 platform, the joint venture with IBM and the highly efficient Pascal architecture for the consumer market. Considering that Nvidia is well positioned in the mobile market, and that it will supply its Tegra SoCs to Nintendo for the upcoming Nintendo NX (probably the new Pascal-powered Tegra), new revenues and profits are shaping up for Nvidia. Nevertheless, I reiterate my sell rating on a 12-month horizon, given that Nvidia's stock has risen a lot in recent months and sits in an overpriced range: I personally would wait for a retracement, which is likely to happen in the coming months. But if you love risk, long (and relatively brief) positions could make sense, since the Nintendo NX announcement could be an additional catalyst for Nvidia. AMD, on the contrary, does not look competitive in the mobile market, and it only deserves a speculative position around the first reliable (leaked) benchmarks of ZEN, probably two months before its release.
Disclosure: I/we have no positions in any stocks mentioned, but may initiate a long position in NVDA over the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Additional disclosure: The author does not guarantee the performance of any investments and potential investors should always do their own due diligence before making any investment decisions. Although the author believes that the information presented here is correct to the best of his knowledge, no warranties are made and potential investors should always conduct their own independent research before making any investment decisions. Investing carries risk of loss and is not suitable for all individuals.