Seeking Alpha

DigiHound

  • AMD Fell Short Of Expectations, So What Now? [View article]
    "- Management hints at SoC and server wins turned out to be hot air"

    AMD has said it expects $250-500M in revenue over the lifetime of each of two additional major product wins. If the lifetime of each is 5 years, that's an extra $100-200M a year. Not too shabby.

    If, on the other hand, these are truly long-term embedded plays of 6-8 years, then we're looking at a much lower annual figure. Until the wins are announced, they're vaporware.
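
    Back-of-the-envelope, and purely illustrative -- reading the $250-500M as per-win lifetime revenue (which is how the $100-200M/year figure above works out), with the win count and lifetimes taken from this comment and nothing else assumed:

    # Rough annualization of the stated lifetime revenue. Illustrative only;
    # the per-win reading and the lifetimes are assumptions from this comment,
    # not company guidance.
    wins = 2
    lifetime_low, lifetime_high = 250e6, 500e6  # lifetime revenue per win, USD

    for years in (5, 6, 8):
        low = wins * lifetime_low / years
        high = wins * lifetime_high / years
        print(f"{years}-year lifetime: ${low / 1e6:.0f}M-${high / 1e6:.0f}M per year")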

    One investment from Verizon doth not a successful platform make. In any context.
    Jul 18 10:14 AM | 2 Likes
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    That's not really the kind of mention I was talking about.

    AMD rarely talks about its cohesive plans for SeaMicro. Yes, it bought FreedomFabric and is doing some work there on future product qualification -- you can find this information -- but in its broad messaging, SeaMicro just doesn't come up. When AMD talks about future server parts, it doesn't exactly put SeaMicro front and center.

    Obviously they had a reason for buying the company, but I think there was some expectation that SM would be fairly central to their future server plans. Maybe it will be in the future, but so far it's been a very muted discussion.
    Jul 17 04:31 PM
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    You know, it's amazing how much AMD doesn't talk about SeaMicro. Ever. At all.

    Not even a tiny bit.

    As far as I know, GF's 20nm process wasn't canceled, so any design tapeouts there will continue. We don't know yet if the ARM/x86 cores coming in 2016 will be on 20nm (early in the year) or 14nm (likely later in the year).
    Jul 17 03:08 PM
  • BlackBerry Might Make It [View article]
    "BB is trying to sell 10 million handsets a year, and it looks like they will achieve that goal, how can you flop when achieving your goal?"

    Easily?

    You can flop by having a cost structure that isn't supported by moving 10 million handsets.

    You can flop by failing to attract the developers and customer interest that will maintain and expand that sales figure long term.

    You can succeed just enough to keep the lights on, but find yourself boxed into a niche market, depending on a few big contract wins to pay the bills while the market moves on past you.

    You can flop in many, many, *many* ways, all while meeting a sales target.
    Jul 15 04:03 PM | 4 Likes
  • BlackBerry Might Make It [View article]
    That's hilarious, considering the company has bled the overwhelming majority of its value in a downward spiral.

    Market share and net profit are the only things that count here. Anything but meaningful, sustained improvement in those two metrics is just running out the clock.

    I'm not anti-BBM -- in fact, I respect them for trying to make a go of it -- but I honestly think the ship has sailed on this one.
    Jul 15 01:38 PM | 7 Likes
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    "DigiHound, my understanding is that most games are not CPU constrained, they are mostly GPU constrained."

    Depends on the title. The best way to understand it is this: for any given modern title (meaning, not something 5-6 years old), there's almost always a point where it *becomes* GPU-constrained.

    Whether or not you hit that point depends entirely on your graphics settings. But you have to understand that AMD's single-thread performance is so poor that it really *does* matter, even in many games.

    http://bit.ly/OyF3dG

    Check just the top two results -- the A10-7850K + R7 260X and the Core i3-4330 + R7 260X. Note that the R7 260X is often notably faster when paired with the Core i3-4330, even though that's just a dual-core chip + Hyper-Threading while the AMD A10-7850K is a full quad-core. In some games, it matters. In some games, it doesn't.

    "When there is a CPU constraint, it's mostly due to the inefficient Microsoft DX system."

    This is something of a misconception. Let me explain.

    If you run Mantle on an AMD APU with no dGPU, you may pick up 5-10% more frames. The most I've ever gotten in an actual game is, I think, 10%. The onboard GPU is too memory bandwidth limited to benefit much.

    If you run Mantle on an Intel chip with an R9 290X, you might pick up 8-12% (single GPU mode). The Intel CPU is fast enough that it's just not very constrained.

    If you run Mantle on an AMD A10-7850K + R9 290X, you may see a 25-35% performance improvement. Why? Because the AMD CPU isn't fast enough to keep the DX11 thread fed well.

    Check the Mantle reviews in various titles and you'll find the biggest gains are *always* for lopsided AMD CPU + high-end Radeon GPU configurations. Why? Because AMD's CPUs *need* that offload far more than Intel's do.

    That doesn't mean AMD is lying when it talks about reducing the cost of certain functions or lowering driver overhead -- it just means that Intel's CPUs are powerful enough to burn through that overhead without a problem, whereas it causes a significant issue for AMD.
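
    A minimal sketch of that reasoning, assuming a toy max(CPU, GPU) frame-time model; the numbers are invented, chosen only to reproduce the shape of the results above, not measured from anything:

    # Toy model: each frame takes max(cpu_submit_ms, gpu_render_ms).
    # All numbers are invented for illustration.
    def fps(cpu_ms, gpu_ms):
        return 1000.0 / max(cpu_ms, gpu_ms)

    configs = {
        "fast CPU + R9 290X": (13.0, 12.0),  # barely CPU-bound
        "slow CPU + R9 290X": (26.0, 12.0),  # heavily CPU-bound
    }
    overhead_cut = 0.25  # assume a low-overhead API trims 25% off CPU submit time

    for name, (cpu_ms, gpu_ms) in configs.items():
        before = fps(cpu_ms, gpu_ms)
        after = fps(cpu_ms * (1 - overhead_cut), gpu_ms)
        print(f"{name}: {before:.0f} -> {after:.0f} fps ({(after / before - 1) * 100:+.0f}%)")

    Same driver-overhead reduction, very different gains -- purely because one system has CPU headroom to spare and the other doesn't.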

    If Kaveri had HBM, or a quad-channel memory interface, or a high-end GDDR5 interface, its GPU performance would improve by 25-35%. This would be a notable gain for it, no question.

    But just because AMD would see a further performance gain from boosting the GPU does not mean the GPU is the *problem.* AMD already beats Intel at iGPU gaming.
    Jul 15 01:17 AM | 1 Like
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    Geek,

    The problem with saying "No one would notice" is that yes, you absolutely *do* notice.

    No, I can't tell if I'm on an AMD or an Intel system when it comes to surfing the web -- but gaming is often visibly different. I'm a gamer. I care about that. Even when average frame rates are within 1-2 FPS, frame *time* -- how long the engine takes to render each individual frame -- often suffers on AMD platforms. If I have to do anything that requires significant CPU performance, I can tell the difference between an AMD and an Intel platform.
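
    To illustrate why average FPS can hide that, here are two invented frame-time traces (only the shape matters, not the specific values):

    # Two invented frame-time traces (in milliseconds) with nearly identical
    # average FPS but very different smoothness.
    smooth = [16.7] * 100                # steady ~60 fps
    stutter = [14.0] * 95 + [70.0] * 5   # mostly fast, with periodic spikes

    for name, frames in (("smooth", smooth), ("stutter", stutter)):
        avg_fps = 1000.0 * len(frames) / sum(frames)
        worst = sorted(frames)[int(0.99 * len(frames)) - 1]  # ~99th percentile
        print(f"{name}: {avg_fps:.1f} avg fps, slowest ~1% of frames take {worst:.0f} ms")

    Both traces read as "60 FPS" on a bar chart; only one of them feels like 60 FPS.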

    Now, much of this is context-dependent. As we've established, at lower power envelopes AMD is more competitive -- but any time you hit serially threaded workloads or compare against true Intel quad cores, the differences are visible.

    (Kaveri's GPU value proposition is somewhat hurt by the relatively high price of low-end GCN hardware. The R7 250, for example, is $79 (without a rebate) for 384 GPU cores. You can buy an R7 260X for $99 -- same clock speed, but with 2.3x as many GPU cores. That's an intrinsically bad deal for the R7 250 -- the R7 260X is faster than any integrated GPU and far faster than the R7 250.)

    If you could buy an R7 250 for $40 and pair up to an A10-7850K at $140, you'd have an argument for using dual graphics. With the R7 250 at $79 and the A10-7850K at $180, you're better off just going straight for the R7 260X.
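
    To put numbers on that "intrinsically bad deal," here's the dollars-per-shader arithmetic using only the prices above; the shader counts are the published specs, and the $40 card is the hypothetical price point just mentioned:

    # Rough $/shader for the cards discussed above. Ignores clocks, memory
    # bandwidth, and Dual Graphics scaling; purely illustrative.
    cards = {
        "R7 250 at $79": (79, 384),
        "R7 260X at $99": (99, 896),             # ~2.3x the shaders of the R7 250
        "hypothetical R7 250 at $40": (40, 384),
    }

    for name, (price, shaders) in cards.items():
        print(f"{name}: ${price / shaders:.3f} per shader")

    At $79 the R7 250 costs roughly twice as much per shader as the R7 260X; at $40 it would be in the same ballpark, which is when the dual-graphics pairing would start to make sense.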

    I think Kaveri is another example of the best chip AMD could reasonably build given the constraints and complexity they were trying to pull off.
    Jul 14 06:20 PM | 1 Like
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    Kaveri's low-power showing is better than its high-power comparisons. The new mobile Kaveri chips are 15-20% faster in CPU tests than the Richland silicon they replace.

    I have never trashed Kaveri. I've said that the A10-7850K, at $180, was badly overpriced, as was the A10-7700K, and that the A8-7600 needed to ship as soon as possible. That should happen soon.
    Jul 14 05:21 PM | 1 Like
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    "Kaveri is not intended to compete head-to-head with Intel CPU's."

    Then why are they priced against them? I'm sorry, but this is specious. Kaveri is absolutely intended to compete against Intel CPUs.

    "There are many important reasons for compromising raw performance."

    Agreed. AMD has done the best job it could with the resources it has.

    "For example, the games chips may not have as much compute power on old code as common PC's, but they are huge, complex, single chips, produced in large quantity to customer specifications."

    Agreed. I have no problem with the console designs.

    "Presumably encouraged by the success of the Semimcustom business, AMD is strategically taking chip design modular."

    Yup.

    "Apparently AMD has been validating this process step-by-step, first as a business with Semicustom, then technically, in both the HSA-enabled GCN GPU and the HSA interfaces on the "not so great" Kaveri APU."

    I *wrote* the deep dive into AMD's fabric improvements. http://bit.ly/1j7GGXq Preaching to the choir.

    " If AMD actually does demonstrate an ability to field new chips at something like 3x the normal rate (and at 1/3 the usual design cost),"

    Huge "if." When it happens, if it happens, it'll be news. Until then, it's vapor.


    But I stand by what I've said. AMD will not invest enormous resources attempting to put one last futile coat of polish on Excavator with something like HBM. Instead, it'll save back those new technologies for mass implementation in 2016 or later.

    And I maintain that no, Kaveri doesn't particularly *need* more bandwidth to the GPU at this point in time. While it would be useful, it's not the problem.
    Jul 14 11:58 AM | Likes Like |Link to Comment
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    I never said HBM wouldn't improve Kaveri, I said HBM wouldn't improve Kaveri where Kaveri most needs improving.

    AMD already wins the GPU comparison against Intel. Sure, I'll take more performance in that category, but no one is comparing AMD APUs against Intel integrated GPUs and choosing Intel because their onboard graphics is so much better.
    Jul 14 12:24 AM | Likes Like |Link to Comment
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    I'm not saying it's impossible, but based on what I've heard of Carrizo, it's just not likely. That design was done quite some time ago. Since all of AMD's current APUs are built at GF, that would mean GF would have to have a 20nm process with HBM support ramped and ready -- and we haven't heard of one.

    Look at this from AMD's perspective. The existing GPU tech in Kaveri is not the problem or the bottleneck. The problem is that the CPU core Kaveri is chained to just isn't all that great compared to Intel. Yes, AMD can conceivably invest more resources to boost GPU performance, but the return from doing that when its APUs can't command high price margins just isn't there.

    I absolutely believe that AMD is researching HBM and plans to bring it to market, but it makes more sense to deploy the technology on higher margin parts -- either GPUs or APUs following in 2016 when it has a better product stack. Attempting to leverage this tech in Carrizo simply isn't going to be a winning proposition relative to where Kaveri most needs improvement.
    Jul 13 03:04 PM
  • How Apple And Intel Crushed Nvidia's Tegra [View article]
    I think it's a mistake to be forever ascribing dubious financial motivations to any and all comers. Just because a person disagrees with you doesn't mean they're getting paid off to do so.
    Jul 13 02:57 PM
  • How Apple And Intel Crushed Nvidia's Tegra [View article]
    Pohzzer,

    No, it really doesn't. Not if you put my statement in context. I attribute Nvidia's failure to achieve those design wins to two broad faults:

    1). It didn't have an integrated modem. Its primary competitor did.
    2). Its chips were still based on 40nm when competitors were rolling out 28nm (Tegra 3) or had validation problems (Tegra 4).

    Those facts explain why Tegra lost market share -- it failed to adequately address the needs of the market or the desires of OEMs. But they do not mean Tegra 3 or Tegra 4 was a bad *chip.*

    Let me give you an example:

    Last year, I reviewed a Samsung 8-inch tablet that had a dual-core Cortex-A9, 8GB of storage, and an onboard GPU from Vivante (this last being why I reviewed it). It was not the kind of product that sets the world on fire, but it ran basic apps and benchmarks just fine. You could do plenty of things with it. It was about 80% the speed of my iPhone 4S.

    Then Google comes out with the Nexus 7 2013, which sells for the same $199 price point. Put toe-to-toe, the Nexus 7 annihilates this little Samsung job. It's not even close.

    This makes the Samsung tablet a poor value. I would never buy one new. I would never recommend anyone else buy one new, either. But it's not a bad *chip* -- and at a $100 price point instead of a $200 price point, it would've been a *great* tablet again.

    When something is a bad product, that means it's intrinsically bad. A bad product is one you're going to have a terrible experience with at any price point. It's absolutely possible to have a good chip at a bad price point, or a good chip that isn't good *enough* relative to what the competition is doing.
    Jul 12 07:56 PM | 2 Likes
  • AMD: The APU And High Bandwidth Memory - Maintaining Graphics And Total Compute Performance Leadership [View article]
    Edit to the above: The first Fusion chips were supposed to be based on Phenom, not Phenom II. My timeline slipped there. ;)
    Jul 12 03:20 PM
  • How Apple And Intel Crushed Nvidia's Tegra [View article]
    I think the author is right to see NV's strategic withdrawal from phones and focus on tablets as a sea change -- but blaming Intel for it is silly.

    Tegra 2 was the right chip at the right time -- even a year late, it still lit up the sales charts. Nvidia had a huge win with it, and for pretty good reasons. But they didn't forecast very effectively where things would move afterward -- Tegra 3 was 40nm when competitor chips were 28nm, Tegra 4 never caught much wind in tablets, and Tegra 4i's integrated Icera modem took too long to debug and gained no traction here. It didn't help that Tegra 4i was still Cortex-A9 when the budget space was moving to Cortex-A7, either.

    None of this means Nvidia doesn't make a good *chip,* mind you -- but I think if anything, Nvidia and Intel are actually in similar boats. There's nothing intrinsically wrong with Merrifield, but it's a dual-core part at a time when OEMs want quad cores and customers in countries like China explicitly want high core counts. OEMs in the US and Europe are leery of going with unproven solutions or luxury pricing when the iPhone has so strangled that market. With Samsung and Apple accounting for almost all industry profits on hardware, you either grab those spaces or you flounder.

    Tegra K1's programmability and capability may turn the tide at least a little, but I think it's very fair to say that this isn't the trend Nvidia sketched out for its mobile business 3-4 years back.
    Jul 12 01:06 PM