Seeking Alpha


RandSec's Comments
  • AMD Gains Graphics Market Share In Q2, Driven By Kaveri APU [View article]
    @xxavatarxx: "To bad that doesn't drive stock price."

    Nor does corporate worth drive stock price, at least day-to-day, since
    corporate worth does not often change drastically overnight. Yet commentators go on and on about stock price not improving, with the false implication that the company is not improving.

    After the share price drop with the last ER, which generally made--not missed--management numbers, we know that actual earnings results do not drive stock price either.

    What drives stock price is propaganda from some analysts, promoters and commentators, who manipulate the market for trader profit and investor loss. The propaganda sets up buyer desire and then disappointment, causing shares to be bought high and sold low to profit manipulators. All this is independent of true corporate worth.
    Aug 21 01:28 PM | 5 Likes
  • Why HP's $199 Laptop Is Good For AMD [View article]
    @Jomamaiscool: "Even Intel themselves said their 14nm density will be comparable to "competitor's" 16nm nodes due to how large their transistors are."

    If you have that from Intel, list the source. Most sources disagree strongly:

    “Others are pausing to develop 14nm FinFET technology. Their 20nm technologies are still using old-style planar transistors while what they call 14nm or 16nm will convert to the new FinFET or tri-gate transistors,” he said. “Our 14nm is both denser and earlier than what others call 14nm or 16nm.” [Mark Bohr, Senior Fellow, Technology & Manufacturing Group, Intel]


    "At its fall 2013 investor meeting, Intel suggested the company will enjoy a 35% scaling advantage over TSMC at the 14 nm node"


    "Based on published information from TSMC and the IBM alliance, and using the scaling formula (gate pitch x metal pitch), Intel claims that TSMC's upcoming 16nm node yields no logic area scaling improvement over 20nm"


    "Intel is claiming that TSMC, GF, and Samsung will have to collectively pause below 20nm, re-tune for FinFET, and only then begin scaling downwards once again. As far as we're aware, that's true -- TSMC and GlobalFoundries both forecast only modest improvements for die size and density at 20nm.

    "The FinFET transition, meanwhile, isn't going to deliver a huge advantage in scaling for either company. Every major foundry has announced plans to combine a 14/16nm front-end process with FinFETs with a back-end 20nm technology line. That doesn't mean power and performance won't improve at these future nodes, but any density improvements will come from re-architecting SoCs to run with FinFET technology -- not intrinsic die shrinks."


    "In short a shrink from A -> B no longer means the minimum drawn feature size conforms to (A^2) = 2(B^2) but a cell size of X on process A will be X/2 on process B. That is the key, the area used by a cell halves but the how changes and does not necessarily track to a specific factor like minimum drawn feature size. This is why SemiAccurate and others are annoyed by Samsung, Global Foundries, and TSMC calling their ‘shrink’ from 20nm ’16nm’. What they are delivering is a 20nm process with planar transistors replaced by FinFETs but the Back End of Line (BEoL) or metal layers barely changes. That means almost no effective shrink, Samsung is claiming between a 7% and 15% for their 20nm to their 14nm nodes.

    "Such shrinks are more in line with a normal mid-life process optimization than a real shrink from 20nm to 16nm. This is the long way of saying other than Intel’s, no 14/16nm process is an actual shrink from 20nm planar, cell size does not come close to the theoretical (20^2)/(16^2) shrink factor, it is almost non-existent. If Intel does deliver what they claim, we buy the naming of their “14nm” process, the rest are effectively slinging a load of BS to the non-technical masses this generation."
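
    The ideal shrink arithmetic quoted above is easy to check. A minimal sketch (my own illustration, not from any of the quoted sources) comparing the theoretical (20^2)/(16^2) area factor with the 7%-15% figure attributed to Samsung:

```python
# Ideal area scaling implied by node names, per the (A^2)/(B^2) rule
# quoted above. Purely illustrative arithmetic.

def ideal_area_shrink(a_nm, b_nm):
    """Ideal cell-area reduction factor going from node a_nm to node b_nm."""
    return (a_nm ** 2) / (b_nm ** 2)

factor = ideal_area_shrink(20, 16)
print(f"Ideal 20nm -> 16nm area shrink: {factor:.4f}x")  # 1.5625x
print(f"Area saved if ideal: {1 - 1/factor:.0%}")        # 36%
# A claimed 7-15% shrink is nowhere near the ideal ~36% area savings.
```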

    Aug 21 12:58 PM
  • AMD Gains Graphics Market Share In Q2, Driven By Kaveri APU [View article]
    @xxavatarxx: "BUT WILL IT MAKE MONEY???"

    We can ask the same of any new baby, yet many parents continue to add to their investment for 18 years or more. MAYBE THEY ARE ALL STUPID!!!
    Aug 21 12:23 PM | 3 Likes
  • AMD Gains Graphics Market Share In Q2, Driven By Kaveri APU [View article]
    @User 12115671: "its not a fact but"

    All anyone needs to know.
    Aug 21 12:19 PM
  • AMD Gains Graphics Market Share In Q2, Driven By Kaveri APU [View article]
    @User 12115671: "A GDDR5 version was developed and ditched due to cost and inefficiency."

    If you have a reference from AMD, show it.

    More likely the issue was that GDDR5 must be soldered onto the motherboard close to the APU, which makes a completely different type of product.

    "Kaveri also has a weak memory controller"

    The Kaveri memory controller has more to do than Intel's, since it supports HSA and Intel does not.

    HSA is the advance in CPU architecture everybody seems to want. But no modest architecture increment is going to overcome a production node transistor count advantage. Fortunately, a major architectural advance like HSA can.
    Aug 20 09:37 PM | 1 Like
  • Advanced Micro Devices Might Reward Patient Investors [View article]
    @user1969: "Branching can be costly to performance."

    Branching for looping is also costly on a CPU; we just do not think about it because linear execution has no good alternative. But a GCN GPU with SIMD can be that good alternative in many cases.
    Aug 15 04:57 PM
  • Advanced Micro Devices Might Reward Patient Investors [View article]
    @user1969: "Only software that can have instruction run in parallel across multiple cores without branching benefit from a GPU."

    OF COURSE an AMD GCN GPU can branch. See, for example:



    And there are coding alternatives to branching, such as computing all possible branch combination results in parallel, then choosing the appropriate result.
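
    The select-instead-of-branch idea can be sketched in a few lines (a minimal scalar illustration of the pattern, not actual GPU code):

```python
# Branchless select: compute every possible outcome for every element,
# then pick the appropriate result with a per-element mask. This is
# the same pattern SIMD hardware uses in place of a data-dependent branch.

def branchless_abs(values):
    negated = [-v for v in values]      # the "else" outcome, computed for all
    mask = [v < 0 for v in values]      # per-element condition
    return [n if m else v               # select the appropriate result
            for v, n, m in zip(values, negated, mask)]

print(branchless_abs([3, -4, 0, -7]))  # [3, 4, 0, 7]
```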

    "Does any of the software you use need that?"

    Sure. Many, many programs use "loops" to repeatedly execute the same sequence of instructions on different data items. But with SIMD (Single Instruction Multiple Data), a GCN GPU can compute on each of the internal loop data items in parallel, producing a result for each, all at the same time (or as many as can be handled at a shot). Since lowest-level loops are notorious for burning execution time, using SIMD can make even common programs run much faster.
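
    The loop-to-SIMD transformation described above can be sketched as follows (plain Python standing in for actual SIMD lanes):

```python
# A scalar loop handles one data item per iteration; the data-parallel
# form conceptually applies one operation to the whole batch at once,
# which is what SIMD lanes (or a GCN wavefront) do in hardware.

data = [1.5, 2.0, 3.25, 4.0]

# Scalar loop: one multiply per trip around the loop.
scalar_result = []
for x in data:
    scalar_result.append(x * 2.0)

# Data-parallel form: a single "multiply by 2" over the whole batch.
simd_result = [x * 2.0 for x in data]

assert scalar_result == simd_result == [3.0, 4.0, 6.5, 8.0]
```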
    Aug 15 04:27 PM
  • 40 Million More Reasons To Short AMD [View article]
    @Bruce24: "As far as your "ability to combine things on chip, and eventually do it much faster than anyone else", I don't [think] Intel lacks the ability to integrate"

    Actually, integrate is what Intel does not do well. Rather than developing relationships, then listening and responding with appropriate product, they tend to tell customers what they should have.

    Classically, the idea was to build one thing which answered most of the market, so only one thing needed to be designed, produced and sold. Having a range of specialty items was avoided, because each cost nearly as much to design as the original, with far less return.

    "and if Intel wants and especially needs to do something, their resources put them in a good position to get it done quickly."

    The issue is not so much resources as strategy: A typical complete chip takes 3 years, more or less, idea to production, no matter who you are. The key to faster chip design is a modular design, so that typically only a single module has to be re-done, taking (in SkyBridge) perhaps 1/3 the time and 1/3 the cost. Smaller changes may be even faster, but we are ultimately limited by perhaps 3 turns through fab at perhaps 3 months each, and even that assumes things do not go south.

    AMD is now designing to HSA hardware standards in the chip to support arbitrary compute modules (provided they fit in the existing SoC compute slots, although expanding those somewhat may be only a minor problem). Now, there is nothing magic about a particular hardware standard inside, but understanding the needs and developing that standard takes real time. For example, it soon becomes apparent that all the computation devices (e.g., CPU's and GPU's) need the guts to handle the same interface, which may imply very significant re-design and even re-thinking. This does add some overhead to computation and chip size. As far as is known, Intel has just not wanted to do that.
    Aug 15 11:18 AM
  • 40 Million More Reasons To Short AMD [View article]
    RandSec,...."In my view, this is not about Keller, and also not about having raw CPU performance"

    "Unless you think AMD is going to give up on x86 desktop, notebook and server processors, then one of the most important things they need, if they want to gain back share and command higher ASPs, is a better CPU core."

    New mobile chips have been released in the last few months.

    New cores are already in the pipeline, as SkyBridge APU modules.

    "Without it, in mobile and low/mid priced desktops they are wasting the significant advantage they have with their iGPU."

    Every improvement helps. Not having the best is not waste. It is important to direct innovation to where it will actually make a difference, hopefully a lasting difference.

    "In high-end desktop and servers they will continue to be insignificant (ie. Server market share is now down to 2.2%, it was 30% when the Core 2 server chips first came out in 2007)."

    Both high-end desktop and server markets have the same issues: If they are on a larger node, they have fewer transistors available for computer architecture, and they also cannot compete with a lower node part on both performance and power simultaneously.

    So if one selects performance, power will be crazy and the Intel marketing machine will make power the most important spec. And actually achieving performance will require the normal low-power process to be painfully optimized for speed, to say nothing of processor architectural issues.

    And if one selects power, one starts out clearly behind the smaller node which is already optimized for better power. Tricks can be used, like turning off everything when not needed, but Intel probably has tricks too.

    There is a reason why each new Intel processor generation improves only slightly, despite having a whole lot of really smart people around, and that reason is that all the simple improvements have already been made. Many of the complex improvements have already been made. And in the context of a highly-engineered whole, true innovations are as likely to cause problems as produce performance.

    HSA "tight CPU / GPU compute" is a true innovation and a major performance step in computer architecture. It is just not the historic type of processor innovation everyone wants to see. But nobody complains much about new Intel instructions, which also do not work on old code. Like new instructions, before HSA can be used, equipment must be able to use it, and so far, only Kaveri qualifies. Of course, HSA "tight CPU / GPU compute" also needs to be demonstrated and taught in online videos so it can be used. Hopefully, the Mantle API will help.

    All these issues may be why microservers look like a good option, since they may not use the highest performing processors. It is possible to win particular segments while not having highest performance at lowest power.

    AMD's advantage is not, and will not be, performance, but instead the ability to combine things on chip, and eventually do it much faster than anyone else. This supports pursuing a multitude of smaller but needy markets away from Intel. Imagining that new CPU cores can be the key to AMD resurgence is just delusion.

    AMD got the console chips because only they could deliver both a decent CPU and a leading GPU on one chip. But Intel will eventually catch up in GPU, and when they do, AMD needs to be both ahead and established elsewhere. CPU performance is not going to do that. HSA computing performance just might.
    Aug 15 09:15 AM
  • 40 Million More Reasons To Short AMD [View article]
    @User 12115671: "AMD has made it very clear, no need for speculation."

    Well, let us just see how clear that is:

    "They are producing an all new x86 design and a custom ARM core on the same time line."

    Yes, so far: SkyBridge.

    "They plan on competing with Intel (skylake) 2016 head to head,"

    Really? Please link to any such statement from the company.

    "we'll see how it plays out. GF and Samsung are big, with deep pockets so 14nm 2016 is possible for them, not a certainty."

    A "14nm" process may be "available" in 2016, but probably will not have size benefits over 20nm, only power benefits, and so probably would not be much help in the raw CPU race.

    "The issues at 10nm are real, so the pack should come pretty close together when the planned new architectures from AMD and Intel arrive 2016."

    In 3Q 2016, Intel will have been actually producing in real 14nm for 2 years, and AMD may have been producing some 20nm for a year, starting 3Q 2015, as a guess. Mostly, these would be new designs.

    It also seems just possible, although not likely at all, that AMD might be able to start some "14nm" production 1Q 2016. Why then? Because, no matter what, we can plan on 3 months through the fab, with one turn for test, one for samples, and then production. That is 9 months in fab, so the simplest possible 3 month engineering would still mean a year overall from 20nm successful testing in 1Q 2015. Do not plan on either date. It would be a non-size-competitive "14nm." After the first success, hopefully the "14nm" parts would be simple conversions at 3 months engineering each, and again a year overall, at least at first.
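
    The schedule arithmetic above can be laid out explicitly (the 3-month figures are the comment's own planning assumptions, not company guidance):

```python
# Back-of-envelope fab schedule from the assumptions above: three turns
# through the fab (test silicon, samples, production) at ~3 months each,
# plus ~3 months of the simplest possible engineering up front.

MONTHS_PER_FAB_TURN = 3
FAB_TURNS = 3            # one turn for test, one for samples, then production
ENGINEERING_MONTHS = 3   # simplest-possible design work

total_months = ENGINEERING_MONTHS + FAB_TURNS * MONTHS_PER_FAB_TURN
print(f"Total: {total_months} months")  # 12 months -- about a year overall
```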

    10nm is too far to discuss, but Intel are on real 14nm already, so it is hard to imagine that they would not be at 10nm long before any of the independent fabs, which have yet to produce at 20nm in volume. There is no reason to believe there will be a "pack" at 10nm. My guess for Intel is 3 years to 10nm, and so 3Q 2017, but it may take longer.

    "AMD's server strategy is ARM and x86, 2015 puma and a57 and 2016 peak (X86) and K12 (arm). It is more of a rumor though then a roadmap that AMD plans to come back to HEDT. "

    My point in this thread has been that the desktop rumor represents a hope that will not die, which goes well beyond mere speculation to what we expect from a delusion. If such a chip were produced, it would attract a massive R&D response from Intel, which ultimately, would mean another loss. Why would we want the company to invest in failure? Much better to spend the effort on some project with a decent return and of minimal interest to Intel.

    I am aware of no official indication that the high-performance cores would be independent chips; in fact there is some implication that the new cores would fit the HSA SoC slots, which would be smaller than any pure-CPU design. If these were to be part of a server strategy, there would also have to be a new server SoC in the pipeline.

    If the new cores work well within a standard APU SoC, of course they could be used for High-End Desk Top, but still would not challenge an Intel CPU from a smaller node when running old code. The high-performance APU's would be 20nm designs, competing against Intel parts on a real (not fake) 14nm process. Direct competition is not possible until both parties have a similar process available.

    But that might not matter, if, say, the APU's make a decent single-chip game machine, or if some popular program takes advantage of HSA. The APU's would of course dominate on HSA code, and Mantle might improve old code somewhat. A new form of Mantle might help with "close CPU / GPU compute," which would be a big deal. It is not necessary to concentrate on beating Intel when running old code, because that is not going to happen anyway.

    All my opinion. Also the process timings and future dates are guesses, which must vary even in the best case.
    Aug 14 04:16 PM
  • 40 Million More Reasons To Short AMD [View article]
    @Justin Jaynes: "As far what quarters these were booked - no clue, and AMD hasn't said."

    One would expect engineering expenses to be handled monthly, although it could have been anything else, of course, under contract. Payments would have been distributed across the development period.

    In the YouTube video "Interview with AMD VP Saeid Moshkelani" from Jun 25, 2013, AMD VP Saeid Moshkelani tells us that development from completed spec to released parts took just under 2 years. So perhaps from 3Q 2011 through 2Q 2013, although spec development may have been contracted earlier as well.

    Aug 14 03:42 PM | 1 Like
  • 40 Million More Reasons To Short AMD [View article]
    @User 12115671: "A good design one node larger can out perform a chip one node smaller."

    Nope. For one thing, more transistors can mean larger caches or even more cores. And Intel can optimize a leading-edge process solely for their parts, whereas AMD must make do with modified standard processes a node behind.

    Piledriver was not the problem. Trying to compete head-to-head against a company with its own fabs was the problem. No mere CPU design is going to solve that.

    In my opinion.
    Aug 14 01:06 PM
  • 40 Million More Reasons To Short AMD [View article]
    @Bruce24: "a large part of the reason AMD's has lost so much market share to Intel over the last 7 or so years is due the performance of their CPU core. Hopefully Jim Keller and his team fix this, but until they do AMD's Computing Solutions division is going to continue to have a hard time of it."

    Unfortunately, Keller cannot change the fab process, and if the best process AMD can get is still a node or more behind Intel, they simply have fewer transistors available to use. While we can imagine improving even a decades-old architecture somewhat, being better overall from within the limitations of the old processing node seems very unlikely.

    Intel has unlimited R&D and their own fabs, and could design a process specifically to favor their competitive part. Did we not register the fact of AMD almost going out of business the last time they tried to beat Intel?

    In my view, this is not about Keller, and also not about having raw CPU performance on old code to rival Intel. That is just not going to happen. Instead, the action is going to be in markets now not well served, and in chips which are more than just CPU's.
    Aug 14 12:39 PM | 1 Like
  • 40 Million More Reasons To Short AMD [View article]
    "On Margin:
    Devinder Kumar - SVP and CFO
    I will say semi-custom as you know is lower than company average,"

    Well, to my understanding, there is margin, and then there is margin. The problem is that Semi-custom is a different business model than the classic semiconductor business, and the margin difference is confusing. Probably there are specific accounting terms we could use, but the margin typical of the semiconductor industry leaves more expenses to pay from profit than the Semi-custom margin, which has already paid those expenses.

    A normal semiconductor project involves years of expensive engineering R&D, plus fab R&D, plus production, marketing and sales costs. Massive fab capital investments and R&D are needed, but not normally factored into this "margin." Fab costs still must be paid, however, along with marketing and sales, so a decent part of the after-margin profit is further allocated, which is why a high margin is important.

    For a Semi-custom project, the customer pays part or most of engineering R&D as it occurs, the fab R&D and capital expense is factored into part cost from the independent fab and so is paid inside margin, and there are no marketing or sales expenses either. Thus, the Semi-custom "margin" is mostly clear profit, having already paid for things the normal model still must buy.
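
    A hypothetical numeric sketch (invented figures, purely illustrative of the argument above, not AMD's actual numbers) shows why a lower reported Semi-custom margin can still leave more clear profit:

```python
# Invented numbers, purely illustrative: compare what each reported
# "margin" still has to pay for before it becomes clear profit.

revenue = 100.0

# Classic model: 40% reported margin, but R&D plus marketing/sales
# still come out of that margin afterward (hypothetical amounts).
classic_margin = 0.40 * revenue
classic_clear = classic_margin - 15.0 - 10.0   # R&D, marketing/sales still owed

# Semi-custom model: 25% reported margin, but the customer already paid
# R&D, fab costs sit inside part cost, and there is no marketing/sales.
semi_custom_margin = 0.25 * revenue
semi_custom_clear = semi_custom_margin          # nothing further to deduct

print(classic_clear, semi_custom_clear)  # 15.0 25.0
```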
    Aug 14 12:16 PM
  • 40 Million More Reasons To Short AMD [View article]
    @Andreas Hopf:

    Thanks for the useful link, but I find your interpretation odd: The 2014 report mostly covers the last-gen console market. Sales always droop at the end of a generation, since the potential buyers know that next-gen equipment and games are coming. That is not new to this generation, and hardly unexpected.

    From previous game console cycles, we expect hardware sales to peak perhaps 3 years in (perhaps for the 2017 report), and game sales to peak later (perhaps for the 2019 or 2020 report), as the equipment base continues to increase. Last-gen consoles are still being sold, but I think already next-gen game sales are exceeding last-gen, even though last-gen still has a vastly larger hardware base.

    I expect PS4 supply to start getting tight again soon, and then remain from tight to impossible through January. Those are mostly 4th quarter consumer electronics sales, but probably 3rd quarter AMD production. The 3rd quarter peak is normal for suppliers to the consumer electronics market, so 4th quarter is traditionally down in the industry. AMD is bragging that other quarters will be generally higher than when only supplying consumer electronics.

    Things always change. But the size of the electronic games market, and the extent of the market, both across ages and sexes, really seems quite encouraging for the long term.
    Aug 13 07:42 AM | 1 Like