A Brief Overview of CODECs, Encoding, Decoding, And Other Boring Stuff
For the nerdier readers out there, Tom's Hardware and ExtremeTech both have very good write-ups on the stuff I'll be talking about if you want more information about the specifics, and various benchmarks and images used in this article will be from one of those sources unless otherwise stated.
So that you're less confused for the rest of this article, I'll start with the most important definition to understand (via TechTerms.com):
- CODEC - The name "codec" is short for "coder-decoder," which is pretty much what a codec does. Most audio and video formats use some sort of compression so that they don't take up a ridiculous amount of disk space. Audio and video files are compressed with a certain codec when they are saved and then decompressed by the codec when they are played back.
For this whole process to work, you start with some sort of media. In this instance, let's say grandma is running around with her iPhone and can't wait to show you what Kitty is doing. So grandma calls you up on FaceTime and films Kitty sleeping. Her phone encodes that data and transmits it wirelessly to your phone, where the video is decoded and converted into color data that your iPhone uses to drive the screen, showing you all the amazing things Kitty is doing.
One of the CODECs that FaceTime uses (as far as I know) is H.264. This is a popular CODEC used to transmit all kinds of cat videos, and one that Intel (INTC) supports via Quick Sync.
Quick Sync is faster than the competition by a landslide. The Advanced Micro Devices (AMD) and Nvidia (NVDA) GPU-assisted times are both in the same ballpark of ~70 sec, while Quick Sync finishes in a fraction of that time, and Quick Sync is compatible with H.264. The prevalence of H.264 makes Quick Sync an important feature to many users.
As H.265 is a relatively new standard, the slate between AMD and Intel will start out relatively clean. There are also some unknowns, since H.265 was designed with parallel data processing in mind.
What if We Want Higher than HD Kitty Videos?
Displays are commonly measured by resolution and by pixel density. For comparison, a 1080p display is 1920x1080 pixels, or ~2 megapixels, while a UHD (or 4K, whichever you prefer) display is 3840x2160 pixels, or just over 8 megapixels. To turn this into pixel density, simply divide the total number of pixels by the physical area of the screen.
This highly portable 20" tablet probably has quite the pixel density.
While this tablet may seem outlandish, think about the number of devices (Apple's "Retina" displays, for example) that advertise higher-than-HD resolution as a selling point. And now think about the example I give above, where the difference between HD and UHD is ~4x more pixels for the latter.
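The resolution math above is easy to sanity-check. Here's a quick sketch; the 20" tablet's physical dimensions (roughly 17.4" x 9.8" for a 16:9 panel) are my own illustrative assumption, not from any spec sheet:

```python
# Sanity-checking the resolution and pixel-density math.

def megapixels(width_px, height_px):
    """Total pixel count, in millions."""
    return width_px * height_px / 1e6

def pixels_per_sq_inch(width_px, height_px, width_in, height_in):
    """Pixel density as defined above: total pixels / physical area."""
    return (width_px * height_px) / (width_in * height_in)

print(megapixels(1920, 1080))   # ~2.07 MP for 1080p
print(megapixels(3840, 2160))   # ~8.29 MP for UHD -- exactly 4x the pixels
# Hypothetical 20" 16:9 tablet, roughly 17.4" x 9.8":
print(pixels_per_sq_inch(3840, 2160, 17.4, 9.8))
```

Run the numbers and the "~4x more pixels" claim falls right out: 8,294,400 vs. 2,073,600.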
This increase of data presents a myriad of problems:
- Increased storage requirements
- Increased bandwidth needed for streaming and transmitting
- Increased computational complexity
How Do We Solve These Problems? Enter HEVC
By picking and choosing your battles, you can essentially decide which of the problems above you want to fight. The industry seems to be leaning toward tackling number 3 in order to minimize the effects of 1 and 2.
HEVC stands for High Efficiency Video Coding, and it is a standard for encoding video that offers many benefits over H.264. According to x265's website:
- HEVC provides superior video quality and up to twice the data compression as the previous standard (H.264/MPEG-4 AVC)
The x265 encoder project is being coordinated by MultiCoreWare, and MultiCoreWare is a member of the HSA Foundation at the supporter level.
HEVC requires more computational power, but much of the process can be parallelized, with this parallelization helping out most during the encode stage, which is where much of the "heavy lifting" is accomplished.
The key points in the above slide are the 33 intra prediction directions for H.265 vs. 8 for H.264, and the picture being divisible into a bigger range of block shapes and sizes. Simple but numerous operations like these are where parallel (graphics) processors shine. By using a larger number of blocks and a much larger number of motion vectors than H.264, each bit of data assigned to the video carries more information. The tradeoff is that more number crunching is required to assign those bits.
What the slide above shows is that at 1080p, the encoding efficiency gain over H.264 is substantial. By doing most of the heavy lifting on the front end (encoding) and back end (decoding), the stuff in the middle (streaming and storage) has less stringent requirements. Specifically, if the slide were of better quality, you would notice that although the bitrates of the pictures on the far left are identical, the image quality of the H.265 version is much better. But don't take AMD's word for this.
The above slide is one of the more useful ones for demonstrating my point. It shows that at equal quality settings, the same movie file takes up significantly more storage space when encoded with H.264 than with H.265.
So by "sucking it up" and using more compute power on the front end and back end, stored video files will occupy substantially less storage space (or higher quality stored in the same amount of space), making it easier to either store or stream video.
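To put a rough number on that storage claim, here's a back-of-the-envelope sketch. The 8 Mbps H.264 bitrate for a 2-hour movie is my own illustrative assumption; the halving simply applies x265's "up to twice the compression" claim quoted earlier:

```python
# Rough storage math behind the "half the space at the same quality" idea.

def file_size_gb(bitrate_mbps, duration_hours):
    """Approximate file size in decimal gigabytes (1 GB = 1e9 bytes)."""
    total_bits = bitrate_mbps * 1e6 * duration_hours * 3600
    return total_bits / 8 / 1e9

h264 = file_size_gb(8, 2)   # 2-hour movie at an assumed 8 Mbps H.264
h265 = file_size_gb(4, 2)   # same quality at roughly half the bitrate
print(h264, h265)           # 7.2 GB vs. 3.6 GB
```

Same movie, same perceived quality, half the bytes to store or push over the wire.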
Potential Impact Of Video Transcoding
Mobile video is growing exponentially.
Take, for example, the "What Does the Fox Say?" video. If you watched it end to end once for every view it has racked up, it would take you ~2350 years.
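That ~2350-year figure checks out arithmetically. The view count comes from the article (~330M); the video's runtime of roughly 3 minutes 45 seconds is my own assumption:

```python
# Verifying the ~2350-year viewing-time estimate.

VIEWS = 330e6                          # ~330M views, per the article
RUNTIME_S = 3 * 60 + 45                # assumed runtime: 225 seconds
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = VIEWS * RUNTIME_S / SECONDS_PER_YEAR
print(round(years))                    # ~2353 -- in line with ~2350
```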
If H.265 were used, the amount of data used to transmit the video ~330M times would be substantially less. And, according to Tested.com, H.265 was designed with parallel computation in mind, which plays to the strength of AMD. This same article also states that hardware is about 18 months away, and was published in January last year.
SatelliteToday has an article that very clearly explains much of what I have outlined above, but breaks it down as to why this technology is important for businesses. The short of it is that it reduces cost in many aspects, including bandwidth usage and file storage.
The jump from streaming 1920×1080 to 3840×2160 is not something that can be done by just flipping a switch. First of all, viewers need a 4K TV, which practically no one has yet. PCMag's Chloe Albanesius has informed us that Netflix's 4K content will require "somewhere between 12 and 15 Mbps" to stream properly. That's a pretty serious connection which, again, not many Americans yet have at home.
Then, even if they have the bandwidth needed to handle such a stream, the device receiving it needs to be able to receive the content and push it to the TV with a reasonable level of performance. Because of these expectations, you won't see an Xbox One, PS4, Roku, or Chromecast with this ability anytime soon. Instead, Netflix is going straight to the screen itself by targeting 4K-capable smart TVs.
By using H.265 HEVC (High Efficiency Video Coding) moving forward instead of the currently popular AVC H.264, Netflix thinks they will be able to stream the same quality they currently transmit at half the bitrate. Not only does this mean there's room for higher quality 4K streams, but the current HD content will be transmitted more efficiently.
So H.265 can not only help out at higher bitrates, but it can also allow content at current quality to be streamed using less data.
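It's worth translating Netflix's quoted 12-15 Mbps into actual data volume, since bitrates hide how fast the gigabytes pile up. A quick sketch (decimal units, 1 GB = 1e9 bytes):

```python
# What a 12-15 Mbps 4K stream means in data consumed per hour.

def gb_per_hour(bitrate_mbps):
    """Data volume of a constant-bitrate stream, in decimal GB/hour."""
    return bitrate_mbps * 1e6 * 3600 / 8 / 1e9

print(gb_per_hour(12))   # 5.4 GB per hour at the low end
print(gb_per_hour(15))   # 6.75 GB per hour at the top of the range
```

At 5-7 GB per hour, a household with a data cap would feel 4K streaming very quickly, which is exactly why halving the bitrate matters so much.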
And to be quite honest, the potential impact on Kaveri sales from this is harder to judge. My guess is that it will help AMD's competitive standing more in the consumer space than in the data center. HEVC is still in its infancy, and quite a bit could change, so anything at this point is a guess.
Readers very often ask how HSA could be important to AMD, and the answer is that any impact would likely be indirect and in very specific cases in which GPU compute is an important factor in consumer choice. I chose to elaborate on HEVC since it was something I hadn't seen mentioned elsewhere to provide an illustrative example. Other areas of interest include tasks such as image editing and number crunching.
This article is not to say that AMD, along with x265, will not have competition. Companies like Texas Instruments (TXN) deliver DSPs (digital signal processors) to perform these functions on a large scale, and Intel (downloadable PDF here) looks to be making Quick Sync more programmable, making it easier for Quick Sync to serve a more universal purpose. And even the H.265 standard itself has competition, as Google (GOOG) is looking to introduce its own CODEC for UHD.
Source: Seronx's Imgur Album
In one of the press slides leaked earlier during CES, the slide explicitly states that HSA acceleration is applied to both the encode and decode stages, and it provides a link to what looks to be a professional-grade media player/conversion tool. This is not to say that H.265 is some magical CODEC that automatically takes advantage of HSA; rather, in this specific instance the media player from Telestream is designed to take advantage of HSA and the benefits offered by HSA-accelerated H.265 encoding and playback.
However, unlike H.264, H.265 is still more of an open field, meaning there is a chance for AMD to somewhat close the gap with Intel here.
A lot of unknowns remain with H.265, but right now bandwidth limitations make 4K streaming difficult and larger file sizes eat storage space, so more efficient data compression is sorely needed. H.265 looks to play to the strength of AMD's IP. This advantage could be short-lived, however, as Quick Sync demonstrates with H.264 encoding.
This was just a fun way to write-up one aspect of AMD's upcoming chip release that I had not seen elsewhere, and one that made it easy to demonstrate the potential impact of HSA.
Mobile is a huge market in general. And there is some thought out there that the only way to capitalize on this market, and the Internet of Things in general, is to make tiny chips that power tiny devices.
That's not true.
All these tiny devices will rely on processing power from "the cloud," and new ways to process the data will need to be found in order to feed these devices and store their data. That opens up potential growth avenues for AMD, for its competitors, and even for the storage companies that will have to provide the hardware to house all this data.
Just a few things to think about as the Internet of Things begins its expansion.
Additional disclosure: I am long AMD in both shares and options, and actively trade my position. I may add or sell shares/options at anytime, or initiate a hedge prior to earnings.