Stream Optimization Vendors Make Big Claims About Reducing Bitrates, But Aren't Educating The Market

There are a lot of product verticals within the streaming media industry, and one of the lesser known ones is a small handful of vendors typically referred to as offering stream optimization technology. While many of them have very different solutions, the goal of all of them is the same: to reduce video bitrates without reducing quality. Vendors in the market include A2Z Logix, Beamr, Cinova, EuclidIQ, eyeIO, Faroudja Enterprises, InterDigital, Sigala and QuickFire Networks (just acquired by Facebook). Some of these vendors would take issue with me listing them next to others they feel don't compete with them, but amongst content owners, they are all thought of as offering ways to optimize video, even if many of them do it very differently. I don't put them all in the same bucket, but many content owners do.

These compression technologies are already being applied to images, where companies like Yahoo and Netflix use solutions to make images load faster and save money. Over the past month, I've been looking deeper at some of the vendors in this space as it pertains to video compression, and have had conversations with a dozen OTT providers, asking them what they think of the solutions currently on the market. What I've found is that there is a lot of confusion, mostly a result of vendors not educating content owners and the industry on the technology, ROI, cost savings, and impact on workflow, amongst a host of other subjects.

Visiting the websites of many of the vendors provides little detail on use cases, case studies, target demographics, cost savings, technical white papers or customers. Instead, many companies simply highlight how many patents they have, make claims of compression savings, show nice generic workflow diagrams, use lots of marketing buzzwords and all say how good their quality is. With some vendors, you can't even tell from their website if they offer a service, a technology, a platform or a device. There are no reports that I can find from third-party companies to back up the claims many of them make, which makes it hard for anyone to judge their solutions. I have yet to see independent testing results from any vendor (except one) that compare their technology to others in the market, or even to traditional H.264/HEVC encoding.

As I haven't used any of these solutions myself, I talked to some of the biggest OTT providers, who deliver some of the largest volumes of video on the web today. I won't mention them by name since many gave me off-the-record comments or gave me feedback about specific vendors, but all of the ones I spoke to have evaluated or looked at stream optimization solutions in the market, in detail. Many of them have evaluated vendors' products in their lab, under NDA. What I heard from nearly all of the content owners is that many of them view "some" of these solutions in a bad light. There is a stigma about them because some vendors make claims that simply aren't accurate when the product is tested. Many content owners are still skeptical of how companies can claim to reduce bitrates by 50%, yet still keep the same quality.

Also hurting this segment of the market was an article published a few years back that accused one of the vendors of selling "snake oil". While that's not a fair description of these services in the market as a whole, as one content owner said to me, "it helped create a stigma around the tech." Some vendors are starting to show signs of success, and while this post isn't about comparing technology from one provider to another, one or two vendors do seem to be rising above the noise. But it's a challenge.

When it came to feedback from companies who have looked at stream optimization solutions, some told me that, “in real world testing, we didn’t see any compression decreases that we couldn’t reproduce with our own transcoding system.” Another major complaint by many is that the solutions “don’t run in real-time so it is a non-starter or way more expensive and latent for live encode.” Some vendors say they do offer real-time solutions, but content owners I spoke with said, “it didn’t work at scale” or that “the real-time functionality is only in beta.”

The following is a list of the most common feedback I got from OTT providers, about stream optimization solutions in the market. These don’t apply to every vendor, but there is a lot of crossover, especially when it comes to the impact on the video workflow:

  • lack of real time processing
  • adds more latency for live
  • it doesn’t neatly fit into a production workflow
  • forces the client device to work harder
  • inconsistent reduction in file size
  • bandwidth savings that don’t really exist
  • affects the battery life of mobile phones
  • makes the encode less desirable for encapsulated streaming
  • impacts other pieces of my workflow in a negative way
  • solution is simply too slow
  • didn’t see any compression decreases that we couldn’t reproduce with our own transcoding system
  • cost vs. ROI not clear, pricing too high

It should also be noted that one of the value propositions these vendors make, though not the only one, is the ability for content owners to save money on bandwidth. While that's a good idea, the problem is that many content owners already expect to save on bandwidth, over time, simply by moving to HEVC. Content owners expect to see anywhere between 20% and 40% compression savings once HEVC takes hold. That alone will save them on bandwidth, although HEVC adoption at critical mass is still years away. Vendors selling stream optimization products have to have more than just a bandwidth savings ROI, but that's most of what they all pitch today. That is a hard sell, and one that's only going to get harder.
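To see why a bandwidth-only pitch is a hard sell, here is a minimal sketch of the savings content owners already expect from HEVC alone. The monthly delivery volume and per-GB CDN price below are hypothetical numbers I've made up for illustration; only the 20%-40% range comes from the expectations described above.

```typescript
// Hypothetical numbers: monthly delivery volume and CDN price are assumptions.
const monthlyDeliveryTB = 500;   // assumed monthly video delivery, in TB
const cdnPricePerGB = 0.03;      // assumed CDN price, USD per GB
const baselineCostUSD = monthlyDeliveryTB * 1000 * cdnPricePerGB;

// Apply the 20%-40% compression savings content owners expect from HEVC.
for (const savings of [0.2, 0.3, 0.4]) {
  const newCost = baselineCostUSD * (1 - savings);
  console.log(
    `${(savings * 100).toFixed(0)}% smaller bitrates: ` +
    `$${baselineCostUSD.toFixed(0)} -> $${newCost.toFixed(0)} per month`
  );
}
```

If a codec upgrade the content owner already plans to make takes that much off the delivery bill, a stream optimization vendor pitching only bandwidth savings is competing with something the customer gets anyway.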

Another problem these vendors face is that while they are routinely talked about when the topic of 4K streaming comes up, and the need to deliver better quality with lower bitrates, there is no definition of what classifies as 4K streaming. Bitrates can always be compressed, but with what tradeoff? What is considered acceptable? There is no standard and no spec for 4K streaming, which is a big barrier to adoption and something that doesn't help these vendors. One will tell you they can do 4K at 10Mbps with no quality loss, another will say they can do it at 6Mbps. But that's a big difference. Are they both considered 4K? I don't know the answer, and no one does, since the industry hasn't agreed on what 4K means for the web.

Even a vendor whose stuff works really has their work cut out for them. There is simply a lack of education in the market, and vendors treat their technology as top secret, which doesn't help the market grow and stifles education. Also missing is how one technology compares to others, what is and isn't pre-processing, and why you would or wouldn't want one kind over the other.

Not to mention, no one knows what this stuff costs. Does it add 20% to a customer's encoding costs but then reduce their delivery costs by twice that? What's the total cost to their workflow? What size company can use these solutions? When does it make sense? How much traffic or content do you need to have? There are no studies out there, that I can find, with real numbers, real ROI metrics, savings calculators, etc. Most of these vendors don't explain any of this on their websites, or say who their ideal customer is, which makes it hard for a content owner to know how big they need to be before it makes economic sense. This tech may very well work, be useful and have a good ROI, but to date, vendors have not shown, documented or proven that in the market, with any real education.

To be fair, some vendors have announced a few major customers, but we don't know how these content owners are using it or how it's been implemented. And the customer announcements are sparse. eyeIO announced Netflix as a customer in February of 2012, but I don't know if they are still a customer or even using the technology anymore. Also, in the three years since eyeIO's press release about Netflix, they have only announced one other customer, in 2013, and none in 2014. Cinova, InterDigital, Faroudja Enterprises, Sigala and A2Z Logix haven't announced a single content owner customer via press release, that I can find. I know some of them do have a customer here or there, but nothing announced to the market. EuclidIQ has the best website in terms of the info they provide, but again, no customer details. Beamr has publicly announced M-GO and Crackle (Sony Pictures Entertainment) as customers, and I hear they have others that have yet to be announced, but like the other vendors, their website provides almost no details.

As one content owner said to me about stream optimization technology, “In short its got just as many drawbacks as advantages best case.” That’s not good for a niche segment of the industry that is trying to grow and get adoption. The lack of education alone could kill this in the market before it even has a chance to try to grow.

iPhone 6 Display Making Image Delivery Harder For CDNs, Forcing Shift To Responsive Web Design

The iPhone 6 and the iPhone 6+ hit the device market like a freight train. The dynamic duo shipped 10 million units in the first weekend alone, breaking all device release records. Some analysts expect 60-70 million iPhone 6 and 6+ units to ship in the first year after release. Lots of people have written about how these two new phones are forcing web publishers to up their game in preparation for these richer displays. What they may not have realized is that the iPhone 6 and iPhone 6+ also present a challenging problem for content delivery networks.

Over the past few months, multiple content owners have told me how the iPhone 6 and 6+ have made it harder for some CDNs to deliver their content to these devices with good quality. Apple's new phones challenge CDNs with a perfect storm of significantly larger images and a non-negotiable requirement to deliver images generated on the fly, even over wireless phone networks. This is territory where CDNs have historically struggled, even with the lighter requirements of earlier-generation smartphones with smaller displays.

The new iPhones are the final nail in the coffin for old-style m.dot websites and a forcing factor for the shift to Responsive Web Design (1, 2) sites. With the arrival of these two devices, websites must support nearly a dozen Apple device screen sizes. Add this to the assortment of popular Android display sizes and image management becomes a massive headache. On m.dot sites, this creates significant pain because each of the image sizes must be cached at the edge of the network for a CDN to deliver it properly.

There are software solutions to automatically generate and manage multiple image sizes as soon as a user hits a website, but those solutions mean the customer's IT organization is incurring significant technical overhead and creating a single point of failure for delivery. This may not have been as big a deal a few years ago, when sites did not change their larger images often, but today, most travel, ecommerce and media sites update images as often as several times per day.

Application delivery provider Instart Logic gave me an example of such a customer, Commune Hotels, a hip hotel chain that owns brands like Joie De Vivre and Thomson, which actually bakes in high-definition user photos from Instagram and updates them constantly. This type of behavior means that flushing a CDN's cache in a timely fashion can become a logistical nightmare all by itself. As a result, almost every web performance engineer and senior executive overseeing website performance that I am speaking with is in some stage of transition to a Responsive Web Design site. This, too, presents big problems, namely for the CDNs. Responsive Web Design sites rely on scalar image resizing at the edge of the network. That means images in a site's code are expressed as ratios rather than fixed sizes.

With Responsive Web Design, images must be generated on-the-fly to fit the specific device type and display size as called for by any user. Responsive Web Design also forces a site owner to generate multiple images in anticipation of user behaviors, like resizing browser windows, zooming in on multi-touch, etc. Some CDNs are wholly reliant on cached images and start to show significant performance degradation when they are forced to rely on images generated on the fly.

A critical compounding factor is the higher pixel ratio required by the iPhone 6 and iPhone 6+. These devices go further than any previous Apple mobile product, with the iPhone 6+ reaching a device pixel ratio of 3. This means that website publishers will need to push images to devices that are actually three times as large as the display size. This is in anticipation of user behavior on multi-touch devices: zoom, pinch, moving in and out. The behavior is common on smartphones and is a critical part of the user experience for digital experience delivery on image-rich sites in ecommerce, travel and media. (See Akamai's video on the subject)
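As a concrete illustration of what that pixel ratio means for image sizing, here is a minimal browser-side sketch that works out how large a source image needs to be for a given layout slot. The CSS dimensions are made-up example values; window.devicePixelRatio is the standard browser property that reports 3 on the iPhone 6+ class of devices.

```typescript
// Minimal sketch: how large a source image must be for a high-density display.
// The CSS layout size below is a hypothetical example.
const cssWidth = 375;   // layout width of the image slot, in CSS pixels
const cssHeight = 250;  // layout height of the image slot, in CSS pixels

// Standard browser property; reports 3 on the iPhone 6+ class of devices.
const dpr = window.devicePixelRatio || 1;

const neededWidth = Math.ceil(cssWidth * dpr);   // 1125 px at a ratio of 3
const neededHeight = Math.ceil(cssHeight * dpr); // 750 px at a ratio of 3

console.log(`Request an image of at least ${neededWidth}x${neededHeight} pixels`);
```

A site generating image variants on the fly has to produce and cache one of these larger renditions for every slot and every device class it serves, which is exactly the payload growth described above.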

This means that not only will CDNs need to handle on-the-fly image resizing, but they will also have to quickly push out to their caches images that are considerably larger than ever before. And then the CDNs are at the mercy of the "last mile" of the cell phone networks, where network performance is highly variable depending on location, time of day, micro-geography, user density and network connection type (LTE, etc.). What's more, web publishers pushing out these larger images via CDNs can experience significantly higher bandwidth costs. Over 60% of website payloads are already images, so the additional bump could add mightily to what large sites or image-heavy sites are paying each month to use CDNs.

By extension, methods to maintain image quality or serve larger images with less bandwidth could have a tremendous impact. Instart Logic, for example, has some new and intriguing ways to categorize the content of images and then optimize the first paint of an image to maintain quality but slash the data required to display it. The algorithm can, for example, figure out whether a photo is of a face or of a beach. If it's of a beach, then far less data is required to paint an image without noticeable quality loss. This can save 30% to 70% on initial paints. These types of software-defined solutions that rely on smarter ways to pass data over the last mile will trump the old brute force methods of CDNs, PoPs and fiber to the near-edge (because the edge now is always the cell tower).

Another area that will become important is tapping the power of browser caches on the devices. Newer devices and new HTML5 browsers have multiple caches, some higher performance than others by design. So savvy web performance solution providers will need to tap into those browser caches, take the content that is likely to be persistent on a site – header images, for example – and push it into the browser for fast recall. This will allow pages to appear to load much faster even if the site is waiting for other components to load.
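One way a site could do this today is with the browser's standard Cache API from inside a service worker. The sketch below precaches assets that are likely to persist across pages and serves them from the local cache on repeat views. The cache name and asset URLs are placeholders, not taken from any particular site, and this is just one possible mechanism, not how any specific vendor implements it.

```typescript
// Sketch of a service worker precaching assets likely to persist across pages
// (header and logo images), so repeat views paint without a network round trip.
// Cache name and asset URLs are placeholders.
const CACHE_NAME = 'persistent-assets-v1';
const PERSISTENT_ASSETS = ['/images/header-logo.png', '/images/nav-sprite.png'];

self.addEventListener('install', (event: any) => {
  // Pre-populate the cache when the service worker is installed.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PERSISTENT_ASSETS))
  );
});

self.addEventListener('fetch', (event: any) => {
  // Serve precached assets from the browser cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```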

Ultimately, CDNs will either need to figure out a way to deal with delivering much bigger images efficiently over the last mile or they will struggle with serious performance issues on the iPhone 6 and iPhone 6+ and other new devices coming to market. Their alternative will be deploying more capacity on the edge of the network, which is by and large not a cost-effective strategy.

I'd love to hear from other content owners in the comments section below on how they are handling the delivery of content to the iPhone 6 and 6+.

Image credit: myclever

The Adoption Of 4K Streaming Will Be Stalled By Bandwidth, Not Hardware & Devices

With all the talk of 4K that took place at CES, some within the industry are making statements and assumptions about 4K streaming bitrates that simply aren’t accurate. Many are under the impression that 4K streaming will soon be delivered at around 10-12Mbps using HEVC and are also quoting data from Akamai incorrectly. If you look at the HEVC testing that guys like Jan Ozer and Alex Zambelli have done, and look at the data Netflix has presented around their 4K encoding (Netflix’s current bitrate for 4K is 15.6Mbps), the bitrates won’t get down to 10-12Mbps anytime soon.

The reality is that true 4K streaming can't take place at even 12-15Mbps unless there is a 40% efficiency gain in encoding going from H.264 to HEVC, and the content is 24/30 fps, not 60 fps. Netflix has stated they expect HEVC to provide a 20-30% encoding efficiency gain vs. H.264 within two years. That's a long way from the 40% required to get bitrates down to 12-15Mbps. While 4K can in theory be compressed at 10-12Mbps, this is typically achieved by reducing the frame rate or sacrificing quality. As Encoding.com points out, to date, "most of the HEVC we've seen in the market is heavily noise-reduced with high frequency details blurred out to fake the 40% efficiency". The optimal bandwidth for high quality 4K is higher than 20Mbps. UMAX in Korea, for instance, compresses its 4K streams at 32Mbps at 60 frames per second, progressive (p60). For the full effect of sports and documentary content, this is a more realistic bit rate at today's compression efficiency.
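To make the arithmetic behind that concrete, here is a quick sketch that applies the 20-40% efficiency range to an assumed H.264 4K baseline bitrate. The 25Mbps baseline is my own assumption for illustration only; the efficiency figures are the ones discussed above.

```typescript
// Back-of-the-envelope sketch. The H.264 baseline bitrate is an assumption
// for illustration; the 20%-40% efficiency range is discussed in the text.
const h264Baseline4K = 25; // Mbps, assumed H.264 4K bitrate at 24/30 fps

for (const efficiency of [0.2, 0.3, 0.4]) {
  const hevcBitrate = h264Baseline4K * (1 - efficiency);
  console.log(
    `${(efficiency * 100).toFixed(0)}% efficiency gain -> ~${hevcBitrate.toFixed(1)} Mbps with HEVC`
  );
}
// Only the 40% case lands near 15 Mbps; the 20-30% Netflix expects does not.
```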

As state-of-the-art HEVC improves, some benefit will be reaped in terms of target bit rate. If the 40% efficiency improvements do indeed come true for HEVC, we might eventually see 4K streaming bitrates at the 10-12Mbps level, but that is a long way off. OTT streaming is completely driven by the economics of bandwidth and what it costs to deliver the content, so video only gets delivered at the minimum bit rate required to make it look generally acceptable. Costs drive adoption. As I have written about before, the dirty little secret about 4K streaming is that content owners can't afford the bandwidth costs. At Frost & Sullivan, we have done a lot of work on HEVC and 4K streaming trying to set the record straight on what is and isn't possible. See [Cutting Through The Hype Of HEVC] and [Why MSOs Should Not Consider Switching Directly from MPEG-2 to HEVC].

With Netflix already encoding 4K content at 15.6Mbps today, and with the expertise they have in encoding and the money they spend on bandwidth, they will get the bitrate lower over time. Some observers think it might go down to 10-12Mbps, but that would only be possible down the road and at 24/30 fps, not 60 fps. If you want 60 fps, it’s going to be even higher. But even if we use the 10-12Mbps number, no ISP can sustain it, at scale. So while everyone wants to talk about compression rates, and bitrates, no one is talking about what the last mile can support or how content owners are going to pay to deliver all the additional bits. The bottom line is that for the next few years at least, 4K streaming will be near impossible to deliver at scale, even at 10-12Mbps, via the cloud with guaranteed QoS.
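To put the delivery-cost side in perspective, here is a rough sketch of what a single 4K viewing session at Netflix's current 15.6Mbps bitrate means in delivered data. The session length and per-GB CDN price are hypothetical numbers chosen for illustration.

```typescript
// Rough sketch of delivered data and cost per 4K viewing session.
// Session length and CDN price per GB are hypothetical assumptions;
// the 15.6 Mbps bitrate is the Netflix figure cited in the text.
const bitrateMbps = 15.6;
const sessionHours = 2;        // assumed movie-length viewing session
const cdnPricePerGB = 0.03;    // assumed CDN price, USD per GB

const gbDelivered = (bitrateMbps / 8) * 3600 * sessionHours / 1000;
const costPerSession = gbDelivered * cdnPricePerGB;

console.log(`${gbDelivered.toFixed(1)} GB delivered per session`);       // ~14.0 GB
console.log(`~$${costPerSession.toFixed(2)} delivery cost per session`); // ~$0.42
```

Multiply roughly 14 GB per two-hour session across a large audience and it's easy to see why both last-mile capacity and delivery bills are the real constraints.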

When it comes to the percentage of consumers in the U.S. that have Internet speeds capable of getting 4K content, with a threshold of 15Mbps, many are using Akamai's data incorrectly. Multiple media outlets have said that, "Akamai says 19% of U.S. homes now can sustain the average 15 Mbps broadband speeds necessary to stream 4K/Ultra HD video." That is NOT what Akamai said, nor what their data shows. Akamai's data from their State Of The Internet Report doesn't break down what percentage of U.S. households have 4K-ready connections, but rather speaks to the percentage of unique IP addresses from the United States that connected to their platform during the third quarter with average connection speeds over 15 Mbps. And there's no direct correlation between unique IP addresses and households.

So for all those repeating the claim that 19% of U.S. households can get 4K streaming, that is not accurate. Don't repeat it just because you read it somewhere; check the source of the data yourself. The bottom line is that 4K and HEVC are exciting and they are the future. But vendors, content owners and the media need to have realistic expectations of what is and isn't possible with 4K streaming, and use real numbers when it comes to bitrates, costs, efficiencies and Internet speeds.

Microsoft In Partnership With Verizon For Azure Cloud CDN Service

In the second half of last year, Microsoft made the decision not to continue the build-out of their own CDN for their Azure cloud platform and instead struck a deal with Verizon to white label Verizon's EdgeCast CDN. While no partnership deal was ever officially announced, Microsoft has confirmed the deal to me, saying, "Microsoft licenses technology from many partners to complement our product offerings and to give customers complete solutions. We are happy to partner with EdgeCast to provide an integral component of the Azure Media Services workflow." Some might think it strange for Microsoft to shut down their in-house Azure CDN and rely on a third party, but considering Microsoft's approach to the market, it makes sense.

While many cloud providers like Amazon and others want to build everything in-house, Microsoft’s approach with Azure has always been to offer customers more flexibility and deeper functionality, by building the Azure platform with help from other focused solution providers in the market. They took the same approach last September when they rolled out live streaming and content protection offerings within the Azure Media Services group, partnering with Telestream and Newtek amongst others. Microsoft’s goal isn’t simply to build cloud components, but rather to offer an end-to-end ecosystem for video. The announcement this morning that GameStop will be using the Azure cloud platform to stream video game content direct to consumers and to devices in-store, shows the kind of solution that Microsoft is building with partners. Working with best of breed third party providers makes sense when competing with the likes of Amazon and Google, as offering greater product performance and depth helps Azure differentiate their service offering compared with internally built solutions from competitors.

It’s too early for me to say just how much revenue Verizon’s EdgeCast CDN will get from being the backend CDN for Azure, but it should be significant over time. Microsoft’s Azure cloud service continues to get more traction in the market and while Amazon’s cloud service has a lot more in the way of products, with the EdgeCast partnership, Azure has an opportunity to leapfrog ahead of Amazon’s CloudFront, given EdgeCast’s performance focus and CDN product development focus. Looking beyond CDN however, Azure is looking at solving the multitude of video workflow challenges, which is much more complex than just storage and delivery. Broadcasters and other media customers that need to be able to ingest, transcode, protect and deliver their content are out in the market looking for a single cloud based platform that can do it all.

Microsoft's goal with Azure is to become a robust and easy-to-use platform for customers who need an ecosystem, as opposed to stand-alone components. Microsoft still has a way to go with their Azure Media Services platform, but based on what they have done already, and the partners they have chosen, they are on the right track and will be one to watch in the new year. It also seems pretty powerful to have a big network player like Verizon and a big cloud software player like Microsoft partnering up to take a serious run at the enterprise cloud segment, a market where both companies have strength and one that Amazon and others hope to penetrate.

Bankers Say Roku Will Go Public Soon, Project Revenue Of $275M-$300M In 2015

Roku's CEO Anthony Wood was on CNBC earlier today, and the first question he was asked was whether Roku plans to go public this year. While he wouldn't comment on anything having to do with funding, which is expected, Wall Street bankers I have spoken with tell me Roku will go public shortly. What exactly "shortly" means remains to be seen, and while I haven't heard a specific date, I've been told that Roku is already well into the IPO process. Bankers tell me Roku's revenue for 2014 was over $200M, with them projecting 2015 revenue to be in the range of $275M-$300M. I'm also hearing that Roku is expected to become profitable in Q1 of this year. To date, Roku has raised over $150M in venture funding.

I don't know how much money Roku is looking to raise in their IPO, but I would estimate it to be $100M on the low end and as much as $150M on the high end. While 2014 wasn't one of the better years for new IPOs, GoPro has done well, and even though they sell a different type of consumer product than Roku, many on Wall Street will use GoPro's IPO success to excite others about Roku's business. In Q3 of last year, Roku said it had sold 10M players in the U.S. since launching in the market in 2008.