Clearing Up The Cloud Confusion re: Amazon, Disney, Hulu, BAMTech, Akamai and Netflix

Over the past few days there has been a lot of infrastructure news surrounding how video is delivered from third-party content delivery networks. Between the news around Disney’s upcoming OTT services, Hulu using Amazon for their live streaming service, and BAMTech now being 75% owned by Disney, some in the media are making inaccurate statements.

Let's start with the press release that Hulu is using Amazon's CDN CloudFront to help deliver some of the streams for Hulu's new live service. This isn't really "new" news, as Hulu confirmed for me in May that they were using Amazon, along with Akamai, for their live streams, and other CDNs as well for their VOD content. For live stream ingestion, Hulu is taking all of the live signals via third-party vendors including BAMTech and iStreamPlanet. What the Hulu and Amazon tie-up does show us is just how commoditized the service of delivering video over the Internet really is. Nearly every live linear OTT service uses a multi-CDN approach, even for their premium tiers. Case in point: AT&T is using Level 3, Limelight and Akamai for their DirecTV Now live service, and this is the norm, not the exception. There was also a blog post saying it's important for Disney and BAMTech to "own not just content assets, but also delivery infrastructure." But BAMTech doesn't own any delivery infrastructure; they use third-party CDNs.

In the round up of Amazon and Hulu news, multiple blogs are also implying that Netflix uses Amazon to deliver Netflix videos. Statements like "Netflix depends on Amazon to deliver its ever-growing library of shows and movies to customers" are not accurate. Yes, Netflix relies heavily on Amazon's cloud services, but not for video delivery. Netflix delivers all of its videos from its own content delivery network (Open Connect) and doesn't use Amazon's CloudFront CDN for video delivery at all.

When it comes to Disney's new ESPN OTT service, due out in 2018, and their Disney-branded movie/content service due out in 2019, some have said third-party content delivery networks, and in particular Akamai, will see a "boost" or "great benefit" from these new services. But the reality is, they won't. And not just Akamai, but any of the third-party CDNs that BAMTech uses, of which they use many. If you just run the numbers, you can see what a contract from Disney would be worth to any third-party CDN, specific to the bits consumed. If ESPN had 3M subs from day one, which they won't, and each user watched 5 hours a day, with 50% of their viewing on mobile and 50% on a large screen, each user would consume about 130GB of data per month. At a price of about $0.004 per GB delivered, each viewer would be worth about $0.52 per month to a CDN.

And with BAMTech using multiple CDNs for their live streaming, if three CDNs each got 1/3 of the traffic, each CDN's share would be worth just over $500k per month. But ESPN won't have 3M subs from day one, so the value would be even lower. For a company like Akamai, which had $276M in media revenue in Q2, an extra $1M or less in revenue per quarter isn't a "boost" at all. So for some to write posts saying ESPN's new OTT service could be a "large business opportunity for the company [Akamai]" is simply not true. Reporters should stop using words like "large" and "big" when discussing opportunities in the market if they aren't willing to define them with metrics and actual numbers.
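The back-of-the-envelope math above can be sketched in a few lines. All the inputs are the assumptions stated in the text (a hypothetical 3M day-one subscribers, ~130GB per user per month, ~$0.004 per GB, traffic split across three CDNs), not reported figures:

```python
# Back-of-the-envelope CDN revenue math for a hypothetical ESPN OTT launch.
# Every input below is an assumption from the text, not a reported figure.

subs = 3_000_000        # hypothetical day-one subscribers
gb_per_month = 130      # approx. data consumed per user per month
price_per_gb = 0.004    # approx. CDN price per GB delivered ($)
num_cdns = 3            # traffic split evenly across three CDNs

value_per_user = gb_per_month * price_per_gb   # ~$0.52 per user per month
total_monthly = subs * value_per_user          # ~$1.56M total per month
per_cdn_monthly = total_monthly / num_cdns     # ~$520k per CDN per month

print(f"per user: ${value_per_user:.2f}/month")
print(f"per CDN:  ${per_cdn_monthly:,.0f}/month")
```

Even at the generous 3M-subscriber assumption, the per-CDN number lands just above $500k per month, which is the point of the exercise.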

No OTT Service Has Figured Out How To Achieve Service & Monetization Parity Across Traditional & Online Broadcasts

It’s no secret that TV by appointment is giving way to OTT-centric preferences. Frost & Sullivan’s research numbers corroborate this trend at many levels such as growing rates of OTT viewership, falling STB sales, soaring connected device and smart device usage, and thriving growth in multi-screen video transcoding and protection solutions. We also see continued expansion of online video offerings from websites and via apps, both by pay TV service providers and directly by broadcasters.

Against this backdrop, we see recent service offerings available in the market, such as Hulu Live, YouTube Live and FOX making all of its primetime programming available live to all US markets. Hulu is now nearly a decade old, and broadcasters like CBS, NBC and ABC have offered OTT streaming for some time now, as have HBO and ESPN. Content is often free for pay TV subscribers after username and password authentication; monthly fees for standalone consumption are nominal. And yet, no OTT provider has yet figured out how to achieve service and monetization parity across traditional and online broadcasts.

FOX has shown some success because it allows local affiliates to control the advertising and branding of its channels. All of FOX's primetime entertainment is streamed live, rather than select shows, and 210 regional U.S. markets are covered, as opposed to the more selective coverage of other broadcasters. Consequently, FOX boasts that nearly all pay TV households in the US can now view FOX channels online via their streaming media devices, smart TVs, and tablets. That affiliate arrangement is how FOX achieved a ubiquity of coverage in the U.S. where other broadcasters had so far failed.

This is in stark contrast to the ongoing trend of disintermediation where broadcasters seek to go directly to end consumers, bypassing the pay TV service providers. This second difference, in terms of monetization and branding, holds the promise of solving one of the most vexing challenges with OTT today, which is monetization. Targeted ads and usage fees have thus far fallen short of their promise. Programmers, service providers and broadcasters have all been challenged to maintain their business brands in a market where consumers often confer loyalty to specific shows, specific talent, or select social media destinations more than channels or service providers. By managing to cooperatively partner with affiliates on advertising and branding and thereby avoiding conflict and competition, FOX may perhaps have found a win-win middle ground.

This is of course easier said than done, and much will depend on the quality of experience and inventory of ads that will be delivered. The initial statistics are certainly promising. The third difference appears to be that this will truly be live-streamed content, in contrast to other offerings where episodes are made available for on-demand viewing concurrently with, or at a short delay after, the conventional broadcast goes live. While this technological difference is significant and noteworthy to infrastructure vendors, I'm also of the opinion that everyday users should neither notice nor care about this distinction.

The flip side of these services is that they shed light on the many shortcomings of the OTT ecosystem today. FOX is not currently providing sports content through this framework. Sports continue to be provided through a separate app and presumably a separate set of agreements. Viewers, even pay TV subscribers, continue to be subject to the disparity and lack of consistency in content access across types of content, channels, resolutions, regions and in some cases device support. Service levels can vary dramatically by location, even for the same user. Service provider apps and destinations offer overlapping content with broadcaster apps and destinations, with online video services often joining the same fray. Users are left to figure out the nuances of true live streaming, catch-up TV, cloud DVR and video on demand, all of which should "ideally" simply be "TV on any screen".

Content services are most beloved when they offer delightful, consistent, cross-device OTT experiences that are on par with conventional live linear managed experiences. While tier-1 services such as Comcast and others are coming closer to this ideal in the U.S., the overall problem is far from solved. Even a decade after Netflix and Hulu first began to stream content, no one has fully figured out how to achieve service and monetization parity across traditional and online broadcasts.

Apple’s Adoption Of HEVC Will Drive A Massive Increase In Encoding Costs Requiring Cloud Hardware Acceleration

For the last 10 years, H.264/AVC has been the dominant video codec used for streaming, but with Apple adopting H.265/HEVC in iOS 11 and Google heavily supporting VP9 in Android, a change is on the horizon. Next year the Alliance for Open Media will release its AV1 codec, which will improve video compression efficiency even further. The end result is that the codec market is about to get very fragmented, with content owners soon having to decide whether they need to support three codecs (H.264, H.265, and VP9) instead of just H.264, and with AV1 deployments expected to follow in 2019.

As a result of what's taking place in the codec market, and with consumers demanding better quality video, content owners, broadcasters and OTT providers are starting to see a massive increase in encoding costs. New codecs like H.265 and VP9 require about 5x the server resources of H.264 because of their complexity. Currently, AV1 requires over 20x the server resources. The mix of SD, HD and UHD continues to move to better quality, e.g. HDR, 10-bit and higher frame rates; the server encoding cost to move from 1080p SDR to 4K HDR is about 5x. Consumption of 360° video and Facebook's 6DoF video is also growing, which again increases encoding costs by at least 4x.

If you multiply all these factors together, it's not hard to do the math and see that for some, encoding costs could increase by 500x over the next few years as new codecs, higher quality video, 360 video and general demand increases. If you want to see how I get to that number, here's the math:

  • 5x number of minutes to be encoded over today
  • 5x the encoding costs for new codecs like VP9 and HEVC over H.264
  • 5x as more video moves to higher resolution, higher frame rate and HDR (e.g. 4Kp60 HDR has 4x the pixels of 1080p60 SDR, plus 10-bit HDR overhead)
  • 2x as now you have to support two codecs (H.264 & HEVC or VP9)
  • 2x if you have to support 360 video and Facebook’s 6DoF (Degrees of Freedom)
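Multiplying the factors in the list above gives the headline number:

```python
# Rough encoding-cost multipliers from the list above (the author's
# estimates, compounded; not measured figures).
factors = {
    "more minutes encoded": 5,
    "new codecs (HEVC/VP9) vs H.264": 5,
    "higher resolution / frame rate / HDR": 5,
    "supporting two codecs": 2,
    "360 video and 6DoF": 2,
}

total = 1
for name, multiplier in factors.items():
    total *= multiplier

print(total)  # 500
```

Any one multiplier is debatable, but because they compound, even trimming each one still leaves a cost increase of two orders of magnitude.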

This is why, over the past year, a new type of accelerator in public clouds, the Field Programmable Gate Array (FPGA), has been growing in the market. Unlike CPUs and GPUs, FPGAs are not programmed using an instruction set but by wiring up an electrical circuit. This is the same way traditional Application Specific Integrated Circuits (ASICs) work, but a big difference is that an FPGA can be reprogrammed "in the field". This means it can be provisioned on demand in the cloud just like CPUs and GPUs. Fortunately, customers just need to change a single line of code to replace a software encoder with an FPGA encoder and still get the benefits of using common frameworks like FFmpeg.

Encoding software such as x265 provides a range of presets that let the user trade off overall computing requirements against the size of the encoded video. x265 can produce very high-quality results with the "veryslow" preset: the encoding rate (frames per second encoded) is low, yielding the best compression, but at considerable cost in encoding resources. An AWS EC2 c4.8xlarge instance running x265 delivers only about 3 frames per second (fps) of 1080p video. Hence, to deliver 60fps, 20 c4.8xlarge instances would be required, which would cost around $33 an hour.

By comparison, video compression vendor NGCodec's encoder running on the AWS EC2 FPGA instance f1.2xlarge delivers better visual quality than x265 'veryslow' and can deliver over 60 fps on a single f1.2xlarge instance. The total cost would be around $3 an hour, including the cost of the f1 instance and the cost of the codec. That is a savings of over 10x, while also avoiding the complexity of parallelizing live video across multiple C4 instances. This cost and quality benefit is why public cloud providers like Amazon, Baidu, Nimbix, and OVH have already deployed FPGA instances which their customers can use on demand. Many other data center providers tell me they are also developing public FPGA instances, and I expect this trend to continue.
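Using the rough figures above (the fps and price numbers are the approximations from the text, not benchmarked results), the cost gap works out as:

```python
# Approximate live-encode cost comparison, using the ballpark figures
# from the text. Treat all prices as illustrative, not quoted rates.

target_fps = 60
x265_fps_per_instance = 3   # 1080p fps from one c4.8xlarge (per text)
c4_price_per_hour = 1.65    # approx. on-demand $/hour for c4.8xlarge

instances_needed = -(-target_fps // x265_fps_per_instance)   # ceil -> 20
cpu_cost_per_hour = instances_needed * c4_price_per_hour     # ~$33/hour

fpga_cost_per_hour = 3.0    # one f1.2xlarge plus codec license (per text)

print(f"CPU encode:  ~${cpu_cost_per_hour:.0f}/hour across {instances_needed} instances")
print(f"FPGA encode: ~${fpga_cost_per_hour:.0f}/hour on one instance")
print(f"savings:     ~{cpu_cost_per_hour / fpga_cost_per_hour:.0f}x")
```

The "over 10x" savings claim in the text falls straight out of $33/hour versus roughly $3/hour, before even counting the operational cost of fanning one live stream out across 20 instances.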

I’d be interested to hear what others think of FPGA and welcome their comments below.

Why Vendors Need To Hire Marketing Leaders From Outside Their Industry

With the growth we have seen in the streaming media market, most vendors have a long list of open reqs, all desperately trying to hire the same people. And while it makes sense to hire sales and product folks from within the industry, I’m a firm believer that when it comes to marketing positions, vendors need to start hiring from outside our industry – immediately.

At first many might think that's an odd statement for me to make, but marketing is a skill set. Either you understand how to market a product or you don't. Understanding the technology helps to a degree, but companies aren't selling technology, they are selling a service. No offense to the marketers in our industry, but we need some fresh blood: people who know how to market a product/service and bring a different perspective to the industry. Even for myself, I recently hired a marketing specialist to help me re-think and re-imagine my brand in the market. A good marketing person knows how to tell a story and transform a product and service into something compelling, even if they don't know how to encode a piece of video.

As an industry, we are all using far too many high-level and generic words like speed, quality, performance, scale etc. with no real meaning behind them. Good marketing and branding is an art. It involves knowing how to price, package and productize a product/service and do it in a way that resonates with the customer, be it b2b or b2c. Those with good marketing skills know how to transcend verticals and markets, while delivering a clear message. And the really good marketers can meld companies, industries and make brands more valuable and relevant.

As I have seen first hand from the marketing person I am working with, great marketers are remarkable observers. They love to observe people’s behaviors and can quickly tell what a person likes, what resonates with them, and what creates the experience the client is looking for. A good marketing person is also extremely curious, they constantly ask questions, and want to know what businesses and people think of things. Skillful marketers always have questions to ask and never run out of ways to think about how a person or business reacts to a name, a brand, a service or a feeling. In short, really good marketing folks are geniuses because they aren’t afraid to try something new, to disrupt the market, to change how people think.

A good marketing person doesn’t work 9 to 5. They spend a lot of personal hours watching people, questioning the norm, researching, looking at data and advancing their skills. They tend to read everything they can and absorb information like a sponge, constantly retaining it for later. They are great planners, but even better doers. Marketing professionals live in the trenches, because it’s where they get their energy from and they don’t use buzz words or quotes from books, because they have been there and done it. They have the hands-on experience, are always thinking, coming up with ideas, and trying something new. They also love their community, are aware of their surroundings, love challenges, and I’ve found, they never start a conversation with a list of their achievements. They are most interested in their client’s challenges and how they can solve them.

I have also found that really good marketers are quite humble, don't come with an ego and are not seeking glory. They take great pride in their work and love to see a campaign and branding exercise succeed. Great marketers believe in accountability, are not afraid of data and reporting, and have a tangible methodology to determine the client's ROI.

When it comes to marketing products and services in the online video industry, it's time for our market to be disrupted. We need change. We need to evolve. We all need a fresh perspective. Even me. It does not matter how long you have been in the space; in fact, I think the longer you have been in the industry, the more of a disservice it can be when it comes to having a fresh marketing approach. Right now I am having someone look at what I do, critique it, change it, and find ways to make it even more relevant and make it transcend verticals, which is the only way for any business and brand to grow. And that is the true value of a marketing genius: growing a company. If you are interested in branding, marketing, and packaging help, feel free to reach out to me and I'll put you in contact with the marketing genius I am using.

PacketZoom Looking To Solve TCP Slow Start By Boosting Mobile App Performance

TCP is a protocol meant for stable networks, and all too often, mobile is anything but stable, with unreliable and often bottlenecked connections that conflict with TCP mechanisms like slow start on transfers. But while TCP looks inevitable from some angles, PacketZoom, an in-app technology that boosts mobile app performance, says outperforming TCP is actually not as difficult as it looks. The problem is certainly complex, with many moving pieces involved, especially when dealing with the last mile. But if you break the problem down into digestible chunks and apply observations from the real world, improvement becomes achievable.

The internet is made up of links of different capacities, connected by routers. The goal is to achieve maximum possible throughput in this system. However, the links all possess different capacities, and we don’t know which will present bottlenecks to bandwidth ahead of time. PacketZoom utilizes a freeway analogy to describe the system. Imagine we have to move trucks from a start to end point. At the start point, the freeway has six lanes — but at the end, it has only two lanes. When the trucks are received at the end, an “acknowledgement” is sent back. Each acknowledgement improves the knowledge of road conditions, allowing trucks to be sent more efficiently.

The goal is for the maximum number of trucks to reach the destination as fast as possible. You could, of course, send six lanes worth of trucks to start, but if the four extra lanes suddenly end, the extra trucks must be taken off the road and resent. Given the unknown road conditions at the start of the journey, TCP Shipping, Inc. sensibly starts out with just one lane worth of trucks. This is the "slow start" phase of TCP, and it's one of the protocol's unsolved problems in mobile. TCP will start with the same amount of data for all kinds of channels, and thus often ends up with an initial sending rate that's far smaller than the available bandwidth; or, if we were to start off with too much data, the rate would exceed the available bandwidth and require resending.

However, PacketZoom says it is possible to be smarter by choosing the initial estimate based on channel properties discernible from the mobile context. In other words, we could start off with the ideal 2 lanes of trucks, shown below in frame C.

Of course, it's impossible to have perfect knowledge of the network ahead of time, but it is possible to get a much better estimate based on prior knowledge of network conditions. If estimates are close to correct, the TCP slow start problem can be enormously improved upon. PacketZoom's contention is not that TCP's start has never improved. Traditionally, TCP's initial congestion window was set to 3 MSS (maximum segment size, derived from the MTU of the path between two endpoints). As networks improved, this was raised to 10 MSS; then Google's QUIC (Quick UDP Internet Connections) protocol, in use in the Chrome browser, raised it to 32 MSS.
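To see why the initial window matters, here is a simple sketch of how many round trips classic slow start (which doubles the congestion window each RTT) needs to fill a pipe, starting from the initial windows mentioned above. The 100 MSS target is a hypothetical bandwidth-delay product chosen only for illustration:

```python
# Round trips for TCP slow start (window doubles per RTT) to reach a
# target bandwidth-delay product, from different initial windows.

def rtts_to_fill(initial_mss: int, target_mss: int) -> int:
    """Count RTTs until the congestion window reaches target_mss."""
    window, rtts = initial_mss, 0
    while window < target_mss:
        window *= 2
        rtts += 1
    return rtts

target = 100  # hypothetical pipe capacity, in MSS units
for iw in (3, 10, 32, 100):
    print(f"initial window {iw:>3} MSS -> {rtts_to_fill(iw, target)} RTTs to fill the pipe")
```

On a high-latency mobile link where each RTT can be hundreds of milliseconds, the difference between starting at 3 MSS and starting near the true capacity is several round trips of wasted time on every connection, which is the gap a good prior estimate closes.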

But mobile has largely been passed by, because traffic run through a browser is the minority on mobile. A larger fixed initial window, like QUIC's, is also not a solution, given the vast range of conditions between a 2G network in India and an ultra-fast WiFi connection at Google's Mountain View campus. Today, a very common case is mobile devices accessing content over wireless networks where the bandwidth bottleneck is the last mobile mile. And that path has very specific characteristics based on the type of network, carrier and location. For instance, 3G on T-Mobile in New York behaves differently than LTE on AT&T in San Francisco.

From the real-world data PacketZoom has collected, the company has observed initial values ranging from 3 MSS to over 100 MSS for different network conditions. Historical knowledge of these conditions is what allows them to avoid slow starts and instead have a head start. Crunching a giant stream of performance data for worldwide networks to constantly update bandwidth estimates is not a trivial problem, but it's not intractable either, given the computing power available today. In a typical scenario, if a connection starts with a good estimate and performs a couple of round trips across the network, it can very quickly converge on an accurate estimate of available bandwidth. Consider the following Wireshark graph, which shows how quickly a 4MB file was transferred with no slow start (red line) versus TCP slow start (blue line).

TCP started with a very low value and took 3 seconds to fully utilize the bandwidth. In the controlled experiment shown by the red line, full bandwidth was in use nearly from the start. The blue line also shows some pretty aggressive backoff that’s typical of TCP. In cricket, they often say that you need a good start to pace your innings well, and to strike on the loose deliveries to win matches. PacketZoom says that in their case, the initial knowledge of bottlenecks gets the good start. And beyond the start, there’s even more that they can do to “pace” the traffic and look for “loose deliveries.” The payoff to these changes would be a truly big win: a huge increase in the efficiency, and speed, of mobile data transfer.

Mobile app performance is a fascinating topic, as now, more than ever, we're all consuming more content via apps over mobile networks, as opposed to using the browser. I'd be interested to hear from others on how they think the bottlenecks can be solved over the mobile last mile.