Majority Of Mobile Video Viewing Still Under 3 Minutes In Length

This morning Ooyala issued its Q2 2014 Global Video Index Report, providing insights into video viewing trends on mobile, desktop, tablet and TV screens. Not surprisingly, multi-screen video consumption is growing, and in the past year mobile video viewing has more than doubled to over 25% of all online viewing. While that’s impressive growth, the data also shows that the majority of users who watch video on mobile devices are still consuming short-form clips under three minutes in length. As the report details, viewers are looking to big screens for big chunks of their entertainment. Here are some highlights from the report, but to get the full picture I suggest you download it.

  • On connected TVs, viewers spent 65% of their time watching videos 30 minutes or longer; and over half of that time (54%) was with content longer than 60 minutes.
  • On tablets, viewers spent 23% of their time watching video of 30–60 minutes in length, more than on any other device.
  • 81% of time watched on the largest screen, connected TVs, was with videos longer than 10 minutes.
  • Mobile video share has increased 127% year-over-year and 400% in the past two years.


Transparent Caching Provider PeerApp Now Has 450 Deployments, New CEO

Transparent caching provider PeerApp has been pretty quiet over the past year, with new startup Qwilt seemingly getting all of the attention in the transparent caching space. Still the leader in the market, based on revenue, PeerApp is looking to accelerate its business and has made a number of new executive hires recently, including the appointment of a new CEO in June. This morning, the company announced it has added 50 new customers since the start of the year, and that its solution is now deployed at over 450 network operators and enterprises worldwide. The company also disclosed that many customers are managing 100-500 Gigabits of capacity on PeerApp’s platform.

As I detailed in my last transparent caching report, I have some concerns around the long-term viability of the transparent cache as a stand-alone product. Content delivery and Web acceleration vendors are moving towards integrating transparent caching technology into a broader set of Web-optimization platforms. This will create a bigger ecosystem, faster deployment, and more traction for the technology as a whole, but it will also cause transparent caching to no longer be thought of as a stand-alone offering in the market. Vendors will need to move up the stack with their offerings and integrate their platforms into larger delivery ecosystems.

The other problems transparent caching vendors are encountering are the growing volume of HTTPS traffic (YouTube, for example) that can’t be cached, caches that Netflix deploys directly inside operator networks, and the collapse of per-Mbps pricing, which makes expanding the business very price sensitive. However, the good news is that the global transparent caching industry is still healthy and I expect it to grow at a compound annual growth rate (CAGR) of 30.2% from 2012 to 2017. Content delivery on the web is constantly changing, requiring caches that intelligently and dynamically identify and adapt to shifting content access patterns. Higher-quality video is coming, live video is exploding, operators are demanding better QoE and the market for transparent caching solutions is only going to accelerate.
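The core idea behind adapting to shifting access patterns can be illustrated with a minimal sketch. This is a generic least-recently-used (LRU) cache, a hypothetical simplification and not any vendor's actual logic: content that stays hot stays cached, and content that goes cold gets evicted as demand shifts.

```python
from collections import OrderedDict

class TransparentCache:
    """Minimal LRU sketch: hot content stays cached, cold content is evicted
    as access patterns shift. Real transparent caches are far more involved."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> cached bytes, ordered by recency

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)  # refresh recency on a cache hit
            return self.store[url]
        return None  # cache miss: caller fetches from origin, then put()s

    def put(self, url, body):
        self.store[url] = body
        self.store.move_to_end(url)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
```

Production caches layer popularity prediction and protocol awareness on top of this, but recency-based eviction is the baseline behavior being described.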

IBC News Recap: HEVC, Cloud Workflows, Media Management, Video Encoding & Optimization

I wasn’t at the IBC show this year, but here’s a rundown of the news announcements that I saw. Lots of focus on HEVC, as expected, but less talk about 4K compared to last year. If I had to pick one theme from this show, it would be cloud-based media management platforms. I found two announcements to be particularly interesting: Brightcove’s new stand-alone video player platform, decoupled from its OVP services, and Microsoft Azure Media Services’ new live streaming platform. I’ll have more details and thoughts on both of these new offerings later in the week.

Inside Apple’s Live Event Stream Failure, And Why It Happened: It Wasn’t A Capacity Issue

Apple’s live stream of the unveiling of the iPhone 6 and Watch was a disaster today right from the start, with many users like myself having problems trying to watch the event. While at first I assumed it must be a capacity issue pertaining to Akamai, a deeper look at the code on Apple’s page and some other elements from the event shows that decisions Apple made about its website, along with problems in how it set up storage on Amazon’s S3 service, were the biggest contributors to the event’s problems.

Unlike the last live stream Apple did, this time around Apple decided to add JavaScript code to the page that pulled in JSON (JavaScript Object Notation) data to power an interactive element on the bottom showing tweets about the event. As a result, the page was making refresh calls every few milliseconds. By deciding to add that code, Apple made the website uncacheable. Apple usually has Akamai cache the page for its live events, but this time around there was no way for Akamai to do that, which had a huge impact on performance when it came to loading both the page and the stream. And since Apple embeds its video directly in the web page, any performance problems on the page also affected the video. Akamai didn’t return my call asking for more details, but looking at the code shows there was no way Akamai could have cached it. This is also one of the reasons why, when I tried to load Apple’s live event page on my iPad, it kept making Safari quit. That’s a problem with the code on the page, not with the video.
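Why constantly refreshing content defeats edge caching can be sketched with a rough model of the decision a cache makes (a deliberate simplification; real CDN cacheability logic is far more involved, and the headers shown are illustrative, not Apple's actual ones). A response that opts out of caching, or whose freshness lifetime is effectively zero, forces every request back to origin:

```python
def edge_cacheable(headers):
    """Simplified sketch of an edge cache's decision based on standard
    HTTP Cache-Control semantics. Responses that opt out of caching
    must be fetched from origin on every single request."""
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc or "no-cache" in cc or "private" in cc:
        return False
    if "max-age=0" in cc:
        return False
    # Cacheable only if some positive freshness lifetime is declared.
    return "max-age" in cc or "Expires" in headers

# Dynamic data polled every few milliseconds effectively behaves like this:
assert edge_cacheable({"Cache-Control": "no-store"}) is False

# versus a static event page a CDN could cache for the event's duration:
assert edge_cacheable({"Cache-Control": "public, max-age=60"}) is True
```

Once responses fall into the first category, every viewer's request becomes an origin hit, which is exactly the load pattern a CDN exists to prevent.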

Because of all the refresh calls from the JSON-related JavaScript code, it looks like the player was artificially forced to degrade the quality of the video, dropping it down to a lower bitrate, because it thought there were more requests for the stream than there actually were. As for the foreign-language translation we heard for the first 27 minutes of the event, that’s all on Apple, as they do the encoding themselves for their events, from the event location. Clearly someone on Apple’s side didn’t have the encoder set up right, and their primary and backup streams were also way out of sync. So whatever Apple sent to Akamai’s CDN is what got delivered, and in this case the video was overlaid with a foreign-language track. I also saw at least one instance where I believe Apple’s encoder(s) were rebooted after the event had already started, which probably also contributed to the “could not load movie” and “you don’t have permission to access” error messages.
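The bitrate drop described above is consistent with how adaptive bitrate (ABR) players generally work. Here is a hypothetical sketch of the selection logic (the bitrate ladder and safety margin are made-up illustrative numbers, not Apple's player): the player picks the highest rendition that fits within a margin of measured throughput, so anything competing for the connection, like constant page refresh calls, pushes it down the ladder.

```python
def pick_bitrate(measured_throughput_kbps, renditions_kbps, headroom=0.8):
    """Hypothetical ABR rendition selection: choose the highest bitrate
    that fits within a safety margin of measured throughput. If page-level
    refresh traffic competes for the connection, measured throughput drops
    and the player steps down to a lower-quality stream."""
    budget = measured_throughput_kbps * headroom
    fitting = [r for r in sorted(renditions_kbps) if r <= budget]
    return fitting[-1] if fitting else min(renditions_kbps)

ladder = [400, 800, 1500, 3000, 6000]  # example bitrate ladder, in kbps
assert pick_bitrate(8000, ladder) == 6000  # healthy connection: top rendition
assert pick_bitrate(1200, ladder) == 800   # congested page: forced step-down
```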

Looking at the metadata from the event page, you could see that Apple was hosting content for the interactive element on the event page on Amazon’s S3 cloud storage service. From what I can tell, it looks like Apple set up the content in a single, poorly configured bucket on S3, resulting in little to no cache hit ratio. Amazon didn’t reply to my request for more info, but it’s clear that Apple didn’t set up its S3 storage correctly, which caused huge performance issues when all the requests hit Amazon’s network in a single location.
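One common way to avoid that failure mode, sketched here with made-up bucket and key names and assuming the objects were simply uploaded without caching metadata (we don't know Apple's actual configuration), is to attach a Cache-Control header to each S3 object so browsers and downstream caches can absorb repeat requests instead of every viewer hitting one bucket:

```python
def cacheable_put_kwargs(bucket, key, body, max_age=300):
    """Build the arguments for S3's put_object call with a Cache-Control
    header attached, so edge caches and browsers can serve repeat requests
    instead of every viewer hammering a single bucket. All names here are
    hypothetical."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "CacheControl": "public, max-age=%d" % max_age,
    }

# Usage with the AWS SDK (boto3), assuming credentials are configured:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**cacheable_put_kwargs("event-assets", "tweets.json", b"{}"))
```

Fronting the bucket with a CDN distribution rather than serving it directly would address the single-location problem as well.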

As for Akamai’s involvement in the event, it was the only CDN Apple used. Traceroutes from all over the planet (thanks to all who sent them in to me) showed that Apple relied solely on Akamai for the delivery. Without Akamai being able to cache Apple’s webpage, video performance took a huge hit. If Akamai can’t cache the website at the edge, then all requests have to go back to a central location, which defeats the whole purpose of using Akamai or any other CDN to begin with. Every CDN’s architecture is based on being able to cache content, which in this case Akamai clearly was not able to do. The below chart from third-party web performance provider Cedexis shows Akamai’s availability dropping to 98.5% in Eastern Europe during the event, which isn’t surprising if no caching is being used.

The bottom line with this event is that the encoding, the translation, the JavaScript code, the video player, the calls to a single S3 storage location and the millisecond refreshes all failed to work properly together, and that combination was the root cause of Apple’s failed attempt to make the live stream work without any problems. So while it would be easy to say it was a CDN capacity issue, which was my initial thought considering how many events are taking place today and this week, it does not appear that a lack of capacity played any part in the event not working properly. Apple simply didn’t provision and plan for the event properly.

Updated Thursday Sept. 9th: From talking to transit providers and looking at DeepField data, Apple’s live video stream did 6-8Tbps at peak; for comparison, the World Cup peak on Akamai was 6.8Tbps. So the idea that this was a capacity issue isn’t accurate, and the event didn’t generate some of the numbers I’ve seen people claim, like “hundreds of millions” watching the stream.

Updated Thursday Sept. 9th: While some in the comments section want to argue that problems with the webpage didn’t impact the video, here is another post from someone who explains, in much better detail than I can, many of the problems with Apple’s website that contributed to the live stream issues. See: Learning from Apple’s livestream perf fiasco

Internet Traffic Records Could Be Broken This Week Thanks To Apple, NFL, Sony, Xbox, EA and Others

Thanks to so many large-scale live events and large file downloads taking place this week, it’s going to be a huge week of traffic on the Internet, with content delivery networks and last mile providers preparing for what’s to come. Tomorrow in particular will be a big day on the net, with so many things taking place all on the same day. Apple product announcements always make for a busy day on the web, and while iOS 8 won’t be available for download tomorrow, here is a list of all the other events taking place tomorrow, or this week.

  • Monday Night Football (WatchESPN)
  • Apple’s Product Announcement (Tuesday)
  • Microsoft Security Patches (Tuesday)
  • NFL Now/Game Rewind Highlights (Tuesday is busiest day for NFL videos)
  • Yahoo! Aerosmith Concert (Tuesday)
  • Bungie’s Release Of Destiny Game (Tuesday)
  • EA Sports Fifa 15 Game Beta (Tuesday)
  • EA Sports NHL 15 Game (Tuesday)
  • League Of Legends NA/Europe Seed Matches (Tuesday)
  • Xbox Free Game Releases (including Halo: Reach)
  • Sony PS4 White Destiny Bundle (Tuesday)
  • New York Fashion Week Live (Monday-Thursday)
  • Apple’s iOS 8 Download (Probably Thursday)
  • President’s ISIS Speech (Wednesday)

Delivering video over the Internet at the same scale and quality that you can over a cable network isn’t possible. The Internet is not a cable network, and if you think otherwise, you will be proven wrong this week. We’re going to see long download times, more buffering of streams, more QoS issues, and ISPs taking steps to deal with the traffic, knowing it will have a negative impact on the user experience. When iOS 8 comes out, some last mile providers are going to struggle, and some will rate limit their network connections, as we saw the last time an Apple iOS download was available. For some ISPs, iOS 7 downloads took up 40% of their traffic. Also, all other content providers are going to have to compete with this traffic, and many I spoke to about it are keeping an eye on their quality guarantees and SLAs with their CDNs this week.
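The rate limiting mentioned above is commonly implemented as a token bucket. Here is a generic sketch (an illustration of the technique, not any ISP's actual implementation): traffic can burst up to the bucket's capacity, after which sustained throughput is capped at the refill rate.

```python
class TokenBucket:
    """Generic token-bucket rate limiter: allows bursts up to `capacity`
    bytes, then caps sustained throughput at `rate` bytes per second.
    Parameters here are illustrative, not any ISP's real policy."""

    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate, bytes per second
        self.capacity = capacity  # maximum burst size, bytes
        self.tokens = capacity    # bucket starts full
        self.last = 0.0           # timestamp of the last check

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # over the limit: delay or drop this traffic
```

An ISP-style throttle would sit in the data path and queue or drop packets whenever `allow` returns False, smoothing out a download surge like an iOS release day.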

As for which CDNs are delivering all this content, I’ll be doing a lot of traceroutes this week, but Akamai, Limelight and Level 3 are all in the mix. I know Akamai will see the most web traffic from news sites covering Apple’s product announcements. Last time I looked, Microsoft’s security patches were being delivered by Akamai, Limelight and Level 3. Yahoo’s concert is being done by Akamai. Level 3 and others do the Xbox releases, Limelight was doing a lot of Sony downloads last I checked and when iOS 8 is available, I expect a lot of it to be delivered by Apple themselves, with Akamai and maybe also Level 3. Many of these events, both live and downloads, push more than 1Tbps via a single CDN, let alone those that use dual vendors, so it’s going to be a very busy week on the web for CDNs.
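Figuring out which CDN is serving a property, as with the traceroutes above, usually comes down to spotting provider-owned domains in the hostname's CNAME chain (visible via `dig` or in traceroute hops). A rough sketch of that matching, with a suffix list that is illustrative and far from exhaustive:

```python
# Illustrative provider domain suffixes; real-world mappings change over time.
CDN_SUFFIXES = {
    "akamaiedge.net": "Akamai",
    "akamai.net": "Akamai",
    "llnwd.net": "Limelight",
    "footprint.net": "Level 3",
}

def identify_cdn(cname):
    """Guess the CDN from a hostname's CNAME target, the same clue that a
    dig or traceroute against a content provider's site reveals."""
    for suffix, provider in CDN_SUFFIXES.items():
        if cname == suffix or cname.endswith("." + suffix):
            return provider
    return "unknown"
```

For example, a hostname resolving through `e1234.dscb.akamaiedge.net` points at Akamai, while one ending in `llnwd.net` points at Limelight.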

For all the talk of paid interconnects being such a bad idea, or causing great harm to the Internet, none of what is going to take place this week would be possible if these paid connections between CDNs and ISPs were not in place. So complain all you want, but it’s why the Internet works the way it does and why hopefully, all will go smoothly this week.

I will be collecting as many traceroutes as I can from multiple regions during all of these events, so if you can do a traceroute from your location, please send it to me at