Volar Video Selling Stream Stitching & Video Platform Assets

Lexington, Kentucky-based Volar Video is selling its streaming software and platform, which has handled over 13,000 live streams and over 2 million viewers over the past six months. The company has asked me to make it public that they are now considering all strategic alternatives, including an asset sale, acqui-hire, or selling a majority stake. You can download an overview of Volar’s platform and technology to get more details on what they have to offer.

Volar’s major clients are primarily in the live sports vertical and include the Mountain West Conference and many other NCAA Division I, II, and III colleges and universities. Volar has also recently worked with Root Sports, Fox Sports Midwest, Time Warner Oceania, as well as Silver Chalice and Encompass Media. Volar was one of the first to figure out stream stitching when they demoed their mid-roll ad solution to me years ago. Volar’s platform functionality includes:

  • a cloud-based live streaming and VOD platform
  • VAST-compliant mid-roll ad insertion that works on desktop, mobile SDKs and iOS mobile web (see the stitching sketch after this list)
  • advanced multi-party ad inventory management functionality
  • streaming software for Mac and PC
  • real time analytics, SDKs and APIs
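
Stream stitching for mid-roll ads generally works by splicing ad segments into the viewer’s media playlist at an ad break. The sketch below is a generic illustration of that technique for HLS, not Volar’s actual implementation; the segment URIs and durations are hypothetical.

```typescript
// Minimal sketch of server-side mid-roll "stream stitching" for HLS.
// Generic illustration only; segment URIs and durations are hypothetical.

interface Segment {
  uri: string;
  duration: number;        // seconds
  discontinuity?: boolean; // marks a splice point (ad boundary)
}

// Splice ad segments into the content playlist after `breakAfter` segments.
// EXT-X-DISCONTINUITY at each splice point tells the player that timestamps
// and encoding parameters may change, which is what makes stitching seamless.
function stitchMidRoll(content: Segment[], ad: Segment[], breakAfter: number): Segment[] {
  const adBreak = ad.map((s, i) => ({ ...s, discontinuity: i === 0 }));
  const resume = content.slice(breakAfter).map((s, i) => ({ ...s, discontinuity: i === 0 }));
  return [...content.slice(0, breakAfter), ...adBreak, ...resume];
}

// Render a simplified media playlist (no encryption, no byte ranges).
function toPlaylist(segments: Segment[]): string {
  const target = Math.ceil(Math.max(...segments.map((s) => s.duration)));
  const lines = ["#EXTM3U", "#EXT-X-VERSION:3", `#EXT-X-TARGETDURATION:${target}`, "#EXT-X-MEDIA-SEQUENCE:0"];
  for (const s of segments) {
    if (s.discontinuity) lines.push("#EXT-X-DISCONTINUITY");
    lines.push(`#EXTINF:${s.duration.toFixed(3)},`, s.uri);
  }
  lines.push("#EXT-X-ENDLIST");
  return lines.join("\n");
}

// Hypothetical usage: a 30-second ad break inserted two segments into the content.
const content = [0, 1, 2, 3].map((i) => ({ uri: `content_${i}.ts`, duration: 6 }));
const adBreak = [0, 1, 2].map((i) => ({ uri: `ad_${i}.ts`, duration: 10 }));
console.log(toPlaylist(stitchMidRoll(content, adBreak, 2)));
```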

Volar has ten engineers who have worked together for two years building the software and platform. Anyone interested in talking to the company can email me and I’ll be happy to make an introduction.

FCC’s Proposed Internet Rules Change Little, No Real Impact On Interconnection or Choice

FCC Chairman Wheeler released a fact sheet today that outlined the new rules he is proposing for the Internet, and it falls far short of solving the main complaints we’ve heard about in the market for so long. Many think it’s a big win for consumers that the proposed rules will prohibit ISPs from blocking, throttling, or prioritizing content on their networks, yet to date, no ISP has been accused of doing this. It’s nice that these restrictions might become law going forward, but it doesn’t do anything to address the complaints about what takes place outside the last mile, or all the debate around consumers wanting more choices for broadband services.

In fairness, we haven’t seen the full proposal or all the details, but the fact is that one of the biggest complaints we read about is that consumers want more choice when it comes to Internet service providers. The proposed rules won’t require any last-mile unbundling, so those who think the rules will foster more ISP options will be sadly mistaken. Think of how many times we read about consumers contending with local monopolies for their broadband Internet service and wanting more choice. Isn’t that the number one complaint by consumers? These new rules do nothing to address that. Not that I think they should, but this proposal doesn’t unbundle the last mile and doesn’t regulate rates. So for those who call this a “win” for consumers, I don’t see it. There will be no new competition. The proposed new rules also allow ISPs to do “reasonable network management”, so those who wanted that off the table won’t be happy either.

When it comes to the topic of interconnection taking place outside of the last mile, which so far Netflix has been the only content owner to complain about, the proposed new rules won’t actually govern them. The little bit of language we have on the topic, so far, says that the “Commission would have authority to hear complaints and take appropriate enforcement action if necessary, if it determines the interconnection activities of ISPs are not just and reasonable.” That’s not a law. It’s simply a way for the FCC to hear any gripes and then try to figure out what to do with them. How does the FCC plan to define “just and reasonable”? Traditionally, “just and reasonable” is defined by reference to the “cost” of providing the service. As a practical matter, this has been accomplished through the use of tariffs and investigations into tariffs. I couldn’t find any prior case where the FCC has assessed whether a non-tariffed rate is just and reasonable.

Who or what will be the authority on what “just and reasonable” market rates are? Will these rates be compared to pricing from transit providers, third-party CDN providers or some other form of alternate distribution? And will the decision only be on cost, or on the quality of service? I find it interesting that so far, in this whole net neutrality debate, people are arguing over capacity and speed, but never bring up quality of service. Capacity means nothing without performance and a good user experience. Also, while this may sound silly, the FCC is going to have to define what they classify as an interconnection. The language makes reference to the “interconnection activities of ISPs”, but what about those who aren’t ISPs? If people truly want an “open Internet” and transparency, it’s not fair that Cogent can secretly prioritize packets and impact the consumer experience, but doesn’t fall under the same rules.

One article I read today said, “without specific rules, ISPs would be tempted to ban, slow down or seek payment from content providers.” Why would they be tempted to do that? They don’t get paid a lot of money from interconnect deals; just look at the revenue numbers Comcast made public ($40M-$60M in 2013). And by law, Comcast already isn’t allowed to block or throttle content due to conditions from their purchase of NBC. So for all the people acting like we have all kinds of blocking or throttling of content by ISPs, we don’t have a single example of it being done.

Again, why not draft a proposal that deals with the actual complaints of consumers, instead of perceived issues that no consumers are actually dealing with? And before anyone says this is what Netflix has been complaining about, it isn’t. Netflix has never once accused Comcast or any other ISP of blocking or throttling their content; their complaint is about wanting, and not getting for free, more capacity at interconnection points. Netflix’s CEO was quoted as saying, “it has no evidence or belief that its service is being throttled.” We need to stop using the term “throttling”, or implying that it’s happening to Netflix or to anyone else, until someone makes that claim and shows evidence of it happening. Implying it is taking place only fuels the fire and makes people debate non-facts, which does not help.

I read one post that said these proposed rules are a “big win for Netflix”, but in reality, that’s not the case. Netflix will have a hard time trying to convince the FCC that they are being mistreated when the interconnection deal they have with Comcast costs them less money than using transit providers and third-party CDNs, improves the video quality for consumers, and comes with an install SLA, packet loss SLA and latency SLA from Comcast. In Q2 of 2014 alone, Netflix paid third-party CDN provider Limelight Networks $5.4M to deliver a small percentage of their overall traffic. Clearly, if the FCC felt interconnect deals were a big enough problem, or that Netflix was truly getting treated unfairly, they would have proposed something much stronger than what is primarily a way to just “hear complaints.”

Another question I have from reading the proposed new rules is how the FCC is going to reclassify mobile broadband, when we have clear language protecting mobile broadband from Title II. I also can’t tell from the proposal if the FCC plans to reclassify retail broadband service only, or the services ISPs provide to edge providers as well. The bottom line is that the outline we have seen today doesn’t really address the issues and leaves us with a lot of unanswered questions. We need to see the full proposal to know the details and see the language that will be used, but this is just another step along the way of what is going to continue to be a very long debate on the topic of net neutrality. It brings no real clarity to the debate, and it still has to be voted on, pass any legal hurdles and be put into practice. That’s not happening anytime soon.

One final thought: the fact sheet says these new rules are intended to let consumers “access the legal content and applications that they choose online, without interference from their broadband network provider.” That’s funny, considering my broadband provider is never what prevents me from accessing content. It’s always the combination of the device, the OS platform and the closed, highly controlled ecosystems that run on these devices.

The Super Bowl Stream Wasn’t As Bad As Many In The Media Said It Was

I’ve read quite a few blog posts about NBC Sports’ live stream of the Super Bowl and it’s clear that the vast majority of the media don’t understand what the workflow for a live event looks like, the pieces that are involved and the various factors that determine the quality of the live stream. A post on DSLReports.com says the Super Bowl stream “Struggled Under Load”, yet provides no details of any kind to back up that claim. The fact is, capacity wasn’t an issue at all. [Update, Tuesday 10:58am: DSLReports.com has changed the headline of their post to no longer reference any kind of capacity issue.]

NBC Sports used third-party CDN provider Akamai to deliver the stream and had Level 3’s CDN in backup mode in case they got more traffic than expected, but never had to use them. Media members who complained about the stream didn’t provide any tech details of how it worked, how it was delivered or the companies involved, and didn’t speak to any of the third-party companies that were monitoring the stream in real time. They made no effort to learn what was really going on with the live stream or to speak to the companies responsible for it, which is just lazy reporting. Cedexis data shows Akamai’s availability did drop during the game, to 98.81% in the Northeast, but not significantly.

NBC Sports said that the live stream peaked at 1.3M simultaneous viewers, which isn’t a big number for Akamai. Six years ago, Akamai’s network peaked at 7.7M streams, 3.8M of which came from the Obama inauguration webcast. Akamai has plenty of capacity to handle the live stream of the Super Bowl and has done live events, including those for Apple, that make the Super Bowl look small in comparison. Slate Magazine’s post called the Super Bowl webcast a “disaster”, with the biggest complaint being that the live stream had a delay compared to cable TV. Clearly the author doesn’t understand how the Super Bowl stream worked, or he would realize that, based on the setup, the delay was unavoidable.

The video was encoded in the cloud using Microsoft’s Azure platform, which adds a delay. On top of that, using HLS adds an additional delay, and doing HLS over Akamai adds even more. Talk to any of Akamai’s largest live customers and they will tell you that the number one complaint about Akamai, when a live stream is using certain parameters, is the delay in delivering the stream. Akamai’s network requires a lot of buffering time for both HLS and RTMP, otherwise you can get audio drop-outs on bitrate switches. NBC Sports used both HDS (HTTP Dynamic Streaming) for desktop and HLS (HTTP Live Streaming) for devices. So before some members of the media start blaming NBC Sports for the delay, they should learn all the pieces in the live workflow and understand how it all works. Even twenty years later, there are limitations in the technology. I’ve simplified the workflow here; there were many more pieces involved in making the stream possible and many of them can add delay to the live stream.
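
To make the delay concrete, here is a rough back-of-the-envelope estimate of where the seconds go in a segmented live workflow like this one. All of the numbers are my own illustrative assumptions, not measured figures from NBC Sports, Azure or Akamai.

```typescript
// Back-of-the-envelope glass-to-glass latency for a segmented (HLS/HDS)
// live workflow. Every number here is an illustrative assumption, not a
// measured value from the Super Bowl stream.
const seconds = {
  acquisitionAndContribution: 5, // venue/truck to cloud ingest
  cloudEncode: 10,               // encode + package in the cloud
  segmentDuration: 6,            // a full segment must exist before it is listed
  cdnPropagation: 5,             // origin to edge cache fill
  playerBuffer: 3 * 6,           // players typically buffer ~3 segments before starting
};

const total = Object.values(seconds).reduce((a, b) => a + b, 0);
console.log(`Estimated delay behind the live feed: ~${total} seconds`);
// With these assumptions the stream trails the broadcast by roughly 40-45
// seconds, before even counting the cable/satellite path's own delay.
```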

For those that want more tech details on the encoding, the average bitrate for the stream was 2.5 Mbps, with an average viewing duration of 84.2 minutes. Also of note is that NBC Sports optimized their in-browser display at 2.2 Mbps for the target video size, with a max bitrate of 3.5 Mbps in full-screen mode.
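
A quick bit of arithmetic on those published averages shows how much data the stream pushed per viewer (this ignores manifests, audio-only renditions and retransmits).

```typescript
// Rough per-viewer data volume from the published averages:
// 2.5 Mbps average bitrate, 84.2 minutes average viewing time.
const avgBitrateMbps = 2.5;
const avgMinutes = 84.2;

const gigabytes = (avgBitrateMbps * 1_000_000 * avgMinutes * 60) / 8 / 1e9;
console.log(`~${gigabytes.toFixed(2)} GB delivered per average viewer`);
// ~1.58 GB per viewer; at the reported 1.3M simultaneous peak, that is a
// sustained egress on the order of 2.5 Mbps * 1.3M ≈ 3.25 Tbps.
```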

The Slate piece also goes on to say that “NBC was dealing with huge traffic for its Super Bowl stream” and that the “traffic would be tremendous.” Again, these are statements that want to imply there would be capacity issues, which simply wasn’t the case. A piece by Mashable called the stream “slow”, saying it was a “bit disconcerting for anyone who wants to keep up with up-to-the-second plays on social media”. Not all forms of content get delivered in real time at the same speed. If you want “up-to-the-second”, then the video stream is not for you. But it’s not the fault of the live stream, which has to get ingested, encoded, delivered and then played back with a player/app. Compare that workflow to a tweet; it’s not even remotely similar.

As for my experience with the Super Bowl stream, I did experience some problems with the actual video quality. I worked with NBC Sports’ tech team, giving them specs on my setup, and they looked up my IP and tracked me throughout the game, having me test various links and setups. While we still don’t know what my issue was, it only appeared when I was using Verizon and didn’t crop up when I used Optimum. So one thing people have to remember is that it’s not always the CDN’s fault. Many times, it’s something down at the ISP level. As an example, I was having a lot of issues with streaming YouTube, and Google looked into it and found there was a specific issue inside Verizon that was causing it.

During the live stream, NBC Sports was using multiple third-party quality measurement platforms, including those from Conviva and Cedexis. Conviva’s is in real time and can show everything from buffering times to failed stream requests. The media needs to learn more about these third-party platforms; as you’ll notice, they don’t know anything about them, nor do they ever seem to look at their data after a large event. Stop coming up with “theories” around capacity and dig into the real data. While NBC Sports isn’t going to give out all the data we want, any member of the media who has connections could have easily talked to some of these third-party companies and gotten info or guidance on what they saw and any impact it might have had on performance. For the majority of users who tuned into the live stream, it worked and worked well. There were some, like myself and others, who did experience intermittent problems, but we were the minority, and in many cases, problems down at the ISP and WiFi level cause quality issues with both live and on-demand video. Media members who considered the live webcast a failure because it wasn’t real-time, or because certain ads weren’t shown, should be focusing on the business aspects of the stream, not the technical ones.

One last thing. For all the people reporting that the Super Bowl stream was a “record”, it wasn’t. The raw logs are not verified by any third-party company, there are many different ways to count streams (simultaneous, unique simultaneous, etc.), and if you look at just the sports vertical, there have been events by ESPN, eSports and others that did more than 1.3M simultaneous streams. Quantity is important, but it’s not the single biggest piece of methodology that should be used to determine the success or failure of a live webcast. There is no such thing as “the largest” when it comes to live events, as many times numbers aren’t even put out after the event. Just look at Apple’s live streams: we don’t know their stream count, and on the days they do a live webcast, Akamai takes down the real-time web metrics chart that shows the live stream count on their network, just so no one knows how many streams Apple is doing.

If there is one thing the Super Bowl stream did reinforce, it’s that streaming video technology can’t replace traditional TV distribution for quality or scale. Yes, I know some will want to argue that point, but if you talk to those who are smarter than me and are building out these networks to deliver content, not only are there many technical limitations, there are just as many business ones as well.

Stream Optimization Vendors Make Big Claims About Reducing Bitrates, But Aren’t Educating The Market

There are a lot of product verticals within the streaming media industry, and one of the lesser known ones includes a small handful of vendors typically referred to as offering stream optimization technology. While many of them have very different solutions, the goal of all of them is the same: to reduce the size of video bitrates without reducing quality. Vendors in the market include A2Z Logix, Beamr, Cinova, EuclidIQ, eyeIO, Faroudja Enterprises, InterDigital, Sigala and QuickFire Networks (just acquired by Facebook). Some of these vendors would take issue with me listing them next to others they feel don’t compete with them, but amongst content owners, they are all thought of as offering ways to optimize video, even if many of them do it very differently. I don’t put them all in the same bucket, but many content owners do.

These compression technologies are already being applied to images, where companies like Yahoo and Netflix use solutions to make images load faster and save money. Over the past month, I’ve been looking deeper at some of the vendors in this space as it pertains to video compression, and I have had conversations with a dozen OTT providers, asking them what they think of the solutions currently on the market. What I’ve found is that there is a lot of confusion, mostly as a result of vendors not educating content owners and the industry on the technology, ROI, cost savings and impact on workflow, amongst a host of other subjects.

Visiting the websites of many of the vendors provides few details on use cases, case studies, target demographics, cost savings, technical white papers or customers. Instead, many companies simply highlight how many patents they have, make claims of compression savings, show nice generic workflow diagrams, use lots of marketing buzzwords and all say how good their quality is. With some vendors, you can’t even tell from their website if they offer a service, a technology, a platform or a device. There are no reports that I can find from third-party companies to back up the claims many of them make, which makes it hard for everyone to judge their solutions. I have yet to see any independent testing results from any vendor (except one) that compare their technology to others in the market, or even to traditional H.264/HEVC encoding.

As I haven’t used any of these solutions myself, I talked to some of the biggest OTT providers, who deliver some of the largest volumes of video on the web today. I won’t mention them by name since many gave me off-the-record comments or feedback about specific vendors, but all of the ones I spoke to have evaluated or looked at stream optimization solutions in the market, in detail. Many of them have evaluated vendors’ products in their lab, under NDA. What I heard from nearly all of the content owners is that many of them view “some” of these solutions in a bad light. There is a stigma about them because some vendors make claims that simply aren’t accurate when the product is tested. Many content owners are still skeptical of how companies can claim to reduce bitrates by 50%, yet still keep the same quality.

Also hurting this segment of the market was an article published a few years back that accused one of the vendors of selling “snake oil”. While that’s not a fair description of these services in the market as a whole, as one content owner said to me, “it helped create a stigma around the tech.” Yet some vendors are starting to show some signs of success and while this post isn’t about comparing technology from one provider to another, one or two vendors do seem to be rising above the noise, but it’s a challenge.

When it came to feedback from companies who have looked at stream optimization solutions, some told me that, “in real world testing, we didn’t see any compression decreases that we couldn’t reproduce with our own transcoding system.” Another major complaint by many is that the solutions “don’t run in real-time so it is a non-starter or way more expensive and latent for live encode.” Some vendors say they do offer real-time solutions, but content owners I spoke with said, “it didn’t work at scale” or that “the real-time functionality is only in beta.”

The following is a list of the most common feedback I got from OTT providers, about stream optimization solutions in the market. These don’t apply to every vendor, but there is a lot of crossover, especially when it comes to the impact on the video workflow:

  • lack of real time processing
  • adds more latency for live
  • it doesn’t neatly fit into a production workflow
  • forces the client device to work harder
  • inconsistent reduction in file size
  • bandwidth savings that don’t really exist
  • affects the battery life of mobile phones
  • makes the encode less desirable for encapsulated streaming
  • impacts other pieces of my workflow in a negative way
  • solution is simply too slow
  • didn’t see any compression decreases that we couldn’t reproduce with our own transcoding system
  • cost vs. ROI not clear, pricing too high

It should also be noted that one of the value propositions these vendors make, though not the only one, is the ability for content owners to save money on bandwidth. While that’s a good idea, the problem is that many content owners already expect to save on bandwidth over time simply by moving to HEVC. Content owners expect to see anywhere between 20%-40% compression savings once HEVC takes hold. So that alone will save them on bandwidth; however, HEVC adoption at critical mass is still years away. Vendors selling stream optimization products have to have more than just a bandwidth-savings ROI, but that’s most of what they all pitch today. That is a hard sell and one that’s only going to get harder.
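
To see why a bandwidth-only pitch is a hard sell, compare a vendor’s claimed reduction with what HEVC alone is expected to deliver. The traffic volume and CDN unit price below are hypothetical, chosen only to illustrate the shape of the math.

```typescript
// Why "we save you bandwidth" is a hard sell on its own: compare a vendor's
// claimed reduction with what content owners already expect from HEVC.
// Traffic volume and CDN unit price are hypothetical.
const monthlyDeliveryGB = 500_000; // assumed 500 TB/month
const pricePerGB = 0.02;           // assumed $0.02/GB CDN rate
const baseline = monthlyDeliveryGB * pricePerGB; // $10,000/month

const scenarios = [
  ["HEVC (low end)", 0.20],
  ["HEVC (high end)", 0.40],
  ["Typical optimizer claim", 0.50],
] as const;

for (const [label, reduction] of scenarios) {
  const saved = baseline * reduction;
  console.log(`${label}: saves $${saved.toFixed(0)} of $${baseline.toFixed(0)} per month`);
}
// If HEVC eventually delivers 20-40% "for free", a vendor pitching 50% is
// really selling the incremental 10-30 points, not the headline number.
```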

Another problem these vendors face is that while they are routinely talked about when the topic of 4K streaming comes up, along with the need to deliver better quality at lower bitrates, there is no definition of what qualifies as 4K streaming. Bitrates can always be compressed, but with what tradeoff? What is considered acceptable? There is no standard and no spec for 4K streaming, which is a big barrier to adoption and something that doesn’t help these vendors. One will tell you they can do 4K at 10 Mbps with no quality loss, another will say they can do it at 6 Mbps. But those are big differences. Are they both considered 4K? I don’t know the answer, no one does, since the industry hasn’t agreed on what 4K means for the web.

Even a vendor whose technology works has their work cut out for them. There is simply a lack of education in the market, and vendors treat their technology as top secret, which doesn’t help the market grow and stifles education. Also missing is how one technology compares to others, what is and isn’t pre-processing, and why you would or wouldn’t want one kind over the other.

Not to mention, no one knows what this stuff costs. Does it add 20% to a customer’s encoding costs but then reduce their delivery costs by twice that? What’s the total cost to their workflow? What size company can use these solutions? When does it make sense? How much traffic/content do you need to have? There are no studies out there, that I can find, with real numbers, real ROI metrics, savings calculators, etc. Most of these vendors don’t explain any of this on their websites, or say who their ideal customer is, which makes it hard for a content owner to know how big they need to be before it makes economic sense. This tech may very well work, be useful and have a good ROI, but to date, vendors have not shown, documented or proven that in the market with any real education.
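
Here is the kind of simple ROI calculator these vendors could publish but generally don’t. Every input below is a placeholder assumption; the point is the structure of the calculation, not the result.

```typescript
// The kind of ROI math vendors rarely publish. Every input is a placeholder.
interface RoiInputs {
  monthlyEncodingCost: number;  // current encoding spend, $/month
  monthlyDeliveryCost: number;  // current CDN/delivery spend, $/month
  encodingCostIncrease: number; // e.g. the optimizer adds 20% to encoding
  bitrateReduction: number;     // e.g. the vendor claims 30% smaller streams
  licenseFee: number;           // vendor's monthly fee, $/month
}

function monthlyNetSavings(x: RoiInputs): number {
  const extraEncoding = x.monthlyEncodingCost * x.encodingCostIncrease;
  const deliverySavings = x.monthlyDeliveryCost * x.bitrateReduction;
  return deliverySavings - extraEncoding - x.licenseFee;
}

// Hypothetical mid-size OTT service:
const net = monthlyNetSavings({
  monthlyEncodingCost: 5_000,
  monthlyDeliveryCost: 20_000,
  encodingCostIncrease: 0.20,
  bitrateReduction: 0.30,
  licenseFee: 2_500,
});
console.log(`Net monthly impact: $${net.toFixed(0)}`);
// Delivery savings 20,000*0.30 = 6,000; extra encoding 5,000*0.20 = 1,000;
// minus a 2,500 license fee leaves 2,500/month. The smaller the delivery
// bill, the faster this goes negative, which is why "who is the ideal
// customer" matters so much.
```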

To be fair, some vendors have announced a few major customers, but we don’t know how these content owners are using the technology or how it’s been implemented. And the customer announcements are sparse. eyeIO announced Netflix as a customer in February of 2012, but I don’t know if they are still a customer or even using the technology anymore. Also, in the three years since eyeIO’s press release about Netflix, they have only announced one other customer, in 2013, and none in 2014. Cinova, InterDigital, Faroudja Enterprises, Sigala and A2Z Logix haven’t announced a single content owner customer via press release, that I can find. I know some of them do have a customer here or there, but nothing announced to the market. EuclidIQ has the best website in terms of the info they provide, but again, no customer details. Beamr has publicly announced M-GO and Crackle (Sony Pictures Entertainment) as customers, and I hear they have some others that have yet to be announced, but like the other vendors, their website provides almost no details.

As one content owner said to me about stream optimization technology, “In short its got just as many drawbacks as advantages best case.” That’s not good for a niche segment of the industry that is trying to grow and gain adoption. The lack of education alone could kill this in the market before it even has a chance to grow.

iPhone 6 Display Making Image Delivery Harder For CDNs, Forcing Shift To Responsive Web Design

The iPhone 6 and the iPhone 6+ hit the device market like a freight train. The dynamic duo shipped 10 million units in the first weekend alone, breaking all device release records. Some analysts expect 60-70 million iPhone 6 and 6+ units to ship in the first year after release. Lots of people have written about how these two new phones are forcing web publishers to up their game in preparation for these richer displays. What they may not have realized is that the iPhone 6 and iPhone 6+ also present a challenging problem for content delivery networks.

Over the past few months, multiple content owners have told me how the iPhone 6 and 6+ have made it harder for some CDNs to deliver their content to these devices with good quality. Apple’s new phones challenge CDNs with a perfect storm of significantly larger images and a non-negotiable requirement to deliver images generated on the fly, even over wireless phone networks. This is territory where CDNs have historically struggled, even with the lighter requirements of earlier-generation smartphones with smaller displays.

The new iPhones are the final nail in the coffin for old-style m.dot websites and a forcing factor for the shift to Responsive Web Design sites. With the arrival of these two devices, websites must support nearly a dozen Apple device screen sizes. Add this to the assortment of popular Android display sizes and image management becomes a massive headache. On m.dot sites, this creates significant pain because each of the image sizes must be cached on the edge of the network for a CDN to deliver it properly.

There are software solutions to automatically generate and manage multiple image sizes as soon as a user hits a website, but those solutions mean that the customer’s IT organization is incurring significant technical overhead and creating a single point of failure for delivery. This may not have been as big a deal a few years ago, when sites did not change their larger images often, but today most travel, ecommerce and media sites update images as often as several times per day.

Application delivery provider Instart Logic gave me an example of such a customer, Commune Hotels, a hip hotel chain that owns brands like Joie De Vivre and Thomson, which is baking in high-definition user photos from Instagram and updating them constantly. This type of behavior means that flushing a CDN’s cache in a timely fashion can become a logistical nightmare all by itself. As a result, almost every web performance engineer and senior executive overseeing website performance that I am speaking with is in some stage of transition to a Responsive Web Design site. This, too, presents big problems, namely for the CDNs. Responsive Web Design sites rely on scalar image resizing on the edge of the network. That means images in a site’s code are expressed as ratios rather than fixed sizes.

With Responsive Web Design, images must be generated on-the-fly to fit the specific device type and display size as called for by any user. Responsive Web Design also forces a site owner to generate multiple images in anticipation of user behaviors, like resizing browser windows, zooming in on multi-touch, etc. Some CDNs are wholly reliant on cached images and start to show significant performance degradation when they are forced to rely on images generated on the fly.

A critical compounding factor is the higher pixel ratio of the new devices. The iPhone 6+ goes further than any previous Apple mobile product with a device pixel ratio of 3 (the iPhone 6 stays at 2). This means that website publishers will need to push images to that device that are three times the display’s layout size in each dimension. This is in anticipation of user behavior on multi-touch devices: zooming, pinching, moving in and out. That behavior is common on smartphones and is a critical part of the user experience on image-rich sites in ecommerce, travel and media. (See Akamai’s video on the subject.)
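
To put numbers on it: the source image a device needs is its CSS (layout) width multiplied by its device pixel ratio. A quick sketch using Apple’s published layout sizes:

```typescript
// Required source image width = CSS (layout) width x device pixel ratio.
// Layout widths below are Apple's published point sizes for each device.
const devices = [
  { name: "iPhone 5s",     cssWidth: 320, dpr: 2 },
  { name: "iPhone 6",      cssWidth: 375, dpr: 2 },
  { name: "iPhone 6 Plus", cssWidth: 414, dpr: 3 },
];

for (const d of devices) {
  const px = d.cssWidth * d.dpr;
  console.log(`${d.name}: a full-width image needs to be ${px}px wide`);
}
// iPhone 5s: 640px, iPhone 6: 750px, iPhone 6 Plus: 1242px. Since pixel
// count scales with the square of the width, the 6 Plus needs roughly 2.7x
// the pixels of the same full-width image on the iPhone 6, and file size
// grows accordingly.
```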

This means that not only will CDNs need to handle on-the-fly image resizing, but they will also have to quickly push out to their caches images that are considerably larger than ever before. And then the CDNs are at the mercy of the “last mile” of the cell phone networks, where network performance is highly variable depending on location, time of day, micro-geography, user density and network connection type (LTE, etc.). What’s more, web publishers pushing out these larger images via CDNs can experience significantly higher bandwidth costs. Over 60% of website payloads are already images; the additional bump could add mightily to what large or image-heavy sites are paying each month to use CDNs.

By extension, methods to maintain image quality or serve larger images with less bandwidth could have a tremendous impact. Instart Logic, for example, has some new and intriguing ways to categorize the content of images and then optimize the first paint of an image to maintain quality but slash the data required to display it. The algorithm can, for example, figure out whether a photo is of a face or of a beach. If it’s of a beach, then far less data is required to paint an image without noticeable quality loss. This can save 30% to 70% on initial paints. These types of software-defined solutions that rely on smarter ways to pass data over the last mile will trump the old brute-force methods of CDNs, PoPs and fiber to the near-edge (because the edge now is always the cell tower).
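
As a crude sketch of the general idea (and only the idea; this is not Instart Logic’s actual algorithm), an image’s category can drive how aggressively its first paint is compressed:

```typescript
// Crude illustration of category-driven first-paint compression. This is
// NOT Instart Logic's algorithm, just the general idea: low-detail images
// (sky, beach, flat backgrounds) tolerate far more compression on the
// initial paint than detail-heavy ones (faces, text).
type ImageCategory = "low-detail" | "face-or-text" | "general";

function firstPaintJpegQuality(category: ImageCategory): number {
  switch (category) {
    case "low-detail":   return 35; // aggressive: artifacts hide in flat areas
    case "face-or-text": return 75; // conservative: viewers notice artifacts
    default:             return 55; // "general"
  }
}

// Hypothetical sizes for a 200 KB original; the full-quality image can still
// be fetched afterwards to replace the first paint.
const originalKB = 200;
const approxSavings = { "low-detail": 0.7, "face-or-text": 0.3, general: 0.5 };
for (const c of ["low-detail", "face-or-text", "general"] as ImageCategory[]) {
  const kb = originalKB * (1 - approxSavings[c]);
  console.log(`${c}: first paint at quality ${firstPaintJpegQuality(c)} ≈ ${kb} KB`);
}
```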

Another untapped area that will become important is the power of browser caches on the devices. Newer devices and new HTML5 browsers have multiple caches, some higher performance than others by design. So savvy web performance solution providers will need to tap into those browser caches, identify the content that is likely to be persistent on a site (header images, for example) and push it into the browser for fast recall. This will allow pages to appear to load much faster even if the site is waiting for other components to load.
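
One way a web performance provider can do this today is with a service worker that pre-caches a site’s persistent assets and answers requests for them cache-first. A minimal sketch, with a hypothetical asset list:

```typescript
// sw.ts - minimal service-worker sketch that keeps persistent assets
// (header images, logos) in the browser's Cache Storage and serves them
// cache-first. The asset list is hypothetical. The page registers it with
// navigator.serviceWorker.register("/sw.js").
/// <reference lib="webworker" />

const CACHE = "persistent-assets-v1";
const PERSISTENT_ASSETS = ["/images/header.jpg", "/images/logo.svg"];

self.addEventListener("install", (event) => {
  // Pre-cache the assets that rarely change so they are instantly available.
  (event as ExtendableEvent).waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(PERSISTENT_ASSETS))
  );
});

self.addEventListener("fetch", (event) => {
  // Cache-first: answer from the local cache when possible, otherwise
  // fall back to the network.
  const e = event as FetchEvent;
  e.respondWith(caches.match(e.request).then((hit) => hit ?? fetch(e.request)));
});
```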

Ultimately, CDNs will either need to figure out a way to deliver much bigger images efficiently over the last mile or they will struggle with serious performance issues on the iPhone 6, iPhone 6+ and other new devices coming to market. Their alternative will be deploying more capacity on the edge of the network, which is by and large not a cost-effective strategy.

I’d love to hear from other content owners in the comments section below on how they are handling the delivery of content to the iPhone 6 and 6+.

Image credit: myclever