WSJ Report Inaccurate: Content Owners Not Asking ISPs For “Separate Lanes”

Yesterday, a story in the Wall Street Journal created a stir by implying that HBO, Sony and Showtime were asking ISPs for their content to be given “special treatment” by delivering it via a “separate lane” within the ISP’s network. After speaking to multiple ISPs and some of the content owners mentioned in the story, I can say the WSJ post is inaccurate: they tell me they don’t expect any ISP to treat their content differently from anyone else’s.

Those I spoke with were confused as to what exactly the WSJ is implying when terms like “special treatment” are used without any definition of what is “special” about the treatment. There is also no agreed-upon definition of what a “managed service” is, and the article doesn’t detail how it defines one. The article also references a “separate lane” within the ISP’s network, but there is only one lane from the Internet into your house. Again, lots of buzzwords, no definitions.

The article says the reason the content owners would want to do this is to “move them away from the congestion of the Internet.” The problem with this idea is that neither HBO, Sony nor Showtime owns its own CDN. They rely on third-party CDNs like Akamai, Limelight and Level 3 to deliver their content, and these CDNs already have their servers inside ISP networks, or connected directly to them via interconnection deals. Avoiding congestion is the main value of using a service-based CDN, and HBO and others are already getting it. In fact, HBO has been doing this with Verizon since 2010, allowing Verizon to cache HBO’s content inside Verizon’s network. But that content is not “prioritized” or given “special treatment” of any kind inside the last mile.

The article also says that media companies feel that the “last mile of public Internet pipe, as it exists today, won’t be able to handle the surge in bandwidth use for all the online-video services.” The problem with that argument is that the congestion we see on the Internet isn’t taking place in the “last mile”; it’s taking place at network access points outside the last mile. To prove that, just look at the latest Measuring Broadband America report by the FCC, which measures ISPs’ advertised speeds versus delivered speeds. The data shows very little congestion in the actual last mile. So the WSJ’s argument for why HBO and other content owners would want to do this doesn’t make sense and doesn’t take into account the technical details of how it all works.

The WSJ article waits until halfway through the piece to mention that no ISP has actually agreed to whatever it is that the WSJ is suggesting content owners want. The article says that Comcast “wasn’t willing to do anything for any one content provider that it couldn’t offer to every other company.” So the WSJ is saying that content owners asked for something that ISPs said no to. But the piece then goes out of its way to make it sound like this is a potential problem, ties in the topic of Net Neutrality, but never defines what exactly is being proposed. What does “special treatment” mean? Are they implying the “prioritization” of packets? We simply don’t know, as they use high-level terms without any definition of how they are applying them.

Another argument the WSJ makes for why content owners would want this is that some content owners don’t want their service to count against the ISP’s bandwidth cap. The problem with that argument is that you don’t need a “managed service” to make that happen. Netflix recently struck deals in Australia where its content does not count against the ISP’s cap, with no “managed services” involved.

The WSJ also says, “media companies say the costs of guaranteeing problem-free streaming for users are rising.” What it doesn’t say is for whom those costs are rising. The content owners? The ISPs? The consumer? It sounds like the claim is that the cost for content owners to deliver video is increasing, but in fact, it’s the opposite. Costs to deliver video via third-party CDNs have fallen at least 15% each year since 2008. (Source: one, two) Also, there is no way to “guarantee” problem-free streaming no matter how much money you spend, so that notion is false. CDNs offer SLAs, but they don’t “guarantee” anything outside their network once traffic hits the last mile. And ISPs only guarantee customers’ access out of their last mile, which is done on a “best effort” basis. For the WSJ to imply otherwise is inaccurate.
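To put that decline in perspective, a minimal back-of-the-envelope calculation (the starting price is a normalized, hypothetical value; only the ~15%-per-year figure comes from the article) shows how much compounding a 15% annual drop adds up to between 2008 and 2015:

```python
# Illustrative only: compound effect of a 15% annual CDN price decline.
start_price = 1.0        # normalized cost per GB in 2008 (hypothetical)
annual_decline = 0.15    # "at least 15% each year" per the article
years = 2015 - 2008      # 7 years

price = start_price * (1 - annual_decline) ** years
print(f"Relative cost after {years} years: {price:.2f}")  # about 0.32
```

In other words, even at the conservative 15% figure, per-unit delivery costs would be roughly a third of their 2008 level, the opposite of “rising.”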

ISPs I spoke to made it clear that they are not in discussions with OTT providers to manage their traffic differently from other content owners or provide them with special treatment of any kind. What they think the WSJ might be confusing is the idea of caching content inside the last mile, but again, that doesn’t come with “special treatment” or prioritization of any kind. The WSJ story uses a lot of generic, undefined words that sound very scary, but when you look at the details rationally, you can see that it simply created controversy where none exists.

Video Platform Provider Voped Looking To Sell Company, OTT Platform Available

There continues to be a shakeout within the tier-2 video platform space (see Volar Video Selling Stream Stitching & Video Platform Assets), with the latest coming from Voped. I recently heard from Voped President and sole investor Mark Serrano, who tells me that he has decided to offer the platform for acquisition.

Mark tells me the company is already in preliminary discussions with a couple of large companies, but he also wanted to put the word out about its availability, considering what’s happening in the space and the technology jump-start his platform can offer. Voped offers an end-to-end solution to manage, encode, secure, deliver, and monetize video globally on the web, mobile, and other connected devices. So for the right company, buying versus building can offer the advantage of time to market and the extensive experience of the team that built the platform.

Mark sees an advantage to the small size of his team (four original team members; the parent company provides numerous support services separately), in that it will make for an easy transition to bring the technology under a new banner. He says the company has a very efficient turnkey offering and has built it at a fraction of the cost compared to what the large platforms have invested. They have a lot of experience with custom development, from features to larger integrations – such as with Widevine DRM, payment gateways, a turnkey website solution, and custom user interfaces.

For information on Voped’s technology highlights, you can check out this PDF deck, and those interested can contact Mark Serrano directly.

Free Book Download: Hands-On Guide To Webcasting Production

Webcasting guru Steve Mack and I wrote a webcasting production book entitled “Hands-On Guide To Webcasting” (Amazon), which we’re now giving away as a free PDF download. You might notice that the book was published in 2005, and since that time webcasting has evolved into the mainstream application it is today. But some of the best practices regarding encoding, connectivity, and audio and video production techniques have never changed. We felt the book could still be a valuable resource to many, and we wanted to make it available to everyone, with the book’s site now redirecting to this post.

This book was one of eight books in my series that, combined, have sold more than 25,000 copies, with the webcasting book being the most popular. So we’re happy to have gotten the rights back from the publisher to be able to share it with everyone. The help email included in the book still works, so those with questions can still reach out to us, and we’ll try to answer any follow-up questions. You may re-purpose content from the book as you like, as long as you don’t charge for it and you credit the source with a link back. Here’s a quick breakdown of the chapters:

  • Chapter 1 is a Quick Start, which shows you just how simple webcasting can be. If you want to start webcasting immediately, start here.
  • Chapters 2 and 3 provide some background about streaming media and digital audio and video.
  • Chapters 4 and 5 are focused on the business of webcasting. These chapters discuss the legal intricacies of a webcast, along with expected costs and revenues.
  • Chapters 6 through 8 deal with webcast production practice: planning, equipment, crew requirements, connectivity, and audio and video production techniques.
  • Chapters 9 and 10 cover encoding and authoring best practices. This section also covers how to author simple metafiles and HTML pages with embedded players and how to ensure that the method you use scales properly during large events.
  • Chapter 11 is concerned with distribution. This section discusses how to plan and implement a redundant server infrastructure, and how to estimate what your infrastructure needs are.
  • Chapter 12 highlights a number of case studies, both successful and not so successful. These case studies provide you with some real-life examples of how webcasts are planned and executed, how they were justified, what went right, and possibly more important, what went wrong.

I’ll also be giving away my business book in the coming days.

The Impact Of HTTPS On Caching Deployments In Operator Networks

When Google decided in 2013 to move all of its properties and data, including YouTube, to HTTPS delivery, many asked what impact this would have on open caching deployments inside operator networks. Some have suggested that HTTPS delivery is becoming a trend, but based on what we have heard from other content owners, and from talking to last-mile providers, I don’t expect this to be a broader industry trend in the long run.

In many cases, we can use the publicly stated plans of large streaming services like Netflix as a proxy for the outlook of the industry as a whole. In short, the decision to stream all content via HTTPS is an expensive one, and the business goals of long-form video streaming services like Netflix, Amazon, ESPN, and Hulu can be met through more efficient and far less costly streaming infrastructure and best practices. To this point, Netflix publicly stated it would not implement SSL, given its assessment that “costs over time would be in the $10’s to $100’s of millions per year” to fully encrypt all its streaming traffic. [Source: one, two]

Indeed, we know that content providers worldwide have adopted best practices to manage content security and consumer privacy for streaming media. Through the use of DRM to protect content rights, and URL obfuscation combined with control-plane encryption to secure consumer privacy, content providers can meet their obligations to both content rights owners and consumers. These streaming media best practices also support the deployment of open caching solutions in operator networks to optimize online video for both network utilization and Quality of Experience (QoE). Going forward, content providers will continue to rely on these best practices to scale their streaming offerings worldwide, and the majority won’t move to HTTPS delivery.
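One common form of the URL obfuscation mentioned above is a signed, expiring URL: the origin and the CDN share a secret, and the edge rejects any request whose token doesn’t validate. The sketch below is illustrative, not any particular CDN’s scheme; the secret, paths, and parameter names are all hypothetical.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"shared-secret-with-cdn"  # hypothetical key shared between origin and CDN

def sign_url(path: str, ttl: int = 300) -> str:
    """Build an expiring, tokenized URL (one common obfuscation scheme)."""
    expires = int(time.time()) + ttl
    token = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'token': token})}"

def verify(path: str, expires: int, token: str) -> bool:
    """Edge-side check: the token matches and the link hasn't expired."""
    expected = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token) and time.time() < expires
```

Because only the URL token and control plane are protected, the media payload itself stays cacheable by open caching nodes, which is the point being made here, unlike full HTTPS delivery of the data plane.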

There will be significant and long-term value in the deployment of open caching as a critical part of the overall open architecture for streaming video. Operators can invest in open caching platforms with confidence, knowing that their investment will continue to deliver value in the form of network cost savings and improved QoE over the long run.

In just a few instances, as seems to be the case with YouTube, some content providers may take the extreme and costly step of encrypting both control and data plane traffic for the sake of consumer privacy. Full SSL encryption is generally considered to be cost prohibitive and few, if any, other content providers can afford to implement such a model. However, even in the case of fully encrypted traffic, it’s a safe bet to expect that content providers will continue to work collaboratively with caching technology providers to support traffic optimization and open caching in last mile networks.

Limelight Launches New DDoS Solution & Research Findings About The Security Market

DDoS and other cyber attacks are clearly on the rise. According to Akamai’s recent State of the Internet Report, DDoS attacks rose 90% between 2013 and 2014. And not only is the number of attacks rising, but the volume of those attacks is growing as well. Numbers from Radware’s 2014-2015 Global Application and Network Security Report indicate that 29% of attacks are over 1Gbps in size. It’s probably safe to say that attack volumes and frequency will only continue to increase, especially as companies continue to rely on the Internet to conduct their business.

Many organizations already recognize the need for security. According to recent research by Limelight Networks, only 8% of surveyed executives indicated that they weren’t using some sort of security for the delivery of their digital content. What’s more, 76% indicated that the delivery of digital content is “extremely important” to their business.

So what are organizations doing today to mitigate potential attacks that might interfere with their ability to deliver digital content? For many, it’s on-premises equipment (CPE). Of those surveyed in Limelight’s research, 31% are handling security themselves. Others are employing a hybrid approach, using some CPE combined with cloud-based services. But there are a variety of problems with both of these approaches (pure CPE and CPE plus cloud). First, using any kind of CPE has both CAPEX and OPEX requirements. You not only need to purchase the hardware (redundantly, of course) but you need people to manage, update, upgrade, and operate it. Second, you need excess bandwidth (transit) to absorb an attack while also handling “good” traffic. Finally, combining CPE with cloud services adds significant complexity to your content delivery architecture.

What’s the alternative? CDN-based security. More than half (53%) of respondents in Limelight’s research plan to rely on their CDN provider to handle content delivery security concerns in the future. And for many customers, it makes total sense for several reasons:

  • Upstream—if an organization is already using a CDN provider to deliver its digital content, detection and mitigation of an attack can happen at the network edge, potentially thousands of miles from origin, sparing the organization’s network from any fallout or impact. When combined with scrubbing, only good traffic is returned to the origin, preventing the organization’s bandwidth from being flooded with bad traffic.
  • Absorption—as a distributed network, most CDNs have thousands of servers against which they can spread out an attack, even preventing Layer 3 and Layer 4 attacks (two common DDoS vectors) from ever reaching the origin.
  • Resiliency—with those thousands of servers and terabits of egress capacity, the CDN quickly returns to normal operations in the wake of volumetric DDoS attacks. Even while under duress, the CDN can still continue to provide accelerated content delivery services.

Last week, Limelight announced its CDN-based security offering—DDoS Attack Interceptor. The solution, integrated directly with Limelight’s content delivery services, provides proactive detection with mitigation technology in the cloud, protecting customers against the downtime, loss of business, and brand-reputation impact associated with DDoS attacks. The solution is virtually transparent to customers and, at a high level, works the following way:

  • Prior to an attack, Limelight’s detection technology is constantly fingerprinting a customer’s traffic to learn what “good” traffic looks like. This fingerprint is sent continuously to “off-net” scrubbing centers. According to Limelight, the scrubbing centers are in different data centers and do not share bandwidth with Limelight’s delivery POPs so that the attack traffic does not share resources with the good, or clean, traffic
  • An attack presents itself against a target protected by Limelight
  • The Limelight CDN begins to absorb most of the attack while, at the same time, proactive monitoring detects the DDoS attack and notification alarms are raised in the network operations center
  • The customer is notified that they are under attack. If the attack is small enough and the customer has enough bandwidth to handle both good and bad traffic, they can opt to just let the CDN do what it does best. But if they don’t want to chance that the attack volume will increase, or if they don’t have the resources to handle it, they can opt to have the traffic scrubbed
  • When scrubbing is enacted, traffic is rerouted to the off-net scrubbing centers
  • The scrubbing centers already have a very detailed fingerprint of good traffic, so they may immediately begin aggressively mitigating the attack without having to be configured manually and without a lengthy “learning” period. The scrubbing centers return the clean traffic directly to Limelight’s CDN for delivery as usual using dedicated network interconnects for increased performance.

Limelight’s detection system constantly monitors for malicious traffic. However, since this monitoring is not happening in-line, Limelight claims it has no performance impact on a customer’s traffic. The detection covers the broadest range of DDoS attacks—both infrastructure as well as application layer attacks. According to Limelight, their solution can prevent certain zero day attacks using “behavior-based” techniques that compare measured baselines of both volume and patterns to more intelligently differentiate good traffic from bad.
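The general idea behind this kind of “behavior-based” detection can be sketched very simply: keep a rolling statistical baseline of normal traffic and flag samples that deviate far from it. This is a toy illustration of the concept, not Limelight’s actual algorithm; the window size and threshold are arbitrary assumptions.

```python
from collections import deque
from statistics import mean, stdev

class TrafficBaseline:
    """Toy sketch of behavior-based detection: compare current request
    volume against a rolling baseline and flag large deviations.
    Window size and threshold are illustrative, not a vendor's values."""

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.samples = deque(maxlen=window)  # recent requests/sec samples
        self.threshold = threshold           # std deviations above mean to alarm

    def observe(self, reqs_per_sec: float) -> bool:
        """Record a sample; return True if it looks like an attack spike."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (reqs_per_sec - mu) / sigma > self.threshold:
                anomalous = True
        if not anomalous:
            # only "good" traffic updates the fingerprint, so an attack
            # can't gradually poison the baseline
            self.samples.append(reqs_per_sec)
        return anomalous
```

Real systems fingerprint far more than volume (protocol mix, geographic distribution, request patterns), but the principle is the same: a pre-learned model of “good” lets mitigation start immediately instead of after a manual learning period.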

It’s clear from the research that not only will DDoS attacks continue to rise (both in scope and scale) but that executives are worried about how to mitigate them. When the result can be lost revenue, everyone starts to pay attention. And because the CDN as a cloud-based security solution provides a number of benefits over CPE or hybrid architectures, it’s no wonder that the major CDNs (Level 3 and EdgeCast by Verizon were the most recent before Limelight) have all added the service to their portfolios. It’s good to see Limelight moving up the stack with its product portfolio and offering more value-added services, like security, to help diversify its revenue away from purely storage and bit delivery. As DDoS and other attacks continue to grow in size and sophistication, it will be interesting to see how these services evolve in an otherwise crowded security market with many different approaches and solutions to the DDoS problem.