How To Quantify The Value Of Your CDN Services

As mobile applications become more sophisticated, new congestion points have emerged, giving rise to a number of specialized solutions to address them. The primary solution for working around Internet congestion and slowdowns has long been the edge delivery and caching provided by content delivery networks. But those tactics have become commoditized, with baseline asset delivery performance now table stakes delivered as a service. As a result, vendors have been working hard to offer true performance solutions beyond storage, large software downloads and streaming video delivery.

Over the past few years, the CDN market has spawned a number of specialty solutions to overcome specific challenges in the form of video streaming, web security, and dynamic applications. It has been well documented that web and mobile application performance is critical for e-commerce companies to achieve maximum transaction conversions. In today’s e-commerce landscape, where even milliseconds of latency can impact business performance, high CDN performance isn’t a nice-to-have; it’s a must-have. But the tradeoffs have led to a polarizing effect between business units and the IT teams that support them. Modern marketers and e-commerce practitioners focus on engaging users with third-party content in the form of social media integration, localized reviews, trust icons and more, all of which need to perform flawlessly across a range of devices and form factors to keep users focused, engaged and loyal. The legacy one-size-fits-all attitude toward CDNs has become outdated as businesses seek out best-of-breed solutions to keep them competitive and drive top-line growth. This is one of the main reasons why many customers have a multi-CDN strategy, where they might use one CDN specifically for video streaming and another for mobile content acceleration.

One of the primary challenges in all of this is in arriving at measurable proof of the business impact. Historically, it has been extremely hard for e-retailers to quickly analyze the effectiveness of the solutions they’ve put in place to help drive web performance. Enterprise IT departments often find it difficult, if not impossible, to prove the benefit that their efforts have on customer satisfaction and top-line growth because analytic tools have historically been siloed by business specialties – IT has Application Performance Management (APM) tools and the business units have business analytics solutions. Whereas marketing and e-commerce teams have a variety of A/B testing solutions at their disposal, the IT team often struggles to show measurable business improvements.

Last week, adaptive CDN vendor Yottaa unveiled a new A/B testing methodology called ValidateIT that enables enterprises to easily and quickly demonstrate business value from their CDNs and other web performance optimization investments. Yottaa developed the methodology in 2013 and has been using it successfully with many of its customers since then. Through ValidateIT, enterprises can predictably and accurately split traffic in real time, allowing them to verify the immediate and long-term business benefits of optimizing their web applications. As the first vendor in this market that I know of to offer this type of methodology, Yottaa is enabling enterprises to make an informed and confident buying decision by demonstrating the business value of web application optimization.

But Yottaa and its customers are not the only ones to have “validated” ValidateIT; the methodology has also earned a certification from Iterate Studio, a company that specializes in bringing business-changing technologies to large enterprises. Iterate Studio curates, validates and combines differentiated technologies that have repeatedly delivered positive and verifiable business impact across a broad set of metrics. Working together with customers, Yottaa applies the ValidateIT methodology to split, instrument and measure web traffic using trusted third-party business analytics tools. The important aspects of the methodology include:

  • Control over the flow of visitor traffic. Yottaa typically splits traffic 50/50 in proof-of-concept scenarios to highlight the benefit its technology has over an existing solution (a minimal sketch of this kind of split follows this list). In head-to-head “bake-off” scenarios, Yottaa can split the traffic into thirds or more, depending upon the competition.
  • Conducting a live, simultaneous A/B test. Online businesses frequently say that it’s impossible to accurately compare two different time periods to one another because of the variables that would impact the results. Campaigns, seasonality, breaking news and events, and any number of competing factors can influence visitor behavior. So Yottaa ensures that ValidateIT highlights the business-impacting results of its solution in real time by randomly sending visitors either to the incumbent solution or to the Yottaa (and possibly other competing vendors’) optimized solution and then measuring the results. This eliminates objections that content or campaigns, rather than performance, drove the results, since all things are equal except the web performance optimization techniques applied to each visitor session.
  • Leverage in-place third-party analytics solutions. IT vendors have attempted to bring proprietary business analytics to market, but Yottaa felt it was important to lean on the business analytics solutions companies already use to ensure a credible test and validation. Plus, by using existing business analytics, marketing, e-commerce and IT leaders can leverage any existing custom metrics, analysis methodologies, and reports to drill-down into the details.
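
To make the first two bullets concrete, here is a minimal sketch of how a deterministic 50/50 visitor split can work. This is not Yottaa’s implementation; the hashing scheme, variant names and visitor ID are all assumptions for illustration.

```typescript
import { createHash } from "crypto";

// Hypothetical variants: the incumbent delivery path and the optimized path under test.
type Variant = "incumbent" | "optimized";

interface SplitConfig {
  optimizedShare: number; // fraction of traffic sent to the optimized variant (0.5 = 50/50)
}

// Deterministically assign a visitor to a variant from a stable visitor ID (e.g. a
// first-party cookie), so the same visitor always sees the same variant and both
// groups can be measured live, side by side, rather than across different time periods.
function assignVariant(visitorId: string, config: SplitConfig): Variant {
  const digest = createHash("sha256").update(visitorId).digest();
  const bucket = digest.readUInt32BE(0) / 0x100000000; // map the hash to [0, 1)
  return bucket < config.optimizedShare ? "optimized" : "incumbent";
}

// Example: a 50/50 proof-of-concept split; a three-way bake-off would add more thresholds.
const variant = assignVariant("visitor-cookie-1234", { optimizedShare: 0.5 });
console.log(`Route this session to the ${variant} delivery path`);
```

Because the assignment is deterministic and happens in real time, the two groups see identical content and campaigns, and the results can be read out of whatever third-party analytics tool is already in place, which is the point of the third bullet.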

The most legitimate use case for Yottaa’s solution is that you don’t know whether your one-size-fits-all CDN solution is right for you or whether you need a specialty CDN until you actually measure, evaluate and analyze the results. That’s the reason for and beauty of ValidateIT and why the company offers it at no cost. It’s free as part of Yottaa’s solution validation process because the company wants 100% of companies to better understand which solutions in the market truly work versus ones that don’t. It’s a nice tool to arm enterprise buyers with, showing them the real business benefit instead of relying on a blue-chip logo.

The Code Problem for Web Applications & How Instart Logic Is Using Machine Learning To Fix It

The adoption of mobile devices with powerful CPUs and full HTML5 browsers is enabling a new wave of increasingly sophisticated applications. Responsive/adaptive sites and single-page applications are two obvious examples. But the increased sophistication is also creating new performance bottlenecks along the application delivery path. While the industry can continue to eke out incremental performance gains from network-level optimizations, the innovation focus has shifted to systems that are application-aware (like front-end optimization, or FEO) and now execution-aware. It’s the new frontier for accelerating application performance.

To deliver web experiences that meet these new world demands, developers are increasingly using JavaScript in web design. In fact, according to httparchive, the amount of JavaScript used by the top 100 websites has almost tripled in the last three years and the size of web pages has grown 15 percent since 2014. The popularity of easily available frameworks and libraries like angular.js, backbone.js and jQuery make development easier and time-to-market faster.

Unfortunately, there is a tradeoff for these rich web experiences. As web pages become bloated with JavaScript, there are substantial delays in application delivery performance, particularly on mobile devices, which have slower CPUs and smaller memory and cache sizes. It’s not uncommon for end users to wait for seconds, staring at a blank screen, while the browser downloads and parses all this code.

A big part of the bottleneck causing these performance delays lies within the delivery of JavaScript code. When a site loads and a browser request is made, traditional web delivery approaches respond by sending all of the JavaScript code without understanding how the end users’ browsers will use it. In fact, more than half of the code is often never even used. Developers have tried to mitigate this challenge by turning to minification, an approach that removes unnecessary data such as whitespace and comments. But this approach provides only minimal benefits to web performance.
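
As a toy illustration (not the output of any particular minifier), the sketch below shows why: minification shrinks the text of the code, but every function still ships, whether the page uses it or not. The function names and page are hypothetical.

```typescript
// Original module: this page only ever calls formatPrice(), yet both functions are delivered.
function formatPrice(cents: number): string {
  // Convert an integer cent amount into a display string, e.g. 1999 -> "$19.99".
  return `$${(cents / 100).toFixed(2)}`;
}

function buildCheckoutSummary(items: { name: string; cents: number }[]): string {
  // Never invoked on this page, but it is still downloaded and parsed by the browser.
  return items.map((item) => `${item.name}: ${formatPrice(item.cents)}`).join("\n");
}

console.log(formatPrice(1999));

// A minifier strips whitespace, comments and long identifiers, producing something roughly
// like: function f(c){return"$"+(c/100).toFixed(2)} function b(i){...}
// The unused buildCheckoutSummary() logic still travels over the network and still has to be
// parsed, which is why minification alone only buys a modest improvement.
```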

Now imagine if, instead, the browser could intelligently decide what JavaScript code is actually used and download that code on demand. While the performance benefit could be substantial, demand-loading code without breaking the application would be a very challenging problem. This is exactly what a new technology called SmartSequence with JavaScript Streaming from Instart Logic does. It’s the first such innovation that I have seen that applies machine learning to gain a deep understanding of how browsers use JavaScript code in order to optimize its delivery and enhance performance.

By using real-time learning coupled with a cloud-client architecture, its technology can detect what JavaScript code is commonly used and deliver only the necessary portions. The company says this approach reduces the download size of a typical web application by 30-40%, resulting in dramatic performance improvements for end users. With this new method, developers can now accelerate the delivery of web applications even as the use of JavaScript continues to rise.
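
Instart Logic hasn’t published the internals of SmartSequence, so the sketch below only shows the general mechanism of demand-loading code in the browser using standard dynamic import(). The module path, element ID and render() function are hypothetical, and the hard part described above, automatically learning which code to defer, is not shown.

```typescript
// Demand-loading sketch: ship a small bootstrap up front and fetch heavier modules
// only when the user actually needs them, instead of sending every script on page load.

// Hypothetical heavy module that most visitors never exercise.
type ReviewsWidget = { render: (productId: string) => void };

async function openReviews(productId: string): Promise<void> {
  // The browser downloads and parses this code only at the moment it is needed.
  const widget = (await import("./reviews-widget.js")) as ReviewsWidget;
  widget.render(productId);
}

// Tie the lazy load to a user action rather than to the initial page load.
document.getElementById("show-reviews")?.addEventListener("click", () => {
  void openReviews("sku-42");
});
```

The interesting part of Instart Logic’s approach, per the company, is that the decision of what to defer is learned from how real browsers actually execute the code, rather than hand-annotated by developers as in this sketch.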

This gives web and application developers the freedom to push the boundaries of web development without sacrificing performance, opening up endless opportunities for revolutionizing web and mobile applications. The way Instart Logic is looking to solve this problem is interesting, as I haven’t seen this approach in the market before, so it’s definitely one to watch as it evolves. For more details on the technology, check out the company’s blog post entitled “Don’t Buy the JavaScript You Don’t Use.”

Stream Optimization Vendor Beamr Details ROI, Breaks Down Cost

In a recent blog post, I detailed stream optimization solutions and concluded that the lack of market education could kill this segment before it even has a chance to grow. In that post I raised some key questions about the economics of optimizing streams, including: How much cost does optimization add to the encoding flow? How much is the delivery cost reduced? How much traffic or content do you need to have in order to get an ROI? At that time, none of the stream optimization vendors had this information available on their websites.

Beamr, one of the stream optimization vendors I mentioned, recently stepped up to the challenge and sent me a detailed ROI calculation for its solution, which it also posted on its website. Finally, we have some public numbers that show the economics of a stream optimization solution and the type of companies that could benefit from it. Beamr’s product works by iteratively re-encoding each video frame at different compression levels and selecting the “optimal” compression level, the one that produces the minimum frame size in bytes without introducing any additional artifacts. Beamr’s post includes a diagram of the processing Beamr Video performs on each video frame.
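
Beamr hasn’t published code for this, but based on that description, the per-frame loop looks conceptually like the sketch below. The encoder, quality metric and threshold are placeholders I’ve assumed, not Beamr’s actual algorithm.

```typescript
interface EncodedFrame {
  bytes: Uint8Array;
  quality: number; // perceptual quality score in [0, 1], 1 = indistinguishable from the source
}

// Conceptual per-frame optimization: try progressively stronger compression on each frame
// and keep the smallest result whose perceptual quality still clears a "no additional
// artifacts" floor. `encode` and `score` stand in for a real encoder and quality metric.
function optimizeFrame(
  frame: Uint8Array,
  levels: number[], // candidate compression levels, ordered from least to most aggressive
  encode: (frame: Uint8Array, level: number) => Uint8Array,
  score: (original: Uint8Array, encoded: Uint8Array) => number,
  qualityFloor = 0.98 // placeholder threshold for "no visible additional artifacts"
): EncodedFrame {
  let best: EncodedFrame | null = null;
  for (const level of levels) {
    const bytes = encode(frame, level);
    const quality = score(frame, bytes);
    // Keep the smallest candidate that still clears the quality floor.
    if (quality >= qualityFloor && (best === null || bytes.length < best.bytes.length)) {
      best = { bytes, quality };
    }
  }
  // If nothing clears the floor, fall back to the least aggressive compression level.
  if (best === null) {
    const fallback = encode(frame, levels[0]);
    best = { bytes: fallback, quality: score(frame, fallback) };
  }
  return best;
}
```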

As you can imagine, this process is resource-intensive, since each frame is encoded several times before the pipeline moves on to the next frame. And indeed, the processing time for one hour of video on a single core ranges from 3 hours for 360p content to 14 hours for 1080p content. However, since Beamr can distribute its processing across multiple cores in parallel, on a strong enough machine optimizing an hour of 1080p content can actually be completed in one hour. Beamr’s ROI calculation estimates that the cost of processing one hour of video (which includes 6 ABR layers at resolutions ranging from 360p to 1080p) is around $15. This figure includes about $4 of CPU cost and $11 for the Beamr software license.

As for delivery cost, Beamr’s estimate, based on Amazon’s CDN pricing for large customers that deliver petabytes of data each month, is around 3 cents for each hour of video delivered. Since Beamr shaves around 35% off the stream bitrate on average, the savings on each hour of video delivered are approximately one cent. Comparing this number with the processing cost, it is easy to see that at roughly 1,500 views the cost of processing is exactly offset by the savings in CDN cost, and above 1,500 views there is a positive ROI.
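
To make that arithmetic concrete, here is a small sketch of the break-even calculation using the figures above. The numbers are Beamr’s; the variable names and the exact rounding are mine.

```typescript
// Break-even calculation using Beamr's published figures: ~$15 to process one hour of video
// (6 ABR layers), ~$0.03 of CDN cost per hour of video delivered, and ~35% average bitrate savings.
const PROCESSING_COST_PER_HOUR = 15;       // USD: ~$4 of CPU plus ~$11 of Beamr license cost
const CDN_COST_PER_HOUR_DELIVERED = 0.03;  // USD, large-customer CDN pricing
const AVERAGE_BITRATE_SAVINGS = 0.35;      // streams are ~35% smaller after optimization

// CDN cost saved every time the optimized hour of video is viewed (~1 cent, as stated above).
const savingsPerView = CDN_COST_PER_HOUR_DELIVERED * AVERAGE_BITRATE_SAVINGS; // = $0.0105

// Views needed before delivery savings offset the one-time processing cost.
// Beamr rounds the per-view saving to one cent, which gives the ~1,500-view figure quoted above.
const breakEvenViews = PROCESSING_COST_PER_HOUR / savingsPerView; // ≈ 1,429 views

// Net savings for an hour-long episode viewed one million times (~$10,000 after rounding).
const netSavings = 1_000_000 * savingsPerView - PROCESSING_COST_PER_HOUR; // ≈ $10,485

console.log({ breakEvenViews: Math.ceil(breakEvenViews), netSavings: Math.round(netSavings) });
```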

After reviewing these numbers, two things became clear to me: First, under the right circumstances, there can be a positive ROI for deploying Beamr’s stream optimization solution. Second, the benefit is only for OTT content owners that have a relatively large number of video views each month. If you have 1M views for an hour-long episode, you can save roughly $10,000 in delivery costs by optimizing that episode with Beamr. If your clips are viewed only 100 times on average, you won’t recover the optimization costs through delivery cost savings. However, since stream optimization can also benefit the UX, by reducing rebuffering events and enabling a higher-quality ABR layer to be received by more users, it might make sense to apply it even in smaller-scale services. But these claims have not yet been proven by Beamr or any of the other stream optimization vendors.

Thanks to Beamr for breaking down some of the costs and helping to educate the market.

The Internet Has Always Been Open, It’s The Platforms & Devices That Are Closed

As expected, today’s vote on the FCC’s proposed net neutrality rules passed by a 3-2 margin. While this is a big step in a process that has been going on for thirteen years now, we’re still a long way off from this debate being over. Since a draft of the proposal wasn’t shared with the public, we still don’t know exactly what the rules state or how to interpret them. We’ve also learned that FCC Commissioner Clyburn did get FCC Chairman Wheeler to make “significant changes” to the newly passed FCC rules, but we won’t know what those changes are until we get to see the actual language.

The problem is that even when we do get to read the new rules, many of the words used are going to be vague. Terms like “fair” and “unreasonable” have no defined meaning. What is the baseline that will be used to determine what is fair and what isn’t? Apparently that is up to the FCC, and from what I am told, the new rules provide no definitions or methodology at all for how those words will be put into practice. Vague, high-level language isn’t what we need more of, yet that’s what we get when the rules are written by politicians. It also doesn’t help that many in the media still can’t get the basic facts right, which only adds more confusion to the topic. My RSS feed is already full of more than a hundred net neutrality posts, and some, like this one from Engadget, get the very basics wrong.

The post says that the new rules will “ban things like paid prioritization, a tactic some ISPs used to get additional fees from bandwidth-heavy companies like Netflix,” except that Netflix is getting NO prioritization of any kind. Netflix has a paid interconnect deal with Comcast and other ISPs, but a paid interconnect deal is not the same thing as paid prioritization. All you have to do is read the joint press release by Comcast and Netflix to know this, as it clearly states that “Netflix receives no preferential network treatment.” Engadget is not the only media site to get this wrong. These are the basics; if people can’t get them right, what chance do we have of an educated discussion of net neutrality rules when people don’t even know what they apply to?

For all the talk of how this now helps consumers with regard to blocking or throttling of content via wireline services, it has no impact. We don’t have a single example of that being done by any wireline ISP, so there isn’t a problem that needs fixing. To me, the biggest piece of language in the new rules is that the FCC is using Title II classification not just for ISPs, but also for edge providers. This gives the FCC the right to examine ISP practices downstream to broadband consumers as well as upstream to edge providers. But will the oversight and regulations for upstream and downstream be the same? Probably not; one would expect them to be quite different.

I find it funny that the term “open Internet” keeps being used. Has the Internet ever been “closed” to anyone? I’ve never heard of any consumer complaining that they went to a website or content service and were denied access on their device due to their wireline ISP. Access is usually denied on the device because the platform or device has a closed ecosystem, which the net neutrality rules don’t address. So for those who have been saying that today’s vote “opens up the Internet to be a level playing field,” think again. The Internet itself has always been open; the apps and platforms we use, for the most part, are closed.

Job Opening: Sony PlayStation Vue, Technical Product Manager With CDN Expertise

The Sony PlayStation Vue team is looking for a Technical Product Manager with solid CDN and HLS skills. Candidates must have experience with video streaming technologies such as HLS and CDNs, along with an understanding of the caching layer and scalable architecture. Familiarity with PlayStation consoles, iOS and Android smartphones/tablets, and various standard streaming set-top boxes/devices (Roku, Apple TV, Chromecast) is also required. If interested, check out all the details via the job posting on LinkedIn.

If your company has a unique position they are trying to fill, send me the details.