Why Vendors Need To Hire Marketing Leaders From Outside Their Industry

With the growth we have seen in the streaming media market, most vendors have a long list of open reqs, all desperately trying to hire the same people. And while it makes sense to hire sales and product folks from within the industry, I’m a firm believer that when it comes to marketing positions, vendors need to start hiring from outside our industry – immediately.

At first, many might think that’s an odd statement for me to make, but marketing is a skill set. Either you understand how to market a product or you don’t. Understanding the technology helps to a degree, but companies aren’t selling technology, they are selling a service. No offense to the marketers in our industry, but we need some fresh blood: people who know how to market a service or product and who bring a different perspective to the industry. Even for myself, I recently hired a marketing specialist to help me re-think and re-imagine my brand in the market. A good marketing person knows how to tell a story and transform a product and service into something compelling, even if they don’t know how to encode a piece of video.

As an industry, we are all using far too many high-level and generic words like speed, quality, performance, scale, etc. with no real meaning behind them. Good marketing and branding is an art. It involves knowing how to price, package and productize a product/service, and doing it in a way that resonates with the customer, be it B2B or B2C. Those with good marketing skills know how to transcend verticals and markets while delivering a clear message. And the really good marketers can meld companies and industries and make brands more valuable and relevant.

As I have seen firsthand from the marketing person I am working with, great marketers are remarkable observers. They love to observe people’s behaviors and can quickly tell what a person likes, what resonates with them, and what creates the experience the client is looking for. A good marketing person is also extremely curious: they constantly ask questions and want to know what businesses and people think of things. Skillful marketers always have questions to ask and never run out of ways to think about how a person or business reacts to a name, a brand, a service or a feeling. In short, really good marketing folks are geniuses because they aren’t afraid to try something new, to disrupt the market, to change how people think.

A good marketing person doesn’t work 9 to 5. They spend a lot of personal hours watching people, questioning the norm, researching, looking at data and advancing their skills. They tend to read everything they can and absorb information like a sponge, constantly retaining it for later. They are great planners, but even better doers. Marketing professionals live in the trenches, because that’s where they get their energy from, and they don’t use buzzwords or quotes from books, because they have been there and done it. They have the hands-on experience, are always thinking, coming up with ideas, and trying something new. They also love their community, are aware of their surroundings, love challenges, and, I’ve found, they never start a conversation with a list of their achievements. They are most interested in their client’s challenges and how they can solve them.

I have also found that really good marketers are quite humble, don’t come with an ego and are not seeking glory. They take great pride in their work and they love to see a campaign and branding exercise be successful. Great marketers believe in accountability, are not afraid of data and reporting, and have a tangible methodology to determine the client’s ROI.

When it comes to marketing products and services in the online video industry, it’s time for our market to be disrupted. We need change. We need to evolve. We all need a fresh perspective. Even me. It does not matter how long you have been in the space; in fact, I think the longer you have been in the industry, the more of a disservice it is when it comes to taking a fresh marketing approach to the market. Right now I am having someone look at what I do, critique it, change it, and find ways to make it even more relevant and make it transcend verticals – which is the only way for any business and brand to grow. And that is the true value of a marketing genius: growing a company. If you are interested in branding, marketing, and packaging help, feel free to reach out to me and I’ll put you in contact with the marketing genius I am using.

PacketZoom Looking To Solve TCP Slow Start By Boosting Mobile App Performance

TCP is a protocol meant for stable networks, and all too often, mobile is anything but stable, with unreliable and often bottlenecked connections that conflict with TCP mechanisms like slow start on transfers. But while TCP looks inevitable from some angles, PacketZoom, an in-app technology that boosts mobile app performance, says outperforming TCP is actually not as difficult as it looks. The problem is certainly complex, with many moving pieces involved, especially when we’re dealing with the last mile. But if you break the problem down into digestible chunks and apply observations from the real world, improvement becomes achievable.

The internet is made up of links of different capacities, connected by routers. The goal is to achieve maximum possible throughput in this system. However, the links all possess different capacities, and we don’t know which will present bottlenecks to bandwidth ahead of time. PacketZoom utilizes a freeway analogy to describe the system. Imagine we have to move trucks from a start to end point. At the start point, the freeway has six lanes — but at the end, it has only two lanes. When the trucks are received at the end, an “acknowledgement” is sent back. Each acknowledgement improves the knowledge of road conditions, allowing trucks to be sent more efficiently.

The goal is for the maximum number of trucks to reach the destination as fast as possible. You could, of course, send six lanes worth of trucks to start, but if the four extra lanes suddenly end, the extra trucks must be taken off the road and resent. Given the unknown road conditions at the start of the journey, TCP Shipping, Inc. sensibly starts out with just one lane worth of trucks. This is the “slow start” phase of TCP, and it’s one of the protocol’s unsolved problems in mobile. TCP starts with the same amount of data for all kinds of channels, and thus often gets into the situation of an initial sending rate that’s far smaller than the available bandwidth, or, if we were to try to start off with too much data, a rate larger than the available bandwidth that requires resending.
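
To make the slow start math concrete, here is a rough Python sketch (my own illustration, not PacketZoom’s code) that counts how many round trips it takes to fill a bottleneck when the window doubles every RTT, starting from different initial window sizes:

```python
# Rough illustration of TCP slow start vs. a better-informed initial window.
# The numbers are simplified and are not a model of any vendor's implementation.

def round_trips_to_fill(bottleneck_segments, initial_window):
    """Count round trips until the congestion window reaches the bottleneck
    capacity, with the window doubling each RTT as in slow start."""
    window, rtts = initial_window, 0
    while window < bottleneck_segments:
        window *= 2   # slow start doubles the window every round trip
        rtts += 1
    return rtts

bottleneck = 64  # pretend the path can absorb 64 segments per RTT ("two lanes")

for start in (1, 3, 10, 32, 64):
    print(f"initial window {start:>2} MSS -> "
          f"{round_trips_to_fill(bottleneck, start)} round trips to fill the pipe")
```

On a high-latency mobile link, each of those extra round trips is delay the user can feel before the transfer reaches full speed.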

However, PacketZoom says it is possible to be smarter by choosing the initial estimate based on channel properties discernible from the mobile context. In other words, we could start off with the ideal 2 lanes of trucks, shown below in frame C.

Of course, it’s impossible to have perfect knowledge of the network ahead of time, but it is possible to get a much better estimate based on prior knowledge of network conditions. If estimates are close to correct, the TCP slow start problem can be enormously improved upon. PacketZoom’s contention is not that TCP’s starting point has never been improved. Traditionally, TCP’s initial window was set to 3 MSS (maximum segment size, derived from the MTU of the path between two endpoints). As networks improved, this was raised to 10 MSS; then Google’s Quick UDP Internet Connections (QUIC) protocol, in use in the Chrome browser, raised it to 32 MSS.

But mobile has largely been passed by, because traffic running through a browser is the minority on mobile. A larger fixed starting value, like QUIC’s, is also not a solution, given the vast range of conditions between a 2G network in India and an ultra-fast WiFi connection at Google’s Mountain View campus. That’s why, today, a very common case is mobile devices accessing content via wireless networks where the bandwidth bottleneck is the last mobile mile. And that path has very specific characteristics based on the type of network, carrier and location. For instance, 3G/T-Mobile in New York would behave differently than LTE/AT&T in San Francisco.
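
As a purely hypothetical illustration of that idea, the sketch below keys an initial window estimate off a (network type, carrier, location) tuple. The keys and MSS values are invented for the example and are not PacketZoom’s actual data:

```python
# Hypothetical lookup of an initial congestion window from historical data.
# Keys and MSS values are invented for illustration only.

HISTORICAL_ESTIMATE_MSS = {
    ("2G",   "carrier-a", "IN-DEL"): 3,    # congested 2G network
    ("3G",   "t-mobile",  "US-NYC"): 12,
    ("LTE",  "att",       "US-SFO"): 45,
    ("WIFI", "corporate", "US-MTV"): 100,  # fast office WiFi
}

DEFAULT_MSS = 10  # conservative fallback for channels with no history

def initial_window(network_type: str, carrier: str, region: str) -> int:
    """Pick a starting window from the mobile context instead of a fixed value."""
    return HISTORICAL_ESTIMATE_MSS.get((network_type, carrier, region), DEFAULT_MSS)

print(initial_window("LTE", "att", "US-SFO"))    # 45
print(initial_window("3G", "unknown", "BR-SP"))  # falls back to 10
```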

From the real-world data PacketZoom has collected, the company has observed initial values ranging from 3 MSS to over 100 MSS for different network conditions. Historical knowledge of these conditions is what allows them to avoid a slow start and instead have a head start. Crunching a giant stream of performance data for worldwide networks to constantly update bandwidth estimates is not a trivial problem. But it’s not an intractable problem either, given the computing power available to us today. In a typical scenario, if a connection starts with a good estimate and performs a couple of round trips across the network, it can very quickly converge on an accurate estimate of available bandwidth. Consider the following Wireshark graph, which shows how quickly a 4MB file was transferred with no slow start (red line) versus TCP slow start (blue line).

TCP started with a very low value and took 3 seconds to fully utilize the bandwidth. In the controlled experiment shown by the red line, full bandwidth was in use nearly from the start. The blue line also shows some pretty aggressive backoff that’s typical of TCP. In cricket, they often say that you need a good start to pace your innings well, and to strike on the loose deliveries to win matches. PacketZoom says that in their case, the initial knowledge of bottlenecks gets the good start. And beyond the start, there’s even more that they can do to “pace” the traffic and look for “loose deliveries.” The payoff to these changes would be a truly big win: a huge increase in the efficiency, and speed, of mobile data transfer.

Mobile app performance is a fascinating topic as, now more than ever, we’re all consuming more content via apps over mobile networks, as opposed to using the browser. I’d be interested to hear from others on how they think the bottlenecks can be solved over the mobile last mile.

Reviewing Fastly’s New Approach To Load Balancing In The Cloud

Load balancing in the cloud is nothing new. Akamai, Neustar, Dyn, and AWS have been offering DNS-based cloud load balancing for a long time. This cloud-based approach has many benefits over more traditional appliance-based solutions, but there are still a number of shortcomings. Fastly’s recently released load balancer product takes an interesting new approach, which the company says actually addresses many of the original challenges.

But before delving into the merits of cloud-based load balancing, let’s take a quick look at the more traditional approach. The global and local load balancing market has long been dominated by appliance vendors like F5, Citrix, A10, and others. While addressing key technology requirements, appliances have several weaknesses. First, they are costly to maintain. You need specialized IT staff to configure and manage them, not to mention the extra space and power they take up in your data center. Then there are the high support costs, sometimes as much as 20-25% of the total yearly cost of the hardware. Appliance-based load balancers are also notoriously difficult to scale. You can’t just “turn on another appliance” in response to flash traffic or a sudden surge in the popularity of your website. Finally, these solutions don’t fit into the growing cloud model, which requires you to be able to load balance within and between Amazon AWS, Google Cloud and Microsoft Azure.

Cloud load balancers address the shortcomings of appliance-based solutions. However, the fact that they are built on top of DNS creates some new challenges. Let’s consider an example of a user who wants to connect to www.example.com. A DNS query is generated, and the DNS-based load balancing solution decides which region/location to send the user to based on a few variables. The browser/end-user then caches that answer, typically for a minute or more. There are two key problems with this approach: DNS time-to-live/caching, and the minimal number of variables the load balancer can use to make the optimal decision.

The first major flaw with this approach is that DNS-based load balancing depends on a mechanism that was designed to help with the performance of DNS. DNS has a built-in performance mechanism whereby the answer returned from a DNS query can be cached for a time period specified by the server. This is called the Time to Live, or TTL, and the lowest value most sites use is usually between 30-60 seconds. However, most browsers have implemented their own caching layer that can override the TTL specified by the server. In fact, some browsers cache for 5-10 minutes, which is an eternity when a region or data center fails and you need to route end users to a different location. Granted, modern browsers have improved their response time as it relates to TTL, but there are a ton of older browsers and various libraries that still hold on to cached DNS responses for 10+ minutes.
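
To see why that matters for failover, here is a minimal, self-contained Python sketch of a client-side cache that pins answers longer than the server’s TTL. The times and IP addresses are made up, and real browser behavior varies widely:

```python
# Simplified illustration of why client-side DNS caching hurts failover.
# Times and addresses are invented; real browsers and libraries vary widely.
import time

SERVER_TTL = 30           # seconds the authoritative DNS answer asks to be cached
BROWSER_MIN_CACHE = 600   # some older browsers/libraries pin answers this long

class CachedAnswer:
    def __init__(self, ip, ttl):
        self.ip = ip
        self.expires = time.time() + max(ttl, BROWSER_MIN_CACHE)  # TTL gets overridden

cache = {}

def resolve(hostname, authoritative_ip):
    """Return a cached answer if it has not expired, otherwise 'query' DNS again."""
    entry = cache.get(hostname)
    if entry and entry.expires > time.time():
        return entry.ip                  # the stale answer keeps winning
    cache[hostname] = CachedAnswer(authoritative_ip, SERVER_TTL)
    return authoritative_ip

# The primary data center answers first...
print(resolve("www.example.com", "203.0.113.10"))
# ...then it fails and DNS is updated, but the client cache still serves the old IP.
print(resolve("www.example.com", "198.51.100.20"))  # still prints 203.0.113.10
```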

The second major flaw with DNS-based load balancing solutions is that the load balancing provider can only make a decision based on the IP of the querying recursive DNS server or, less frequently (if the provider supports it), the end-user IP. Most frequently, DNS-based solutions receive a DNS query for www.example.com and the load balancer looks at the IP address of the querying system, which is generally just the end user’s DNS resolver and is often not even in the same geography. The DNS-based load balancer has to make decisions based solely on this input. It doesn’t know anything about the specific request itself – e.g. the path requested, the type of content, whether the user is logged in or not, the particular cookie or header values, etc. It only sees the querying IP address and the hostname, which severely limits its ability to make the best possible decision.
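
Expressed as code, the decision a DNS-based balancer gets to make looks roughly like the simplified Python sketch below. The geo lookup and region names are invented for the example and are not any vendor’s implementation:

```python
# Roughly the only inputs a DNS-based load balancer has to work with.
# The geo lookup and region names are invented for illustration.

def geo_lookup(ip: str) -> str:
    """Stand-in for a GeoIP database lookup on the resolver's address."""
    return "NA" if ip.startswith("198.51.100.") else "EU"

def dns_route(resolver_ip: str, hostname: str) -> str:
    """Pick a region knowing only the resolver's IP (not the user's) and the hostname."""
    return "us-east" if geo_lookup(resolver_ip) == "NA" else "eu-west"

print(dns_route("198.51.100.7", "www.example.com"))  # us-east
```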

Fastly says their new application-aware load balancer is built in such a way that it avoids these problems. It’s basically a SaaS service built on top of their 10+ Tbps platform, which already provides CDN, DDoS protection, and web application firewall (WAF) services. Fastly’s load balancer makes all of its load balancing decisions at the HTTP/HTTPS layer, so it can make application-specific decisions on every request, overcoming the two major flaws of the DNS-based solutions. Fastly also provides granular control, including the ability to make different load balancing decisions and ratios based on cookie values, headers, whether a user is logged in (and whether they are a premium customer), what country they come from, etc. Decisions are also made on every request to the customer’s site or API, not just when the DNS cache expires. This allows for sub-second failover to a backup site if the primary is unavailable.
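
For contrast with the DNS sketch above, here is a hedged sketch of the kind of per-request decision an HTTP-layer balancer can make, written in Python rather than Fastly’s actual configuration language. The backend names, cookie fields and header are all invented:

```python
# Hypothetical per-request routing at the HTTP layer.
# The request model, backend names, cookies and headers are invented.
from dataclasses import dataclass, field

@dataclass
class Request:
    path: str
    cookies: dict = field(default_factory=dict)
    headers: dict = field(default_factory=dict)

def http_route(req: Request) -> str:
    """Choose a backend using request-level signals a DNS balancer never sees."""
    if req.path.startswith("/api/"):
        return "api-pool"
    if req.cookies.get("session") and req.cookies.get("tier") == "premium":
        return "premium-origin"        # logged-in premium users get a dedicated pool
    if req.headers.get("X-Geo-Country") == "DE":
        return "eu-origin"
    return "default-origin"            # decided fresh on every single request

print(http_route(Request("/api/v1/videos")))                                    # api-pool
print(http_route(Request("/", cookies={"session": "abc", "tier": "premium"})))  # premium-origin
```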

The other main difference is that Fastly’s Load Balancer, like the rest of their services, is developed on their single edge cloud platform, allowing customers to take advantage of all the benefits of this platform. For example, they can create a proactive feedback loop with real-time streaming logs to identify issues faster and instant configuration changes to address these issues. You can see more about what Fastly is doing with load balancing by checking out their recent video presentation from the CDN Summit last month.

When It Comes To Cache Hit Ratio And CDNs, The Devil Is In The Details

The term “cache hit ratio” is used so widely in the industry that it’s hard to tell exactly what it means anymore from a measurement standpoint, or what methodology is behind how it’s measured. When Sandpiper Networks first invented the concept of a CDN (in 1996), and Akamai took it to the next level by distributing the caching proxy “Squid” across a network of global servers, the focus of that caching was largely images. But now we need to ask ourselves if focusing on overall cache hit ratio as a success metric is the best way to measure performance on a CDN.

In the late ’90s, many of the Internet’s web applications were served from enterprises with on-premises data centers, generally over much lower-bandwidth pipes. One of the core issues Akamai solved was relieving bandwidth constraints at localized enterprise data centers. Caching images was critical to moving bandwidth off the local networks and bringing content closer to the end user.

But fast forward 20 years and the Internet of today is very different. Pipes are bigger, applications are more complicated and users are more demanding with respect to the performance, availability and security of those applications. So, in this new Internet, is the total cache hit ratio for an application a good enough metric to consider, or is there a devil in the details? Many CDNs boast of their customers achieving cache hit ratios around 90%, but what does that really mean and is it really an indicator of good performance?

To get into cache hit ratios, we must think about the elements that make up a webpage. Every webpage delivered to a browser is composed of an HTML document and other assets, including images, CSS files, JS files and Ajax calls. HTTP Archive tells us that, on average, a web page contains about 104-108 objects coming from 19 different domains. The average breakdown of asset types served per webpage from all HTTP Archive sites tested looks like this:

Most of the assets being delivered per web page are static. On average, 9 may specifically be of content type HTML (and therefore potentially dynamic), but usually only one will be the initial HTML document. An overall cache hit rate for all of these objects tells us what percentage of them are being served from the CDN, but it does not give developers the details they need to truly optimize caching. A modern web application should have most of its images, CSS files and other static objects served from cache. Does a 90% cache hit ratio on the above page tell you enough about the performance and scalability of the application serving that page? Not at all.

The performance and scalability of a modern web application is often largely dependent on its ability to process and serve the HTML document. The production of the HTML document is very often the largest consumer of compute resources in a web application. When more HTML documents are served from cache, less compute resource is consumed and applications therefore become more scalable.
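
A quick back-of-the-envelope example, using invented but plausible numbers, shows why an overall ratio hides this. On a page where every static asset is cached but the HTML never is, the headline hit ratio looks excellent while the origin keeps doing all of its most expensive work:

```python
# Invented but plausible numbers: a 105-object page where every static asset
# is served from cache but the single HTML document always misses.
objects_per_page = 105
html_documents   = 1                    # the initial HTML document, never cached
static_cached    = objects_per_page - html_documents
html_cost_ms     = 200                  # assumed origin compute to render the HTML

overall_hit_ratio  = static_cached / objects_per_page
origin_ms_per_view = html_documents * html_cost_ms   # expensive work the cache never saves

print(f"overall cache hit ratio: {overall_hit_ratio:.0%}")          # ~99%, looks great
print(f"origin compute still spent per page view: {origin_ms_per_view} ms")
```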

HTML delivery time is also critical to page load time and start render time, since the HTML is the first object delivered to the browser and a blocker to all other resources being delivered. Generally, serving HTML from cache can cut HTML delivery time to circa 100ms and significantly improve the user experience and users’ perception of page speed. Customers should seek to understand the cache hit ratio by asset type so developers can specifically target improvements in cache hit rates by asset type. This results in faster page load times and a more scalable application.

For example, seeking closer to 100% cache hit rates for CSS files, JS files and possibly images would seem appropriate.

As would understanding what cache hit rate is being achieved on the HTML.

[*Snapshots from the section.io portal]
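
As a rough illustration of how these per-asset-type numbers could be computed from raw logs, here is a short Python sketch that tallies hits per content type from simplified log records. The field names and sample data are invented and do not reflect section.io’s or any CDN’s actual log format:

```python
# Hedged sketch: cache hit ratio per asset type from simplified CDN log records.
# Field names and sample data are invented for illustration.
from collections import defaultdict

logs = [
    {"content_type": "image/png",              "cache": "HIT"},
    {"content_type": "text/css",               "cache": "HIT"},
    {"content_type": "application/javascript", "cache": "HIT"},
    {"content_type": "text/html",              "cache": "MISS"},
    {"content_type": "text/html",              "cache": "HIT"},
]

def hit_ratio_by_type(records):
    """Group log records by content type and compute a hit ratio per type."""
    hits, totals = defaultdict(int), defaultdict(int)
    for record in records:
        asset = record["content_type"].split(";")[0]   # drop charset parameters
        totals[asset] += 1
        if record["cache"] == "HIT":
            hits[asset] += 1
    return {asset: hits[asset] / totals[asset] for asset in totals}

for asset, ratio in hit_ratio_by_type(logs).items():
    print(f"{asset:26s} {ratio:.0%}")
```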

While not all HTML can be served from cache, the configurability of cache solutions like Varnish Cache (commercially available through Varnish Software, section.io and Fastly) and improved HTML management options such as HTML streaming (commercially available from Instart Logic and section.io) have made it possible to cache HTML. In addition, new developer tools such as section.io’s Developer PoP allow developers to more safely configure and deploy HTML caching without risking incidents in production.

Many CDNs focus on overall cache hit rate because they do not encourage their users to cache HTML. A 90% cache hit rate may sound high, but when you consider that the 10% of elements not cached are the most compute-heavy, a different picture emerges. By exposing the cache hit ratio by asset type, developers are able to see the full picture of their caching and optimize accordingly. This lets builders and managers of web applications more effectively understand and improve the performance, scalability, and user experience of their applications, and it is where the industry needs to head.

New Report Reveals Low TV Network Brand Recognition Among Young Millennials, Here’s What It Means For Business

A new report from ANATOMY entitled “The Young and the Brandless” ranks seven key TV and OTT networks according to their digital performance and brand recognition among young millennials (18-26). The report reveals what raises a TV network brand’s relevance in digital environments.

The biggest difference between TV brands with high and low brand recognition is a user experience-first strategy, ANATOMY’s report suggests. But, according to ANATOMY CEO Gabriella Mirabelli, “While networks consistently indicate that the viewer is at the center of their thinking, they don’t seem to actually analyze how users truly behave.” User experience is the key to making a TV network relevant in digital spaces. TV brands with higher brand recognition among young millennials (e.g., Netflix) are extremely social media savvy. ANATOMY found that these brands know when and what to post to drive higher rates of engagement. For example, according to ANATOMY, Facebook posts published between 12-3 PM generate “236% more engagements” (reactions, shares, and comments).

A TV network’s website or app is also an important touchpoint for its brand in digital spaces, but people judge websites quickly, in about “3.42 seconds”, according to ANATOMY. TV networks with higher brand recognition had easy-to-use user interfaces on their websites and apps. They made it easy for people to watch shows, discover new content, and find information about shows.

There is a lot of other great data in the report, which you can download for free here.