“Study” of ISP Slowdowns Conveniently Ignored Real Cause, ISPs Not At Fault

Last week, so-called consumer advocacy group Free Press announced that, according to data collected via their BattlefortheNet website, major ISPs were “slowing” content services at key interconnection points. Free Press pitched their agenda to the media, and some news outlets wrote stories saying major Internet providers were slowing traffic speeds for thousands of consumers across North America. But as it turns out, the Free Press came to the wrong conclusion when they accused the ISPs of being responsible. GTT, the main provider having the capacity problem with the ISPs, confirmed that some ISPs gave them extra capacity more than six months ago and that GTT simply hasn’t turned up that extra capacity yet. Updated Tues June 30: GTT and AT&T have entered into an interconnection agreement.

The data that Free Press is highlighting shows that GTT, a transit provider that connects to many ISPs, was having capacity problems with AT&T, Comcast and other ISPs in select cities. Naturally everyone assumed it must be the ISPs’ fault, and interestingly enough, GTT told me that not a single member of the media or of the Free Press contacted them for more details. I reached out to GTT and they were happy to set up a call and were very easy to talk to. While GTT could not disclose full details on their peering agreements and relationships, I did confirm that multiple ISPs provided GTT with extra capacity, which the company is still in the process of turning up. But it doesn’t stop there.

GTT actually turned down capacity at some interconnection points because they are shifting their traffic flows as a result of acquisitions and are consolidating how they connect with ISPs. In the last six years, GTT has acquired five companies (WBS Connect, PacketExchange, nLayer Communications, the Tinet IP network from Inteliquent, and UNSi) and, a few months ago, announced an agreement to acquire their sixth, MegaPath.

As a result of the acquisitions, the nLayer side of GTT has been shutting down its connections, specifically AS4436, and moving that traffic over to the Tinet connections on AS3257. Put simply, GTT is consolidating networks and shifting how it connects to ISPs, moving traffic onto some connections while terminating others. So the capacity issues that the Free Press data shows are a result of GTT essentially shutting down those connections, not of any wrongdoing on the ISPs’ part. The M-Labs data that the Free Press is using measures a problem that GTT owns, and as GTT told me, extra capacity was made available to them before M-Labs even started their measurements.

Taking it a step further, public info that details GTT’s network shows that GTT/nLayer (AS4436) now has all of its traffic behind the settlement-free interconnection (SFI) relationships of GTT/Tinet (AS3257) and is no longer connecting with other networks. When you look this AS up in the peering database, GTT’s entry says, “We are no longer accepting Peering requests for this ASN”. GTT has a lot of AS numbers, and looking at all of them shows the consolidation taking place and why they are no longer accepting peering requests for AS4436: it is being shut down.

Data also shows that GTT/nLayer (AS4436) was once connected to multiple networks and was likely paying for connections to Tier 1 networks. These paths still exist and the BGP information is still available, but it will likely be gone soon. GTT/Tinet (AS3257) is a 1Tbps+ network with “balanced” traffic, and GTT/nLayer (AS4436) is a 1Tbps+ traffic source with “mostly outbound” traffic. Of course none of this is info the average consumer would know how to look up or even understand, and that’s exactly what the Free Press and others are counting on. It was not hard to find the cause of the performance issues if you simply asked GTT, looked at the public network info and asked the ISPs.
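For anyone who wants to poke at this kind of public routing data themselves, here is a minimal sketch that pulls the announced prefixes and BGP neighbours for the two ASNs discussed above using RIPEstat’s public Data API. The endpoint names and response fields are my assumptions about that API, not anything GTT or M-Labs published, so treat this as a starting point rather than a definitive tool.

```python
import json
import urllib.request

# Base URL for RIPEstat's public Data API. The endpoint names below
# ("announced-prefixes", "asn-neighbours") and the response fields are my
# assumptions; check them against the current API documentation.
RIPESTAT = "https://stat.ripe.net/data/{endpoint}/data.json?resource=AS{asn}"


def ripestat(endpoint: str, asn: int) -> dict:
    """Fetch one RIPEstat data call for an ASN and return its 'data' payload."""
    url = RIPESTAT.format(endpoint=endpoint, asn=asn)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["data"]


def summarize(asn: int) -> None:
    """Print a one-line summary of how 'connected' an ASN currently looks."""
    prefixes = ripestat("announced-prefixes", asn).get("prefixes", [])
    neighbours = ripestat("asn-neighbours", asn).get("neighbours", [])
    print(f"AS{asn}: {len(prefixes)} announced prefixes, {len(neighbours)} BGP neighbours")


if __name__ == "__main__":
    # AS4436 is the old nLayer network being wound down; AS3257 is the
    # Tinet network that GTT is consolidating onto.
    for asn in (4436, 3257):
        summarize(asn)
```

An ASN that is being shut down should show its prefix and neighbour counts shrinking over time relative to the network it is being folded into, which is the consolidation pattern described above.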

The takeaway from all of this is that many are far too quick to judge who is at fault when it comes to network performance, without talking to all the parties involved and having all the facts. GTT is making changes to their network, is working closely with the ISPs, already has the relationships in place and is working to solve any performance problems. While some like to say that these networks can just “flip the switch” to fix issues, it does not work that way, especially when you are consolidating networks, as GTT is. Many are quick to lay blame on ISPs just because it is fashionable to hate service providers, or to push an agenda like the Free Press does.

It’s clear that the Free Press should not be trusted, as they drew the wrong conclusions from the data to push their agenda. Even if they didn’t do it on purpose, it shows the Free Press has a complete lack of understanding of the data being collected. They don’t understand that when a Tier 1 network or CDN makes changes to its infrastructure, it impacts the data that is being collected. Don’t point fingers unless you talk to all the parties involved and review ALL of the data available in the market, not just a slice of it.

It should also be noted that GTT told me that no one from the Free Press ever contacted them before the Free Press laid blame on the ISPs. If they had, GTT would have been able to inform the Free Press of some more details, which the Free Press neglected to even look into. I also find it interesting that while Free Press says they are “fighting” for consumers’ rights to “communicate”, the Free Press doesn’t allow any comments on any of the posts they publish on their website. To try to discredit me, the Free Press has also called me a “paid cable-industry operative”, which isn’t true and is laughable to suggest, considering that I have written two detailed blog posts calling out Verizon for “poor business practices” within just the past three months. Apparently, sticking to the facts is not something the Free Press does very well.

The Free Press has no wiggle room on this, and if they don’t edit their blog post to correct their accusations, it only proves they care about their agenda, not the truth.

Note: For those who say I am standing up for the ISPs, I’m not. They can speak for themselves. This is about getting to the real root cause of the problem, not the “perceived” problem that someone like the Free Press likes to promote. Like many, I am tired of all the vagueness and double-speak around discussions involving interconnects, peering and transit. We need more clarity, not more politics.

The Guardian’s Story About ISPs “Slowing Traffic” Is Bogus: Here’s The Truth

On Monday The Guardian ran a story with a headline stating that major Internet providers are slowing traffic speeds for thousands of consumers in North America. While that’s a title that’s going to get a lot of people’s attention, it’s not accurate. Even worse, other news outlets like Network World picked up on the story and rehashed everything The Guardian said, but then mentioned they could not find the “study” The Guardian is talking about. The reason they can’t find the report is that it does not exist.

In an email exchange with M-Labs this morning, they confirmed for me that there has been no new report since their last one, published on October 28th, 2014. So The Guardian wrote a story about a “study released on Monday”, referencing data from M-Labs, but provides no link to the so-called study. The Guardian does cite some data that appears to have been collected via the BattlefortheNet website, using M-Labs methodology, which relies on tests that end users initiate. Tim Karr of the Free Press, one of the organizations that makes up BattlefortheNet, is quoted in The Guardian post as saying, “Data compiled using the Internet Health Test show us that there is widespread and systemic abuse across the network.”

What The Guardian story neglects to mention is that the measurement methodology the Free Press is highlighting was actually rejected by the FCC for its Measuring Broadband America report. The FCC rejected it because the data isn’t collected in a real-world fashion that takes into account all of the variables that determine the actual quality consumers receive, as others have shown. (one, two, three)

Updated 1:10 pm: M-Labs just put out a blog post about their data saying, “It is important to note that while we are able to observe and record these episodes of performance degradation, nothing in the data allows us to draw conclusions about who is responsible for the performance degradation.” M-Labs did not include a link to any “study” since they didn’t publish one, but you can see a Google Docs file of some of the data here. It’s interesting to note that the document has no name on it, so we don’t know who wrote it or published it to Google Docs.

Updated 2:28 pm: Timothy Karr from Free Press has deleted all of the data that was in the original Google Docs file in question and simply added two links. It’s interesting to note that they published it without their name on it and only edited it once it was called into question.

Updated 2:07 pm: M-Labs has confirmed for me that they did not publish the Google Docs file in question. So the data and text that the Free Press was showing the media, to get them to write a story, has now been erased. This is exactly why the media needs to check facts and sources instead of believing everything they are told.

If the Free Press is referencing any “study” they may have put out on Monday using M-Labs methodology, it’s nowhere to be found on their website. So where is this “study”? Why can’t anyone produce a link to it? Mainstream media outlets that picked up on The Guardian’s story should be ashamed of themselves for not looking at this “study” BEFORE they ran a story. It is sloppy reporting to reference data in a story when you haven’t seen it yourself or even verified that a “study” exists.

Adding even more insult to injury, The Guardian piece shows no basic understanding of how traffic flows on the Internet, or of the difference between companies that offer CDN services and those that offer transit. The Guardian piece calls GTT a “CDN” provider when in fact they are nothing of the sort. GTT is an IP network provider; they offer no CDN services of any kind and don’t use the word CDN anywhere on their website. At least one other news site that also incorrectly called them this has since corrected it and gotten the terminology right. But once again, some news outlets simply took what The Guardian wrote without getting the basics right or checking the facts. Others did a good job of looking past the hype.

The Guardian piece also says that, “Any site that becomes popular enough has to pay a CDN to carry its content on a network of servers around the country”, but that’s not entirely true. Netflix doesn’t pay a third-party CDN; they built their own. So you don’t “have” to pay to use a third-party CDN; some content distributors choose to build and manage their own instead. The Guardian piece also uses words like “speed” and “download” interchangeably, but these terms have very different meanings. Speed is the rate at which packets get from one location to another, while throughput (which is what a download test actually measures) is the average rate of successful message delivery over a communication channel.
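To make that distinction concrete, here is a tiny illustration with made-up numbers; none of these figures come from the M-Labs or BattlefortheNet data, they are purely for illustration.

```python
# Illustrative only: made-up numbers to show why "speed" (latency) and
# throughput are two different measurements.

# Latency: how long a packet takes to get from one place to another.
round_trip_ms = 40                  # e.g. a ping from a home connection to a server
one_way_latency_ms = round_trip_ms / 2

# Throughput: how much data is successfully delivered per unit of time.
bytes_transferred = 250_000_000     # a 250 MB download
transfer_seconds = 80
throughput_mbps = (bytes_transferred * 8) / transfer_seconds / 1_000_000

print(f"One-way latency: {one_way_latency_ms:.0f} ms")    # 20 ms
print(f"Throughput: {throughput_mbps:.1f} Mbps")           # 25.0 Mbps

# A path can have low latency but poor throughput (for example, through a
# congested interconnection point), or higher latency with plenty of
# throughput, which is why the two terms should not be used interchangeably.
```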

Even if The Guardian article was trying to use data collected via the BattlefortheNet website, they don’t understand what data is actually being collected. That data is specific to problems at interconnection points, not inside the last-mile networks. So if there isn’t enough capacity at an interconnection point, saying ISPs are “slowing traffic speeds” is not accurate. No ISP is slowing down the speed of the consumer’s connection to the Internet, since that speed is determined inside the last mile, which sits outside of the interconnection points. Even the Free Press isn’t quoted as saying ISPs are “slowing” down access speed, but rather that they are limiting access to enough capacity at interconnection points.
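If you want to sanity-check this kind of claim yourself, below is a rough sketch of how you could look at crowd-sourced test results by hour of day for a single access ISP and transit provider pair. The file name and column names (timestamp, access_isp, transit_provider, download_mbps) are hypothetical, not M-Labs’ actual schema; the point is only that congestion at an interconnection point shows up as a peak-hour dip for one ISP/transit pair, not as an across-the-board slowdown of the last-mile connection.

```python
import csv
from collections import defaultdict
from datetime import datetime
from statistics import median


def hourly_medians(path, access_isp, transit_provider):
    """Median download throughput per hour of day for one ISP/transit pair."""
    by_hour = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["access_isp"] != access_isp or row["transit_provider"] != transit_provider:
                continue
            hour = datetime.fromisoformat(row["timestamp"]).hour
            by_hour[hour].append(float(row["download_mbps"]))
    return {hour: median(values) for hour, values in sorted(by_hour.items())}


if __name__ == "__main__":
    # "tests.csv", the ISP name and the column names are all hypothetical.
    for hour, mbps in hourly_medians("tests.csv", "ExampleISP", "GTT").items():
        print(f"{hour:02d}:00  {mbps:6.1f} Mbps")
```

Even then, as M-Labs itself notes above, a dip like that tells you where the congestion is, not which party is responsible for it.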

It should be noted that while M-Labs tells me they had not intended to release an additional report, because of The Guardian post M-Labs will be putting out a blog post that broadly describes some of the noticeable trends in the M-Lab data and “clarifies a few other matters”. Look for that shortly. M-Labs’ blog post is now live.

Thursday Webinar: Effective Multiplatform Delivery – Formats, Players and Distribution

Thursday at 2pm ET, I’ll be moderating a StreamingMedia.com webinar on the topic of “Effective Multiplatform Delivery”. You can register and join this webinar for free.

Streaming Vendor News Recap For The Week Of June 15th

Here’s a list of all the releases I saw from streaming media vendors for this week.

Conference Videos Now Online From SM East Show & CDN Summit

All of the presentations from last month’s Streaming Media East and Content Delivery Summit are now available in video at www.streamingmedia.com/videos. You can download the presentations from the shows here: CDN Summit, SM East Day 1, SM East Day 2.