The Guardian’s Story About ISPs “Slowing Traffic” Is Bogus: Here’s The Truth

On Monday, The Guardian ran a story with a headline stating that major Internet providers are slowing traffic speeds for thousands of consumers in North America. While that's a title that's going to get a lot of attention, it's not accurate. Even worse, other news outlets like Network World picked up on the story and rehashed everything The Guardian said, but then mentioned they could not find the "study" The Guardian was talking about. The reason they can't find the report is that it does not exist.

In an email exchange this morning, M-Labs confirmed for me that there has been no new report since their last one, published on October 28th, 2014. So The Guardian wrote a story about a "study released on Monday", referencing data from M-Labs, but provided no link to the so-called study. The Guardian does cite some data that appears to have been collected via the BattlefortheNet website, using M-Labs' methodology, which relies on tests that end users initiate themselves. Tim Karr of the Free Press, one of the organizations that makes up BattlefortheNet, is quoted in The Guardian post as saying, "Data compiled using the Internet Health Test show us that there is widespread and systemic abuse across the network."

What The Guardian story neglects to mention is that the measurement methodology the Free Press is highlighting was actually rejected by the FCC in its Measuring Broadband America report. The FCC rejected it because the data wasn't collected in a real-world fashion, taking into account all of the variables that determine the actual quality consumers receive, as others have shown. (one, two, three)

Updated 1:10 pm: M-Labs just put out a blog post about their data saying, "It is important to note that while we are able to observe and record these episodes of performance degradation, nothing in the data allows us to draw conclusions about who is responsible for the performance degradation." M-Labs did not include a link to any "study" since they didn't publish one, but you can see a Google Docs file of some of the data here. It's interesting to note that the document has no name on it, so we don't know who wrote it or published it to Google Docs.

Updated 2:28 pm: Timothy Karr from Free Press has deleted all of the data that was in the original Google Docs file in question and simply added two links. It's interesting to note that they published it without their name on it and only edited it once it was called into question.

Updated 2:07 pm: M-Labs has confirmed for me that they did not publish the Google Docs file in question. So the data and text that Free Press was showing the media to get them to write a story has now been erased. This is exactly why the media needs to check facts and sources instead of believing whatever they are told.

If the Free Press is referencing some "study" they put out on Monday using M-Labs' methodology, it's nowhere to be found on their website. So where is this "study"? Why can't anyone produce a link to it? Mainstream media outlets that picked up on The Guardian's story should be ashamed of themselves for not looking at this "study" BEFORE they ran a story. It is sloppy reporting to reference data in a story you haven't seen yourself, or to cite a "study" without even verifying that it exists.

Adding insult to injury, The Guardian piece shows no basic understanding of how traffic flows on the Internet or of the difference between companies that offer CDN services and those that offer transit. The Guardian piece calls GTT a "CDN" provider when in fact they are nothing of the sort. GTT is an IP network provider; they offer no CDN services of any kind and don't use the word CDN anywhere on their website. At least one other news site that also incorrectly called them this has since corrected it and gotten the terminology right. But once again, some news outlets simply took what The Guardian wrote without getting the basics right or checking the facts. Others did a good job of looking past the hype.

The Guardian piece also says that "Any site that becomes popular enough has to pay a CDN to carry its content on a network of servers around the country", but that's not entirely true. Netflix doesn't pay a third-party CDN; it built its own. So you don't "have" to pay to use a third-party CDN; some content distributors choose to build and manage their own instead. The Guardian piece also uses words like "speed" and "download" interchangeably, but these terms have very different meanings. Speed is the rate at which packets get from one location to another. Throughput is the average rate of successful message delivery over a communication channel.
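To make the distinction concrete, here is a minimal Python sketch of how the two are measured separately (illustrative only; the target host and buffer size are my own assumptions, not anything from The Guardian's or M-Labs' tests):

    import socket
    import time

    HOST, PORT = "example.com", 80  # hypothetical test target
    REQUEST = (b"GET / HTTP/1.1\r\nHost: example.com\r\n"
               b"Connection: close\r\n\r\n")

    # Speed/latency: how long one small exchange takes to complete.
    start = time.time()
    sock = socket.create_connection((HOST, PORT), timeout=10)
    sock.sendall(REQUEST)
    first_byte = sock.recv(1)
    latency_ms = (time.time() - start) * 1000

    # Throughput: how many bits are successfully delivered per second.
    received = len(first_byte)
    t0 = time.time()
    while True:
        chunk = sock.recv(65536)
        if not chunk:  # server closed the connection
            break
        received += len(chunk)
    elapsed = max(time.time() - t0, 1e-6)
    sock.close()

    print(f"latency: {latency_ms:.1f} ms")
    print(f"throughput: {received * 8 / elapsed / 1e6:.2f} Mbps")

A connection can have low latency yet poor throughput, or the reverse, which is why treating the two terms as interchangeable muddies the story.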

Even if The Guardian article was trying to use data collected via the BattlefortheNet website, its authors don't understand what data is actually being collected. That data is specific to problems at interconnection points, not inside the last-mile networks. So if there isn't enough capacity at an interconnection point, saying ISPs are "slowing traffic speeds" is not accurate. No ISP is slowing down the speed of the consumer's connection to the Internet, as that all takes place inside the last mile, which sits on the other side of the interconnection points. Even the Free Press isn't quoted as saying ISPs are "slowing" down access speed, but rather that they are limiting access to enough capacity at interconnection points.
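A crude way to see that distinction for yourself is to compare the path to a server inside your ISP's network with a path that crosses an interconnection point. Here is a minimal sketch (the hostnames are hypothetical placeholders of my own): if the on-net path stays healthy while the off-net path degrades at peak hours, the bottleneck is at or beyond the interconnect, not in the last mile.

    import socket
    import time

    def connect_latency_ms(host, port=80, samples=5):
        # Median TCP connect time to a host, in milliseconds.
        times = []
        for _ in range(samples):
            start = time.time()
            sock = socket.create_connection((host, port), timeout=5)
            times.append((time.time() - start) * 1000)
            sock.close()
        return sorted(times)[len(times) // 2]

    # Hypothetical targets: a speed-test node hosted inside the ISP's
    # network versus a server reached across an interconnection point.
    on_net = connect_latency_ms("speedtest.my-isp.example")
    off_net = connect_latency_ms("server.far-side.example")

    print(f"on-net: {on_net:.1f} ms, off-net: {off_net:.1f} ms")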

It should be noted that while M-Labs tells me they had not intended to release an additional report, because of The Guardian post M-Labs will be putting out a blog post that broadly describes some of the noticeable trends in the M-Lab data and "clarifies a few other matters". Look for that shortly. Update: M-Labs' blog post is now live.

  • Gunther

    Dan, thanks for always cutting through the hype and politics for me. This is simply the Free Press promoting their agenda in a non-transparent way.

  • We are the site referenced above that corrected its description of GTT. I'd make two points:

    1) We welcomed information about GTT's exact role, and they do provide services to content delivery networks, referenced here: http://www.gtt.net/services/ip-transit/ so CDNs do appear on their website, at least as part of their client base.

    It was the fact that GTT does provide services to CDNs that lent credibility to the claim that interconnection disputes would impact GTT clients' performance for AT&T customers (in this particular case). Although our readers probably won't have the first clue what a Tier 1 IP network is, that is definitely a more accurate description of what GTT is, and we updated our piece to reflect that.

    But let's not lose sight of the fact that the publicly available measurement data clearly shows traffic slowdowns that affect customers of some ISPs while others are relatively unaffected. If I were a consumer in a city like Atlanta, knowing I'm much more likely to encounter problems connecting through GTT on AT&T U-verse might steer me to Comcast, which had no issues.

    2) It is our normal practice to link to these kinds of studies, which we always read ourselves. Our readers were aware we did not have a copy from the moment the article was written. After some research of our own, we did find relevant measurement data regarding GTT that lent credibility to the claims made in the Guardian piece. While you have spent a lot of time dissecting the process, you may be missing the larger point: there is a measurable issue here, and that is the first step in the process of identifying who is responsible. We also know on which ISPs that performance data was measured. I don't think it is wild, unfair speculation to suggest that certain ISPs have more performance issues than others, and that may be an important distinction for consumers choosing between two ISPs, one experiencing measurable degradation and the other not.

    I'd also suggest one of the parties we should really hear from next is GTT, a company that seems to be experiencing the greatest amount of degradation. If I were reviewing a selection of network providers, MLab's data would clearly point me away from GTT, and that may be through no fault of their own. But it is in their best interest to speak to the performance issue and add to the record.

    Based on everything that is publicly accessible from MLab, it would be wrong to dismiss BattlefortheNet’s assertions just because The Guardian may have been somewhat sloppy in its reporting.

    Despite your fierce defense of Comcast during the Netflix dispute, it was interesting to note that money has the magical power of solving these issues overnight. That’s really what is at the heart of most of these disputes and how they are resolved.

    • danrayburn

      1. Providing services to content delivery networks is not the same as being a content delivery network. CDNs store and cache content; GTT provides no such service.

      Yes, we know there are slowdowns at interconnection points. But that's not news. M-Labs' report from eight months ago documented it. The media jumped all over this Guardian post WITHOUT speaking to anyone at M-Labs or seeing a copy of any such report.

      Why didn't the media email M-Labs and ask for the report? I sent one email and quickly got a response from them saying there was no new report. Why didn't the media take the time to check the source? It's simply lazy reporting.

      Not to mention, M-Labs' blog post today says, "It is important to note that while we are able to observe and record these episodes of performance degradation, nothing in the data allows us to draw conclusions about who is responsible for the performance degradation." So we are just taking the word of the Free Press?

      Paid interconnection, especially in the U.S., has been taking place for a very long time. So anyone who follows this segment of the market would not find it "interesting to note that money has the magical power of solving these issues overnight." That's not news. ISPs have public peering policies anyone can review. If a company's traffic does not align with an ISP's peering policy, most times there is a form of payment. This is not new, unique, or a change from how it has worked for a very long time.

      • We don't have to take the word of Free Press. It seemed apparent from the Guardian article that the study wasn't coming from MLabs; it would come from BattlefortheNet. My assumption was they had an advance copy.

        But I don’t need a study to look at the raw data I found myself here: http://www.measurementlab.net/observatory#tab=explore&metric=download_throughput&metro=NewYork&combos=lga02_cablevision,lga01_cablevision&time=06012015-07012015&timeView=daily&

        From there you can plug in different ISP combinations and see results over several months that do not contradict what the Guardian article or the group said.
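        As an illustration, a minimal Python sketch of that kind of comparison (the file name and column names here are my own assumptions, since the observatory's export format may differ) would compute the median download throughput each ISP's customers saw:

            import csv
            from collections import defaultdict
            from statistics import median

            # Hypothetical CSV export with columns: date, isp, download_mbps
            samples = defaultdict(list)
            with open("mlab_observatory_export.csv", newline="") as f:
                for row in csv.DictReader(f):
                    samples[row["isp"]].append(float(row["download_mbps"]))

            # A large, persistent gap between ISPs in the same metro is the
            # pattern described above.
            for isp, values in sorted(samples.items()):
                print(f"{isp}: median {median(values):.1f} Mbps "
                      f"({len(values)} daily samples)")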

        As you concede, interconnection issues have been with us a long time, but these issues are becoming more visible and are impacting a growing number of consumers. From the perspective of an ISP (which seems to be closer to what you represent based on your self-referenced position as the “voice of the industry”), there may be valid issues of fairness at an executive level over who pays for peering and who doesn’t. From the perspective of many consumers, who I represent, the issue is how well an ISP performs in return for the large amount of money we pay them.

        So what seems fair for Comcast or AT&T or Time Warner Cable management vs. GTT is very different from what is fair for a customer of Comcast, AT&T or Time Warner frustrated with a non-performing website. I don’t think most consumers will side with AT&T or Comcast if they make a conscious decision to tolerate degraded service for their customers unless and until a content provider ponies up a check. Beyond that, an ISP is allowing other websites sharing that same connection to be little more than collateral damage, but they won’t do a thing until they get that check in the mail.

        The information MLab is distributing does not need MLab to call any specific provider out. A consumer can look at the data and see that, if they live in Atlanta, AT&T U-verse is likely to perform far worse over certain connections than Comcast.

        AT&T may have a righteous business case claiming it isn’t going to break its peering policy to correct that, but an end consumer would not be wrong to decide that Comcast is a far better choice because it does not experience the degradation AT&T allows its own customers to experience.

        As long as end subscribers have few choices, they will be in the unenviable position of waiting to see who pays Comcast or Time Warner Cable first to “correct” the problem. In a more competitive market, I suspect the consumer experience will count much more to a provider than the proportionately tiny amount of investment required to manage Internet traffic. When providers shift their priorities towards customers, these kinds of disputes will become extremely rare.

        • danrayburn

          You say there "may be valid issues of fairness at an executive level over who pays for peering and who doesn't." It's not about "fairness"; it's business. Some pay based on various business terms, others might not. There is no law against it, and no rule that prohibits it.

          "But an end consumer would not be wrong to decide that Comcast is a far better choice because it does not experience the degradation AT&T allows its own customers to experience." Yes, I 100% agree with you. But they are not "slowing down" the last-mile network when that happens, which is what folks like The Guardian have implied.

          And keep in mind that the measurement methodology the Free Press uses against ISPs was rejected by the FCC over accuracy concerns.

          Also, you mention that "these kinds of disputes will become extremely rare" – they already are rare. Have you seen any dispute involving Apple, Google, Microsoft, Twitch, eBay, Amazon, etc. with any ISPs? Nope. Because they all understand how interconnects work and, so far, have not objected to the model. It wasn't until Netflix came along that most people even knew how any of this worked.

          • But what AT&T and Comcast are doing is double-dipping consumers AND content providers. Google, for example, already pays for connections. The same goes for Netflix, Level 3, and Cogent.
            I as a consumer already pay A LOT: $71 a month to AT&T for a crap 6-meg DSL connection (in reality my speed never goes above 4.8 megs) with a 150-gigabyte cap in place and overage charges. Please explain to me how this business model is anything other than extortion.
            As a consumer looking at your responses defending the ISPs who do this, you seem to be perfectly OK with these already profitable telecom and cable companies nickel-and-diming hard-working Americans out of their own money.

            People like you are the problem.

            It is no wonder many cities and towns unsatisfied with the status quo are wiring themselves and turning towards municipal broadband.

          • danrayburn

            "already profitable telecom and cable companies" – the companies who are paying the interconnection fees, like Apple, Google, Microsoft, Facebook, etc., are also "profitable", and in many cases more so than the ISPs. So your argument that this should not happen just because one company is "profitable" isn't valid.

            And don't confuse Cogent with Netflix. Cogent sells transit services to content owners like Netflix, but then doesn't have enough capacity with the ISPs it connects to, because it won't pay for it. Cogent takes Netflix's and other customers' money and has a responsibility to make sure it has enough capacity with the ISPs, free or paid.

            As consumers, we pay ISPs to get a certain level of connection to the Internet via the last mile the ISPs operate. We do not pay for any kind of "guarantee" to be able to reach a certain website or video service with a certain level of quality. I get that many consumers think that is what they are paying for, but it isn't.

          • There has never been a last mile guarantee for any broadband provider unless you have a Service Level Agreement with them, and if you are a residential customer, good luck.

            Remember, providers market speeds "up to" a certain level. Where they get in trouble is when a chasm opens between what they market and what they deliver, and that is where the interconnection battle is eventually going to attract regulatory scrutiny. We can see a growing divide between what they are easily capable of providing and what they are willing to provide.

            Because we have experienced an Internet that handled traffic growth reasonably well up until these high-profile interconnection disputes, and because these congestion issues disappear overnight when money changes hands, it will not take long for regulators to suspect there is funny business going on here.

            ISPs wanted paid prioritization and other pay-for-QoS programs to market to high-traffic content providers, charging them extra to assure a good end-customer experience. Consumers and the FCC said no, and now we'll wait and see what the courts say. But ISPs have effectively found another way to sell the same thing.

            If this issue evolves into one perceived to be only about technical fairness and there is substantial evidence some content providers are purposely degrading performance based on poor distribution choices in a quest for a free ride, then the status quo will probably win out.

            However, if content providers can demonstrate they offered an ISP a free or low-cost solution to resolve traffic bottlenecks for the benefit of its customers, and the ISP rejected it and demanded payment instead, it is a very safe bet regulators will see ISPs as attempting an end run around Net Neutrality and will step in. Netflix's Open Connect Initiative gives Netflix a very strong hand to build a case that ISPs are simply looking for a payday, not a technical resolution.

          • This statement: “ISPs wanted paid prioritization” is a lie. They’ve never offered it and have no interest in offering it. It would be good if they did, but it’s not in their interest.

          • In what way would paid prioritization be good??? Explain these interconnection issues, where deals suddenly get struck and then the connection magically improves. Money talks. That's how.

          • It would be good to give voice and video calls higher priority than Netflix streams. It's an engineering thing; a sketch of how that prioritization is signaled follows.
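            For what it's worth, here is a minimal Python sketch of how an application signals that kind of priority (the peer address is hypothetical, and whether any router honors the marking is entirely up to the network operators along the path):

                import socket

                # DSCP 46 (Expedited Forwarding) is the conventional marking for
                # latency-sensitive voice traffic. DSCP occupies the top six bits
                # of the IP TOS byte, so the byte value is 46 << 2 = 0xB8.
                DSCP_EF = 46

                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

                # Routers that implement QoS can queue these packets ahead of bulk
                # traffic; routers that ignore DSCP simply forward them normally.
                sock.sendto(b"voice-frame", ("192.0.2.10", 5004))  # hypothetical peer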

          • On cellular networks, voice always has priority over everything else. But it's supposed to be that way, because cell phones run on them, and I believe federal law requires it.

          • I believe the Easter Bunny is made of chocolate, but others disagree. Before LTE, cellphones segregated voice and data, using circuit switching for voice and packet switching for data. With the advent of Voice over LTE, it’s all packets and some have to be prioritized over others.

            Would you like to discuss how this differs from voice over cable and voice over DSL?

          • "As consumers, we pay ISPs to get a certain level of connection to the Internet via the last mile the ISPs operate. We do not pay for any kind of 'guarantee' to be able to reach a certain website or video service with a certain level of quality. I get that many consumers think that is what they are paying for, but it isn't."

            And right there, you just proved my point: this is about money and against net neutrality or any kind of regulatory oversight to make sure that all traffic is treated equally, because AT&T and Comcast would love to be toll-booth gatekeepers and charge for access to certain websites. This is why they want metered broadband on fixed wireline and are treating it as if it were a scarce resource, which it is not. This is also why Internet access is very expensive in the U.S. and why they refuse to invest in their networks.

            "The companies who are paying the interconnection fees, like Apple, Google, Microsoft, Facebook, etc., are also 'profitable', and in many cases more so than the ISPs. So your argument that this should not happen just because one company is 'profitable' isn't valid."

            Actually, it is, because the companies you listed above (with the exception of Google, which dabbles in it) are NOT ISPs. I am not paying them to access the Internet. Also, this isn't just one ISP that is profitable. It's MANY.

            AT&T and Comcast however are the largest.

            In my area of residence, it's either AT&T or Comcast. I don't like AT&T, but I despise Comcast. It is widely known that for years Comcast would purposely degrade Internet service for competing services they don't like, such as Netflix or BitTorrent.
            Compare them to AT&T, to which I've been a subscriber for 7 years. While the service is slower because it is DSL, it has been a consistent, stable connection and nearly trouble-free. I have never had an issue with them.
            Comcast hates any service that competes with their on-demand offering. AT&T, however, is honest enough to admit they don't care what runs on their network so long as they get paid.

            You also fail to mention that part of the reason this issue has become front and center is cord cutting.
            The fact that these two mega-giants are striking peering-point connection deals AND engaging in metered broadband in certain areas where they are a duopoly or monopoly, because of people like me who don't want cable TV and have turned to streaming, should alarm anyone.

            Ripping off the consumer is not good business.

  • Robert M. Enger

    From the vantage point of FiOS in the Los Angeles suburbs, it appears that GTT (or the VZ-GTT interconnection, or the MLAB server inside GTT) has some sort of problem. When four of the five internethealthtest results are over 500Mbps, the result to GTT is below 50Mbps. That's more than a factor of 10 worse. During evening peak periods, when tests to some test sites slump to 200Mbps, the test to GTT slumps to as low as 2.9Mbps (approaching a factor-of-100 difference from even the lowest of the other four test sites).

    BTW, dragging one's feet on upgrading interconnection circuit capacity is a clever method for last-mile providers to extort tribute payments from content originators (and their CDN and/or transit providers). The concept of inter-carrier interconnection is difficult for regulators and legislators to understand, so they are easily misled by industry-funded think tanks and public-relations-company sock-puppets.

    In many high-profile examples, content is being handed to the last-mile provider in the SAME CITY where it is being consumed. The last-mile providers are impeding traffic delivery from one side of the city to the other. If last-mile customers can't receive data at their contracted rate even when it originates in the same city, what exactly are they paying their ISP for each month?

  • Seems to me that the real story here is that GTT – a very, very small-time transit network – has interconnection issues with several ISPs in several cities. These are probably GTT's growing pains and aren't really interesting to many people. Free Press, OTI, other comrades, and a few under-informed bloggers and journalists want to turn GTT's problems around so they can induce the FCC to impose free-peering conditions on AT&T. This is one of the most flagrantly dishonest campaigns I've seen. See: http://www.multichannel.com/news/technology/oti-pushes-fcc-interconnection-conditions-att-directv/391689