Scrutinizing Comcast's Apologists
EFF is continuing its research into Comcast's use of forged RST packets to interfere with its customers' BitTorrent connections. (Apparently the FCC is investigating, as well.) While Comcast has remained conspicuously silent about the technical details of its activities, a few networking engineers have tried to defend Comcast by proposing technical justifications for its interference activities.
One of the most energetic of these pundits is Richard Bennett, who has argued that Comcast deserves a "pat on the back and a gold star", not criticism, for injecting spoofed RST packets into their users' traffic. In this post we're going to examine and rebut his arguments...
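For readers unfamiliar with the mechanism: a forged RST is just an ordinary TCP segment with the reset flag set. A device in the middle that has observed a connection's ports and sequence numbers can construct one and send it to either endpoint, which will then abort the connection as if the other side had hung up. A minimal illustration of the header (our own sketch, not Comcast's or Sandvine's actual implementation; the checksum and IP header are omitted):

```python
import struct

def forged_rst_header(src_port, dst_port, seq):
    """Build the 20-byte TCP header of a bare RST segment.
    Illustrative only: checksum left at zero, no IP header."""
    RST_FLAG = 0x04
    data_offset = 5 << 4       # 5 32-bit words, no TCP options
    return struct.pack(
        "!HHIIBBHHH",
        src_port, dst_port,
        seq,                   # must fall inside the receiver's window
        0,                     # ack number (unused for a bare RST)
        data_offset,
        RST_FLAG,              # flags byte: only RST set
        0,                     # advertised window
        0,                     # checksum (left unset in this sketch)
        0,                     # urgent pointer
    )
```

The crucial point is that nothing in TCP lets the recipient distinguish such a segment from a genuine reset sent by its peer, which is why injected RSTs cause applications to fail mysteriously.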
Aside from making some strange accusations against us,1 Bennett has offered a series of possible explanations to justify Comcast's activities. They are all arguments of the form:
Because of complicated technical reason x, Comcast needs to spoof RST packets, or use other means that are almost as drastic, to jam broad classes of TCP connections in order to prevent BitTorrent from overloading cable modem networks.
Near as we can tell, these justifications are all based on speculation about cable networks in general; Bennett hasn't indicated any specific inside information relating to Comcast's situation. But because Comcast is staying silent about their decisions and motivations, Bennett's speculations may be the only technical explanations that software developers and Comcast subscribers are able to consider.
We've studied Bennett's explanations, and we're not persuaded.
Firstly, even if protocol-specific RST packet injection were the only way for Comcast to guarantee adequate service to its subscribers, that wouldn't justify misleading or stone-walling subscribers when they asked about it, as Comcast did for months until research by EFF and AP rendered the denials absurd.2
But let's back up for a moment and take on the technical bits — neither Bennett, nor Comcast, nor anyone else, has demonstrated that RST forgery is necessary for network management.
Bennett's first theory is built on an interesting research paper by Jim Martin, entitled The Interaction Between the DOCSIS 1.1/2.0 MAC Protocol and TCP Application Performance. That paper identifies, and uses simulations to measure, a serious design flaw in the DOCSIS 1.1/2.0 protocol that the cable modems on each street use to talk to each other and to the Internet.3 Martin also points out that a particular sort of denial-of-service attack could exploit these flaws and reduce the throughput of a cable modem network by a factor of two or more.
Bennett argues that BitTorrent essentially functions as a version of this denial-of-service attack ("In effect, several BT streams in the DOCSIS return path mimics a DoS attack to non-BT users. That's not cool."), and that rate limiting would be ineffective at preventing it. As he puts it,
"So the only way to make this [DOCSIS] protocol stable is to actively limit the amount of data queued at the cable modem for upstream delivery, and only way to do that for Torrent is to stifle connections at the TCP level. I've tried to scheme up a better way to do this, and there isn't one." 4
But Martin's work doesn't support Bennett's conclusion.
To see why, let us consider four kinds of TCP packets: incoming SYN packets, outgoing SYN packets, incoming data packets, and outgoing data packets. Almost all of the packets a BitTorrent client sends and receives are data packets, and Bennett's assertion above is wrong about those. The number of data packets per second can easily be limited by the ISP dropping some when they arrive or depart too fast. The ISP needs to maintain just two counters for each IP address, tracking how many packets arrive and leave each second, and drop the excess (the limits should depend on how busy the street-level network is). TCP's congestion control algorithms will ensure that the computers at each end get the message and respond accordingly. Counting packets per second for each IP is not rocket science.
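A back-of-envelope sketch of that counting scheme (our own illustration; the class name, threshold, and one-second window are assumptions, not a description of any deployed system):

```python
import time
from collections import defaultdict

class PerIPRateLimiter:
    """Sketch of per-IP packet-rate limiting: count packets per second
    for each address and drop the excess. In practice the threshold
    would be tuned to the current load on the shared DOCSIS link."""

    def __init__(self, max_pps):
        self.max_pps = max_pps          # packets/second allowed per IP
        self.window_start = time.monotonic()
        self.counts = defaultdict(int)  # ip -> packets seen this second

    def allow(self, ip):
        now = time.monotonic()
        if now - self.window_start >= 1.0:
            # New one-second window: reset all counters.
            self.counts.clear()
            self.window_start = now
        self.counts[ip] += 1
        # Drop (return False) once this IP exceeds its budget; TCP's
        # congestion control at the endpoints reacts to the loss.
        return self.counts[ip] <= self.max_pps
```

A separate instance (or a second counter per IP) would handle the other direction; the point is simply that this requires per-address state, not per-connection state.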
So what about the SYN packets? Those can't be rate limited by the ISP, in the sense of reducing the number that are sent each second, but the ISP has the alternative of dropping them before they get to the DOCSIS cable modem link.5 ISPs might want to drop incoming SYN packets under certain strange circumstances, but they only cause a problem if there are hundreds or thousands of them arriving per second in a neat rhythm, and for the problem to be noticeable would require absurd numbers of machines (on the order of 100,000, give or take an order of magnitude) to all be trying to download files from one cable user in a short period of time. That would be a freakish storm. We were pretty skeptical that this sort of thing would ever happen, let alone happen regularly.

When we made this argument in an email to Bennett, he responded with a second, even more elaborate theory. He speculated that a problem might arise as a result of flapping in the lists that BitTorrent trackers maintain of the fastest seeds to download particular files from:
"a shorter-lived version of [Martin's DOS attack] is possible simply through normal BitTorrent tracker updating. If a set of trackers identify a set of stations on the same loop that offered good upload times in the last cycle, they move to the top of the list and can expect immediate connect requests from a number of leechers. This condition will persist for 30 minutes or so, depending on the degree of synchronization between the trackers."
However, as the BitTorrent protocol and Bram Cohen's paper both make clear, BitTorrent trackers do not ever "identify [...] stations [...] that offered good upload times"; they do not provide any upload rate statistics to BitTorrent clients.6 Finding fast peers is the responsibility of individual BitTorrent clients, not servers, and hence there is no reason that several servers would simultaneously recommend a particular peer. On the other hand, if freak incoming connection-request storms did actually occur for some reason, ISPs could deploy a perfectly non-discriminatory and civilized solution to the problem: drop a proportion of the incoming SYNs if they're arriving at a rate that will cause problems for DOCSIS.7
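Such a non-discriminatory drop policy could be as simple as a drop probability that rises with the measured SYN rate, in the spirit of random early detection. A sketch (the threshold values are purely illustrative, not a recommendation):

```python
import random

def syn_drop_probability(syn_rate, safe_rate, max_rate):
    """Probability of dropping an incoming SYN, rising linearly from
    0 at safe_rate to 1 at max_rate (illustrative thresholds)."""
    if syn_rate <= safe_rate:
        return 0.0
    if syn_rate >= max_rate:
        return 1.0
    return (syn_rate - safe_rate) / (max_rate - safe_rate)

def admit_syn(syn_rate, safe_rate=100, max_rate=1000, rng=random.random):
    """Admit or drop one SYN given the current measured SYN rate
    toward the affected host."""
    return rng() >= syn_drop_probability(syn_rate, safe_rate, max_rate)
```

Because the dropping is probabilistic and applied to every connection attempt alike, no protocol or peer is singled out, and clients simply retransmit their SYNs with backoff.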
Bennett responds by asserting that this kind of rate limiting is too expensive because it requires the SYN flag in each packet to be inspected synchronously (before packets are delivered), whereas interdiction with RST packets can be performed asynchronously. Neither the premise nor the deduction seems very persuasive. If the problem is machines receiving an unusually high number of SYN packets, there is nothing to prevent the detection from being performed asynchronously, with slower router rules switched on when and only when a SYN flood is occurring.
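That two-stage arrangement might look like the following sketch: cheap asynchronous monitoring over sampled statistics, with per-packet SYN inspection enabled only for hosts that are actually being flooded (all names and thresholds here are our own illustration):

```python
class AdaptiveSynFilter:
    """Sketch of two-stage flood handling: monitor SYN rates off the
    fast path, and switch on the (more expensive) per-packet SYN
    filtering only for hosts where a flood is detected."""

    def __init__(self, flood_threshold):
        self.flood_threshold = flood_threshold  # SYNs/sec counted as a flood
        self.filtering = set()                  # hosts with strict filtering on

    def observe(self, per_host_syn_rates):
        # Runs asynchronously, e.g. from sampled flow statistics.
        for host, rate in per_host_syn_rates.items():
            if rate > self.flood_threshold:
                self.filtering.add(host)
            else:
                self.filtering.discard(host)

    def needs_inspection(self, dst_host):
        # Fast-path check: only packets toward flooded hosts pay the
        # cost of synchronous SYN-flag inspection.
        return dst_host in self.filtering
```

In the common case, where no flood is under way, the fast path pays only a set-membership check per packet.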
In the end, Bennett hasn't managed to show that incoming SYN packets generated by BitTorrent are some kind of "perfect storm" that leaves Comcast with no alternative to protocol-specific RST forgery. In fact, when pressed, Bennett himself falls back to another justification: maybe it's cheaper.
"The Comcast alternative [to spoof RSTs rather than dropping excess packets] is to asynchronously monitor traffic and destroy connections after the fact. It's not as efficient as stateful packet inspection, but the gear to do it is a lot cheaper. Given their Terms of Service, which ban servers on their network, it's sensible."
Sensible for whom? After all, using spoofed RSTs causes some users' applications, and some developers' code, to break mysteriously. So it's not "sensible" for Comcast customers, nor for those interested in preserving the Internet as a fertile ground for standards-based innovation. And it isn't even clear that it's cheaper to do RST forgery (engineers from other telecommunications firms have told us, "we can't work out why Comcast spent so much money on Sandvine systems that break their own network"). But even if it might be cheaper to provide an inferior and subtly broken Internet experience to your customers (especially if you don't tell them about what you're doing), we don't suppose that's a good argument in favor of patting anyone on the back for doing it.
- 1. In particular, Bennett inexplicably accused us of following the "religious" principle of "flow rate fairness", which holds that all streams of data (such as TCP connections) should receive equal treatment on the network. This claim is simply untrue. We agree with the paper by Bob Briscoe that he cites, which argues that Internet congestion control needs to be oriented around economic entities, and not flows. In the case of a cable network, we had already argued that ISPs could do dynamic traffic shaping on the basis of the aggregated rate of traffic to/from each IP (in contrast, flow rate fairness would call for treating each connection from a user who has 40 TCP connections active in the same way as the single connection of a user who only has one active). These methods are actually examples of the weighted proportional fairness principle that Briscoe (and presumably Bennett) are advocating!
- 2. Disclosure is not only necessary for Comcast's customers, but for Internet software and protocol developers. It enables those programmers to understand any unusual congestion situations that involve their software and develop appropriate responses to them. P2P software developers have a clear incentive to do so, because congestion inefficiencies are just as likely to affect P2P software users as they are to affect anyone else.
- 3. The problem is that sporadic packets sent by cable modems can overload a contention-based request-to-send mechanism. The outgoing TCP acknowledgement (ACK) packets that cable modems have to send in response when downloading data using TCP protocols (eg, web downloads, FTP downloads, or BitTorrent downloads) cause exactly this kind of problem. Continuous uploads cause less trouble (because DOCSIS has a piggyback mechanism to avoid contention and collisions when IPs are sending a continuous flow of data). This is a very brief summary, and we advise interested readers to read all of Martin's paper. We point out to casual observers that the result in the abstract ("we show that downstream rate control is not sufficient to avoid the vulnerability [in DOCSIS]"), which no doubt contributed to Bennett's misunderstanding of the situation, applies only to static rate control mechanisms.
- 4. See http://www.circleid.com/posts/711281_praise_relatively_dumb_pipes/#3577 ; http://blogs.zdnet.com/Ou/?p=852&page=3 .
- 5. We aren't going to go into detail about outgoing SYN packets, since nobody is alleging that they're the problem. (It should be noted, however, that Comcast's use of RST forgery will actually increase the number of outgoing SYN packets from Gnutella clients by lengthening the period during which they are searching for peers.)
- 6. "All logistical problems of file downloading," says Cohen, "are handled in the interactions between peers. Some information about upload and download rates is sent to the tracker, but that's just for statistics gathering. The tracker's responsibilities are strictly limited to helping peers find each other. [... T]he standard tracker algorithm is to return a random list of peers."
- 7. There may be other solutions too, like batching the SYNs up and sending them in groups, so that the recipient computer can ACK each group using DOCSIS piggybacking.
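As a rough intuition for the contention flaw summarized in footnote 3: when many modems pick request slots at random in the same contention window, the chance that any one request gets through without a collision falls quickly as the number of active modems grows. A slotted-ALOHA-style back-of-envelope model (our simplification, not Martin's simulation):

```python
def success_probability(n_modems, n_slots):
    """Probability that a given modem's contention request succeeds,
    i.e. that none of the other (n_modems - 1) modems picked the same
    request slot. A rough slotted-ALOHA-style model, not Martin's
    full DOCSIS simulation."""
    return (1 - 1 / n_slots) ** (n_modems - 1)
```

With 8 request slots, a lone modem always succeeds, but nine simultaneously contending modems each succeed only about a third of the time, which is why many sporadic senders (such as modems emitting TCP ACKs) can degrade the whole upstream channel.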