Comcast Unveils Its New Traffic Management Architecture
Late on Friday night, Comcast filed an overview of its new traffic management arrangements with the FCC. This is the long-term replacement for its controversial practice of using forged TCP reset packets to limit the use of peer-to-peer protocols.
The new system appears to be a reasonable attempt at sharing limited bandwidth amongst groups of users. Unlike TCP RST spoofing, it doesn't explicitly discriminate against some applications, and it doesn't threaten protocol developers with interoperability problems and uncertainty about network behavior.
Comcast's objective here is still largely to prioritize non-P2P traffic above P2P traffic. But the criterion they use is the amount of data a cable modem sends during each 15-minute period, which is a much fairer rule than examining the traffic's protocol. The way deprioritization works is simple: high-priority machines get to send data, and if there is any transmission capacity left over, the low-priority machines get a share of that.
EFF is proud that our work helped to expose Comcast's misadventures in network management last year, and we're pleased to see Comcast returning to congestion management practices that are transparently disclosed and avoid protocol discrimination.
The new traffic management setup should not be confused with the 250 GB/month cap which Comcast announced last month; the two will exist side by side.
Comcast decided not to use deliberate packet-dropping techniques like random early detection (RED) to implement dynamic per-user traffic shaping. We can see three possible reasons for choosing the two-tiered priority route instead of RED traffic shaping:
- it requires less tuning for the peculiarities of DOCSIS networks;
- less of the network management infrastructure needs to be in the routing path, so crashes are less likely to stop traffic altogether; and
- almost all of the burden of congestion is placed on users sending lots of data.1
How do users become "low priority"?
[Correction 2008-09-24: this section was rewritten to be clearer and to include a few extra details]
No users are marked as low priority unless their cable network is in a near-congested state. "Near congestion" is defined as a 15-minute window during which more than 80% of the download capacity or more than 70% of the upload capacity of that neighborhood cable loop is being used.
Once the network is near congestion, any cable modem that uses more than 70% of its allowed bandwidth in the congested direction over a 15-minute period will be marked as "low priority".
In practice, it's much more likely that this will happen because of uploads. If you're seeding large files on a P2P network, or uploading large amounts of data with tools like SCP or rsync, or making network backups, and those activities last for more than 15 minutes, there is a fair chance that your cable modem will be de-prioritized.
Once a cable modem has been marked as low priority, it will remain low priority until its usage falls below 50% for a 15-minute period.
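The marking rules above amount to a simple per-modem state machine, evaluated once per 15-minute window. The sketch below is our own illustration of those rules (all names and the function signature are ours, not Comcast's); it assumes utilization is expressed as a fraction of capacity.

```python
# Hypothetical sketch of Comcast's de-prioritization rules, as described in
# its FCC filing. All names here are illustrative; thresholds are applied
# per 15-minute measurement window.

NEAR_CONGESTION_UP = 0.70    # loop is "near congestion" above 70% upstream use
NEAR_CONGESTION_DOWN = 0.80  # ...or above 80% downstream use
MARK_THRESHOLD = 0.70        # modem used more than 70% of its allowed bandwidth
CLEAR_THRESHOLD = 0.50       # modem is restored once usage falls below 50%

def update_priority(is_low_priority, loop_utilization, modem_utilization,
                    upstream=True):
    """Return the modem's low-priority flag after one 15-minute window.

    loop_utilization:  fraction of the shared neighborhood loop's capacity used.
    modem_utilization: fraction of this modem's allowed bandwidth it used.
    """
    near_congested = loop_utilization > (
        NEAR_CONGESTION_UP if upstream else NEAR_CONGESTION_DOWN)
    if not is_low_priority:
        # A modem is only marked while the loop is near congestion.
        return near_congested and modem_utilization > MARK_THRESHOLD
    # Once marked, the modem stays low priority until its own usage
    # drops below 50% for a window, regardless of loop state.
    return modem_utilization >= CLEAR_THRESHOLD
```

Note the asymmetry: being marked requires both loop-level congestion and heavy individual use, but being restored depends only on the modem's own usage falling off.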
What effect will low priority have?
During any 15-minute period when the network is busy, if your computer has been marked as low priority, the cable modem termination system (CMTS) will always allow the high-priority machines to send their packets before yours. In other words, the low-priority systems get to send when and only when there is a gap in the high-priority traffic.
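Strict two-tier scheduling of this kind is easy to sketch. The toy function below (our own illustration, not Comcast's implementation) drains two queues: a low-priority packet is transmitted only when the high-priority queue is empty at that instant.

```python
from collections import deque

def transmit(high, low, slots):
    """Drain up to `slots` packets in strict priority order.

    high, low: deques of queued packets. High-priority packets always go
    first; low-priority packets are sent only in the leftover gaps.
    """
    sent = []
    for _ in range(slots):
        if high:
            sent.append(high.popleft())
        elif low:
            sent.append(low.popleft())
        else:
            break  # nothing left to send
    return sent
```

For example, with `high = deque(["h1", "h2"])` and `low = deque(["l1", "l2"])`, `transmit(high, low, 3)` returns `["h1", "h2", "l1"]`: the low-priority queue only gets the slot the high-priority queue didn't need. This also shows why low-priority traffic isn't blocked outright, merely deferred.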
Only real-world experimentation will give us quantitative measures of the impact of de-prioritization. It's likely that large background data transfers (like the ones that trigger de-prioritization in the first place) will continue to work, but a little more slowly than before.
The impact of low priority on other kinds of applications, especially interactive, latency-sensitive ones like network games, VoIP, and ssh sessions, is likely to be more severe. If you happen to be using these kinds of programs while also consuming 70% or more of your upload capacity, you can expect to suffer a bit!
In an ideal world, there would be a way to de-prioritize only the application doing the uploading, without deprioritizing everything else. In practice that's quite complicated.2 For the time being, if users run into situations where one piece of software causes deprioritization that hampers the operation of another, it will be up to them to find a solution. P2P applications often have upload and download bandwidth controls that should be able to prevent them from causing deprioritization on their own. For other kinds of software, you may need to look at QoS settings in your operating system or network router. Or, most simply, don't expect VoIP to work well while you're running a network backup.
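Since the 70% marking threshold is the trigger, the practical workaround is to cap sustained uploads a little below it. The helper below is purely illustrative arithmetic (the function name and margin parameter are ours): given a provisioned upstream rate, it computes a cap with some headroom under the 70% line.

```python
def safe_upload_cap(provisioned_kbps, margin=0.05):
    """Upload cap (kbps) that stays clear of the 70% marking threshold.

    margin is extra headroom below the 70% line; e.g. for a 1,000 kbps
    upstream tier, the default 5% margin yields a cap of about 650 kbps.
    """
    return provisioned_kbps * (0.70 - margin)
```

A P2P client's upload limiter set to this value should keep the modem from ever crossing the marking threshold, at the cost of slightly slower seeding.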
- 1. In RED, each user bears a portion of the congestion burden equal to the amount of traffic they send/receive; in the two tiered model, the congestion burden would normally fall on low priority users only.
- 2. The Internet Protocol theoretically contains a field, "Type of Service" (ToS), for conveying this kind of request from the operating system. Unfortunately, devices like consumer wireless routers may make unpredictable modifications to the field. Even if the operating system can set the ToS field reliably, there must also be a way to tell the CMTS what it is set to before the cable modem sends each packet. A solution may well require some new standards or protocols for moving this information around. Comcast has said that it's working on this at the IETF, and we hope that will produce results in the future.