In light of the recent spate of high-profile hacking campaigns, and the overall poor state of security on the internet, NextGov.com reports that parts of the US government are advocating for a separate, “secure” internet. The idea calls for segmenting “critical” networks (not yet fully defined, but presumably including infrastructure and financial systems) and applying two security mechanisms to these networks: (1) increased deep packet inspection (DPI) to detect and prevent intrusions and malicious data; and (2) strong authentication, at least for clients. The trouble is that this “.secure” internet doesn’t make much technical or economic sense: the security mechanisms are simply not powerful or cost-effective enough to warrant re-engineering an internet.
Whether the idea is to apply different security policies to sites using a special domain name like “.secure” (and possibly the existing .edu and .gov domains), or to create a parallel internet infrastructure, is not yet clear. (Although government representatives say the idea is not to create a parallel infrastructure, that is the most “secure” form of the proposal, and I therefore expect it to incorporate elements of new, separate infrastructure for the most important networks as it matures.)
Intrusion Detection and Prevention
From the NextGov article:
Today, searches of the .gov domain are conducted by the Einstein program, an intrusion prevention and detection system under the direction of the Homeland Security Department that monitors only federal traffic for signs of unauthorized access. It alerts response teams to potential attacks and automatically blocks penetration in some cases.
The .secure network would apparently involve an increase in the use of intrusion detection and prevention systems (IDS and IPS). It’s not clear why increasing the use of such systems would require new legislation or even a special new network. Network operators can, and do, deploy such systems now on their own networks to protect their own sites. (And as we know, the government has no qualms about using DPI to surveil the entire country without a warrant.)
Another problem is that IDS and IPS have limited applicability, for several reasons.
There is only a very weak global definition of “malicious” network traffic. Strong security assertions tend to be local to a particular application or network, rather than global to the network as a whole or to all applications. A network request that would destroy site A might be merely meaningless to site B, and possibly even normal functionality for site C. Some traffic is widely agreed to be bad, such as the binary executable for a piece of malware. But even then, security researchers (including those working on the .secure network!) need to download malware to do their jobs.
It is very easy, sometimes even trivial, to encode malicious data in such a way that an IDS/IPS won’t recognize it as malicious, but will still have its evil effect on the target system. (Newsham and Ptacek wrote a pioneering paper on evading IDS/IPS. Hackers have refined the techniques since then, and IDS/IPS vendors have refined their counter-measures. But like signature-based anti-virus software, fundamentally this is a cat-and-mouse game that IDS/IPS systems cannot consistently or conclusively win against a motivated attacker.)
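To make the evasion point concrete, here is a toy sketch in Python of a byte-matching filter and a trivially encoded bypass. The signature and requests are invented for illustration; real evasion techniques (fragmentation, timing tricks, protocol ambiguities) are far more sophisticated than this.

```python
# Toy sketch: a naive signature-based filter versus a percent-encoded bypass.
# The signature and the requests below are made up for illustration only.
from urllib.parse import unquote

SIGNATURE = "/etc/passwd"

def ids_blocks(request: str) -> bool:
    """Block any request whose raw text contains the signature."""
    return SIGNATURE in request

plain = "GET /view?file=/etc/passwd HTTP/1.1"
encoded = "GET /view?file=%2Fetc%2Fpasswd HTTP/1.1"  # same request, percent-encoded

print(ids_blocks(plain))    # True: the obvious attack is caught
print(ids_blocks(encoded))  # False: the encoded twin sails through
print(unquote(encoded))     # ...but the web server decodes it right back
```

The filter and the endpoint interpret the same bytes differently, and the attacker only needs to find one such disagreement; the defender has to anticipate all of them.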
IDS/IPS, by their nature, tend to have very high equipment costs because they store and crunch huge amounts of data. Even more expensive are the salaries of the teams of network security experts who have to analyze all that data. As a result, IDS/IPS tends to get defined down; network engineers tend to stop saying “intrusion prevention system” and start saying “post-breach forensic data source”. Obviously, having a way to do forensics is valuable in itself — but reliable, automatic intrusion prevention remains a dream.
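Some back-of-the-envelope arithmetic shows how fast the data piles up. The link speed and utilization below are assumptions chosen for the example, not measurements of any real network:

```python
# Rough arithmetic on full-packet capture volume; every number is illustrative.
link_gbps = 10            # one 10 Gbps link feeding the IDS/IPS
utilization = 0.4         # assume 40% average load
seconds_per_day = 86_400

bytes_per_day = link_gbps * 1e9 / 8 * utilization * seconds_per_day
print(f"{bytes_per_day / 1e12:.0f} TB/day")  # ~43 TB/day to store and analyze
```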
IDS/IPS can perform a valuable security function when used correctly — with special care for cost-effectiveness. However, even in the best scenario, they are not powerful or effective enough to warrant fragmenting the internet.
The Fallacy of Authentication
The other component of .secure would be “strong” authentication. In particular, the idea is that there would be no anonymity, and presumably no pseudonymity: a relying party (such as a bank web server or bank client) would be able to trust that their interlocutor was the “true” bank or client. This would supposedly deter hacking, or at least provide a more reliable forensic trail in post-breach investigations.
But what does “authentication” mean on the internet? People often (implicitly) take it to mean something like “Through this web browser I am talking to the true Wells Fargo Bank, and Wells knows that I am the true Chris Palmer.” However, when one computer presents credentials (such as a username and password pair or a cryptographic certificate) to another, the link between software data structure and real-world entity (like a person or a business) is weak. It is no stronger than the person’s or business’ ability to ensure that the computers on both ends are operating correctly and are not compromised, and that the channel between them is secure against network threats. From painful experience we know that our operating systems suffer from numerous design and implementation flaws, and that malware and system hacks are all-too-prevalent.
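To see how little even a successful credential check proves, consider this sketch using Python’s standard ssl module (the bank hostname is only an example; any HTTPS site works):

```python
# A textbook TLS certificate check. Even when every line succeeds, it proves
# only that the far end holds a private key some certificate authority vouched
# for under this name. It says nothing about whether either machine is free
# of malware, or whether a particular human is at the other end.
import socket
import ssl

hostname = "www.wellsfargo.com"      # example host, not an endorsement

ctx = ssl.create_default_context()   # verifies the chain and the hostname
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.getpeercert()["subject"])
```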
Unless the .secure network runs software far more advanced than the software that is currently available, malware will run as rampant, credentials will be as oft-compromised, and services and users will be as impersonated as on the real internet.
It’s not for EFF and other internet advocates to “raise privacy and speech concerns” about .secure. It’s for the proponents of this idea to show why it makes any technical or economic sense at all. For economic reasons (such as Metcalfe’s Law and economies of scale), networks tend to converge, not diverge. (We will probably use the same computers (and wires!) to connect to the real internet as to the .secure internet.) Not all participants in the .secure network will have the same incentives or the same ability to bear the opportunity and operational costs. And the costs are only justified if .secure prevents at least as much fraud and loss as it costs to build and operate .secure.
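A toy calculation makes the Metcalfe’s Law point, treating a network’s value as proportional to the square of its user count (a rough heuristic, and the numbers are purely illustrative):

```python
# Metcalfe's Law back-of-the-envelope: value ~ n**2, numbers illustrative only.
n = 1_000_000                   # users on one shared internet
one_network = n ** 2            # relative value of a single connected network
two_halves = 2 * (n // 2) ** 2  # the same users split across two networks
print(two_halves / one_network)  # 0.5: partitioning destroys half the value
```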
We do not yet know what those costs would be. In particular, we need to know precisely what the separation mechanism between the real internet and .secure will be. The strongest separation — a physically separate network with dedicated machines — is also inordinately expensive. The strength of the separation between the two networks goes down as, inevitably, interconnections between .secure and the real internet are created.
Weaker separation mechanisms, such as VPNs and alternate naming and routing schemes, will cost less. But they are also exposed to the most attack modes, most notably the menagerie of password-stealing malware on the real internet.
Many organizations already create networks that are in some sense separate from the internet and in some sense limited to trustworthy or rigorously authenticated users, and this can be a useful security measure. But ideas that make sense at one scale do not necessarily make sense at larger scales. If the government wants to help out with internet security, it should use its vast purchasing power to push vendors for advances in basic software engineering quality. That benefits everyone in the most sustainable and economically productive way.
Watching the revolutions unfolding in the Arab world this springtime – and learning details first-hand from our friends on the ground – we at EFF struggled to find meaningful ways to support democratic activists and promote online freedom of expression. But we didn’t just want to lend a helping hand – we wanted to create a pathway so that anyone, anywhere in the world, could contribute to making the Internet more private and more resistant to censorship. From these discussions came our idea of launching the Tor Challenge.
We started the Tor Challenge with a simple goal: to launch 100 new Tor relays. Tor is software that individuals – including online activists in authoritarian regimes – can use to mask their IP addresses and proxy out to uncensored networks, helping them dodge network surveillance and elude online censorship. But Tor isn’t merely software – it’s also a network of volunteer computers, each donating bandwidth and acting as a router so that people can bounce their requests through the network, thereby obscuring their digital tracks.
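The core idea behind Tor is layered (“onion”) encryption, which a few lines of Python can sketch. This toy version, built on the third-party cryptography package, shows only the layering principle; real Tor negotiates keys per circuit and moves fixed-size cells over TLS:

```python
# Toy onion routing: each relay peels exactly one layer of encryption, so no
# single relay sees both who you are and what you requested.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

relays = [Fernet(Fernet.generate_key()) for _ in range(3)]  # entry, middle, exit

# The client wraps the request once per relay, innermost layer last.
payload = b"GET /uncensored-news HTTP/1.1"
for relay in reversed(relays):
    payload = relay.encrypt(payload)

# Each relay strips one layer and forwards the rest down the circuit.
for i, relay in enumerate(relays):
    payload = relay.decrypt(payload)
    print(f"relay {i} forwards {len(payload)} bytes")

print(payload)  # only the exit relay recovers the original request
```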
We launched our campaign on May 31, 2011 – and within days surpassed our goal of 100 new relays. Today, we are closing the challenge after adding 549 new relays to the network. This includes:
Exit relays: 123
Middle relays: 299
Bridges: 127
Current bandwidth: 326,084 kb/s
Percentage of Tor network bandwidth: 5.77%
While some of the new relays were later taken offline, the majority of them stayed operational. The total number of public relays in the Tor network has increased by 13.4% during the course of our campaign.
There is an acute need for circumvention technologies in authoritarian regimes – and even activists in many would-be progressive societies may feel safer if they can avoid the electronic gaze of authorities. Jacob Appelbaum, a security researcher and advocate for the Tor Project, recently wrote:
The Tor Challenge is a phenomenal show of support for the Tor network and the network graphs show the results. The efforts expended by EFF supporters around the world have helped to continue the Tor network's growth in a positive direction. Additionally, the educational efforts made by the EFF have similarly impacted the world; people everywhere understand the need for anonymity as well as how to use Tor to meet their needs in a practical manner.
While EFF’s Tor Challenge may have ended, individuals and organizations that want to create a more private Internet can still run Tor relays. And those who want to support Tor but aren’t tech-savvy can find an ally in TorServers.net, an organization based in Germany that provides technical assistance and support in running Tor relays.
Our gratitude goes out to the hundreds of individuals who set up relays and donated bandwidth to help strengthen the network. They are true defenders of online freedoms.
Unfortunately, patent trolls are not a new phenomenon. But lately we’ve seen a disturbing new trend: patent trolls targeting app developers. Platforms such as iOS and Android give small software developers the ability to distribute their work widely, which – for obvious reasons – is good for both developers and consumers. Just as these developers were finding new audiences, the patent trolls decided they wanted a piece of the action and started sending cease-and-desist letters demanding license fees, and in some instances even suing. Often, these developers cannot afford the time and money it takes to fight the trolls, leaving them with a stark choice: pay up or go out of business. It should come as no surprise, then, to hear reports that developers are pulling their apps from the U.S. app stores.
We’ve written before about Lodsys (here and here), a troll accusing app developers of patent infringement based on in-app purchasing functionality. Lodsys continues to send out cease-and-desist letters even as litigation surrounding its patents is pending all over the country. (The Court has not yet ruled on Apple’s motion to intervene in the case Lodsys brought against iOS app developers.)
Lodsys is not the only troll with the dubious distinction of suing app developers. A company called MacroSolve sued app developers for using technology apparently as basic as distributing electronic forms. And more recently, a company called Kootol announced plans to sue 30 companies – some large ones, such as Facebook and Twitter, as well as app developers like Iconfactory. (Of note, Kootol’s “patent” is currently only an application, so we won’t see any lawsuits until it matures into a full-blown patent, which should happen in short order.)
The patent system is intended to promote innovation, but it seems pretty clear to us that here it’s doing just the opposite. While we are disappointed that the proposed patent reform legislation does nothing to curb this problem, we will continue to monitor the situation and provide resources to as many developers as we can. To help us do this, we request that attorneys licensed to practice in the United States who are willing to and interested in advising targets of trolls join EFF’s Cooperating Attorneys’ List.
If interested, please email Rebecca Reagan at firstname.lastname@example.org with your contact information or the contact information for your firm, and the states in which you are licensed to practice law.
As was widely reported last week, several major internet access providers (including, very likely, yours) struck a deal with big content providers to help them police online infringement, educate allegedly infringing subscribers and, if subscribers resist such education, take various steps including restricting their internet access. We’ve now had a chance to peruse the lengthy “Memorandum of Understanding” (MOU) behind this deal. Turns out, as is often observed, the devil is in the details – and they are devilish indeed.
Let’s start with the people taking credit: major content owners, service providers, and some government officials, principally New York Attorney General Andrew Cuomo. But guess who wasn’t invited to the party? The millions of subscribers who will be governed by the deal—the same subscribers who elect the politicians, buy the content owners’ goods and pay subscription fees to the internet access providers (which are likely to go up as administration costs are passed on – the UK’s graduated response system was estimated to cost about $40 per subscriber). Given that subscribers weren’t consulted, it’s probably not surprising that this deal is not in their interests.
Here are some of the biggest problems with what resulted – and some ideas on what subscribers should demand of the system they’ll be paying for:
Who’s in Charge? The MOU calls for the creation of a new organization, called the Center for Copyright Information (CCI), to administer the six-strikes system. CCI will be governed by a six-person executive committee composed of representatives from content owners and internet access providers. Throwing a bone to subscribers, a three-person advisory board will include members “from relevant subject matter and consumer interest communities,” who will be given the chance to speak up whenever the executive committee asks. This possible advisory presence for subscribers is completely inadequate. Given that subscribers are the whole point of the MOU, they deserve seats at the table as voting members of the executive committee.
“Mitigation” Measures and Independent Review: Internet access providers can punish accused subscribers by interfering with the subscribers’ connectivity, including by slowing transmission speeds, temporarily restricting web access for “some reasonable period of time,” and conditioning web access on completing a “meaningful copyright education program.” These mitigation measures can be imposed solely on the basis of the content owners’ assertions, without a judge ever determining that the subscriber did anything wrong.
Internet access has become an essential service in the digital age. Thus, just as we restrict the power of utilities to shut off service to their customers, we should not allow content owners to have internet access providers degrade or suspend service without adequate due process.
The MOU does create a process designed to protect subscribers from unfounded accusations and punishment, but it’s hardly due process. Consider some of the procedural protections that subscribers might have sought if they had been at the bargaining table:
The burden should be on the content owners to establish infringement, not on the subscribers to disprove infringement. The Internet access providers will treat the content owners’ notices of infringement as presumptively accurate – obligating subscribers to defend against the accusations, and in several places requiring subscribers to produce evidence “credibly demonstrating” their innocence. This burden-shift violates our traditional procedural due process norms and is based on the presumed reliability of infringement-detection systems that subscribers haven't vetted and to which they cannot object. (The content owners’ systems will be reviewed by “impartial technical experts,” but the experts’ work will be confidential.) Without subscribers being able to satisfy themselves that the notification systems are so reliable that they warrant a burden-shift, content owners should have to prove the merits of their complaints before internet access providers take any punitive action against subscribers.
Subscribers should be able to assert the full range of defenses to copyright infringement. A subscriber who protests an infringement notice may assert only six pre-defined defenses, even though many other defenses are available in copyright litigation. And even the six enumerated defenses are incomplete. For example, the “public domain” defense applies only if the work was created before 1923 – even though works created after 1923 can enter the public domain in a variety of ways.
Content owners should be accountable if they submit incorrect infringement notices. A subscriber who successfully challenges an infringement notice gets a refund of the $35 review fee, but the MOU doesn’t spell out any adverse consequences for a content owner that makes a mistake – or even makes repeated mistakes. Content owners should be on the hook if they overclaim copyright infringement.
Subscribers should have adequate time to prepare a defense. The MOU gives subscribers only 10 business days to challenge a notice or their challenge rights are waived (a subscriber might get an extra 10 business days "for substantial good cause"). This period isn’t enough time for most subscribers to research and write a proper defense. Subscribers should get adequate time to defend themselves.
There should be adequate assurances that the reviewers are neutral. The MOU requires that reviewers must be lawyers and specifies that the CCI will train the reviewers in “prevailing legal principles” of copyright law – an odd standard given the complexity of, and jurisdictional differences in, copyright law. We’re especially interested in the identity of these lawyers, and why they are willing to review cases for less than $35 each (assuming the CCI keeps some of the $35 review fee for itself). Perhaps there will be a ready supply of lawyer-reviewers who are truly independent. Given the low financial incentives, another possibility is that the reviewers will be lawyers tied—financially or ideologically—to the content owner community. To ensure that the reviewers remain truly neutral, reviewer resumes should be made public, and checks-and-balances should be built into the reviewer selection process to ensure that the deck isn’t stacked against subscribers from day 1.
Education or Propaganda? The MOU repeatedly emphasizes subscriber education as one of its main goals. Unfortunately, this education won’t offer a very balanced view of copyright, at least if the current version of the CCI website is any indication. That website currently is full of scare-mongering rhetoric decrying the ill effects of so-called “content theft” and stressing the security risks of P2P. As the site is further developed, the executive committee should reject the rhetoric and look instead to the numerous online resources that provide a balanced and nuanced view of copyright law, helping to inform subscribers about their rights as well as their responsibilities when it comes to creative works.
Transparency: The MOU contemplates ongoing evaluation of the system through a variety of reports. That seems like a good idea, but neither subscribers nor the general public get to see or comment on those reports. Similarly, the statement of “prevailing legal principles” used to instruct reviewers also should be made public so that subscribers know how reviewers are interpreting U.S. copyright law. Simply put, if subscribers are supposed to treat the system as credible, they need enough information to determine that the system actually is credible.
Conclusion: This MOU has been in development for years, and we imagine the parties will be reluctant to revisit it. But it has yet to be implemented, which means there’s still time for the parties (and their friends in government) to address the deficiencies of their proposal from the perspective of the subscribers who’ll be paying for it. This deal is never going to be good for subscribers (nor for the artists who won’t see one more red cent as a result of it) – but it sure could be better.
Yesterday in Righthaven v. Democratic Underground a federal court in Las Vegas ordered the notorious copyright troll Righthaven to pay $5,000 in sanctions and to file the court transcript containing its admonishment in hundreds of other copyright cases. EFF represents Democratic Underground.
Righthaven tried to build a business out of suing hundreds of bloggers and websites for allegedly infringing the copyrights in Las Vegas Review-Journal newspaper articles, but was stopped short by evidence EFF uncovered: the secret "Strategic Alliance Agreement" between Stephens Media (publisher of the Review-Journal) and Righthaven, which showed that the assignment of the copyrights was a sham.
In the decision dismissing Righthaven's case, Judge Hunt also ordered Righthaven to explain why it should not be sanctioned for its failure to disclose media giant Stephens Media's financial interest in the lawsuits.
The Strategic Alliance Agreement required Righthaven to pay half of the lawsuit proceeds to Stephens Media. Nevertheless, Righthaven and Stephens Media asserted that the media company did not have an ongoing interest in the litigation. These misrepresentations not only concealed Stephens Media's role, but allowed Righthaven to continue to litigate hundreds of cases for months over a right that it did not have, raising defense costs and resulting in settlements that may never have happened if the truth had been known.
In its written response, Righthaven refused to accept responsibility, instead presenting several convoluted arguments that it hoped would get it off the hook, including – most brazenly – that the Court did not have authority to sanction it. As EFF explained in response, none of those arguments held water. Nevertheless, Righthaven apparently did not take the matter very seriously – when asked about the Order to Show Cause in a television interview, Righthaven's CEO Steven Gibson called it a minor technical issue, and showed no remorse.
Yesterday, Righthaven appeared before the Court for one last chance to explain why it should not be sanctioned. Judge Hunt rejected all of Righthaven’s arguments.
Righthaven argued that the Local Rule "could have arguably been reasonably construed to not require the disclosure of Stephens Media's interest in any recovery" – an argument that the Court called, "frankly, ludicrous."
The Court found that the failure to disclose Stephens Media's role was "not negligence," but an "intentional avoidance of disclosing information and specific direct statements contrary to that." This was "part of a concerted effort to hide Stephens Media's role in this litigation."
The Court continued that Righthaven "claimed that it had various exclusive rights when it knew that the ability to exercise those rights were retained exclusively by Stephens Media. It constantly and consistently refused to produce the [Strategic Alliance] agreement." The Court went on to hold that "[t]he representations about the relationship and the rights of Righthaven were misrepresentations. They were misleading." Moreover, the Court held that "having looked at all this evidence, [the Court] finds that they are intentionally untrue."
Based on this, the Court ordered Righthaven to pay $5,000 to the clerk of the court, and to provide the judges and the defendants in hundreds of other cases with copies of (1) the Court's decision finding Righthaven did not own the copyright, (2) the Strategic Alliance Agreement and (3) the transcript of yesterday's sanctions hearing.
During the hearing, the Court also addressed Righthaven itself, noting, "In the Court's view, the arrangement between Righthaven and Stephens Media is nothing more nor less than a law firm, which, incidentally, I don't think is licensed to practice law in this state, but a law firm with a contingent fee agreement masquerading as a company that's a party." While this was not at issue for purposes of yesterday's sanctions hearing, it's not good news for Righthaven. Under the Nevada Rules of Professional Conduct, a law firm may not share legal fees with a non-lawyer or have non-lawyer investors. Righthaven is owned by Net Sortie Systems LLC (Steve Gibson's shell company) and SI Content Monitor LLC (the investment vehicle for members of the Stephens family, who also own Stephens Media). Righthaven has the opportunity to respond to the contention that it is a law firm practicing without a license in writing on July 25, 2011.
Think you know what to do when law enforcement seeks access to your digital device? Test your skills with our online quiz. Then brush up on your knowledge with our Know Your Rights whitepaper.
We also highly recommend you print our one-page guide explaining what to do when the police ask for access to your device. Leave it by your workstation, tape it up in your server room, and slip a copy into your laptop case—anywhere you have sensitive information on a digital device.
The patent reform legislation that continues to snake its way through Congress makes one thing clear: many in Washington don’t like business method patents any more than we do. (Business method patents cover merely a "method" or "process," as opposed to something tangible.)
Now that the House and Senate have each passed their own version of the bill, the two will need to be reconciled. The big issue standing in the way is fee diversion: whether the Patent Office can keep the additional fees it brings in that exceed its budget (Senate bill), or whether Congress can use that money to fund other government programs (House bill).
Issues like fee diversion and the shift from first-to-invent to first-to-file continue to get the lion’s share of the press, but there are some smaller provisions that caught our eyes. For example, both bills include a provision that would allow banks and other financial institutions to more easily challenge business method patents when those patents are asserted against them in litigation. And both the House and Senate bills would prohibit patents covering “any strategy for reducing, avoiding, or deferring tax liability,” which are currently considered patentable business methods.
While many decry reforms like these – especially the one relating to banks – as nothing more than Washington, D.C., political game-playing and Wall Street favors, each in its own right highlights the larger problem with business method patents: instead of spurring innovation (as the patent system is intended to do), they often harm businesses by imposing additional costs (in the form of licenses or litigation), which in turn harms the consumer, as well as the economy at large. So instead of blaming Congress, we applaud any effort to limit business method patents (something the Supreme Court failed to do in Bilski) – and just wish the legislation went further in curbing these often harmful patents.
In case you missed it, Spotify's long-awaited U.S. launch is here. Spotify now joins the ranks of services like Rhapsody, Rdio, and Mog that allow users (for a fee) to stream unlimited music from multiple devices, make and keep playlists, and store music on mobile devices.
This is good news for music fans. Spotify has already proven successful in Europe, and, unlike its current U.S. competitors, provides a free, ad-based service where users can access a certain amount of music each month (after that, users can pay for unlimited songs and for access on their mobile devices). This is just the type of product the record labels have failed time and again to offer their fans: convenient access to different amounts of music, to be consumed in different ways, at different and relevant price points. Instead of being forced to buy full-length CDs at $15.99, fans can now make their own decision about how much they value music and how much of it they want. Of course, the record labels could have launched a service like this years ago. Instead of innovating, they famously sued their fans (and reportedly fought Spotify's U.S. entry) and are now left watching revenue go to others, despite their oft-repeated claims that they could not “compete with free.” Yet, multiple streaming services, music lockers, and others have found a way.
While we are glad to see more choices for music fans – and hopefully more ways for artists to be paid – we still have some major concerns. Chief among them: users' rights to port their data. Because streaming customers do not own their music, they cannot take it with them. Should they decide to try another service (or if a service goes under), users should be able to easily export titles of songs in playlists they created or a list of favorite music, etc. Users should also be able to choose independent add-ons that make the service more valuable, such as alternative means of organizing their music "collections." Without this kind of functionality, users are going to be disappointed, and we are unlikely to see the real competition that helps drive innovation.
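Concretely, the portability we have in mind could be as simple as a plain-text export that any rival service could import. Here is a hypothetical sketch; the format and field names are invented for illustration and belong to no real service's API:

```python
# Hypothetical playlist export: a plain JSON structure any service could import.
# The format and field names are invented; no real streaming API is shown.
import json

playlist = {
    "name": "Summer 2011",
    "tracks": [
        {"artist": "Example Artist", "title": "Example Song", "album": "Example Album"},
        {"artist": "Another Artist", "title": "Another Song", "album": "Another Album"},
    ],
}

with open("summer_2011_playlist.json", "w") as f:
    json.dump(playlist, f, indent=2)

print("exported", len(playlist["tracks"]), "tracks")
```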
More robust network connections, the popularity of tablets and smartphones, and the hype surrounding Spotify lead us to believe that streaming music's time may have come. But if users lose access to the work they’ve invested in searching through music catalogues, setting up playlists and favorites, and otherwise managing their music-listening habits, downloading music (legally or not) will still be a better alternative for many. We urge these new content companies to provide their users with tools such as convenient data portability, and to support interoperability. Then at last, we might be able to show the record labels that it is indeed quite possible to "compete with free."