For twenty years, supporters of the Electronic Frontier Foundation have helped an elite team of technologists and activists create an unparalleled force in the digital rights world. Randall Munroe, creator of the online comic xkcd, tells it best.
To thank our year-end donors, we are offering special edition EFF 20th anniversary xkcd t-shirts and hooded sweatshirts! This exclusive xkcd apparel is available only for a limited time and will be shipped in January 2010. Donate today to join xkcd in celebrating this milestone in EFF history and online rights.
Five months after it first announced coming privacy changes this past summer, Facebook is finally rolling out a new set of revamped privacy settings for its 350 million users. The social networking site has rightly been criticized for its confusing privacy settings, most notably in a must-read report by the Canadian Privacy Commissioner issued in July and most recently by a Norwegian consumer protection agency. We're glad to see Facebook is attempting to respond to those privacy criticisms with these changes, which are going live this evening. Unfortunately, several of the claimed privacy "improvements" have created new and serious privacy problems for users of the popular social network service.
The new changes are intended to simplify Facebook's notoriously complex privacy settings and, in the words of today's privacy announcement to all Facebook users, "give you more control of your information." But do all of the changes really give Facebook users more control over their information? EFF took a close look at the changes to figure out which ones are for the better — and which ones are for the worse.
Our conclusion? These new "privacy" changes are clearly intended to push Facebook users to publicly share even more information than before. Even worse, the changes will actually reduce the amount of control that users have over some of their personal data.
Many of the changes are indeed good for privacy. But others are bad, while a few are just plain ugly.
The Good: Simpler Privacy Settings and Per-Post Privacy Options
The new changes have definitely simplified Facebook's privacy settings, reducing the overall number of settings while making them clearer and easier for users to find and understand. The simplification of Facebook's privacy settings includes the elimination of regional networks, which sometimes would lead people to unwittingly share their Facebook profile with an entire city, or, as Facebook's founder Mark Zuckerberg explained in a recent open letter, an entire country.
Perhaps most importantly, Facebook has added a feature that we and many others have long advocated for: the ability to define the privacy of your Facebook content on a per-post basis. So, for example, if you only want your close friends to see a particular photo, or only your business colleagues to see a particular status update, you can do that — using a simple drop-down menu that lets you define who will see that piece of content.
Most important, however, is the simple fact that as part of this transition, Facebook is forcing all of its users to actually pay attention to the specifics of their privacy settings. Considering that many if not most users have previously simply adopted the defaults offered by Facebook rather than customizing their privacy settings, this is an especially good thing.
No question, these are positive developments that hopefully will lead more people to carefully review and customize their level of privacy on Facebook. Unfortunately, the new flexibility offered by per-post privacy settings, a definite "good," is being used to justify the "bad"...
The Bad: EFF Doesn't Recommend Facebook's "Recommended" Privacy Settings
Although sold as a "privacy" revamp, Facebook's new changes are obviously intended to get people to open up even more of their Facebook data to the public. The privacy "transition tool" that guides users through the configuration will "recommend" — that is, preselect by default — sharing the content users post to Facebook, such as status messages and wall posts, with everyone on the Internet, even though the default privacy level those users had previously accepted was limited to "Your Networks and Friends" on Facebook. (For more details, we highly recommend the Facebook privacy resource page and blog post from our friends at the ACLU, which carefully compare the old settings to the new settings.) As the folks at TechCrunch explained last week before the changes debuted:
The way Facebook makes its recommendations will have a huge impact on the site's future. Right now, most people don't share their content using the 'everyone' option that Facebook introduced last summer. If Facebook pushes users to start using that, it could have a better stream of content to go against Twitter in the real-time search race. But Facebook has something to lose by promoting 'everyone' updates: given the long-standing private nature of Facebook, they could lead to a massive privacy fiasco as users inadvertently share more than they mean to.
At this point there's no "if" about it: the Facebook privacy transition tool is clearly designed to push users to share much more of their Facebook info with everyone, a worrisome development that will likely cause a major shift in privacy level for most of Facebook's users, whether intentionally or inadvertently. As Valleywag rightly warns in its story "Facebook's New 'Privacy' Scheme Smells Like an Anti-Privacy Plot":
[S]miley-face posturing aside, users should never forget that Facebook remains, at heart, not a community but a Silicon Valley startup, always hungry for exponential growth and new revenue streams. So be sure to review those new privacy "options," and take Facebook's recommendations with a huge grain of salt.
Being a free speech organization, EFF is supportive of internet users who consciously choose to share more on Facebook after weighing the privacy risks; more online speech is a good thing. But to ensure that users don't accidentally share more than they intend to, we do not recommend Facebook's "recommended" settings. Facebook will justify the new push for more sharing with everyone by pointing to the new per-post privacy options — if you don't want to share a particular piece of content with everyone, Facebook will argue, then just set the privacy level for that piece of content to something else. But we think the much safer option is to do the reverse: set your general privacy default to a more restrictive level, like "Only Friends," and then set the per-post privacy to "Everyone" for those particular things that you're sure you want to share with the world.
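The difference between the two defaulting strategies can be sketched in a few lines. This is purely illustrative (it models no real Facebook API or setting names): a per-post choice, when present, overrides the account-wide default.

```python
# Illustrative sketch only; not Facebook's actual API or setting names.
def visibility(default, per_post=None):
    """Effective audience for a post: an explicit per-post choice wins,
    otherwise the account-wide default applies."""
    return per_post if per_post is not None else default

# Facebook's "recommended" setup: a post you forget to restrict goes public.
assert visibility("Everyone") == "Everyone"
assert visibility("Everyone", per_post="Only Friends") == "Only Friends"

# The reverse setup: a post you forget about stays among friends, and you
# widen the audience only for posts you deliberately choose to share.
assert visibility("Only Friends") == "Only Friends"
assert visibility("Only Friends", per_post="Everyone") == "Everyone"
```

The point is simply that a mistake under the restrictive default leaks nothing, while a mistake under the permissive default leaks everything.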
The Ugly: Information That You Used to Control Is Now Treated as "Publicly Available," and You Can't Opt Out of the "Sharing" of Your Information with Facebook Apps
Looking even closer at the new Facebook privacy changes, things get downright ugly when it comes to controlling who gets to see personal information such as your list of friends. Under the new regime, Facebook treats that information — along with your name, profile picture, current city, gender, networks, and the pages that you are a "fan" of — as "publicly available information" or "PAI." Before, users were allowed to restrict access to much of that information. Now, however, those privacy options have been eliminated. For example, although you used to have the ability to prevent everyone but your friends from seeing your friends list, that old privacy setting — shown below — has now been removed completely from the privacy settings page.
Facebook counters that some of this "publicly available information" was previously available to the public to some degree (while admitting that some of it definitely was not, such as your gender and your current city, which you used to be able to hide). For example, Facebook points to the fact that although you could restrict who could see what pages you are a fan of when they look at your profile, your fan status was still reflected on the page that you were a fan of. But that's no justification for eliminating your control over what people see on your profile. For example, you might want to join the fan page of a controversial issue (like a page that supports or condemns the legalization of gay marriage), and let all your personal friends see this on your profile, but hide it from your officemates, relatives or the public at large. While it's true that someone could potentially look through all the thousands upon thousands of possible fan pages to find out which ones you've joined, few people would actually do this.
Facebook also counters that users can still control whether non-friends can see their Friends List, by going into the hard-to-find profile editing settings on the profile page and unchecking the new check-box in the Friends setting that says "Show my friends on my profile." However, if the goal of these changes was to clarify the privacy settings and make them easier to find and use, then Facebook has completely failed when it comes to controlling who sees who you are friends with. And even if you do have some control over whether non-friends can see your friends list — if you hunt around and can find the right setting, which is no longer under "Privacy Settings" — Facebook has made the privacy situation even worse when it comes to information sharing with the developers of Facebook apps.
The issue of privacy when it comes to Facebook apps such as those innocent-seeming quizzes has been well-publicized by our friends at the ACLU and was a major concern for the Canadian Privacy Commissioner, who concluded that app developers had far too much freedom to suck up users' personal data, including the data of Facebook users who don't use apps at all. Facebook previously offered a solution to users who didn't want their info being shared with app developers over the Facebook Platform every time one of their friends added an app: users could select a privacy option telling Facebook to "not share any information about me through the Facebook API."
That option has disappeared, and now apps can get all of your "publicly available information" whenever a friend of yours adds an app.
Facebook defends this change by arguing that very few users actually ever selected that option — in the same breath as it talks about how complicated and hard to find the previous privacy settings were. Rather than eliminating the option, Facebook should have made it more prominent and done a better job of publicizing it. Instead, the company has sent a clear message: if you don't want to share your personal data with hundreds or even thousands of nameless, faceless Facebook app developers — some of whom are obviously far from honest — then you shouldn't use Facebook.
These changes are especially worrisome because even something as seemingly innocuous as your list of friends can reveal a great deal about you. In September, for example, an MIT study nicknamed "Gaydar" demonstrated that researchers could accurately predict a Facebook user's sexual orientation simply by examining the user's friends-list. This kind of data mining of social networks is a science still in its infancy; the amount of data that can be extrapolated from "publicly available information" will only increase with time. In addition to potentially revealing intimate facts about your sexuality — or your politics, or your religion — this change also greatly reduces Facebook's utility as a tool for political dissent. In the Iranian protests earlier this year, Facebook played a critical role in allowing dissidents to communicate and organize with relative privacy in the face of a severe government crackdown. Much of that utility and privacy has now been lost.
Finally, it's worth remembering the promises Facebook has made to its users over the years in its own privacy policy and related statements:

We understand you may not want everyone in the world to have the information you share on Facebook; that is why we give you control of your information. ... You choose what information you put in your profile, including contact and personal information, pictures, interests and groups you join. And you control the users with whom you share that information through the privacy settings on the Privacy page.
"You decide how much information you feel comfortable sharing on Facebook and you control how it is distributed through your privacy settings."
"Facebook is about sharing information with others — friends and people in your networks — while providing you with privacy settings that you can use to restrict other users from accessing your information."
"you can control who has access to [certain information you have posted to your profile], as well as who can find you in searches, through your privacy settings."
"You can use your privacy settings to limit which of your information is available to 'everyone.'"
These statements are at best confusing and at worst simply untrue, and didn't give sufficient notice to users of the changes that were announced today.
In conclusion, we at EFF are worried that today's changes will lead to Facebook users publishing to the world much more information about themselves than they ever intended. Back in 2008, Facebook told Canada's Privacy Commissioner that "users are given extensive and precise controls that allow them to choose who sees what among their networks and friends, as well as tools that give them the choice to make a limited set of information available to search engines and other outside entities." In its report from July, the Privacy Commissioner relied on such statements to conclude that Facebook's default settings fell within "reasonable expectations," specifically noting that the "privacy settings — and notably all those relating to profile fields — indicate information sharing with 'My Networks and Friends.'"
No longer. Major privacy settings are now set to share with everyone by default, in some cases without any user choice, and we at EFF do not think that those new defaults fall within the average Facebook user's "reasonable expectations". If you're a Facebook user and you agree, we urge you to visit the Facebook Site Governance page and leave a comment telling Facebook that you want real control over all of your data. In the meantime, those users who care about control over their privacy will have to decide for themselves whether participation in the new Facebook is worth such an extreme privacy trade-off.
The Obama Administration today issued its long-awaited Open Government Directive (OGD), a blueprint for transparency that the President promised on January 21, his first full day in office. The OGD is “intended to direct executive departments and agencies to take specific actions to implement the principles of transparency, participation, and collaboration” the President spoke of as he took office, and it is hopefully the first of many concrete steps that will be taken to alter the entrenched culture of secrecy that pervades the federal government.
The OGD imposes four broad mandates on the federal bureaucracy: 1) publish government information online; 2) improve the quality of government information; 3) create and institutionalize a culture of open government; and 4) create an enabling policy framework for open government. The Directive sets time limits, ranging from 45 to 120 days, for agency action to implement specific benchmarks (this “open government timetable” is summarized in an excellent analysis by Meredith Fuchs of the National Security Archive). Many of the requirements are fairly concrete; for instance, within 60 days, each agency must create an “Open Government Webpage” to serve as the gateway for agency activities related to implementation of the OGD, including the receipt of public comments. There are lots of good ideas in the directive, and the success of this endeavor will be determined by the enthusiasm (or lack thereof) with which it’s received by agency officials and the federal workforce.
If the White House is serious about gaining enthusiastic, government-wide cooperation to make open government a reality, it can lead by example, and EFF can suggest a great place to start. Back in January and February, soon after the President issued his groundbreaking pronouncements on transparency, we submitted two requests to the White House for information concerning high-profile technology policy issues. The first sought disclosure of a waiver the White House Counsel issued to permit the use of visitor-tracking cookies at WhiteHouse.gov. The second request asked for release of policies governing the use of BlackBerries and other wireless devices by the President and high-ranking White House officials. More than ten months after the submission of those requests, EFF is still waiting for responses.
While we applaud the Obama Administration for continuing to say the right things about government transparency, and look forward to the successful implementation of the initiative announced today, we can’t ignore the fact that the White House continues to be less than forthcoming about some of its own practices and policies. We hope we’ll be receiving the information we requested before the first anniversary of the President’s inauguration.
EFF filed an amicus brief in the Ninth Circuit's en banc review of Mohamed v. Jeppesen, a case brought by the ACLU challenging the CIA's extraordinary rendition program. A panel of the Ninth Circuit Court of Appeals had rejected the government's argument that the case had to be dismissed at the outset due to the state secrets privilege. The panel decision is now being considered by a larger, en banc panel of the Court.
This case is another in a set of post-September 11, 2001 cases in which the Executive, having made new and tremendously broad assertions of its unilateral power, seeks to prevent the Judiciary from adjudicating the lawfulness of those new powers. To do so, the Executive skews the relevant caselaw on the state secrets privilege, attempts to rely on a case in which the privilege was not even the basis for the decision, and claims that the court must blind itself to credible, admissible, nonsecret evidence because the Executive has determined that it cannot confirm or deny a particular fact. Adopting the government’s position would abdicate the Judiciary’s Article III responsibility to adjudicate the constitutional and statutory limits on Executive authority.
Oral argument is scheduled in the case in San Francisco on December 15, 2009. EFF has been urging Congress to reform the state secrets privilege.
"Yahoo isn’t happy that a detailed menu of the spying services it provides law enforcement agencies has leaked onto the web." That's how WIRED's Threat Level blog put it when describing Yahoo's recent effort to censor its own law enforcement compliance guide off the Internet using a bogus DMCA takedown demand.
The trouble all started when Yahoo stepped in to block a FOIA request for its law enforcement compliance "price list" (i.e., what it charges to law enforcement and spy agencies when responding to requests for information about Yahoo users). Shortly thereafter, a copy of the document, entitled "Yahoo! Compliance Guide for Law Enforcement," appeared on Cryptome.org.
Here's where the bogosity begins in earnest. Yahoo sent a formal DMCA takedown notice to Cryptome.org, demanding the removal of the compliance manual. In the letter, Yahoo's lawyers allege that posting the manual infringes Yahoo's copyrights (the only proper basis for a DMCA takedown), as well as claiming that it's a trade secret (absurd for a marketing document) and that posting it constitutes "business interference" (huh? informing customers about Yahoo's disclosure practices "interferes" with business?).
This should earn Yahoo a place in the Takedown Hall of Shame (we'll be updating our list of inductees soon). Posting the compliance manual is a clear fair use. Consider the "four factors" that courts examine in fair use cases: (1) publication is clearly for a transformative purpose (criticism, public debate); (2) publication does not harm the "market" for the original (since Yahoo doesn't sell copies of the manual); (3) the nature of the publication is factual, not highly creative; and (4) while the whole manual was published, that was necessary for the transformative purpose. And, perhaps most important, a federal court has already ruled in favor of fair use on nearly these same facts, when Diebold Election Systems was sued for trying to censor embarrassing internal documents off the Internet using bogus DMCA takedowns.
This brings up another important point: the DMCA does not require service providers to comply with bogus takedown notices. The DMCA offers a "safe harbor" from money damages for copyright infringement, but you only need a "safe harbor" if the activity in question might be infringing in the first place. Where (as here) the activity is clearly not infringing, a service provider doesn't need the DMCA for protection, and can just deposit takedown notices in the trash (as YouTube did a few months ago in the face of another obviously bogus takedown notice).
Our new white paper examines both clickwrap agreements—whereby service providers require the user to click an “I Agree” button next to the terms—and browsewrap agreements—whereby service providers try to characterize one's continued use of the website as constituting “agreement” to a posted set of terms. While neither method automatically creates an enforceable contract, some presentations may still be upheld even if the user never actually reads and understands the terms. The key is whether the service provider gives the user reasonable notice and an opportunity to review the terms before using the website or service.
Of course, just because a TOS creates an enforceable agreement does not mean that every provision of the TOS will be enforced by a court. In our next white paper, we'll examine which particular provisions are most unfair to consumers, including provisions that have aroused the skepticism of courts and regulators.
This is the fifth in a series of posts about the proposed Google Book Search settlement.
As we've explained in earlier posts, when it comes to evaluating the proposed Google Books settlement, the principal potential benefit to the public (increased access to books online) must be weighed against the potential drawbacks (impediments to competition, inadequate protection for privacy). Another potential downside for the public in the proposed settlement is the risk of censorship.
To understand the importance of this risk, keep two things in mind. First, while bookstores are entitled to pick and choose their inventory, Google Books hopes to be much more than a simple bookstore. In the words of Google's CEO Eric Schmidt: "Imagine one giant electronic card catalog that makes all the world's books discoverable with just a few keystrokes by anyone, anywhere, anytime." In other words, Google Books will have many characteristics that we associate more with the research libraries from which its books are drawn than with traditional bookstores. Second, as Prof. Geoffrey Nunberg reminds us: "This is almost certainly the Last Library, after all. There's no Moore's Law for capture, and nobody is ever going to scan most of these books again."
If Google's scans under the proposed settlement are likely to be the only chance millions of books will have for a digital life, then the potential for censorship is something to be taken very seriously indeed. If the books can't be found by researchers, it will be as though they were cast down the Memory Hole.
Censorship by Rightsholders
The biggest censorship risk created by the proposed settlement is from copyright owners. The proposed settlement gives rightsholders (until April 2011) the power to "Remove" their books from the Google Books corpus altogether. Once a book is removed, not only won't you be able to read it online, you won't even be able to find it using full-text search. In short, these books would simply cease to exist as far as users of Google Books are concerned, despite the fact that courts have ruled that indexing copyrighted works is a perfectly legal fair use. Moreover, even the libraries who contributed the book for scanning wouldn't have a digital "backup" in their collections, as these removed books would also vanish from the digital copies that Google gives back to the research libraries (the "Library Digital Copies" and the "Research Corpus," in the lingo of the settlement agreement).
Why would a rightsholder want to self-censor? First, remember that the author of a book is often not the rightsholder. As a result, the copyright in a book can be purchased and then used to suppress further publication (a trick Howard Hughes tried). Moreover, sometimes the author or author's heir (or corporate successor) wants to suppress a work (Prof. R. Anthony Reese describes a number of historical examples of post-publication suppression efforts by authors and rightsholders in this article).
In the world of research libraries, of course, this kind of censorship is impossible—no research library would pull cards from the catalog and destroy copies of published works at the behest of those who own the copyright in those books. Yet this is exactly what the proposed settlement would permit for the "Last Library." Most galling of all, the settlement does not even require that a complete list of these "Removed" books ever be made publicly available (in Google's web search, in contrast, Google includes entries for results that would have appeared but for DMCA takedown demands, and makes those demands publicly available through Chilling Effects).
At a minimum, books that are "removed" should remain in the database for full-text search, and Google should remain able to offer a "Library Link" (i.e., a link that directs a researcher to a library where the book can be found).
Even more troubling is the possibility of selective alterations of the texts of the books themselves. In Section 3.10(c)(i), the settlement forbids Google "except as expressly authorized by the Registered Rightsholder" from altering the text of scanned books when displayed to users. That's certainly a good thing, as far as it goes—we shouldn't want Google to be able to go in and selectively edit books. But Google is allowed to selectively edit if "authorized" by the copyright owner. Why is this permitted? And if the rightsholder "authorizes" Google to make changes, can Google refuse to do so? Will the fact of alteration be publicly visible to the reader? The answer is not clear. But clearly the better rule is a prohibition on anyone making editorial alterations in the text of scanned books (again, no library would allow a copyright owner to selectively blackline books in the stacks). Any other option creates the chilling prospect of "revising history" as imagined in Orwell's 1984.
Censorship by Google
The proposed settlement also gives Google a troubling degree of discretion when it comes to choosing which books will be publicly accessible. For example, Section 3.7(e) makes it clear that Google can exclude any scanned book it likes from public access "for editorial or non-editorial reasons." If it excludes a book for "editorial reasons," it must notify the Registry (but not the public), and the Registry may look for an alternative partner ("Third-Party Required Library Service Provider") to host the book. There is nothing that requires the Registry to do so, nor any guarantee that such a partner will step forward.
In addition, in order to meet its obligations under Section 7.2(e) of the proposed settlement, Google need only make 85% of the books it scans from its library partners publicly accessible through full-text search, consumer purchase, or the institutional subscription database. Assuming that Google has already scanned approximately 8 million books that are in-copyright, that means Google can make more than 1.2 million of these books disappear from its publicly accessible services for any reason and still meet its obligations under the settlement. And, again, nothing in the settlement requires Google to make the list of omitted books available to the public.
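As a quick back-of-the-envelope check of those numbers (note that the 8 million in-copyright scans is this post's working estimate, not a figure from the settlement itself):

```python
# Hypothetical figures: 8 million in-copyright scans is an estimate.
scanned = 8_000_000
must_be_accessible = scanned * 85 // 100   # Section 7.2(e): at least 85%
omittable = scanned - must_be_accessible   # books Google may withhold
print(f"{omittable:,}")                    # prints: 1,200,000
```

The larger the scanning effort grows, the larger the absolute number of books that can silently vanish while Google still technically complies.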
Censorship by Government
Finally, it's worth noting that governments will doubtless exploit the leeway that the settlement gives to both rightsholders and Google to pull books off the digital shelves of Google Books. It's all too easy to imagine foreign governments pressuring their citizens to "remove" books from public access on Google. It's also likely that foreign governments will pressure Google to omit books from Google Books. If that comes to pass, neither Google nor the rightsholders will be able to point to the settlement as a legal constraint that bars them from complying with anything short of formal legal process. Had the settlement agreement been written to forbid this kind of censorship, both rightsholders and Google could have responded to censorship demands by saying "come back with a court order."
And, finally, remember that Google may, under the settlement, sell off the entire Google Books project. So even if you believe that Google would never cave to foreign governments or engage in selective censorship, keep in mind that 10 years from now, Google Books might be owned by an entirely different corporate master.
Much of the coverage of the UK's proposed Digital Economy bill has centered, and rightly so, on the damaging consequences to civil liberties for Britons caused by its Internet termination provisions. Less documented is quite how damaging these regulations are for the bill's own namesake: Britain's present and future digital economy.
The history of Net businesses shows that an integral feature of the digital economy is decentralized innovation and the creation of generative new markets by individuals or small, loosely-affiliated groups. These generators of wealth often begin as end-users of the Net, unconnected with established companies. When they start, they don't have lobbyists, and their entrepreneurship is not yet recognized as part of the country's vital digital infrastructure or core creative industries — or even a business interest at all.
So how does the Digital Economy Bill treat Britain's present and future engines of digital growth?
First, it burdens the digital industries with the demands of older incumbent sectors. The Digital Economy Bill contains an open-ended requirement that ISPs pay for and implement record-keeping and technical measures against subscribers, as lobbied for by the entertainment industry: costs and red tape that the ISP industry has strongly protested.
But it's not just established ISPs that suffer. The repeated demand by the entertainment industry that intermediaries should police their networks has been expanded by the bill to include the subscribers on the edge of the network. If you're not an ISP, but other people use your network to get their net access — if you run an open Wi-Fi spot, for instance, like the British Library — you'll now be vulnerable to being terminated or constrained by the actions of those users.
Open Wi-Fi nodes are currently the most common scenario where subscribers at the edge are also providers. But in future network topologies, communities at the edge may play a more widespread role in distributing Net access. Decentralised mesh networking is still experimental, but is already used in locales from San Francisco to the Scottish Islands, and could yet emerge as a viable complement to centralized broadband providers. Except in the United Kingdom under the Digital Economy Bill, that is, where independent mesh nodes might now be held responsible for all the traffic that passes through them. The potential for a new competitor in the world of bandwidth provision has been sacrificed to the powers invested in Britain's status quo.
Another indication that the Digital Economy Bill's drafters don't seem to understand the immediate future of the digital economy is the section on internet domain registries. The ostensible reason for this section is to stabilize the private organization that runs the ".uk" top level domain (TLD), by allowing the UK government to take over DNS registries and change their charters. But the language of the bill now means that "instability" could be interpreted as being insufficiently responsive to corporate trademark complaints. And the current draft allows the UK government to take over *any* TLD provider, including other countries' TLD registries and non-geographic TLDs based in the UK. The end result? Profitable registries will move away from Britain's unstable regulatory regime, where any registry might be seized by the local government, and set up shop in friendlier markets. It seems that the UK's Department of Business either ignores or is ignorant of the fact that this digital economy sector is due to expand significantly as ICANN pursues its plans to open up the top level domain space in the next few years.
As we've described previously, the bill notoriously proposes that a British secretary of state can change the entirety of British copyright law, except for its criminal provisions, through secondary legislation. The legal uncertainty created in such an environment, when copyright policy is fundamental to the digital economy, is in itself irresponsible.
Note, though, that he or she can only change the law for one reason: "preventing or reducing the infringement of copyright by means of the Internet". The result is a one-way ratchet on British copyright law, forsaking innovative new products and services whose business models are disruptive to the market dominators. From the piano roll to Betamax, vested interests in the creative industries have always defined potential new competitors as "infringement". They have done so to the search engine and caching businesses (as characterised by the newspaper industry), the iPod and MP3 player sector (ripping music from CDs to MP3s will remain illegal in the UK), and, as Mandelson himself wrote when advocating for this power, online file storage companies. This new power can never be used to create new fair use exceptions or to confirm the legality of a new Internet service or product: it can only be used to outlaw them and to impose new restrictions on them.
Less than twenty years ago in the UK, the first Internet connections were forged by enthusiasts grouping together on a BBS for common benefit; a decade ago, the idea that two of Britain's richest individuals would run a blogging network and the distributor of an operating system initially built by "hobbyists" would have seemed bizarre to many. But that's how the digital economy works: when left free to grow and change their roles, those at the leading edge innovate, and help establish the multi-billion dollar industries of the future.
The success of the digital economy in Britain, as elsewhere, is not served by segmenting the multi-faceted roles of Net users into exclusive legal castes of subscriber, provider, and rightsholder. The fanatical emphasis on stricter IP enforcement as deterrence belies the legal flexibility that allows new industries to grow. This is a bill that is not only offensive to civil liberties, but a powerful disincentive to the innovators setting the keystones of the digital economy, and creating the tools that make us all more free.