EFF will be attending PrivacyCamp SF on Friday May 7th after the end of the Web 2.0 Expo, and we hope you will join us. The topic of the day will be Privacy and Social Networks.
This first annual PrivacyCamp in San Francisco will be a day-long user-generated "unconference" of engineers, privacy advocates, professors, lawyers, entrepreneurs and social network users that will focus on the privacy implications of social networks like Facebook, Twitter, and Google Buzz. If you will be in the Bay Area and want to engage in smart conversation with experts in tech and policy about what social networks mean for privacy and to brainstorm about how social networks can be designed to better protect privacy, register now.
What is an unconference? Well, there's no pre-planned agenda, no keynotes, no panels, and no “Q&As,” just a space to meet, discuss, debate, and share knowledge with others who are interested in a particular topic--in this case, Privacy and Social Networks. We at EFF certainly have a lot to say on that topic, and we hope you'll join us to help define the dialogue. Helping us with that will be Craig Newmark of Craigslist, who's planning to speak with conference participants before lunch, and we hope to see representatives from other Web 2.0 companies participating in the discussion as well.
Since there's no pre-planned agenda, the topics of discussion will be collaboratively defined the morning of the conference (for example, here's the agenda that was settled on for the DC PrivacyCamp, which focused on privacy and government policy). But to give you an idea of possible topics for discussion, here are a few initial ideas and questions suggested by the PrivacyCamp blog:
Privacy by Design: Where in the design process should privacy be addressed? How far have we come and in what direction are we heading? What are the biggest obstacles to designing a private network, and what are some ways to overcome them?
All Out in the Open: How can privacy exist on a public network? In an age that seemingly embraces oversharing, are privacy controls a futile exercise? What are users’ expectations and how can they be addressed?
The Money Question: Does privacy work against the very tenets of social networking monetization? Can networks emphasize privacy and still be profitable? Is it possible to compete on privacy?
Too Much Control: Are granular controls the answer to privacy? How detailed can controls get before they become too complicated? How sophisticated is the “average user” and how can sites encourage users to educate themselves about the full functionality of privacy controls?
Update Headaches: What works when you change your site’s privacy controls? What doesn’t?
What would you like to see discussed at PrivacyCamp SF? Register now to get in on the conversation. You can also participate in or follow the discussion on Twitter via @privacycampdc and hashtags #privacycamp and #privacy2010, on the PrivacyCamp Facebook page, and on the PrivacyCamp blog.
Registration is still open, and the contest is still very much up for grabs! Current first place team Holy Handgrenades is sitting pretty at $575, with individual contestants Evan Keiser at second place with $65 and Robert Rowley at third place with $25. The pool of fabulous prizes is still within your reach!
Big thanks to Ninja Networks and Friends for sponsoring the contest and raising over $1200; contestants should note, however, that the Ninja Networks team has declared itself ineligible for the prize package, leaving the contest wide open. Form a team; put a badge up on your blog; ask your friends and family -- there are lots of ways to help EFF and compete for the prizes. (See Official Rules for full details).
EFF is also thrilled to announce that security firms iSEC Partners and IOActive have joined us to sponsor the Defcon Getaway Contest! We're grateful for their support of the contest and EFF's Coders' Rights Project.
iSEC Partners is a proven full-service security consulting firm that provides penetration testing, secure systems development, security education, and software design verification. iSEC Partners' security assessments leverage the firm's extensive knowledge of current security vulnerabilities, penetration techniques, and software development best practices to enable customers to secure their applications against ever-present threats on the Internet.
Established in 1998, IOActive is an industry leader that offers comprehensive computer security services with specializations in smart grid technologies, software assurance, and compliance. Boasting a well-rounded and diverse clientele, IOActive works with a majority of Global 500 companies including power and utility, hardware, retail, financial, media, router, aerospace, high-tech, and software development organizations.
Stay tuned for more developments and updates regarding EFF's Defcon Getaway Contest. If you haven't already registered, what are you waiting for? Click here, and see you in Vegas!
Social networking companies don't have it easy. Advertisers covet their users' data, and in a niche that often seems to lack a clear business model, selling (or otherwise leveraging) that data is a tremendously tempting opportunity. But most users simply don't want to share as much information with marketers or other "partners" as corporations would like them to. So it's no surprise that some companies try to have it both ways.
Monday evening, after an exasperating few days trying to make sense of Facebook's bizarre new "opt-out" procedures, we asked folks on Twitter and Facebook a question:
The world needs a simple word or term that means "the act of creating deliberately confusing jargon and user-interfaces which trick your users into sharing more info about themselves than they really want to." Suggestions?
And the suggestions rolled in! Our favorites include "bait-and-click", "bait-and-phish", "dot-comfidence games", and "confuser-interface-design".
Although we didn't specifically mention Facebook in our question, by far the most popular suggestions were variations on this one from @heisenthought on Twitter:
How about "zuck"? As in: "That user-interface totally zuckered me into sharing 50 wedding photos. That kinda zucks"
Other suggestions included "Zuckermining", "Infozuckering", "Zuckerpunch" and plenty of other variations on the name of Facebook's Founder and CEO, Mark Zuckerberg. Others suggested words like "Facebooking", "Facebaiting", and "Facebunk".
It's clear why folks would associate this kind of deceptive practice with Zuckerberg. Although Zuckerberg told users back in 2007 that privacy controls are "the vector around which Facebook operates," by January 2010 he had changed his tune, saying that he wouldn't include privacy controls if he were to restart Facebook from scratch. And just a few days ago, a New York Times reporter quoted a Facebook employee as saying Zuckerberg "doesn't believe in privacy".
Despite this, we'd rather not use Zuckerberg's name as a synonym for deceptive practices. Although the popularity of the suggestion shows how personal the need for privacy has become for many Facebook users, we'd prefer to find a term that's less personal and more self-explanatory.
As security researcher Greg Conti describes it, a good interface is meant to help users achieve their goals as easily as possible. But an "evil" interface is meant to trick users into doing things they don't want to. Conti's examples include aggressive pop-up ads, malware that masquerades as anti-virus software, and pre-checked checkboxes for unwanted "special offers".
The new Facebook is full of similarly deceptive interfaces. A classic is the "Show Friend List to everyone" checkbox. You may remember that when Facebook announced it would begin treating friend-lists as "publicly available information" last December, the change was met with user protests and government investigation. The objections were so strong that Facebook felt the need to take action in response. Just one problem: Facebook didn't actually want to give up any of the rights it had granted itself. The result was the obscure and impotent checkbox pictured here. It's designed to be hard to find — it's located in an unlikely area of the User Profile page, instead of in the Privacy Settings page. And it's worded to be as weak as possible — notice that the language lets a user set their friend-list's "visibility", but not whether Facebook has the right to use that information elsewhere.
A more recent example is the process introduced last week for opting out of Instant Personalization. This new feature allows select Facebook partner websites to collect and log all of your "publicly available" Facebook information any time you visit their websites. We've already documented the labyrinthine process Facebook requires users to navigate to protect their data, so I won't repeat it here. Suffice it to say that sharing your data requires radically less work than protecting it.
Of course, Facebook is far from the only social networking company to use this kind of trick. Memorably, users of GMail were surprised last February by the introduction of Google Buzz, which threatened to move private GMail recipients into a public "frequent contacts" list. As we noted at the time, Buzz's needlessly complex "opt-out" user-interface was a big part of the problem.
OK, perhaps the word "evil" is a little strong. There's no doubt that bad user-interfaces can come from good intentions. Design is difficult, and accidents do happen. But when an accident coincidentally bolsters a company's business model at the expense of its users' rights, it begins to look suspicious. And when similar accidents happen over and over again in the same company, around the same issues, it's more than just coincidence. It's a sign something's seriously wrong.
In yesterday's post, we asserted that the REACT high tech task force's search of Gizmodo editor Jason Chen's home and seizure of his computers and other property, as part of its investigation of that blog's reporting on the iPhone 4G prototype, was almost certainly illegal. That claim caused some to question whether the California shield law and the federal Privacy Protection Act (PPA) apply if the reporter himself is suspected of criminal activity.
Both statutory provisions likely apply here, and for good reason. The First Amendment does not excuse illegal activities, but it certainly provides safeguards to ensure that free speech interests are not trampled along the way.
Regarding the PPA, as we said in our original post, "[t]he PPA includes an exception for searches targeting criminal suspects (which Chen may or may not be), but that exception does not apply 'if the offense to which the materials relate consists of the receipt, possession, communication, or withholding of such materials or the information contained therein.'" If Chen’s property was seized under the theory that he or Gizmodo might be guilty of, say, receiving stolen property for taking possession of the iPhone about which the blog reported, even if he had reason to believe that it was stolen, then the seizure likely violated Chen’s PPA rights because the alleged crime would be one covered by the federal statute.
The California law is more stark. Penal Code section 1524(g) sets forth that "no warrants shall issue" for unpublished "notes, outtakes, photographs, tapes or other data of whatever sort" if that information was "obtained or prepared in gathering, receiving or processing of information for communication to the public." There is no statutory exception for cases in which the journalist is the one under investigation. If the California legislature intended such an exemption, it could easily have included one, as it did in another part of the same Penal Code section 1524, subdivision (c), which prohibits search warrants targeting physicians, psychotherapists, and members of the clergy, with an explicit exception if they are “reasonably suspected of engaging or having engaged in criminal activity related to the documentary evidence for which a warrant is requested." (For a review of the respective histories of Penal Code subsections 1524(c) and (g), see PSC Geothermal Services Co. v. Superior Court, 25 Cal. App. 4th 1697, 1705 (Cal. Ct. App. 1994).)
Notwithstanding the clear language of the statute, some observers have pointed to the case of Rosato v. Superior Court, 51 Cal.App.3d 190 (1975), arguing that it stands for the proposition that California's state shield law "wouldn't apply to subpoenas or searches for evidence of such criminal activity." The Rosato decision, however, addresses whether a constitutional right (in that case the right to receive a fair trial) could trump the Evidence Code under certain circumstances. One problem with relying on Rosato is that the reporter’s privilege is now a constitutional and not merely a statutory right, having been overwhelmingly approved by voters in 1980 (after the Rosato decision). See, e.g., Liggett v. Superior Court (Gregerson), 260 Cal. Rptr. 161 (Cal. Ct. App. 1989) ("The purpose of adding the shield law to the Constitution was ostensibly to trump the reasoning of Rosato and Farr and to further insulate the shield law from judicial tampering.") (vacated on other grounds). If the reporter’s privilege is to give way to a competing right, that right must be constitutional in nature, as the California Supreme Court noted in Miller v. Superior Court, 21 Cal. 4th 883, 898 (Cal. 1999):
[T]here is nothing illogical in interpreting “the people['s] ... right to due process” not to include the right to compel the press through the sanctions of contempt-incarceration and substantial fines-to supply unpublished information obtained in the newsgathering process. The fact that the assertion of this immunity might lead to the inability of the prosecution to gain access to all the evidence it desires does not mean that a prosecutor's right to due process is violated, any more than the assertion of established evidentiary privileges against the prosecution would be a violation.
A bigger problem is that Rosato had nothing to say about the warrant restrictions Penal Code section 1524(g) sets forth to ensure that police investigations involving reporters do not disturb the confidentiality of sources or other unpublished information.
Protections for journalists implicate not only the journalist's right to speak but also the public's interest in obtaining information. That is why the First Amendment protects reporters who publish truthful information, even when it was illegally gathered. See, e.g., Bartnicki v. Vopper, 532 U.S. 514, 527-28, 533-35 (2001) (First Amendment barred imposition of civil damages under wiretapping law for publishing contents of conversation relevant to matter of public concern); Smith v. Daily Mail Pub. Co., 443 U.S. 97 (1979) (First Amendment barred prosecution under state statute for publishing name of a juvenile defendant). These protections apply even when the reporter has arguably stolen commercial trade secrets or otherwise violated the law. See, e.g., Procter & Gamble Co. v. Bankers Trust Co., 78 F.3d 219 (6th Cir. 1996) (overturning an injunction preventing Business Week from publishing information about a court case even though the District Court had found that the magazine had "knowingly violated the protective order" by obtaining the documents that necessarily reflected "trade secrets or other confidential research, development or commercial information...."); CBS Inc. v. Davis, 510 U.S. 1315 (1994) (permitting broadcast of footage of a meat-packing operation obtained through “calculated misdeeds.”).
To be sure, if Gizmodo or Chen did break the law, the First Amendment will likely not affect their potential civil or criminal liability. (The police have not yet identified what crime was allegedly committed, who allegedly committed it, or what evidence supports such an allegation.) But even in instances in which a reporter may have violated the law, and could be subject to criminal or civil liability for that violation, the First Amendment still applies, as do the procedural safeguards in California law and the federal PPA. Simply put, while a court may conclude, under particular facts and circumstances, that a reporter must divulge sources or unpublished materials, or that he is liable for his misdeeds, police may not decide on their own to ignore free speech protections for journalists merely by claiming that the reporter may have committed a crime.
Since its incorporation just over five years ago, Facebook has undergone a remarkable transformation. When it started, it was a private space for communication with a group of your choice. Soon, it transformed into a platform where much of your information is public by default. Today, it has become a platform where you have no choice but to make certain information public, and this public information may be shared by Facebook with its partner websites and used to target ads.
To help illustrate Facebook's shift away from privacy, we have highlighted some excerpts from Facebook's privacy policies over the years. Watch closely as your privacy disappears, one small change at a time!
We understand you may not want everyone in the world to have the information you share on Facebook; that is why we give you control of your information. Our default privacy settings limit the information displayed in your profile to your school, your specified local area, and other reasonable community limitations that we tell you about.
Profile information you submit to Facebook will be available to users of Facebook who belong to at least one of the networks you allow to access the information through your privacy settings (e.g., school, geography, friends of friends). Your name, school name, and profile picture thumbnail will be available in search results across the Facebook network unless you alter your privacy settings.
Facebook is designed to make it easy for you to share your information with anyone you want. You decide how much information you feel comfortable sharing on Facebook and you control how it is distributed through your privacy settings. You should review the default privacy settings and change them if necessary to reflect your preferences. You should also consider your settings whenever you share information. ...
Information set to “everyone” is publicly available information, may be accessed by everyone on the Internet (including people not logged into Facebook), is subject to indexing by third party search engines, may be associated with you outside of Facebook (such as when you visit other sites on the internet), and may be imported and exported by us and others without privacy limitations. The default privacy setting for certain types of information you post on Facebook is set to “everyone.” You can review and change the default settings in your privacy settings.
Certain categories of information such as your name, profile photo, list of friends and pages you are a fan of, gender, geographic region, and networks you belong to are considered publicly available to everyone, including Facebook-enhanced applications, and therefore do not have privacy settings. You can, however, limit the ability of others to find this information through search using your search privacy settings.
When you connect with an application or website it will have access to General Information about you. The term General Information includes your and your friends’ names, profile pictures, gender, user IDs, connections, and any content shared using the Everyone privacy setting. ... The default privacy setting for certain types of information you post on Facebook is set to “everyone.” ... Because it takes two to connect, your privacy settings only control who can see the connection on your profile page. If you are uncomfortable with the connection being publicly available, you should consider removing (or not making) the connection.
Viewed together, the successive policies tell a clear story. Facebook originally earned its core base of users by offering them simple and powerful controls over their personal information. As Facebook grew larger and became more important, it could have chosen to maintain or improve those controls. Instead, it's slowly but surely helped itself — and its advertising and business partners — to more and more of its users' information, while limiting the users' options to control their own information.
At last week's "f8" Facebook developer conference, Mark Zuckerberg's notable quotable was that Facebook is "building a Web where the default is social." To our ears, that sounds like "a Web where exposure is the norm." To achieve this, Facebook is rolling out technologies that essentially put Facebook features on other sites, while those sites share data back to Facebook.
Despite the voluminous buzz, many commentators have missed the most confusing announcement of all — new Facebook jargon. So, in the interests of helping users understand what's going on, we've put together a rough Facebook-to-English translator. Think of it as a handy phrase-book that could help you navigate through the more common situations you'll find yourself in.
Important to note: Facebook makes frequent changes to its features. We believe this post to be accurate at the time of publication, but please understand that Facebook may change some or all of these definitions beyond recognition before long. In addition, be aware that Facebook operates differently in Europe than it does in the USA, because European nations tend to have stronger privacy-protection laws.
This is the term Facebook uses to describe information that it wants to share with anybody and everybody. Knowing what information Facebook considers "public" at any given moment can be confusing, but it's key to understanding what information Facebook may share with its business partners without seeking further permission.
Any time "public information" is referenced now, Facebook is talking about your: name, profile picture, current city, gender, networks, complete list of your friends, and your complete list of connections (formerly the list of pages that you were a "fan" of, but now including profile information like your hometown, education, work, activities, likes and interests, and, in some cases, your likes and recommendations from non-Facebook pages around the web).
Facebook offers a number of controls over what information is "visible" on your profile. This determines what can be seen by someone who visits your profile page, but does not change whether the information is "public information." As Facebook explains, "Keep in mind that Facebook Pages you connect to are public. You can control which friends are able to see connections listed on your profile, but you may still show up on Pages you're connected to." Likewise, "While you do have the option to hide your Friend List from being visible on your profile, it will be available to applications you use and websites you connect with using Facebook." Because Facebook deems this information "public," it reserves the right to share that information with its business partners and third party websites, regardless of your visibility settings.
Facebook's "Pages" are distinct from regular Facebook user profiles, and have generally been used to represent non-user entities like companies, non-profits, products, sports teams, musicians, etc. Community Pages are a new type of Page "dedicated to a topic or experience," such as cooking. These will replace interests and activities.
Last December, Facebook made your Page affiliations available to everyone — non-Friends, advertisers, and data miners included — by classifying Pages as publicly available information.
You create a "Connection" to most of the things that you click a "Like button" for, and Facebook will treat those relationships as public information. If you Like a Page on Facebook, that creates a public connection. If you Like a movie or restaurant on a non-Facebook website (and if that site is using Facebook's OpenGraph system), that creates a public connection to either the applicable Page on Facebook or the affiliated website.
Last week, Facebook announced a plan to transform most of the bits in your profile (including your hometown, education, work, activities, interests, and more) into connections, which are public information. If you refuse to make these items into a Connection, Facebook will remove all unlinked information.
Social plugins allow other websites to incorporate Facebook features and share data with Facebook. Examples of social plugins include "Like buttons" that share information back to your Facebook profile when clicked; an "Activity Feed" that will show content that you've Liked on that site to Facebook friends; and more.
From the Facebook FAQ: "If you click "Like" or make a comment using a social plugin, your activity will be published on Facebook and shown to your Facebook friends who see an Activity Feed or Recommendations plugin on the same site. The things you like will be displayed publicly on your profile."
OpenGraph is a new Facebook program that grants any website a way to create objects that can become "connections" on Facebook user profiles. At the moment, some sites appear to be using OpenGraph in conjunction with the Facebook "Like button" in order to publish information back to your Facebook profile's list of Pages — information that everyone is able to see.
For example, the Internet Movie Database (IMDb) appears to be using OpenGraph in conjunction with the Like button social plugin. When you click to Like a movie on IMDb, that movie gets added to your list of Pages.
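To make the mechanics concrete: a site describes itself as an OpenGraph object by embedding `og:` meta tags in its pages' HTML, which a crawler can then read to turn the page into a connectable object. Below is a minimal, illustrative sketch in Python (standard library only) that extracts those tags from a sample page. The sample markup mirrors the canonical example from the Open Graph protocol documentation; the `OpenGraphParser` class is our own illustration, not Facebook's actual code.

```python
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collects Open Graph <meta property="og:..." content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        # Only keep og:* properties that carry a content attribute
        if prop.startswith("og:") and "content" in attrs:
            self.og[prop] = attrs["content"]


# A hypothetical movie page marked up the way OpenGraph expects:
sample_page = """
<html><head>
  <meta property="og:title" content="The Rock" />
  <meta property="og:type" content="movie" />
  <meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
</head><body>A movie page</body></html>
"""

parser = OpenGraphParser()
parser.feed(sample_page)
print(parser.og["og:type"])  # movie
```

Any crawler (Facebook's included) can read these tags to learn the page's title, type, and canonical URL; once a user clicks a Like button wired to such a page, that object can appear among the user's public connections.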
Instant Personalization is a pilot program that allows a few non-Facebook websites to obtain and make use of your public Facebook information as soon as you visit those websites. For example, the music website Pandora receives access to the list of music artists that you Liked on Facebook in order to pick songs to play (for users who are logged into Facebook and who have not opted out of Instant Personalization).
For users that have not opted out, Instant Personalization is instant data leakage. As soon as you visit the sites in the pilot program (Yelp, Pandora, and Microsoft Docs) the sites can access your name, your picture, your gender, your current location, your list of friends, all the Pages you have Liked — everything Facebook classifies as public information. Even if you opt out of Instant Personalization, there's still data leakage if your friends use Instant Personalization websites — their activities can give away information about you, unless you block those applications individually.
Last week’s police raid on Gizmodo blogger Jason Chen’s house, in response to a request from Apple Inc., has led many to wonder why government resources are being spent on a spat between Apple and Gizmodo.
But here at EFF, we are also wondering if we’ve just seen the future of copyright enforcement. Although the Gizmodo seizure doesn’t appear to be rooted in copyright, having cops kicking in doors over what seems like a private dispute reminded us of recent efforts by the big content industries to get law enforcement to go after “copyright thieves.”
Usually, copyright law requires copyright owners to do and pay for their own enforcement efforts – on top of the windfall of a limited monopoly and the hammer of statutory damages, they don't also get to require the public to bankroll enforcement for them. But the big content industries are trying to reverse that presumption, demanding (via wish lists sent to the new IP Czar last month) that federal agencies devote more resources to finding and catching “copyright thieves.” For example, the Motion Picture Association of America, the Recording Industry Association of America and others filed joint comments arguing, among other things, that:
The planned release of a blockbuster motion picture should be acknowledged as an event that attracts the focused efforts of copyright thieves, who will seek to obtain and distribute pre-release versions and/or to undermine legitimate release by unauthorized distribution through other channels . . . An interagency task force should work with industry to coordinate and make advance plans to try to interdict these most damaging forms of copyright theft, and to react swiftly with enforcement actions where necessary.
In other words, while the movie studios are reporting record profits, we should deputize the FBI and Department of Homeland Security to provide taxpayer-supported muscle for summer blockbuster films.
This submission also urged state and local police to get involved in copyright policing, using “state labeling laws”: “State labeling laws that define unauthorized online file sharing and streaming as a felony would provide state and local law enforcement with jurisdiction to investigate and prosecute online theft of intellectual property.”
The International Intellectual Property Alliance (IIPA), which represents most of the entertainment industry’s biggest players, also wants to see a chilling expansion of law enforcement involvement in copyright enforcement, including:
empowering government agents to prosecute alleged infringements, whether or not a copyright owner has actually complained;
expanded "information sharing" between copyright owners and law enforcement, including border officials, i.e., a direct two-way pipeline between Big Media and the cops;
issuance and execution of search warrants without notice to the alleged infringer.
The Software & Information Industry Association supports many similar measures, and also suggests that convicted infringers should be required to make public video confessions, to be posted online and "used for education in schools and in training programs."
If this wish list strikes you as disturbing, it should. Any government enforcement of copyrights should be focused on large scale, commercial infringements that can’t be adequately deterred by civil lawsuits, using the already powerful existing legal tools. The Gizmodo seizure reminds us that not only are our tax dollars at stake, but also our civil liberties. Whether you’re a blogger or a simple citizen, take note: if copyright policing becomes a regular item on the law enforcement agenda, you can expect more bogus search warrants, and more doors to be broken down.
Federal and California law both protect reporters against police searches aimed at uncovering confidential sources or seizing other information developed during newsgathering activities. Yet on Friday, agents with the Rapid Enforcement Allied Computer Team (REACT) executed a search warrant at Gizmodo editor Jason Chen’s home, searching for evidence related to Gizmodo's scoop on what appears to be a pre-release version of Apple's next iPhone model. The warrant does not reveal whether Chen himself is considered a criminal suspect, or what alleged crime the police are investigating, but Chen was not arrested. All of his computers and hard drives (among other materials) were seized for further search and analysis.
Under California and federal law, this warrant should never have issued. First, California Penal Code Section 1524(g) provides that "[n]o warrant shall issue for any item or items described in Section 1070 of the Evidence Code." Section 1070 is California's reporter's shield provision (which has since been elevated to Article I, § 2(b) of the California Constitution). The items covered by the reporter's shield protections include unpublished information, such as "all notes, outtakes, photographs, tapes or other data of whatever sort," if that information was "obtained or prepared in gathering, receiving or processing of information for communication to the public." The warrant explicitly authorizes the seizure of such protected materials and information, including the photographs and video taken of the iPhone prototype, as well as research regarding the Apple employee who purportedly lost the phone. This fact alone should have stopped this warrant in its tracks.
Second, the warrant likely violates the Privacy Protection Act (or PPA, 42 USC § 2000aa et seq.). Congress passed the PPA to ensure special protection for journalists by prohibiting government search and seizure of both "documentary material" (explicitly including photos and video) and "work product material," material which is or has been used "in anticipation of communicating such materials to the public." 42 USC § 2000aa-7(a) and (b). The PPA includes an exception for searches targeting criminal suspects (which Chen may or may not be), but that exception does not apply "if the offense to which the materials relate consists of the receipt, possession, communication, or withholding of such materials or the information contained therein." 42 USC § 2000aa(a)(1). Violations of the PPA could render the law enforcement agencies or the individual officers who searched Chen's house liable for damages of no less than $1,000.
The purpose of the PPA and state shield law is to prevent police from rummaging through sensitive information contained in a reporter's notes and communications. This search warrant is particularly worrisome on this point because it is so plainly overbroad. An officer seeking a search warrant must demonstrate to the issuing judge both probable cause that a crime was committed and that there is a reasonable basis to conclude that the materials sought and searched are relevant to that crime. The warrant issued in the Chen case was remarkably broad, seeking "all records and data located and/or stored on any computers, hard drives, or memory storage devices, located at the listed location." That a computer or hard drive may be capable of storing information relevant to the case is not enough. Unless the warrant application provided a factual basis to tie Chen's computers (and "digital cameras," "display screens," "mice," "cassette tapes," "CD-ROM disks," etc.) to the suspected crime, any information obtained from them could be thrown out. Furthermore, the Ninth Circuit Court of Appeals (the federal appellate court for California and the surrounding states) in its 2009 opinion in United States v. Comprehensive Drug Testing Inc., 579 F.3d 989 (9th Cir. 2009), identified a series of guidelines meant to ensure that even otherwise lawful warrants authorizing the search and seizure of computers do not give officers too much access to private data that might be intermingled with evidence of a crime. This warrant does not appear to comply with those guidelines.
The police appear to have gone too far. The REACT team, "a partnership of 17 local, state, and federal agencies" with a "close working partnership with the high tech industry," seems to have leapt eagerly to Apple's aid before it looked at the law. Putting the presumed interests of an important local company before the rights guaranteed by law is an obvious occupational hazard for a police force charged with paying particular attention to the interests of high tech businesses. Now that First Amendment lawyers, reporters, and others have highlighted the potential legal improprieties of this search, the task force should freeze their investigation, return Chen's property, and reconsider whether going after journalists for trying to break news about one of the Valley's most secretive (and profitable) companies is a good expenditure of taxpayer dollars.
[Colorado Law Professor Paul Ohm has more on this issue at Freedom to Tinker, in particular looking at the effect of Comprehensive Drug Testing on this search.]