Almost all posts on social media include depictions of real people. And most social media websites include advertising. Does this combination mean that nearly everyone featured on social media can sue for infringement of their right of publicity? That would be disruptive. Fortunately, a new ruling [PDF] by the California Court of Appeal confirms that more is needed for a right of publicity claim. This is a big win for free expression online.
The decision comes in a case called Cross v. Facebook. The case was brought by a country-rap artist who performs under the stage name Mikel Knight. He promotes his music using “street teams” that sell CDs out of vans. After these vans were involved in several accidents (causing two deaths), Knight was accused of pushing his sales teams too hard and creating an unsafe environment. Some Facebook users created a page called “Families Against Mikel Knight” where ex-street team members and others could comment on Knight’s operation.
Knight filed a lawsuit against Facebook asserting a collection of claims, including one for infringement of his right of publicity. Facebook responded with an anti-SLAPP motion seeking to dismiss the complaint. Since Knight was effectively trying to hold Facebook liable for content posted by users, the court correctly dismissed most of Knight’s claims as barred under CDA 230. But the superior court did allow Knight’s right of publicity claim to proceed. The right of publicity is supposed to prohibit unauthorized commercial use of a person’s identity. The court reasoned [PDF] that Facebook had “used” Knight’s likeness because his image appeared on pages that also included advertisements.
The Court of Appeal disagreed, holding that Knight could not assert a right of publicity claim. It wrote:
Nowhere does Knight demonstrate that the advertisements appearing next to the pages used his name or likeness, or that any of the advertisements were created by, or advertised, Facebook. All he claims is that Facebook displayed advertisements next to pages created by third parties who were using Knight’s name and likeness to critique his business practices—and their allegedly fatal consequences. While Knight claims that “Facebook continues to place ads on all the unauthorized Facebook pages,” he necessarily concedes that his name and likeness appear not in the ads themselves, but in the content posted to Facebook by third parties. This is insufficient.
This is the right result. Courts had previously held, for example, that a magazine article does not give rise to a right of publicity claim just because it is placed next to an advertisement. There is no reason to have a different, less protective, rule for the Internet.
Since it found that Knight had not pleaded a viable right of publicity claim, the appellate court did not decide whether his claim was also barred by CDA 230 or the First Amendment. But even though it did not reach these issues, the ruling places an important limit on the right of publicity and is a victory for online speech.
There’s a bill in the California Assembly that we think would make postsecondary education more expensive for students. Not only that: we think that it would undermine students’ right to make fair uses of educational materials. To make matters worse, several states around the country appear to be considering similar measures.
S.B. 727 may seem benign. The bill’s purpose appears to be to give public colleges and universities more leeway in what types of course materials they assign to students and what types of pricing agreements they enter with the publishers of those materials. There’s a troubling provision, though, which says that institutions can assign texts that are “Delivered through a technology that is, or the license of which is, required to only be used within a course.” In other words, public colleges would be encouraged to assign materials that are locked down under arcane licensing agreements unfairly restricting how students can use them.
Under current law, publishers are urged to provide “unbundled” versions of textbooks for students—that is, to make books available as-is without forcing students also to buy expensive online subscriptions—and faculty are discouraged from assigning books that aren’t available unbundled. The law also directs institutions to facilitate resale and sharing of books among students, in order to help keep students’ costs low. S.B. 727 could undermine all of that.
EFF has written a lot about how manufacturers and media owners attempt to use Terms of Service agreements to ban otherwise lawful use of their products or copyrighted works. These providers sometimes argue that by clicking “I agree,” you relinquish your fair use rights. The standard set by S.B. 727—saying that publishers can force students to use their materials “only within a course”—would clearly invite publishers to attempt to use those licensing agreements to restrict students’ fair use rights.
Unfortunately, this dangerous bill has flown under the radar since it was introduced in April. It passed out of committee with barely a debate, and we’re concerned that it could quietly become law any day.
There may be educational benefits to a transition from traditional textbooks to online services, but it’s essential that institutions make that transition in a way that isn’t a step backward for students. States should partner only with publishers that understand and respect students’ fair use rights, and they certainly shouldn’t enact laws giving those publishers legal ground to unfairly restrict how students use their materials.
It’s also essential that schools recognize the power of secondary markets: being able to buy and sell used textbooks gives students an important lever with which to rein in unfair pricing tactics by publishers. Unfortunately, a transition to online systems may mean sacrificing some of students’ power to balance out unfair pricing through resale. Institutions should take that into account when choosing what course material offerings to use.
The best way to meet both of those needs—respecting students’ fair use rights and acknowledging that online resources can limit students’ ability to push back against price gouging—is to enact policies that give high priority to open educational resources (OER).
A few months ago, we received confirmation of what many of us had feared: incoming Federal Communications Commission Chair Ajit Pai announced his plans to eliminate the clear, enforceable protections for net neutrality that the Commission had implemented in 2015.
Since then, people have stood up en masse in support of the open Internet. Over 18 million comments have been filed with the FCC—the majority of them opposing the Commission’s plan to roll back protections for net neutrality. (And it’s not too late! You still have one more week to file a comment of your own.)
Team Internet sent a loud and clear message to the FCC: users have a right to expect protections from unfair practices like site blocking and throttling, and FCC enforcement under Title II of the Communications Act is the only means to secure those protections.
Next month, the House Energy and Commerce Committee will host a hearing on net neutrality. It has invited all of the major Internet service providers, as well as large Internet businesses like Facebook, Google, and Netflix, to come and testify. While it may be encouraging to see Congress turning its focus to net neutrality, it’s troubling that lawmakers appear to be more interested in the thoughts of a handful of large corporations than those of the public that’s been overwhelmingly calling for the preservation of existing net neutrality protections.
We can demand that lawmakers hear from us, though. Please take a moment to write your members of Congress and urge them to stand behind the Open Internet Order. Don’t let Congress compromise on your right to a free and open Internet.
If you have ever wanted to use the wifi at a coffee shop or library, you have probably had to click through a screen to do it. This screen might have shown you the network’s Terms of Service and prompted you to click an “I agree” button. Depending on where you were, it might have asked you for information about yourself, like your email, social media accounts, room number (in a hotel), account number (in a library), or other identifying information. Sometimes you even have to watch a short video or ad before wifi access is granted.
These kinds of screens are called captive portals, and they interfere with wireless security without providing many user benefits.
One example of a captive portal. In addition to getting the user's agreement to Terms of Service, other captive portals might ask for login information, social media accounts, email addresses, or other information.
Security and Privacy Problems for Users
Captive portals are to blame for a number of security issues, especially when it comes to HTTPS websites. HTTPS is meant to prevent traffic interception, alteration, and impersonation by a third party. But captive portals work by doing exactly that: they intercept and alter the connection between the user and the site they are trying to visit. On an unencrypted HTTP connection, the user would not even notice this. But for sites secured with HTTPS, the web browser detects something or someone hijacking the connection (similar to a man-in-the-middle attack). This causes “untrusted connection” warnings about fake certificates for websites that users otherwise expect to be safe.
Those copious unexplained “untrusted connection” warnings on a network with captive portals—essentially false-positive warnings about websites that are actually safe—can train users to adopt the dangerous habit of ignoring security warnings.
And that’s not the only inaccurate lesson captive portals teach users about wireless security. The illusion of security that a log-in window may provide can lead users to inaccurately believe that wireless networks with captive portals are safer than those without.
On top of that, captive portals may not play nicely with devices and software that don’t have web browsers. This can all be confusing and cumbersome for people trying to use the network.
Despite all this, businesses and organizations have several incentives to use captive portals. Chief among these is user authentication—that is, giving administrators some idea of who is using the wireless network and when. Captive portals that require information about you tie your online activities to a specific login or identity. In addition to monitoring the network, this can help an organization harvest emails for marketing campaigns, or collect social media information to sell to third parties—all trading user privacy in exchange for network access.
Organizations might also use captive portals to display a Terms of Service page. However, that is not the only way to make sure users see and agree to an access policy. The Open Wireless Movement, for example, offers an alternative. Posting a Terms of Service in a physical space, like in a library, can also be an option.
For Network Admins: If A Captive Portal Is Necessary, Follow These Best Practices
If you administer a network and must use a captive portal, you can follow best practices to mitigate some of the security and privacy problems described above.
First, let’s look at the problem of copious security warnings. The captive portal should reject connections on port 443 for hostnames it does not recognize. This will generate a “CONNECTION_REFUSED” error rather than the “Connection not private” error that would result from serving an invalid certificate, and will avoid desensitizing users to the risky behavior of clicking through that type of warning. Of course, even better is to pass through HTTPS connections without interference, if that meets your needs.
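On a Linux-based gateway, this reject-don’t-intercept behavior can be sketched with iptables rules. This is only an illustration under stated assumptions: the LAN interface name (eth0) and the portal host’s address (192.168.1.2) are placeholders, and a real deployment would also track which clients have already authenticated (e.g. with ipset) before applying these rules.

```shell
# Hedged sketch: eth0 and 192.168.1.2 are assumed values, not
# defaults of any particular captive portal product.

# Redirect unauthenticated plain-HTTP traffic to the portal's login page:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.1.2:80

# Let clients reach the portal host itself over HTTPS, where it can
# present a valid certificate for its own domain:
iptables -A FORWARD -i eth0 -p tcp -d 192.168.1.2 --dport 443 -j ACCEPT

# Refuse -- rather than intercept -- all other HTTPS, so browsers show
# a connection-refused error instead of a certificate warning:
iptables -A FORWARD -i eth0 -p tcp --dport 443 -j REJECT --reject-with tcp-reset
```

The key design choice is the final rule: `REJECT --reject-with tcp-reset` closes the connection cleanly instead of serving a forged certificate, which is what trains users to click through security warnings.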
Second, there’s the challenge of authenticating network users. In many cases, access to a restricted network may require a complex login flow that is not currently supported by wifi’s simple shared password model. In general, such networks are better off using the more sophisticated WPA2 Enterprise model. In cases where that’s not feasible, the network can minimize captive portal harm by: (1) using a valid certificate on a domain name rooted in the public DNS, (2) not interfering with captive portal detection, (3) ensuring the login works in a restricted captive portal login environment (e.g. don’t require a logged-in Facebook account), and (4) rejecting HTTPS connections to external domains during the login process, rather than serving an incorrect certificate.
Finally, take advantage of existing device and OS features. Device and OS vendors have come up with ways to minimize the harms of captive portals, by sending an innocuous request on first connection to a network. If that request is interfered with, the OS will open up a special, limited browser to interact with the likely captive portal screen. Unfortunately, some captive portal software interferes with these detection methods by treating the “innocuous request” differently. Instead, best practice is to simply let the captive portal detection software do its job.
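To make the detection mechanism concrete: Android-family devices probe a known URL (such as http://connectivitycheck.gstatic.com/generate_204) on first connection and expect an empty HTTP 204 response; any other answer signals that something on the network rewrote the request. The following is only a sketch of that decision logic (the function name and structure are our own, not any OS's actual implementation):

```python
# Sketch of Android-style captive portal detection logic.
# The OS fetches a probe URL like
#   http://connectivitycheck.gstatic.com/generate_204
# and expects an empty 204 response. A captive portal typically
# rewrites that response into a redirect or a login page.

def looks_like_captive_portal(status_code, body):
    """Given the response to a generate_204-style probe, guess
    whether a captive portal intercepted it. A transparent network
    returns 204 with an empty body; anything else suggests a portal."""
    return not (status_code == 204 and body == b"")

# A network that passes the probe through untouched:
assert looks_like_captive_portal(204, b"") is False
# A portal that redirects the probe to its login page:
assert looks_like_captive_portal(302, b"") is True
# A portal that serves its login page directly:
assert looks_like_captive_portal(200, b"<html>Log in</html>") is True
```

This is why captive portal software that special-cases the probe request (answering it with a clean 204 while intercepting everything else) defeats the OS's detection and leaves users stuck without the helper browser.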
Toward More Open, Privacy-Protective Wireless
For most networks, captive portals are an unnecessary barrier between users and a wireless connection. Instead of providing access benefits, they only make users less safe. As we collectively move away from captive portals in our businesses and public spaces, we can move toward more open, more privacy-protective wireless access.
This blog post was first published in The Hill on July 18, 2017.
This summer, the U.S. Department of Homeland Security (DHS) is expanding its program of subjecting U.S. and foreign citizens to facial recognition screening at international airports. This indiscriminate biometric surveillance program threatens the personal privacy of millions of travelers. DHS should end it.
The history of this program is a case study in mission creep. In 1996, Congress authorized automated tracking of foreign citizens as they enter and exit the U.S. In 2004, DHS began biometric screening of foreign citizens upon arrival. In 2016, DHS launched a pilot program of facial recognition screening of all travelers, U.S. and foreign citizens alike, on a daily international flight out of Atlanta’s Hartsfield-Jackson airport. In March 2017, President Trump’s revised travel ban ordered DHS to expedite the completion of biometric entry-exit screening of foreign citizens. Today, facial recognition screening is underway for all travelers on certain international flights out of two more pilot sites: Washington’s Dulles airport and Houston’s Bush airport. Later this summer, DHS will expand this program to five more international airports.
To be clear—what began as DHS’s biometric travel screening of foreign citizens morphed, without congressional authorization, into screening of U.S. citizens, too. In the words of DHS’s recent testimony to Congress: “U.S. citizens are not exempted from this process.” Privacy advocates have long opposed biometric screening of immigrants. Now we also oppose the expansion of biometric border screening to cover U.S. citizens.
For many reasons, DHS should end its ever-growing biometric border screening program.
First, facial recognition is a unique threat to our privacy. Most of us display our faces wherever we go. Cameras are increasingly accurate at great distances. Facial recognition algorithms are increasingly powerful. Computer systems are increasingly interoperable. Thus, for example, an easy-to-use Russian mobile app called FindFace allows strangers to identify each other by using facial recognition to link an ordinary phone camera to a popular social networking site. If identity thieves or stalkers target us, we can change our credit card numbers and even our names, but we cannot change our faces.
Second, facial recognition has significant accuracy problems. Thus, many international travelers will be unjustly delayed and scrutinized, and scarce law enforcement resources will be wasted, due to the inevitable errors of government biometric screening systems. Worse, facial recognition error rates are even higher for African-American travelers than for white travelers, perhaps because people of color are underrepresented in algorithmic training data. So the DHS program will have an inevitable racial disparate impact.
Third, data thieves might steal DHS’s biometric information. In the infamous 2015 data breach of the U.S. Office of Personnel Management, hackers absconded with the fingerprints of over five million people. As part of its border screening, DHS plans to retain the biometric information of U.S. citizens for as long as two weeks. DHS does not rule out keeping this sensitive information even longer. DHS retains the biometric information of foreign citizens for many years. DHS processes more than 300,000 international air travelers every day. Their biometric information will be an enticing target for data thieves.
Fourth, government employees might misuse DHS’s reservoir of biometric data. NSA and police officials alike (usually male) have abused sensitive government databases to acquire information about people (usually female) that they are romantically interested in.
Fifth, DHS might share with other government agencies the biometric information it seizes from travelers. Many government agencies share their biometric data with each other. For example, the FBI’s facial recognition system has access to more than 400 million photos held by other agencies (in addition to the FBI’s own repository of 30 million photos). Likewise, half of all adult U.S. drivers live in states whose motor vehicles agencies share their license photos with police facial recognition systems, according to a 2016 study by the Georgetown Law Center on Privacy and Technology. DHS’s biometric border screening system is part of this larger web of government biometric surveillance. If DHS shares its biometric data, the photographs of millions of innocent travelers could wind up in criminal justice databases.
Sixth, DHS might expand the ways it uses its biometric screening system. Today, DHS uses it to enforce immigration laws and ensure traveler identity. Tomorrow, DHS could try to use it to identify travelers who are wanted on outstanding warrants. Police warrant databases are riddled with error. And they include many people sought for traffic infractions and other minor offenses, which should not impede anyone from flying to a family event or a work opportunity.
DHS recently took the alarming position that “the only way for an individual to ensure he or she is not subject to collection of biometric information when traveling internationally is to refrain from traveling.” But our government should not try to force us to abandon one of our human rights (biometric privacy) in order to enjoy another (travel).
It gets worse. DHS is now exploring how to subject U.S. and foreign citizens to biometric airport screening not just for international departures, but also for international arrivals and even for domestic flights, according to a recent article in The Verge. A DHS executive explained: “Why not look to drive the innovation across the entire airport experience? . . . We want to make it available for every transaction in the airport where you have to show an ID today.”
Far from expanding its system of biometric border screening, DHS should end it. At a minimum, DHS must publish clear policies to ensure that any such screening is a knowing and voluntary opt-in choice, and that border agents do not coerce or trick any traveler into surrendering their biometric privacy.
At EFF, we keep very, very busy. Our past is invariably tangled with the present—long-running court cases that stretch on for years, and hard-won battles that it turns out we have to re-visit. We kicked off 2016 with a blast from the past—the latest salvo in the Crypto Wars—and, along with the rest of the country, entered a new era in November. We've been moving at top speed ever since.
Every once in a while, though, it's wise to catch our breath and remember why we fight as hard as we do. Our 2016 Annual Report includes reflections from several EFF staff members on the work we do, and why we do it. In looking back, we look forward with fresh resolve. We hope you will, too.
Last month we wrapped up another successful summer membership drive. Thank you to everyone who participated in EFF’s Summer Security Camp! Whether it was sharing with your friends or helping us reach our match goal, you continue to make our work defending digital rights possible, and for that we are truly grateful.
We would like to extend our special thanks to the Botero-Lowry family for generously funding our match campaign when we reached our goal of 1,000 donors.
One more exciting announcement: our limited-edition Know Your Rights guides and commemorative EFF embroidered patch will be available on our online store very soon! PDFs of the guides in one-page format are available now (links below).
This summer, two of the west coast’s largest metropolitan areas—Seattle and Los Angeles County—took major steps to curtail secret, unilateral surveillance by local police. These victories for transparency and community control lend momentum toward sweeping reforms pending across California, as well as congressional efforts to curtail unchecked surveillance by federal authorities.
On July 31, the Seattle City Council adopted an ordinance requiring public participation when local police departments acquire surveillance technologies. Days before, a Los Angeles County oversight body rejected a proposed use policy governing the sheriff's department's use of surveillance drones, with a majority of commissioners expressing a preference for deputies not to use drones at all.
These measures are the frontrunners for others pending before municipal and county bodies across the country, as well as at the state level in California, America's most populous and prosperous state. S.B. 21—a bill poised to transform police oversight across California—has already passed the California State Senate and two State Assembly committees.
Seattle Adopts a New Transparency Ordinance
Last Monday’s vote in Seattle reflected a unanimous City Council agreement: the public must be involved in decisions about local police and other municipal surveillance.
The ordinance was long overdue. In 2013, the Seattle Police Department deactivated a federally funded surveillance mesh network after The Stranger reported that the invasive system was approved without meaningful scrutiny. SPD conceded that its process was insufficient, agreeing to suspend the program “until [the] city council approves a draft policy and until there's an opportunity for vigorous public debate.”
Seattle’s oversight ordinance requires the Council to hear from the public before a law enforcement (or any other municipal) agency may acquire surveillance equipment. The law is "device neutral," subjecting acquisition of any surveillance equipment to the process. According to Councilmember M. Lorena González, “this new law [will enable the] Council [to] be a check and balance on surveillance technology acquisitions because the public deserves to know how such data will be managed and for what purpose it is being collected.”
Unfortunately, Seattle’s ordinance has crucial gaps that limit its effectiveness. For example, the law's enforcement mechanism relies on private litigants, but unnecessarily limits their access to justice. In addition, broad exemptions carving out police body cameras and various sources of video surveillance exclude some of the most visible forms of surveillance from the ordinance’s protections. That said, the law still covers covert police spying tools like cell-site simulators, and ShotSpotter audio listening devices that cities across America are already abandoning due to concerns about their effectiveness.
For instance, the new law does not require that surveillance technology be used narrowly for the “purposes of a criminal investigation supported by reasonable suspicion.” It does, however, require agencies seeking technology to specify their policies for each device platform, including how they will collect, retain, and share data obtained through them.
In particular, the ordinance requires law enforcement seeking any surveillance technology to develop “a clear use and data management policy,” addressing “factors that will be used to determine where, when, and how the technology is deployed…whether [it] will be operated continuously or used only under specific circumstances,” and also specifies “what processes will be required prior to each use…including…what legal standard…must be met before [it] is used….”
By providing transparency and community control at the point of acquisition, the new ordinance enables future policymakers and activists to seek more demanding limits on these parameters, not only through future legislation, but also through the process now required before each proposed technology acquisition.
LA County Commissioners Reject Proposed Drone Policy
In response to the Los Angeles County Sheriff’s Department acquisition of an unmanned aerial vehicle, concerned community members mobilized to challenge the normalization of drone surveillance. Hamid Khan from the Stop LAPD Spying Coalition argued that the sheriff’s operation of a surveillance drone “represents…the rapid escalation and militarization of police.”
Others objected that the sheriff had previously hired a private company to conduct manned aerial surveillance of Compton in 2012 without securing permission from local policymakers. Yet others noted predictable “mission creep” and, according to the LA Times, “voiced concerns that the aircraft could someday be armed, as in North Dakota….”
Prompted in part by those concerns, the Los Angeles County Board of Supervisors voted in January to subject the sheriff’s surveillance drone operations to civilian oversight. On July 27, the Los Angeles County Sheriff Civilian Oversight Commission voted to reject the sheriff’s proposed use policy, with a majority of commissioners preferring for the sheriff not to use the device at all. According to the LA Times:
The four members voting against the recommendations…said at the meeting and afterward that they oppose the department’s use of drones altogether….A fifth commissioner…was not at the meeting but wrote a report issued Thursday explaining her support for grounding the drone.
Despite the commissioners’ opposition to the sheriff’s plans, the Stop LAPD Spying Coalition spokesperson noted that their decision “leaves things in limbo” since it lets the Sheriff’s Department continue deploying its drone.
Transforming Police Surveillance Across California: S.B. 21
The measure adopted in Seattle is similar to a statewide bill pending in California that could prevent the spy-first-ask-questions-later situations we saw in Seattle, Baltimore, and also in Los Angeles, by requiring new surveillance technologies to be approved by local policymakers—informed by public comment—before law enforcement agencies gain access to them.
S.B. 21, introduced by California State Sen. Jerry Hill (D-San Mateo), has already been approved by the state Senate, as well as two committees of the state Assembly. It is currently pending before the Assembly’s Appropriations Committee, which it must pass in order to receive a vote from the full Assembly.
Should S.B. 21 pass the Assembly and be signed into law by Gov. Jerry Brown, it would subject hundreds of law enforcement agencies at once to transparency and community control through requirements along the lines of—but even broader than—those adopted in Seattle this week.
Neither transparency nor public participation is a partisan ideal. Rather, they are principles of democratic governance long embraced across the American political spectrum.
The alternative to S.B. 21 is continued secrecy and executive fiat, which always contradict our nation's founding values—but never so much as when they infect local policymaking. Today, when the federal executive branch appears to have less concern than ever for the rule of law, state and local checks on executive power grow even more crucial.
In addition to supporting S.B. 21 by contacting their members of the state Assembly, we also encourage Californians who support digital rights to write social media posts and op-eds explaining in their own words why transparency and community control matter so much.
For those who live elsewhere, bringing together neighbors to learn about these reforms presents the opportunity to champion them in other areas. Grassroots groups active in the Electronic Frontier Alliance have been integral to the policymaking process underlying S.B. 21, and if the Alliance has not yet included a group in your area, the local policing reforms embodied in S.B. 21 present a chance to raise the flag wherever in the U.S. you live.
Transforming Surveillance Policy in Congress: FISA 702
Congress faces its own surveillance reckoning: Section 702 of the Foreign Intelligence Surveillance Act, the authority the NSA relies on for its mass surveillance programs, is set to expire at the end of 2017. Before considering any further extension of the NSA’s expiring authorities, Congress must first do the hard work of uncovering secret facts, as oversight bodies in Seattle, Los Angeles, and the rest of the State of California are finally doing at the local level.
Update [8/8/2017]: This post was updated to clarify the result of the Los Angeles County Sheriff Civilian Oversight Commission's vote on July 27.