EFF Opposes California Bill to Require Bot Disclosures Unless Amended

Note:  This post’s sub-heading has been updated to clarify EFF’s position on California’s bot labeling bill. As explained in our post, we are not opposed to bot labeling per se, but we oppose SB 1001 in its current form because of its over-broad scope and dangerous DMCA-inspired takedown provision, which will result in the censorship of lawful speech.

The Google Duplex demos released two weeks ago—one an audio recording of the company’s new AI system scheduling a hair appointment, the other a recording of the system calling a restaurant—are at once unsettling and astounding. The system is designed to enable the Google personal assistant to make telephone calls and conduct natural conversations, and it works; it’s hard to tell who is the robot and who is the human. The demos have drawn both awe and criticism, including claims that the company is “ethically lost” for failing to disclose that the caller was actually a bot and for adding human filler sounds, like “um” and “ah,” that some see as deceptive.

In response to this criticism, Google issued a statement noting that these recordings were only demos, that it is designing the Duplex feature “with disclosure built-in,” and that it is going to “make sure the system is appropriately identified.” We’re glad that Google plans to build transparency into this technology. There are many cases, and this may be one of them, where it makes sense for AIs or bots to be labeled as such, so that people can appropriately calibrate their responses. But across-the-board legally mandated AI- or bot-labeling proposals, such as a bill currently under consideration in California, raise significant free speech concerns.

The California bill, B.O.T. Act of 2018 (S.B. 1001), would make it unlawful for any person to use a social bot to communicate or interact with natural persons online without disclosing that the bot is not a natural person. The bill—which EFF opposes due to its over-breadth—is influenced by the Russian bots that plagued social media prior to the 2016 election and spambots used for fraud or commercial gain. But there are many other types of social bots, and this bill targets all of them. By targeting all bots instead of the specific type of bots driving the legislation, this bill would restrict and chill the use of bots for protected speech activities. EFF has urged the bill’s sponsor to withdraw the proposal until this fundamental constitutional deficiency is addressed.

While across-the-board labeling mandates of this type may sound like an easy solution, it is important to remember that the speech generated by bots is often simply speech of natural persons processed through a computer program. Bots are used for all sorts of ordinary and protected speech activities, including poetry, political speech, and even satire, such as poking fun at people who cannot resist arguing—even with bots. Disclosure mandates would restrict and chill the speech of artists whose projects may necessitate not disclosing that a bot is a bot.

Disclosure requirements could also be hard to effectuate in practice without effectively unmasking protected human speakers, thus reducing the ability of individuals to speak anonymously. Courts have long recognized anonymous speech as “a shield from the tyranny of the majority” and as critical to a functioning democracy, and they subject laws that infringe on the right to anonymity in “core political speech” to close judicial scrutiny.

When protected speech is at risk, it is not appropriate to cast a wide net and sort it out later. That’s not to say that all bot-labeling mandates would violate the First Amendment. There will likely be situations in which targeted labeling requirements may be needed to protect a significant or compelling “government interest”—such as in the context of social bots intended to persuade people to vote for a particular politician or ballot measure, especially if deployed at a scale that allowed those behind the bot to communicate with and potentially influence far more people than if relying on human-operated accounts. But any laws of this type must be carefully tailored to address proven harms. A helpful question to ask here is:

“Why does it matter that a bot (instead of a human) is speaking such that we should have a government mandate to force disclosure?”

While we understand and sympathize with the desire to know whether you are talking to a bot or a human, talking to a bot that you think is a human does not alone constitute a cognizable First Amendment harm. In the example above, a law targeting large-scale deployment of bots to persuade people to vote for a particular politician or ballot measure, the harm the law would be protecting against is election manipulation. And this harm would not flow from the mere failure to label a bot as a bot; it would flow from the use of bots to manufacture consensus for the purpose of distorting public opinion and swaying election results. Use of bots could hide these efforts; that’s why it would matter that a bot (instead of a human) was speaking.

Narrow-tailoring is also critical. As a paper by Madeline Lamo and University of Washington Law School Professor Ryan Calo, presented at Stanford’s We Robot conference in April, asks, “Does a concern over consumer or political manipulation, for instance, justify a requirement that artists tell us whether a person is behind their latest creation?” The authors say no, and we agree. Such a provision is not narrowly tailored to address concerns over consumer or political manipulation and will sweep in a great deal of protected speech.

In addition to First Amendment concerns, the California bill illustrates another problem with bot-labeling mandates: difficulties with enforcement. S.B. 1001 requires platforms to create a system whereby users can report suspected bots and, following any reports, “determine whether or not to disclose that the bot is not a natural person or remove the bot” in less than 72 hours. But it isn’t always easy to determine whether an account is controlled by a bot, a human, or a “centaur” (a human-machine team). Platforms can try to use metadata like IP addresses, mouse pointer movement, or keystroke timing to guess, but industrious bot operators can defeat those measures. These measures can also backfire against certain groups of users—such as people who use VPNs or Tor for privacy, who are often inappropriately blocked by sites today, or people with special accessibility needs who use speech-to-text input, whose speech may be mislabeled by a mouse or keyboard heuristic. Platforms can also try to administer various sorts of Turing tests, but those don’t work against centaurs, and bots themselves are getting quite good at tricking their way through Turing tests. Some have claimed that Google Duplex, for instance, passed the Turing test by using verbal tics, speaking in the cadence of a natural human voice, and pausing and elongating certain words as if thinking about how to respond.
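To make the enforcement problem concrete, here is a minimal, purely hypothetical sketch of the kind of metadata heuristic described above. Everything in it (the field names, the thresholds, the scoring) is invented for illustration and is not drawn from any real platform's detection system. It flags accounts whose typing looks machine-regular or whose traffic arrives through known proxy ranges, and it shows how a human posting over Tor with dictation software can trip the same signals while a lightly randomized bot slips through.

```python
from dataclasses import dataclass
from statistics import pstdev
from typing import List

# Hypothetical per-account signals a platform might log. The field names
# and thresholds below are invented for illustration only.
@dataclass
class AccountSignals:
    keystroke_intervals_ms: List[float]  # gaps between key events, if any
    from_known_proxy: bool               # e.g. traffic from a Tor exit or VPN range
    posts_per_hour: float

def looks_like_bot(sig: AccountSignals) -> bool:
    """Naive heuristic: flag machine-regular typing, proxy-routed traffic
    combined with frequent posting, or very high posting rates."""
    # Speech-to-text and other assistive input produce few or no keystroke
    # events, so this "regular typing" test silently skips those users...
    if len(sig.keystroke_intervals_ms) >= 5:
        if pstdev(sig.keystroke_intervals_ms) < 10.0:  # suspiciously uniform timing
            return True
    # ...which pushes the decision onto cruder signals like proxy use.
    if sig.from_known_proxy and sig.posts_per_hour > 20:
        return True
    return sig.posts_per_hour > 100

# A privacy-conscious human posting over Tor with dictation software:
human_on_tor = AccountSignals(
    keystroke_intervals_ms=[], from_known_proxy=True, posts_per_hour=25
)

# A simple bot that adds random-looking jitter to defeat the timing check:
jittery_bot = AccountSignals(
    keystroke_intervals_ms=[80, 210, 95, 160, 300, 120],
    from_known_proxy=False, posts_per_hour=15
)

print(looks_like_bot(human_on_tor))  # True  -- a false positive
print(looks_like_bot(jittery_bot))   # False -- a false negative
```

The specific numbers do not matter; the structural problem does. Any signal cheap enough for a platform to check at the scale and speed S.B. 1001 contemplates is also cheap for a bot operator to spoof, and the same signal frequently marks legitimate users who value privacy or rely on assistive input.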

We warned the California legislature last month that such a system would result in censorship of legitimate and protected speech. Years of attempts at content moderation by large platforms show that things can go wrong in a panoply of ways. And with an inflexible requirement built upon such subtle and adversarial criteria, S.B. 1001 would predictably cause innocent users to have their accounts labeled as bots, or even deleted altogether.

As the uproar following Google’s Duplex announcement portends, S.B. 1001 is only the beginning as far as AI- and bot-labeling proposals go. Bot-labeling raises complicated legal and ethical questions. As policymakers across the country begin to consider these proposals, they must recognize the free speech implications of across-the-board bot-labeling mandates and craft narrowly tailored rules that can pass First Amendment scrutiny.