Automated Copyright Filter Can’t Detect Infringement or Irony
Stop us if you’ve heard this one: a legal expert posts a video on YouTube that includes a fair use clip in a lecture about copyright law, and the video is taken down after a copyright bot flags it. Simply pointing out the mistake doesn’t get the video restored. Instead, extraordinary measures have to be taken for the wrong to be righted.
Somehow, these facts aren’t just the 2013 story of Larry Lessig, his “Open” lecture, and Liberation Music. Somehow, seven years later, basically the exact same thing played out with a panel called “Proving Similarity,” moderated by Vanderbilt Law Professor Joseph Fishman and featuring Judith Finell and Sandy Wilbur, music experts from opposite sides of the “Blurred Lines” case, in which the estate of Marvin Gaye claimed that Robin Thicke and Pharrell Williams’s song infringed “Got to Give It Up.” And, somehow, we still don’t have a system that functions as it should.
In a panel whose point was to explain how song similarity is analyzed in copyright cases, bits of the song were, of course, played. And in an attempt to share the information from the event, NYU Law put videos of the panel online. Of course, one of those videos was flagged by Content ID, YouTube’s automated copyright filter.
Unsurprisingly, the law school—you know, that place filled with legal experts—was sure that this video—you know, the one explaining part of copyright law—fell within the boundaries of fair use. What is surprising? That even a bunch of experts couldn’t figure out how YouTube’s Content ID and DMCA processes were intertwined.
Confident in its analysis, NYU Law disputed the Content ID claims. The rightsholders rejected the dispute, meaning any further attempt to challenge the copyright claims could end with the school’s account getting copyright strikes. And, as all YouTubers know, multiple strikes can result in losing your account. What concerned NYU Law was that there were multiple claims on the video, and the experts there couldn’t figure out whether those claims would result in multiple strikes or, because they arrived as a single Content ID batch, just one. None of YouTube’s help pages answered the question.
The claims were eventually removed, but not through the processes available to everyday YouTube users. No, they were removed only after NYU Law reached out to YouTube through private channels, attempting to get an answer to its questions about how Content ID, takedowns, and the strikes policy worked. According to NYU Law’s blog, the school never got any “clarity” from YouTube.
There are any number of things we can learn from this story. One is that the filter and takedown systems are so confusing that not even legal experts could figure them out. Another is that, somehow, seven years after it caught Lessig’s lecture, YouTube’s copyright filter still fails to identify fair use. Yet another is how powerful a disincentive account deletion is. But here’s the big one: when content is being removed, fame, backchanneling, and press attention are no substitute for a robust, transparent appeals process.