From its inception, YouTube’s algorithmic copyright cop, Content ID, has been rife with problems — at least from the user’s perspective. Overbroad takedowns, a confusing dispute process, and little in the way of accountability turned the “filter” into an easy censorship tool. On Wednesday, however, YouTube announced several changes that should help users fight back against bogus takedowns, and help prevent those takedowns in the first place. 

The Content ID system works by scanning videos on the site for content matching one or more of the over 10 million registered samples that partners have provided to YouTube. In the case of a match, it follows the "business rules" set by the assigned rightsholder, which can include blocking or "monetizing" the upload. If a rightsholder has requested a block, viewers see the familiar error message that indicates that the video has been pulled for copyright reasons. If the business rules are set to "monetize" the video, YouTube gives the rightsholder a portion of the revenue generated from ads run alongside the video.
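For readers curious about the mechanics, here is a rough sketch of that matching-and-rules flow, written as Python for illustration. The names, data structures, and policies are our own invention; YouTube hasn't published how Content ID actually works internally.

    # Hypothetical sketch of Content ID-style matching. The names and data
    # structures here are illustrative only, not YouTube's actual code.

    REFERENCE_SAMPLES = {
        # sample_id -> the rightsholder who registered it and their business rule
        "sample-123": {"rightsholder": "Example Records", "policy": "block"},
        "sample-456": {"rightsholder": "Example Studio", "policy": "monetize"},
    }

    def handle_upload(video_fingerprint, find_matches):
        """Apply each matching rightsholder's business rule to a new upload."""
        actions = []
        for sample_id in find_matches(video_fingerprint, REFERENCE_SAMPLES):
            rule = REFERENCE_SAMPLES[sample_id]
            if rule["policy"] == "block":
                # Viewers see the familiar "removed for copyright reasons" message.
                actions.append(("block", rule["rightsholder"]))
            elif rule["policy"] == "monetize":
                # Ads run alongside the video; the rightsholder gets a revenue share.
                actions.append(("monetize", rule["rightsholder"]))
        return actions

The changes announced this week don't alter that matching step; they change what a user can do after a claim attaches, and when a human has to look at a match before it takes effect.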

Users could always dispute Content ID claims, but the process was confusing and there was no means to challenge a denial. Now, an eligible user (a broad category that appears to include verified users "in good copyright standing") can file an appeal in any situation where Content ID has flagged her video and the rightsholder has rejected her dispute. In the case of an appeal, the copyright holder must either release the claim or file a formal DMCA takedown.

This move helps to address one common criticism of Content ID: that it goes above and beyond the requirements of the DMCA, operating outside of it and rendering users subject to new rules that have neither the accountability nor the appeals process of the actual law. In cases where the Content ID system has overreached, this new procedure requires rightsholders to return to the process set out in the DMCA for removing content. In turn, that requires the rightsholders to swear under penalty of perjury that there is an actual infringement, and allows the video to reappear after a counter-notice.

YouTube also announced a technical change that it refers to as "smarter claim detection." The video site has acknowledged that Content ID sometimes makes mistakes (either by misidentifying content, or by correctly identifying content but failing to recognize it as a clear fair use) and has improved its algorithms to help catch these errors. Following this update, some of these possibly mistaken claims will be considered "low-confidence" matches, and rightsholders will have to manually review those matches to confirm that there is actually an infringement. [1]
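In practice, that likely amounts to a confidence threshold on the match. Here's a minimal sketch of the idea, with an assumed threshold value and made-up function names, since YouTube hasn't described the internals:

    # Hypothetical illustration of "smarter claim detection": below some
    # confidence cutoff, a match goes to the rightsholder for manual review
    # instead of triggering an automatic claim. The threshold and names
    # below are assumptions, not YouTube's published behavior.

    CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; the real value isn't public

    def route_claim(match_score, apply_business_rules, queue_for_manual_review):
        if match_score >= CONFIDENCE_THRESHOLD:
            # High-confidence match: the rightsholder's rules apply automatically.
            apply_business_rules()
        else:
            # Low-confidence match: a human at the rightsholder must confirm
            # actual infringement before any claim takes effect.
            queue_for_manual_review()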

This change in particular brings Content ID closer to the Fair Use Principles for User Generated Content that we proposed along with other public interest groups. It should also help to address the recent automated takedowns that have generated some unwanted attention for programs like Content ID. In August, a NASA video of the Curiosity landing on Mars was automatically blocked due to a mistaken copyright claim. And last month two separate livestream videos, one of the annual Hugo Awards ceremony and one from the Democratic National Convention, were removed in circumstances also involving automated takedowns. Requiring human intervention in more cases is a big improvement, and will reduce these sorts of situations where copyright bots shoot first and humans ask questions later.

These changes have been a long time coming and we’re glad to see them. Equally pleased, we expect, will be the many users who want to fight back when a rightsholder decides to play judge, jury, and executioner of their lawful speech.

[1] In some early coverage, this section of the announcement was misinterpreted to mean that YouTube, and not the rightsholder, would be doing the manual review. We've confirmed with YouTube that the interpretation presented here is correct, and it has since clarified the language in the announcement.
