In the early legal battles over network neutrality, Comcast challenged a Republican FCC's ability to enforce open Internet principles. In repeated legal filings, the company made clear that it did not believe the FCC could prevent providers from data discrimination unless it reclassified them as common carriers. After all, Comcast itself said in its 2009 court filing in Comcast v. FCC that "nondiscrimination obligations are the hallmark of common carrier regulation" (page 12). In other words, Comcast was saying that the FCC couldn't impose nondiscrimination rules unless it reclassified Comcast as a common carrier - which is exactly what the FCC did in 2015 and exactly what Comcast is fighting now. "Common carrier regulation" is code for Title II of the Communications Act.
The Comcast Plan If Network Neutrality Is Repealed
At the FCC, Comcast doubled down. In 2010, Comcast told the agency that one of the "benefits" that would be lost under an Open Internet Order would be the ability of cable and telephone companies to strike exclusive deals with Internet companies - in other words, paid prioritization, or "fast lanes" for those who can afford them.
"The proposed rule could prohibit Internet content, application, and service providers from improving their existing offerings with the assistance of a broadband ISP, regardless of whether doing so would be pro-competitive and beneficial to consumers." -Comcast FCC filing, Jan 14, 2010 (page 40).
While Comcast attempted to make paid prioritization sound like a boon for online service competition, it is not hard to see how these exclusives and priority-access deals would play out in practice: the biggest Internet companies would buy premium access to bandwidth, while every mom-and-pop business and tech startup would be relegated to inferior infrastructure because they lack the capital to pay for access. Indeed, even as the FCC was actively pushing a new Open Internet Order in 2014, Comcast rerouted and degraded Netflix traffic despite the demand coming from Comcast's own customers. Today, Netflix says it can pay for fast lanes - but the next Netflix won't be able to survive in that world.
Setting up the FCC to Fail
In its PR campaign, Comcast claims that its decision not to challenge the 2010 Open Internet Order is evidence of its support for network neutrality. In reality, the company likely stayed quiet because, shortly after the Open Internet Order was approved, Comcast was required to operate neutrally as a condition of its merger with NBC Universal. It had little to gain from publicly opposing the 2010 Order: even a court victory would not have lifted the network neutrality obligations imposed by those merger conditions, which expire in 2018. Here is what Comcast said following the FCC's second court defeat, in Verizon v. FCC, as the company was seeking approval of yet another merger (this time with Time Warner Cable).
"Comcast agreed to be bound by the FCC's Open Internet rules until 2018. These protections will now extend to the acquired TWC systems, giving the FCC ample time to adopt (and, if necessary, to defend) legally enforceable Open Internet rules applicable to the entire industry." -Joint statement by David L. Cohen (Comcast) and Arthur T. Minson (Time Warner Cable) to the Senate Judiciary Committee regarding the Comcast-Time Warner Cable merger
Translation: Don't worry about our merger, because we are bound to respect the Open Internet rules for now, and by the time the agreement expires, the FCC will have found a legally enforceable basis for net neutrality protections. As Comcast indicated way back in 2009, that path required the FCC to do exactly what it did in 2015: reclassify broadband as a common carrier service. So Comcast's record is pretty clear: the cable behemoth has known for years what the FCC had to do to get legally sound neutrality rules. Now that the FCC has done it, Comcast is fighting tooth and nail to reverse it.
If we want to stop the Comcast plan to repeal network neutrality and convert the Internet into a pay-to-win system where only the largest players can compete for access to subscribers, squeezing out innovative and competing services (not to mention libraries, hospitals, schools, and political organizations), then we must act now.
Update [6/8/2017]: This post was updated to include a quote from a local organizer and the names of several supporting local organizations.
On Thursday night, the capital of the smallest state in the union adopted a wide-ranging police reform measure with national and historic implications. The Providence City Council voted 13-1 to adopt the Providence Community-Police Relations Act, which had generated controversy for the very same reason that it was ultimately adopted: it protects a sweeping array of civil rights and civil liberties (including digital rights championed by EFF) from various kinds of violations by police officers, all in a single measure.
Included within the Act are protections to prevent police from arbitrarily adding young people to gang databases, providing notice to youth under 18 if they are so designated, and allowing adults an opportunity to learn whether they have been included. It also forces police to justify any use of targeted electronic surveillance by imposing a requirement that officers first establish reasonable suspicion of criminal activity. Last but far from least, the Act protects the civilian right to observe and record police activities, which—combined with technology such as cell phones, video, and social media—has recently proven crucial in inspiring a multi-racial social movement responding to long festering abuses.
Beyond those concerns shared by EFF, the Act also includes a range of further elements protecting civil rights. Visionary measures to address discriminatory profiling prohibit police from considering racial, religious, and gender characteristics when assessing suspects unless “the officer’s decision is based on a specific and reliable suspect description as well.” The Act also prohibits police from inquiring about immigration status, preserving community trust and protecting both families from being torn apart and police departments from being commandeered to do the federal government’s work enforcing non-criminal civil code violations.
Earlier this spring, the Council unanimously approved a slightly different version of the measure, then known as the Community Safety Act. Only a week later, it delayed its prior decision, deferring until June 1 a final vote on proposed recommendations from a working group it established to bring together stakeholders, including community advocates and police officials.
After meeting five times over the course of the past month, the working group issued its recommendations, with the support of Police Chief Hugh Clements and other officers included in the working group. Yet the Fraternal Order of Police remained intransigent in its opposition, issuing formal condemnations of the policy process at the eleventh hour for the second time in only a few weeks.
In the wake of the Council's approval, Mayor Jorge Elorza pledged to sign the bill into law. But long before it gained the approval of policymakers, a proposal for intersectional policing reforms united community organizations in and around Providence, including Rhode Island Rights, a member of the Electronic Frontier Alliance. Organizations coordinating the campaign, including the Providence Youth Student Movement (PrYSM), Direct Action for Rights and Equality (DARE), American Friends Service Committee (AFSC), and Olneyville Neighborhood Association (ONA), together formed the STEP UP Network.
Groups of residents promoted both formal and informal discussions of the issues. They educated their neighbors, drew together a remarkably broad coalition of local groups, and even hosted a street festival “to use music, dance and art to bring attention to injustices and inequalities in our city and encourage people from across Providence to stand behind this legislation so that we can ban racial profiling and build a safer city, specifically for youth, immigrants and people of color.”
Reflecting on the Council's vote, the STEP UP Network's Campaign Coordinator, Vanessa Flores-Maldonado, noted that the coalition's campaign “was crafted by the people, and the people have continued to fight and overcome countless obstacles in the past four years that sought to silence us. Today is proof that the people will be heard and that communities can succeed on our own terms.”
The movement for police accountability has drawn viral participation in some cities driving national news cycles, including St. Louis, Baltimore, and Charlotte. But Providence may now plausibly claim to lead the nation in embracing policy reforms responding to those social movements.
By working to secure the near-unanimous support of their elected municipal representatives, grassroots groups who championed the new Act have conclusively demonstrated the viability of expansive local reforms combining measures to limit police profiling, surveillance, and retaliation all at once. Where concerned residents in other parts of the country learn from their examples, they might create new policy opportunities for civil rights and civil liberties, and together, even shift the national landscape.
NLPC’s report is false. Not one name, email address, or email domain cited in the report matches to any of the comments that came through EFF's comment tool.
Unfortunately, NLPC didn’t reach out to us before publishing its report. If they had, we would have been able to share our evidence with them and they could have avoided publishing a flawed report.
Before we explain how we know that NLPC’s accusations are false, we want to say a few words about our DearFCC tool. The FCC proposal to toss net neutrality guidelines would open the door to ISPs creating fast lanes for some content and slow lanes for others. It would leave consumers at the mercy of throttling and rob them of the meaningful access guarantees and privacy protections we fought for and won two years ago. DearFCC was created to provide consumers like you a tool for making your voices heard. Your Internet rights are at stake, and you deserve to be heard. We take very seriously claims of fake comments, and our analysis shows that none of the supposedly fake comments in the NLPC report were submitted through DearFCC.
So how do we know NLPC’s report is wrong? For one thing, we counted the number of comments people have submitted to the FCC through our system. That number is nowhere near the 100,000 comments NLPC said we filed.
Further, just before the sunshine period—when the FCC stopped accepting comments—we started storing copies of comments submitted through our system, because we weren’t sure how the FCC would treat comments submitted during that period. This week, we searched through all of the stored comments for the names and email address domains listed in NLPC’s report, and didn’t find a single match.
Finally, the text that NLPC found in the 100,000 comments in question isn’t identical to the text our system uses. It’s close, but there’s a subtle difference.
One of the sentences our comment system generates is:
It’s subtle, but in the word “I’m” in EFF’s text, the apostrophe is actually a “right single quotation mark.” In the text from the 100,000 comments mentioned in NLPC’s report, the apostrophe is a neutral “typewriter apostrophe.” For more information on the difference, see this Wikipedia article.
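The distinction is easy to check programmatically. The following sketch (not EFF's actual analysis tooling, just an illustration of the idea) shows how a script might tell the two characters apart:

```python
def apostrophe_kind(text: str) -> str:
    """Report which apostrophe character, if any, a string contains."""
    if "\u2019" in text:  # curly "right single quotation mark", as in EFF's text
        return "right single quotation mark (U+2019)"
    if "\u0027" in text:  # straight "typewriter apostrophe", as in the bulk comments
        return "typewriter apostrophe (U+0027)"
    return "no apostrophe"

# Two strings that look identical on screen but differ at the byte level:
eff_style = "I\u2019m concerned"   # generated by a tool that uses curly quotes
bulk_style = "I'm concerned"       # copied-and-pasted with a plain apostrophe

print(apostrophe_kind(eff_style))   # right single quotation mark (U+2019)
print(apostrophe_kind(bulk_style))  # typewriter apostrophe (U+0027)
```

A byte-level comparison like this is exactly the kind of evidence that distinguishes text generated by one tool from text pasted in from another source, even when the two read identically.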
Why does the difference matter? Because it shows that whoever submitted the 100,000 identical comments the NLPC report mentions copied and pasted the text to make the comments look like they came through EFF’s DearFCC.org site, when they did not. If NLPC had looked closely at the comments they would have noticed the difference, and realized that the comments weren’t generated by EFF’s website. Apparently, they did not.
As we said last month, "Digital democracy is not easy. The FCC can't just count comments for and against net neutrality as though they were ballots in a ballot box. But neither can Chairman Pai ignore the opinions of Internet users in the U.S., the majority of whom want to [continue] being protected against data discrimination by ISPs like Comcast, AT&T, and Verizon." If the FCC and opponents of net neutrality "respond to attacks on the public comment system not by defending the system, but by discounting and ignoring public opinion expressed through that system, then the agency is answerable to no one."
Help us hold the FCC accountable. Submit your comments to the FCC at https://www.dearfcc.org, and we’ll work as hard as we can to make sure the FCC listens. And you can be sure that we won’t let anyone drown out your voice.
In May, Montana adopted a new statute that limits government access to the contents of electronic communications stored by service providers. EFF applauds this new privacy safeguard. We thank the Governor for cutting two flawed terms concerning the level of judicial review and records stored abroad, as we requested in a letter with the Center for Democracy & Technology.
Unfortunately, the new Montana law also allows the government to prevent providers from notifying their customers of records demands. Much like federal law that authorizes gags on service providers, this provision violates the First Amendment on its face.
What Montana Got Right
Service providers maintain an ever-growing volume of our highly private digital content. If the government wants to seize this content from the providers, it should first get a search warrant from a court based on probable cause of crime. In the watershed Warshak decision in 2010, the U.S. Court of Appeals for the Sixth Circuit held that the Fourth Amendment requires a warrant in these circumstances. For many years, EFF and other privacy advocates have asked legislators to codify this critical standard. We succeeded in California. Congress remains a work in progress.
The bill that the Montana Legislature initially sent its Governor was significantly less protective than Warshak. Specifically, it would have allowed government to seize digital content from service providers based either on a court warrant, or on “an investigative subpoena,” which in Montana can be issued on a lesser standard than probable cause.
EFF and CDT sent the Governor a letter seeking an amendatory veto of this provision. The Governor issued such a veto, requiring a court finding of probable cause for either a warrant or an investigative subpoena. The Legislature then enacted this change.
The Governor also fixed a second problem that we raised. The original bill would have required service providers to turn over digital content “regardless of where the information is held.” In other words, a Montana warrant would purportedly empower police to seize digital content stored outside the United States, including the content of foreign persons living in foreign countries.
EFF and CDT objected that a comparable rule by a foreign country would intrude on the digital liberties of U.S. persons. The Governor vetoed this problematic extraterritoriality clause, and the Legislature enacted the statute without it. We hope other states and Congress will follow this lead.
Montana, Meet Microsoft v. DOJ
Unfortunately, the Montana law also replicates a serious constitutional flaw in federal law and promises to violate service providers’ First Amendment rights. The law allows Montana authorities to ask a court to gag providers and prevent them from informing users that their data has been turned over. The wording of this provision is quite similar to a gag order provision in the federal Stored Communications Act, 18 U.S.C. § 2705; both laws allow the government to get a gag based on the assertion that notice to the user might interfere with an investigation or cause other harms. However, a federal court recently allowed a First Amendment challenge to Section 2705 by Microsoft to proceed. The court found that Microsoft had plausibly argued that Section 2705 violates the First Amendment because it does not require the government to demonstrate that a gag is truly necessary. If anything, the Montana law is even weaker on this ground because it requires only that the government claim that notification of a user “may” cause a specific harm, where Section 2705 requires that harm “will” result.
In light of the Microsoft case, the Montana gag order provision may be vulnerable to a constitutional challenge, and EFF will be keeping a close eye on it. As always, we’re happy to hear from anyone who might need our assistance in this regard.
This week, EFF joined Creative Commons, Wikimedia, Mozilla, EDRi, Open Rights Group, and sixty other organizations in signing an open letter [PDF] addressed to Members of the European Parliament expressing our concerns about two key proposals for a new European "Digital Single Market" Directive on copyright.
These are the "value gap" proposal to require Internet platforms to put in place automatic filters to prevent copyright-infringing content from being uploaded by users (Article 13) and the equally misguided "link tax" proposal that would give news publishers a right to compensation when snippets of the text of news articles are used to link to the original source (Article 11).
The joint letter addresses these two muddle-headed proposals by stating:
The provision on the so-called "value gap" is designed to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they want to have any chance of staying in business. ...
More and more voices have joined the protest by academics and a variety of stakeholders (including some news publishers) against this [link tax] provision. The Council cannot remain deaf to these voices and must remove any creation of additional rights such as the press publishers’ right.
IMCO Proposal Lowers the Bar for Awful Copyright Policy
Incredibly, since the letter was drafted, the proposals have gotten even worse. In our last post on this topic, we highlighted some of the atrocious amendments to the original text being pushed by the CULT (Committee on Culture and Education) of the European Parliament, notably to give producers and performers an unwaiveable copyright-like power to demand additional payments for the use of their work by online streaming services. But the CULT committee doesn't have a monopoly on bad ideas for European copyright.
One of the other committees, the Internal Market and Consumer Protection Committee (IMCO), will be finalizing its own recommendations for the amendment of the Digital Single Market Directive in a vote on 8 June. On Wednesday, Member of the European Parliament (MEP) Julia Reda sounded the alarm about a sly move by the EPP Group's Shadow Rapporteur to IMCO, MEP Pascal Arimont, to propose that the committee accept an alternative "compromise" text that, far from being a compromise, checks off every item on the copyright maximalist wish list.
As regards the upload filtering mandate, the "compromise" would extend this mandate to cover not only content hosts, but also "any service facilitating the availability of such content," apparently including search engines and link directories. Only small startups would be exempt from this filtering requirement, and only for a maximum period of five years.
The safe harbor that protects Internet platforms from copyright liability for users' content would also be abolished for any Internet platform that uses an algorithm to improve the presentation of such content—and it's hard to imagine any platform that doesn't do that. As such, it is no exaggeration to say that if this became law, it would no longer be safe to operate a user-generated content website in Europe.
If this wasn't bad enough, the Shadow Rapporteur's proposal goes all out in favor of the Commission's unpopular link tax proposal, extending it even further so that it would cover all uses of news snippets both online and offline, and last for 50 rather than 20 years. The new monopoly right would also be extended to scientific and academic journals, allowing them to demand fees for the use of abstracts of articles. Only the use of single words and bare hyperlinks would be exempted from the link tax, meaning that as few as two words quoted from a news article would raise liability for copyright-like fees payable to publishing organizations.
A Ban on Image Search
Another of the senseless proposals that has been published this week, this time not by the Shadow Rapporteur of IMCO but by the CULT committee again, is a proposed amendment to extend the "value gap" proposal to include a new tax on search engines that index images, as for example Google and Bing do. This proposed amendment [PDF] provides:
Information society services that automatically reproduce or refer to significant amounts of visual works of art for the purpose of indexing and referencing shall conclude licensing agreements with right holders in order to ensure the fair remuneration of visual artists.
It's unclear how much support this particular proposed amendment may have, but should it find its way into the final CULT committee report, we can well imagine that just as Google News shut down in Spain following Spain's implementation of a link tax for news publishers, the closure of image search services won't be far behind. It's hard to see that outcome as being any less detrimental to artists than it would be to users.
European lawmakers need to draw a line in the sand, and stop giving oxygen to copyright holders' most fanciful demands. If you are European or have friends in Europe, you can help deliver this message by contacting members of the IMCO committee and of the CULT committee to urge them to oppose such extremist rent-seeking proposals.
Since last year, Indian citizens have been required to submit their photograph, iris scans, and fingerprint scans in order to access legal entitlements, benefits, compensation, scholarships, and even nutrition programs. Submitting biometric information is needed for the rehabilitation of manual scavengers, the training and aid of disabled people, and anti-retroviral therapy for HIV/AIDS patients. Soon police in the Alwar district of Rajasthan will be able to register criminals and track missing persons through an app that integrates biometric information with the Crime and Criminal Tracking Network Systems (CCTNS).
These instances demonstrate how intrusive India’s controversial national biometric identity scheme, better known as Aadhaar, has grown. Aadhaar is a 12-digit unique identity number (UID) issued by the government after verifying a person’s biometric and demographic information. As of April 2017, the Unique Identification Authority of India (UIDAI) has issued 1.14 billion UIDs covering nearly 87% of the population, making Aadhaar the largest biometric database in the world. The government asserts that enrollment reduces fraud in welfare schemes and brings greater social inclusion. Welfare schemes that provide access to basic services for marginalized and vulnerable groups are essential. However, unlike in countries where similar schemes have been implemented, invasive biometric collection is being imposed as a condition for basic entitlements in India. The privacy and surveillance risks associated with the scheme have caused much dissension in India.
Identity and Privacy in India
The critical problem with Aadhaar is that, although it was initiated as an identity authentication tool, it is being pushed as a unique identifier to access a range of services. The government continues to maintain that the scheme is voluntary, yet it has galvanized enrollment by linking Aadhaar to over 50 schemes. Aadhaar has become the de facto identity document accepted at private banks, schools, and hospitals. Since Aadhaar is linked to the delivery of essential services, authentication errors or deactivation have serious consequences, including exclusion and denial of statutory rights. More importantly, using a unique identifier across a range of schemes and services enables seamless combination and comparison of databases. By matching existing records such as driving licenses, ration cards, and financial histories against the primary identifier, the government can create detailed profiles. Aadhaar may not be the only such mechanism, but it is essentially a surveillance tool that the Indian government can use to surreptitiously identify and track citizens.
This is worrying, particularly in the context of the ambiguity regarding privacy in India. The right to privacy is not enshrined in the Indian Constitution, although the Supreme Court has located it as implicit in the concept of “ordered liberty” and held that it is necessary for citizens to effectively enjoy all other fundamental rights. There is also no comprehensive national framework regulating the collection and use of personal information. In 2012, Justice K.S. Puttaswamy challenged Aadhaar in the Supreme Court of India on the grounds that it violates the right to privacy. The Court passed an interim order restricting compulsory linking of Aadhaar for benefits delivery, and referred the clarification of privacy as a right to a larger bench. More than a year later, that constitutional bench has yet to be constituted.
The delay in sorting out the nature and scope of a right to privacy in India has allowed the government to continue linking Aadhaar to as many schemes as possible, perhaps with the intention of ensuring the scheme becomes too big to be rolled back. In 2016, the government enacted the 'Aadhaar Act', passing the legislation without debate, discussion, or even the approval of both houses of Parliament. In April this year, Aadhaar was made compulsory for filing income tax returns and applying for a PAN number, and that decision is being challenged in the Supreme Court. Defending the State, the Attorney-General of India claimed that the arguments about so-called privacy and bodily intrusion are bogus, and that citizens cannot have an absolute right over their bodies. The State’s articulation is chilling, especially in light of the Human DNA Profiling Bill, which seeks the right to collect biological samples and DNA indices of citizens. Such anti-rights arguments are worth noting because biometric tracking of citizens isn't just government policy - it is also becoming big business.
Role of Private Companies
Private companies supply the hardware, software, programs, and biometric registration services for rolling out Aadhaar to India’s large population. UIDAI’s Committee on Biometrics acknowledges that biometric data are national assets, yet American biometric technology provider L-1 Identity Solutions and consulting firms Accenture and Ernst & Young can access and retain citizens' data. The Aadhaar Act introduces electronic Know-Your-Customer (eKYC), which allows government agencies and private companies to download data such as name, gender, and date of birth from the Aadhaar database at the time of authentication. Banks and telecom companies use the authentication process to download data, auto-fill KYC forms, and profile users. Over the last few years, the number of companies and applications built around profiling citizens’ sensitive personal data has grown exponentially.
A number of people linked with creating the UIDAI infrastructure have founded iSPIRT, an organisation that is pushing for commercial uses of Aadhaar. Private companies are using Aadhaar for authentication purposes and background checks. Microsoft has announced SkypeLite integration with Aadhaar to verify users. Others, such as TrustId and Eko are integrating rating systems into their authentication services and tracking users through platforms they create. In essence such companies are creating their own private database to track authenticated Aadhaar users and they may sell this data to other companies. The growth of companies that share and combine databases to profile users is an indication of the value of personal data and its centrality for both large and small companies in India.
Integrating and linking large biometric collections to each other, and then to the traditional data points that private companies hold, such as geolocation or phone numbers, enables constant surveillance. So far, there has been no parliamentary discussion of the role of private companies. UIDAI remains the ultimate authority in deciding the nature, level, and cost of access granted to private companies. For example, there is nothing in the Aadhaar Act that prevents Facebook from entering into an agreement with the Indian government to make Aadhaar mandatory for access to WhatsApp or any of its other services. Facebook could also pay data brokers and aggregators to create customer profiles, adding to its ever-growing data points for tracking and profiling its users.
Security Risks and Liability
A series of data leakages have raised concerns about which private entities are involved and how they handle personal and sensitive data. In February, UIDAI registered a complaint against three companies for storing and using biometric data for multiple transactions. The Aadhaar numbers of over 130 million people and the bank account details of about 100 million people have been publicly displayed through government portals owing to poor security practices. A recent report from the Centre for Internet and Society (CIS) showed that a simple tweaking of URL query parameters on the National Social Assistance Programme (NSAP) website could unmask and display the private information of a fifth of India's population.
Such data leaks pose a huge risk because compromised biometrics can never be recovered. The Aadhaar Act establishes UIDAI as the primary custodian of identity information, but is silent on liability in case of data breaches. The Act is also unclear about notice and remedies for victims of identity theft and financial fraud, and for citizens whose data has been compromised. UIDAI has continued to fix breaches upon being notified, but maintains that storage in federated databases ensures that no agency can track or profile individuals.
After almost a decade of pushing a framework for mass collection of data, the Indian government has issued guidelines to secure identity and sensitive personal data in India. The guidelines could have come earlier and, given the large data leaks that have already occurred, may be too late to matter. Nevertheless, it is reassuring to see practices for keeping information safe, and the idea of positive informed consent, being reinforced for government departments. To be clear, the guidelines are meant only for government departments; private companies using Aadhaar for authentication, profiling, and building databases fall outside their scope. With political attitudes toward corporations exploiting personal information changing the world over, the stakes for establishing a framework that limits private companies from commercializing personal data and tracking Indian citizens are as high as they have ever been.
As the campaign points out, the adoption of fair use would not harm copyright owners, but would simply authorize many everyday uses of copyright material that are currently technically infringing, such as forwarding emails, backing up movies, and sharing memes or mash-ups. That's one reason why Australia's Productivity Commission recommended the adoption of fair use as an improvement to Australia's patchwork of technologically-specific exceptions, such as a rule that allows format shifting from VHS tapes, but not from DVDs.
Why has Wikipedia, which is hosted in the U.S., jumped into this debate? Because the online encyclopedia provides an excellent example of the opportunity that the fair use doctrine creates for valuable information to be shared, without damaging the interests of creators. For example, in an article on Australian band Crowded House, you can hear a few bars of some of their most well-known tracks, and in a page about Aboriginal artist Albert Namatjira, a small representation of his art can be found.
Would anyone wishing to listen to Crowded House forgo purchasing their album because they can hear a few seconds of the same music on Wikipedia? Of course not, and that's one of the factors that make Wikipedia's partial reproduction of their music fair. But because Australia lacks a fair use right, Wikipedia could not be hosted in Australia without risking being found to infringe copyright. It's high time for this to change, for the sake of Australian users, creators, and innovators alike.
Other countries around the world are recognizing the benefits of fair use. South Africa is currently proposing to introduce a new fair use right into its own copyright law, adding to a growing list of countries that have done the same, including Israel, Malaysia, the Philippines, Thailand, Taiwan, Singapore, and South Korea.
Australians can join this growing movement and support the campaign for fair copyright by emailing their politicians, or by sharing the #faircopyrightoz hashtag on social media.
The Supreme Court’s recent decision in Impression Products v. Lexmark International was a big win for individuals’ right to repair and modify the products they own. While we’re delighted by this decision, we expect manufacturers to attempt other methods of controlling the market for resale and repair. That’s one reason we’re giving this month’s Stupid Patent of the Month award to Ford’s patent on a vehicle windshield design.
D786,157 is a design patent assigned to a subsidiary of Ford Motor Company. While utility patents are issued for new and useful inventions, design patents cover non-functional, ornamental aspects of a product.
Unlike utility patents, design patents have only one claim and usually have little or no written description. The patent only covers the non-functional design of a certain product. But design and utility patents are alike in an important way: both are intended to reward novelty. According to U.S. law, the Patent Office should issue design patents only for sufficiently new and original designs. By that test alone, it’s easy to see that the windshield patent should never have been issued.
Why did Ford apply for the patent on its windshield design? One possible reason is that it’s the automotive industry’s latest attempt to control the market for repair. If the shape of your windshield is patented by Ford, then no one else can replace it without risking costly patent litigation.
In the Supreme Court's Lexmark opinion, Chief Justice John Roberts specifically noted the danger of automobile manufacturers shutting out competition for repairs:
Take a shop that restores and sells used cars. The business works because the shop can rest assured that, so long as those bringing in the cars own them, the shop is free to repair and resell those vehicles. That smooth flow of commerce would sputter if companies that make the thousands of parts that go into a vehicle could keep their patent rights after the first sale. Those companies might, for instance, restrict resale rights and sue the shop owner for patent infringement. And even if they refrained from imposing such restrictions, the very threat of patent liability would force the shop to invest in efforts to protect itself from hidden lawsuits.
If the Patent Office continues to issue stupid design patents like Ford's windshield patent, it risks giving manufacturers carte blanche to decide who can repair their products. And customers will pay the price.