In recent months, social media platforms—under pressure from a number of governments—have adopted new policies and practices to remove content that promotes terrorism. As the Guardian reported, these policies are typically carried out by low-paid contractors (or, in the case of YouTube, volunteers) and with little to no transparency and accountability. While the motivations of these companies might be sincere, such private censorship poses a risk to the free expression of Internet users.
As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:
the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
individual European countries such as the U.K., France and Germany that have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and,
victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.
One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
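Conceptually, the shared database works like a jointly maintained blocklist of fingerprints. The Python sketch below is a simplified illustration under stated assumptions: the database contents and function names are hypothetical, and it uses an exact cryptographic hash for brevity, whereas real matching systems of this kind reportedly use perceptual fingerprints (PhotoDNA-style) that survive re-encoding and cropping.

```python
# Minimal sketch of shared hash-database matching (illustrative only).
# Assumes a hypothetical shared set of fingerprints of removed media;
# real systems likely use perceptual hashes rather than SHA-256.
import hashlib

shared_hash_db = {
    # Hypothetical fingerprints contributed by participating companies.
    "3f79bb7b435b05321651daefd374cd21b2a0d6f0c4f14ed6d6f3a1f1c9a4e8d2",
}

def fingerprint(media_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded file."""
    return hashlib.sha256(media_bytes).hexdigest()

def flag_for_review(media_bytes: bytes) -> bool:
    """True if the upload matches a shared fingerprint. Per the companies'
    description, a match triggers review against each service's own
    policies rather than automatic removal."""
    return fingerprint(media_bytes) in shared_hash_db
```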
As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions, research, and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.
Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.
Private censorship must be deployed cautiously
While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.
Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users whose posts, other content, or entire accounts have been removed by Internet companies.
We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.
Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad content takedowns or account deactivations. We are therefore justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.
A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.
A more fundamental problem is that narrow definitions are hard to craft. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same to Chechen independence activists on the claim that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.
A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.
Artificial intelligence poses special concerns
We are concerned about the use of artificial intelligence automation to combat pro-terrorism content because of the imprecision inherent in systems that automatically block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying AI in the form of machine learning technology in this context. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating for terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.
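The “image matching” piece of such systems is commonly built on perceptual hashing. As a hedged illustration of the general idea (not Facebook’s actual implementation, which is not public), here is a minimal average-hash sketch: unlike an exact cryptographic hash, two re-encoded copies of the same image produce hashes that differ in only a few bits.

```python
# Minimal "average hash" perceptual fingerprint (illustrative sketch,
# not any company's production matcher). Requires Pillow and NumPy.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale, grayscale, and threshold each pixel at the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two images "match" if their hashes differ in only a few bits, which
# tolerates re-encoding and small edits that would defeat exact hashes:
#   hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 5
```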
Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?
Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.
Even if a company were to have an algorithm for detecting pro-terrorism content that was accurate, reliable, and had a minimal percentage of false positives, AI automation would still be problematic because machine learning systems are not robust to distributional change. Once machine learning algorithms are trained, they are as brittle as any other algorithm, and building and training machine learning algorithms for a complex task is an expensive, time-intensive process. Yet the world that algorithms are working in is constantly evolving and soon won’t look like the world in which the algorithms were trained.
This might happen in the context of pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start to game the system by hiding their content or altering it so that the AI no longer recognizes it (by leaving out key words, say, or changing their sentence structure, or a myriad of other ways—it depends on the specific algorithm). This problem could also go the other way: a change in culture or how some group of people express themselves could cause an algorithm to start tagging their posts as pro-terrorism content, even though they’re not (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
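To make the gaming problem concrete, consider a deliberately naive keyword flagger, a hypothetical stand-in for a real classifier. Production models are far more sophisticated, but any fixed model can be probed and evaded in the same way once adversaries learn its behavior.

```python
# A deliberately naive keyword flagger, illustrating brittleness
# (hypothetical vocabulary; real classifiers are more complex but face
# the same adversarial problem once their behavior becomes known).
BANNED_TERMS = {"attack", "recruit"}

def is_flagged(post: str) -> bool:
    return any(term in post.lower().split() for term in BANNED_TERMS)

print(is_flagged("join us and attack at dawn"))    # True
print(is_flagged("join us and att4ck at dawn"))    # False: trivial respelling
print(is_flagged("join us and a.t.t.a.c.k soon"))  # False: same meaning, new form
```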
We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because this technology simply doesn’t yet exist.
Is taking down pro-terrorism content actually a good idea?
Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption behind these efforts is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. In other words, the question is not whether terrorists are using the Internet to recruit new operatives—it is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.
Governments have not sufficiently demonstrated this to be the case. And some experts believe this absolutely not to be the case. For example, Michael German, a former FBI agent with counter-terrorism experience and current fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive and actually result in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).
Keeping pro-terrorism content online also contributes to journalism, open source intelligence gathering, academic research, and generally the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”
While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship is often in response to government pressure, which is often not legitimately wielded.
Governments often get private companies to do what they cannot do themselves. In the U.S., for example, pro-terrorism content generally falls within the protection of the First Amendment, so the government cannot simply ban it. Other countries, many of which lack similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws.
Ultimately, we are concerned about the serious harm that sweeping censorship regimes—even by private actors—can have on users, and society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.
First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.
Second, companies should engage in consistent and fair enforcement of those policies.
Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.
Fourth, until artificial intelligence systems can be proven accurate, reliable and adaptable, companies should not deploy this technology to censor their users’ content.
Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.
Together, we painted an alarming picture of what the Internet might look like if the FCC goes forward with its plan to roll back net neutrality protections: ISPs prioritizing their favored content sources and deprioritizing everything else. (Fight for the Future has put together a great collection of examples of how sites participated in the day of action.)
Today has been about Internet users across the country who are afraid of large ISPs getting too much say in how we use the Internet. Voices ranged from huge corporations to ordinary Internet users like you and me.
Here are just a few examples of what Team Internet has been saying about net neutrality today.
“We live in an uncompetitive broadband market. That market is dominated by a handful of giant corporations that are being given the keys to shape telecom policy. The big internet companies that might challenge them are doing it half-heartedly. And [FCC Chairman] Ajit Pai seems determined to offer up a massive corporate handout without listening to everyday Americans.
“Is this what you want? Does this sound like a path toward better, faster, cheaper internet access? Toward better products and services in a more competitive market? To me, it sounds like Americans need to demand that our government actually hear our concerns, look at our skyrocketing bills, and make real policy that respects us, instead of watching the staff of an unelected official laugh as he ignores us. It sounds like we need to flood the offices of the FCC and Congress with calls and paperwork, demanding to know how giving handouts to huge corporations will help us.”
“Title II net neutrality protections are the civil rights and free speech rules for the internet. When traditional media outlets refuse to pay attention, Black, indigenous, queer and trans internet users can harness the power of the Internet to fight for lives free of police brutality and discrimination. This is why we’ll never stop fighting for enforcement of the net neutrality rules we fought for and saw passed by the FCC two years ago. There’s too much at stake to urge anything less.”
“We’re still picking ourselves off the floor from all the laughing we did when AT&T issued a press release this afternoon announcing that it was joining the ‘Day of Action for preserving and advancing the open internet.’
“If only it were true. In reality, AT&T is just a company that is deliberately misleading the public. Their lobbyists are lying. They want to kill Title II — which gives the FCC the authority to actually enforce net neutrality — and are trying to sell a congressional ‘compromise’ that would be as bad or worse than what the FCC is proposing. No thanks.”
“Everyone except these ISPs benefits from an open Internet… that’s it. It’s like a handful of companies. Not only is this about business—and it is about business and innovation—it’s also about freedom of speech.”
“No matter what, do not get discouraged or retreat into a state of silence and inaction. There are many like me who are listening and the role each of us plays is vital. We are not alone in believing that the FCC should be a governmental agency ‘of the people, by the people, and for the people.’”
Every year, EFF has lawyers with its Coders’ Rights Project on hand in Las Vegas at Black Hat, B-Sides and DEF CON for security researchers with legal questions about their research or presentations. EFF’s Coders’ Rights Project protects programmers, researchers, hackers, and developers engaged in cutting-edge exploration of technology. Security and encryption researchers help build a safer future for all of us using digital technologies, but too many legitimate researchers face serious legal challenges that prevent or inhibit their work.
The 2017 summer security conference legal team will include:
Staff Attorney Kit Walsh, who works on exemptions protecting security research and vehicle repair, along with a host of other beneficial activities threatened by Section 1201, the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA).
Criminal Defense Staff Attorney Stephanie Lacambra, a former Federal and San Francisco Public Defender who has turned her expertise toward defending your civil liberties online.
Senior Staff Attorney Nate Cardozo, a Computer Fraud and Abuse Act expert who works on issues including the Wassenaar Arrangement, cryptography, hardware hacking, and electronic privacy law.
Deputy Executive Director and General Counsel Kurt Opsahl, who leads the Coders’ Rights Project and has been helping security researchers present at the summer security conferences since DEF CON was at the Alexis Park.
If you are wondering whether your research falls into a legal gray area, or are concerned that a vendor will threaten legal action, please reach out to firstname.lastname@example.org. All EFF legal consultations are pro bono (free), part of our commitment to help the security researcher community. You can also stop by the EFF booths at each conference to make an appointment with one of our attorneys, though we highly recommend contacting us as far in advance of your talk as possible.
The large broadband providers and their associations that spent millions in Washington, D.C. just a few months ago to get Congress to repeal broadband privacy are now fighting to protect their victory in California. They are throwing every superficial argument against A.B. 375 in hopes of confusing California’s legislature enough to give them a pass, despite an overwhelming 83% of the American public demanding a response to the Congressional Review Act repeal of their privacy rights.
EFF obtained copies of their letters, and we feel it is vitally important that California’s elected officials know the industry is unloading a plethora of misleading arguments, some of which these companies are actively contradicting in other forums. Here are some examples of their attempt to have it both ways—repealing our privacy rights in D.C. while expressing shock and dismay that state legislatures would respond to the public’s demands.
We Warned ISPs That Repealing the Federal Protections Would Result in a Patchwork of State-by-State Laws
The irony is not lost on EFF: the very companies that spent millions of dollars lobbying in D.C. to repeal our federal broadband privacy rights are now fighting state attempts to protect consumers on the ground that they prefer a federal rule. We agree that each state having to legislate broadband privacy individually, without a federal floor, is not ideal; we said as much during the fight in D.C. While California’s A.B. 375 represents model legislation EFF supports, not every state will enact the same law, and some states may leave their citizens completely unprotected. That is a far cry from where we were in 2016 before Congress repealed our broadband privacy rights, and it is because of companies like Comcast, AT&T, and Verizon that we have arrived at this point.
Despite our repeated warnings to the industry and Congress that eliminating a uniform federal framework protecting personal information would result in states stepping in to protect their citizens, they pushed ahead, and they now find themselves on defense across the country.
And if A.B. 375 becomes law, we hope it will serve as a model for states across the country, avoiding a patchwork problem. But again, this problem was created by the ISP lobby repealing the federal rules in the first place.
AT&T is a Leader in Contradicting Itself
To California’s Legislature, AT&T right now is saying the following:
“AT&T and other major Internet service providers have committed to legally enforceable Privacy Principles that are consistent with the privacy framework developed by the FTC over the past twenty years.”
In essence, AT&T is saying there is no need to pass a state law because the Federal Trade Commission can enforce privacy protections against it. But what exactly is AT&T saying about the FTC’s enforcement power in the courts?
The answer: AT&T is arguing that the FTC has no legal enforcement power over it at all. It is making that argument right now in the Ninth Circuit Court of Appeals, which means that if it wins there a second time (the case is on en banc appeal), California will have no Federal Trade Commission enforcer on privacy.
On other fronts, AT&T and others are arguing that the bill is unnecessary because the FCC’s powers remain perfectly intact after the Congressional Review Act repeal.
“The bill is not needed. The FCC retains statutory authority to enforce consumer privacy protections with respect to Internet service providers.” - AT&T
"We want to assure you that the action taken by Congress earlier this year has changed nothing for consumers." -CompTIA, TechNet, Bay Area Council
This is the biggest whopper they are spreading here in Sacramento, because anyone who takes the time to look up the history of ISP conduct will quickly find that these companies have been trying to profit off their customers’ personal information for years. The problem for them has been that the law got in the way (until recently), or that elected officials put political pressure on them to change their plans.
In 2008, Charter tested the idea of recording everything you do on the Internet and packaging it into profiles, using Deep Packet Inspection technology capable of detailed monitoring of your activity. The bipartisan political response from Congress was fierce, and Charter quickly backed down from its plans. It is worth noting that cable broadband services were not clearly covered under the Communications Act’s privacy obligations until the 2015 Open Internet Order.
Pretending a Straightforward and Widely Accepted Definition of Broadband Is Untested
In several opposition letters, the opponents assert that the definition of “Internet access service” could sweep in any Internet business. This is a misreading of the definition in the bill, and likely an attempt to stall the legislation by pretending we have not been living with these definitions for seven years.
“Internet service provider” means a person or entity engaged in the provision of Internet access service, but only to the extent that the person or entity is providing Internet access service.
“Internet access service” means a mass-market retail service by wire or radio that provides the capability to transmit data to and receive data from all or substantially all Internet endpoints, including any capabilities that are incidental to and enable the operation of the communications service, but excluding dial-up Internet access service. “Internet access service” also encompasses any service that the Federal Communications Commission or the Public Utilities Commission finds to be providing a functional equivalent to the service described in this subdivision.
Opponents are raising concerns with the term “functional equivalent” despite the 70 words preceding it that limit and explicitly define what an eligible functional equivalent is. Let’s break the definition down into its component parts to demonstrate. An ISP covered under A.B. 375 must be the following things:
1) Mass-market retail service
2) Transmit data by wire or radio
3) Capable of receiving and sending data to all or substantially all Internet endpoints
4) Includes capabilities that are incidental to and enable the operation of the communications service
5) Does not include dial-up Internet
6) Directly provide the Internet access service
7) Includes services the FCC or CPUC finds to meet parts 1-6 above
If This Level of Obfuscation to Prevent a Law That Restores Your Broadband Privacy Rights Upsets You, You Need to Pick Up the Phone
On behalf of the Electronic Frontier Foundation, I would like to formally submit our request for an appeal of the Director's decision to publish Encrypted Media Extensions as a W3C Recommendation, announced on 6 July 2017.
The grounds for this appeal are that the question of a covenant to protect the activities that made DRM standardization a fit area for W3C activities was never put to the W3C membership. In the absence of a call for consensus on a covenant, it was improper for the Director to overrule the widespread members' objections and declare EME fit to be published as a W3C Recommendation.
The announcement of the Director's decision enumerated three ways in which DRM standardization through the W3C -- even without a covenant -- was allegedly preferable to allowing DRM to proceed through informal industry agreements: the W3C's DRM standard was said to be superior in its accessibility, its respect of user privacy, and its ability to level the playing field for new entrants to the market.
However, in the absence of a covenant, none of these benefits can be realized. That is because laws like the implementations of Article 6 of the EUCD, Section 1201 of the US Digital Millennium Copyright Act, and Canada's Bill C-11 prohibit otherwise lawful activity when it requires bypassing a DRM system.
1. The enhanced privacy protection of a sandbox is only as good as the sandbox, so we need to be able to audit the sandbox.
The privacy-protecting constraints the sandbox imposes on code only work if the constraints can't be bypassed by malicious or defective software. Because security is a process, not a product, and because there is no security through obscurity, the claimed benefits of EME's sandbox require continuous, independent verification in the form of adversarial peer review by outside parties who do not face liability when they reveal defects in members' products.
This is the norm with every W3C recommendation: that security researchers are empowered to tell the truth about defects in implementations of our standards. EME is unique among all W3C standards past and present in that DRM laws confer upon W3C members the power to silence security researchers.
EME is said to be respecting of user privacy on the basis of the integrity of its sandboxes. A covenant is absolutely essential to ensuring that integrity.
2. EME's accessibility considerations omit the automated generation of accessibility metadata, and without it, EME's accessibility benefits are constrained to the detriment of people with disabilities.
It's true that EME goes further than other DRM systems in making space available for the addition of metadata that helps people with disabilities use video. However, as EME is intended to restrict the usage and playback of video at web-scale, we must also ask ourselves how metadata that fills that available space will be generated.
For example, EME's metadata channels could be used to embed warnings about upcoming strobe effects in video, which may trigger photosensitive epileptic seizures. Applying such a filter to (say) the entire corpus of videos available to Netflix subscribers who rely on EME to watch their movies would safeguard people with epilepsy from risks ranging from discomfort to severe physical harm.
There is no practical way in which a group of people concerned for those with photosensitive epilepsy could screen all those Netflix videos and annotate them with strobe warnings, or generate them on the fly as video is streamed. By contrast, such a feat could be accomplished with a trivial amount of code. For this code to act on EME-locked videos, EME's restrictions would have to be bypassed.
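As a rough illustration of how little code such an analysis requires, here is a hedged sketch of strobe detection based on frame-to-frame luminance swings. The thresholds are illustrative assumptions (loosely inspired by the common "three flashes per second" accessibility guidance, not calibrated medical values), and a real tool would decode video with a library such as OpenCV or FFmpeg.

```python
# Sketch: flag video segments whose rapid, large luminance swings could
# trigger photosensitive epileptic seizures. Thresholds are illustrative
# assumptions only.
import numpy as np

def strobe_warnings(frames: np.ndarray, fps: float,
                    luma_delta: float = 0.2,
                    max_flashes_per_sec: float = 3.0) -> list[float]:
    """frames: array of shape (n_frames, height, width), values in [0, 1].
    Returns timestamps (seconds) where flashing exceeds the threshold."""
    # Mean luminance of each frame.
    luma = frames.reshape(len(frames), -1).mean(axis=1)
    # Treat a large frame-to-frame brightness swing as a "flash".
    flashes = np.abs(np.diff(luma)) > luma_delta
    warnings = []
    window = int(fps)  # one-second sliding window
    for start in range(0, len(flashes) - window + 1):
        if flashes[start:start + window].sum() > max_flashes_per_sec:
            warnings.append(start / fps)
    return warnings
```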
It is legal to perform this kind of automated accessibility analysis on all the other media and transports that the W3C has ever standardized. Thus the traditional scope of accessibility compliance in a W3C standard -- "is there somewhere to put the accessibility data when you have it?" -- is insufficient here. We must also ask, "Has W3C taken steps to ensure that the generation of accessibility data is not imperiled by its standard?"
There are many kinds of accessibility metadata that could be applied to EME-restricted videos: subtitles, descriptive tracks, translations. The demand for, and utility of, such data far outstrips our whole species' ability to generate it by hand. Even if we all labored for all our days to annotate the videos EME restricts, we would but scratch the surface.
However, in the presence of a covenant, software can do this repetitive work for us, without much expense or effort.
3. The benefits of interoperability can only be realized if implementers are shielded from liability for legitimate activities.
EME only works to render video with the addition of a nonstandard, proprietary component called a Content Decryption Module (CDM). CDM licenses are only available to those who promise not to engage in lawful conduct that incumbents in the market dislike.
For a new market entrant to be competitive, it generally has to offer a new kind of product or service, a novel offering that overcomes the natural disadvantages of being an unknown upstart. For example, Apple was able to enter the music industry by engaging in lawful activity that other members of the industry had forsworn. Likewise, Netflix still routinely engages in conduct (mailing out DVDs) that DRM advocates deplore but are powerless to stop, because it is lawful. The entire cable industry -- including Comcast -- owes its existence to the willingness of new market entrants to break with the existing boundaries of "polite behavior."
EME's existence turns on the assertion that premium video playback is essential to the success of any web player. It follows that new players will need premium video playback to succeed -- but new players have never successfully entered a market by advertising a product that is "just like the ones everyone else has, but from someone you've never heard of."
The W3C should not make standards that empower participants to break interoperability. By doing so, EME violates the norm set by every other W3C standard, past and present.
Through this appeal, we ask that the membership be formally polled on this question: "Should a covenant protecting EME's users and investigators against anti-circumvention regulation be negotiated before EME is made a Recommendation?"
Thank you. We look forward to your guidance on how to proceed with this appeal.
You might have noticed something unusual when you visited the EFF website today: our site was “blocked” unless you shelled out for “premium” Internet access.
As part of the day of action to support net neutrality, we decided to imagine what might happen if FCC Chairman Ajit Pai caves to industry pressure and abandons the net neutrality rules the FCC adopted just two years ago. If you don’t want to live in that future, it’s time to take action.
To make it easy for Team Internet to do just that, we’ve created a special site called DearFCC.org where we’ll help you write your own comment to the agency. We’ll offer some suggestions to get you started, but you can say whatever you like. What’s most important is that the FCC hears from you.
Some large ISPs say they support net neutrality, but that they just want the FCC to go enforce it under a different legal provision, or have Congress pass a specific net neutrality law. But this is just a trick—they already know that if the FCC goes back to classifying broadband as an information service, its net neutrality rules will fail (just like they did last time). They also know that Congress isn’t likely to pass a real net neutrality statute anytime soon, if ever, given the millions that telecom giants have invested in making sure they get to write any regulation of their industry.
Make no mistake: if we want the FCC to do its part to protect a free and open Internet—where Internet service providers don’t discriminate between different types of content or communications—we can’t let the agency go forward with its plan to abandon Title II (the legal foundation for today’s net neutrality rules). Competition between ISPs won’t guarantee net neutrality, especially when most of the country has only one option for broadband Internet access.
The fight over net neutrality isn’t just about consumer protection, though: it’s about your freedom of speech. What makes the Internet great is that anyone can use it to get their voice heard. Your message, your idea, or your story can reach millions of people, just as many as the large broadcasting companies can reach. If big ISPs win this fight, the next iteration of the Internet might look more like cable TV, where providers have a great deal of influence over which messages their subscribers hear—and they can deprioritize or even flat-out block content they don’t like.
If you love the Internet the way it is, then speak out now.
Second court recommends awarding legal fees to defendant hit with patent troll’s lawsuit
Update: On August 9, 2017, District Court Judge Rosenberg rejected Shipping & Transit's objections to the Magistrate Judge's report and recommendation awarding fees. The court ordered Shipping & Transit to pay $36,317.50 plus interest in attorneys' fees to Lensdiscounters.com.
The latest finding comes out of Shipping & Transit LLC v. Lensdiscounters.com, a case filed by Shipping & Transit just over a year ago but not lasting nearly that long. When serious defects in Shipping & Transit’s case came out at an early hearing, Shipping & Transit immediately sought to end the lawsuit. Lensdiscounters opposed letting Shipping & Transit run away without consequences. It told the court it believed that Shipping & Transit had failed to investigate infringement before filing its lawsuit and that Shipping & Transit’s patents were invalid. It argued it should be awarded the costs it incurred in defending against Shipping & Transit’s infringement claim.
In a report signed on July 10, a magistrate judge agreed (PDF). The court found Shipping & Transit’s explanation for why it believed it had a case of infringement worth pursuing to be “flawed.” Instead, it appeared to the court that “likely, from the inception, [Shipping & Transit] never intended to litigate its patent infringement rights” and that “it appears that [Shipping & Transit] brought this case merely to elicit a quick settlement from Defendant on questionable patents.” With respect to those “questionable patents,” the court noted that despite Shipping & Transit filing over 300 cases in Florida alone, the court “could not find one case where the substantive issue of patent validity was reached.” Instead, Shipping & Transit “routinely and promptly” dismissed cases “to end any inquiry” any time the validity of its patents was challenged. These facts led the judge to recommend that the court order Shipping & Transit to pay Lensdiscounters’ legal fees.
Because this report is from a magistrate judge, it still needs to be confirmed by the District Court judge. However, it represents yet another finding by a court that Shipping & Transit’s patent infringement lawsuits are exceptional and should lead to an award of fees to the defendants it targets. This latest decision from Florida, along with the similar order (PDF) from California, has Shipping & Transit’s death knell tolling across the country.
Update: The Senate Business, Professions and Economic Development Committee has since waived jurisdiction over the bill, so it will face only two committees, not three as this post originally stated.
Earlier this year, Congress voted to repeal federal privacy rules that kept your ISP from selling information about who you are and what you do online without your permission. That wildly unpopular vote undid years of work at the FCC to prevent companies that you already pay to access the Internet from also monetizing information about what you look at, what you buy, and who you talk to online.
Last week, companies like Comcast, AT&T, and Verizon attempted to stall the bill in its first committee in hopes of running out the clock. They failed, but they will now make every effort to vote the bill down in either of the next two committees. If the telecom lobby wins in either committee, the bill will be stalled for the rest of the year.