EFF’s Deeplinks Blog: Noteworthy news from around the internet

  • EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data
    by Karen Gullo on July 23, 2021 at 9:48 pm

    Lower Court Improperly Dismissed Lawsuit Against Privacy-Invasive Data Collection Practice

    San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and the real-time locations and routes of scooters used by thousands of residents each day. The Los Angeles Department of Transportation (LADOT) collects from operators of dockless vehicles like Lyft, Bird, and Lime information about every single scooter trip taken within city limits. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture the identity of riders directly, but it collects riders’ locations, routes, and destinations to within a few feet, precision that can easily be used to reveal riders’ identities. A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. The court also abused its discretion, failing in its duty to credit the plaintiff’s allegations as true, by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require. “Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. 
And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.” The city has never presented a justification for this dragnet collection of location data, including in this case, and has said only that it’s an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data, and it disregarded plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collecting all riders’ granular and precise location information en masse. “LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation of the private sector.” The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government. “The Ninth Circuit should overturn the dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. 
“The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.” Contact: Jennifer Lynch, Surveillance Litigation Director, [email protected]

  • Data Brokers are the Problem
    by Gennie Gebhart on July 23, 2021 at 7:59 pm

    Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm. This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 “Out of Control” report, specifically warning that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal, before the Norwegian Data Protection Authority fined Grindr earlier this year. But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling meaningful user consent. Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences. Another piece of the puzzle is the ad ID, another so-called “anonymous” label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale. All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters. That’s why the U.S. needs comprehensive data privacy regulation more than ever. 
This kind of abuse is not inevitable, and it must not become the norm.
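
To make the de-anonymization point concrete, here is a minimal, illustrative sketch of how a home/work pair inferred from an “anonymous” location trace can be joined to an identity. Every name, coordinate, and the directory itself are invented for illustration; real identity-resolution pipelines are far larger but follow the same logic.

```python
from collections import Counter

# Toy "anonymized" trace for one ad ID: (hour_of_day, rounded lat/lon).
trace = [
    (2,  (34.0522, -118.2437)),   # overnight pings cluster at home
    (3,  (34.0522, -118.2437)),
    (4,  (34.0522, -118.2437)),
    (11, (34.0407, -118.2468)),   # daytime pings cluster at work
    (14, (34.0407, -118.2468)),
    (15, (34.0407, -118.2468)),
]

def infer_home_work(trace):
    """Guess home (most common nighttime location) and work (most common daytime location)."""
    night = Counter(loc for hour, loc in trace if hour < 6 or hour >= 22)
    day = Counter(loc for hour, loc in trace if 9 <= hour < 18)
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

# A hypothetical directory linking (home, work) pairs to people, e.g. assembled
# from public records or bought from an "identity resolution" vendor.
directory = {((34.0522, -118.2437), (34.0407, -118.2468)): "Jane Doe"}

home, work = infer_home_work(trace)
print(directory.get((home, work)))  # the "anonymous" trace now has a name
```

No legal name ever appears in the trace itself; the identifier is the movement pattern.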

  • Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty
    by Karen Gullo on July 23, 2021 at 2:06 am

    As the Council of Europe’s flawed cross-border surveillance treaty moves through its final phases of approval, time is running out to ensure that cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers. But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We hope that the CoE’s Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and will take them seriously, as it formulates its opinion and recommendations before the CoE’s final body of approval, the Council of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the Draft Protocol, the Council of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.” The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3, T-CY Rules of Procedure). 
While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors, and public safety officials (see here and here). Earlier in the process, in April 2018, EFF, CIPPIC, EDRi, and 90 civil society organizations from across the globe asked the CoE Secretariat General to provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted—and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated. Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, provided only a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany these safeguards. As civil society, privacy regulators, and even the CoE’s own data protection committee flagged, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that T-CY’s external consultations were merely performative. 
Despite these myriad shortcomings, the Council of Ministers (the CoE’s final statutory decision-making body, comprising member States’ Foreign Affairs Ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. Even more oddly, the Council of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021). With respect, we disagree. If T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and from privacy regulators in its drafting process, as it had been urged to do on multiple occasions. More importantly, this is a complex treaty with wide-ranging implications for privacy and human rights in countries across the world. It is important to get it right, and to ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation. The Protocol now stands with the CoE’s PACE, which will issue an opinion on the Protocol and might recommend additional changes to its substantive elements. It will then fall to the CoE’s Council of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step we still anticipate will occur in November. 
Together with CIPPIC, EDRi, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the Council produces a treaty that puts privacy and human rights first. 

  • Venmo Takes Another Step Toward Privacy
    by Gennie Gebhart on July 21, 2021 at 9:12 pm

    As part of a larger redesign, the payment app Venmo has discontinued its public “global” feed. That means the Venmo app will no longer show you strangers’ transactions—or show strangers your transactions—all in one place. This is a big step in the right direction. But, as the redesigned app rolls out to users over the next few weeks, it’s unclear what Venmo’s defaults will be going forward. If Venmo and parent company PayPal are taking privacy seriously, the app should make privacy the default, not just an option still buried in the settings. Currently, all transactions and friends lists on Venmo are public by default, painting a detailed picture of who you live with, where you like to hang out, who you date, and where you do business. It doesn’t take much imagination to come up with all the ways this could harm real users, and the gallery of Venmo privacy horrors is well documented at this point. Venmo, however, apparently has no plans to make transactions private by default. That would squander the opportunity it has right now to finally be responsive to the concerns of Venmo users, journalists, and advocates like EFF and Mozilla. We hope Venmo reconsiders. There’s nothing “social” about sharing your credit card statement with your friends. Even a seemingly positive move from “public” to “friends-only” defaults would maintain much of Venmo’s privacy-invasive status quo. That’s in large part because of Venmo’s track record of aggressively hoovering up users’ phone contacts and Facebook friends to populate their Venmo friends lists. Venmo’s installation process nudges users toward connecting their phone contacts and Facebook friends to Venmo. From there, the auto-syncing can continue silently and persistently, stuffing your Venmo friends list with people you did not affirmatively choose to connect with on the app. In some cases, there is no option to turn this auto-syncing off.  
There’s nothing “social” about sharing your credit card statement with a random subset of your phone contacts and Facebook friends, and Venmo should not make that kind of disclosure the default. It’s also unclear if Venmo will continue to offer a “public” setting now that the global feed is gone. Public settings would still expose users’ activities on their individual profile pages and on Venmo’s public API, leaving them vulnerable to the kind of targeted snooping that Venmo has become infamous for. We were pleased to see Venmo recently take the positive step of giving users settings to hide their friends lists. Throwing out the creepy global feed is another positive step. Venmo still has time to make transactions and friends lists private by default, and we hope it makes the right choice. If you haven’t already, change your transaction and friends list settings to private by following the steps in this post.

  • Cheers to the Winners of EFF’s 13th Annual Cyberlaw Trivia Night
    by Hannah Diaz on July 21, 2021 at 7:00 am

    On June 17th, the best legal minds in the Bay Area gathered for a night filled with tech law trivia—but there was a twist! With in-person events still on the horizon, EFF’s 13th Annual Cyberlaw Trivia Night moved to a new browser-based virtual space, custom built in Gather. This 2D environment allowed guests to interact with other participants using video, audio, and text chat, based on proximity in the room. EFF’s staff joined forces to craft the questions, pulling details from the rich canon of privacy, free speech, and intellectual property law to create four rounds of trivia for this year’s seven competing teams. As the evening began, contestants explored the virtual space and caught up with each other, but the time for trivia was soon at hand! After welcoming everyone to the event, our intrepid Quiz Master Kurt Opsahl introduced our judges Cindy Cohn, Sophia Cope, and Mukund Rathi. Attendees were then asked to meet at their team’s private table, allowing them to freely discuss answers without other teams overhearing, and so the trivia began! Everyone got off to a great start on the General Round 1 questions, featuring answers that ranged from winged horses to Snapchat filters. The Intellectual Property Round 2 questions proved more challenging, but the teams quickly rallied for the Privacy & Free Speech Round 3. With no clear winners so far, teams entered the final 4th round hoping to break away from the pack and secure 1st place. But a clean win was not to be! Durie Tangri’s team “The Wrath of (Lina) Khan” and Fenwick’s team “The NFTs: Notorious Fenwick Trivia” were still tied for first! Always prepared for such an occurrence, the teams headed into a bonus Tie-Breaker round to settle the score. 
Or so we thought… After extensive deliberation, the judges arrived at their decision and announced that “The Wrath of (Lina) Khan” had the closest-to-correct answer and were the 1st place winners, with “The NFTs: Notorious Fenwick Trivia” coming in 2nd, and Ridder, Costa & Johnstone’s team “We Invented Email” coming in 3rd. Easy, right? No! Fenwick appealed to the judges, arguing that under Official “Price is Right” Rules, the answer closest to correct without going over should receive the tie-breaker point: cue more extensive deliberation (lawyers). Turns out… they had a pretty good point. Motion for Reconsideration: Granted! But what to do when the winners had already been announced? Two first place winners, of course! Which also meant that Ridder, Costa & Johnstone’s team “We Invented Email” moved into the 2nd place spot, and Facebook’s team “Whatsapp” were the new 3rd place winners! Whew! Big congratulations to both winners, and enjoy your bragging rights! EFF’s legal interns also joined in the fun, and their team name “EFF the Bluebook” followed the proud tradition of amazing team names, despite The Rules stating they were unable to formally compete. The coveted Cyberlaw Quiz Cups are actually glass beer steins. EFF hosts the Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for their users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Durie Tangri LLP; Fenwick; Ridder, Costa & Johnstone LLP; and Wilson Sonsini Goodrich & Rosati LLP for sponsoring this year’s Bay Area event. If you are an attorney working to defend civil liberties in the digital world, consider joining EFF’s Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist. 
Interested lawyers can join the Cooperating Attorneys list. Are you interested in attending or sponsoring an upcoming Trivia Night? Please email [email protected] for more information.

  • India’s Draconian Rules for Internet Platforms Threaten User Privacy and Undermine Encryption
    by Katitza Rodriguez on July 21, 2021 at 12:14 am

    The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”) pose huge problems for free expression and Internet users’ privacy. They include dangerous requirements for platforms to identify the origins of messages and pre-screen content, which fundamentally breaks strong encryption for messaging tools. Though WhatsApp and others are challenging the Rules in court, the 2021 Rules have already gone into effect. Three UN Special Rapporteurs—the Rapporteurs for Freedom of Expression, Privacy, and Association—heard and in large part affirmed civil society’s criticism of the 2021 Rules, acknowledging that they did “not conform with international human rights norms.” Indeed, the Rapporteurs raised serious concerns that Rule 4 of the guidelines may compromise the right to privacy of every internet user, and called on the Indian government to carry out a detailed review of the Rules and to consult with all relevant stakeholders, including NGOs specializing in privacy and freedom of expression. The 2021 Rules contain two provisions that are particularly pernicious: the Rule 4(4) content filtering mandate and the Rule 4(2) traceability mandate. Content Filtering Mandate: Rule 4(4) compels content filtering, requiring that providers be able to review the content of communications, which not only fundamentally breaks end-to-end encryption but creates a system for censorship. Significant social media intermediaries (e.g., Facebook, WhatsApp, Twitter) must “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules. This cannot be done without breaking the higher-level promises of secure end-to-end encrypted messaging. Client-side scanning has been proposed as a way to enforce content blocking without technically breaking end-to-end encryption. 
That is, the user’s own device could use its knowledge of the unencrypted content to enforce restrictions by refusing to transmit, or perhaps to display, certain prohibited information, without revealing to the service provider who was attempting to communicate or view that information. That framing is wrong. Client-side scanning puts a robot spy in the room, and a spy in a place where people are talking privately means the conversation is no longer private; a robot spy is a spy all the same. As we explained last year, client-side scanning inherently breaks the higher-level promises of secure end-to-end encrypted communications. If the provider controls what’s in the set of banned materials, it can test against individual statements; a test against a set of size 1 is, in practice, the same as being able to decrypt a message. And with client-side scanning, there’s no way for users, researchers, or civil society to audit the contents of the banned materials list. The Indian government frames the mandate as directed toward terrorism, obscenity, and the scourge of child sexual abuse material, but the mandate is actually much broader. It also imposes proactive and automatic enforcement of the 2021 Rules’ Section 3(1)(d) content takedown provisions, requiring the proactive blocking of material previously held to be “information which is prohibited under any law,” including specifically laws for the protection of “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation,” and incitement to any such act. This includes the widely criticized Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media. 
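
The set-of-size-1 argument can be made concrete with a toy sketch. The hash-matching design and the example strings below are our own illustration, not any vendor’s actual system; real proposals use perceptual hashes or similar, but the privacy failure is the same.

```python
import hashlib

def client_side_scan(plaintext: str, banned_hashes: set[str]) -> bool:
    """Runs on the user's device before encryption: report whether the message
    matches the banned list. The provider never sees the plaintext, only the
    yes/no result."""
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    return digest in banned_hashes

# A provider ostensibly scanning for "forbidden content" can instead load a
# banned list of size one: the hash of a specific sentence it wants to detect.
target = hashlib.sha256("meet at the protest at noon".encode()).hexdigest()

# For any message, the provider now learns whether the user typed that exact
# sentence, which is functionally the same as being able to read it.
print(client_side_scan("meet at the protest at noon", {target}))
print(client_side_scan("see you at lunch", {target}))
```

Nothing in the protocol lets the user distinguish a genuine abuse-material list from a singleton list targeting a dissident’s slogan; that is the auditability problem described above.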
This broad mandate is all that is necessary to automatically suppress dissent, protest, and political activity that a government does not like, before it can even be transmitted. The Indian government’s response to the Rapporteurs dismisses this concern, writing: “India’s democratic credentials are well recognized. The right to freedom of speech and expression is guaranteed under the Indian Constitution.” The response misses the point. Even if a democratic state applies this incredible power to preemptively suppress expression only rarely and within the bounds of internationally recognized rights to freedom of expression, Rule 4(4) puts in place the toolkit for an authoritarian crackdown, automatically enforced not only in public discourse but even in private messages between two people. Part of a commitment to human rights in a democracy is civic hygiene: refusing to create the tools of undemocratic power. Moreover, rules like these give comfort and credence to authoritarian efforts to enlist intermediaries in their crackdowns. If this Rule were available to China, word for word, it could be used to require social media companies to block images of Winnie the Pooh from being transmitted, as has happened in China, even in direct “encrypted” messages. Automated filters also violate due process, reversing the burden of censorship. As the three UN Special Rapporteurs made clear, a general monitoring obligation that will lead to monitoring and filtering of user-generated content at the point of upload … would enable the blocking of content without any form of due process even before it is published, reversing the well-established presumption that States, not individuals, bear the burden of justifying restrictions on freedom of expression. 
Traceability Mandate: The traceability provision, in Rule 4(2), requires any large social media intermediary that provides messaging services to “enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. The Decryption Rules allow authorities to request the interception or monitoring of any decrypted information generated, transmitted, received, or stored in any computer resource. The Indian government responded to the Rapporteur report by claiming to honor the right to privacy: “The Government of India fully recognises and respects the right of privacy, as pronounced by the Supreme Court of India in K.S. Puttaswamy case. Privacy is the core element of an individual’s existence and, in light of this, the new IT Rules seeks information only on a message that is already in circulation that resulted in an offence.” This narrow view of Rule 4(2) is fundamentally mistaken. Implementing the Rule requires the messaging service to collect information about all messages, even before the content is deemed a problem, allowing the government to conduct surveillance with a time machine. This changes the security model and prevents implementing the strong encryption that is a fundamental backstop for protecting human rights in the digital age. The Danger to Encryption: Both the traceability and filtering mandates endanger encryption, calling for companies to know detailed information about each message that their encryption and security designs would otherwise allow users to keep private. Strong end-to-end encryption means that only the sender and the intended recipient know the content of communications between them. Even if the provider only compares two encrypted messages to see if they match, without directly examining the content, this reduces security by allowing more opportunities to guess at the content. 
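
To illustrate that last point, here is a toy sketch using a deterministic tag as a stand-in for any scheme in which equal plaintexts yield equal, comparable values (the messages and function names are invented). Matching “without reading” still enables a dictionary attack on predictable messages:

```python
import hashlib

def deterministic_tag(message: str) -> str:
    # Stand-in for any matching scheme where identical plaintexts produce
    # identical, comparable values. Real proposals differ in detail, but any
    # equality-matchable value shares this weakness.
    return hashlib.sha256(message.encode()).hexdigest()

# The server stores only tags, never plaintext...
observed_tag = deterministic_tag("protest at city hall, 5pm")

# ...but anyone who can compute tags can simply guess candidate messages
# and compare, recovering the content of any guessable message.
guesses = ["dinner at 7", "protest at city hall, 5pm", "running late"]
recovered = next((g for g in guesses if deterministic_tag(g) == observed_tag), None)
print(recovered)
```

This is why "we only compare, we don't decrypt" is not a meaningful privacy guarantee for traceability systems.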
It is no accident that the 2021 Rules attack encryption. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, wrote that the rules were intentionally aimed at end-to-end encryption, since the government would insist on software changes to defeat encryption protections. Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said. The 2021 Rules’ unstated requirement to break encryption goes beyond the mandate of the Information Technology (IT) Act, which authorized the 2021 Rules. India’s Centre for Internet & Society’s detailed legal and constitutional analysis of the Rules explains: “There is nothing in Section 79 of the IT Act to suggest that the legislature intended to empower the Government to mandate changes to the technical architecture of services, or undermine user privacy.” Yet both would be required to comply with the Rules. There are better solutions. For example, WhatsApp found a way to discourage massive chain forwarding of messages without itself knowing their content: the app notes the number of times a message has been forwarded inside the message itself, and changes its behavior based on that count. Since the forwarding count is inside the encrypted message, the WhatsApp server and company never see it. Your app might refuse to forward a chain letter, because the letter’s contents show it was massively forwarded, but the company can’t look at the encrypted message and learn its content. Likewise, empowering users to report content can mitigate many of the harms that inspired the Indian 2021 Rules. The key principle of end-to-end encryption is that a message gets securely to its destination, without interception by eavesdroppers. 
This does not prevent the recipient from reporting abusive or unlawful messages, including the now-decrypted content and the sender’s information. An intermediary can facilitate such user reporting while still providing the strong encryption necessary for a free society. Furthermore, there are cryptographic techniques that let a user report abuse in a way that identifies the abusive or unlawful content, prevents forged complaints, and preserves the privacy of people not directly involved. The 2021 Rules endanger encryption, weakening the privacy and security of ordinary people throughout India, while creating tools that could all too easily be misused against fundamental human rights and that offer inspiration to authoritarian regimes throughout the world. The Rules should be withdrawn, reviewed, and reconsidered, bringing in the voices of civil society and advocates for international human rights, to ensure the Rules help protect and preserve fundamental rights in the digital age.

  • Victory! Californians Can Now Choose Their Broadband Destiny
    by Ernesto Falcon on July 20, 2021 at 7:24 pm

    Years ago, we noted that despite being one of the world’s largest economies, the state of California had no broadband plan for universal, affordable, high-speed access. It is clear that access that meets our needs requires fiber optic infrastructure, yet most Californians were stuck with slow broadband monopolies due to laws supported by the cable monopolies providing us with terrible service. For example, under a state law large private ISPs supported in 2017, the state was literally deploying obsolete copper DSL connections to rural communities instead of building out fiber optics. But all of that is finally coming to an end thanks to your efforts. Today, Governor Newsom signed into law one of the largest state investments in public fiber in the history of the United States. No longer will the state of California simply defer to the whims of AT&T and cable for broadband access; now every community is being given its shot to choose its broadband destiny. How Did We Get a New Law? California’s new broadband infrastructure program was made possible through a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, and investment funding from the American Rescue Plan passed by Congress. All of these things led up to the moment when Governor Newsom introduced his multi-billion-dollar broadband budget, which is being signed into law today. Make no mistake: every single time you picked up the phone or emailed to tell your legislator to vote for affordable, high-speed access for all people, it made a difference, because it set the stage for today. Arguably, what pushed us to this moment was the image of kids doing homework in fast-food parking lots during the pandemic. It made it undeniable that internet access was neither universal nor adequate in speed and capacity. 
That moment, captured and highlighted by Monterey County Supervisor Luis Alejo, a former member of the California State Assembly, forced a reckoning with the failures of the current broadband ecosystem. With the COVID-19 pandemic also forcing schools to burn countless millions of public dollars renting inferior mobile hotspots, Sacramento finally had enough and voted unanimously to change course. What is California’s New Broadband Infrastructure Program and Why is it a Revolution? California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. The state government will build open-access fiber capacity to all corners of the state. This will ensure that every community has multi-gigabit capacity available to suit its current and future broadband needs. Low-interest financing under the state’s new $750 million “Loan Loss Reserve” program will enable municipalities and county governments to issue broadband bonds to finance their own fiber. An additional $2 billion is available in grants for unserved pockets of the state, open to private and public applicants. The combination of these three programs provides solutions that were off the table before the governor signed this law. For example, a rural community can finance a portion of its own fiber network with low-interest loans and bonds, seek grants for the most expensive unserved pockets, and connect with the state’s own fiber network at affordable prices. In a major city, a small private ISP or local school district can apply for a grant to provide broadband to an unserved low-income neighborhood. Even in high-tech cities such as San Francisco, an estimated 100,000 residents lack broadband access in low-income areas, proving that access is a widespread, systemic problem, not just a rural one, that requires an all-hands-on-deck approach. 
The revolution here is the fact that the law does not rely on AT&T, Frontier Communications, Comcast, and Charter to solve the digital divide. Quite simply, the program makes very little of the total $6 billion budget available to these large private ISPs, which have already received so much money and still failed to deliver a solution. This is an essential first step towards reaching near-universal fiber access, because it was never going to happen through the large private ISPs, who are tethered to fast profits and short-term investor expectations that prevent them from pursuing universal fiber access. What the state needed was to empower local partners in the communities themselves who will take on the long-term infrastructure challenge. If you live in California, now is the time to talk to your mayor and city council about your future broadband needs. Now is the time to talk to your local small businesses about the future the state has enabled if they need to improve their broadband connectivity. Now is the time to talk to your school district about what it can do to improve community infrastructure for local students. Maybe you yourself have the will and desire to build your own local broadband network through this law. All of these things are now possible because, for the first time in state history, there is a law in place that lets you decide your broadband future.

  • Pegasus Project Shows the Need for Real Device Security, Accountability, and Redress for Those Facing State-Sponsored Malware
    by Cindy Cohn on July 20, 2021 at 7:11 pm

    People all around the world deserve the right to have a private conversation. Communication privacy is a human right, a civil liberty, and one of the centerpieces of a free society. And while we all deserve basic communications privacy, the journalists, NGO workers, and human rights and democracy activists among us are especially at risk, since they are often at odds with powerful governments. So it is no surprise that people around the world are angry to learn that surveillance software sold by NSO Group to governments has been found on cellphones worldwide. Thousands of NGO workers, human rights and democracy activists, government employees, and many others have been targeted and spied upon. We share that anger, and we are thankful for the work done by Amnesty International, the countless journalists at Forbidden Stories, and Citizen Lab to bring this awful situation to light. “A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity.” Like many others, EFF has warned for years of the danger of the misuse of powerful state-sponsored malware. Yet the stories just keep coming about malware being used to surveil and track journalists and human rights defenders who are then murdered—including Jamal Khashoggi and Cecilio Pineda-Birto. And we have failed to ensure real accountability for the governments and companies responsible. What can be done to prevent this? How do we create accountability and ensure redress? It’s heartening that both South Africa and Germany have recently banned dragnet communications surveillance, in part because there was no way to protect the essential private communications of journalists and privileged communications of lawyers. All of us deserve privacy, but lawyers, journalists, and human rights defenders are at special risk because of their often adversarial relationship with powerful governments. 
Of course, the dual-use nature of targeted surveillance tools like the malware NSO sells is trickier, since such surveillance is allowable under human rights law when it is deployed under proper “necessary and proportionate” limits. But that doesn’t mean we are helpless. In fact, we have suggestions on both prevention and accountability. First, and beyond question, we need real device security. While all software can be buggy and malware often takes advantage of those bugs, we can do much better. To do better, we need the full support of our governments. It’s just shameful that in 2021 the U.S. government, as well as many foreign governments in the Five Eyes and elsewhere, are more interested in their own easy, surreptitious access to our devices than they are in the actual security of our devices. A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity. By this measure, the countries of the world, especially those who view themselves as leaders in cybersecurity, are currently failing. It now seems painfully obvious that we need international cooperation in support of strong encryption and device security. Countries should be holding themselves and each other to account when they pressure device manufacturers to dumb down or back-door our devices, and when they hoard zero-days and other attacks rather than ensuring that those security holes are promptly fixed. We also need governments to hold each other to the “necessary and proportionate” requirement of international human rights law for evaluating surveillance, and these limits must apply whether that surveillance is done for law enforcement or national security purposes. And the US, EU, and others must put diplomatic pressure on the countries where these immoral spyware companies are headquartered to stop the sale of hacking gear to governments that use it to commit human rights abuses. 
At this point, many of these companies—Cellebrite, NSO Group, and Candiru/Saitu—are headquartered in Israel and it’s time that both governments and civil society focus attention there.  Second, we can create real accountability by bringing laws and remedies around the world up to date to ensure that those impacted by state-sponsored malware have the ability to bring suit or otherwise obtain a remedy. Those who have been spied upon must be able to get redress from both the governments who do the illegal spying and the companies that knowingly provide them with the specific tools to do so. The companies whose good names are tarnished by this malware deserve to be able to stop it too. EFF has supported all of these efforts, but more is needed. Specifically: We supported WhatsApp’s litigation against NSO Group to stop it from spoofing WhatsApp as a strategy for infecting unsuspecting victims. The Ninth Circuit is currently considering NSO’s appeal.   We sought direct accountability for foreign governments who spy on Americans in the U.S. in Kidane v. Ethiopia. We argued that foreign countries who install malware on Americans’ devices should be held to account, just as the U.S. government would be if it violated the Wiretap Act or any of the other many applicable laws. We were stymied by a cramped reading of the law in the D.C. Circuit — the court wrongly decided that the fact that the malware was sent from Ethiopia rather than from inside the U.S. triggered sovereign immunity. That dangerous ruling should be corrected by other courts or Congress should clarify that foreign governments don’t have a free pass to spy on people in America. NSO Group says that U.S. telephone numbers (that start with +1) are not allowed to be tracked by its service, but Americans can and do have foreign-based telephones and regardless, everyone in the world deserves human rights and redress. 
Countries around the world should step up to make sure their laws cover state-sponsored malware attacks that occur in their jurisdiction. We have also supported those who are seeking accountability from companies directly, including the Chinese religious minority who have been targeted using a specially-built part of the Great Firewall of China created by American tech giant Cisco. “The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies.” Third, we must increase the pressure on these companies to make sure they are not selling to repressive regimes, and continue naming and shaming those that do. EFF’s Know Your Customer framework is a good place to start, as was the State Department’s draft guidance (which apparently was never finalized). And these promises must have real teeth. Apparently we were right in 2019 that NSO Group’s unenforceable announcement that it was holding itself to the “highest standards of ethical business” was largely a toothless public relations move. Yet while NSO is rightfully in the hot seat now, it is not the only player in this immoral market. Companies who sell dangerous equipment of all kinds must take steps to understand and limit misuse, and surveillance malware tools used by governments are no different. Fourth, we support former United Nations Special Rapporteur for Freedom of Expression David Kaye in calling for a moratorium on the governmental use of these malware technologies. While this is a longshot, we agree that the long history of misuse, and the growing list of resulting extrajudicial killings of journalists and human rights defenders, along with other human rights abuses, justifies a full moratorium. These are just the start of possible remedies and accountability strategies. 
Other approaches may be reasonable too, but each must recognize that, at least right now, the intelligence and law enforcement communities of many countries are not defining “cybersecurity” to include actually protecting us, much less the journalists and NGOs and activists that do the risky work to keep us informed and protect our rights. We also have to understand that unless done carefully, regulatory responses like further triggering U.S. export restrictions could result in less security for the rest of us while not really addressing the problem. The NSO Group was reportedly able to sell to the Saudi regime with the permission and encouragement of the Israeli government under that country’s export regime. The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies. Until governments around the world get out of the way and actually support security for all of us, including accountability and redress for victims, these outrages will continue. Governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous for their own citizens, because a device cannot tell if the malware infecting it is from the good guys or the bad guys. This fact is just not going to go away. We must have strong security at the start, and strong accountability after the fact, if we want to get to a world where all of us can enjoy communications security. Only then will our journalists, human rights defenders, and NGOs be able to do their work without fear of being tracked, watched, and potentially murdered simply because they use a mobile device. Related Cases: Kidane v. Ethiopia; Doe I v. Cisco

  • Final Day: Connect to a Brighter Internet ☀️
    by Aaron Jue on July 20, 2021 at 5:39 pm

    We’ve added one more day to EFF’s summer membership drive! Over 900 supporters have answered the call to get the internet right by defending privacy, free speech, and innovation. It’s possible if you’re with us. Will you join EFF? Through Wednesday, anyone can join EFF or renew their membership for as little as $20 and get a pack of issue-focused Digital Freedom Analog Postcards. Each one represents part of the fight for our digital future, from releasing free expression chokepoints to opposing biometric surveillance to compelling officials to be more transparent. We made this special-edition snail mail set to further connect you with friends or family, and to help boost the signal for a better future online—it’s a team effort! New and renewing members at the Copper level and above can also choose our Stay Golden t-shirt. It highlights your resilience through darkness and our power when we work together. And it’s pretty darn fashionable, too. Analog or digital—what matters is connection. Technology has undeniably become a significant piece of nearly all our communications, whether we are paying bills, working, accessing healthcare, or talking to loved ones. These familiar things require advanced security protocols, unrestricted access to an open web, and vigilant public advocacy. So if the internet is a portal to modern life, then our tech must also embrace civil liberties and human rights. Boost the Signal & Free the Tubes Why do you support internet freedom? You can advocate for a better online future just by connecting with the people around you. Here’s some sample language you can share with your circles: Staying connected has never been more important. Help me support EFF and the fight for every tech user’s right to privacy, free speech, and digital access. It’s up to all of us to strengthen the best parts of the internet and create the future we want to live in. 
With people now coming of age only knowing a world connected to the web, EFF is using its decades of expertise in law and technology to stand up for the rights and freedoms that sustain modern democracy. Thank you for being part of this important work. Join EFF Support Online Rights For All

  • EFF to Ninth Circuit: Recent Supreme Court Decision in Van Buren Does Not Criminalize Web Scraping
    by Mukund Rathi on July 19, 2021 at 9:43 pm

    In an amicus brief filed Friday, EFF and the Internet Archive argued to the Ninth Circuit Court of Appeals that the Supreme Court’s recent decision in Van Buren v. United States shows that the federal computer crime law does not criminalize the common and useful practice of scraping publicly available information on the internet. The case, hiQ Labs, Inc. v. LinkedIn Corp., began when LinkedIn attempted to stop its competitor, hiQ Labs, from scraping publicly available data posted by users of LinkedIn. hiQ Labs sued and, on appeal, the Ninth Circuit held that the Computer Fraud and Abuse Act (CFAA) does not prohibit this scraping. LinkedIn asked the Supreme Court to reverse the decision. Instead, the high court sent the case back to the Ninth Circuit and asked it to take a second look, this time with the benefit of Van Buren. Our brief points out that Van Buren instructed lower courts to use the “technical meanings” of the CFAA’s terms—not property law or generic, non-technical definitions. It’s a computer crime statute, after all. The CFAA prohibits accessing a computer “without authorization”—from a technical standpoint, that presumes there is an authorization system like a password requirement or other authentication stage. But when any of the billions of internet users access any of the hundreds of millions of public websites, they do not risk violating federal law. There is no authentication stage between the user and the public website, so “without authorization” is an inapt concept. Van Buren used a “gates-up-or-down” analogy, and for a publicly available website, there is no gate to begin with—or at the very least, the gate is up. Our brief explains that neither LinkedIn’s cease-and-desist letter to hiQ nor its attempts to block its competitor’s IP addresses are the kind of technological access barrier required to invoke the CFAA. 
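The “gates-up-or-down” framing, as our brief applies it, can be sketched as a toy decision procedure. This is only an illustration of the argument, not a statement of law, and every function and parameter name here is ours:

```python
# Toy model of the "gates-up-or-down" framing (illustrative only; the
# names are ours, not terms from the statute or the opinion).

def access_without_authorization(requires_authentication: bool,
                                 credentials_valid: bool) -> bool:
    """Is a visit "without authorization" in the CFAA's technical sense?"""
    if not requires_authentication:
        # A public website has no authentication gate, so the concept of
        # "without authorization" is inapt: there is no gate to begin
        # with, or at the very least the gate is up.
        return False
    # Where a gate does exist, only invalid credentials put the visitor
    # on the wrong side of it.
    return not credentials_valid
```

On this model, a scraper visiting a publicly available profile page never trips the statute, while bypassing a login wall with stolen credentials would.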
Lastly, our brief acknowledges LinkedIn’s concerns about how unbridled scraping may harm privacy online and invites the company to join growing advocacy efforts to adopt consumer and biometric privacy laws. These laws will directly address the collection of people’s sensitive information without their consent and won’t criminalize legitimate activity online. Related Cases: hiQ v. LinkedIn

  • Right or Left, You Should Be Worried About Big Tech Censorship
    by Cory Doctorow on July 16, 2021 at 9:09 pm

    Conservatives are being censored Claiming that “right-wing voices are being censored,” Republican-led legislatures in Florida and Texas have introduced legislation to “end Big Tech censorship.” They say that the dominant tech platforms block legitimate speech without ever articulating their moderation policies, that they are slow to admit their mistakes, and that there is no meaningful due process for people who think the platforms got it wrong. They’re right. So is everyone else But it’s not just conservatives who have their political speech blocked by social media giants. It’s Palestinians and other critics of Israel, including many Israelis. And it’s queer people, of course. We have a whole project tracking people who’ve been censored, blocked, downranked, suspended and terminated for their legitimate speech, from punk musicians to peanuts fans, historians to war crimes investigators, sex educators to Christian ministries.  The goat-rodeo Content moderation is hard at any scale, but even so, the catalog of big platforms’ unforced errors makes for sorry reading. Experts who care about political diversity, harassment and inclusion came together in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation but the biggest platforms are still just winging it for the most part. The situation is especially grim when it comes to political speech, particularly when platforms are told they have a duty to remove “extremism.” The Florida and Texas social media laws are deeply misguided and nakedly unconstitutional, but we get why people are fed up with Big Tech’s ongoing goat-rodeo of content moderation gaffes. So what can we do about it? Let’s start with talking about why platform censorship matters. In theory, if you don’t like the moderation policies at Facebook, you can quit and go to a rival, or start your own. In practice, it’s not that simple. 
First of all, the internet’s “marketplace of ideas” is severely lopsided at the platform level, consisting of a single gargantuan service (Facebook), a handful of massive services (YouTube, Twitter, Reddit, TikTok, etc.), and a constellation of plucky, struggling, endangered indieweb alternatives. DIY? If none of the big platforms want you, you can try to strike out on your own. Setting up your own rival platform requires that you get cloud services, anti-DDoS, domain registration and DNS, payment processing, and other essential infrastructure. Unfortunately, every one of these sectors has grown increasingly concentrated, and with just a handful of companies dominating every layer of the stack, there are plenty of weak links in the chain, and if just one breaks, your service is at risk. But even if you can set up your own service, you’ve still got a problem: everyone you want to talk about your disfavored ideas with is stuck in one of the Big Tech silos. Economists call this the “network effect”: a service gets more valuable as more users join it. You join Facebook because your friends are there, and once you’re there, more of your friends join so they can talk to you. Setting up your own service might get you a more nuanced and welcoming moderation environment, but it’s not going to do you much good if your people aren’t willing to give up access to all their friends, customers and communities by quitting Facebook and joining your nascent alternative, not least because there’s a limit to how many services you can be active on. Network effects If all you think about is network effects, then you might be tempted to think that we’ve arrived at the end of history, and that the internet was doomed to be a winner-take-all world of five giant websites filled with screenshots of text from the other four. But not just network effects But network effects aren’t the only idea from economics we need to pay attention to when it comes to the internet and free speech. 
Just as important is the idea of “switching costs,” the things you have to give up when you switch away from one of the big services – if you resign from Facebook, you lose access to everyone who isn’t willing to follow you to a better place. Switching costs aren’t an inevitable feature of large communications systems. You can switch email providers and still connect with your friends; you can change cellular carriers without even having to tell your friends, because you get to keep your phone number. The high switching costs of Big Tech are there by design. Social media may make signing up as easy as a greased slide, but leaving is another story. It’s like a roach motel: users check in, but they’re not supposed to check out. Interop vs. switching costs Enter interoperability, the practice of designing new technologies that connect to existing ones. Interoperability is why you can access any website with any browser, and read Microsoft Office files using free/open software like LibreOffice, cloud software like Google Docs, or desktop software like Apple iWork. An interoperable social media giant – one that allowed new services to connect to it – would bust open that roach motel. If you could leave Facebook but continue to connect with the friends, communities and customers who stayed behind, the decision to leave would be much simpler. If you don’t like Facebook’s rules (and who does?) you could go somewhere else and still reach the people that matter to you, without having to convince them that it’s time to make a move. The ACCESS Act That’s where laws like the proposed ACCESS Act come in. While not perfect, this proposal to force the Big Tech platforms to open up their walled gardens to privacy-respecting, consent-seeking third parties is a way forward for anyone who chafes against Big Tech’s moderation policies and their uneven, high-handed application. Some tech platforms are already moving in that direction. 
Twitter says it wants to create an “app store for moderation,” with multiple services connecting to it, each offering different moderation options. We wish it well! Twitter is well-positioned to do this – it’s one tenth the size of Facebook and needs to find ways to grow. But the biggest tech companies show no sign of voluntarily reducing their switching costs.  The ACCESS Act is the most important interoperability proposal in the world, and it could be a game-changer for all internet users. Save Section 230, save the internet Unfortunately for all of us, many of the people who don’t like Big Tech’s moderation think the way to fix it is to eliminate Section 230, a law that promotes users’ free speech. Section 230 is a rule that says you sue the person who caused the harm while organizations that host expressive speech are free to remove offensive, harassing or otherwise objectionable content. That means that conservative Twitter alternatives can delete floods of pornographic memes without being sued by their users. It means that online forums can allow survivors of workplace harassment to name their abusers without worrying about libel suits. If hosting speech makes you liable for what your users say, then only the very biggest platforms can afford to operate, and then only by resorting to shoot-first/ask-questions-later automated takedown systems. Kumbaya There’s not much that the political left and right agree on these days, but there’s one subject that reliably crosses the political divide: frustration with monopolists’ clumsy handling of online speech.  For the first time, there’s a law before Congress that could make Big Tech more accountable and give internet users more control over speech and moderation policies. 
The promise of the ACCESS Act is an internet where, if you don’t like a big platform’s moderation policies, if you think they’re too tolerant of abusers or too quick to kick someone off for getting too passionate during a debate, you can leave, and still stay connected to the people who matter to you. Killing CDA 230 won’t fix Big Tech (if that were the case, Mark Zuckerberg wouldn’t be calling for CDA 230 reform). The ACCESS Act won’t either, by itself — but by making Big Tech open up to new services that are accountable to their users, the ACCESS Act takes several steps in the right direction.

  • What Cops Understand About Copyright Filters: They Prevent Legal Speech
    by Katharine Trendacosta on July 16, 2021 at 7:24 pm

    “You can record all you want. I just know it can’t be posted to YouTube,” said an Alameda County sheriff’s deputy to an activist. “I am playing my music so that you can’t post on YouTube.” The tactic didn’t work—the video of his statement can in fact, as of this writing, be viewed on YouTube. But it’s still a shocking attempt to thwart activists’ First Amendment right to record the police—and a practical demonstration that cops understand what too many policymakers do not: copyright can offer an easy way to shut down lawful expression. This isn’t the first time this year this has happened. It’s not even the first time in California this year. Filming police is an invaluable tool for basically anyone interacting with them. It can provide accountability, and evidence of what actually occurred beyond what an officer says occurred. Given this country’s longstanding tendency to believe police officers’ word over almost anyone else’s, video of an interaction can go a long way toward getting to the truth. Very often, police officers would prefer not to be recorded, but there’s not much they can do about that legally, given strong First Amendment protections for the right to record. But some officers are trying to get around this reality by making it harder to share recordings on many video platforms: they play music so that copyright filters will flag the video as potentially infringing. Copyright allows these cops to brute-force their way past the First Amendment. Large rightsholders—the major studios and record labels—and their lobbyists have done a very good job of divorcing copyright from debates about speech. The debate over the merits of the Digital Millennium Copyright Act (DMCA) is cast as “artists versus Big Tech.” But we must not forget that, at its core, copyright is a restriction on, as well as an engine for, expression. Many try to cast the DMCA as just a tool to protect the rights of artists, since in theory it is meant to stop infringement. 
But the law is also a tool that makes it incredibly simple to remove lawful speech from the internet. The fair use doctrine ensures that copyright can exist in harmony with the First Amendment. But often, the debate gets wrapped up in who has the right to make a living doing what kind of art, and it becomes easy to forget how mechanisms to enforce copyright can actually restrict lawful speech. Forgetting all of this serves the purpose of those who advocate for the broader use of copyright filters on the internet. And where those filters are voluntarily deployed by companies, they replace a fair use analysis. So a filter that automatically blocks a video for playing a few seconds of a song becomes a useful tool for police officers who do not want to be subject to video-based accountability. What’s the harm in automating the identification and removal of things that contain copyrighted material? The harm is that you are often removing lawful speech. It’s as easy to play a song out of your phone as it is to film with it. Easier, even. And copyright filters work by checking whether anything in an uploaded video matches copyrighted material in their databases. A few seconds of a certain song in the audio of a video could prevent that video from being uploaded. That’s the thing the cops in these stories are recognizing. And while it’s funny to see a cop playing Taylor Swift and claiming we can’t watch a video on YouTube that we are actually watching on YouTube, how many of these stories aren’t we hearing about? We know, without a doubt, that YouTube’s filter, Content ID, is very sensitive to music. And some singers and companies have YouTube’s filter set to automatically remove, rather than just demonetize, uploads with parts of their songs in them. Since YouTube is so dominant when it comes to video sharing, knowing how to game Content ID can be very effective in silencing others. 
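The matching step described above can be sketched in miniature. Real filters like Content ID use robust, noise-tolerant audio fingerprinting; the naive hashing below is only a stand-in for that, and all names and numbers here are ours:

```python
# Minimal sketch of filter-style matching (assumed design: real systems
# use perceptual audio fingerprints, not exact hashes of raw samples).

def fingerprint(samples, window=4):
    # Hash overlapping windows of the signal into a set of "prints".
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def looks_infringing(upload_audio, catalog_prints, threshold=0.2):
    # Flag the upload if enough of its prints match a catalog track,
    # even when most of the video is original, lawful speech.
    prints = fingerprint(upload_audio)
    if not prints:
        return False
    overlap = len(prints & catalog_prints) / len(prints)
    return overlap >= threshold
```

A recording that is mostly a protest scene but picks up a short stretch of a catalog song still crosses the threshold, which is exactly the property the officers in these stories are exploiting.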
When a story like this gets press attention, the video at issue won’t disappear because everyone recognizes the importance of the speech at issue. Neither the platform nor the record label is going to take down the video of the cop playing Taylor Swift. But countless videos never make it past the filters, and so never get public attention. Many activists don’t know what to do about a copyright claim. They may not want to share their name and contact information, as is required for both DMCA counternotices and challenges to Content ID. Or, when faced with the labyrinthine structure of YouTube’s appeals system, they may just give up. As the saying goes, we don’t know what we don’t know. Hopefully, these stories help others recognize and fight this devious tactic. If you have similar stories of police officers using this tactic, please let EFF know by emailing [email protected]

  • Don’t Let Police Arm Autonomous or Remote-Controlled Robots and Drones
    by Matthew Guariglia on July 16, 2021 at 4:46 pm

    It’s no longer science fiction or unreasonable paranoia. Now, it needs to be said: No, police must not be arming land-based robots or aerial drones. That’s true whether these mobile devices are remotely controlled by a person or autonomously controlled by artificial intelligence, and whether the weapons are maximally lethal (like bullets) or less lethal (like tear gas). Police currently deploy many different kinds of moving and task-performing technologies. These include flying drones, remote-controlled bomb-defusing robots, and autonomous patrol robots. While these different devices serve different functions and operate differently, none of them, absolutely none of them, should be armed with any kind of weapon. Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime. For example, cell site simulators (often called “Stingrays”) were developed for use on foreign battlefields, brought home in the name of fighting “terrorism,” then used by law enforcement to catch immigrants and a man who stole $57 worth of food. Likewise, police have targeted BLM protesters with face surveillance and Amazon Ring doorbell cameras. Today, scientists are developing an AI-enhanced autonomous drone, designed to find people during natural disasters by locating their screams. How long until police use this technology to find protesters shouting chants? What if these autonomous drones were armed? We need a clear red line now: no armed police drones, period. The Threat is Real There are already law enforcement robots and drones of all shapes, sizes, and levels of autonomy patrolling the United States as we speak. 
From autonomous Knightscope robots prowling for “suspicious behavior” and collecting images of license plates and phone identifying information, to Boston Dynamic robotic dogs accompanying police on calls in New York or checking the temperature of unhoused people in Honolulu, to predator surveillance drones flying over BLM protests in Minneapolis. We are moving quickly towards arming such robots and letting autonomous artificial intelligence determine whether or not to pull the trigger. According to a Wired report earlier this year, the U.S. Defense Advanced Research Projects Agency (DARPA) in 2020 hosted a test of autonomous robots to see how quickly they could react in a combat simulation and how much human guidance they would need. News of this test comes only weeks after the federal government’s National Security Commission on Artificial Intelligence recommended the United States not sign international agreements banning autonomous weapons. “It is neither feasible nor currently in the interests of the United States,” asserts the report, “to pursue a global prohibition of AI-enabled and autonomous weapon systems.” In 2020, the Turkish military deployed Kargu, a fully autonomous armed drone, to hunt down and attack Libyan battlefield adversaries. Autonomous armed drones have also been deployed (though not necessarily used to attack people) by the Turkish military in Syria, and by the Azerbaijani military in Armenia. While we have yet to see autonomous armed robots or drones deployed in a domestic law enforcement context, wartime tools used abroad often find their way home. The U.S. government has become increasingly reliant on armed drones abroad. Many police departments seem to purchase every expensive new toy that hits the market. The Dallas police have already killed someone by strapping a bomb to a remote-controlled bomb-disarming robot.  So activists, politicians, and technologists need to step in now, before it is too late. 
We cannot allow a time lag between the development of this technology and the creation of policies governing whether police may buy, deploy, or use armed robots. Rather, we must ban police from arming robots, whether in the air or on the ground, whether automated or remotely controlled, whether lethal or less lethal, and in any other yet-unimagined configuration.
No Autonomous Armed Police Robots
Whether they’re armed with a taser, a gun, or pepper spray, autonomous robots would make split-second decisions about taking a life, or inflicting serious injury, based on a set of computer programs. But police technologies malfunction all the time. For example, false positives are frequently generated by face recognition technology, audio gunshot detection, and automatic license plate readers. When this happens, the technology deploys armed police to a situation where they may not be needed, often leading to wrongful arrests and excessive force, especially against people of color erroneously identified as criminal suspects. If the malfunctioning police technology were armed and autonomous, that would create a far more dangerous situation for innocent civilians. When, inevitably, a robot unjustifiably injures or kills someone, who would be held responsible? Holding police accountable for wrongfully killing civilians is already hard enough. In the case of a bad automated decision, who gets held responsible? The person who wrote the algorithm? The police department that deployed the robot? Autonomous armed police robots might become one more way for police to skirt or redirect the blame for wrongdoing and avoid making any actual changes to how police function. Debate might bog down in whether to tweak the artificial intelligence guiding a killer robot’s decision making. Further, technology deployed by police is usually created and maintained by private corporations.
A transparent investigation into a wrongful killing by an autonomous machine might be blocked by assertions of the company’s supposed need for trade secrecy in its proprietary technology, or by finger-pointing between police and the company. Meanwhile, nothing would be done to make people on the streets any safer. MIT Professor and cofounder of the Future of Life Institute Max Tegmark told Wired that AI weapons should be “stigmatized and banned like biological weapons.” We agree. Although its mission is much more expansive than the concerns of this blog post, you can learn more about what activists have been doing around this issue by visiting the Campaign to Stop Killer Robots.
No Remote-Controlled Armed Police Robots, Either
Even where police have remote control over armed drones and robots, the grave dangers to human rights are far too great. Police routinely over-deploy powerful new technologies in already over-policed Black, Latinx, and immigrant communities. Police also use them too often as part of the United States’ immigration enforcement regime, and to monitor protests and other First Amendment-protected activities. We can expect more of the same with any armed robots. Moreover, armed police robots would probably increase the frequency of excessive force against suspects and bystanders. A police officer on the scene generally will have better information about unfolding dangers and opportunities to de-escalate than an officer miles away looking at a laptop screen. And a remote officer might have less empathy for the human target of mechanical violence. Further, hackers will inevitably try to commandeer armed police robots. They have already succeeded at taking control of police surveillance cameras. The last thing we need is foreign governments or organized criminals seizing command of armed police robots and aiming them at innocent people. Armed police robots are especially menacing at protests.
The capabilities of police to conduct crowd control by force are already too great. Just look at how the New York City Police Department has had to pay out hundreds of thousands of dollars to settle a civil lawsuit concerning police using a Long Range Acoustic Device (LRAD) punitively against protestors. Police must never deploy taser-equipped robots or pepper-spray-spewing drones against a crowd. Armed robots would discourage people from attending protests. We must de-militarize our police, not further militarize them. We need a flat-out ban on armed police robots, even if their use might at first appear reasonable in uncommon circumstances. In Dallas in 2016, police strapped a bomb to an explosive-defusing robot in order to kill a gunman hiding inside a parking garage who had already killed five police officers and shot seven others. Normalizing armed police robots poses too great a threat to the public to allow their use even in exigent circumstances. Police have proven time and time again that technologies meant only for the most extreme circumstances inevitably become commonplace, even at protests.
Conclusion
Whether controlled by an artificial intelligence or a remote human operator, armed police robots and drones pose an unacceptable threat to civilians. It’s exponentially harder to remove a technology from the hands of police than to prevent it from being purchased and deployed in the first place. That’s why now is the time to push for legislation to ban police deployment of these technologies. The ongoing revolution in the field of robotics requires us to act now to prevent a new era of police violence.

  • The Tower of Babel: How Public Interest Internet is Trying to Save Messaging and Banish Big Social Media
    by Cory Doctorow on July 16, 2021 at 2:13 am

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our earlier installments. How many messaging services do you use? Slack, Discord, WhatsApp, Apple iMessage, Signal, Facebook Messenger, Microsoft Teams, Instagram, TikTok, Google Hangouts, Twitter Direct Messages, Skype? Our families, friends and co-workers are scattered across dozens of services, none of which talk to each other. Without even trying, you can easily amass 40 apps on your phone that let you send and receive messages. The numbers aren’t dropping. This isn’t the first time we’ve been in this situation. Back in the 2000s, users were asked to choose between MSN, AOL, ICQ, IRC and Yahoo! Messenger, many of which would be embedded in other, larger services. Programs like Pidgin and Adium collected your contacts in one place, and allowed end-users some independence from being locked in by one service – or worse, having to choose which friends you care enough about to join yet another messaging service. So, the proliferation of messaging services isn’t new. What is new is the interoperability environment. Companies like Google and Facebook – who once supported interoperable protocols, even using the same chat protocol – now spurn them. Even upstarts like Signal try to dissuade developers from building their own, unofficial clients. Finding a way to splice together all these services might make a lot of internet users happy, but it won’t thrill investors or tempt a giant tech company to buy your startup. The only form of recognition guaranteed to anyone who tries to untangle this knot is legal threats – lots of legal threats.
But that hasn’t stopped the voluntary contributors of the wider public interest internet. Take Matterbridge, a free/open software project that promises to link together “Discord, Gitter, IRC, Keybase, Matrix, Mattermost, MSTeams, Rocket.Chat, Slack, Telegram, Twitch, WhatsApp, XMPP, Zulip”. This is a thankless task that requires its contributors to understand (and, at times, reverse-engineer) many protocols. It’s hard work, and it needs frequent updating as all these protocols change. But they’re managing it, and providing the tools to do it for free. Intriguingly, some of the folks working in this area are the same ones who dedicated themselves to wiring together different messenger services in the 2000s, and they’re still plugging away at it. You can watch one of Pidgin’s lead developers live-coding on Twitch, repurposing the codebase for a new age. Pidgin was able to survive for a long time in the wilderness thanks to institutional support from “Instant Messaging Freedom,” a non-profit that manages its limited finances and makes sure that even if the going is slow, it never stops. IMF was started in the mid-2000s after AOL threatened the developers of Pidgin, then called GAIM. Initially intended as a legal defense organization, it stuck around to serve as a base for the service operations. We asked Pidgin’s Gary Kramlich about his devotion to the project. Kramlich quit his job in 2019 and lived off his savings while undertaking a serious refactoring of Pidgin’s code, something he plans to keep up until September, when he will run out of money and have to return to paid work. “It’s all about communication and bringing people together, allowing them to talk on their terms. That’s huge. You shouldn’t need to have 30GB of RAM to run all your chat clients. Communications run on network effects. If the majority of your friends use a tool and you don’t like it, your friends will have to take an extra step to include you in the conversation.
That forces people to choose between their friends and the tools that suit them best. A multi-protocol client like Pidgin means you can have both.” Many public interest internet projects reflect this pattern: spending years working in relative obscurity on topics that require concentrated work, but with little immediate reward, under a cloud of legal risk that scares off commercial ventures. This kind of work is, by definition, work for the public good. After years of slow, patient, unglamorous work, the moment that Pidgin, Matterbridge and others laid the groundwork for has arrived. Internet users are frustrated beyond the breaking point by the complexity of managing multiple chat and message services. Businesses are taking notice. This is a legally risky bet, but it’s a shrewd one. After decades of increasing anti-interoperability legal restrictions, the law is changing for the better. In an attempt to break the lock-in of the big messaging providers, the U.S. Congress and the EU are considering compulsory interoperability laws that would make these developers’ work far easier – and legally safer. Interoperability is an idea whose time has come. Frustrated by pervasive tracking and invasive advertising, free software developers have built alternative front-ends to sites like YouTube, Instagram and Twitter. Coders are sick of waiting for the services they pay to use to add the features they need, so they’re building alternative clients for Spotify, and Reddit. These tools are already accomplishing the goals that regulators have set for themselves as part of the project of taming Big Tech. The public interest internet is giving us tracking-free alternatives, interoperable services, and tools that put user needs and human thriving before “engagement” and “stickiness.” Interoperability tools are more than a way to reskin or combine existing services – they’re also ways to create full-fledged alternatives to the incumbent social media giants. 
For example, Mastodon is a Twitter competitor built on an open protocol that lets millions of servers and multiple custom front-ends interconnect with one another (PeerTube does the same for video). These services are thriving, with a userbase in the seven digits, but they still struggle to convince the average creator or user on Facebook or YouTube to switch, thanks to the network effects these centralised services benefit from. A YouTube creator might hate the company’s high-handed moderation policies and unpredictable algorithmic recommendations, but they still use YouTube because that’s where all the viewers are. Every time a creator joins YouTube, they give viewers another reason to keep using YouTube. Every time a viewer watches something on YouTube, they give creators another reason to post their videos to YouTube. With interoperable clients, those network effects are offset by lower “switching costs.” If you can merge your Twitter and Mastodon feeds into one Mastodon client, then it doesn’t matter if you’re a “Mastodon user” or a “Twitter user.” Indeed, if your Twitter friends can subscribe to your Mastodon posts, and if you can use Mastodon to read their Twitter posts, then you don’t lose anything by switching away from Twitter and going Mastodon-exclusive. In fact, you might gain by doing so, because your Mastodon server might have features, policies and communities that are better for you and your needs than Twitter – which has to satisfy hundreds of millions of use-cases – can ever be. Indeed, it seems that Twitter’s executives have already anticipated this future, with their support for BlueSky, an internal initiative to accelerate this interoperability so that they can be best placed to survive it. Right now, at this very moment, there are hundreds, if not thousands, of developers, supporting millions of early adopters in building a vision of a post-Facebook world, constructed in the public interest.
Yet these projects are very rarely mentioned in policy circles, nor do they receive political or governmental support. They are never given consideration when new laws about intermediary liability, extremist or harmful content, or copyright are enacted. If a public institution ever considers them, it’s almost always the courts, as the maintainers of these projects struggle with legal uncertainty and bowel-looseningly terrifying lawyer-letters demanding that they stop pursuing the public good. If the political establishment really want to unravel big tech, they should be working with these volunteers, not ignoring or opposing them. This is the fifth post in our blog series on the public interest internet. Read more in the series:
  • Introducing the Public Interest Internet
  • The Enclosure of the Public Interest Internet
  • Outliving Outrage on the Public Interest Internet: the CDDB Story
  • Organizing in the Public Interest: MusicBrainz
  • The Tower of Babel: How Public Interest Internet is Trying to Save Messaging and Banish Big Social Media
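The relay pattern at the heart of a gateway like Matterbridge is simple to state: accept a message from one network, then re-deliver it to every other linked network, tagging the sender with its origin. A minimal sketch of that pattern (an illustration only, not Matterbridge's actual code; the Room and Bridge classes here are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """A stand-in for one protocol's channel (IRC, Slack, Matrix, ...)."""
    name: str
    log: list = field(default_factory=list)

    def deliver(self, sender: str, text: str) -> None:
        # Record the message as this network would display it.
        self.log.append(f"<{sender}> {text}")

class Bridge:
    """Relays each message to every linked room except its origin."""
    def __init__(self, *rooms: Room):
        self.rooms = rooms

    def relay(self, origin: Room, sender: str, text: str) -> None:
        for room in self.rooms:
            if room is not origin:
                # Tag the sender with its origin network, as bridges typically do.
                room.deliver(f"{sender}@{origin.name}", text)

irc, slack = Room("irc"), Room("slack")
bridge = Bridge(irc, slack)
bridge.relay(irc, "gary", "hello from IRC")
```

The hard part in real bridges is not this fan-out loop but the per-protocol adapters behind each `deliver`, which is exactly the understanding-and-reverse-engineering work the post describes.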

  • Article 17 Copyright Directive: The Court of Justice’s Advocate General Rejects Fundamental Rights Challenge But Defends Users Against Overblocking
    by Christoph Schmon on July 15, 2021 at 7:45 pm

The Advocate General (AG) of the EU Court of Justice today missed an opportunity to fully protect internet users from censorship by automated filtering, finding that the disastrous Article 17 of the EU Copyright Directive doesn’t run afoul of Europeans’ free expression rights. The good news is that the AG’s opinion, a non-binding recommendation for the EU Court of Justice, defends users against overblocking, warning social media platforms and other content hosts that they are not permitted to automatically block lawful speech. The opinion also rejects the idea that content hosts should be “turned into judges of online legality, responsible for coming to decisions on complex copyright issues.” On its face, Article 17 would allow online platforms to be held liable for unlawful user content unless they act as copyright cops and bend over backwards to ensure infringing content is not available on their platforms. EFF has repeatedly stressed that such liability regimes will lead to upload filters, which are prone to error, unaffordable for all but the largest companies, and undermine fundamental rights of users. Simply put, people will be unable to freely speak and share opinions, criticisms, photos, videos, or art if they are subjected to a black box programmed by algorithms to make potentially harmful automated takedown decisions.
Article 17 Interferes With Free Speech, But Not Quite Strong Enough
Today’s opinion, while milder than we had hoped, could help mitigate that risk. Briefly, the AG acknowledges that Article 17 interferes with users’ freedom of expression rights, as providers are required to preventively filter and block user content that unlawfully infringes copyrights. The AG found that users were not free to upload whatever content they wish—Article 17 had the “actual effect” of requiring platforms to filter their users’ content.
However, the AG concludes that, thanks to safeguards contained in Article 17, the interference with free speech was not quite strong enough to be incompatible with the EU’s Charter of Fundamental Rights. Here’s the slightly more detailed version: The EU Copyright Directive recognizes the right to legitimate uses of copyright-protected material, including the right to rely on exceptions and limitations for content such as reviews or parody. The AG opinion acknowledges that these protections are enforceable and stresses the importance of out-of-court redress mechanisms and effective judicial remedies for users. The AG points out that Article 17 grants users ex ante protection, protection at the moment they upload content, which would limit permissible filtering and blocking measures. Hence, in contrast to several EU Member States that have ignored the fundamental rights perspective altogether, the AG interprets Article 17 as requiring content hosts to pay strong attention to user rights safeguards and legitimate uses. As the Republic of Poland submits, complex issues of copyright relating, inter alia, to the exact scope of the exceptions and limitations cannot be left to those providers. It is not for those providers to decide on the limits of online creativity, for example by examining themselves whether the content a user intends to upload meets the requirements of parody. Such delegation would give rise to an unacceptable risk of ‘over-blocking’. Those questions must be left to the court.
Green Light to Filters, But Platforms Should Not Become The Copyright Police
The AG reaffirms the “ban of mandated general monitoring” of user content, which is an important principle under EU law, and rejects an interpretation of Article 17 in which providers are “turned into judges of online legality, responsible for coming to decisions on complex copyright issues.” To minimize the risk of overblocking legitimate user content, platform providers should only actively detect and block manifestly infringing content, meaning content that is “identical or equivalent” to the information provided by rightsholders, the AG opinion says. Such content could be presumed illegal. By contrast, in all ambiguous situations potentially covered by exceptions and limitations to copyright, such as transformative works or parody, priority must be given to freedom of expression and preventive blocking is not permitted. While the AG’s approach reduces the risk of overblocking, it unfortunately permits mandated upload filters in principle. The opinion fails to acknowledge the limits of technical solutions and could, in practical terms, make error-prone copyright matching tools, such as those used by YouTube, a legal standard. It’s also unfortunate that the AG considers the safeguards set out by Article 17 sufficient, trusting that a user-friendly implementation by national lawmakers or interpretation by courts will do the trick. These flaws aside, the opinion is a welcome clarification that there are limits to the use of upload filters. It should serve as a warning to Member States that, without sufficient user safeguards, national laws will undermine the “essence” of the right to freedom of expression.
This is good news for users and bad news for States such as France or the Netherlands, whose laws implementing Article 17 offer far too few protections for legitimate uses of copyrighted material. The opinion is the result of a legal challenge by the Republic of Poland questioning the compatibility of Article 17 with the Charter of Fundamental Rights of the European Union. It now goes to the Court of Justice for final judgment.
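The line the AG draws can be read as a simple filtering policy: block only uploads that match rightsholder-supplied reference information, and when the user plausibly invokes an exception such as parody or quotation, defer to human (and ultimately judicial) review rather than blocking preventively. A toy sketch of that policy (our own illustration of the principle, not any real matching system; all names are invented):

```python
from enum import Enum

class Decision(Enum):
    BLOCK = "block"          # manifestly infringing: identical to a reference
    HUMAN_REVIEW = "review"  # ambiguous: a claimed exception applies
    ALLOW = "allow"          # no match to any rightsholder reference

def evaluate_upload(upload_fingerprint, claimed_context, reference_fingerprints):
    if upload_fingerprint in reference_fingerprints:
        if claimed_context in {"parody", "review", "quotation"}:
            # User invokes an exception or limitation: priority goes to free
            # expression, so no preventive blocking; humans decide.
            return Decision.HUMAN_REVIEW
        # Identical to rightsholder-provided information: presumed illegal.
        return Decision.BLOCK
    return Decision.ALLOW
```

Even this toy shows where the AG's trust in filters strains: everything turns on whether "identical or equivalent" matching and exception detection can be automated reliably, which is exactly what error-prone tools like Content ID call into question.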

  • UK’s Draft Online Safety Bill Raises Serious Concerns Around Freedom of Expression
    by Christoph Schmon on July 14, 2021 at 11:30 am

On May 12, the UK government published a draft of its Online Safety Bill, which attempts to tackle illegal and otherwise harmful content online by placing a duty of care on online platforms to protect their users from such content. The move came as no surprise: over the past several years, UK government officials have expressed concerns that online services have not been doing enough to tackle illegal content, particularly child sexual abuse material (commonly known as CSAM) and terrorist and violent extremist content (TVEC), as well as content the government has deemed lawful but “harmful.” The new Online Safety Bill also builds upon the government’s earlier proposals to establish a duty of care for online providers laid out in its April 2019 White Paper and its December 2020 response to a consultation. EFF and OTI submitted joint comments as part of that consultation on the Online Harms White Paper in July 2019, pushing the government to safeguard free expression as it explored developing new rules for online content. Our views have not changed: while EFF and OTI believe it is critical that companies increase the safety of users on the internet, the recently released draft bill reflects serious threats to freedom of expression online, and must be revised. In addition, although the draft features some notable transparency provisions, these could be expanded to promote meaningful accountability around how platforms moderate online content.
Our Views Have Not Changed: Broad and Vague Notion of Harmful Content
The bill is broad in scope, covering not only “user-to-user services” (companies that enable users to generate, upload, and share content with other users), but also search engine providers. The new statutory duty of care will be overseen by the UK Office of Communications (OFCOM), which has the power to issue high fines and to block access to sites.
Among the core issues that will determine the bill’s impact on freedom of speech is the concept of “harmful content.” The draft bill opts for a broad and vague notion of harmful content that could reasonably, from the perspective of the provider, have a “significant adverse physical or psychological impact” on users. The great subjectivity involved in complying with the duty of care poses a risk of overbroad removal of speech and inconsistent content moderation. In terms of illegal content, “Illegal content duties” comprise the obligations of platform operators to minimize the presence of so-called “priority illegal content,” to be defined through future regulation, and a requirement to take down any illegal content upon becoming aware of it. The draft bill thus departs from the EU’s e-Commerce Directive (and the proposed Digital Services Act), which abstained from imposing affirmative removal obligations on platforms. For the question of what constitutes illegal content, platforms are put first in line as arbiters of speech: content is deemed illegal if the service provider has “reasonable grounds” to believe that the content in question constitutes a relevant criminal offence. The bill also places undue burden on smaller platforms, raising significant concerns that it could erode competition in the online market. Although the bill distinguishes between large platforms (“Category 1”) and smaller platforms (“Category 2”) when apportioning responsibilities, it does not include clear criteria for how a platform would be categorized. Rather, the bill provides that the Secretary of State will decide how a platform is categorized. Without clear criteria, smaller platforms could be miscategorized and required to meet the bill’s more granular transparency and accountability standards. 
While all platforms should strive to provide adequate and meaningful transparency to their users, it is also important to recognize that certain accountability processes require a significant amount of resources and labor, and platforms that have large user bases do not necessarily also have access to corresponding resources. Platforms that are miscategorized as larger platforms may not have the resources to meet more stringent requirements or pay the corresponding fines, putting them at a significant disadvantage. The UK government should therefore provide greater clarity around how platforms would be categorized for the purposes of the draft bill, to provide companies sufficient notice of their responsibilities. Lastly, the draft bill contains some notable transparency and accountability provisions. For example, it requires providers to issue annual transparency reports using guidance provided by OFCOM. In addition, the bill seeks to respond to previous concerns around freedom of expression online by requiring platforms to conduct risk assessments around their moderation of illegal content, and it requires OFCOM to also issue a transparency report which summarizes insights and best practices garnered from company transparency reports. These are good first steps, especially considering the fact that governments are increasingly using legal channels to request that companies remove harmful and illegal content. However, it is important for the UK government to recognize that a one-size-fits-all approach to transparency reporting does not work, and often prevents companies from highlighting trends and data points that are most relevant to the subject at hand. In addition, the structure of the OFCOM transparency report suggests that it would mostly summarize insights, rather than provide accountability around how internet platforms and governments work together to moderate content online. 
Further, the draft bill does not significantly incorporate features such as providing users with notice and an appeals process for content decisions, despite robust advocacy by content moderation and freedom of expression experts. Adequate notice and appeals are integral to ensuring that companies are providing transparency and accountability around their content moderation efforts, and are key components of the Santa Clara Principles for Transparency and Accountability in Content Moderation, of which EFF and OTI were among the original drafters and endorsers.
The UK Government Should Revise the Draft Bill To Protect Freedom of Speech
As social media platforms continue to play an integral role in information sharing and communications globally, governments around the world are taking steps to push companies to remove illegal and harmful content. The newly released version of the UK Government’s Online Safety Bill is the latest example of this, and it could have a significant impact in the UK and beyond. While well intended, the bill raises some serious concerns around freedom of expression online, and it could do more to promote responsible and meaningful transparency and accountability. We strongly encourage the UK government to revise the current draft of the bill to better protect freedom of speech and more meaningfully promote transparency. This post was co-written with Spandana Singh, Open Technology Institute (OTI).

  • Clearview’s Face Surveillance Still Has No First Amendment Defense
    by Adam Schwartz on July 13, 2021 at 5:01 pm

Clearview AI extracts faceprints from billions of people, without their consent, and uses these faceprints to help police identify suspects. This does grave harm to privacy, free speech, information security, and racial justice. It also violates the Illinois Biometric Information Privacy Act (BIPA), which prohibits a company from collecting a person’s biometric information without first obtaining their opt-in consent. Clearview now faces many BIPA lawsuits. One was brought by the ACLU and the ACLU of Illinois in state court. Many others were filed against the company in federal courts across the country, and then consolidated into one federal courtroom in Chicago. In both Illinois and federal court, Clearview argues that the First Amendment bars these BIPA claims. We disagree. Last week, we filed an amicus brief in the federal case, arguing that applying BIPA to Clearview’s faceprinting does not offend the First Amendment. Last fall, we filed a similar amicus brief in the Illinois state court case. EFF has a longstanding commitment to protecting both speech and privacy at the digital frontier, and these cases bring those values into tension. Faceprinting raises some First Amendment interests, because it involves the collection and creation of information for purposes of later expression. However, as practiced by Clearview, this faceprinting does not enjoy the highest level of First Amendment protection, because it does not concern speech on a public matter, and the company’s interests are solely economic. Under the correct First Amendment test, Clearview may not ignore BIPA, because there is a close fit between BIPA’s goals (protecting privacy, speech, and information security) and its means (requiring opt-in consent). A growing number of law enforcement agencies have used face surveillance to target Black Lives Matter protesters, including the U.S. Park Police, the U.S.
Postal Inspection Service, and local police in Boca Raton, Broward County, Fort Lauderdale, Miami, New York City, and Pittsburgh. So Clearview is not the only party whose First Amendment interests are implicated by these BIPA enforcement lawsuits. For a more complete explanation of EFF’s First Amendment arguments, check out this blog post, or our two briefs. You might also be interested in the First Amendment arguments, recently filed in the federal lawsuit against Clearview, from the plaintiffs, the ACLU and ACLU of Illinois amici, and the Georgetown Law Center on Privacy & Technology amicus.

  • DNS Provider Hit With Outrageous Blocking Order – Is Your Provider Next?
    by Corynne McSherry on July 13, 2021 at 4:40 pm

The seemingly endless battle against copyright infringement has caused plenty of collateral damage. But now the damage is reaching new levels, as copyright holders target providers of basic internet services. For example, Sony Music has persuaded a German court to order a Swiss Domain Name System (DNS) provider, Quad9, to block a site that simply indexes other sites suspected of copyright infringement. Quad9 has no special relationship with any of the alleged infringers. It simply resolves domain names, conveying the public information of which web addresses direct to which server, on the public internet, like many other service providers. In other words, Quad9 isn’t even analogous to an electric company that provides service to a house where illegal things might happen. Instead, it’s like a GPS service that simply helps you find a house where you can learn about other houses where illegal things might happen. This order is profoundly dangerous for several reasons. First, in the U.S. context, where injunctions like these are usually tied to specious claims of conspiracy, we have long argued that intermediaries that bear no meaningful relationship to the alleged infringement, and cannot therefore be held liable for it, should not be subject to orders like these in the first place. Courts do not have unlimited power; rather, judges should confine their orders to persons that are plausibly accused of infringement or acting in concert with infringers. Second, orders like these create a moderator’s dilemma. Quad9 faces this order in large part because it provides a valuable service: blocking sites that pose technical threats. Sony argues that if Quad9 can block sites for technical threats, it can block them for copyright “threats” as well. 
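To make concrete how thin a layer DNS resolution is, here is a minimal sketch using only Python's standard library (`resolve` is an illustrative helper, not Quad9's software; `socket.getaddrinfo` simply asks whatever resolver the system is configured to use):

```python
import socket

def resolve(hostname):
    """Return the IP addresses that DNS publicly maps a name to.

    Mechanically, this name-to-address lookup is the whole service a
    resolver like Quad9 provides; it hosts none of the content itself.
    """
    results = socket.getaddrinfo(hostname, None)
    # Each result is (family, type, proto, canonname, sockaddr);
    # the address is the first element of sockaddr.
    return sorted({entry[4][0] for entry in results})

# "localhost" resolves without any network access:
print(resolve("localhost"))  # e.g. ['127.0.0.1'] or ['::1', '127.0.0.1']
```

The lookup returns nothing but addresses that are already public information, which is why blocking orders aimed at this layer are so poorly targeted.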
As Quad9 rightly observes: The assertion of this injunction is, in essence, that if there is any technical possibility of denying access to content by a specific party or mechanism, then it is required by law that blocking take place on demand, regardless of the cost or likelihood of success. If this precedent holds, it will appear again in similar injunctions against other distant and uninvolved third parties, such as anti-virus software, web browsers, operating systems, IT network administrators, DNS service operators, and firewalls, to list only a few obvious targets. If you build it, they will come, and their demands will discourage intermediaries from offering services like these at all – to the detriment of internet users. Third, orders like these are hopelessly overinclusive. Blocking entire sites inevitably means blocking content that is perfectly lawful. Moreover, courts may not carefully scrutinize the claims – keep in mind that U.S. authorities persuaded a court to allow them to seize a popular music website for over a year, based solely on the say-so of a music industry association. To try to avoid that kind of disruption, some intermediaries might feel compelled to block preemptively. If so, the entire history of copyright lobbying shows that this tactic will not work. Copyright maximalists are never satisfied. The only way to avoid the pressure is to insist that copyright enforcement, and other forms of content moderation, happen at the right level of the internet stack. Fourth, as the above suggests, blocking at the infrastructure level imports all of the flaws we see with content moderation at the platform level – and makes them even worse. The complete infrastructure of the internet, or the “full stack,” is made up of a wide range of intermediaries, from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. 
Somewhere in the middle are a wide array of intermediaries, such as infrastructure providers like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services. For most of us, this stack is nearly invisible. We send email, tweet, post, upload photos and read blog posts without thinking about all the services that have to function correctly to get the content from creators to users all over the world. We may think about our ISP when it gets slow or breaks, but most of us don’t think about AWS at all. We are more aware of the content moderation decisions—and mistakes—made by the consumer-facing platforms. We have detailed many times the chilling effects on speech and the other problems caused by opaque, bad, or inconsistent content moderation decisions from companies like Facebook. But when ISPs or intermediaries are forced to wade into the game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. For example, many people in the United States and overseas only have one choice for an ISP. If the only broadband provider in your area cuts you off because they (or your government) don’t like what you said online—or what some other user of the account said—you may lose access to a wide array of crucial services and information, like jobs, education, and health. And again, at the infrastructure level, providers usually cannot target their response narrowly. Twitter can shut down individual accounts; AWS can only deny service to the entire site, shutting down all speech including that which is entirely unobjectionable. And that is exactly why ISPs and intermediaries need to stay away from this fight if they can – and courts shouldn’t force them to do otherwise. The risks from getting it wrong at the infrastructure level are far too great. European policymakers have recognized these risks. 
As the EU Commission recently stated in its impact assessment to the Digital Services Act, actions taken in these cases can effectively disable access to entire services. Nevertheless, injunctions against infrastructure providers requiring them to block access to copyright-infringing websites are on the rise, whilst freedom of expression and information rights often take the back seat. Finally, as we have already seen, these kinds of orders don’t stop with copyright enforcement – instead, copyright policing frequently serves as a model that is leveraged to shut down all kinds of content. While EFF does not practice law in German courts, we urge allies in the EU to support Quad9 and push back against this dangerous order. Copyright enforcement is no excuse for suppressing basic, legitimate, and beneficial internet operations.

  • The Internet Loses a Champion with the Passing of Sherwin Siy
    by Ernesto Falcon on July 8, 2021 at 5:04 pm

    We at EFF are devastated to learn of the passing of Sherwin Siy. He was a brilliant advocate and strategist who was dedicated to protecting and preserving the internet as a space for creativity, innovation and sharing. He was also a friend and generous mentor who shaped the present and future of tech policy by supporting and teaching others. We are grateful for the work he did, and deeply saddened to lose his voice, his perspective, and above all his spirit, in the work to come. The internet lost one of its champions. RIP Sherwin, we will miss you.

  • Digital Rights Updates with EFFector 33.4
    by Christian Romero on July 8, 2021 at 4:14 pm

    Want the latest news on your digital rights? Then you’re in luck! Version 33, issue 4 of EFFector, our monthly-ish email newsletter, is out now! Catch up on rising issues in online security, privacy, and free expression with EFF by reading our newsletter or listening to the new audio version below. Listen on YouTube EFFECTOR 33.04 – Highest court hands down a series of critical Digital rights decisions Make sure you never miss an issue by signing up by email to receive EFFector as soon as it’s posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression.  Thank you to the supporters around the world who make our work possible! If you’re not a member yet, join EFF today to help us fight for a brighter digital future.

  • Improving Enforcement in State Consumer Privacy Laws
    by Hayley Tsukayama on July 7, 2021 at 11:21 pm

Momentum for state privacy bills has been growing over the past couple of years, as lawmakers respond to privacy invasions and constituent demand to address them. As several states end their legislative sessions for the year and lawmakers begin to plan for next year, we urge them to pay special attention to strengthening enforcement in state privacy bills. Strong enforcement sits at the top of EFF’s recommendations for privacy bills for good reason. Unless companies face serious consequences for violating our privacy, they’re unlikely to put our privacy ahead of their profits. We need a way to hold companies directly accountable to the people they harm—especially as they have shown they’re all too willing to factor fines for privacy violations into the cost of doing business. To do so, we recommend a full private right of action—that is, making sure people have a right to sue companies that violate their privacy. This is how legislators normally approach privacy laws. Many privacy statutes contain a private right of action, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, credit reporting, and cable subscriptions. So do many other kinds of laws that protect the public, including federal laws on clean water, employment discrimination, and access to public records. Consumer data privacy should be no different. Yet while private individuals should be able to sue companies that violate their privacy, that is only part of the solution. We also need strong public enforcement, from regulators such as attorneys general, consumer protection bureaus, or data privacy authorities. We also advocate against what are called “right to cure” provisions. 
Rights to cure give companies a certain amount of time to fix violations of the law before they face consequences—essentially giving them a get-out-of-jail-free card. This unnecessarily weakens regulators’ ability to go after companies. It can also discourage regulators from investing resources and lawyer time into bringing a case that could very easily disappear under these provisions. Last year, California voters removed the right to cure from the California Consumer Privacy Act. Unfortunately, several other state bills not only refused to include private rights of action to hold companies accountable, but also hobbled their one enforcement lever with rights to cure. Some Improvements, But We Still Have a Long Way to Go The Colorado Privacy Act passed very near the end of the state’s legislative session. It covers entities that process the data of more than 100,000 individuals or sell the data of more than 25,000 individuals. EFF did not take a position on this bill, viewing it as a mixed bag overall. It has no private right of action, centering all of its enforcement in the state Attorney General’s office. The bill also has a right to cure. However, we do applaud the legislature for adding a sunset to that bill’s right to cure—currently set to expire in 2025. Companies argue that rights to cure make it easier to comply with new regulations, which is often persuasive for lawmakers. We are glad to see Colorado recognize that this loophole should not last indefinitely. EFF continues to oppose right to cure provisions but is glad to see them limited. We hope to see Colorado build on the basic privacy rights enshrined in this law in future sessions. We’ve also seen some small progress toward stronger enforcement. Opponents of strong privacy bills often argue that private rights of action, or expansions of existing ones, are a poison pill for privacy bills. But some legislatures have shown this year that is not true. 
Nevada improved a consumer privacy bill passed last year, SB 220; that change now permits Nevadans to sue data brokers that violate their privacy rights. Furthermore, the Florida house voted to pass a bill that contained a full private right of action—a small but significant step forward and a blow against the argument from big tech companies and their legislative enablers that including this important right is a complete non-starter for a privacy bill. Given the recent Supreme Court ruling in the TransUnion case, which places limits on who can sue companies under federal laws, it has never been more important for states to step up and provide these crucial protections for their constituents. Overall, we would like to see continued momentum around prioritizing strong enforcement—and to see other states move beyond the baselines set in California and Colorado. We certainly should not accept steps backwards. Unfortunately, that is what happened in one state. The data privacy bill passed in Virginia this year is significantly weaker than any other state law in this and other crucial areas. Virginia’s law lacks a private right of action and includes a right to cure. Adding insult to injury, the state also opted to give the law’s sole enforcer, the attorney general’s office, only $400,000 in additional funding to cover its new duties. This anemic effort is wholly inadequate to the task of protecting the privacy of every Virginian. This mistake should not be repeated in other states. As other states look to pass comprehensive consumer data privacy bills, we urge lawmakers to focus on strong enforcement. There is much work to do. But we are encouraged to see more attention paid to properly funding regulatory bodies, growing support for private rights of action, and limits on rights to cure. EFF will continue to push for strong privacy laws and demand that these laws have real teeth to value consumer rights over corporate wish lists.  

  • EFF Gets $300,000 Boost from Craig Newmark Philanthropies to Protect Journalists and Fight Consumer Spyware
    by Rebecca Jeschke on July 7, 2021 at 5:17 pm

Donation Will Help with Tools and Training for Newsgatherers, and Research on Technology like Stalkerware and Bossware. San Francisco – The Electronic Frontier Foundation (EFF) is proud to announce its latest grant from Craig Newmark Philanthropies: $300,000 to help protect journalists and fight consumer spyware. “This donation will help us to develop tools and training for both working journalists and student journalists, preparing them to protect themselves and their sources. We also help journalists learn to research the ways in which local law enforcement is using surveillance tech in its day-to-day work so that, ultimately, communities can better exercise local control,” said EFF Cybersecurity Director Eva Galperin. “Additionally, EFF is launching a public education campaign about what we are calling ‘disciplinary technologies.’ These are tools that are ostensibly for monitoring work or school performance, or ensuring the safety of a family member. But often they result in non-consensual surveillance and data-gathering, and often disproportionately punish BIPOC.” A prime example of disciplinary technologies is test-proctoring software. Recently, Dartmouth’s Geisel School of Medicine charged 17 students with cheating after misreading software activity during remote exams. After a media firestorm, the school later dropped all of the charges and apologized. Other disciplinary technologies include employee-monitoring bossware, and consumer spyware that is often used to monitor and control household members or intimate partners. Spyware, often based upon similar technologies, is also regularly used on journalists across the globe. “We need to make sure that technology works for us, not against us,” said EFF Executive Director Cindy Cohn. 
“We are so pleased that Craig Newmark Philanthropies has continued to support EFF in this important work of protecting journalists and people all over the world.” Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, [email protected]

  • Greetings from the Internet! Connect with EFF this Summer
    by Aaron Jue on July 6, 2021 at 3:28 pm

Every July, we celebrate EFF’s birthday and its decades of commitment to fighting for privacy, security, and free expression for all tech users. This year’s membership drive focuses on a central idea: analog or digital—what matters is connection. If the internet is a portal to modern life, then our tech must embrace privacy, security, and free expression for all users. You can help free the tubes when you join the worldwide community of EFF members this week. Join EFF! Free the Tubes. Boost the Signal. Through July 20 only, you can become an EFF member for just $20, and get a limited-edition series of Digital Freedom Analog Postcards. Each piece of this set represents part of the fight for our digital future, from protecting free expression to opposing biometric surveillance. Send one to someone you care about and boost our signal for a better internet. Physical space and cyberspace aren’t divided worlds anymore. The lines didn’t just blur during the last year; they entwined to help carry us through crisis. This laid bare the brilliance and dangers of a world online, showing us how digital policies will shape our lives. You can create a better future for everyone as an EFF member this year. Boost the Signal Will you help us encourage people to support internet freedom? It’s a big job and it takes all of us. Here’s some language you can share with your circles: Staying connected has never been more important. Help me support EFF and the fight for every tech user’s right to privacy, free speech, and digital access. Twitter | Facebook | Email Stay Golden We introduce new member gear each summer to thank supporters and help them start conversations about online rights. This year’s t-shirt design is a salute to our resilience and power when we keep in touch. EFF Creative Director Hugh D’Andrade worked in this retrofuturist, neo-deco art style to create an image that references an optimistic view of the future that we can (and must) build together. 
The figure here is bolstered by EFF’s mission and pale gold and glow-in-the-dark details. We have all endured incredible hardships over the last year, but EFF—with the strength of our relationships and the power of the web—never stopped fighting for a digital world that supports freedom, justice, and innovation for all people. Connect with us and we’re unstoppable. Donate Today K.I.T. Have a nice summer <3 EFF

  • The Future Is in Symmetrical, High-Speed Internet Speeds
    by Ernesto Falcon on July 2, 2021 at 11:32 pm

Congress is about to make critical decisions about the future of internet access and speed in the United States. It has a potentially once-in-a-lifetime amount of funding to spend on broadband infrastructure, and at the heart of this debate is the minimum speed requirement for taxpayer-funded internet. It’s easy to get overwhelmed by the granularity of this debate, but ultimately it boils down to this: cable companies want a definition that requires them to do and give less: one that will not meet our needs in the future. And if Congress goes ahead with their definition—100 Mbps of download and 20 Mbps of upload (100/20 Mbps)—instead of what we need—100 Mbps of download and 100 Mbps of upload (100/100 Mbps)—we will be left behind. In order to explain exactly why these two definitions mean so much, and how truly different they are, we’ll evaluate each using five basic questions below. But the too long, didn’t read version is this: in essence, building a 100/20 Mbps infrastructure can be done with existing cable infrastructure, the kind already operated by companies such as Comcast and Charter, as well as with wireless. But raising the upload requirement to 100 Mbps—and requiring 100/100 Mbps symmetrical services—can only be done with the deployment of fiber infrastructure. And that number, while requiring fiber, doesn’t represent fiber’s full capacity, which makes it better suited to the future of internet demand. With that said, let’s get into specifics. All of the following questions are based on what the United States, as a country, is going to need moving forward. It is not just about giving us faster speeds now, but preventing us from having to spend this money again in the future when the 100/20 Mbps infrastructure eventually fails to serve us. It’s about making sure that high-quality internet service is available to all Americans, in all places, at prices they can afford. High-speed internet access is no longer a luxury, but a necessity. 
Which Definition Will Meet Our Projected Needs in 2026 and Beyond? Since the 1980s, consumer usage of the internet has grown by 21% on average every single year. Policymakers should therefore assume that 2026 internet usage will be far greater than 2021 usage. Fiber has capacity decades ahead of projected growth, which is why it is future-proof. Moreover, high-speed wireless internet will likewise end up depending on fiber, because high-bandwidth wireless towers must have equally high-bandwidth wired connections to the internet backbone. In terms of predicted needs in 2026, OpenVault finds that today’s average use is 207 Mbps/16 Mbps. If we apply 21% annual growth, 2026 usage will be over 500 Mbps down and 40 Mbps up. But another crucial detail is that upload and download needs aren’t growing at the same rate. Upload, which the average consumer uses much less than download, is growing much faster. This is because we increasingly use and depend on services that upload far more data. The pandemic underscored this, as people moved to remote socializing, remote learning, remote work, telehealth, and many other services that require high upload speeds and capacity. And even as we emerge from the pandemic, those models are not going to go away. Essentially, the pandemic jumped our upload needs ahead of schedule, but it does not represent an aberration. If anything, it proved the viability of remote services. And our internet infrastructure must reflect that need, not the needs of the past. The numbers bear this out, with services reporting upstream traffic increasing 56% in 2020. And if anything close to that rate of growth in upload demand persists, then average upload demand will exceed 100 Mbps by 2026. Those speeds will be completely unobtainable with infrastructure designed around 100/20 Mbps, but perfectly within reach of fiber-based networks. 
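The projections above are simple compound growth. A quick sketch of the arithmetic (the 207/16 Mbps baseline, the 21% overall growth rate, and the 56% upstream growth rate are the figures cited in this section; the five-year horizon is 2021 to 2026):

```python
def project(mbps, annual_growth, years):
    """Compound a speed figure at a constant annual growth rate."""
    return mbps * (1 + annual_growth) ** years

# 2021 baseline (OpenVault averages) compounded to 2026 at 21% per year:
print(round(project(207, 0.21, 5)))  # 537 Mbps down
print(round(project(16, 0.21, 5)))   # 41 Mbps up

# If 2020's 56% upstream growth rate persisted instead:
print(round(project(16, 0.56, 5)))   # 148 Mbps up, well past the 100 Mbps mark
```

Either way the math is run, projected upload demand lands at or beyond what a 100/20 Mbps network can deliver.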
Notably, all the applications and services driving the increased demand on upstream usage (telehealth, remote work, distance learning) are based on symmetric usage of broadband—that is, 100/100 Mbps, not 100/20 Mbps. And future cloud-based computing services are predicted to actually need higher upload speeds than download speeds to function. Which Definition Will Increase Upload Speeds Most Cost-Effectively? With upload demand skyrocketing, networks will have to improve their capacity. However, the cable infrastructure that would be maintained by a 100/20 Mbps definition is already reaching its capacity. That means that, in order to upgrade, companies will eventually have to start replacing the old infrastructure with fiber anyway. Or, they will be stuck delivering below what Americans need. The same is true for wireless internet. In other words, the only way to upgrade a non-fiber, 100/20 Mbps network is to connect it with fiber. There is just nowhere for the current infrastructure to go. Deploying fiber now saves everyone the cost of doing minor upgrades today and then having to deploy fiber in a few years. Slow networks ultimately cost more than just going straight to fiber because they eventually have to be replaced by fiber anyway, becoming wasted investments. Furthermore, once on fiber, increasing your speed comes much more cheaply, since the hardware at the ends of the fiber connections can be upgraded without digging and laying new cables. You can see this in the financial data from Chattanooga’s municipal fiber entity in 2015, when it upgraded from 1 gigabit to 10 gigabits: it did not experience a substantial increase in costs to upgrade at all. Which Definition Will Deliver Gigabit Speeds? For the same reason 100/20 cable and wireless systems can’t easily improve their upload speed, they also can’t turn around and deliver gigabit speeds. 
Meanwhile, the same fiber network able to deliver 100/100 Mbps is also capable of delivering 1000/1000 Mbps and 10,000/10,000 Mbps with affordable upgrades to its hardware. 80,000/80,000 Mbps is already possible now over the same fiber wire, though the price of the hardware remains high. As the price comes down, 80 gigabit symmetrical could become the next standard for fiber networks. Wireless connected with fiber benefits from these gains, with the only limitation being the amount of available spectrum for wireless transmission. Which Definition Will Give Americans an Affordable Option That Meets Their Needs Over Time? There is zero chance a network built to deliver 100/20 Mbps that isn’t premised on fiber can offer a scalable, low-cost solution in the future, for all the reasons listed above. Capacity constraints on cable and non-fiber-based wireless drastically limit the extent to which they can add new users. Their solution is to offer significantly lower speeds than 100/20 Mbps to minimize the burden on their capacity-constrained network. But a fiber network can share the gains it makes from advancements in hardware because it does not experience a new cost burden to deliver a scalable solution. This is why Chattanooga was able to give its low-income students free 100/100 Mbps internet access during the pandemic at very little cost to the network. Which Definition Makes the U.S. Globally Competitive? Advanced markets in Asia, led by China, will connect a total of 1 billion people to symmetrical gigabit lines. China years ago committed to deploying universal fiber, and it is rapidly approaching that goal. The U.S. could choose to do the same. However, if it instead chooses to upgrade some cable networks and push some slow wireless connectivity out to communities at 100/20 Mbps, our capacity to innovate and grow the internet technology sector will be severely hindered. After all, if the U.S. 
market cannot offer communications infrastructure capable of running the next generation of applications and services due to slow, obsolete speeds, then those applications and services will find their home elsewhere. Not only will this impact our ability to attract a technology sector, but all related industries dependent on connectivity will be relying on speeds vastly inferior to those of gigabit fiber-connected businesses. On each one of these questions, it is clear that the government needs to invest in fiber infrastructure, which means defining what technology gets taxpayer dollars at 100/100 Mbps. While the existing monopolies would like to get that money for infrastructure they don’t actually have to build—old cable lines that can meet the 100/20 Mbps definition—that would do a grave disservice to Americans.  

  • Victory! Fourth Circuit Rules Baltimore’s Warrantless Aerial Surveillance Program Unconstitutional
    by Saira Hussain on July 2, 2021 at 6:22 pm

    This blog post was cowritten by EFF intern Lauren Yu. The U.S. Court of Appeals for the Fourth Circuit ruled last week that Baltimore’s use of aerial surveillance that could track the movements of the entire city violated the Fourth Amendment. The case, Leaders of a Beautiful Struggle v. Baltimore Police Department, challenged the Baltimore Police Department’s (BPD) use of an aerial surveillance program that continuously captured an estimated 12 hours of coverage of 90 percent of the city each day for a six-month pilot period. EFF, joined by the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute, filed an amicus brief arguing that the two previous court decisions upholding the constitutionality of the program misapplied Supreme Court precedent and failed to recognize the disproportionate impact of surveillance, like Baltimore’s program, on communities of color.  In its decision, the full Fourth Circuit found that BPD’s use and analysis of its Aerial Investigation Research (AIR) data was a warrantless search that violated the Fourth Amendment. Relying on the Supreme Court’s decisions in United States v. Jones and United States v. Carpenter, the Fourth Circuit held that Carpenter—which ruled that cell-site location information was protected under the Fourth Amendment and thus may only be obtained with a warrant—applied “squarely” to this case. The Fourth Circuit explained that the district court had misapprehended the extent of what the AIR program could do. The district court believed that the program only engaged in short-term tracking. 
However, the Fourth Circuit clarified that, like the cell-site location information tracking in Carpenter, the AIR program’s detailed data collection and 45-day retention period gave BPD the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.” That ability to deduce an individual’s movements over time violated Baltimore residents’ reasonable expectation of privacy. In making that determination, the court underscored the importance of considering not only the raw data that was gathered but also “what that data could reveal.” Contrary to the BPD’s claims that the aerial surveillance data was anonymous, the court pointed to studies that demonstrated the ease with which people could be identified by just a few points of their location history because of the unique and habitual way we all move. Moreover, the court stated that when this data was combined with Baltimore’s wide array of existing surveillance tools, deducing an individual’s identity became even simpler. The court also recognized the racial and criminal justice implications of oversurveillance. 
It noted that although mass surveillance touches everyone, “its hand is heaviest in communities already disadvantaged by their poverty, race, religion, ethnicity, and immigration status,” and that the impact of high-tech monitoring is “conspicuous in the lives of those least empowered to object.” The court further stated that oversurveillance and the resulting overpolicing do not allow different communities to enjoy the same rights: while “liberty from governmental intrusion can be taken for granted in some neighborhoods,” others “experience the Fourth Amendment as a system of surveillance, social control, and violence, not as a constitutional boundary that protects them from unreasonable searches and seizures.” In a powerful concurring opinion, Chief Judge Gregory dug deeper into this issue. Countering the dissent’s assumption that limiting police authority leads to more violence, the concurrence pointed out that Baltimore spends more per capita on policing than any comparable city, with disproportionate policing of Black neighborhoods. However, policing like the AIR program did not make the city safer; rather, it ignored the root issues that perpetuated violence in the city, including a long history of racial segregation, redlining, and wildly unequal distribution of resources. We are pleased that the Fourth Circuit recognized the danger in allowing BPD to use mass aerial surveillance to track virtually all residents’ movements. Although Baltimore discontinued the program, it is far from the only city to employ such intrusive technologies. This decision is an important victory in protecting our Fourth Amendment rights and a big step toward ending intrusive aerial surveillance programs, once and for all.

  • EFF is Highlighting LGBTQ+ Issues Year-Round
    by Rory Mir on July 2, 2021 at 4:19 pm

EFF is dedicated to ensuring that technology supports freedom, justice, and innovation for all the people of the world. While digital freedom is an LGBTQ+ issue, LGBTQ+ issues are also digital rights issues. For example, LGBTQ+ communities are often those most likely to experience firsthand how big tech can restrict free expression, capitulate to government repression, and undermine user privacy and security. In many ways, the issues faced by these communities today serve as a bellwether of the fights other communities will face tomorrow. This is why EFF is committing to highlight these issues not only during Pride month, but year-round, on our new LGBTQ+ Issue Page.

Centering LGBTQ+ Issues

Last month many online platforms featured pride events and rainbow logos (in certain countries). But their flawed algorithms and moderation restrict the freedom of expression of the LGBTQ+ community year-round. Some cases are explicit, as when blunt moderation policies, responding in part to FOSTA-SESTA, shut down discussions of sexuality and gender. In other instances, platforms such as TikTok will more subtly restrict LGBTQ+ content, allegedly to “protect” users from bullying, while promoting homophobic and anti-trans content. Looking beyond the platforms, government surveillance of LGBTQ+ individuals is also a long-standing concern, including such historic cases as FBI Director J. Edgar Hoover’s “Sex Deviant” file, maintained in the 1960s and used for state abuse. In addition to government repression seen in the U.S. and internationally, data collection by apps disproportionately increases the risk to LGBTQ+ people online and off, because exposing this data can enable targeted harassment. These threats in particular were explored in a blog post last month on Security Tips for Online LGBTQ+ Dating. 
At Home with EFF: Pride Edition

For the second year in a row, EFF has held an At Home with EFF livestream panel to highlight these and other related issues, facilitated by EFF Technologist Daly Barnett. This year’s panel featured Hadi Damien, co-president of InterPride; moses moon, a writer also known as @thotscholar; Ian Coldwater, Kubernetes SIG Security co-chair; and network security expert Chelsea Manning. This conversation featured a broad range of expert opinions and insight on a variety of topics, from how to navigate the impacts of tightly controlled social media platforms, to ways to conceptualize open source licensing to better protect LGBTQ+ individuals. If you missed this informative discussion, you can still view it in its entirety on the EFF Facebook, Periscope, or YouTube page.

LGBTQ+ community resources

Now that June has drawn to a close, there are some ongoing commitments from EFF which can help year-round. For up-to-date information on LGBTQ+ and digital rights issues, you can refer to EFF’s new LGBTQ+ issue page. Additionally, EFF maintains an up-to-date digital security advice project, Surveillance Self-Defense, which includes a page specific to LGBTQ+ youth. LGBTQ+ activists can refer to the EFF advocacy toolkit, and, if their work intersects with digital rights, are invited to reach out to the EFF organizing team at [email protected]. People regularly engaging in digital rights and LGBTQ+ issues should also consider joining EFF’s own grassroots advocacy network, the Electronic Frontier Alliance.

  • Supreme Court Narrows Ability to Hold U.S. Corporations Accountable for Facilitating Human Rights Abuses Abroad
    by Sophia Cope on July 1, 2021 at 11:46 pm

People around the world have been horrified at the role that technology companies like Cisco, Yahoo!, and Sandvine have played in helping governments commit gross human rights abuses. That’s why EFF has consistently called out technology companies, and American companies in particular, that allow their internet surveillance and censorship products and services to be used as tools of repression and persecution, rather than tools to uplift humanity. Yet legal mechanisms to hold companies accountable for their roles in human rights violations are few and far between. The Supreme Court has now further narrowed one mechanism: the Alien Tort Statute (ATS). We now call on Congress to fill the gaps where the Court has failed to act. The Supreme Court recently issued an opinion in Nestlé USA, Inc. v. Doe, in which we filed an amicus brief (along with Access Now, Article 19, Privacy International, the Center for Long-Term Cybersecurity, and Ronald Deibert, director of Citizen Lab at the University of Toronto). Former child slaves on cocoa farms in Côte d’Ivoire claimed that two American chocolate companies, Nestlé USA and Cargill, facilitated their abuse at the hands of the farm operators by providing training, fertilizer, tools, and cash in exchange for the exclusive right to buy cocoa. The plaintiffs sued under the ATS, a law first passed by Congress in 1789, which allows foreign nationals to bring civil claims in U.S. federal court against defendants who violated “the law of nations or a treaty of the United States,” which many courts have recognized should include violations of modern notions of human rights, including forced labor. EFF’s brief detailed how surveillance, communications, and database systems, just to name a few, have been used by foreign governments—with the full knowledge of and assistance by the U.S. 
companies selling those technologies—to spy on and track down activists, journalists, and religious minorities who have then been imprisoned, tortured, and even killed.

First, the Bad News

The centerpiece of the Supreme Court’s opinion is about what has to happen inside the U.S. to make an American company liable, since the Court’s earlier decision in Kiobel v. Royal Dutch Petroleum (2013) had rejected the idea that a multinational corporation based in the U.S. could be held liable solely for its actions abroad. The former child slaves alleged that Nestlé USA and Cargill made “every major operational decision” in the United States, along with, of course, pocketing the profits. This U.S. activity was in addition to the training, fertilizer, tools, and cash the companies provided to farmers abroad in exchange for the exclusive right to buy cocoa. The Court rejected this “operational decision” connection to the U.S. as a basis for ATS liability, saying: Because making “operational decisions” is an activity common to most corporations, generic allegations of this sort do not draw a sufficient connection between the cause of action respondents seek—aiding and abetting forced labor overseas—and domestic conduct … To plead facts sufficient to support a domestic application of the ATS, plaintiffs must allege more domestic conduct than general corporate activity. We strongly disagree with the Court. When a company or an employee leads the company’s operations from within the United States and pockets profits from human rights abuses suffered abroad, the courts in the United States must exercise jurisdiction to hold them accountable. This is especially important when victims have few other options, as is often the case for people living under repressive or corrupt regimes. But this decision should be of little comfort to companies that take material steps in the U.S. 
to develop digital tools that are used to facilitate human rights abuses abroad—companies like Cisco, which, according to the plaintiffs in that case, specifically created an internet surveillance system for the Chinese government that targeted minority groups like the Falun Gong for repression. Building surveillance tools for the specific purpose of targeting religious minorities is not merely an “operational decision,” even under the Supreme Court’s crabbed view. EFF’s Know Your Customer framework is a good place to start for any company seeking to stay on the right side of human rights.

Next, Some Good News

While we are not happy with the Nestlé decision, it did not embrace some of the more troubling arguments from the companies. A key question on appeal was whether U.S. corporations should be immune from suit under the ATS, that is, whether ATS defendants may only be natural persons. The Supreme Court had already held in Jesner v. Arab Bank (2018) that foreign corporations are immune from suit under the ATS, meaning that U.S. courts don’t have jurisdiction over a company based in Europe, for example, relying on forced labor in Asia. Thus, the question remained outstanding as to U.S. corporations. In Nestlé, five justices (Sotomayor, Breyer, Kagan, Gorsuch, Alito) agreed that the ATS should apply to U.S. corporations. As Justice Gorsuch wrote in his concurring opinion, “Nothing in the ATS supplies corporations with special protections against suit… Generally [] the law places corporations and individuals on equal footing when it comes to assigning rights and duties.” This was refreshing consistency, as the Court has held that corporations are “persons” in other legal contexts, including for purposes of free speech, and the companies in this case had pushed hard for a blanket corporate exception to ATS liability. 
Corporate accountability dodged another bullet, as it appears that a majority of the Court agreed that federal courts may continue to recognize, under certain circumstances, as the Court addressed in Sosa v. Alvarez-Machain (2004), new causes of action for violations of modern conceptions of human rights, given that the “law of nations” has evolved over the centuries. This might include new substantive claims, such as for child slavery, or indirect forms of liability, such as aiding and abetting. Justice Sotomayor, joined by Breyer and Kagan, was explicit about this in her concurrence. She cited the Court’s opinion in Jesner, which notes “the evolving recognition … that certain acts constituting crimes against humanity are in violation of basic precepts of international law.” Only three justices (Thomas, Gorsuch, Kavanaugh) would have limited the ATS to a very narrow set of historical claims involving piracy or violations of the rights of diplomats and “safe conducts.” These justices would prohibit new causes of action under the ATS, including the claim of aiding and abetting child slavery at issue in Nestlé.

Next Step: Congress

As the Supreme Court has increasingly tightened its view of the ATS, in large part because the law is very old and not very specific, the Nestlé decision should be a signal for Congress. Justice Thomas’ opinion goes to great lengths to praise Congress’ ability to create causes of action and forms of liability by statute, grounded in international law. He argues “that there always is a sound reason to defer to Congress.” Congress should take him up on this invitation and act now to ensure U.S. courts remain avenues of redress for victims of human rights violations—especially as American companies continue to be leaders in developing and selling digital tools of repression to foreign governments. Any American company that puts profits over human rights should face real accountability for doing so. Related Cases: Doe I v. Cisco

  • Victory! Federal Court Halts Florida’s Censorious Social Media Law Privileging Politicians’ Speech Over Everyday Users
    by Aaron Mackey on July 1, 2021 at 7:44 pm

A federal court on Thursday night blocked Florida’s effort to force internet platforms to host political candidates’ and media entities’ online speech, ruling that the law violated the First Amendment and a key federal law that protects users’ speech. We had expected the court to do so. The Florida law, S.B. 7072, prohibited large online intermediaries—save for those that also happened to own a theme park in the state—from terminating politicians’ accounts or taking steps to de-prioritize their posts, regardless of whether the posts would have otherwise violated the sites’ own content policies. The law also prevented services from moderating posts by anyone who qualified as a “journalistic enterprise” under the statute, which was so broadly defined as to include popular YouTube and Twitch streamers. EFF and Protect Democracy filed a friend-of-the-court brief in the case, NetChoice v. Moody, arguing that although online services frequently make mistakes in moderating users’ content, disproportionately harming marginalized voices, the Florida statute violated the First Amendment rights of platforms and other internet users. Our brief pointed out that the law would only have “exacerbat[ed] existing power disparities between certain speakers and average internet users, while also creating speaker-based distinctions that are anathema to the First Amendment.” In granting a preliminary injunction barring Florida officials from enforcing the law, the court agreed with several arguments EFF made in its brief. As EFF argued, the “law itself is internally inconsistent in that it requires ‘consistent’ treatment of all users, yet by its own terms sets out two categories of users for inconsistent special treatment.” The court agreed, writing that the law “requires a social media platform to apply its standards in a consistent manner, but . . . 
this requirement is itself inconsistent with other provisions.” The court also found that the law intruded upon online services’ First Amendment rights to set their own content moderation policies, largely because it mandated differential treatment of the content of certain online speakers, such as political candidates, over others. These provisions made the law “about as content-based as it gets,” the court wrote. Because the law amounted to a content- and viewpoint-based restriction on speech, Florida was required to show that it had a compelling interest in the restrictions and that the law burdened no more speech than necessary to advance that interest. The court ruled the Florida law failed that test. “First, leveling the playing field—promoting speech on one side of an issue or restricting speech on the other—is not a legitimate state interest,” the court wrote. Further, the law’s speech restrictions and burdens swept far beyond addressing concerns about online services silencing certain voices; as the court wrote, the law amounted to “an instance of burning the house to roast the pig.” As EFF wrote in its brief, inconsistent and opaque content moderation by large online media services is a legitimate problem that leads to online censorship of too much important speech. But coercive measures like S.B. 7072 are not the answer to this problem: The decisions by social media platforms to cancel accounts and deprioritize posts may well be scrutinized in the court of public opinion. But these actions, as well as the other moderation techniques barred by S.B. 7072, are constitutionally protected by binding Supreme Court precedent, and the state cannot prohibit, proscribe, or punish them any more than states can mandate editorial decisions for news media. EFF is pleased that the court has temporarily prohibited Florida from enforcing S.B. 7072, and we look forward to the court issuing a final ruling striking the law down. 
We would like to thank our local counsel, Christopher B. Hopkins, at McDonald Hopkins LLC for his help in filing our brief.

  • Nominations Open for 2021 Barlows!
    by Hannah Diaz on July 1, 2021 at 7:00 am

Nominations are now open for the 2021 Barlows, to be presented at EFF’s 30th Annual Pioneer Award Ceremony. Established in 1992, the Pioneer Award Ceremony recognizes leaders who are extending freedom and innovation in the realm of technology. In honor of Internet visionary, Grateful Dead lyricist, and EFF co-founder John Perry Barlow, recipients are awarded a “Barlow”; the honors were previously known as the Pioneer Awards. The nomination window will be open until July 15th at 12:00 PM Pacific time. You could nominate the next Barlow winner today!

What does it take to be a Barlow winner? Nominees must have contributed substantially to the health, growth, accessibility, or freedom of computer-based communications. Their contributions may be technical, social, legal, academic, economic, or cultural. This year’s winners will join an esteemed group of past award winners that includes the visionary activist Aaron Swartz, global human rights and security researchers The Citizen Lab, open-source pioneer Limor “Ladyada” Fried, and whistleblower Chelsea Manning, among many remarkable journalists, entrepreneurs, public interest attorneys, and others.

The Pioneer Award Ceremony depends on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the Pioneer Award Ceremony, please email [email protected]. Nominations close on July 15th at 12:00 PM Pacific time! After you nominate your favorite contenders, we hope you will consider joining our virtual event this fall to celebrate the work of the 2021 winners. If you have any questions or if you’d like to receive updates about the event, please email [email protected].

GO TO NOMINATION PAGE: Nominate your favorite digital rights hero now!

  • Victory! Biden Administration Rescinds Dangerous DHS Proposed Rule to Expand Biometrics Collection
    by Saira Hussain on June 30, 2021 at 10:17 pm

    Marking a big win for the privacy and civil liberties of immigrant communities, the Biden Administration recently rescinded a Trump-era proposed rule that would have massively expanded the collection of biometrics from people applying for an immigration benefit. Introduced in September 2020, the U.S. Department of Homeland Security (DHS) proposal would have mandated biometrics collection far beyond the status quo—including facial images, voice prints, iris scans, DNA, and even behavioral biometrics—from anyone applying for an immigration benefit, including immigrants and often their U.S. citizen and permanent resident family members. The DHS proposed rule garnered more than 5,000 comments in response, the overwhelming majority of which opposed this unprecedented expansion of biometrics. Five U.S. Senators also demanded that DHS abandon the proposal. EFF, joined by several leading civil liberties and immigrant rights organizations, submitted a comment that warned the proposal posed grave threats to privacy, in part because it permitted collection of far more data than needed to verify a person’s identity and stored all data collected in the same place—amplifying the risk of future misuse or breach. EFF’s comment also highlighted the burden on First Amendment activity, particularly because the breadth of sensitive biometrics required by the proposal could lay the groundwork for a vast surveillance network capable of tracking people in public places. That harm would disproportionately impact immigrants, communities of color, religious minorities, and other marginalized communities. In its final days, the Trump Administration failed to finalize the proposed rule. Civil liberties and immigrant rights organizations, including EFF, pushed hard during the transition period to rescind it. Last month, the Biden Administration did just that. The rescission of this dangerous proposal is important to protecting the privacy rights of immigrant communities. 
However, those rights have been continuously eroded, including by a regulation enacted last year that requires DHS to collect DNA from people in U.S. Immigration and Customs Enforcement (ICE) and U.S. Customs and Border Protection (CBP) custody and enter it into the FBI’s CODIS database. Recent reporting has shown this practice to be even more widespread than anticipated, with border officers relying on the regulation to collect DNA from asylum-seekers. History has long shown that the surveillance we allow against vulnerable communities often expands to affect the rest of the population. While the rescission of this proposed rule is a good first step, the battle for the privacy rights of immigrants—and for all of us—continues. Related Cases: Federal DNA Collection; DNA Collection

  • PCLOB “Book Report” Fails to Investigate or Tell the Public the Truth About Domestic Mass Surveillance
    by Matthew Guariglia on June 30, 2021 at 4:48 pm

The Privacy and Civil Liberties Oversight Board (PCLOB) has concluded its six-year investigation into Executive Order 12333, one of the most sprawling and influential authorities that enables the U.S. government’s mass surveillance programs. The result is a bland, short summary of a classified report, as well as a justified, scathing, and unprecedented unclassified statement of opposition from PCLOB member Travis LeBlanc. Let’s start with the fact that the report is still classified—the PCLOB is supposed to provide public access to its work “to the greatest extent” consistent with the law and the needs of classification. Yet the public statement here is just 26 pages describing, rather than analyzing, the program. Nothing signals to the public a lack of commitment to transparency and a frank assessment of civil liberties violations like blocking the public from even reading a report about one of the most invasive U.S. surveillance programs. Member LeBlanc rightly points out that, at a minimum, the PCLOB should have sought to have as much of its report declassified as possible, rather than issuing what he correctly criticizes as more like a “book report” than an expert legal and technical assessment. The PCLOB was created after a recommendation by the 9/11 Commission to address important civil liberties issues raised by intelligence community activities. While its first report about Section 215 was critical in driving Congress to scale back that program, other PCLOB reports have been less useful. EFF sharply disagreed with the Board’s findings in 2014 on surveillance under FISA Section 702, especially where it found that the Section 702 program is sound “at its core,” and provides “considerable value” in the fight against terrorism—despite going on to make ten massive recommendations for what the program must do to avoid infringing on people’s privacy. 
But even by the standards of past PCLOB reports, this latest report represents a new low, especially when addressing the National Security Agency’s XKEYSCORE. XKEYSCORE is a tool that the NSA uses to sift through the vast amounts of data it obtains, including under Executive Order 12333. As the Guardian reported in 2013 based upon Edward Snowden’s revelations, XKEYSCORE gives analysts the power to watch—in real time—anything a person does on the Internet. There are real issues raised by this tool and, as LeBlanc notes, other than by the PCLOB, the XKEYSCORE program is “unlikely to be scrutinized by another independent oversight authority in the near future.” LeBlanc writes that his opposition to the report stems from:

  • The unwillingness of the investigation into Executive Order 12333 to scrutinize modern technological surveillance issues, such as algorithmic decision making, and their impact on privacy and civil liberties;
  • A failure of the Board majority to investigate and evaluate not just how XKEYSCORE can query online communications that the NSA already has, but the legal authority and technological mechanisms that allow it to collect that data in the first place;
  • The decision to leave out of the report any analysis of the actual effectiveness, costs, or benefits of XKEYSCORE;
  • The haphazard and unthoughtful way the NSA defended its legal justification for the program’s use—and the Board’s unwillingness to probe into any possible issues of compliance;
  • A vote to exclude LeBlanc and Board member Ed Felten’s additional recommendations from the report;
  • The unwillingness of the Board to attempt to declassify the full report or inform the public about it, which LeBlanc labels as “inexcusable”; and
  • The unconventional process by which the Board voted to release the report.

Any one of these concerns would be significant. Taken together they are a scathing indictment of an oversight board that appears to be unable or unwilling to exercise actual oversight. 
As LeBlanc notes, there is so much about XKEYSCORE and the NSA’s operations under Executive Order 12333 that requires more public scrutiny and deep legal analysis. But it seems impossible to achieve this under the current regime of overbroad secrecy and PCLOB’s refusal to play its role in both analyzing the programs and giving the public the information it needs. LeBlanc rightly notes that the report ignores the “collection” of information and that both collection and querying “are worthy of review for separate legal analysis, training, compliance and audit processes.” As we have long argued, this analysis is important “whether the collection and querying activities are performed by humans or machines.” He also notes that the PCLOB failed to grapple with so-called “incidental” collection—how ordinary Americans are caught up in mass surveillance even when they are not the targets. And he notes that the PCLOB failed to investigate compliance and accepted a legality analysis of XKEYSCORE by the NSA’s Office of General Counsel that appears to have been written only after the PCLOB requested it, despite the program having operated for at least a decade before. What’s more, the review fails to take into consideration the Supreme Court’s more modern analysis of the Fourth Amendment. Those are just some of his concerns—all of which we share. Nor did the PCLOB analyze the effectiveness of the program. Basic questions remain unanswered, like whether this program has ever saved any lives, or whether, as with so many other mass surveillance programs, any necessary information it gathered could have been gathered in another manner. These are important questions that we deserve answers to—and at least one member of the PCLOB board agrees. On December 4, 1981, Ronald Reagan signed Executive Order 12333, which gave renewed and robust authorities to the U.S. 
intelligence community to begin a regime of data collection that would eventually encompass the entire globe. Executive Order 12333 served as a damning pivot after less than a decade of reforms and mea culpas. The 1975 report of the Church Committee revealed, and attempted to end, three decades of lawless surveillance, repression, blackmail, and sabotage that the FBI, CIA, and NSA wrought on Americans and the rest of the world. Executive Order 12333 returned the intelligence community to its original state: legally careless, technically irresponsible, and insatiable for data. Twenty-three years after Reagan signed Executive Order 12333, PCLOB was established as a counterbalance to the intelligence community’s free rein over the years. But it’s clear that despite some early achievements, the PCLOB is not living up to its promise. That’s why cases like EFF’s Jewel v. NSA, while not about XKEYSCORE or Executive Order 12333 per se, are critically important to ensure that our constitutional and statutory rights remain protected by the courts, since independent oversight is failing. But at minimum, the PCLOB owes the public the truth about mass surveillance—and even its members are starting to see that. Related Cases: Jewel v. NSA

  • Setbacks in the FTC’s Antitrust Suit Against Facebook Show Why We Need the ACCESS Act
    by Mitch Stoltz on June 30, 2021 at 4:58 am

After a marathon markup last week, a number of bills targeting Big Tech’s size and power, including the critical ACCESS Act, were passed out of committee and now await a vote by the entire House of Representatives. This week, decisions by a federal court tossing out both the Federal Trade Commission’s (FTC) antitrust complaint against Facebook and a similar one brought by 48 state Attorneys General underscore why we need those new laws.

TELL YOUR REP TO SUPPORT THE ACCESS ACT

The federal court in DC ruled narrowly that the FTC hadn’t included enough detail in its complaint about Facebook’s monopoly power, and gave it 30 days to do so. That alone was troubling, but probably not fatal to the lawsuit. A more ominous problem came in what lawyers call dicta: the court opined that even if Facebook had monopoly power, its refusal to interoperate with competing apps was OK. This decision highlights many of the difficulties we face in applying current antitrust law to the biggest Internet companies, and shows why we need changes to the law to address modern competition problems caused by digital platform companies like Facebook. When the FTC filed its suit in December 2020, and 48 U.S. states and territories filed a companion suit, we celebrated the move, but we predicted two challenges: 1) proving that Facebook has monopoly power, and 2) overcoming Facebook’s defenses around preserving its own ability and incentives to innovate. Yesterday’s decision by Judge James E. Boasberg of the Federal District Court for D.C. touched on both of those.

What Is Facebook, Exactly?

To make a case under the part of antitrust law that deals with monopolies—Section 2 of the Sherman Act—a plaintiff has to show that the accused company is a monopoly, legally speaking, or trying to become one. And being a monopoly doesn’t just mean being a really large company or commanding a lot of public attention and worry. 
It means that within some “market” for goods or services, the company has a large enough share of commerce that the prices it charges, or the quality of its products, aren’t constrained by rivals. Defining the market is a hugely important step in monopoly cases, because a broad definition could mean there’s no monopoly at all. Judge Boasberg accepted the FTC’s market definition for social networks, at least for now. Facebook argued that its market could include all kinds of communications tools that don’t employ a “social graph” to link each user’s personal connections. In other words, Facebook argued that it competes against lots of other communications tools (including, perhaps, telephone calls and email) so it’s not a monopolist. Judge Boasberg rejected that argument, ruling that the FTC’s definition of “personal social networks” as a unique product was at least “theoretically rational.” But the judge also ruled that the FTC hadn’t included enough detail in its complaint about Facebook’s power within the social network market. While the FTC alleged that Facebook controlled “in excess of 60%” of the market, the agency didn’t say anything about how it arrived at that figure, nor which companies made up the other 40%. In an “unusual, nonintuitive” market like social networks, the judge said, a plaintiff has to put more detail in its complaint beyond a market share percentage. Even though market definition questions are often the place where a monopoly case lives or dies, this issue doesn’t seem to be fatal to the FTC’s case. The agency can probably file an amended complaint giving more detail about who’s in the social networking market and how big Facebook’s share of that market is. Alternatively, the FTC might allege that Facebook has monopoly power because it has repeatedly broken its public promises about protecting users’ privacy, and otherwise become even more of a surveillance-powered panopticon, without losing any significant number of users. 
This approach is equivalent to alleging that a company is able to raise prices without losing customers—a traditional test for monopoly power. The case has a long way to go, because the FTC (and the states) still have to prove their allegations with evidence. We can expect a battle between expert economists over whether Facebook actually competes with LinkedIn, YouTube, Twitter, email, or the comments sections of newspaper sites. But in the meantime, the case is likely to clear an important early hurdle.

Interoperability is Not Required – Even for a Monopolist

Another part of the court’s decision is more troubling. Facebook doesn’t allow third-party apps to interoperate with Facebook if they “replicate Facebook’s core functionality”—i.e., if they compete with Facebook. The FTC alleged that this was illegal given Facebook’s monopoly power. Judge Boasberg disagreed, writing that “a monopolist has no duty to deal with its competitors, and a refusal to do so is generally lawful even if it is motivated . . . by a desire ‘to limit entry’ by new firms or impede the growth of existing ones.” A monopolist’s “refusal to deal” with a competitor, wrote the judge, is only illegal when the two companies already had an established relationship, and cutting off that relationship would actually hurt the monopolist in the short term (a situation akin to selling products at a loss to force a rival out of business, known as “predatory pricing”). Facebook’s general policy of refusing to open its APIs to competing apps didn’t fit into this narrow exception. This ruling sends a strong signal that the FTC won’t be able to use its lawsuit to compel Facebook to allow interoperability with potential competitors. This decision doesn’t end the lawsuits, because Judge Boasberg ruled that the FTC’s challenge to Facebook’s gobbling up potential rivals like Instagram and WhatsApp was valid and could continue. 
Cracking down on tech monopolists’ aggressive acquisition strategies is an important part of dealing with the power of Big Tech. But the court’s dismissal of the interoperability theory is a significant loss. We hope the FTC and the states appeal this decision at the appropriate time, because the law can and should require companies with a gatekeeper role on the Internet to interoperate with competing applications. As we’ve written, interoperability helps solve the monopoly problem by allowing users to leave giant platforms like Facebook without leaving their friends and social connections behind. New competitors could compete by offering their users better privacy protections, better approaches to content moderation, and so on, which in turn would force the Big Tech platforms to do better on these fronts. This might be possible under today’s antitrust laws, particularly if courts are willing to adopt a broader concept of “predatory conduct” that encompasses Big Tech’s strategy of foregoing profits for many years while growing an unassailable base of users and their data—an approach that incoming FTC chair Lina Khan suggested in her seminal paper about Amazon. We hope the FTC pursues an interoperability solution and doesn’t let this important part of the case fade away. But we shouldn’t bet the future of the Internet on a judicial solution, because Judge Boasberg’s ruling on interoperability in this case is no outlier. Many courts, including the Supreme Court, have generally been comfortable with monopolists standing at the gates they have built and denying entry to anyone who might one day threaten their power. We need a change in the law. That’s why EFF supports the ACCESS Act, which the House Judiciary Committee approved last week with bipartisan support. 
The ACCESS Act would require the biggest online platforms to interoperate with third-party apps through APIs established by technical committees and approved by the FTC, while still requiring all participants to safeguard users’ privacy and security. We’re pleased to see the antitrust cases against Facebook continue, and we trust that the FTC attorneys under Lina Khan will give it their all, along with the states that continue to champion users in this fight. But we’re worried about the limitations of today’s antitrust law, as shown by yesterday’s decision. The ACCESS Act, along with the other Big Tech-related bills advanced last week and similar efforts in the Senate, is badly needed.

  • A Wide, Diverse Coalition Agrees on What Congress Needs to Do About Our Broadband
    by Ernesto Falcon on June 29, 2021 at 3:38 pm

A massive number of groups representing interests as diverse as education, agriculture, the tech sector, public and private broadband providers, low-income advocacy, workers, and urban and rural community economic development came together on a letter asking Congress to be bold in its infrastructure plan. They are asking the U.S. Congress to tackle the digital divide with the same purpose and scale as rural electrification, and to focus on delivering 21st-century, future-proof access to every American. While slow-internet incumbents are pushing Congress to go small and do little, a huge contingent of this country is eager for Congress to solve the problem and end the digital divide. What Unifies so Many Different Groups? Fully Funding Universal, Affordable, Future-Proof Access For months Congress has been hounded by big ISP lobbyists interested in preserving their companies’ take of government money. The big ISPs—your AT&Ts, Comcasts, and the former Time Warner Cable—want to preserve the monopolies that have resulted in our current limited, expensive, slow internet access. Americans’ needs and interests are the opposite: we need a strong focus on building 21st-century-ready infrastructure out to everyone. At the core of every new network lies fiber optic wire, an inconvenient fact for legacy monopolies that intended to rely on obsolete wires for years to come. These incumbents have argued that broadband is already affordable, and that efforts to promote the public model to address rural and low-income access were akin to a “Soviet” takeover of broadband. All this opposition is happening while a billion fiber lines are being laid in the advanced Asian markets, primarily led by China, calling into question whether the United States wants to keep up or be left behind.
But our collective lived experience makes clear we need a change: a pandemic in which kids in large cities do homework in fast food parking lots, and rural Americans unable to engage meaningfully in remote work and distance learning. In cities, where it is profitable to serve everyone fully, it’s clear that low-income people have been discriminated against and skipped over through digital redlining. Rural Americans who have basic internet access are forced to rely on inferior and expensive service that is incapable of providing access to the modern internet (let alone the future one). ISPs have obscured this systemic problem by lobbying to continue to define 25/3 Mbps as sufficient for connecting to the internet. That metric is unequivocally useless for assessing the status of our communications infrastructure: it makes it look like the U.S. has more coverage than it does, because it represents the peak performance of old, outdated internet infrastructure. It is therefore important to raise the standard to accurately reflect what is actually needed today and for decades to come. What we build under a massive federal program needs to meet that standard, not one from the earlier days of broadband. Not doing so would repeat the mistakes of the past, when a large portion of $45 billion in federal subsidies was invested in obsolete infrastructure. Those wires have hit their maximum potential, and no amount of money handed to the current large ISPs will change that fact. We have to get it right this time.

  • EFF to Ecuador’s Human Rights Secretariat: Protecting Security Experts is Vital to Safeguard Everyone’s Rights
    by Veridiana Alimonti on June 29, 2021 at 4:00 am

Today, EFF sent a letter to Ecuador’s Human Rights Secretariat about the troubling, slow-motion case against Swedish computer security expert Ola Bini, who was arrested in April 2019, following Julian Assange’s ejection from Ecuador’s London Embassy. Ola Bini spent 70 days in prison until a Habeas Corpus decision deemed his detention illegal. He was released from jail, but the investigation continued, seeking evidence to back the accusations against the security expert. The circumstances of Ola Bini’s detention, fraught with due process violations described by his defense, sparked international attention and indicated the growing seriousness of the harassment of security experts in Latin America. The criminal prosecution has dragged on for two years since Bini’s release. And as a suspect under trial, Ola Bini continues to be deprived of the full enjoyment of his rights. During 2020, pre-trial hearings set to examine Bini’s case were suspended and rescheduled at least five times. The Office of the IACHR Special Rapporteur for Freedom of Expression expressed concern with this delay in its 2020 annual report. Last suspended in December, the pre-trial hearing is set to continue this Tuesday (6/29). Ecuador’s new President, Guillermo Lasso, recently appointed a new head of the country’s Human Rights Secretariat, Ms. Bernarda Ordoñez Moscoso. We hope Ms. Ordoñez can play a relevant role by connecting the protection of security experts to the Secretariat’s mission of upholding human rights. EFF’s letter calls upon Ecuador’s Human Rights Secretariat to give special attention to Ola Bini’s upcoming hearing and prosecution. As we’ve stressed in our letter, Mr. Bini’s case has profound implications for, and sits at the center of, the application of human rights and due process: a landmark case in the context of arbitrarily applying overbroad criminal laws to security experts. Mr.
Bini’s case represents a unique opportunity for the Human Rights Secretariat Cabinet to consider and guard the rights of security experts in the digital age.  Security experts protect the computers upon which we all depend and protect the people who have integrated electronic devices into their daily lives, such as human rights defenders, journalists, activists, dissidents, among many others. To conduct security research, we need to protect the security experts, and ensure they have the tools to do their work. Ola Bini’s arrest happened shortly after Ecuador’s Interior Minister at the time, María Paula Romo, held a press conference to claim that a group of Russians and Wikileaks-connected hackers were in the country, planning a cyber-attack in retaliation for the government’s eviction of Julian Assange from Ecuador’s London Embassy. However, no evidence to back those claims was provided by Romo. EFF has been tracking the detention, investigation, and prosecution of Ola Bini since its early days in 2019. We conducted an on-site visit to the country’s capital, Quito, in late July that year, and underscored the harmful impact that possible political consequences of the case were having on the security expert’s chances of receiving a fair trial. Later on, a so-called piece of evidence was leaked to the press and taken to court: a photo of a screenshot, taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. As we’ve explained, the image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed—respecting the warning. As for the portion of Bini’s message exchange with a colleague, leaked with the photo, it shows their concern with the router being insecurely open to telnet access on the wider Internet, with no firewall. 
More recently, in April 2021, Ola Bini’s Habeas Data recourse, filed in October 2020 against the National Police, the Ministry of Government, and the Strategic Intelligence Center (CIES), was partially granted by the judge. According to Bini’s defense, he had been facing continuous monitoring by members of the National Police and unidentified persons. The decision requested that CIES provide information on whether the agency has conducted surveillance activities against the security expert. The ruling concluded that CIES unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request. EFF has a longstanding history of countering the unfair criminal persecution of security experts, who have unfortunately been the subject of the same types of harassment as those they work to protect, such as human rights defenders and activists. The flimsy allegations against Ola Bini, the series of irregularities and human rights violations in his case, as well as its international resonance, situate it squarely among other cases we have seen of politicized and misguided allegations against technologists and security researchers. We hope Ecuador’s Human Rights Secretariat also carefully examines the details surrounding Ola Bini’s prosecution, and follows its developments so that the security expert can receive a fair trial. We respectfully urge that body to assess and address the complaints of injustice, which it is uniquely well positioned to do.

  • Supreme Court Says You Can’t Sue the Corporation that Wrongly Marked You A Terrorist
    by Cindy Cohn on June 28, 2021 at 5:41 pm

    In a 5-4 decision, the Supreme Court late last week barred the courthouse door to thousands of people who were wrongly marked as “potential terrorists” by credit giant TransUnion. The Court’s analysis of their “standing” —whether they were sufficiently injured to file a lawsuit—reflects a naïve view of the increasingly powerful role that personal data, and the private corporations that harvest and monetize it, play in everyday life. It also threatens Congressional efforts to protect our privacy and other intangible rights from predation by Facebook, Google and other tech giants. Earlier this year, we filed an amicus brief, with our co-counsel at Hausfeld LLP, asking the Court to let all of the victims of corporate data abuses have their day in court. What Did the Court Do? TransUnion wrongly and negligently labelled approximately 8,000 people as potential terrorists in its databases. It also made that dangerous information available to businesses across the nation for purposes of making credit, employment, and other decisions. TransUnion then failed to provide the required statutory notice of the mistake. The Supreme Court held this was not a sufficiently “concrete” injury to allow these people to sue TransUnion in federal court for violating their privacy rights under the Fair Credit Reporting Act. Instead, the Court granted standing only to the approximately 1,800 of these people whose information was actually transmitted to third parties. The majority opinion, written by Justice Kavanaugh, fails to grapple with how consumer data is collected, analyzed, and used in modern society. It likened the gross negligence resulting in a database marking these people as terrorists to “a letter in a drawer that is never sent.” But the ongoing technological revolution is not at all like a single letter. It involves large and often interconnected sets of corporate databases that collect and hold a huge amount of our personal information—both by us and about us. 
Those information stores are then used to create inferences and analyses that carry tremendous and often new risks for us that can be difficult even to understand, much less trace. For example, consumers who are denied a mortgage, a job, or another life-altering opportunity based upon bad records in a database, or inferences drawn from those records, will often be unable to track the harm back to the wrongdoing data broker. In fact, figuring out how decisions were made, much less finding the wrongdoer, has become increasingly difficult as an opaque archipelago of databases is linked and used to build and deploy machine learning systems that judge us and limit our opportunities. This decision is especially disappointing after the Court’s recent decisions, such as Riley and Carpenter, that demonstrated a deep understanding that new technology requires new approaches to privacy law. The Court concluded in those cases that when police collect and use more and more of our data, that fundamentally changes the inquiry into our Fourth Amendment right to privacy, and the Court could not rigidly follow pre-digital cases. The same should be true when new technologies are used by private entities in ways that threaten our privacy. The majority’s dismissal of Congressional decision-making is also extremely troubling. In 1970, at the dawn of the database era, Congress decided that consumers should have a cause of action when a credit reporting agency fails to take reasonable steps to ensure that the data it holds is correct.
Here, TransUnion broke this rule in an especially reckless way: it marked people as potential terrorists simply because they shared a name with people on a terrorist watch list, without checking middle names, birthdays, addresses, or other information that TransUnion itself undoubtedly already had. The potential harms this could cause are particularly obvious and frightening. Yet the Court decided that, despite Congress’ clear determination to grant us the right to a remedy, it could still bar the courthouse doors. Justice Thomas wrote the principal dissent, joined by Justices Breyer, Sotomayor, and Kagan. As Justice Kagan explained in an additional dissent, the ruling “transforms standing law from a doctrine of judicial modesty into a tool of judicial aggrandizement.” Indeed, Congress specifically recognized new harms and provided a new cause of action to enforce them, yet the Court nullified these democratically enacted rights and remedies based on its crabbed view that the harms are not sufficiently “concrete.” What Comes Next? This could pose problems for a future Congress that wants to get serious about recognizing, and empowering us to seek accountability for, the unique and new harms caused by modern data misuse practices, potentially including harms arising from decision-making based upon machine learning and artificial intelligence. Congress will need to make a record of the grievous injuries caused by out-of-control data processing by corporations who care more for their profits than our privacy, expressly tie whatever consumer protections it creates to those harms, and be crystal clear about how those harms justify a private right of action. The Court’s opinion does provide some paths forward, however. Most importantly, the Court expressly confirmed that intangible harms can be sufficiently concrete to bring a lawsuit.
In doing so, the Court rejected the cynical invitation from Facebook, Google, and tech industry trade groups to deny standing to all but those who suffered a physical or economic injury. Nonetheless, we anticipate that companies will try to use this new decision to block further privacy litigation. We will work to make sure that future courts don’t overread this case. The Court also recognized that the risk of future harm can still be a basis for injunctive relief—so while you cannot seek damages, you don’t have to wait until you are denied credit, a job, or a home before seeking a court’s protection from known bad data practices. Finally, as the dissent observed, the majority’s standing analysis only applies in federal court; state courts applying state laws can go much further in recognizing harms and adjudicating private causes of action, because the federal “standing” doctrine does not apply. The good work being done to protect privacy in states across the country is now all the more important. But, overall, this is a bad day for privacy. We have been cheered by the Supreme Court’s increasing recognition, when ruling on law enforcement activity, of the perils of modern data collection practices and the vast difference between current and previous technologies. Yet now the Court has failed to recognize that Congress must have the power to proactively protect us from the risks created when private companies use modern databases to vacuum up our personal information and use data-based decision-making to limit our access to life’s necessities. This decision is a big step backwards for empowering us to require accountability from today’s personal-data-hungry tech giants. Let’s hope that it is merely an anomaly. We need a Supreme Court that understands and takes seriously the technology-fueled issues facing us in the digital age.

  • Decoding California’s New Digital Vaccine Records and Potential Dangers
    by Alexis Hancock on June 26, 2021 at 12:10 am

This post was updated on 6/29/21 to more accurately describe how New York is running its voluntary vaccine passport program. The State of California recently released what it calls a “Digital COVID-19 Vaccine Record.” It is part of that state’s recent easing of public health rules on masking within businesses. California’s new Record is a QR code that contains the same information as our paper vaccine cards, including name and birth date. We all want to return to normal freedom of movement while keeping our communities safe. But we have two concerns with this plan. First, with minimal effort, businesses could use the information in the vaccination record to track the time and place of our comings and goings, pool that information with other businesses, and sell these dossiers of our movements to the government. We shouldn’t have to submit to a new surveillance technology that threatens pervasive tracking of our movements in public places just to return to normal life. Second, we’re concerned that the Digital Vaccine Record might enable a system of digital vaccine bouncers that limits access to life’s necessities and amplifies inequities for people who legitimately cannot get a vaccine. It’s good that California has not, at least so far, created any infrastructure to make it easy to turn vaccination status into a surveillance system that magnifies inequities. We do not object per se to another feature of California’s new Digital Vaccine Record: the display on one’s phone screen, in human-readable form, of the information on one’s paper vaccine card. Some people may find this a helpful way to store their vaccine card and present it to businesses. Unlike a QR code, such a digital record does not readily lend itself to the automated collection, retention, use, and sharing of our personal information.
As for fraud, laws already make it a crime to present a false vaccination record, but there is little accountability for our data. To better understand what California has done, and why we object to a digital personal health record being used for screening at all manner of places, we’ll first summarize the state’s new public health rules and then take a deep dive into the technology. What Did California Do? In mid-June, California announced a change to the state’s rules on masking in public places: businesses may now allow fully vaccinated people to forgo masks, but must continue to require unvaccinated people to wear them. To comply with these rules, businesses have three options: require all customers to wear a mask; rely on an honor system; or implement a vaccine verification system. Soon after, California rolled out its Digital Vaccine Record. This is intended to be a vaccine verification system that businesses may use to distinguish vaccinated from unvaccinated customers for purposes of masking. The Record builds on SMART Health Cards. California enables vaccinated people to obtain their digital Record through a web portal. The new Record displays two sets of information. First, it shows the same information as a paper vaccine card: name, date of birth, dates of vaccination, and vaccine manufacturer. Second, it has a QR code that makes the same facts readable by a QR scanner. According to Reuters, an unnamed nonprofit group will soon launch an app that businesses can use to scan these QR codes. So, What Does the Digital Vaccine Record QR Code Entail? EFF looked under the hood. We generated a QR code based on this walkthrough for SMART Health Cards. Others might also use the project’s developer portal to generate a QR code.
When we used a QR scanner on the QR code we generated, we revealed this blob of text: shc:/56762909524320603460292437404460312229595326546034602925407728043360287028647167452228092863336138625905562441275342672632614007524325773663400334404163424036744177447455265942526337643363675944416729410324605736010641293361123274243503696800275229652…[shortened for brevity] Okay, What Does That Mean? Start with the shc:/. That is the scheme for the SMART Health Cards framework, which is based on W3C Verifiable Credentials. The framework is an open standard for sharing claims about an individual’s health information, as issued by an institution such as a doctor’s office or a state immunization registry. What Are the Rest of Those Numbers? They are a JSON Web Signature (JWS), essentially a signed JSON Web Token. This is a form of transmittable content secured with digital signatures. A JWS has three parts: header, payload, and signature. Notably, this data is encoded, not encrypted. Encoding formats data so that it is easily transmitted in a common format. For example, the symbol “?” in ASCII encoding is the decimal value “63.” By itself, 63 just looks like a number, but if you knew it was an ASCII code, you could easily decode it back to a question mark. In this case, the JWS payload is minified (white space removed), compressed, base64url-encoded, and signed by a health authority according to the specification. Encrypted data, on the other hand, is unreadable except to someone who can decrypt it back into readable form. Since this record is meant to be read by anyone, it can’t be encrypted.
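The numeric encoding is easy to reverse. Here is a minimal sketch (our own illustration; the function name is ours) of the SMART Health Cards rule that each pair of digits encodes one character of the JWS as its ASCII code minus 45:

```python
# Decode an shc:/ numeric QR payload back into a JWS string.
# Per the SMART Health Cards spec, each two-digit group encodes
# one character as ord(char) - 45.
def shc_to_jws(shc: str) -> str:
    digits = shc[len("shc:/"):]
    pairs = (digits[i:i + 2] for i in range(0, len(digits), 2))
    return "".join(chr(int(pair) + 45) for pair in pairs)

# The opening digits of the blob above decode to the familiar
# base64url prefix of a JWS header.
print(shc_to_jws("shc:/5676290952432060346029243740"))  # eyJ6aXAiOiJERU
```

Running this over the full blob yields the dot-separated, three-part JWS discussed next.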
After decoding, you will get something that looks like this: [Split up with headers for readability] Full JWS (header.payload.signature) eyJ6aXAiOiJERUYiLCJhbGciOiJFUzI1NiIsImtpZCI6IlNjSkh2eEVHbWpGMjU4aXFzQlU0OUVlWUQwVzYwdGhWalRmNlphYVpJV0EifQ.3VJNj9MwEP0rq-HaJnEKt…[shortened for brevity] Header {"zip":"DEF","alg":"ES256","kid":"ScJHvxEGmjF258iqsBU49EeYD0W60thVjTf6ZaaZIWA"} Payload {"iss":"","nbf":1620992383.218,"vc":{"@context":[""],"type":["VerifiableCredential","","",""],"credentialSubject":{"fhirVersion":"4.0.1","fhirBundle":{"resourceType":"Bundle","type":"collection","entry":[{"fullUrl":"resource:0","resource":{"resourceType":"Patient","name":[{"family":"Anyperson","given":["John","B."]}],"birthDate":"1951-01-20"}},{"fullUrl":"resource:1","resource":{"resourceType":"Immunization","status":"completed","vaccineCode":{"coding":[{"system":"","code":"207"}]},"patient":{"reference":"resource:0"},"occurrenceDateTime":"2021-01-01","performer":[{"actor":{"display":"ABC General Hospital"}}],"lotNumber":"0000001"}},{"fullUrl":"resource:2","resource":{"resourceType":"Immunization","status":"completed","vaccineCode":{"coding":[{"system":"","code":"207"}]},"patient":{"reference":"resource:0"},"occurrenceDateTime":"2021-01-29","performer":[{"actor":{"display":"ABC General Hospital"}}],"lotNumber":"0000007"}}]}}}} In the payload displayed immediately above, you can now see the plaintext of the blob we originally saw upon scanning the QR code we generated. It includes immunization status, where the vaccination occurred, date of birth, when the vaccination occurred, and the lot number of the vaccine batch. Basically, this is all the information that would be on your paper CDC card. Can Someone Forge a QR-based Digital Vaccine Record? Anyone can “issue” a digital health card. You can create one with a little programming knowledge, as just explained. [Image: a QR code generated from the blobs of data above.]
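To make the split concrete, here is a minimal sketch (our own, not California’s or the SMART Health Cards project’s official tooling) that separates the three dot-delimited parts and decodes the header and the payload. It assumes, per the spec, that the payload is raw-DEFLATE compressed (the header’s "zip":"DEF") and base64url-encoded:

```python
import base64
import json
import zlib

def inspect_shc_jws(jws: str):
    """Split a SMART Health Card JWS into header, payload, and signature.

    The signature bytes are returned as-is; checking them requires the
    issuer's ES256 public key and a JOSE library, not attempted here.
    """
    header_b64, payload_b64, signature_b64 = jws.split(".")

    def b64url_decode(part: str) -> bytes:
        # base64url strings drop their "=" padding; restore it before decoding
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

    header = json.loads(b64url_decode(header_b64))
    # wbits=-15 selects raw DEFLATE (no zlib wrapper), matching "zip":"DEF"
    payload = json.loads(zlib.decompress(b64url_decode(payload_b64), wbits=-15))
    return header, payload, b64url_decode(signature_b64)
```

Because the payload is merely compressed and encoded, anyone holding the QR code can read everything in it; only the signature, not secrecy, protects the record.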
Suppose you lost your QR code but had the decoded information saved somewhere. If, for example, you had scanned the QR code with an SHC validator app, you could recreate another QR code from the decoded information. There are walk-throughs available that explain how to create and validate QR codes. California places some limits on the access and generation of QR codes in its new Digital Vaccine Record. For example, these QR codes must be tied to either the email address or the phone number of the individual who received the vaccine. Also, when a person requests a Record with a QR code, the California system generates a URL through which that person can access their Record, and that URL expires after 24 hours. California has not identified other security or anti-forgery features. The only cryptographic protection is the public health authority signing the record with its private key. The QR code itself is not encrypted; anyone who plans to use it should be aware of that. As to forgery risk, since anyone can make a QR code like the one discussed above, it is up to the operator of the QR scanner to check the public key of the signed data to make sure it comes from a valid public health authority. How Can This Hurt Us? Context Switching for Data Even though the Digital Vaccine Record’s QR code is a digital mirror of your CDC card (plus the authority’s signature), the companies that process your Record can change the context of protection and use. For example, CLEAR Health Pass allows you to record your health QR code in its app. With companies like CLEAR that plan to become our digital wallets, we have to consider the risks that come with storing our health credentials with others. You also run the risk that the scanned data will get stored, shared, and used in an unexpected or even nefarious way. For example, some bars scan IDs at the door to ensure patrons are 21, and also collect the information on the ID and share it with other bars.
If a scanner can quickly check a simple fact on a barcode or QR code (like years since birth or vaccination status), it can also store that fact, as well as all the other information embedded in the code (like name and date of birth) and surrounding data (like time and location). Just as a doorkeeper generally will not copy the information on your paper vaccination card, a doorkeeper should not copy the information on your digital vaccination card. Yet no California law currently imposes that limit on those scanning these health QR codes. It is also unclear what the “official” verifying app will do and what privacy safeguards it will have. Likewise, while California apparently intends to allow businesses to use these Records to require unvaccinated patrons to wear a mask, nothing stops businesses from also using these Records to deny admission to unvaccinated patrons. At that point, these Records would become digital vaccine bouncers, which EFF opposes. National Identification Footholds With no federal data privacy law, we must assume that when companies process our data, no matter how benign the information or purpose may seem, they will take it down the most exploitative road possible. The QR code in California’s Digital Vaccine Record is a digital identity platform carrying additional data, which can become part of the groundwork for a national ID system. EFF has long opposed such systems, which would store all manner of information about our activities in one central government repository. EFF raised this concern last year when opposing “vaccine passports.” We are now seeing these discussions occur in New York State with the Excelsior Pass, and in the U.K., where the company the government hired to help create a vaccine passport has suggested redeploying such infrastructure into a national identification system.
Bottom Line for Digital Vaccine Records California’s approach is more welcome than a state-sponsored proprietary vaccine passport, as in New York State. It’s comforting to know that if something happens to your paper card, you can access a digital copy. The open standard allows independent study of what is in that QR code, which helps ensure that users know the potential risks and scenarios that can arise with their health data. Still, we wish California had skipped the QR code. We also want more safeguards, similar to those in the current bill in the NY State Senate that protects COVID-19-related health data, along with any data processing expansion that is occurring due to this pandemic. Establishing data protections now, while we are in crisis, would help ensure privacy in future uses of such technologies, during healthier times and in any future health crisis.

  • [VISUAL] The Overlapping Infrastructure of Urban Surveillance, and How to Fix It
    by Matthew Guariglia on June 24, 2021 at 6:50 pm

Between the increasing capabilities of local and state police, the creep of federal law enforcement into domestic policing, the use of aerial surveillance such as spy planes and drones, and mounting cooperation between private technology companies and the government, it can be hard to understand and visualize what all this overlapping surveillance can mean for your daily life. We often think of these problems as siloed issues. Local police deploy automated license plate readers or acoustic gunshot detection. Federal authorities monitor you when you travel internationally. But if you could take a cross-section of the average city block, you would see the ways that the built environment of surveillance—its physical presence in, over, and under our cities—makes this an entwined problem that must be combatted through entwined solutions. Thus, we decided to create a graphic to show how—from overhead to underground—these technologies and legal authorities overlap, how they disproportionately impact the lives of marginalized communities, and the tools we have at our disposal to halt or mitigate their harms. You can download the entire set of Creative Commons licensed images for sharing here. Going from Top to Bottom: 1. Satellite Surveillance: Satellite photography has been a reality since the 1950s, and at any given moment there are over 5,000 satellites in orbit around the Earth—some of which have advanced photographic capabilities. While many are intended for scientific purposes, some satellites are used for reconnaissance by intelligence agencies and militaries. Some satellites may certainly identify a building or a car from its roof, but it’s unlikely that pictures taken from a satellite would ever be clear enough, or taken from the correct angle, to run through face recognition technology or an automated license plate reader.
Satellites can also enable surveillance by allowing governments to intercept or listen in on data transmitted internationally.  2. Internet Traffic Surveillance Government surveillance of internet traffic can happen in many ways. Through programs like PRISM and XKEYSCORE, the U.S. National Security Agency (NSA) can monitor emails as they move across the internet, browser and search history, and even keystrokes as they happen in real time. Much of this information can come directly from the internet and telecommunications companies that consumers use, through agreements between these companies and government agencies (like the one the NSA shares with AT&T) or through warrants or orders granted by a judge, including those who preside over the Foreign Intelligence Surveillance Court (FISC).  Internet surveillance isn't just the domain of the NSA and international intelligence organizations; local law enforcement is just as likely to approach big companies in an attempt to get information about how some people use the internet. In one 2020 case, police sent a search warrant to Google to see who had searched the address of an arson victim in order to identify a suspect. Using the IP addresses Google furnished of users who had conducted that search, police identified a suspect and arrested him for the arson.  How can we protect our internet use? FISA reform is one major step. Another part of the problem is transparency: in many instances it is hard to even know what is happening behind the veil of secrecy that shrouds the American surveillance system.  3. Cellular Communications (Tower) Surveillance Cell phone towers receive information from our cell phones almost constantly, such as the device's location, metadata like calls made and the duration of each call, and the content of unencrypted calls and text messages. This information, which is maintained by telecom companies, can be acquired by police and governments with a warrant. 
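    To make concrete why this tower metadata is so sensitive, here is a minimal sketch. All records, tower IDs, and the field layout are invented for illustration; real cell-site records are vastly denser. Even a handful of timestamped tower registrations is enough to infer where a phone's owner likely sleeps and works.

    ```python
    # Hypothetical sketch: inferring a pattern of life from cell tower metadata.
    # All registrations and tower IDs below are invented for illustration.
    from collections import Counter

    # (hour of day, tower the phone registered with)
    registrations = [
        (1, "tower_A"), (3, "tower_A"), (23, "tower_A"),    # overnight hours
        (10, "tower_B"), (11, "tower_B"), (15, "tower_B"),  # working hours
        (18, "tower_C"),                                    # an evening errand
    ]

    def most_common_tower(records, hours):
        """Return the tower seen most often during the given hours."""
        towers = [tower for hour, tower in records if hour in hours]
        return Counter(towers).most_common(1)[0][0]

    home = most_common_tower(registrations, hours={22, 23, 0, 1, 2, 3, 4, 5})
    work = most_common_tower(registrations, hours=set(range(9, 17)))
    print(f"likely home near {home}, likely workplace near {work}")
    ```

    Seven toy records already separate "home" from "work"; months of real registrations, retained by telecom companies, reveal far more.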
Using encrypted communication apps, such as Signal, or leaving your cell phone at home when attending political demonstrations are some ways to prevent this kind of surveillance.  4. Drones Police departments and other local public safety agencies have been acquiring and deploying drones at a rapid rate. This is in addition to federal drones used both overseas and at home for surveillance and offensive purposes. Whether at the border or in the largest U.S. cities, law enforcement agencies claim drones are an effective method for situational awareness or for use in situations too dangerous for an officer to approach. The ability of officers to use a drone to keep their distance was one of the major reasons police departments around the country justified purchasing drones as a method of fighting the COVID-19 pandemic. However, drones, like other pieces of surveillance equipment, are prone to "mission creep": police often deploy them in situations that far overreach their intended purpose and use. This is why drones operated by U.S. Customs and Border Protection, whose function is supposedly to monitor the U.S. border, were used to surveil protests against police violence in over 15 cities in the summer of 2020, many hundreds of miles from the border. It's not only drones in the skies above you spying on protests and people as they go about their daily lives. Spy planes, like those provided by the company Persistent Surveillance Systems, can be seen buzzing above cities in the United States. Some cities, however, like Baltimore and St. Louis, have recently pulled the plug on these invasive programs. Drones flying over your city could be at the behest of local police or federal agencies, but as of this moment, there are very few laws restricting when and where police can use drones or how they can acquire them. 
Community Control Over Police Surveillance, or CCOPS, ordinances are one way residents of a city can prevent their police from acquiring drones or restrict how and when police can use them. The Fourth Circuit Court of Appeals has also called warrantless use of aerial surveillance a violation of the Fourth Amendment. 5. Social Media Surveillance Federal, local, and state governments all conduct social media surveillance in a number of different ways—from sending police to infiltrate political or protest-organizing Facebook groups, to the mass collection and monitoring of hashtags or geolocated posts done by AI aggregators. There are few laws governing law enforcement use of social media monitoring. Legislation can curb mass surveillance of our public thoughts and interactions on social media by requiring police to have reasonable suspicion before conducting social media surveillance on individuals, groups, or hashtags. Also, police should be barred from using phony accounts to sneak into closed-access social media groups, absent a warrant. 6. Cameras  Surveillance cameras, either public or private, are ubiquitous in most cities. Although there is no definitive proof that surveillance cameras reduce crime, cities, business districts, and neighborhood associations continue to install more cameras and to equip those cameras with increasingly invasive capabilities. Face recognition technology (FRT), which more than a dozen cities across the United States have banned government agencies from using, is one such invasive technology. FRT can take any image—captured in real time or after the fact—and compare it to pre-existing databases that contain driver's license photos, mugshots, or CCTV camera footage. FRT has a long history of misidentifying people of color and trans* and nonbinary people, even leading to wrongful arrests and police harassment. 
Other developing technology, such as more advanced video analytics, can allow users to search footage accumulated from hundreds of cameras for things as specific as a "pink backpack" or "green hair." Networked surveillance cameras can harm communities by allowing police, or quasi-governmental entities like business improvement districts, to record how people live their lives, who they communicate with, what protests they attend, and what doctors or lawyers they visit. One way to lessen the harm surveillance cameras can cause in local neighborhoods is through CCOPS measures that can regulate their use. Communities can also band together to join the more than a dozen cities around the country that have banned government use of FRT and other biometrics. Take action TELL congress: END federal use of face surveillance 7. Surveillance of Cell Phones Cell phone surveillance can happen in a number of ways, based on text messages, call metadata, geolocation, and other information collected, stored, and disseminated by your cell phone every day. Government agencies at all levels, from local police to international intelligence agencies, have preferred methods of conducting surveillance on cell phones.  For instance, local and federal law enforcement have been known to deploy devices known as cell-site simulators or "stingrays," which mimic the cell phone towers your phone automatically connects to in order to harvest information from your phone, like identifying numbers, call metadata, the content of unencrypted text messages, and internet usage.  Several recent reports revealed that the U.S. government purchases commercially available data obtained from apps people have downloaded to their phones. One report identified the Department of Defense's purchase of sensitive user data, including location data, from a third-party data broker of information obtained through apps targeted at Muslims, including a prayer app with nearly 100 million downloads. 
Although the government would normally need a warrant to acquire this type of sensitive data, purchasing the data commercially allows it to evade constitutional constraints. One way to prevent this kind of surveillance from continuing would be to pass the Fourth Amendment Is Not For Sale Act, which would ban the government from purchasing personal data that would otherwise require a warrant. Indiscriminate and warrantless government use of stingrays is also currently being contested in several cities and states, and a group of U.S. Senators and Representatives has introduced legislation to ban their use without a warrant.  CCOPS ordinances have proven a useful way to prevent police from acquiring or using cell-site simulators.  8. Automated License Plate Readers Automated license plate readers (ALPRs) are high-speed, computer-controlled camera systems that are typically mounted on street poles, streetlights, highway overpasses, or mobile trailers, or attached to police squad cars. ALPRs automatically capture all license plate numbers that come into view, along with the location, date, and time of the scan. The data, which includes photographs of the vehicle and sometimes its driver and passengers, is then uploaded to a central server. Taken in the aggregate, ALPR data can paint an intimate portrait of a driver's life and even chill First Amendment-protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of worship. ALPRs can also be inaccurate. In Colorado, police recently pulled a Black family out of their car at gunpoint after an ALPR misidentified their vehicle as one that had been reported stolen. Too often, technologies like ALPRs and face recognition are used not as an investigative lead to be followed up and corroborated, but as something police rely on as a definitive accounting of who should be arrested. 
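    The Colorado misidentification illustrates a failure mode worth spelling out. In this hedged sketch (all plate numbers invented), a single optical character recognition error, such as confusing an "8" with the visually similar "B", is enough to match an innocent car against a hotlist when the raw scan is treated as ground truth.

    ```python
    # Hypothetical sketch: how one OCR misread can flag an innocent car.
    # All plate numbers below are invented for illustration.

    hotlist = {"CBX-1245"}  # plates reported stolen

    def alpr_read(true_plate: str) -> str:
        """Simulate a camera misreading '8' as the visually similar 'B'."""
        return true_plate.replace("8", "B")

    innocent_plate = "C8X-1245"          # the family's actual plate
    scanned = alpr_read(innocent_plate)  # camera reports "CBX-1245"

    # A naive system treats the raw scan as ground truth and alerts officers:
    if scanned in hotlist:
        print(f"ALERT: plate {scanned} flagged as stolen")  # fires on the wrong car
    ```

    This is why an ALPR hit should be an investigative lead to corroborate—by re-reading the plate, checking the vehicle's make and color—rather than grounds for an armed stop.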
Lawmakers should better regulate this technology by limiting "hotlists" to cars that have been confirmed stolen, rather than vehicles merely labeled as "suspicious," and by limiting the retention of ALPR scans of cars that are not hotlisted. 9. Acoustic Gunshot Detection Cities across the country are increasingly installing sophisticated listening devices, intended to detect the sound of gunshots, on street lights and the sides of buildings. Acoustic gunshot detection, like the technology sold by the popular company ShotSpotter, detects loud noises, triangulates where those noises came from, and sends the audio to a team of experts who are expected to determine whether the sound was a gunshot, fireworks, or some other noise. Recent reports have shown that the number of false reports generated by acoustic gunshot detection may be much higher than previously thought. This can create dangerous situations for pedestrians as police arrive at a scene armed and expecting to encounter someone with a gun. Even though it is aimed at picking up gunshots, this technology also captures human voices at least some of the time. In at least two criminal cases, People v. Johnson (CA) and Commonwealth v. Denison (MA), prosecutors sought to introduce as evidence audio of voices recorded by an acoustic gunshot detection system. In Johnson, the court allowed this. In Denison, the court did not, ruling that a recording of "oral communication" is prohibited "interception" under the Massachusetts Wiretap Act. While the accuracy of the technology remains a problem, one way to mitigate its harm is for activists and policymakers to work to ban police and prosecutors from using voice recordings collected by gunshot detection technology as evidence in court. 10. Internet-Connected Security Cameras Popular consumer surveillance cameras like Amazon's Ring doorbell camera are slowly becoming omnipresent surveillance networks across the country. 
Unlike traditional surveillance cameras, which may back up footage to a local drive in the user's possession, internet-connected security cameras store footage with the vendor rather than the user, making that footage more easily accessible to police. Often police can bypass users altogether by presenting a warrant directly to the company. These cameras are ubiquitous, but there are ways we can help blunt their impact on our society. Opting in to encrypting your Amazon Ring footage and opting out of seeing police footage requests on the Neighbors app are two ways to ensure police have to bring a warrant to you, rather than to Amazon, if they think your camera may have witnessed a crime. 11. Electronic Monitoring Electronic monitoring is a form of digital incarceration, often in the form of a wrist bracelet or ankle "shackle" that can monitor a subject's location, and sometimes their blood alcohol level. Monitors are commonly used as a condition of pretrial release or post-conviction supervision, like probation or parole. They are sometimes used as a mechanism for reducing jail and prison populations. Electronic monitoring has also been used to track juveniles, immigrants awaiting civil immigration proceedings, and adults in drug rehabilitation programs.  Not only does electronic monitoring impose excessive surveillance on people returning home from incarceration, but it also hinders their ability to successfully transition back into the community. Additionally, there is no concrete evidence that electronic monitoring reduces crime rates or recidivism. 12. Police GPS Tracking Recent court filings indicate that law enforcement believes warrantless use of GPS tracking devices at the border is fair game. EFF currently has a pending Freedom of Information Act lawsuit to uncover CBP's and U.S. Immigration and Customs Enforcement's (ICE) policies, procedures, and training materials on the use of GPS tracking devices. 13. International Internet Traffic Surveillance Running underground and under the oceans are thousands of miles of fiber optic cable that transmit online communications between countries. Originating as telegraph wires running under the ocean, these highways for international digital communication are now a hotbed of surveillance by state actors looking to monitor chatter abroad and at home. The Associated Press reported in 2005 that the U.S. Navy had sent submarines with technicians to help tap into the "backbone of the internet." These cables make landfall at coastal cities and towns called "landing areas," like Jacksonville, Florida and Myrtle Beach, South Carolina, and towns just outside of major cities like New York, Los Angeles, San Diego, Boston, and Miami. How do we stop the United States government from tapping into the internet's main arteries? Section 702 of the Foreign Intelligence Surveillance Act allows for the collection and use of digital communications of people abroad, but it often scoops up communications of U.S. persons when they talk to friends or family in other countries. EFF continues to fight Section 702 in the courts in hopes of securing the communications that travel through these essential cables. Take action TELL congress: END federal use of face surveillance

  • Now Is The Time: Tell Congress to Ban Federal Use of Face Recognition
    by Matthew Guariglia on June 24, 2021 at 6:48 pm

    Cities and states across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, New Orleans to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go, disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people's willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police. Now, Congress must do its part. We've created a campaign that will easily allow you to contact your elected federal officials and tell them to co-sponsor the Facial Recognition and Biometric Technology Moratorium Act. Take action TELL congress: END federal use of face surveillance Police and other government use of this technology cannot be responsibly regulated. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations, and even if the technology were 100% accurate.  Face surveillance also disproportionately hurts vulnerable communities. Last year, the New York Times published a long piece on the case of Robert Julian-Borchak Williams, who was arrested by Detroit police after face recognition technology wrongly identified him as a suspect in a theft case.  The ACLU filed a lawsuit on his behalf against the Detroit police.  The problem isn't just that studies have found face recognition disparately inaccurate when it comes to matching the faces of people of color. The larger concern is that law enforcement will use this invasive and dangerous technology, as it unfortunately uses all such tools, to disparately surveil people of color.  
Williams and two other Black men, Michael Oliver and Nijeer Parks, have garnered national media attention after face recognition technology led to their being falsely arrested by police. How many more have already endured the same injustices without the media's spotlight? These incidents show another reason why police cannot be trusted with this technology: a piece of software intended to identify investigative leads is often used in the field to determine who should be arrested, without independent vetting by officers. This federal ban on face surveillance would apply to increasingly powerful agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection. The bill would ensure that these agencies cannot use this invasive technology to track, identify, and misidentify millions of people. Tell your Senators and Representatives they must co-sponsor and pass the Facial Recognition and Biometric Technology Moratorium Act. It was recently introduced by Senators Edward J. Markey (D-Mass.), Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), Elizabeth Warren (D-Mass.), and Ron Wyden (D-Ore.), and by Representatives Pramila Jayapal (WA-07), Ayanna Pressley (MA-07), and Rashida Tlaib (MI-13). This important bill would be a critical step toward ensuring that mass surveillance systems don't use your face to track, identify, or harm you. The bill would ban the use of face surveillance by the federal government and withhold certain federal funds from local and state governments that use the technology.  That's why we're asking you to insist that your elected officials co-sponsor the Facial Recognition and Biometric Technology Moratorium Act, S.2052 in the Senate. Take action TELL congress: END federal use of face surveillance

  • How Big ISPs Are Trying to Burn California’s $7 Billion Broadband Fund
    by Ernesto Falcon on June 23, 2021 at 7:29 pm

    A month ago, Governor Newsom announced a plan to invest $7 billion of federal rescue funds and state surplus dollars, mostly in public broadband infrastructure meant to serve every Californian with affordable access ready for 21st century demands. In short, the proposal would empower the state government, local governments, cooperatives, non-profits, and local private entities to use the dollars to build universal 21st century access. With that level of money, the state could end the digital divide—if invested correctly. But, so far, industry opposition from AT&T and cable has successfully sidelined the money—as EFF warned earlier this month. Now, they're attempting to reshape a once-in-a-generation investment meant to eliminate the digital divide into wasteful spending and a massive subsidy that would go into the industry's hands. Before we break down the woefully insufficient industry alternative proposals that are circulating in Sacramento, it is important that we understand the nature of California's broadband problem today, and why Governor Newsom's proposal is a direct means of solving it. Industry's Already Shown Us How Profit-Driven Deployment Leaves People Behind This cannot be emphasized enough: major industry players are discriminating against communities that would be profitable to fully serve in the long term. Why? These huge companies have opted to expand their short-term profits through discriminatory choices against the poor. That's how California became the setting for a stark illustration of the digital divide in the pandemic: a picture of little girls doing homework in a fast food parking lot so they could access the internet. That was not in a rural market, where households are more spaced out. That was Salinas, California, a city with a population of 155,000+ people at a density of 6,490 people per square mile. 
There was no good reason why those little kids didn't have cheap, fast internet at home. We should disabuse ourselves of the notion that any industry subsidy will change how these companies approach the business of deploying broadband access. In the absence of meaningful digital redlining regulation, it is perfectly natural for industry to discriminate against low-income neighborhoods because of the pressure to deliver fast profits to investors. It is why dense, urban markets that would be profitable to serve, such as Oakland and Los Angeles, have a well-documented and thoroughly studied digital redlining problem. Research shows that it's mostly Black and brown neighborhoods that are skipped over for 21st century network investments. It is also the same reason why people in rural California suffer from systemic underinvestment in networks, which led to one of the largest private telecom bankruptcies in modern times—impacting millions of Californians. If the profit is not fast enough, they will not invest, and throwing more government money at these short-term-focused companies will never fix the problem. Big internet service providers have shown us again and again that they will not invest in areas that present an unattractive profit rate for their shareholders. On average, it takes about five years to fully deploy a network, and new networks were first deployed in these companies' favored areas well over a decade ago. No amount of one-time government capital expenditure will change their estimations of who is a suitable long-term payer for their private products and services. Their conclusions are hard-wired into Wall Street investors' expectations about which households will deliver the consistent profits needed to keep paying dividends. Their priorities will not change due to more money from the state. 
Even with more aggressive regulation to address profitable yet discriminated-against areas, private industry cannot address areas that will yield zero profit to serve. The only means of reaching 100% universal access with 21st century fiber broadband at affordable prices is to promote locally held alternatives and aggressively invest in public broadband infrastructure. Some rural communities can only be fully served by a local entity that can take on a 30-to-40-year debt investment strategy that is not subject to pressure from far-off investors to deliver profits. That is exactly how we got electricity out to rural communities. Broadband being an essential service, the expectation of consistent revenue from rural residents sustaining their own networks aligns well with making long-term bets—as envisioned by Governor Newsom's proposal to create a Loan Loss Revenue Reserve Account. This account would enable long-term, low-interest infrastructure financing. And, most importantly, it's only possible to deliver affordable access for low-income users in many places if we decouple the profit motive from the provisioning of this essential service. For proof of this, look no further than Chattanooga, Tennessee, where 17,700 households with low-income students will enjoy 10 years of free 100/100 Mbps fiber internet access at the zero-profit cost of $8.2 million.  If we want to make 21st century internet something everyone can access regardless of their socioeconomic status and location, we need to use all the options available to us. The private market has its role and importance. But truly reaching 100% access is not possible without a strong public model to cover those who are most difficult to reach. What Industry Is Actually Asking Sacramento To Do With Our Money The suggestions the cable industry and AT&T are making to Sacramento right now fail us twice over. They will not actually solve the problem our state faces. 
They will also set us down a path of perpetual industry subsidization and sabotage of the public model. These suggestions seem focused on blocking the state government from pushing middle-mile fiber deep into every community, which is a necessary precondition to ensuring a local private or public solution is financially feasible. Still, the mere existence of some connectivity in or near an area does not mean there is the capacity to deliver 21st century access. Solving that problem requires fiber. And it's the lack of accessible fiber (predominantly in rural areas) that prevents local solutions from taking root in many places, even those that are motivated. Industry has no solution to offer in these places, because it has always avoided investing in those areas. Let's start with the cable companies' specific suggestions. This industry has a very long history of opposing municipal fiber to preserve high-speed monopolies. And so their suggested change to Governor Newsom's plan comes as no surprise, because all it would do is jam all the funding into the existing California Advanced Services Fund (CASF), which they supported in 2017. CASF has utterly failed to make significant progress in eliminating the digital divide. EFF has detailed why California's broadband program is in desperate need of an update and has sponsored legislation to adopt a 21st century infrastructure standard in the face of industry opposition—which prevented needed changes to CASF at the height of the pandemic, with an assist from California's Assembly. There is no saving grace for the existing broadband infrastructure program. CASF has spent an obscene amount of public money on obsolete, slow connections that were worthless during the pandemic, due to legislative restrictions the industry sought. Its current rules also make large swathes of rural California ineligible for broadband investments, and it prioritizes private industry investments by blocking most local government bidders. 
It is no surprise cable suggests we spend another $7 billion on that failed experiment. Arguably the worst suggestion the cable industry makes about Governor Newsom's plan is to eliminate the long-term financing program that would help local governments access the bond market, and instead cram that money into the failed CASF program. Doing so would mean local communities would be barred from replacing 1990s-era connections with fiber, and it would continue to reward the industry's strategy of discriminating against low-income Californians and prioritizing the wealthy. It would effectively destroy the ability of local governments to finance community-wide upgrades, which is a core strategy of rural county governments left to deal with the wake of the Frontier Communications bankruptcy. By sabotaging the long-term financing program, cable ensures local governments have little chance of financing their own networks—and that is the entire point. If Sacramento wants to see everyone in rural California and underserved cities connected, then community networks must be community-wide, so that long-term financing of the entire network is affordable for everyone. Forcing the public sector to offset the discriminatory choices of industry only rewards that discrimination and makes these community solutions financially infeasible. AT&T, which has never lacked hubris when talking to Sacramento legislators, has gone so far as to say in a letter to the legislature that building out capacity to every community somehow prevents local last-mile solutions from taking root. That's a bogus argument. If you don't have capacity at an affordable rate provisioned to a community, there can never be a local solution. If that capacity were already available to rural communities today at a price point that enables local solutions, we would be seeing those solutions in rural communities today. 
So, unless AT&T is planning to show the state and local communities exactly where—and at what price—it is offering middle-mile fiber to rural communities, legislators should just ignore this obvious misdirection. What is also particularly frustrating to read in AT&T's letter is the argument that barely anyone in California needs infrastructure to engage in remote work, telehealth, and distance education. The letter goes so far as to say only 463,000 households need access. This just is not true. For starters, AT&T's estimate is premised on the assumption that an extremely slow 25/3 Mbps broadband connection is more than enough to use the internet today. That standard was established in 2015, long before the pandemic reshaped access needs. It is effectively useless today as a metric to assess infrastructure, because it obscures the extent to which the industry has under-invested in 21st century ready access. No one builds a network today to deliver just 25/3 Mbps. Doing so would be a gigantic waste of money. Anything new built today is built with fiber, without exception. The appropriate assessment of the state's communications infrastructure should boil down to one question: who has fiber? The reality, per state data, is that just to meet the Governor's minimum metric of 100 Mbps download, the number of households that need support rises by hundreds of thousands above AT&T's estimate. And if we want 21st century fiber-based infrastructure access throughout the state, as envisioned by President Biden and Governor Newsom's proposal, we have millions of homes to connect—something that can be done with a $7 billion investment. The choice for Sacramento should be easy. A $7 billion investment in high-capacity fiber infrastructure throughout the state will begin a 21st century access transition for all Californians who lack it today. 
Adopting AT&T's vision of narrowly funneling the funds to an extremely limited number of Californians, while shoveling the rest into its coffers as subsidies, will build nothing.

  • Standing With Security Researchers Against Misuse of the DMCA
    by Kurt Opsahl on June 23, 2021 at 4:04 pm

    Security research is vital to protecting the computers upon which we all depend, and protecting the people who have integrated electronic devices into their daily lives. To conduct security research, we need to protect the researchers, and allow them the tools to find and fix vulnerabilities. The Digital Millennium Copyright Act’s anti-circumvention provisions, Section 1201, can cast a shadow over security research, and unfortunately the progress we’ve made through the DMCA rule-making process has not been sufficient to remove this shadow. DMCA reform has long been part of EFF’s agenda, to protect security researchers and others from its often troublesome consequences. We’ve sued to overturn the onerous provisions of Section 1201 that violate the First Amendment, we’ve advocated for exemptions in every triennial rule-making process, and the Coders Rights Project helps advise security researchers about the legal risks they face in conducting and disclosing research. Today, we are honored to stand with a group of security companies and organizations that are showing their public support for good faith cybersecurity research, standing up against use of Section 1201 of the DMCA to suppress the software and tools necessary for that research. In the statement below, the signers have united to urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research, and to urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research. The statement in full: We the undersigned write to caution against use of Section 1201 of the Digital Millennium Copyright Act (DMCA) to suppress software and tools used for good faith cybersecurity research. Security and encryption researchers help build a safer future for all of us by identifying vulnerabilities in digital technologies and raising awareness so those vulnerabilities can be mitigated. 
Indeed, some of the most critical cybersecurity flaws of the last decade, like Heartbleed, Shellshock, and DROWN, have been discovered by independent security researchers. However, too many legitimate researchers face serious legal challenges that prevent or inhibit their work. One of these critical legal challenges comes from provisions of the DMCA that prohibit providing technologies, tools, or services to the public that circumvent technological protection measures (such as bypassing shared default credentials, weak encryption, etc.) to access copyrighted software without the permission of the software owner. 17 USC 1201(a)(2), (b). This creates a risk of private lawsuits and criminal penalties for independent organizations that provide technologies to researchers that can help strengthen software security and protect users. Security research on devices, which is vital to increasing the safety and security of people around the world, often requires these technologies to be effective. Good faith security researchers depend on these tools to test security flaws and vulnerabilities in software, not to infringe on copyright. While Sec. 1201(j) purports to provide an exemption for good faith security testing, including using technological means, the exemption is both too narrow and too vague. Most critically, 1201(j)’s accommodation for using, developing or sharing security testing tools is similarly confined; the tool must be for the “sole purpose” of security testing, and not otherwise violate the DMCA’s prohibition against providing circumvention tools. If security researchers must obtain permission from the software vendor to use third-party security tools, this significantly hinders the independence and ability of researchers to test the security of software without any conflict of interest. In addition, it would be unrealistic, burdensome, and risky to require each security researcher to create their own bespoke security testing technologies. 
We, the undersigned, believe that legal threats against the creation of tools that let people conduct security research actively harm our cybersecurity. DMCA Section 1201 should be used in such circumstances with great caution and in consideration of broader security concerns, not just for competitive economic advantage. We urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research. In addition, we urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research. Signed: Bishop Fox, Bitwatcher, Black Hills Information Security, Bugcrowd, Cybereason, Cybersecurity Coalition, Digital Ocean, disclose.io, Electronic Frontier Foundation, Grand Idea Studio, GRIMM, HackerOne, Hex-Rays, iFixIt, Luta Security, McAfee, NCC Group, NowSecure, Rapid7, Red Siege, SANS Technology Institute, SCYTHE, Social Exploits LLC

  • Supreme Court Upholds Process to Challenge Bad Patents
    by Alex Moss on June 23, 2021 at 7:00 am

    The Patent Office grants thousands of patents a year, including many that would be invalidated if a court considered them. These junk patents should never be issued in the first place, but fortunately there is a way to challenge them at the Patent Office rather than wasting the courts’ time and going through expensive litigation. Unsurprisingly, patent owners keep trying to convince the Supreme Court those post-grant challenges are unconstitutional. This week, they failed again. In United States v. Arthrex, the Supreme Court held that the administrative patent judges who preside over post-grant reviews were constitutionally appointed. That’s a relief: as we and Engine Advocacy explained in our amicus brief, the post-grant review system Congress created has helped drive down the cost and number of the patent infringement lawsuits that clog federal courts, raise consumer prices, and smother innovation. But the way the Supreme Court reached that outcome is a surprise: by holding the Director of the Patent Office formally accountable for each and every post-grant review decision. That may not make much of a practical difference, but symbolically, it is a crushing blow to the myth of the Patent Office as a place where technical expertise rather than political power rules. The Supreme Court’s willingness to say so could be empowering as long as we are willing to use political processes to hold the Patent Office accountable for the hugely consequential decisions it makes. In Arthrex, the future of the post-grant review system was at stake. As we’ve written many times, this system allows granted patents to be challenged for claiming things that were known or obvious, and therefore cannot qualify as patentable inventions. We need this system because members of the public get no chance to challenge the original decision to grant a patent. That decision is made by a patent examiner. 
    When an examiner rejects a patent, the applicant gets to request further examination and to appeal (either at the Patent Office or in federal district court). But when an examiner grants a patent, there’s no chance for the rest of us to request further examination or appeal before the patent goes into effect. Because the Patent Office gets far more patent applications than it can carefully examine, a huge number of granted patents are found invalid when they do go to court. Because litigating invalid patents is a huge waste of time, money, and resources for companies, courts, and consumers alike, Congress created the post-grant review system as part of the America Invents Act of 2011. The system lets any member of the public petition for review, and the Patent Office is supposed to grant review as long as the petition shows the patent is likely invalid. The America Invents Act gave the Director of the Patent Office a huge amount of power over the post-grant review process. And the Director appointed by former President Trump, Andrei Iancu, misused that power to make the process less effective, accessible, and fair. Patent owners still weren’t satisfied and challenged the whole process as unconstitutional. The Constitution includes limits and requirements for how the President may delegate authority to agencies and officers in the executive branch. Patent owners argued that the administrative patent judges who preside over post-grant reviews had so much power—specifically, the power to hold a granted patent invalid—that they could only be appointed by the President with the Senate’s consent. The Federal Circuit agreed, but opted not to nullify the post-grant review system. 
    Instead, it eliminated the civil service protections of administrative patent judges so that the Director of the Patent Office could fire them without cause, and thus, effectively supervise their decisions. Since the Director is appointed by the President with the Senate’s consent, the Federal Circuit held that this would cure the constitutional defect. The Supreme Court agreed that administrative patent judges needed more supervision, but instead of taking away their civil service protections, gave the Director of the Patent Office the formal power to review their post-grant review decisions, and sent the post-grant review decision in Arthrex back to the Acting Director to decide whether to exercise that power or not. It is hard to see any reason why the Acting Director would bother to review, let alone revise, the decision to invalidate the patent in Arthrex. Nevertheless, he now has the power to do so as well as the discretion to decide whether to invoke it. The Supreme Court’s reasoning and remedy were far from unanimous. Justice Gorsuch concurred with much of the reasoning, but dissented from the result: he would have found administrative patent judges unconstitutionally appointed, nullified their appointments, and put an end to post-grant review once and for all. Justice Thomas, joined by the three liberal justices, would simply have upheld the appointment of administrative patent judges, and with it, the post-grant review system, no judicial intervention required. That reasoning—and lack of remedy—echoes the arguments EFF and Engine made in the amicus brief we submitted with the help of students and faculty at USC’s IP clinic. Even though the majority took an entirely different approach, we welcome the emphasis they put, repeatedly, on the need for accountability at the Patent Office. 
The government had argued that administrative patent judges were constitutionally appointed because the Director effectively, if informally, supervised their decisions thanks to the bundle of powers he possessed—including the power to assign individual judges (including himself) to particular reviews, set the rules for their conduct, and fire them for cause. But the majority held that the Director’s enormous informal power was the problem, and that the solution was to formalize it, not nullify or diminish the position of administrative patent judges even further. We don’t agree with the legal reasoning, but we wholeheartedly agree with the practical analysis: the Director of the Patent Office needs to be accountable through the political process for the enormous power that comes with the position. As we wait for the administration to appoint the next Director, the decision is an important and timely reminder of how much that decision matters. We must use the political process to make sure the next Director wields the power that comes with the position to improve the Patent Office’s ability to do its job: promoting the creation and dissemination of new technologies in ways that benefit the public as a whole—including our economy, health, and ability to communicate with each other. 

  • A Long Overdue Reckoning For Online Proctoring Companies May Finally Be Here
    by Jason Kelley on June 22, 2021 at 11:11 pm

    EFF Legal Intern Haley Amster contributed to this post. Update: An earlier version of this post said that ExamSoft has had a security breach. For clarity: security breaches have only been alleged by users, and ProctorU, a partner of ExamSoft, has had a breach. Over the past year, the use of online proctoring apps has skyrocketed. But while companies have seen upwards of a 500% increase in their usage, legitimate concerns about their invasiveness, potential bias, and efficacy are also on the rise. These concerns even led to a U.S. Senate inquiry letter requesting detailed information from three of the top proctoring companies—Proctorio, ProctorU, and ExamSoft—which combined have proctored at least 30 million tests over the course of the pandemic.1 Unfortunately, the companies mostly dismissed the senators’ concerns, in some cases stretching the truth about how the proctoring apps work, and in other cases downplaying the damage this software inflicts on vulnerable students. In one instance, though, these criticisms seem to have been effective: ProctorU announced in May that it will no longer sell fully-automated proctoring services. This is a good step toward eliminating some of the issues that have concerned EFF about ProctorU and other proctoring apps. The artificial intelligence used by these tools to detect academic dishonesty has been roundly attacked for its bias and accessibility impacts, and there is clear evidence that it leads to significant false positives, particularly for vulnerable students. While this is not a complete solution to the problems that online proctoring creates—the surveillance is, after all, the product—we hope other online proctoring companies will also seriously consider the danger that these automated systems present. The AI Shell Game. This reckoning has been a long time coming. For years, online proctoring companies have played fast and loose when talking about their ability to “automatically” detect cheating. 
    On the one hand, they’ve advertised their ability to “flag cheating” with artificial intelligence: ProctorU has claimed to offer “fully automated online proctoring”; Proctorio has touted the automated “suspicion ratings” it assigns test takers; and ExamSoft has claimed to use “Advanced A.I. software” to “detect abnormal student behavior that may signal academic dishonesty.” On the other hand, they’ve all been quick to downplay their use of automation, claiming that they don’t make any final decisions—educators do—and pointing out that their more expensive options include live proctors during exams or video review by a company employee afterward, if you really want top-tier service. Nowhere was this doublespeak more apparent than in their recent responses to the Senate inquiry. ProctorU “primarily uses human proctoring – live, trained proctors – to assist test-takers throughout a test and monitor the test environment,” the company claimed. Despite this, it has offered an array of automated features for years, such as their entry-level “Record+” which (until now) didn’t rely on human proctors. Proctorio’s “most popular product offering, Automated Proctoring…records raw evidence of potentially-suspicious activity that may indicate breaches in exam integrity.” But don’t worry: “exam administrators have the ability and obligation to independently analyze the data and determine whether an exam integrity violation has occurred and whether or how to respond to it. 
Our software does not make inaccurate determinations about violations of exam integrity because our software does not make any determinations about breaches of exam integrity.” According to Proctorio’s FAQ, “Proctorio’s software does not perform any type of algorithmic decision making, such as determining if a breach of exam integrity has occurred. All decisions regarding exam integrity are left up to the exam administrator or institution” [emphasis Proctorio’s].  But this blame-shifting has always rung false. Companies can’t both advertise the efficacy of their cheating-detection tools when it suits them, and dodge critics by claiming that the schools are to blame for any problems.  And now, we’ve got receipts: in a telling statistic released by ProctorU in its announcement of the end of its AI-only service, “research by the company has found that only about 10 percent of faculty members review the video” for students who are flagged by the automated tools. (A separate University of Iowa audit they mention found similar results—only 14 percent of faculty members were analyzing the results they received from Proctorio.) This is critical data for understanding why the blame-shifting argument must be seen for what it is: nonsense. “[I]t’s unreasonable and unfair if faculty members” are punishing students based on the automated results without also looking at the videos, says a ProctorU spokesperson—but that’s clearly what has been happening, perhaps the majority of the time, resulting in students being punished based on entirely false, automated allegations. This is just one of the many reasons why proctoring companies must admit that their products are flawed, and schools must offer students due process and routes for appeal when these tools flag them, regardless of what software is used to make the allegations. 
    We are glad to see that ProctorU is ending AI-only proctoring, but it’s disappointing that it took years of offering an automated service—and causing massive distress to students—before doing so. We’ve also yet to see how ProctorU will limit the other harms that the tools cause, from facial recognition bias to data privacy leaks. But this is a good—and important—way for ProctorU to walk the talk after it admitted to the Senate that “humans are simply better than machines alone at identifying intentional misconduct.” Human Review Leaves Unanswered Questions. Human proctoring isn’t perfect either. It has been criticized for its invasiveness, and for creating an uncomfortable power dynamic where students are surveilled by a stranger in their own homes. And simply requiring human review doesn’t mean students won’t be falsely accused: ExamSoft told the Senate that it relies primarily on human proctors, claiming that video is “reviewed by the proctoring partner’s virtual proctors—trained human invigilators [exam reviewers]—who also flag anomalies,” and that “discrepancies in the findings are reviewed by a second human reviewer,” after which a report is provided to the institution for “final review and determination.” But that’s the same ExamSoft that proctored the California Bar Exam, in which over one-third of examinees were flagged (over 3,000). After further review, 98% of those flagged were cleared of misconduct, and only 47 test-takers were implicated. Why, if ExamSoft’s human reviewers carefully examined each potential flag, do the results in this case indicate that nearly all of their flags were still false? If the California Bar hadn’t carefully reviewed these allegations, the already-troubling situation, which included significant technical issues such as crashes and problems logging into the site, last-minute updates to instructions, and lengthy tech support wait times, would have been much worse. 
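    The arithmetic behind those Bar Exam numbers is worth making explicit. Taking the reported figures at face value (the rounded "over 3,000 flagged" and "47 implicated" counts cited above), only a tiny fraction of ExamSoft's flags survived human review:

```python
# Back-of-the-envelope check of the California Bar Exam figures cited above.
flagged = 3000      # "over 3,000" examinees flagged by ExamSoft's software
implicated = 47     # test-takers ultimately implicated after human review

precision = implicated / flagged   # fraction of flags that held up
cleared = 1 - precision            # fraction dismissed as false alarms

print(f"flags that held up: {precision:.1%}")   # about 1.6%
print(f"flags dismissed:    {cleared:.0%}")     # about 98%, matching the reports
```

    In other words, roughly 98 out of every 100 accusations the software produced were wrong, which is why human review of every flag matters so much.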
(Last month, a state auditor’s report revealed that the California State Bar violated state policy when it awarded ExamSoft a new five-year, $4 million contract without evaluating whether it would receive the best value for the money. One has to wonder what, exactly, ExamSoft is offering that’s worth $4 million given this high false-positive rate.)  Unfortunately, additional human review may simply result in teachers and administrators ignoring even more potential false flags, as they further trust the companies to make the decisions for them. We must carefully scrutinize the danger to students whenever schools outsource academic responsibilities to third-party tools, algorithmic or otherwise.  It’s well past time for online proctoring companies to be honest with their users. Each company should release statistics on how many videos are reviewed by humans, at schools or in-house, as well as how many flags are dismissed in each portion of review. This aggregate data would be a first step to understanding the impact of these tools. And the Senate and the Federal Trade Commission should follow up on the claims these companies made in their responses to the senators’ inquiry, which are full of weasel words, misleading descriptions, and other inconsistencies. We’ve outlined our concerns per company below.  ExamSoft ExamSoft claimed in its response to the Senate that it doesn’t monitor students’ physical environments. But it does keep a recording of your webcam (audio and visual) the entire time you’re being proctored. This recording, with integrated artificial intelligence software, detects, among other things, “student activity” and “background noise.” That sure sounds like environmental monitoring to us.  ExamSoft omitted from its Senate letter that there have been alleged data security issues. The company’s partner, ProctorU, had a data breach. 
    ExamSoft continues to use automated flagging, and conspicuously did not mention disabilities that would lead students to be flagged for cheating, such as stimming. This has already caused a lot of issues for exam-takers with diabetes who have had restrictions on their food availability and insulin use, and have been basically told that a behavior flag is unavoidable. The company also claimed that their facial recognition system still allows an exam-taker to proceed with examinations even when there is an issue with identity verification—but users report significant issues with the system recognizing them, causing delays and other problems with their exams. ProctorU. ProctorU claimed in its response to the Senate that it “prioritizes providing unbiased services,” and its “experienced and trained proctors can distinguish between behavior related to ‘disabilities, muscle conditions, or other traits’” compared with “unusual behavior that may be an attempt to circumvent test rules.” The company does not explain the training proctors receive to make these determinations, or how users can ensure that they are treated fairly when they have concerns about accommodations. ProctorU also claims to have received fewer than fifteen complaints related to issues with their facial recognition technology, and claims that it has found no evidence of bias in the facial comparison process it uses to authenticate test-taker identity. This is, to put it mildly, very unlikely. ProctorU is currently being sued for violating the Illinois Biometric Information Privacy Act (BIPA), after a data breach affected nearly 500,000 users. The company failed to mention this breach in its response, and while it claims its video files are only kept for up to two years, the lawsuit contends that biometric data from the breach dated back to 2012. There is simply no reason to hold onto biometric data for two years, let alone for eight. 
    Proctorio. Aware of face recognition’s well-documented bias, Proctorio has gone out of its way to claim that it doesn’t use it. While this is good news for privacy, it doesn’t negate concerns about bias. The company still uses automation to determine whether a face is in view during exams—what it calls facial detection—which may not compare an exam taker to previous pictures for identification, but still requires, obviously, the ability for the software to match a face in view to an algorithmic model for what a face looks like at various angles. A software researcher has shown that the facial detection model that the company is using “fails to recognize Black faces more than 50 percent of the time.” Separately, Proctorio is facing a lawsuit for misusing the Digital Millennium Copyright Act (DMCA) to force down posts by another security researcher who used snippets of the software’s code in critical commentary online. The company must be more open to criticisms of its automation, and more transparent about its flaws. In its response to the Senate, the company claimed that it has “not verified a single instance in which test monitoring was less accurate for a student based on any religious dress, like headscarves they may be wearing, skin tone, gender, hairstyle, or other physical characteristics.” Tell that to the schools that have canceled their contracts due to bias and accessibility issues. Lastly, Proctorio continues to promote their automated flagging tools, while dismissing complaints of false positives by shifting the blame over to schools. As with other online proctoring companies, Proctorio should release statistics on how many videos are reviewed by humans, at schools or in-house, as well as how many flags are dismissed as a result. 1. ProctorU claims to have proctored 6,280,986 exams during the pandemic; Proctorio reports 20,000,000; ExamSoft reports over 75 million tests proctored total in June 2021, compared to 61 million in October 2020.

  • Understanding Amazon Sidewalk
    by Jon Callas on June 22, 2021 at 11:10 pm

    Just before the long weekend at the end of May, Amazon announced the release of their Sidewalk mesh network. There are many misconceptions about what it is and what it does, so this article will untangle some of the confusion. It Isn’t Internet Sharing. Much of the press about Amazon Sidewalk has said that it will force you to share your internet or WiFi network. It won’t. It’s a network to connect home automation devices like smart light switches together in more flexible ways. Amazon is opening the network up to partners, the first of which is the Tile tracker. Sidewalk can use the internet for some features, but won’t in general. If it does, Amazon is limiting its rate to 80 kilobits per second — or 10 kilobytes per second, only about 40% faster than the 56 kbps dial-up modems we used in the old days. It is also capped at 500 MB per month, which works out to only about 14 hours of use at the full 80 kbps rate over the whole month. To be clear: it isn’t going to interfere with your streaming, video calls, or anything else. The average web page is over two megabytes in size, which would take over three minutes to download at that speed. What is Sidewalk, Then? Sidewalk is primarily a mesh network for home automation devices, like Alexa’s smart device features, Google Home, and Apple HomeKit. This mesh network can provide coverage where your home network is flaky. To build the ecosystem, people incorporate their devices into this mesh network. The first partner company to integrate with Sidewalk is Tile, with its tracker tags. Sidewalk allows you to use a Tile tag at a distance further than typical Bluetooth range. Sidewalk uses Bluetooth, WiFi, and 900 MHz radio to connect the mesh network together. There will be other partner companies; an important thing to understand about the Amazon Sidewalk mesh is that it’s not just Amazon. Other companies will make devices that operate as entities in the network, either as a device like a smart light switch, or as a hub like the Echo and Ring devices. 
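    Working Amazon's stated limits through the arithmetic makes the scale concrete (80 kilobits per second is 10 kilobytes per second; the 2 MB page size is the rough average mentioned above):

```python
# Sanity-check Sidewalk's stated bandwidth limits.
RATE_KBPS = 80                                    # Amazon's stated rate cap, kilobits/s
rate_kB_s = RATE_KBPS / 8                         # = 10 kilobytes per second

CAP_MB = 500                                      # Amazon's stated monthly data cap
cap_hours = (CAP_MB * 1000 / rate_kB_s) / 3600    # ~13.9 hours of continuous use

PAGE_MB = 2                                       # rough average web page size
page_minutes = (PAGE_MB * 1000 / rate_kB_s) / 60  # ~3.3 minutes per page

print(rate_kB_s, round(cap_hours, 1), round(page_minutes, 1))  # 10.0 13.9 3.3
```

    At that rate, Sidewalk traffic is negligible next to streaming or video calls, which is the point: the caps are sized for tiny, infrequent control messages, not general internet use.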
    What is a Mesh Network, Anyway? Suppose you want to send a birthday card to Alice. I live next door to you, and you know I work with Alice. Rather than sending the card through the postal system, you might give me the card to take to Alice. When I get to work, I run into Bob, who sits next to Alice, so I give the card to Bob, who gives it to Alice. That’s a mesh network. A web of people delivers the message in an ad hoc manner, and saves you postage. Notably, mesh networks work without explicit infrastructure or servers. How does Amazon Sidewalk Use a Mesh? Suppose you put an Alexa-controlled light in your bedroom, but the WiFi there is flaky. If you use Alexa to turn the light on or off, sometimes the command doesn’t get through. Let’s also suppose that in that bedroom, the WiFi from your neighbor’s house is stronger than your WiFi. Well, what if when your WiFi doesn’t process your command, your Alexa uses your neighbor’s WiFi instead? That’s what Amazon Sidewalk does, with a very simple mesh, from your Alexa to your neighbor’s WiFi to your light. Let’s expand on that example. Suppose that you’re out on a walk in your neighborhood and realize you didn’t turn your lamp off. You press a button on your smartphone to turn the lamp off. Your phone passes that message to a nearby house, perhaps the one across the street, which hands that message to another house, and it ends up at your lamp, in much the same way as your birthday card made its way to Alice. In some situations, Sidewalk won’t be able to route the message via the mesh. Instead, it has to send the message to the internet, and then back from the internet to the mesh network near the destination. The Sidewalk documents we have seen do not have details of the mesh routing algorithms, such as how messages are routed via mesh and when or why they go into or out of the internet. So we don’t know how that works. 
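    The hop-by-hop idea is easy to sketch in code. The sketch below is purely illustrative (Amazon has not published Sidewalk's routing algorithm, and the node names are invented to mirror the birthday-card analogy): a message travels neighbor to neighbor until it reaches its destination, with no central server involved.

```python
from collections import deque

# Invented topology mirroring the birthday-card story: who can "hear" whom.
neighbors = {
    "you": ["me"],
    "me": ["you", "bob"],
    "bob": ["me", "alice"],
    "alice": ["bob"],
}

def mesh_route(src, dst):
    """Breadth-first search for a hop-by-hop path through the mesh."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path            # delivered neighbor-to-neighbor
        for nxt in neighbors.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                    # no mesh path: fall back to the internet

print(mesh_route("you", "alice"))  # ['you', 'me', 'bob', 'alice']
```

    The `None` branch corresponds to the fallback described above: when no chain of nearby devices reaches the destination, the message has to detour through the internet instead.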
    We do know that when Sidewalk tries to send messages without involving the internet, messages are expected to be small, and relatively infrequent, because the bandwidth throttle and total data caps are someone’s “nobody should need anywhere close to this” limits. We don’t know how hard it tries, nor how often it succeeds. How Is Sidewalk’s Privacy and Security? Amazon describes the privacy and security of Sidewalk in a privacy and security whitepaper. Amazon also has an overview, a blog post about their goals, an IoT Integration site, and developer documentation for the SDK. While it does not describe the details of the Sidewalk protocols, its description of the cryptographic security and privacy measures is promising. So is the sketch of the routing. It appears to have some good security and privacy protections. Of course, the proof is in the details and ultimate implementation. Amazon has a reasonable track record of designing, building, and updating security and privacy in AWS and related technologies. It’s in their interest to severely limit what participants in the mesh network learn about other participants, and thus whatever leaks researchers find are likely to be bugs. What’s the Bad News? We have a number of concerns about Sidewalk. Amazon botched the announcement. Most of the articles about Sidewalk focused on the network sharing, without explaining that this is a community mesh network of home automation and related technologies. Even more recent articles, which at least have stopped talking about internet sharing, are instead talking about wireless (WiFi) sharing. It’s been difficult to understand what Sidewalk is and is not. At the end of our investigation, we don’t know that we’ve gotten it right, either. Amazon needs to do a much better job telling us what their new systems do. To be fair, this is hard! Mesh networking is not widely used for wireless communications because the technology is difficult to implement. 
    Nonetheless, this is all the more reason for Amazon to spend more time describing what Sidewalk is. There are many missing details. Amazon has published some good overviews, white papers, and even some API descriptions, yet there is much that we still don’t know about Sidewalk. For example, we don’t know the details of the security and privacy measures. Likewise, we don’t know what the mesh routing algorithms are. Thus, there’s no independent analysis of Sidewalk. Moreover, while we like the sketch of Sidewalk’s security, there will be inevitable transfers of information to Amazon, such as IDs of devices on the new network. We don’t know if there are other information transfers to participating devices, or things Amazon can infer. It’s a V1 system, so it’s going to have bugs. Even though the initial description of privacy and security shows that care went into designing Sidewalk, it’s a version-one system. So there will be bugs in the protocol and the software. There also will be bugs yet to be written in Sidewalk-compatible devices and software made by Amazon and its partners. Being an early adopter of any new technology has the benefit of being early, as well as the risks of being early. No abuse mitigations. While Sidewalk has been designed for security and privacy, it has not been designed to mitigate abuse. This is a glaring hole. Amazon’s whitepaper for Sidewalk describes a use case of a lost pet. The first Sidewalk partner is the Tile tracker. While we all empathize with someone whose pet is missing, and we’ve all wondered where we left our keys, any system that allows one to track a pet allows one to be a stalker. So Sidewalk creates new opportunities for people to stalk family members, former romantic partners, friends, neighbors, co-workers, and others. Just drop a tracker in their handbag or car, and you can track them. This has been our main criticism of Sidewalk, and to be fair, Tile says they are working on solutions. 
    This has also been our criticism of Apple’s AirTags. Sidewalk amplifies the existing risk of a surreptitious tracker by giving it the extended reach of every Echo or Ring camera that’s participating in the Sidewalk network. If Sidewalk systems don’t have proper controls on them, then estranged spouses, ex-roommates, and nosy neighbors can use them to spy from anywhere in the world. We also are concerned about how Amazon might connect its new Sidewalk technology to one of its most controversial products: Ring home doorbell surveillance cameras. For example, if Ring cameras are tied together through Sidewalk technology, they can form neighborhood-wide video surveillance systems. While Amazon’s whitepapers indicate that the security and privacy are pretty good, Amazon is silent on these kinds of abuse scenarios. Indeed, their pet use case is a proxy for abuse. We are concerned that we don’t know what we don’t know about the overall ecosystem. Opt-out rather than opt-in. Perhaps the most important principle in respectful design is user consent. People must be free to autonomously choose whether or not to use a technology, and whether or not another entity may process their personal information. Opt-in systems have far lower participation than opt-out systems, because most people either are not aware of the system and its settings, or don’t take the time to change the settings. Thus, defaults matter. By making Sidewalk opt-out instead of opt-in, Amazon is ginning up a wider reach of its network, at the cost of genuine user control of their own technologies. In Sidewalk’s case, there might be a relatively low infosec cost to a person being pushed into the system until they opt out. The major risk is the effect of bugs in the system. It’s low risk, but not no risk. If Amazon had made its new system opt-in, we might not be writing about it at all. It would have traded slower growth for fewer complaints. How Do I Turn Sidewalk Off? 
If you’ve decided after reading this that you don’t want to use Sidewalk, it’s easy to turn off. Amazon has a page with instructions on how to turn Sidewalk off. If you do not use Alexa, Echo, or Ring, you won’t be using Sidewalk at all, so you don’t have to worry about turning it off.

Lack of Abuse Mitigations and Opt-Out by Design Are Sidewalk’s Biggest Flaws

Amazon’s Sidewalk system is a mesh network that uses the company’s Echo devices and Ring cameras to improve the reach and reliability of its home automation systems and partner systems like Tile’s tracker. It is not an internet-sharing system, as some have reported. Its design appears to be privacy-friendly and to have good security, but it is a brand-new system, so there will be bugs in it. The major problem is the lack of mitigations to stop people from using it in abusive ways, such as tracking another person. It is also troubling that Amazon foisted the system on its users, placing on them the burden of opting out, rather than respecting its users’ autonomy and giving them the opportunity to opt in.
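Sidewalk’s core idea, as described above, is a mesh network: a low-bandwidth device out of range of its owner’s bridge can still reach the internet through a neighbor’s participating Echo or Ring device. Amazon has not published its actual routing algorithms, so the following is only a toy sketch of the generic mesh concept, a hop-limited breadth-first search; the device names and hop limit are hypothetical:

```python
from collections import deque

def reachable_via_mesh(links, start, gateways, max_hops):
    """Toy illustration: can `start` reach any internet-connected
    gateway within `max_hops` radio hops of the mesh? This is generic
    BFS, not Amazon's (undocumented) Sidewalk routing."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if node in gateways:
            return True
        if hops == max_hops:
            continue  # hop budget exhausted on this branch
        for neighbor in links.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return False

# A tracker out of range of its owner's bridge still reaches the
# internet through a neighbor's participating devices.
links = {
    "tracker": ["neighbor_echo"],
    "neighbor_echo": ["neighbor_ring"],
}
print(reachable_via_mesh(links, "tracker", {"neighbor_ring"}, max_hops=3))  # prints True
```

The same property that makes the mesh useful is the abuse concern raised above: a surreptitious tracker’s reach grows with every participating bridge near its victim.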

  • Congress Wants to Put the Brakes on Runaway Acquisitions by Big Tech
    by Katharine Trendacosta on June 22, 2021 at 9:40 pm

The Judiciary Committee of the U.S. House of Representatives recently released a comprehensive series of bills designed to curb the excesses of Big Tech. One of them, the Platform Competition and Opportunity Act, addresses one of the biggest, most obvious problems with the largest tech companies: they use their deep pockets to buy up services and companies that might one day have competed with them. We’ve said before that increased scrutiny of mergers and acquisitions is the first step in addressing the lack of competition for Big Tech. Restraining internet giants’ power to squash new competitors can help new services and platforms arise, including ones that are not based on a surveillance business model. It would also encourage those giants to innovate and offer better services, rather than relying on being the only game in town. Big Tech’s acquisitiveness is well known and has been on the rise. Analysis of Apple’s finances, for example, revealed that over the last six years, the company bought a new company every three to four weeks. Not only do these acquisitions keep startups from ever competing with incumbent powers, they also bring more data under the control of companies that already have too much information about us. This is especially true when one of the draws of a startup’s service was that it provided an alternative to Big Tech’s offering, as we saw when Google bought Fitbit. The acquisition practices of the largest tech firms have distorted the marketplace. Mergers and acquisitions are now seen as a primary driver in securing initial investment to launch a startup. In other words, how attractive your company is as a Big Tech acquisition target is now arguably the primary reason a startup gets funded.
This makes sense: the venture capital firms that fund startups are ultimately interested in making money, and if the main source of profit in the technology sector is acquisition by Big Tech rather than competition with it, the investment dollars will flow that way. The Platform Competition and Opportunity Act requires platforms of a certain size—or those owned by people or companies of a certain size—to prove that each proposed acquisition isn’t anticompetitive. In today’s marketplace, that means Apple, Google, Facebook, Amazon, and Microsoft. These companies would have to show that they’re not trying to buy a service that competes with a similar feature of their platforms. In other words, Facebook, home to Facebook Messenger, would not have been allowed to buy WhatsApp under this law. Platforms of this size would also be prevented from buying a service that is either a competitor or in the process of growing into one. In other words, Facebook’s acquisition of Instagram would have drawn more scrutiny under this framework. Stricter rules for mergers and acquisitions are a common-sense way to keep the big players from growing even bigger. The tech marketplace is top-heavy and concentrated, and the Platform Competition and Opportunity Act would prevent further imbalance in the marketplace.

  • The New ACCESS Act Is a Good Start. Here’s How to Make Sure It Delivers.
    by Bennett Cyphers on June 21, 2021 at 9:50 pm

The ACCESS Act is one of the most exciting pieces of federal tech legislation this session. Today’s tech giants grew by taking advantage of the openness of the early internet, but have designed their own platforms to be increasingly inhospitable to both user freedom and competition. The ACCESS Act would force these platforms to start to open up, breaking down the high walls they use to lock users in and keep competitors down. It would advance the goals of competition and interoperability, making the internet a more diverse, more user-friendly place to be. We’ve praised the ACCESS Act as “a step towards a more interoperable future.” However, the bill currently before Congress is just a first step, and it’s far from perfect. While we strongly agree with the authors’ intent, some important changes would make sure that the ACCESS Act delivers on its promise.

Strong Consent and Purpose Limitation Requirements

One of the biggest concerns among proponents of interoperability is that a poorly thought-out mandate could end up harming privacy. Interoperability implies more data sharing, and this, skeptics argue, increases the risk of large-scale abuse. We addressed this supposed paradox head-on in a recent whitepaper, where we explained that interoperability can enhance privacy by giving users more choice and making it easier to switch away from services that are built on surveillance. Requiring large platforms to share more data does create very real risks. To mitigate those risks, new rules for interoperability must be grounded in two principles: user consent and data minimization. First, users should have absolute control over whether or not to share their data: they should be able to decide when to start sharing, and to rescind that permission at any time.
Second, the law must ensure that data shared between companies to enable interoperability—which may include extremely sensitive data, like private messages—is not used for secondary, unexpected purposes. Relatedly, the law must make sure that “interoperability” is not used as a blanket excuse to share data that users wouldn’t otherwise approve of. The ACCESS Act already has consent requirements for some kinds of data sharing, and it includes a “non-commercialization” clause that prevents both platforms and their competitors from using data for purposes not directly related to interoperability. These are a good start. However, the authors should amend the bill to make clear that every kind of data sharing is subject to user consent, that users can withdraw that consent at any time, and that “interoperability” covers only the things users actually want. Which brings us to our next suggestion…

Define “Interoperability”

The law should say what interoperability is, and what it isn’t. In the original, Senate-introduced version of the bill from 2019, large platforms were required to support “interoperable communications with a user of a competing communications provider.” This rather narrow definition would have limited the scope of the bill to strictly inter-user communications, such as sharing content on social media or sending direct messages to friends. The new version of the bill is vaguer and doesn’t pin “interoperability” to a particular use case. The term isn’t defined, and the scope of the activities implicated in the newer bill is much broader, leaving it more open to interpretation. Such vagueness could be dangerous. Advertisers and data brokers have recently worked to co-opt the rhetoric of interoperability, arguing that Google, Apple, and other developers of user-side software must keep giving them access to sensitive user data in order to promote competition.
But as we’ve said before, competition is not an end in itself—we don’t want the ACCESS Act to help more companies compete to exploit your data. Instead, the authors should define interoperability in a way that includes user-empowering interoperability but explicitly excludes use cases like surveillance advertising.

Let the people sue

Time and again, we’ve seen well-intentioned consumer protection laws fail because of a lack of meaningful enforcement. The easiest way to fix that is to give enforcement power to those most affected by the law: the users. That’s why the ACCESS Act needs a private right of action. In the House draft of the bill, the FTC would be in charge of enforcing the law. This is a lot of responsibility to vest in an agency that’s already overtaxed. Even if the FTC enforces the law in good faith, it may not have the resources to go toe-to-toe with the biggest corporations in the world. And this kind of regulatory enforcement could open the door to regulatory capture, in which giant corporations successfully lobby to fill enforcement agencies with personnel who’ll serve their interests. The way to make sure that the bill’s policy turns into practice is to give those who might be harmed – users – the right to sue. Users whose privacy and security are compromised because of interfaces opened by the ACCESS Act should be able to take those responsible to court, whether it’s the large platforms or their would-be competitors who break the law.
As we wrote: “Put simply: the ACCESS Act needs a private right of action so that those of us stuck inside dominant platforms, or pounding on the door to innovate alongside or in competition with them, are empowered to protect ourselves.”

Bring back delegability

One of the best ideas from the original version of the ACCESS Act was “delegability.” A delegability mandate would require large platforms to open up client-side interfaces so that users, hobbyist developers, and small companies could create tools that work on top of the platforms’ existing infrastructure. Users would then be free to “delegate” some of their interactions with the large platforms to trusted agents who could help make those platforms serve users’ needs. This type of “follow-on innovation” has been a hallmark of new tech platforms in the past, but it’s been sorely lacking in the ecosystem around today’s tech giants, who assert tight control over how people use their services. Unfortunately, the version of the ACCESS Act recently introduced in the House has dropped the delegability requirement entirely. This is a major omission, and it severely limits the kinds of interoperability the bill would create. The authors should look to the older version of the bill and re-incorporate one of the most important innovations that 2019’s ACCESS Act produced.

Government standards as safe harbors, not mandates

The ACCESS Act would establish a multi-stakeholder technical committee to make recommendations to the FTC about the technical standards that large platforms must implement to allow interoperability. Many consumer advocates may be tempted to see this as the best way to force big companies to do what the Act tells them. Advocates and lawmakers are (rightly) skeptical of giving Facebook and friends any kind of leeway when it comes to complying with the law. However, forcing big platforms to use new, committee-designed technical standards may do more harm than good.
It would ensure that the standards take a long time to create, and an even longer time to modify. It could mean that platforms forced to use those standards must lobby for government approval before changing anything at all, which could prevent them from adding new, user-positive features. It could also mean that the interfaces created in the first round of regulation—reflecting the tech platforms as they exist today—are unable to keep up as the internet evolves, and fail to serve their purpose as time goes on. And such clunky bureaucracy may give the tech giants ammunition to argue that the ACCESS Act is a needless, costly tax on innovation. It’s not necessarily bad to have the government design, or bless, a set of technical standards that implement ACCESS’s requirements. However, the platforms subject to the law should also have the freedom to implement the requirements in other ways. The key will be strong enforcement: regulators (or competitors, through a private right of action) should aggressively scrutinize the interfaces that big platforms design, and the law should impose strict penalties when platforms build interfaces that are inadequate or anti-competitive. If the platforms want to avoid such scrutiny, they should have the choice to implement the government’s standards instead.

About that standardization process

At EFF, we’re no strangers to the ways that standardization processes can be captured by monopolists, so we’ve paid close attention to the portions of the ACCESS Act that define new technical standards for interoperability. We have three suggestions.

Fix the technical committee definition. The current draft of the bill calls for each committee to have two or more reps from the dominant company; two or more reps from smaller, competing companies; two or more digital rights/academic reps; and one rep from the National Institute of Standards and Technology.
This may sound like a reasonable balance of interests, but in theory it would allow a committee consisting of 100 Facebook engineers, 100 Facebook lawyers, two engineers from a small startup, two academics, and a NIST technologist. Congress should tighten the definition of the technical committee, capping the number of reps from the dominant companies and fixing the ratio of dominant-company reps to the other groups represented on the committee.

Subject the committee work to public scrutiny and feedback. The work of the technical committee—including access to its mailing lists and meetings, as well as discussion drafts and other technical work—should be a matter of public record. All committee votes should be public. The committee’s final work should be subject to public notice and comment, and the FTC should ask the committee to revise its designs based on public feedback where appropriate.

Publish the committee’s final work. The current draft of the ACCESS Act limits access to the committee’s API documentation to “competing businesses or potential competing businesses.” That’s not acceptable. We have long fought for the principle that regulations should be in the public domain, and that includes the ACCESS Act’s API standards. These must be free of any encumbrance, including copyright (and para-copyrights such as anti-circumvention), trade secrecy, or patents, and available for anyone to re-implement. Where necessary, the committee should follow the standardization best practice of requiring participants to covenant not to enforce their patents against those who implement the API.

Conclusion

Ultimately, it’s unlikely that every one of these policy suggestions will make it into the bill. That’s okay—even an imperfect bill can still be a step forward for competition. But these improvements would make sure the new law delivers on its promise, leading to a more competitive internet where everyone has a chance at technological self-determination.

  • Changing Section 230 Won’t Make the Internet a Kinder, Gentler Place
    by Joe Mullin on June 17, 2021 at 3:29 pm

Tech platforms, especially the largest ones, have a problem—there’s a lot of offensive junk online. Many lawmakers on Capitol Hill keep coming back to the same solution—blaming Section 230. What lawmakers don’t notice is that many of the people posting that offensive junk get stopped, again and again, thanks to Section 230. During a March hearing in the House Committee on Energy and Commerce, lawmakers expressed concern over some of the worst content online, including extremist content, falsehoods about COVID-19, and election disinformation. But it’s people spreading just this type of content who often file lawsuits trying to force their content back online. These unsuccessful lawsuits show that Section 230 has repeatedly stopped disinformation specialists from disseminating their harmful content. Section 230 stands for the simple idea that you’re responsible for your own speech online—not the speech of others. It also makes clear that online operators, from the biggest platforms to the smallest niche websites, have the right to curate the speech that appears on their sites. Users dedicated to spreading lies or hateful content are a tiny minority, but weakening Section 230 would make their job easier. When content moderation doesn’t go their way—and it usually doesn’t—they’re willing to sue. As the cases below show, Section 230 is rightfully used to quickly dismiss their lawsuits. If lawmakers weaken Section 230, these meritless suits will linger in court, costing online services more and making them leery of moderating the speech of known litigious users. That could make it easier for these users to spread lies online.

Section 230 Protects Moderators Who Remove Hateful Content

James Domen identifies as a “former homosexual” who now identifies as heterosexual. He created videos that describe being LGBTQ as a harmful choice, and shared them on Vimeo, a video-sharing website.
In one video, he described the “homosexual lifestyle” this way: “It’ll ruin your life. It’s devastating. It’ll destroy your life.” In at least five videos, Domen also condemned a California bill that would have expanded a ban on “sexual orientation change efforts,” or SOCE. Medical and professional groups have for decades widely recognized that efforts to change sexual orientation in various ways, sometimes called “conversion therapy,” are harmful. Vimeo removed Domen’s videos. In a letter to Domen’s attorney, Vimeo explained that SOCE-related videos “disseminate irrational and stereotypical messages that may be harmful to people in the LGBT community,” because it treated homosexuality as “a mental disease or disorder” that “can and should be treated.” Vimeo bans “hateful and discriminatory” content, and company officials told Domen directly that, in their view, his videos fell into that category. Domen sued, claiming that his civil rights were violated. Because of Section 230, Domen’s lawsuit was quickly thrown out. He appealed, but in March, the federal appeals court also ruled against him. Forcing a website to publish Domen’s anti-LGBTQ content might serve Domen’s interests, but only at the expense of many other users of the platform. No website should have to face a lengthy and expensive lawsuit over such claims. Because of Section 230, they don’t. Some lawmakers have proposed carving civil rights claims out of Section 230. But that could have the unintended side effect of allowing lawsuits like Domen’s to continue—making tech companies more skittish about removing anti-LGBTQ content. 
Section 230 Protects Moderators Who Remove COVID-19 Falsehoods

Marshall Daniels hosts a YouTube channel in which he has stated that Judaism is “a complete lie” which was “made up for political gain.” Daniels, who broadcasts as “Young Pharaoh,” has also called Black Lives Matter “an undercover LGBTQ Marxism psyop that is funded by George Soros.” In April 2020, Daniels live-streamed a video claiming that vaccines contain “rat brains,” that HIV is a “biologically engineered, terroristic weapon,” and that Anthony Fauci “has been murdering motherfuckers and causing medical illnesses since the 1980s.” In May 2020, Daniels live-streamed a video called “George Floyd, Riots & Anonymous Exposed as Deep State Psyop for NOW.” In that video, he claimed that nationwide protests over George Floyd’s murder were “the result of an operation to cause civil unrest, unleash chaos, and turn the public against [President Trump].” According to YouTube, he also stated the COVID-19 pandemic and Floyd’s murder “were covert operations orchestrated by the Freemasons,” and accused Hillary Clinton and her aide John Podesta of torturing children. Near the video’s end, Daniels stated: “If I catch you talking shit about Trump, I might whoop your ass fast.” YouTube removed both videos, saying that they violated its policy on harassment and bullying. Daniels sued YouTube, demanding account reinstatement and damages. He claimed that YouTube amounted to a state actor and had thus violated his First Amendment rights. (The suggestion that courts should treat social media companies as the government has no basis in the law, as the Ninth Circuit reaffirmed last year.) In March, a court dismissed most of Daniels’ claims under Section 230. That law protects online services—both large and small—from being sued for refusing to publish content they don’t want to publish. Again, internet freedom was protected by Section 230.
No web host should be forced to carry false and threatening content, or QAnon-based conspiracy theories, like those created by Daniels. Section 230 protects moderators who kick out such content.

Section 230 Protects Moderators Who Remove Election Disinformation

The Federal Agency of News LLC, or FAN, is a Russian corporation that purports to be a news service. FAN was founded in the same building as Russia’s Internet Research Agency, or IRA; the IRA became the subject of a criminal indictment in February 2018 for its efforts to meddle in the 2016 U.S. election. The founder and first General Director of FAN was Aleksandra Yurievna Krylova, who is wanted by the FBI for conspiracy to defraud the U.S. Later in 2018, the FBI unsealed a criminal complaint against FAN’s chief accountant, Elena Khusyaynova. In that complaint, the FBI said that the Federal Agency of News was not so different from the IRA. Both were allegedly part of “Project Lakhta,” a Russian operation to interfere with political and electoral systems both in Russia “and other countries, including the United States.” Facebook shut down more than 270 Russian-language accounts and pages in April 2018, including FAN’s account. Company CEO Mark Zuckerberg said the pages “were controlled by the IRA,” which had “repeatedly acted deceptively and tried to manipulate people in the U.S., Europe, and Russia.” The IRA used a “network of hundreds of fake accounts to spread divisive content and interfere in the U.S. presidential election.” Facebook’s Chief Security Officer stated that the IRA had spent about $100,000 on Facebook ads in the United States. At this point, one might think that anyone with alleged connections to the Internet Research Agency, including FAN, would lie low. But that’s not what happened. Instead, FAN’s new owner, Evgeniy Zubarev, hired U.S. lawyers and filed a lawsuit against Facebook, claiming that his civil rights had been violated.
He demanded that FAN’s account be reinstated and that FAN be paid damages. A court threw the FAN lawsuit out on Section 230 grounds. The plaintiffs re-filed a new complaint, which the court again threw out.

Small Companies and Users Can’t Afford These Bogus Lawsuits

Weakening Section 230 would give frivolous lawsuits like the ones above a major boost. Small companies, with no margin for extra legal costs, would be under more pressure to capitulate to bogus demands over their content moderation. Section 230 protects basic principles, whether you run a blog with a comment section, an email list with 100 users, or a platform serving millions. You have the right to moderate. You have the right to speak your own mind, and to serve other users, without following the dictates of a government commission—and without fear of a bankrupting lawsuit. Innovation, experimentation, and real competition are the best paths forward to a better internet. More lawsuits over everyday content moderation won’t get us there.

  • Emails from 2016 Show Amazon Ring’s Hold on the LAPD Through Camera Giveaways
    by Matthew Guariglia on June 17, 2021 at 12:06 pm

In March 2016, “smart” doorbell camera maker Ring was a growing company attempting to market its wireless smart security camera when it received an email from an officer in the Los Angeles Police Department (LAPD) Gang and Narcotics Division, who was interested in purchasing a slew of devices. The Los Angeles detective wanted 20 cameras, consisting of 10 doorbell cameras and 10 “stick up” cameras, which retailed for nearly $3,000. Ring, headquartered in nearby Santa Monica, first offered a discount but quickly sweetened the deal: “I’d be happy to send you those units free of charge,” a Ring employee told the officer, according to emails released in response to California Public Records Act (CPRA) requests filed by EFF and NBC’s Clark Fouraker. These emails are also the subject of a detailed new report from the Los Angeles Times. A few months later, in July 2016, Ring was working with an LAPD officer to distribute a discount code that would allow officers to purchase Ring cameras for $50 off. As a growing number of people used his discount code, Ring offered the officer more and more free equipment. Officers were offered rewards based on how many people had used their personal coupon codes to order products. These officers receiving free equipment, either for an investigation or for their “hard work” helping to promote the sale of Ring products through discount codes, were not isolated incidents. Across the LAPD—from the gang division downtown to community policing units in East Los Angeles and Brentwood—Ring offered, or officers requested, thousands of dollars’ worth of free products in exchange for officers’ promotion of Ring products to fellow officers and the larger community, seemingly in violation of department prohibitions on both accepting gifts from vendors and endorsing products.
In another incident, the LAPD asked Ring for cameras to aid in an investigation involving a slew of church break-ins. Ring offered to send the police a number of cameras free of charge, but not without recognizing a marketing opportunity: “If the church sees value in the devices, perhaps it’s something that they can talk about with their members. Let’s talk more about this on the phone, but for now, I’ll get those devices sent out ASAP.” The LAPD released over 3,000 pages of emails from 2016 between Ring representatives and LAPD personnel in response to the CPRA requests. The records show that leading up to Ring’s official launch of partnerships with police departments—which now number almost 150 in California and over 2,000 across the country—Ring worked steadily with Los Angeles police officers to provide free or discounted cameras for official and personal use, and in return, the LAPD worked to encourage the spread of Ring’s products throughout the community. The emails show officers were ready to tout the Ring camera as a device they used themselves, one they “love,” “completely believe in,” and “support.” In one email, an LAPD employee said they recommend Ring’s doorbell camera to everyone they meet. For over a year, EFF has been sounding the alarm about Ring and its police partnerships, which have in effect created neighborhood-wide surveillance networks without public input or debate. As part of these partnerships, Ring controls when and how police speak about Ring—with the company often requiring final say over statements and admonishing police departments that stray from the script. Racial justice and civil liberties advocates have continually pointed out how Ring enables racial profiling. Rather than making people feel safer in their own homes, Ring cameras can often have the reverse effect.
When a supposed crime-fighting tool alerts a user every time a person approaches their home, the user can easily get the impression that their home is under siege. This paranoia can turn public neighborhoods filled with innocent pedestrians and workers into de facto police states, where Ring owners can report “suspicious” people to their neighbors via Ring’s Neighbors social media platform, or to the police. In a recent investigation, VICE found that a vast majority of people labeled “suspicious” were people of color. Ring, with its motion detection alerts, gives residents a digitally aided way of enforcing who does and does not belong in their neighborhood based on their own biases and prejudices. Ring also has serious implications for First Amendment activities. Earlier this year, EFF reported that the LAPD requested footage from Ring cameras related to protests in Los Angeles following the police murder of George Floyd. These emails further add to these concerns, as they point to a scheme in which public servants have used their positions for private gain and contributed to an environment of fear and suspicion in communities already deeply divided. When confronted by police encouraging residents to mount security cameras, people should not have to wonder whether their local police are operating out of a real concern over safety—or whether they are motivated by the prospect of receiving free equipment. EFF has submitted a letter raising these concerns and calling on the California Attorney General to initiate a public integrity investigation into the relationship between Ring and the LAPD. The public has a right to know whether officers in their communities have received or are receiving benefits from Ring, and whether those benefits have influenced when and if police have encouraged communities to buy and use Ring cameras.
Although the incidents recorded in these emails occurred primarily in 2016, Ring’s police partnerships and influence have only spread in the intervening years. It’s time for the California Department of Justice to step in and use its authority to investigate if and when Ring wielded inappropriate influence over California’s police and sheriff’s departments.

Emails between the LAPD and Ring:

EFF’s letter to the California Department of Justice on the relationship between the LAPD and Ring:

EFF Director of Investigations Dave Maass and EFF Research Intern Jayme Sileo, a 2021 graduate of the Reynolds School of Journalism at the University of Nevada, Reno, contributed to this report.

  • 22 Rights Groups Tell PayPal and Venmo to Shape Up Policies on Account Closures
    by Rebecca Jeschke on June 15, 2021 at 4:42 pm

Companies Have History of Unfair Crackdowns on First Amendment-Protected Activities

San Francisco—Nearly two dozen rights groups, including the Electronic Frontier Foundation (EFF), have joined together to tell PayPal and its subsidiary Venmo to shape up their policies on account freezes and closures, as their opaque practices are interfering with payment systems connected to many First Amendment-protected activities. “Companies like PayPal and Venmo have hundreds of millions of users. Access to their services can directly impact an individual, company, or nonprofit’s ability to survive and thrive in our digital world,” said EFF International Director of Freedom of Expression Jillian York. “But while companies like Facebook and YouTube have faced substantial scrutiny for their history of account closures, financial companies like PayPal have often flown under the radar. Now, the human rights community is sending a clear message that it’s time to change.” The coalition sent a letter to PayPal and Venmo today, voicing particular concern about account closures that appear to have been used to pressure or single out websites that host controversial—but legal—content. PayPal shut down the account of online bookseller Smashwords over concern about erotic fiction, and also refused to process payments to the whistleblower website Wikileaks. Last year, Venmo was sued for targeting payments associated with Islam or Arab nationalities or ethnicities, and there are also numerous examples of sex worker advocates facing account closures. Today’s letter calls on PayPal and Venmo to provide more transparency and accountability around their policies and practices for account freezes and closures, including publishing regular transparency reports, providing meaningful notice to users, and offering a timely and meaningful appeals process.
These recommendations are in alignment with the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of principles developed by free expression advocates and scholars to help companies center human rights when moderating user-generated content and accounts. “More transparency into financial censorship helps civil liberties and human rights advocates see patterns of abuse,” said EFF Chief Program Officer Rainey Reitman. “It’s vital that PayPal and Venmo follow in the steps of other companies and begin publishing annual transparency reports.” The signers of today’s letter include 7amleh – The Arab Center for the Advancement of Social Media, Access, ACLU of Northern California, American Civil Liberties Union, Article 19, the Center for Democracy and Technology, Center for LGBTQ Economic Advancement & Research (CLEAR), Demand Progress Education Fund, European Legal Support Center (ELSC), Fight for the Future, Freedom of the Press Foundation, Global Voices, Masaar-Technology and Law Community, Mnemonic, New America’s Open Technology Institute, PDX Privacy, the Tor Project, Taraaz, Ranking Digital Rights, Restore the Fourth Minnesota, and SMEX. For the full letter to PayPal and Venmo:

Contact:
Jillian C. York, Director for International Freedom of Expression, [email protected]
Rainey Reitman, Chief Program Officer, [email protected]

  • Unconstitutional Florida Law Barring Platforms from Suspending Politicians Should be Blocked, EFF Tells Court
    by Rebecca Jeschke on June 14, 2021 at 8:23 pm

    S.B. 7072 Violates First Amendment, Gives Politicians’ Speech Preferential Treatment Over Other UsersTallahassee, Florida—The Electronic Frontier Foundation (EFF) and Protect Democracy urged a federal judge to strike down Florida’s law banning Facebook, Twitter, and other platforms from suspending political candidates’ accounts, saying it unconstitutionally interferes with the First Amendment rights of the companies and their users, and forces companies to give politicians’ speech preferential treatment that other users are denied. EFF has long criticized large online platforms’ content moderation practices as opaque, inconsistent, and unfair because they often remove legitimate speech and disproportionately harm marginalized populations that struggle to be heard. These are serious problems that have real world consequences, but they don’t justify a law that violates the free speech rights of internet users who don’t happen to be Florida politicians and the private online services on which they rely, EFF said in a brief filed today in U.S. District Court for the Northern District of Florida. “The First Amendment prevents the government from forcing private publishers to publish the government’s preferred speech, and from forcing them to favor politicians over other speakers. This is a fundamental principle of our democracy,” said EFF Civil Liberties Director David Greene. The Supreme Court in 1974 unanimously rejected a Florida law requiring newspapers to print candidates’ replies to editorials criticizing them. Government interference with decisions by private entities to edit and curate content is anathema to free speech, the court said. “The same principle applies here to S.B. 7072,” said Greene. Florida Governor Ron DeSantis signed the law, set to take effect July 1, to punish social media companies for their speech moderation practices. 
It follows Facebook’s and Twitter’s bans on former President Donald Trump’s accounts and complaints by lawmakers of both parties that platforms have too much control over what can be said on the internet. The law gives preferential treatment to political candidates, preventing platforms at any point before an election from canceling their accounts. This gives candidates free rein to violate any platform’s rules with impunity, even when it causes abuse or harassment, or when the speech is unprotected by the First Amendment. Their posts cannot be de-prioritized or annotated, a privilege no other users receive. The law also limits platforms’ ability to moderate content by entities and individuals with large numbers of followers or readers. S.B. 7072 does mandate that platforms notify users about takedowns, use clear moderation standards, and take other steps to be more transparent. These are laudable provisions. But the overall framework of the law is unconstitutional. Instead, platforms could address unfair content moderation practices by voluntarily adopting a human rights framework for speech curation such as the Santa Clara Principles. “Internet users should demand transparency, consistency, and due process in platforms’ removal process,” said EFF Senior Staff Attorney Aaron Mackey. “These voluntary practices can help ensure content moderation comports with human rights principles to free expression without violating the First Amendment, as S.B. 7072 does.” For the full amicus brief:

Contact:
David Greene, Civil Liberties Director, [email protected]
Aaron Mackey, Senior Staff Attorney, [email protected]

  • Hearing Tuesday: EFF Testifies Against SFPD for Violating Transparency Policies
    by Rebecca Jeschke on June 14, 2021 at 2:42 pm

    Police Department Withheld Documents About Use of Facial RecognitionSan Francisco – On Tuesday, June 15, at 5:30 pm PT, the Electronic Frontier Foundation (EFF) will testify against the San Francisco Police Department (SFPD) at the city’s Sunshine Ordinance Task Force committee meeting. EFF has registered a complaint against the SFPD for withholding records about a controversial investigation involving the use of facial recognition. In September of last year, SFPD arrested a man suspected of illegally discharging a gun, and a report in the San Francisco Chronicle raised concerns that the arrest came after a local fusion center ran the man’s photo through a facial-recognition database. The report called into question SFPD’s role in the search, particularly because the city’s Surveillance Technology Ordinance, enacted in 2019, made San Francisco the first city in the country to ban government use of face recognition technology. EFF filed a public records request with the SFPD about the investigation and the arrest, but the department released only previously available public statements. EFF filed a complaint with the Sunshine Ordinance Task Force over SFPD’s misleading records release, after which point SFPD produced many more relevant documents. At Tuesday’s hearing, EFF Investigative Researcher Beryl Lipton will ask the task force to uphold EFF’s complaint about the SFPD, arguing that San Francisco’s transparency policies won’t work well unless public agencies are held to account when trying to skirt their responsibilities.

WHAT: San Francisco Sunshine Ordinance Task Force hearing
WHO: Beryl Lipton, EFF Investigative Researcher
WHEN: Tuesday, June 15, 5:30 pm
LISTEN/CALL IN LINE: 1-415-906-4659, Meeting ID: 100 327 123#

For more information on the hearing:

Contact:
Saira Hussain, Staff Attorney, [email protected]
Beryl Lipton, Investigative Researcher, [email protected]
