EFF’s Deeplinks Blog: Noteworthy news from around the internet

  • Japan’s Rikunabi Scandal Shows The Dangers of Privacy Law Loopholes
    by Katitza Rodriguez on May 12, 2021 at 2:33 pm

    Technology users around the world are increasingly concerned, and rightly so, about protecting their data. But many are unaware of exactly how their data is being collected, and would be shocked to learn of the scope and implications of mass consumer data collection by technology companies. For example, many vendors use tracking technologies, including cookies—small pieces of text stored in your browser that let websites recognize your browser and see your browsing activity or IP address, though not your name or address—to build expansive profiles of user behavior over time and across apps and sites. Such data can be used to infer, predict, or evaluate information about a user or group. User profiles may be inaccurate, unfair, or discriminatory, yet still be used to inform life-altering decisions about the people they describe. A recent data privacy scandal in Japan involving Rikunabi—a major job-seeking platform that calculated and sold companies algorithmic scores predicting how likely individual job applicants were to decline a job offer—has underscored how users’ behavioral data can be used against their best interests. Most importantly, the scandal showcases how companies design workarounds, or “data-laundering” schemes, to circumvent their obligations under Japan’s data protection law, the Act on the Protection of Personal Information (APPI). The case also highlights the dangers of badly written data protection laws and their loopholes. The Japanese Parliament has adopted amendments to the APPI, expected to take effect by early 2022, intended to close some of these loopholes, but the changes still fall short.

The Rikunabi Scandal

Rikunabi is operated by Recruit Career (now Recruit Co., Ltd.), a subsidiary of the media conglomerate Recruit Group, which also owns Indeed and Glassdoor. Rikunabi allows job-seekers to search for job opportunities and mostly caters to college students and others just beginning their careers.
It hosts job listings for thousands of companies. Like many Internet platforms, Rikunabi used cookies to collect data about how its users search, browse, and interact with its job listings. Between March 2018 and February 2019, using Rikunabi’s data, Recruit Career—without users’ consent—calculated and sold companies algorithmic scores that predicted how likely an individual job applicant was to decline a job offer or withdraw their application. Thirty-five companies, including Toyota Motor Corporation, Mitsubishi Electric Corporation, and other Japanese corporate giants, purchased the scores. In response to a public outcry, Recruit Career tried to excuse itself by saying that the companies who purchased the job-declining scores agreed not to use them for the selection of candidates. The company claimed the scores were intended only to help clients communicate better with their candidates, but there was no guarantee that is how they would be used. Because of Japan’s dominant lifetime employment system, students feared such scores could limit their job opportunities and career choices, potentially affecting their whole professional lives.

APPI: Japanese Data Protection Law

A loophole in the APPI is key to understanding the Rikunabi scheme. Ironically, Japan, the world’s third-biggest economic power and one of the most technologically advanced countries, was the first country whose data protection law was recognized as offering a level of protection equivalent to European Union (EU) law. However, the APPI lags considerably behind EU law on the regulation of cookies and their use to identify people. Under the stronger, stricter, and more detailed EU data protection regulations, cookies can constitute personal data. Identifiers don’t have to include a user’s legal name (the identity found on a national ID card or driver’s license) to be considered personal data under EU law.
If entities processing personal data can indirectly identify you based on multiple data points, such as cookies and other identifiers likely to distinguish you from others, that is considered processing personal data. This is what EU authorities refer to as “singling out” to indirectly identify people: isolating some or all records which identify an individual, linking at least two records of the same individual to identify someone, or inferring identification by looking at certain characteristics and comparing them to other characteristics. The very definition of personal data under the EU’s General Data Protection Regulation (GDPR) refers to “online identifiers.” GDPR guidelines specifically mention that cookie identifiers may be used to create profiles of and identify people. If companies process personal data in a way that could tell one person apart from another, then that person is “identified or identifiable.” And if the data is about a person and is used to evaluate the individual, or is likely to have an impact on the person’s rights or interests, such data “relates to” the “identified or identifiable” person. These are key elements of the definition of personal data under EU regulation, and they are valuable for understanding this case. Why? Because EU regulation requires companies to request users’ prior consent before using any identifying cookies, except ones strictly necessary for things like remembering items in your shopping cart or information entered into forms. In contrast, the APPI uses very different criteria to judge whether cookies or similar machine-generated identifiers are personal data. APPI guidelines look at whether a company collecting, processing, and transferring cookies can readily collate them with other information, by a method used in the ordinary course of business, to find out the legal identity of an individual.
So even if a company could identify an individual by asking another company for other data to collate with a cookie, the cookie is not considered personal data for that company. The company can thus freely collect, process, and transfer the cookie even when a recipient of the cookie can easily re-identify the person by linking it with another data set. Under this test, companies can indirectly identify people by means of singling out without running afoul of the APPI.

The Rikunabi Scheme: Data Laundering to Circumvent the Spirit of the Law

The strategy involved three players. The first two are Recruit Career and Recruit Communications. Recruit Career is the company that operates Rikunabi, the job-search website. Recruit Communications is a marketing and advertising company, which Recruit Career subcontracted to create and deliver the algorithmic scores. The third player is the one purchasing the scores: Rikunabi’s clients, such as Toyota Motor Corporation. According to a disclosure by Recruit Career, the scheme operated as follows.

Rikunabi First Scheme

Recruit Career collected data about users who visited and used the Rikunabi site. This included their real names, email addresses, and other personal data, as well as their browsing activity on Rikunabi. For example, one user’s profile might contain information about which companies they searched for, which ones they looked at, and what industries they seemed most interested in. All of this information was linked to a Rikunabi cookie ID. For the creation of the algorithmic scores, Recruit Career shared with Recruit Communications Rikunabi users’ browsing history and activity linked to their Rikunabi cookie IDs, omitting real names.

Rikunabi Second Scheme

At the same time, client companies such as Toyota accepted job applications on their own websites. Each client company collected applicants’ legal names and contact information, and also assigned each applicant a unique applicant ID.
All of this information was linked to the companies’ Employer cookie IDs. For the scoring work, each client company instructed applicants to take a web survey, which was designed to allow Recruit Communications to directly collect their Employer cookie IDs and the applicant IDs connected to them. In this way, Recruit Communications was able to collect applicants’ Rikunabi cookies and the cookies assigned to applicants by client companies. Recruit Communications somehow linked these two sets of identifiers, possibly by using cookie syncing (a method that web trackers use to link cookies with one another and combine the data one company has about a user with data that other companies might have), so that it could associate Rikunabi browsing activity with applicant IDs and single out an individual. With the linked database, Recruit Communications put the data to work. It trained a machine learning model to look at a user’s Rikunabi browsing history and then predict whether that user would accept or reject a job offer from a particular company. Recruit Communications then delivered those scores, associated with applicant IDs, back to the client companies. Since each client had its own database linking its applicant IDs to real identities, client companies could easily associate the scores they received from Recruit Communications with the real names of job applicants. And the job seekers who entrusted their data to Rikunabi? Without their knowledge or consent, the site’s operator and its sister company, in collaboration with Rikunabi’s clients, had created a system that may have cost them a job offer by inaccurately predicting what jobs or companies they were interested in.

Why Do It Like This?

The APPI prohibits businesses from sharing a user’s personal data without prior consent. So, if Recruit Career had delivered scores linked to applicants’ names, it would have been required to get users’ consent to process their information in that way.
The APPI doesn’t regard cookies or similar machine-generated identifiers as personal data if the company itself cannot readily collate them with other data sets to identify a person. So, because Recruit Communications was provided only with data disconnected from names and other personal identifiers, it was systematically kept unable to collate other information to identify individuals. Thus, under the APPI, Recruit Communications was not collecting, processing, or providing any personal data, and had no need to get user consent to calculate and deliver algorithmic scores to client companies. This data laundering scheme could have been created to ensure that the whole program was technically legal, even without users’ consent. But because Recruit Career knew that client companies could easily associate the scores linked to each applicant ID with applicants’ real names, the Japanese data protection authority, the Personal Information Protection Commission, found that it had engaged in “very inappropriate services, which circumvented the spirit of the law,” and ordered the company to improve privacy protections.

The 2020 APPI Amendment Closed Some Loopholes, But Others Remain

After the scandal, the APPI was amended in June 2020. When the amended law goes into effect by early 2022, it will require companies transferring a cookie or similar machine-generated identifier to confirm beforehand whether the recipient of the data can identify an individual by combining that data with other information the recipient has. When that is the case, the new APPI requires the transferring company to ensure that the recipient has obtained users’ prior consent for the collection of personal data.
Rikunabi’s scheme would violate the 2020 amendment unless Recruit Communications, knowing full well that clients can combine the data it provides with data they already have to identify individuals, had confirmed with clients, before transferring the algorithmic scores, that they had obtained users’ prior consent for collecting their private information. But even after the 2020 amendment, the APPI does not classify a cookie as personal data, even when it can be indirectly combined with the dossiers of behavioral data often associated with it. This is a mistake. Cookies and similar machine-generated identifiers (like mobile ad IDs) are the linchpins that enable widespread online tracking and profiling. Cookies are used to link behavior from different websites to a single user, allowing trackers to connect huge swaths of a person’s life into a single profile. Just because a cookie isn’t directly linked to a person’s real identity doesn’t make the profile any less sensitive. And thanks to the data broker industry, cookies often can be linked to real identities with relative ease. A slew of “identity resolution” service providers sell trackers the ability to link pseudonymous cookie IDs to mobile phones, email addresses, or real names.
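To make the linking step concrete, here is a minimal, purely illustrative Python sketch of how two pseudonymous identifier spaces can be joined through a sync table. Every table, identifier, and function name below is hypothetical; the actual Rikunabi systems were never disclosed at this level of detail.

```python
# Illustrative only: joining a pseudonymous profile store, a cookie "sync
# table", and a client-side applicant roster to single out individuals.

# Browsing profiles keyed by the job site's cookie ID (no names attached).
site_profiles = {
    "site-cookie-1": {"companies_viewed": ["CompanyA", "CompanyB"]},
}

# Mapping learned when both cookies are observed in the same browser,
# e.g. via a web survey or cookie syncing.
cookie_sync = {"site-cookie-1": "employer-cookie-9"}

# Held by the client company: employer cookie ID -> internal applicant ID.
applicant_roster = {"employer-cookie-9": "applicant-42"}

def link_profiles(site_profiles, cookie_sync, applicant_roster):
    """Join the three tables so each browsing profile lands on an applicant ID."""
    linked = {}
    for site_cookie, profile in site_profiles.items():
        employer_cookie = cookie_sync.get(site_cookie)
        applicant_id = applicant_roster.get(employer_cookie)
        if applicant_id is not None:
            linked[applicant_id] = profile
    return linked

print(link_profiles(site_profiles, cookie_sync, applicant_roster))
```

The point of the sketch is that no single table contains a name, yet once they are joined, any score computed on the “anonymous” browsing profile becomes a score on an identifiable person—which is exactly why rules about what happens at the point of transfer matter.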

  • Outliving Outrage on the Public Interest Internet: the CDDB Story
    by Danny O'Brien on May 6, 2021 at 9:52 pm

    This is the third in our blog series on the public interest internet: past, present and future. In our previous blog post, we discussed how in the early days of the internet, regulators feared that without strict copyright enforcement and pre-packaged entertainment, the new digital frontier would be empty of content. But the public interest internet barn-raised to fill the gap—before the fledgling digital giants commercialised and enclosed those innovations. These enclosures did not go unnoticed, however—and some worked to keep the public interest internet alive. Compact discs (CDs) were the cutting edge of the digital revolution a decade before the web. Their adoption initially followed Lehman’s rightsholder-led transition, where existing publishers led the charge into a new medium, rather than the user-led homesteading of the internet. The existing record labels maintained control of CD production and distribution, and did little to exploit the new tech—but they did profit from bringing their old back catalogues onto the new digital format. The format was immensely profitable, because everyone re-bought their existing vinyl collections to move them onto CD. Beyond the improved fidelity of CDs, the music industry had no incentive to add new functionality to CDs or their players. When CD players were first introduced, they were sold exclusively as self-contained music devices—a straight-up replacement for record players that you could plug into speakers or your hi-fi “music centre,” but not much else. They were digital, but in no way online or integrated with any other digital technology. The exception was the CD-playing hardware incorporated into the latest multimedia PCs—a repurposing of the dedicated music-playing hardware, which delivered the CD’s contents to the PC as a pile of digital data.
With this tech, you could use CDs as a read-only data store, a fixed set of data, a “CD-ROM”; or you could insert a music CD and use your desktop PC to read in and play its digital audio through tinny desktop speakers or headphones. The crazy thing was that those music CDs contained raw dumps of audio, but almost nothing else. There was no bonus artist info stored on the CDs: no digital record of the CD title, no digital version of the cover image, not even a user-readable filename or two: just 74 minutes of untitled digital sound data, split into separate tracks, like its vinyl forebear. Consequently, a PC with a CD player could read and play a CD, but had no idea what it was playing. About the only additional information a computer could extract from the CD beyond the raw audio was the total number of tracks, and how long each track lasted. Plug a CD into a player or a PC, and all it could tell you was that you were now listening to Track 3 of 12. Around the same time as movie enthusiasts were building the IMDb, music enthusiasts were solving this problem by collectively building their own compact disc database—the CD Database (CDDB). Programmer Ti Kan wrote open source client software that would auto-run when a CD was put into a computer, and grab the number of tracks and their lengths. This client would query a public online database (designed by another coder, Steve Scherf) to see if anyone else had seen a CD with the same fingerprint. If no one had, the program would pop up a window asking the PC user to enter the album details themselves, and would upload that information to the collective store, ready for the next user to find. All it took was one volunteer to enter the album info and associate it with the unique fingerprint of track durations, and every future CDDB client could grab the data and display it the moment the CD was inserted, letting its user pick tracks by name, peruse artist details, and so on.
When it started, most users of the CDDB had to precede much of their music-listening time with a short burst of volunteer data entry. But within months, the collective contributions of the Internet’s music fans had created a unique catalogue of current music that far exceeded the information contained even in expensive, proprietary industry databases. Deprived of any useful digital accommodations by the music industry, CD fans, armed with the user-empowering PC and the internet, built their own solution. This story, too, does not have a happy ending. In fact, in some ways the CDDB is the most notorious tale of enclosure on the early Net. Kan and Scherf soon realised the valuable asset they were sitting on, and along with the hosting administrator of the original database server, built it into a commercial company, just as the overseers of Cardiff’s movie database had. Between 2000 and 2001, as “Gracenote,” this commercial company shifted from a free service, incorporated by its many happy users into a slew of open source players, to serving hardware companies, whom it charged for a CD recognition service. It changed its client software to a closed proprietary software license, attached restrictive requirements to any code that used its API, and eventually blocked clients who did not agree to its license entirely. The wider CDDB community was outraged, and the bitterness persisted online for years afterwards. Five years later, Scherf defended his actions in a Wired magazine interview. His explanation was the same as the IMDb founders’: that finding a commercial owner and business model was the only way to fund the CDDB as a viable ongoing concern.
He noted that other groups of volunteers, notably an alternative service called freedb, had forked the database and client code from a point just before Gracenote locked it up. He agreed that was their right, and encouraged them to keep at it, but expressed scepticism that they would survive. “The focus and dedication required for CDDB to grow could not be found in a community effort,” he told Wired. “If you look at how stagnant efforts like freedb have been, you’ll see what I mean.” By locking down and commercializing CDDB, Scherf said that he “fully expect[ed] our disc-recognition service to be running for decades to come.” Scherf may have overestimated the lifetime of CDs, and underestimated the persistence of free versions of the CDDB. While freedb closed last year, Gnudb, an alternative derived from freedb, continues to operate. Its far smaller set of contributors doesn’t cover as many of the latest CD releases, but its data remains open for everyone to use—not just for the remaining CD diehards, but also as a permanent historical record of the CD era’s back catalogue: its authors, its releases, and every single track. Publicly available, publicly collected, and publicly usable, in perpetuity. Whatever criticisms might be laid at the feet of this form of the public interest internet, fragility is not one of them. It hasn’t changed much, which may count as stagnation to Scherf—especially compared to the multi-million-dollar company that Gracenote has become. But as Gracenote itself was bought up (first by Sony, then by Nielsen), re-branded, and re-focused, its predecessor has distinctly failed to disappear. Some Internet services do survive and prosper by becoming the largest, or by being bought by the largest. These success stories are very visible, if not organically, then because they can afford marketers and publicists.
If we listen exclusively to these louder voices, our assumption would be that the story of the Internet is one of consolidation and monopolization. And if—or perhaps just when—these conglomerates go bad, their failings are just as visible. But smaller stories, successful or not, are harder to see. When we dive into this area, things become more complicated. Public interest internet services can be engulfed and transformed into strictly commercial operations, but they don’t have to be. In fact, they can persist and outlast their commercial cousins. And that’s because the modern internet, buffeted as it is by monopolies, exploitation, and market and regulatory failure, still allows people to organize at low cost, with high levels of informality, in a way that can often be more efficient, flexible, and antifragile than strictly commercial, private interest services, or the centrally-planned government production of public goods. Next week: we continue our look at music recognition, and see how public interest internet initiatives can not only hang on as long as their commercial rivals, but continue to innovate, grow, and financially support their communities.

  • The Enclosure of the Public Interest Internet
    by Danny O'Brien on May 6, 2021 at 9:47 pm

    This is the second in our blog series on the public interest internet: past, present and future. It’s hard to believe now, but in the early days of the public internet, the greatest worry of some of its most high-powered advocates was that it would be empty. As the Clinton administration prepared to transition the internet from its academic and military origins to the heart of the promised “national information infrastructure” (NII), the government’s advisors fretted that the United States’ entertainment and information industries would have no commercial reason to switch from TV, radio, and recorded music. And without Hollywood and the record labels on board, the new digital environment would end up as a ghost mall, devoid of businesses or users. “All the computers, telephones, fax machines, scanners, cameras, keyboards, televisions, monitors, printers, switches, routers, wires, cables, networks and satellites in the world will not create a successful NII, if there is not content,” former Patent Office head Bruce Lehman’s notorious 1994 government green paper on intellectual property on the Net warned. The fear was that without the presence of the pre-packaged material of America’s entertainment industry, the nation would simply refuse to go online. As law professor Jessica Litman describes it, these experts’ vision of the Internet was “a collection of empty pipes, waiting to be filled with content.” Even as the politicians were drafting new, more punitive copyright laws intended to reassure Hollywood and the record labels (and tempt them into new, uncharted waters), the Internet’s first users were moving in and building anyway. Even with its tiny audience of technologists, first-adopters, and university students, the early net quickly filled with compelling “content”: a free-wheeling, participatory online media that drew ever larger crowds as it evolved.
Even in the absence of music and movies, the first net users built towers of information about them anyway. In rec.arts.movies, the Usenet discussion forum devoted to all things Hollywood, posters had been compiling and sharing lists of their favourite motion picture actors, directors, and trivia since the 1980s. By the time of the Lehman report, the collective knowledge of the newsgroup had outgrown its textual FAQs, and expanded first to a collectively-managed database on Colorado University’s file site, and then onward to one of the very first database-driven websites, hosted on a spare server at Wales’ Cardiff University. These days, you’ll know that Cardiff movie database by another name: the IMDb. The database that had grown out of the rec.arts.movies contributions was turned into a commercial company in 1996 and sold to Amazon in 1998 for around $55 million (equivalent to $88 million today). The Cardiff volunteers, led by one of the original moderators, Col Needham, continued to run the service as salaried employees of an Amazon subsidiary. The IMDb shows how the original assumptions of Internet growth were turned on their head. Instead of movie production companies leading the way, their own audience had successfully built and monetised the elusive “content” of the information superhighway by themselves—for themselves. The data of the databases was used by Amazon as the seed to build an exclusive subscription service, IMDbPro, for movie business professionals, and to augment its Amazon Prime video streaming service with quick-access film facts.
Rather than needing the movie moguls’ permission to fill the Internet, the Internet ended up supplying information that those moguls themselves happily paid a new, digital mogul for. But what about those volunteers who gave their time and labor to the collective effort of building this database for everyone? Apart from the few who became employees and shareholders of the commercial IMDb, they didn’t get a cut of the service’s profits. They also lost access to the full fruits of that comprehensive movie database. While you can still download the updated core of the Cardiff database for free, it covers only the most basic fields of the IMDb, and it is licensed under a strictly non-commercial license, fenced off with limitations and restrictions. No matter how much you might contribute to the IMDb, you can’t profit from your labor. The deeper info that was originally built from user contributions and supplemented by Amazon has been enclosed: shut away in a proprietary, paywalled property, gated off from the super-highway it rode in on. It’s a story as old as the net itself, and it echoes historic stories of the enclosure of the commons. A pessimist would say that this has been the fate of much of the early net and its aspirations. Digital natives built, as volunteers, free resources for everyone. Then, struggling to keep them online in the face of the burdens of unexpected growth, they ended up selling up to commercial interests. Big Tech grew to its monopoly position by harvesting this public commons, and then locking it away. But that’s not the only story from the early net. Everyone knows, too, the large public projects that somehow managed to steer away from this path. Wikipedia is the archetype, still updated by casual contributors and defiantly unpaid editors across the world, with the maintenance costs of its website comfortably funded by regular appeals from its attached non-profit.
Less known, but just as unique, is OpenStreetMap (OSM), a user-built, freely-licensed alternative to Google Maps, which, from public domain sources and the hard work of its volunteer cartographers, has compiled one of the most comprehensive maps of the entire earth. These are flagships of what we at EFF call the public interest internet. They produce and constantly replenish priceless public goods, available for everyone, while remaining separate from government, the traditional maintainer of public goods. Neither are they commercial enterprises, creating private wealth and (one hopes) public benefit through the incentive of profit. Built in the same barn-raising spirit of the early net, the public interest internet exploits the low cost of organizing online to provide stable, free repositories of user-contributed information. Through careful stewardship, or unique advantages, they have somehow escaped an enclosed and exploited fate as a proprietary service owned by a handful of tech giants. That said, while Wikipedia and OSM are easy, go-to examples of the public interest internet, they are not necessarily representative of it. Wikipedia and OSM, in their own way, are tech giants too. They run at the same global scale. They struggle with some of the same issues of accountability and market dominance. It’s hard to imagine a true competitor to Wikipedia or OSM emerging now, for instance—even though many have tried and failed. Their very uniqueness means that their influence is outsized. The remote, in-house politics of these institutions have real effects on the rest of society. Both Wikipedia and OSM have complex, often carefully negotiated, large-scale interactions with the tech giants. Google integrates Wikipedia into its searches, cementing the encyclopedia’s position. OSM is used by, and receives contributions from, Facebook and Apple.
It can be hard to know how individual contributors or users can affect the governance of these mega-projects or change the course of them. And there’s a recurring fear that the tech giants have more influence than the builders of these projects. Besides, if there’s really only a handful of popular examples of public good production by the public interest internet, is that really a healthy alternative to the rest of the net? Are these just crocodiles and alligators, a few visible survivors from a previous age of out-evolved dinosaurs, doomed to be ultimately outpaced by sprightlier commercial rivals? At EFF, we don’t think so. We think there’s a thriving economy of smaller public interest internet projects, which have worked out their own ways to survive on the modern internet. We think they deserve a role and representation in the discussions governments are having about the future of the net. Going further, we’d say that the real dinosaurs are our current tech giants. The small, sprightly, and public-minded public interest internet has always been where the benefits of the internet have been concentrated. They’re the internet’s mammalian survivors, hiding out in the nooks of the net, waiting to take back control when the tech giants are history. In our next installment, we take a look at one of the most notorious examples of early digital enclosure, its (somewhat) happier ending, and what it says about the survival skills of the public interest internet when a free database of compact discs outlasts the compact disc boom itself.

  • Introducing the Public Interest Internet
    by Danny O'Brien on May 6, 2021 at 9:27 pm

    Say the word “internet” these days, and most people will call to mind images of Mark Zuckerberg and Jeff Bezos, of Google and Twitter: sprawling, intrusive, unaccountable. This tiny handful of vast tech corporations and their distant CEOs demand our online attention and dominate the offline headlines. But on the real internet, one or two clicks away from that handful of conglomerates, there remains a wider, more diverse, and more generous world. Often run by volunteers, frequently without any obvious institutional affiliation, sometimes tiny, often local, but free for everyone online to use and contribute to, this internet preceded Big Tech, and inspired the earliest, most optimistic vision of its future place in society. The word “internet” has been so effectively hijacked by its most dystopian corners that it’s grown harder to even refer to this older element of online life, let alone bring it back into the forefront of society’s consideration. In his work documenting this space and exploring its future, academic, entrepreneur, and author Ethan Zuckerman has named it our “digital public infrastructure.” Hana Schank and her colleagues at the New America think tank have revitalized discussions around what they call “public interest technology.” In Europe, activists, academics, and public sector broadcasters talk about the benefits of the internet’s “public spaces” and of improving and expanding the “public stack.” Author and activist Eli Pariser has dedicated a new venture to advancing better digital spaces—what its participants describe as the “New Public.” Not to be outdone, we at EFF have long used the internal term “the public interest internet.” While these names don’t all point to exactly the same phenomenon, they all capture some aspect of the original promise of the internet.
Over the last two decades, that promise largely disappeared from wider consideration. By fading from view, it has grown underappreciated, underfunded, and largely undefended. Whatever you might call it, we see our mission as not just acting as the public interest internet’s legal counsel when it is under threat, but also championing it when it goes unrecognized. This blog series, we hope, will serve as a guided tour of some of the less visible parts of the modern public interest internet. None of the stories, organizations, collectives, and ongoing projects here have grabbed the attention of the media or congressional committees (at least, not as effectively as Big Tech and its moguls). Nonetheless, they remain just as vital a part of the digital space. They not only better represent the spirit and vision of the early internet, they underlie much of its continuing success: a renewable resource that tech monopolies and individual users alike continue to draw from. When Big Tech is long gone, a better future will come from the seeds of this public interest internet: seeds that are being planted now, and which need everyone to nurture them until they’re strong enough to sustain our future in a more open and free society. But before we look into the future, let’s take a look at the past, to a time when the internet was made from nothing but the public—and because of that, governments and corporations declared that it could never prosper. This is the introduction to our blog series on the public interest internet. Read more in the series:

  • The Enclosure of the Public Interest Internet
  • Outliving Outrage on the Public Interest Internet: the CDDB Story

  • Surveillance Self-Defense Playlist: Getting to Know Your Phone
    by Alexis Hancock on May 6, 2021 at 7:56 pm

    We are launching a new Privacy Breakdown of Mobile Phones “playlist” on Surveillance Self-Defense, EFF’s online guide to defending yourself and your friends from surveillance by using secure technology and developing careful practices. This guided tour walks through the ways your phone communicates with the world, how your phone is tracked, and how that tracking data can be analyzed. We hope to reach everyone from those who may have a smartphone for the first time, to those who have had one for years and want to know more, to savvy users who are ready to level up. The operating systems (OS) on our phones weren’t originally built with user privacy in mind or fully optimized to keep threatening services at bay. Along with the phone’s software, different hardware components have been added over time to make the average smartphone a Swiss army knife of capabilities, many of which can be exploited to invade your privacy and threaten your digital security. This new resource attempts to map out the hardware and software components, the relationships between the two, and the threats they can create. These threats can come from individual malicious hackers or organized groups all the way up to government-level professionals. This guide will help users understand a wide range of topics relevant to mobile privacy, including:

  • Location Tracking: Encompassing more than just GPS, your phone can be tracked through cellular data and WiFi as well. Find out the various ways your phone identifies your location.
  • Spying on Mobile Communications: The systems our phone calls were built on were based on a model that didn’t prioritize hiding information. That means targeted surveillance is a risk.
  • Phone Components and Sensors: Today’s modern phone can contain over four kinds of radio transmitters/receivers, including WiFi, Bluetooth, Cellular, and GPS.
  • Malware: Malicious software, or malware, can alter your phone in ways that make spying on you much easier.
  • Pros and Cons of Turning Your Phone Off: Turning your phone off can provide a simple defense against surveillance in certain cases, but the time and place a phone goes dark can itself be correlated with you.
  • Burner Phones: Sometimes portrayed as a tool of criminals, burner phones are also often used by activists and journalists. Know the do’s and don’ts of having a “burner.”
  • Phone Analysis and Seized Phones: When your phone is seized and analyzed by law enforcement, certain patterns and analysis techniques are commonly used to draw conclusions about you and your phone use.

This isn’t meant to be a comprehensive breakdown of CPU architecture in phones, but rather of the capabilities that affect your privacy more frequently, whether that is making a phone call, texting, or using navigation to get to a destination you have never been to before. We hope to give the reader a bird’s-eye view of how that rectangle in your hand works, take away the mystery behind specific privacy and security threats, and empower you with information you can use to protect yourself. EFF is grateful for the support of the National Democratic Institute in providing funding for this security playlist. NDI is a private, nonprofit, nongovernmental organization focused on supporting democracy and human rights around the world. Learn more by visiting
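The location tracking topic above notes that cell towers alone can place a phone, without GPS. As a toy illustration of the idea (ours, not taken from the guide; the tower coordinates and weights are invented), an observer who knows which towers “hear” a phone, and how strongly, can approximate its position with a signal-strength-weighted centroid:

```python
# Toy sketch of coarse cellular geolocation. A phone heard by several
# towers with known coordinates sits roughly at a centroid of those
# towers, weighted by received signal strength (stronger = closer).
# Real networks use timing, sector antennas, and far better models.

def weighted_centroid(towers):
    """towers: list of (lat, lon, weight) tuples."""
    total = sum(w for _, _, w in towers)
    lat = sum(la * w for la, _, w in towers) / total
    lon = sum(lo * w for _, lo, w in towers) / total
    return (lat, lon)

# Three hypothetical towers; the strongest signal comes from the
# tower at (1.0, 3.0), so the estimate is pulled toward it.
towers = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, 3.0, 2.0)]
estimate = weighted_centroid(towers)
print(estimate)  # → (1.0, 1.5)
```

The point is not precision but inevitability: this data exists as a byproduct of the network working at all, which is why the guide treats cellular tracking separately from GPS.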

  • Foreign Intelligence Surveillance Court Rubber Stamps Mass Surveillance Under Section 702 – Again
    by Cindy Cohn on May 6, 2021 at 6:25 pm

    As someone once said, “the Founders did not fight a revolution to gain the right to government agency protocols.” Well, it was not just someone: it was Chief Justice John Roberts. He flatly rejected the government’s claim that agency protocols could solve the Fourth Amendment violations created by police searches of our communications stored in the cloud and accessible through our phones. Apparently, the Foreign Intelligence Surveillance Court (FISC) didn’t get the memo. That’s because, in a recently declassified decision from November 2020, the FISC again found that a series of overly complex but still ultimately Swiss-cheese agency protocols — which are admittedly not even being followed — resolve the Fourth Amendment problems caused by the massive governmental seizures and searches of our communications currently occurring under FISA Section 702. The annual review by the FISC is required by law — it’s supposed to ensure that both the policies and the practices of the mass surveillance under 702 are sufficient. It failed on both counts. Justice Roberts was concerned with a single phone seized pursuant to a lawful arrest. The FISC is apparently unconcerned when it rubber-stamps mass surveillance impacting, by the government’s own admission, hundreds of thousands of nonsuspect Americans. What’s going on here? From where we sit, it seems clear that the FISC continues to suffer from a massive case of national security constitutional-itis. 
That is the affliction (not really, we made it up) where ordinarily careful judges sworn to defend the Constitution effectively ignore the flagrant Fourth Amendment violations that occur when the NSA, FBI (and, to a lesser extent, the CIA and NCTC) misuse the justification of national security to spy on Americans en masse. And this malady means that even when the agencies completely fail to follow the court’s previous orders, they still get a pass to keep spying. The FISC decision is disappointing on at least two levels. First, the protocols themselves are not sufficient to protect Americans’ privacy. They allow the government to tap into the Internet backbone and seize our international (and lots of domestic) communications as they flow by — ostensibly to see if they have been targeted. This is itself a constitutional violation, as we have long argued in our Jewel v. NSA case. We await the Ninth Circuit’s decision in Jewel on the government’s claim that this spying that everyone knows about is too secret to be submitted for real constitutional review by a public adversarial court (as opposed to the one-sided review by the rubber-stamping FISC). But even after that, the protocols themselves are Swiss cheese when it comes to protecting Americans. At the outset, unlike traditional foreign intelligence surveillance, under Section 702 FISC judges do not authorize individualized warrants for specific targets. Rather, the role of a FISC judge under Section 702 is to approve abstract protocols that govern the Executive Branch’s mass surveillance and then review whether they have been followed. The protocols themselves are inherently problematic. 
The law only requires that intelligence officials “reasonably believe” the “target” of an investigation to be a foreigner abroad — it is immaterial to the initial collection that there is an American, with full constitutional rights, on the other side of a conversation whose communications are both seized and searched without a warrant. It is also immaterial if the individuals targeted turn out to be U.S. persons. This was one of the many problems that ultimately ended with the decommissioning of the Call Detail Records program, which, despite being Congress’s attempt to rein in the program that started under Section 215 of the Patriot Act, still mass-surveilled communications metadata, including inadvertently and illegally collecting millions of call detail records from American persons. Next, the protocols allow collection for any “foreign intelligence” purpose, which is a much broader scope than merely searching for terrorists. The term encompasses information that, for instance, could give the U.S. an advantage in trade negotiations. Once these communications are collected, the protocols allow the FBI to use the information for domestic criminal prosecutions if related to national security. This is what Senator Wyden and others in Congress have rightly called a “backdoor” warrantless search. And those are just a few of the problems. While the protocols are complex and confusing, the end result is that nearly all Americans have their international communications seized initially, and a huge number of them are seized and searched by the FBI, NSA, CIA, and NCTC, often multiple times for various reasons, all without individual suspicion, much less a warrant. Second, the government agencies — especially the FBI — apparently cannot be bothered to follow even these weak protocols. This means that in practice, we users don’t even get that minimal protection. 
The FISC decision reports that the FBI has never limited its searches to just those related to national security. Instead, agents query the 702 system for investigations relating to health care fraud, transnational organized crime, violent gangs, domestic terrorism, public corruption, and bribery. And that’s in just the 7 FBI field offices reviewed. This is not a new problem, as the FISC notes, although it once again seems to think that the FBI just needs to be told one more time to comply and to do proper training (which it has failed to do for years). The court notes that it is likely that other field offices also did searches for ordinary crimes, but that the FBI also failed to do proper oversight, so we just don’t know how widespread the practice is. Next, the querying system for this sensitive information had been designed to make it hard not to search the 702-collected data, including by requiring agents to opt out (not in) of searching the 702 data and then timing out that opt-out after only thirty minutes. And even then, the agents could just toggle “yes” to search 702-collected data, with no secondary check prior to those searches. This happened multiple times (that we know of), allowing searches without any national security justification. The FBI also continued to improperly conduct bulk searches, which are large batch queries using multiple search terms without the written justifications required by the protocols. Even the FISC calls these searches “indiscriminate,” yet it reauthorized the program. 
In her excellent analysis of the decision, Marcy Wheeler lists out the agency excuses that the Court accepted:

  • It took time for them to make the changes in their systems
  • It took time to train everyone
  • Once everyone got trained, they all got sent home for COVID
  • Given mandatory training, personnel “should be aware” of the requirements, even if actual practice demonstrates they’re not
  • FBI doesn’t do that many field reviews
  • Evidence of violations is not sufficient evidence to find that the program inadequately protects privacy
  • The opt-out system for FISA material — which is very similar to one governing the phone and Internet dragnet at NSA until 2011, which also failed to do its job — failed to do its job
  • The FBI has always provided national security justifications for a series of violations involving their tracking system where an Agent didn’t originally claim one
  • Bulk queries have operated like that since November 2019
  • He’s concerned but will require more reporting

And the dog also ate their homework. While more reporting sounds nice, that’s the same thing the court ordered the last time, and the time before that. Reporting of problems should lead to something actually being done to stop the problems. At this point, it’s just embarrassing. A federal court would accept no such tomfoolery from an impoverished criminal defendant facing years in prison. Yet the FISC is perfectly willing to sign off on the FBI and NSA failures and the agencies’ flagrant disregard of its own rulings for year upon year. Not all FISC decisions are disappointing. In 2017, we were heartened that another FISC judge had been so fed up that the court issued requirements that led to the end of the “about” searching of collected upstream data and even its partial destruction. And the extra reporting requirements do give us at least a glimpse into how bad things are that we wouldn’t otherwise have. But this time the FISC has let us all down again. 
It’s time for the judiciary, whether a part of the FISC or not, to inoculate themselves against the problem of throwing out the Fourth Amendment whenever the Executive Branch invokes national security, particularly when the constitutional violations are so flagrant, long-standing and pervasive. The judiciary needs to recognize mass spying as unconstitutional and stop what remains of it. Americans deserve better than this charade of oversight.  Related Cases: Jewel v. NSA

  • The Florida Deplatforming Law is Unconstitutional. Always has Been.
    by Kurt Opsahl on May 5, 2021 at 9:09 pm

    Last week, the Florida Legislature passed a bill prohibiting social media platforms from “knowingly deplatforming” a candidate (the Transparency in Technology Act, SB 7072), on pain of a fine of up to $250k per day, unless, I kid you not, the platform owns a sufficiently large theme park. Governor DeSantis is expected to sign it into law, as he has called for laws like this, citing the de-platforming of Donald Trump from social media as an example of the political bias of what he called “oligarchs in Silicon Valley.” The law is not just about candidates; it also bans “shadow-banning” and cancels cancel culture by prohibiting the censoring of “journalistic enterprises,” with “censorship” including things like posting “an addendum” to the content, i.e., fact checks. This law, like similar previous efforts, is mostly performative, as it almost certainly will be found unconstitutional. Indeed, the parallels with a nearly 50-year-old compelled speech precedent are uncanny. In 1974, in Miami Herald Publishing Co. v. Tornillo, the Supreme Court struck down another Florida statute that attempted to compel the publication of candidate speech.

50 Years Ago, Florida’s Similar “Right of Reply” Law Was Found Unconstitutional

At the time, Florida had a dusty “right of reply” law on the books, which had not really been used, giving candidates the right to demand that any newspaper that criticized them print their reply to the newspaper’s charges, at no cost. The Miami Herald had criticized Florida House candidate Pat Tornillo and refused to carry Tornillo’s reply. Tornillo sued. Tornillo lost at the trial court, but found some solace on appeal to the Florida Supreme Court. The Florida high court held that the law was constitutional, writing that the “statute enhances rather than abridges freedom of speech and press protected by the First Amendment,” much like the proponents of today’s new law argue. So off the case went to the US Supreme Court. 
Proponents of the right of reply raised the same arguments used today—that government action was needed to ensure fairness and accuracy, because “the ‘marketplace of ideas’ is today a monopoly controlled by the owners of the market.” Like today, the proponents argued that new technology changed everything. As the Court acknowledged in 1974, “[i]n the past half century a communications revolution has seen the introduction of radio and television into our lives, the promise of a global community through the use of communications satellites, and the specter of a ‘wired’ nation by means of an expanding cable television network with two-way capabilities.” Today, you might say that a wired nation with two-way communications has arrived in the global community, but you can’t say the Court didn’t consider this concern. The Court also accepted that the consolidation of major media meant “the dominant features of a press that has become noncompetitive and enormously powerful and influential in its capacity to manipulate popular opinion and change the course of events,” and acknowledged the development of what the court called “advocacy journalism,” eerily similar to the arguments raised today. Paraphrasing the arguments made in favor of the law, the Court wrote: “The abuses of bias and manipulative reportage are, likewise, said to be the result of the vast accumulations of unreviewable power in the modern media empires. In effect, it is claimed, the public has lost any ability to respond or to contribute in a meaningful way to the debate on issues,” just like today’s proponents of the Transparency in Technology Act. The Court was not swayed, not because it dismissed the issue, but because government coercion could not be the answer. 
“However much validity may be found in these arguments, at each point the implementation of a remedy such as an enforceable right of access necessarily calls for some mechanism, either governmental or consensual. If it is governmental coercion, this at once brings about a confrontation with the express provisions of the First Amendment.” There is much to dislike about content moderation practices, but giving the government more control is not the answer. Even if one should decry the lack of responsibility of the media, the Court recognized that “press responsibility is not mandated by the Constitution and like many other virtues it cannot be legislated.” Accordingly, Miami Herald v. Tornillo reversed the Florida Supreme Court and held the Florida statute compelling publication of candidates’ replies unconstitutional. Since Tornillo, courts have consistently applied it as binding precedent, including applying Tornillo to social media and internet search engines, the very targets of the Transparency in Technology Act (unless they own a theme park). Indeed, the compelled speech doctrine has even been used to strike down other attempts to counter perceived censorship of conservative speakers.1 Given the strong parallels with Tornillo, you might wonder why the Florida Legislature would pass a law doomed to failure, costing the state the time and expense of defending it in court. Politics, of course. The legislators who passed this bill probably knew it was unconstitutional, but may have seen political value in passing the base-pleasing statute and blaming the courts when it gets struck down. Politics is also the reason for the much-ridiculed exception for theme park owners, and it’s actually a problem for the law itself. As the Supreme Court explained in Florida Star v. B.J.F., carve-outs like this make the bill even more susceptible to a First Amendment challenge as under-inclusive. 
Theme parks are big business in Florida, and the law’s definition of social media platform would otherwise fit Comcast (which owns Universal Studios’ theme parks), Disney, and even Legoland. Performative legislation is less politically useful if it attacks a key employer and economic driver of your state. The theme park exception has also raised all sorts of amusing possibilities for the big internet companies to address this law by simply purchasing a theme park, which could easily be less expensive than compliance, even with the minimum 25 acres and 1 million visitors per year. Much as Section 230 Land would be high on my own must-visit list, striking the law down is the better solution.

The Control that Large Internet Companies Have on our Public Conversations Is an Important Policy Issue

The law is bad, and the legislature should feel bad for passing it, but this does not mean that the control that the large internet companies have on our public conversations isn’t an important policy issue. As we have explained to courts considering the broader issue, if a candidate for office is suspended or banned from social media during an election, the public needs to know why, and the candidate needs a process to appeal the decision. And this is not just for politicians – more often it is marginalized communities that bear the brunt of bad content moderation decisions. It is critical that the social platform companies provide transparency, accountability, and meaningful due process to all impacted speakers, in the US and around the globe, and ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of all users’ rights. 
This is why EFF and a wide range of non-profit organizations in the internet space worked together to develop the Santa Clara Principles, which call upon social media companies to (1) publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines; (2) provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension; and (3) provide a meaningful opportunity for timely appeal of any content removal or account suspension.

1. Provisions like the Transparency in Technology Act’s ban on addendums to posts (such as fact checks or links to authoritative sources) are not covered by the compelled speech doctrine, but rather fail as prior restraints on speech. We need not spend much time on that, as the Supreme Court has roundly rejected prior restraint.

  • Facebook Oversight Board Affirms Trump Suspension — For Now
    by Corynne McSherry on May 5, 2021 at 3:48 pm

    Today’s decision from the Facebook Oversight Board regarding the suspension of President Trump’s account — to extend the suspension for six months and require Facebook to reevaluate in light of the platform’s stated policies — may be frustrating to those who had hoped for a definitive ruling. But it is also a careful and needed indictment of Facebook’s opaque and inconsistent moderation approach that offers several recommendations to help Facebook do better, focused especially on consistency and transparency. Consistency and transparency should be the hallmarks of all content decisions; too often, neither hallmark is met. Perhaps most importantly, the Board affirms that it cannot and should not allow Facebook to avoid its responsibilities to its users. We agree. The decision is long, detailed, and worth careful review. In the meantime, here’s our top-level breakdown. First, while the Oversight Board rightly refused to make special rules for politicians, rules we have previously opposed, it did endorse special rules and procedures for “influential users” and newsworthy posts. These rules recognize that some users can cause greater harm than others. On a practical level, every decision to remove a post or suspend an account is highly contextual and often requires highly specific cultural competency. But we agree that special rules for influential users or highly newsworthy content require even greater transparency and the investment of substantial resources. Specifically, the Oversight Board explains that Facebook needs to document all of these special decisions well; clearly explain how any newsworthiness allowance applies to influential accounts; clearly explain how it cross-checks such decisions, including its rationale, standards, and processes of review; and explain the criteria for determining which pages to include. 
And Facebook should report error rates and thematic consistency of determinations as compared with its ordinary enforcement procedures. More broadly, the Oversight Board also correctly notes that Facebook’s penalty system is unclear, and that Facebook must better explain its strikes and penalties process and inform users of strikes and penalties levied against them. We wholeheartedly agree, as the Oversight Board emphasized, that “restrictions on speech are often imposed by or at the behest of powerful state actors against dissenting voices and members of political oppositions” and that “Facebook must resist pressure from governments to silence their political opposition.” The Oversight Board urged Facebook to treat such requests with special care. We would have also required that all such requests be publicly reported. The Oversight Board also correctly noted the need for Facebook to collect and preserve removed posts. Such posts are important for preserving the historical record, as well as for human rights reporting, investigations, and accountability. While today’s decision reflects a notable effort to apply an international human rights framework, we continue to be concerned that an Oversight Board that is US-focused in its composition is not best positioned to help Facebook do better. But the Oversight Board did recognize the international dimension of the issues it confronts, and endorsed the Rabat Plan of Action, from the United Nations Office of the High Commissioner for Human Rights, as a framework for assessing the removal of posts that may incite hostility or violence. It specifically did not apply the First Amendment, even though the events leading to the decision were focused in the US. Overall, these are good recommendations, and we will be watching to see if Facebook takes them seriously. And we appreciate the Oversight Board’s refusal to make Facebook’s tough decisions for it. 
If anything, though, today’s decision affirms, once again, that no amount of “oversight” can fix the underlying problem: Content moderation is extremely difficult to get right, particularly at Facebook scale.

  • Proposed New Internet Law in Mauritius Raises Serious Human Rights Concerns
    by Jillian C. York on April 30, 2021 at 7:30 pm

    As debate continues in the U.S. and Europe over how to regulate social media, a number of countries—such as India and Turkey—have imposed stringent rules that threaten free speech, while others, such as Indonesia, are considering them. Now, a new proposal to amend Mauritius’ Information and Communications Technologies Act (ICTA) with provisions to install a proxy server to intercept otherwise secure communications raises serious concerns about freedom of expression in the country. Mauritius, a democratic parliamentary republic with a population just over 1.2 million, has an Internet penetration rate of roughly 68% and a high rate of social media use. The country’s Constitution guarantees the right to freedom of expression but, in recent years, advocates have observed a backslide in online freedoms. In 2018, the government amended the ICTA, imposing heavy sentences—as high as ten years in prison—for online messages that “inconvenience” the receiver or reader. The amendment was in turn utilized to file complaints against journalists and media outlets in 2019. In 2020, as COVID-19 hit the country, the government levied a tax on digital services operating  in the country, defined as any service supplied by “a foreign supplier over the internet or an electronic network which is reliant on the internet; or by a foreign supplier and is dependent on information technology for its supply.” The latest proposal to amend the ICTA has raised alarm bells amongst local and international free expression advocates, as it would enable government officials who have established instances of “abuse and misuse” to block social media accounts and track down users using their IP addresses. The amendments are reminiscent of those in India and Turkey in that they seek to regulate foreign social media, but differ in that Mauritius—a far smaller country—lacks the ability to force foreign companies to maintain a local presence. 
In a consultation paper on the amendments, proponents argue: “Legal provisions prove to be relatively effective only in countries where social media platforms have regional offices. Such is not the case for Mauritius. The only practical solution in the local context would be the implementation of a regulatory and operational framework which not only provides for a legal solution to the problem of harmful and illegal online content but also provides for the necessary technical enforcement measures required to handle this issue effectively in a fair, expeditious, autonomous and independent manner.” While some of the concerns raised in the paper—such as the fact that social media companies do not sufficiently moderate content in the country’s local language—are valid, the solutions proposed are disproportionate. A petition calling on local and international supporters to oppose the amendments notes that “Whether human … or AI, the system that will monitor, flag and remove information shared by users will necessarily suffer from conscious or unconscious bias. These biases will either be built into the algorithm itself, or will afflict those who operate the system.” Most concerning, however, is that authorities wish to install a local proxy server that impersonates social media networks to fool devices and web browsers into sending secure information to the local server instead of to the social media networks, effectively creating an archive of the social media information of all users in Mauritius before resending it to the social media networks’ servers. The plan fails to mention how long the information will be archived, or how user data will be protected from data breaches. 
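Because the proposed proxy must impersonate the social media networks’ servers, it has to present its own TLS certificate rather than the genuine one. As a minimal sketch of why such interception is detectable (our illustration, not part of the proposal; the certificate bytes below are invented stand-ins for real DER-encoded certificates), certificate pinning compares the fingerprint of whatever certificate a connection presents against a known-good value:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a (stand-in) DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def looks_intercepted(presented_cert: bytes, pinned_fp: str) -> bool:
    """An impersonating proxy cannot present the genuine certificate,
    so a fingerprint mismatch is strong evidence of interception."""
    return fingerprint(presented_cert) != pinned_fp

# Invented certificate bytes, for illustration only.
genuine = b"genuine-server-certificate"
pinned = fingerprint(genuine)

assert not looks_intercepted(genuine, pinned)           # direct connection
assert looks_intercepted(b"proxy-certificate", pinned)  # proxy detected
```

Mobile apps that pin certificates this way would refuse to talk through such a proxy at all, which is one reason interception schemes like this tend to break services outright rather than silently observe them.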
Local free expression advocates are calling on the ICTA authorities to “concentrate their efforts in ethically addressing concerns made by citizens on posts that already exist and which have been deemed harmful.” Supporters are encouraged to sign the petition or submit comments to the open consultation by emailing [email protected] before May 5, 2021.

  • Tell Congress: Support the Fourth Amendment Is Not For Sale Act
    by Matthew Guariglia on April 30, 2021 at 6:14 pm

    Every day, your personal information is being harvested by your smartphone applications, sold to data brokers, and used by advertisers hoping to sell you things. But what safeguards prevent the government from shopping in that same data marketplace? Mobile data regularly bought and sold, like your geolocation, is information that law enforcement or intelligence agencies would normally have to get a warrant to acquire. But these data brokers don’t ask for a warrant. The U.S. government has been using its purchase of this information as a loophole for acquiring personal information on individuals without a warrant. Now is the time to close that loophole. EFF is launching a campaign in support of the Fourth Amendment Is Not For Sale Act, H.R. 2738 and S. 1265. This legislation prevents the government from purchasing information it would otherwise need a warrant to acquire. Tell your senators and representatives that this bill must be passed!

TAKE ACTION: Tell Congress the Fourth Amendment is not for sale

We first wrote about the need for legislation like this in December 2020, after a troubling article in Motherboard. It reported that a Muslim prayer app (Muslim Pro), a Muslim dating app (Muslim Mingle), and many other popular apps had been selling geolocation data about their users to a company called X-Mode, which in turn provided this data to the U.S. military through defense contractors. This violates the First and Fourth Amendments. Just because your phone apps know where you are does not mean the government should, too. The invasive marketplace for your data needs to be tamed by privacy legislation, not used by the government as an end-run around the warrant requirement. The Supreme Court has decided that our detailed location data is so revealing about our activities and associations that law enforcement must get a warrant in order to acquire it. 
Government purchase of location data also threatens to chill people’s willingness to participate in protests in public places, associate with who they want, or practice their religion. History and legal precedent teach us that when the government indiscriminately collects records of First Amendment activities, it can lead to retaliation or further surveillance. TAKE ACTION: TELL CONGRESS THE FOURTH AMENDMENT IS NOT FOR SALE. You can read the full text of the bill below:

  • Brazil’s Bill Repealing National Security Law Has its Own Threats to Free Expression
    by Veridiana Alimonti on April 30, 2021 at 3:23 pm

    The Brazilian Chamber of Deputies is on track to approve a law that threatens freedom of expression and the right to assemble and protest, with the stated aim of defending the democratic constitutional state. Bill 6764/02 repeals the Brazilian National Security Law (Lei de Segurança Nacional), one of the ominous legacies of the country’s dictatorship, which lasted until 1985. Although there’s broad consensus over the harm the National Security Law represents, Brazilian civil groups have stressed that replacing it with a new act without careful discussion of its grounds, principles, and specific rules risks rebuilding a framework that serves repressive rather than democratic ends. The Brazilian National Security Law has a track record of abuses in persecuting and silencing dissent, with vague criminal offenses and provisions targeting speech. After a relatively dormant period, it gained new prominence during President Bolsonaro’s administration. It has served as a legal basis for accusations against opposition leaders, critics, journalists, and even a congressman aligned with Bolsonaro in the country’s current turbulent political landscape. However, its proposed replacement, Bill 6764/02, raises various concerns, some particularly unsettling for digital rights. Even with alternative drafts trying to untangle them, problems remain. First, the espionage offense in the bill makes the handover of secret documents to foreign governments a crime. It’s crucial that this and related offenses not apply in ways that would raise serious human rights concerns: to whistleblowers revealing facts or acts that could imply the violation of human rights, crimes committed by government officials, or other serious wrongdoing affecting public administration; or to journalistic and investigative reporting and the work of civil groups and activists that bring to light governments’ unlawful practices and abuses. 
These acts should be clearly exempted from the offense. Amendments under discussion seek to address these concerns, but there’s no assurance they will prevail in the final text if the new law is approved. The IACHR’s Freedom of Expression Rapporteur has highlighted how often governments in Latin America classify information on national security grounds without proper assessment and substantiation. The report provides a number of examples in the region of the hurdles this represents to accessing information related to human rights violations and government surveillance. The IACHR Rapporteur stresses the key role of investigative journalists, the protection of their sources, and the need to grant legal backing against reprisal to whistleblowers who expose human rights violations and other wrongdoing. This aligns with the UN Freedom of Expression Rapporteur’s previous recommendations and reinforces the close relationship between democracy and strong safeguards for those who take a stand by unveiling sensitive public interest information. As the UN High Commissioner for Human Rights has already pointed out: The right to privacy, the right to access to information and freedom of expression are closely linked. The public has the democratic right to take part in the public affairs and this right cannot be effectively exercised by solely relying on authorized information. Second, the proposal also aims to tackle “fake news” by making “mass misleading communication” a crime against democratic institutions. Although the bill should be strictly tailored to counter exceptionally serious threats, bringing disinformation into its scope instead potentially targets millions of Internet users. Disseminating “facts the person know is untrue” that could put at risk “the health of the electoral process” or “the free exercise of constitutional powers,” using “means not provided by the private messaging application,” could lead to up to five years’ jail time. 
We agree with the digital rights groups on the ground that have stressed the provision’s harmful implications for users’ freedom of expression. Criminalizing the spread of disinformation is full of traps. It criminalizes speech by relying on vague terms (as in this bill) that are easily twisted to stifle critical voices and those challenging entrenched political power. Joint declarations of the Freedom of Expression Rapporteurs have repeatedly urged States not to take that road. Moreover, the provision applies when such messages are distributed using “means not provided by the application.” Presuming that the use of such means is inherently malicious poses a major threat to interoperability. The technical ability to plug one product or service into another product or service, even when one service provider hasn’t authorized that use, has been a key driver of competition and innovation. And dominant companies repeatedly abuse legal protections to ward off and try to punish competitors. This is not to say we do not care about the malicious spread of disinformation at scale. But it should not be part of this bill, given the bill’s specific scope, nor should it be addressed without careful attention to unintended consequences. There’s an ongoing debate, and there are other avenues to pursue that are aligned with fundamental rights and rely on joint efforts from the public and private sectors. Political pressure has hastened the bill’s vote. Bill 6764/02 may pass in a few days in the Chamber of Deputies, pending the Senate’s approval. We join civil and digital rights groups in warning that a rushed approach actually creates greater risks for what the bill is supposed to protect. These and other troubling provisions put freedom of expression on the spot and also serve to spur government surveillance and repression. These risks are what the defense of democracy should fend off, not reiterate. 

  • EFF at 30: Protecting Free Speech, with Senator Ron Wyden
    by Jason Kelley on April 29, 2021 at 7:20 pm

    To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web. To celebrate 30 years of defending online freedom, EFF was proud to welcome Senator Ron Wyden as our second special guest in EFF’s yearlong Fireside Chat series. Senator Wyden is a longtime supporter of digital rights, and as co-author of Section 230, one of the key pieces of legislation protecting speech online, he’s a well-recognized champion of free speech. EFF’s Legal Director, Dr. Corynne McSherry, spoke with the senator about the fight to protect free expression and how Section 230, despite recent attacks, is still the “single best law for small businesses and single best law for free speech.” He also answered questions from the audience about some of the hot topics that have swirled around the legislation for the last few years.  You can watch the full conversation here or read the transcript. On May 5, we’ll be holding our third EFF30 Fireside Chat, on surveillance, with special guest Edward Snowden. He will be joined by EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia as they weigh in on surveillance in modern culture, activism, and the future of privacy.  RSVP NOW Section 230 and Social Movements Senator Wyden began the fireside chat with a reminder that some of the most important, and often divisive, social issues of the last few years, from #BlackLivesMatter to the #MeToo movement, would likely be censored much more heavily on platforms without Section 230. That’s because the law gives platforms both the power to moderate as they see fit, and partial immunity from liability for what’s posted on those sites, making the speech the legal responsibility of the original speaker. 
Section 230…has always been for the person who doesn’t have deep pockets

The First Amendment protects most speech online, but without Section 230, many platforms would be unable to host much of this important, but controversial speech because they would be stuck in litigation far more often. Section 230 has been essential for those who “don’t own their own TV stations” and others “without deep pockets” for getting their messages online, Wyden explained. Wyden also discussed the history of Section 230, which was passed in 1996. “[Senator Chris Cox] and I wanted to make sure that innovators and creators and people who had promising ideas and wanted to know how they were going to get them out – we wanted to make sure that this new concept known as the internet could facilitate that.”

Misconceptions Around Section 230

Wyden took aim at several of the misconceptions around 230, like the fact that the law is a benefit only for Big Tech. “One of the things that makes me angry…the one [idea] that really infuriates me, is that Section 230 is some kind of windfall for Big Tech. The fact of the matter is Big Tech’s got so much money that they can buy themselves out of any kind of legal scrape. We sure learned that when the first bill to start unraveling Section 230 passed, called SESTA/FOSTA.” Another common misunderstanding around the law is that it mandates platforms to be “neutral.” This couldn’t be further from the truth, Wyden explained: “There’s not a single word in Section 230 that requires neutrality….The point was essentially to let ‘lots of flowers bloom.’ If you want to have a conservative platform, more power to you…If you want to have a progressive platform, more power to you.”
How to Think About Changes to Intermediary Liability Laws

All the positive benefit for online speech that Section 230 allows doesn’t mean that Section 230 is perfect, however. But before making changes to the law, Wyden suggested, “There ought to be some basic fact finding before the Congress just jumps in to making sweeping changes to speech online.” EFF Legal Director Corynne McSherry agreed wholeheartedly: “We need that fact-finding so that we make smart technology policy,” adding that we need go no further than our experience with SESTA/FOSTA and its collateral damage to prove this point. There are other ways to improve the online ecosystem as well. Asked for his thoughts on better ways to address problems, Senator Wyden was blunt: “The first thing we ought to do is tackle the incredible abuses in the privacy area. Every other week in this country Americans learn about what amounts to yet another privacy disaster.” Another area where we can improve the online ecosystem is in data sales and collection. Wyden recently introduced a bill, “The Fourth Amendment Is Not For Sale,” that will help rein in the problem of apps and commercial data brokers selling things like user location data. To wrap up the discussion, Senator Wyden took some questions about potential changes to Section 230. He lambasted SESTA/FOSTA, which EFF is challenging in court on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, as an example of a poorly guided amendment. 
Senator Wyden pointed out that every time a proposal to amend the law comes up, there should be a rubric of several questions asked about how the change would work, and what impact it would have on users. (EFF has its own rubric for laws that would affect intermediary liability for just these purposes.) We thank Senator Wyden for joining us to discuss free speech, Section 230, and the battle for digital rights. Please join us in the continuation of this fireside chat series on May 5 as we discuss surveillance with whistleblower Edward Snowden.

  • Apple’s AppTrackingTransparency is Upending Mobile Phone Tracking
    by Gennie Gebhart on April 27, 2021 at 3:49 pm

    Apple’s long-awaited privacy update for iOS is out, and it’s a solid step in the right direction. With the launch of iOS 14.5, hundreds of millions of iPhone users will now interact with Apple’s new AppTrackingTransparency feature. Allowing users to choose what third-party tracking they will or will not tolerate, and forcing apps to request those permissions, gives users more knowledge of what apps are doing, helps protect users from abuse, and allows them to make the best decisions for themselves. In short, AppTrackingTransparency (or ATT) means that apps are now required to ask your permission if they want to track you and your activity across other apps. The kind of consent interface that ATT offers is not new, and it’s similar to the interfaces for other permissions that mobile users are already accustomed to (e.g., when an app requests access to your microphone, camera, or location). It’s normal for apps to be required to request the user’s permission for access to specific device functions or data, and third-party tracking should be no different. You can mark your ATT preferences app by app, or set it overall for all apps.  Much of ATT revolves around your iPhone’s IDFA, or “ID for advertisers.” This 16-byte string of numbers and letters is like a license plate for your iPhone. (Google has the same kind of identifier for Android, called the Android Ad ID; these identifiers are referred to collectively as “ad IDs.”) Previously, you could opt out of IDFA’s always-on surveillance deep in the settings of your iPhone; now, ATT means that IDFA settings are more visible, opt-in, and per app.  The main feature of ATT is the technical control on IDFA, but the framework will regulate other kinds of tracking, too: if an app does not have your permission to “track” you, it is also not allowed to use identifiers like your phone number, for example, to do so. Presumably, this policy-level feature will depend on Apple’s app store review process to be effective. 
Ad IDs are often compared to cookies, their tracker-enabling partner on the Web. But there’s a key difference: cookies were designed for, and continue to support, a wide range of user-friendly features. Cookies are the reason you don’t have to log in every time you visit a website, and why your shopping cart doesn’t empty if you leave a website in the middle of a visit.  Ad IDs, on the other hand, were designed for one purpose and one purpose only: to let third parties track you. Ad IDs were created so that advertisers could access global, persistent identifiers for users without using the IMEI number or MAC address baked into phone hardware, with absolutely no pretense of user-friendliness or “shopping cart” use-case. Simply put: this feature on your phone has never worked in your favor. That’s why we applaud Apple’s efforts to give users more visible and granular choices to turn it off, and in particular ATT’s new requirement that app developers must ask for explicit permission to engage in this kind of tracking. ATT is only a first step, and it has its weaknesses. It doesn’t do anything about “first-party” tracking, or an app tracking your behavior on that app itself. ATT might also be prone to “notification fatigue” if users become so accustomed to seeing it that they just click through it without considering the choice. And, just like any other tracker-blocking initiative, ATT may set off a new round in the cat-and-mouse game between trackers and those who wish to limit them: if advertisers and data brokers see the writing on the wall that IDFA and other individual identifiers are no longer useful for tracking iPhone users, they may go back to the drawing board and find sneakier, harder-to-block tracking methods. ATT is unlikely to wipe out nonconsensual tracking in one fell swoop. But moving from a world in which tracking-by-default was sanctioned and enabled by Apple, to one where trackers must actively defy the tech giant, is a big step forward. 
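To make the ad-ID mechanics above concrete, here is a small Python sketch. It is purely illustrative, not Apple’s or Google’s actual implementation: it models a persistent per-device identifier (ad IDs are UUIDs under the hood) and the iOS convention of returning an all-zero IDFA once a user has opted out of tracking.

```python
import uuid

class AdIdStore:
    """Illustrative model of ad-ID behavior (not Apple's or Google's code)."""

    ZEROED = uuid.UUID(int=0)  # the all-zero ID apps see after an opt-out

    def __init__(self):
        self._id = uuid.uuid4()      # random, but persistent across app launches
        self.tracking_allowed = True

    @property
    def identifier(self):
        # Every app on the "device" sees the same stable value -> cross-app
        # tracking works. After opt-out, apps get a useless zeroed ID instead.
        return self._id if self.tracking_allowed else self.ZEROED

store = AdIdStore()
print(store.identifier)      # same value on every call while tracking is on
store.tracking_allowed = False
print(store.identifier)      # prints 00000000-0000-0000-0000-000000000000
```

The stability of the identifier is exactly what makes it a “license plate”: any two apps (or data brokers they sell to) that see the same UUID know they are looking at the same person.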
Apple is already pushing against the tide by proposing even this modest reform. Its decision to give users a choice to not be tracked has triggered a wave of melodramatic indignation from the tracking industry. In unraveling a tracking knot of its own creation, Apple has picked a fight with some of the most powerful companies and governments in the world. Looking ahead, the mobile operating system market is essentially a duopoly, and Google controls the larger part of the -opoly. While Apple pushes through new privacy measures like ATT, Google has left its own Ad ID alone. Of the two, Apple is undoubtedly doing more to rein in the privacy abuses of advertising technology. Nearly every criticism that can be made about the state of privacy on iOS goes double for Android. Your move, Google.

  • Here Are 458 California Law Enforcement Agencies’ Policy Documents All in One Place
    by Dave Maass on April 26, 2021 at 9:02 pm

    Dylan Kubeny, a student at the Reynolds School of Journalism at the University of Nevada, Reno, served as the primary data hunter and co-author on this project.  At this moment in history, law enforcement agencies in the United States face a long-overdue reevaluation of their priorities, practices, and processes for holding police officers accountable for both unconscious biases and overt abuse of power.  But any examination of law enforcement requires transparency first: the public’s ability to examine what those priorities, practices, and processes are. While police are charged with enforcing the law, they too have their own rules to follow, and too often, those rules are opaque to the public. An imbalance in access to information is an imbalance of power.  Today, EFF, in partnership with Stanford Libraries’ Systemic Racism Tracker project, is releasing a data set with links to 458 policy manuals from California law enforcement agencies, including most police departments and sheriff’s offices, and some district attorney offices, school district police departments, and university public safety departments. This data set represents our first attempt to aggregate these policy documents following the passage of S.B. 978, a state law that requires local law enforcement agencies to publish this information online.  These policy manuals cover everything from administrative duties and record keeping to the use of force and the deployment of surveillance technologies. These documents reveal police officers’ responsibilities and requirements, but they also expose shortcomings, including an overreliance on boilerplate policies generated by a private company.  Download the data set as a CSV file, or scroll to the bottom to find a catalog of links.  Until a few years ago, many law enforcement agencies in California were reluctant to share their policy documents with the public. 
While a handful of agencies voluntarily chose to post these records online, the most reliable way to obtain these records was through the California Public Records Act (CPRA), which creates the legal right for everyday people to request information from the government. Most people don’t know they have this power, and even fewer know how to exercise it effectively.  To make these police records more accessible, California State Sen. Steven Bradford sponsored S.B. 978, which says all local law enforcement agencies “shall conspicuously post on their Internet Web sites all current standards, policies, practices, operating procedures, and education and training materials that would otherwise be available to the public if a request was made pursuant to the California Public Records Act.”  The requirement became fully effective in January 2020, and now the public can visit individual websites to find links to these documents. However, despite the requirement that these records be posted “conspicuously,” the links can often be challenging to find. With our new data set, the public now has access to a catalog of hundreds of currently available documents in one place.  EFF supported S.B. 978’s passage back in 2018 to increase government transparency through internet technology. We are currently collaborating with the Reynolds School of Journalism at the University of Nevada, Reno, to aggregate these policies. Stanford Libraries is using these records to build the Systemic Racism Tracker (SRT), a searchable database that harvests data about institutional practices that harm communities of color. The SRT’s goal is to serve as a growing collection of references, documents, and data to support research and education about systemic racism. The SRT also aims to empower people to take action against harmful practices by knowing their rights and identifying, appraising, and connecting with government agencies, non-profit organizations, and grassroots groups that address racism. 
“In order to understand, interrogate and work towards changing the very structures of systemic racism in policing, it is vital that we collect both current and historical policy and training manuals,” said Felicia Smith, head of Stanford Libraries Learning and Outreach, who created the SRT project. Although this data set is but the first step in a longer-term project, several elements of concern emerged in our initial analysis. First and foremost, the most conspicuous pattern in these policies is the connection to Lexipol, a private company that sells boilerplate policies and training materials to law enforcement agencies. Over and over again, the police policies were formatted the same, used identical language, and included a copyright mark from this company.  Lexipol has come under fire for writing policies that are too vague or permissive and that differ significantly from best practices. More often than not, rather than drafting policies tailored to the individual agency, these agencies simply copied and pasted the standard Lexipol policy. Mother Jones reported that 95% of agencies in California purchased policies or training materials from Lexipol. Our data showed that at least 379 agencies published policies from Lexipol.  This raises questions about whether police are soliciting guidance from the community or policymakers, or are simply accepting the recommendations of a private company that is not accountable to the public.  In addition, we made the following findings:  Although most agencies complied with S.B. 978 and posted at least some materials online, many agencies had still failed to take action even a year after the law took effect. In those cases, we filed CPRA requests for the records and requested they be posted on their websites. In some instances the agencies followed through, but we are still waiting on some entities, such as the Bell Police Department and the Crescent City Police Department, to upload their records.  
While most agencies complied with the requirement to post policies online, only a portion published training materials. In some cases, agencies published only the training session outlines and not the actual training presentations. Link rot undermines transparency: as we conducted our research over just a few months, URLs for policies would change or disappear as agencies updated their policies or relaunched their websites. That is one reason we include archived links in this data set.  In the coming months, Stanford Libraries aims to introduce a more robust tool that will allow for searching policies across departments and archiving policy changes over time. In the interim, this data set brings the public one step closer to understanding police practices and to holding law enforcement agencies accountable.

SB 978 Policy and Training Catalog

The table below contains links to the SB 978 materials made available by local law enforcement agencies across California. There is little to no consistency across agencies in how this information is published online. Below you will find links to the primary page where a user would find links to SB 978 documents. In some cases, this may just be the agency’s home page, which includes an SB 978 link in the sidebar. Because we have found that these links break quite often, we have also included an archived version of each link through the Internet Archive’s Wayback Machine. We have also included direct links to the policies and training materials; however, in many cases this is the same link as the primary page.  We used the California Commission on Peace Officer Standards and Training’s list of California law enforcement agencies to prioritize municipal police, sheriff’s offices, university and school district police, and district attorneys in our data collection. Future research will cover other forms of local law enforcement. Download the data set as a CSV file. 
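Once downloaded, the data set lends itself to quick tallies with Python’s standard csv module. A minimal sketch follows; the column names ("Agency", "Policies", "Training") and the inline sample rows are assumptions standing in for the real file, whose headers may differ:

```python
import csv
import io

# Hypothetical sample standing in for the real SB 978 data set; the column
# names here are assumptions, not necessarily the file's actual headers.
sample_csv = """Agency,Policies,Training
Alameda County Sheriff's Office,Policy Docs,Training Docs
Alameda County District Attorney,Policy Docs,Not Available
Albany Police Department,Policy Docs,Training Docs
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
missing_training = [r["Agency"] for r in rows if r["Training"] == "Not Available"]

print(f"{len(rows)} agencies, {len(missing_training)} without training materials")
# With the real download, replace the StringIO with open("sb978.csv").
```

The same loop generalizes to the questions raised above, such as counting how many agencies published Lexipol-formatted policies, if the data set flags them.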
Primary Law Enforcement Agency Page | Archived Link | Policies | Training Materials
Alameda County District Attorney | Archived Link | Policy Docs | Not Available
Alameda County Sheriff’s Office | Archived Link | Policy Docs | Training Docs
Alameda Police Department | Archived Link | Policy Docs | Training Docs, 2, 3, 4
Albany Police Department | Archived Link | Policy Docs | Training Docs
Alhambra Police Department | Archived Link | Policy Docs | Training Docs
Alpine County Sheriff’s Department | Archived Link | Policy Docs | Not Available
Alturas Police Department | Archived Link | Policy Docs | Not Available
Amador County Sheriff’s Department | Archived Link | Policy Docs | Not Available
American River College Police Department | Archived Link | Policy Docs | Not Available
Anaheim Police Department | Archived Link | Policy Docs | Not Available
Anderson Police Department | Archived Link | Policy Docs | Training Docs
Angels Camp Police Department | Archived Link | Policy Docs | Not Available
Antioch Police Department | Archived Link | Policy Docs | Training Docs
Apple Valley Unified School District Police Department | Archived Link | Policy Docs | Not Available
Arcadia Police Department | Archived Link | Policy Docs | Not Available
Arcata Police Department | Archived Link | Policy Docs | Not Available
Arroyo Grande Police Department | Archived Link | Policy Docs | Training Docs
Arvin Police Department | Archived Link | Policy Docs | Not Available
Atascadero Police Department | Archived Link | Policy Docs | Not Available
Atherton Police Department | Archived Link | Policy Docs | Training Docs
Atwater Police Department | Archived Link | Policy Docs | Not Available
Auburn Police Department | Archived Link | Policy Docs | Not Available
Avenal Police Department | Archived Link | Policy Docs | Not Available
Azusa Police Department | Archived Link | Policy Docs | Not Available
Bakersfield Police Department | Archived Link | Policy Docs | Not Available
Banning Police Department | Archived Link | Policy Docs | Not Available
Barstow Police Department | Archived Link | Policy Docs | Not Available
Bay Area Rapid Transit Police Department | Archived Link | Policy Docs | Training Docs
Bear Valley Police Department | Archived Link | Policy Docs | Not Available
Beaumont Police Department | Archived Link | Policy Docs | Not Available
Bell Gardens Police Department | Archived Link | Policy Docs | Not Available
Belmont Police Department | Archived Link | Policy Docs | Training Docs
Belvedere Police Department | Archived Link | Policy Docs | Not Available
Benicia Police Department | Archived Link | Policy Docs | Training Docs, 2, 3
Berkeley Police Department | Archived Link | Policy Docs | Training Docs
Beverly Hills Police Department | Archived Link | Policy Docs | Not Available
Blythe Police Department | Archived Link | Policy Docs | Not Available
Brawley Police Department | Archived Link | Policy Docs | Not Available
Brea Police Department | Archived Link | Policy Docs | Training Docs
Brentwood Police Department | Archived Link | Policy Docs | Not Available
Brisbane Police Department | Archived Link | Policy Docs | Not Available
Broadmoor Police Department | Archived Link | Policy Docs | Not Available
Buena Park Police Department | Archived Link | Policy Docs | Training Docs
Burbank Police Department | Archived Link | Policy Docs | Training Docs
Burlingame Police Department | Archived Link | Policy Docs | Training Docs
Butte County Sheriff’s Department/Coroner | Archived Link | Policy Docs | Not Available
Cal Poly University Police | Archived Link | Policy Docs | Training Docs
Cal State LA Police Department | Archived Link | Policy Docs | Not Available
Calaveras County Sheriff’s Department | Archived Link | Policy Docs | Training Docs
Calexico Police Department | Archived Link | Policy Docs | Not Available
California City Police Department | Archived Link | Policy Docs | Not Available
Calistoga Police Department | Archived Link | Policy Docs | Not Available
Campbell Police Department | Archived Link | Policy Docs | Training Docs
Capitola Police Department | Archived Link | Policy Docs | Not Available
Carlsbad Police Department | Archived Link | Policy Docs | Training Docs
Carmel Police Department | Archived Link | Policy Docs | Not Available
Cathedral City Police Department | Archived Link | Policy Docs | Not Available
Central Marin Police Authority | Archived Link | Policy Docs | Not Available
Ceres Department of Public Safety | Archived Link | Policy Docs | Not Available
Chaffey Community College District Police Department | Archived Link | Policy Docs | Not Available
Chico Police Department | Archived Link | Policy Docs | Not Available
Chino Police Department | Archived Link | Policy Docs | Training Docs
Chowchilla Police Department | Archived Link | Policy Docs | Training Docs
Chula Vista Police Department | Archived Link | Policy Docs | Not Available
Citrus Community College District Department of Campus Safety | Archived Link | Policy Docs | Not Available
Citrus Heights Police Department | Archived Link | Policy Docs | Not Available
Claremont Police Department | Archived Link | Policy Docs | Training Docs
Clayton Police Department | Archived Link | Policy Docs | Not Available
Clearlake Police Department | Archived Link | Policy Docs | Not Available
Cloverdale Police Department | Archived Link | Policy Docs | Training Docs
Clovis Police Department | Archived Link | Policy Docs | Training Docs
Clovis Unified School District Police Department | Archived Link | Policy Docs | Training Docs
Coalinga Police Department | Archived Link | Policy Docs | Training Docs
Coast Community College District Police Department | Archived Link | Policy Docs | Not Available
Colma Police Department | Archived Link | Policy Docs | Training Docs
Colton Police Department | Archived Link | Policy Docs | Not Available
Colusa County District Attorney | Archived Link | Policy Docs | Not Available
Colusa County Sheriff’s Department | Archived Link | Policy Docs | Not Available
Colusa Police Department | Archived Link | Policy Docs | Not Available
Concord Police Department | Archived Link | Policy Docs | Training Docs
Contra Costa Community College District Police Department | Archived Link | Policy Docs | Not Available
Contra Costa County District Attorney | Archived Link | Policy Docs | Not Available
Contra Costa County Sheriff’s Department/Coroner | Archived Link | Policy Docs | Not Available
Corcoran Police Department | Archived Link | Policy Docs | Not Available
Corona Police Department | Archived Link | Policy Docs | Not Available
Coronado Police Department | Archived Link | Policy Docs | Training Docs
Costa Mesa Police Department | Archived Link | Policy Docs | Training Docs
Cosumnes River College Police Department | Archived Link | Policy Docs | Not Available
Cotati Police Department | Archived Link | Policy Docs | Not Available
Covina Police Department | Archived Link | Policy Docs | Not Available
CPSU Pomona Department of Public Safety | Archived Link | Policy Docs | Not Available
CSU Bakersfield University Police Department | Archived Link | Policy Docs | Not Available
CSU Channel Islands University Police Department | Archived Link | Policy Docs | Not Available
CSU Chico University Police Department | Archived Link | Policy Docs | Training Docs
CSU Dominguez Hills University Police and Parking | Archived Link | Policy Docs | Not Available
CSU East Bay University Police Department | Archived Link | Policy Docs | Not Available
CSU Fresno University Police Department | Archived Link | Policy Docs | Not Available
CSU Fullerton University Police Department | Archived Link | Policy Docs | Training Docs
CSU Long Beach University Police Department | Archived Link | Policy Docs | Not Available
CSU Monterey Bay University Police Department | Archived Link | Policy Docs | Training Docs
CSU Northridge Department of Police Services | Archived Link | Policy Docs | Training Docs
CSU Sacramento Public Safety/University Police Department | Archived Link | Policy Docs | Training Docs
CSU San Bernardino University Police Department | Archived Link | Policy Docs | Not Available
CSU San José University Police Department | Archived Link | Policy Docs | Not Available
CSU San Marcos University Police Department | Archived Link | Policy Docs | Not Available
CSU Stanislaus Police Department | Archived Link | Policy Docs | Training Docs
Cuesta College Department of Public Safety | Archived Link | Policy Docs | Training Docs
Culver City Police Department | Archived Link | Policy Docs | Training Docs
Cypress Police Department | Archived Link | Policy Docs | Training Docs
Daly City Police Department | Archived Link | Policy Docs | Training Docs
Davis Police Department | Archived Link | Policy Docs | Training Docs
Del Norte County Sheriff’s Department | Archived Link | Policy Docs | Not Available
Del Rey Oaks Police Department | Archived Link | Policy Docs | Training Docs
Delano Police Department | Archived Link | Policy Docs | Not Available
Desert Hot Springs Police Department | Archived Link | Policy Docs | Training Docs
Dinuba Police Department | Archived Link | Policy Docs | Not Available
Dixon Police Department | Archived Link | Policy Docs | Not Available
Dos Palos Police Department | Archived Link | Policy Docs | Not Available
Downey Police Department | Archived Link | Policy Docs | Not Available
East Bay Regional Parks District Department of Public Safety | Archived Link | Policy Docs | Not Available
East Palo Alto Police Department | Archived Link | Policy Docs | Not Available
El Cajon Police Department | Archived Link | Policy Docs | Not Available
El Camino Community College District Police Department | Archived Link | Policy Docs | Not Available
El Centro Police Department | Archived Link | Policy Docs | Not Available
El Cerrito Police Department | Archived Link | Policy Docs | Training Docs
El Dorado County Sheriff’s Department | Archived Link | Policy Docs | Not Available
El Monte Police Department | Archived Link | Policy Docs | Not Available
El Segundo Police Department | Archived Link | Policy Docs | Training Docs
Elk Grove Police Department | Archived Link | Policy Docs | Not Available
Emeryville Police Department | Archived Link | Policy Docs | Training Docs
Escalon Police Department | Archived Link | Policy Docs | Not Available
Escondido Police Department | Archived Link | Policy Docs | Training Docs
Etna Police Department | Archived Link | Policy Docs | Not Available
Eureka Police Department | Archived Link | Policy Docs | Not Available
Exeter Police Department | Archived Link | Policy Docs | Not Available
Fairfax Police Department | Archived Link | Policy Docs | Training Docs
Fairfield Police Department | Archived Link | Policy Docs | Training Docs
Farmersville Police Department | Archived Link | Policy Docs | Not Available
Ferndale Police Department | Archived Link | Policy Docs | Not Available
Firebaugh Police Department | Archived Link | Policy Docs | Not Available
Folsom Lake College Police Department | Archived Link | Policy Docs | Not Available
Folsom Police Department | Archived Link | Policy Docs | Training Docs
Fontana Police Department | Archived Link | Policy Docs | Training Docs
Fort Bragg Police Department | Archived Link | Policy Docs | Training Docs
Fortuna Police Department | Archived Link | Policy Docs | Not Available
Foster City Police Department | Archived Link | Policy Docs | Not Available
Fountain Valley Police Department | Archived Link | Policy Docs | Training Docs
Fowler Police Department | Archived Link | Policy Docs | Not Available
Fremont Police Department | Archived Link | Policy Docs | Training Docs
Fresno County Sheriff’s Department | Archived Link | Policy Docs | Training Docs
Fresno Police Department | Archived Link | Policy Docs | Not Available
Fullerton Police Department | Archived Link | Policy Docs | Not Available
Galt Police Department | Archived Link | Policy Docs | Not Available
Garden Grove Police Department | Archived Link | Policy Docs | Training Docs
Gardena Police Department | Archived Link | Policy Docs | Training Docs
Gilroy Police Department | Archived Link | Policy Docs | Not Available
Glendale Community College District Police Department | Archived Link | Policy Docs | Not Available
Glendale Police Department | Archived Link | Policy Docs | Training Docs
Glendora Police Department | Archived Link | Policy Docs | Training Docs
Glenn County Sheriff’s Department/Coroner | Archived Link | Policy Docs | Not Available
Gonzales Police Department | Archived Link | Policy Docs | Not Available
Grass Valley Police Department | Archived Link | Policy Docs | Not Available
Greenfield Police Department | Archived Link | Policy Docs | Not Available
Gridley Police Department | Archived Link | Policy Docs | Not Available
Grover Beach Police Department | Archived Link | Policy Docs | Not Available
Guadalupe Police Department | Archived Link
Policy Docs Not Available Gustine Police Department Archived Link Policy Docs Not Available Hanford Police Department Archived Link Policy Docs Not Available Hawthorne Police Department Archived Link Policy Docs Not Available Hayward Police Department Archived Link Policy Docs Not Available Healdsburg Police Department Archived Link Policy Docs Training Docs Hemet Police Department Archived Link Policy Docs Not Available Hercules Police Department Archived Link Policy Docs Training Docs Hermosa Beach Police Department Archived Link Policy Docs Not Available Hillsborough Police Department Archived Link Policy Docs Training Docs Hollister Police Department Archived Link Policy Docs Not Available Humboldt County Sheriff’s Department Archived Link Policy Docs Not Available Humboldt State University Archived Link Policy Docs Training Docs Huntington Beach Police Department Archived Link Policy Docs Not Available Huntington Park Police Department Archived Link Policy Docs Training Docs Huron Police Department Archived Link Policy Docs Not Available Imperial Police Department Archived Link Policy Docs Not Available Indio Police Department Archived Link Policy Docs Not Available Inglewood Police Department Archived Link Policy Docs Not Available Inyo County Sheriff’s Department Archived Link Policy Docs Not Available Ione Police Department Archived Link Policy Docs Not Available Irvine Police Department Archived Link Policy Docs Training Docs Irwindale Police Department Archived Link Policy Docs Training Docs Jackson Police Department Archived Link Policy Docs Not Available Kensington Police Department Archived Link Policy Docs Not Available Kerman Police Department Archived Link Policy Docs Not Available Kern County Sheriff’s Department Archived Link Policy Docs Not Available King City Police Department Archived Link Policy Docs Not Available Kings County Sheriff’s Department Archived Link Policy Docs Not Available Kingsburg Police Department Archived Link Policy Docs Not 
Available La Habra Police Department Archived Link Policy Docs Not Available La Mesa Police Department Archived Link Policy Docs Training Docs La Palma Police Department Archived Link Policy Docs Not Available La Verne Police Department Archived Link Policy Docs Training Docs Laguna Beach Police Department Archived Link Policy Docs Training Docs Lake County Sheriff’s Department Archived Link Policy Docs Training Docs Lakeport Police Department Archived Link Policy Docs Not Available Lassen County Sheriff’s Department Archived Link Policy Docs Training Docs Lemoore Police Department Archived Link Policy Docs Not Available Lincoln Police Department Archived Link Policy Docs Not Available Lindsay Department of Public Safety Archived Link Policy Docs Not Available Livermore Police Department Archived Link Policy Docs Training Docs Livingston Police Department Archived Link Policy Docs Not Available Lodi Police Department Archived Link Policy Docs Not Available Lompoc Police Department Archived Link Policy Docs Not Available Long Beach Police Department Archived Link Policy Docs Not Available Los Alamitos Police Department Archived Link Policy Docs Not Available Los Altos Police Department Archived Link Policy Docs Training Docs Los Angeles City Department of Recreation and Parks, Park Ranger Division Archived Link Policy Docs Not Available Los Angeles County District Attorney Archived Link Policy Docs Not Available Los Angeles County Probation Department Archived Link Policy Docs Training Docs Los Angeles County Sheriff’s Department Archived Link Policy Docs Not Available Los Angeles Police Department Archived Link Policy Docs Training Docs Los Angeles Port Police Department Archived Link Policy Docs Not Available Los Angeles School Police Department Archived Link Policy Docs Training Docs Los Angeles World Airports Police Department Archived Link Policy Docs Not Available Los Banos Police Department Archived Link Policy Docs Training Docs Los Gatos/Monte Sereno Police 
Department Archived Link Policy Docs Not Available Los Rios Community College District Police Department Archived Link Policy Docs Not Available Madera County Sheriff’s Department Archived Link Policy Docs Not Available Madera Police Department Archived Link Policy Docs Not Available Mammoth Lakes Police Department Archived Link Policy Docs Training Docs Manhattan Beach Police Department Archived Link Policy Docs Training Docs Manteca Police Department Archived Link Policy Docs Training Docs Marin Community College District Police Department Archived Link Policy Docs Not Available Marin County Sheriff’s Department Archived Link Policy Docs Training Docs Marina Department of Public Safety Archived Link Policy Docs Not Available Martinez Police Department Archived Link Policy Docs Training Docs Marysville Police Department Archived Link Policy Docs Not Available McFarland Police Department Archived Link Policy Docs Not Available Mendocino County Sheriff’s Department Archived Link Policy Docs Training Docs Mendota Police Department Archived Link Policy Docs Training Docs Menifee Police Department Archived Link Policy Docs Not Available Menlo Park Police Department Archived Link Policy Docs Not Available Merced Community College Police Department Archived Link Policy Docs Not Available Merced County Sheriff’s Department Archived Link Policy Docs Training Docs Merced Police Department Archived Link Policy Docs Training Docs Mill Valley Police Department Archived Link Policy Docs Training Docs Milpitas Police Department Archived Link Policy Docs Not Available MiraCosta Community College District Police Department Archived Link Policy Docs Training Docs Modesto Police Department Archived Link Policy Docs Training Docs Modoc County Sheriff’s Department Archived Link Policy Docs Not Available Mono County Sheriff’s Department Archived Link Policy Docs Not Available Monrovia Police Department Archived Link Policy Docs Training Docs Montclair Police Department Archived Link 
Policy Docs Training Docs Montebello Police Department Archived Link Policy Docs Training Docs Monterey County Sheriff’s Department Archived Link Policy Docs Not Available Monterey Park Police Department Archived Link Policy Docs Training Docs Monterey Police Department Archived Link Policy Docs Training Docs Moraga Police Department Archived Link Policy Docs Not Available Morgan Hill Police Department Archived Link Policy Docs Training Docs Morro Bay Police Department Archived Link Policy Docs Not Available Mountain View Police Department Archived Link Policy Docs Training Docs Mt. Shasta Police Department Archived Link Policy Docs Not Available Murrieta Police Department Archived Link Policy Docs Training Docs Napa County Sheriff’s Department Archived Link Policy Docs Training Docs Napa Police Department Archived Link Policy Docs Not Available Napa Valley College Police Department Archived Link Policy Docs Training Docs National City Police Department Archived Link Policy Docs Training Docs Nevada City Police Department Archived Link Policy Docs Not Available Nevada County Sheriff’s Department Archived Link Policy Docs Training Docs Newark Police Department Archived Link Policy Docs Not Available Newman Police Department Archived Link Policy Docs Not Available Newport Beach Police Department Archived Link Policy Docs Training Docs Novato Police Department Archived Link Policy Docs Not Available Oakdale Police Department Archived Link Policy Docs Not Available Oakland Police Department Archived Link Policy Docs Training Docs Oakley Police Department Archived Link Policy Docs Not Available Oceanside Police Department Archived Link Policy Docs Training Docs Oceanside Police Department Harbor Unit Archived Link Policy Docs Training Docs Ohlone Community College District Police Department Archived Link Policy Docs Not Available Ontario Police Department Archived Link Policy Docs Training Docs Orange County District Attorney Archived Link Policy Docs Not Available 
Orange County District Attorney, Public Assistance Fraud Archived Link Policy Docs Not Available Orange County Sheriff’s Department/Coroner Archived Link Policy Docs Not Available Orange Cove Police Department Archived Link Policy Docs Not Available Orange Police Department Archived Link Policy Docs Training Docs Orland Police Department Archived Link Policy Docs Training Docs Oroville Police Department Archived Link Policy Docs Training Docs Oxnard Police Department Archived Link Policy Docs Training Docs Pacific Grove Police Department Archived Link Policy Docs Training Docs Pacifica Police Department Archived Link Policy Docs Training Docs Palm Springs Police Department Archived Link Policy Docs Not Available Palo Alto Police Department Archived Link Policy Docs Not Available Palos Verdes Estates Police Department Archived Link Policy Docs Not Available Paradise Police Department Archived Link Policy Docs Not Available Pasadena City College District Police Department Archived Link Policy Docs Training Docs Pasadena Police Department Archived Link Policy Docs Not Available Paso Robles Police Department Archived Link Policy Docs Training Docs Petaluma Police Department Archived Link Policy Docs Not Available Piedmont Police Department Archived Link Policy Docs Training Docs Pinole Police Department Archived Link Policy Docs Training Docs Pismo Beach Police Department Archived Link Policy Docs Training Docs Pittsburg Police Department Archived Link Policy Docs Not Available Placentia Police Department Archived Link Policy Docs Not Available Placer County District Attorney Archived Link Policy Docs Not Available Placer County Sheriff’s Department Archived Link Policy Docs Training Docs Placerville Police Department Archived Link Policy Docs Not Available Pleasant Hill Police Department Archived Link Policy Docs Training Docs Pleasanton Police Services Archived Link Policy Docs Training Docs Plumas County Sheriff’s Department Archived Link Policy Docs Not Available 
Pomona Police Department Archived Link Policy Docs Training Docs Port Hueneme Police Department Archived Link Policy Docs Not Available Porterville Police Department Archived Link Policy Docs Not Available Red Bluff Police Department Archived Link Policy Docs Training Docs Redding Police Department Archived Link Policy Docs Training Docs Redlands Police Department Archived Link Policy Docs Training Docs Redondo Beach Police Department Archived Link Policy Docs Training Docs Redwood City Police Department Archived Link Policy Docs Training Docs Reedley Police Department Archived Link Policy Docs Training Docs Rialto Police Department Archived Link Policy Docs Not Available Richmond Police Department Archived Link Policy Docs Not Available Ridgecrest Police Department Archived Link Policy Docs Not Available Rio Dell Police Department Archived Link Policy Docs Not Available Ripon Police Department Archived Link Policy Docs Not Available Riverside Community College District Police Department Archived Link Policy Docs Not Available Riverside County Sheriff’s Department Archived Link Policy Docs Training Docs Riverside Police Department Archived Link Policy Docs Not Available Rocklin Police Department Archived Link Policy Docs Training Docs Rohnert Park Department of Public Safety Archived Link Policy Docs Not Available Roseville Police Department Archived Link Policy Docs Not Available Ross Police Department Archived Link Policy Docs Not Available Sacramento County Sheriff’s Department Archived Link Policy Docs Training Docs Sacramento Police Department Archived Link Policy Docs Training Docs Saddleback Community College Police Department Archived Link Policy Docs Not Available Saint Helena Police Department Archived Link Policy Docs Not Available San Benito County Sheriff’s Department Archived Link Policy Docs Training Docs San Bernardino County Sheriff-Coroner Archived Link Policy Docs Training Docs San Bernardino Police Department Archived Link Policy Docs Training 
Docs San Bruno Police Department Archived Link Policy Docs Not Available San Diego County Probation Department Archived Link Policy Docs Training Docs San Diego County Sheriff’s Department Archived Link Policy Docs Not Available San Diego Harbor Police Department Archived Link Policy Docs Training Docs San Diego Police Department Archived Link Policy Docs Training Docs San Diego State University Police Department Archived Link Policy Docs Training Docs San Fernando Police Department Archived Link Policy Docs Training Docs San Francisco County Sheriff’s Department Archived Link Policy Docs Training Docs San Francisco Police Department Archived Link Policy Docs Not Available San Gabriel Police Department Archived Link Policy Docs Training Docs San Joaquin County Probation Department Archived Link Policy Docs Training Docs San Joaquin County Sheriff’s Department Archived Link Policy Docs Not Available San Joaquin Delta College Police Department Archived Link Policy Docs Training Docs San Jose Police Department Archived Link Policy Docs Training Docs San Leandro Police Department Archived Link Policy Docs Training Docs San Luis Obispo County Sheriff’s Department Archived Link Policy Docs Training Docs San Luis Obispo Police Department Archived Link Policy Docs Training Docs San Marino Police Department Archived Link Policy Docs Training Docs San Mateo County Sheriff’s Office Archived Link Policy Docs Training Docs San Mateo Police Department Archived Link Policy Docs Training Docs San Pablo Police Department Archived Link Policy Docs Not Available San Rafael Police Department Archived Link Policy Docs Training Docs San Ramon Police Department Archived Link Policy Docs Training Docs Sand City Police Department Archived Link Policy Docs Not Available Sanger Police Department Archived Link Policy Docs Not Available Santa Ana Police Department Archived Link Policy Docs Training Docs Santa Ana Unified School District Police Department Archived Link Policy Docs Not Available 
Santa Barbara County Sheriff’s Department Archived Link Policy Docs Not Available Santa Barbara Police Department Archived Link Policy Docs Training Docs Santa Clara County Sheriff’s Department Archived Link Policy Docs Training Docs Santa Clara Police Department Archived Link Policy Docs Training Docs Santa Cruz County District Attorney Archived Link Policy Docs Not Available Santa Cruz County Sheriff’s Department Archived Link Policy Docs Not Available Santa Cruz Police Department Archived Link Policy Docs Training Docs Santa Fe Springs Police Services Archived Link Policy Docs Not Available Santa Maria Police Department Archived Link Policy Docs Training Docs Santa Monica Police Department Archived Link Policy Docs Training Docs Santa Paula Police Department Archived Link Policy Docs Not Available Santa Rosa Police Department Archived Link Policy Docs Training Docs Sausalito Police Department Archived Link Policy Docs Not Available Scotts Valley Police Department Archived Link Policy Docs Not Available Seal Beach Police Department Archived Link Policy Docs Training Docs Seaside Police Department Archived Link Policy Docs Training Docs Sebastopol Police Department Archived Link Policy Docs Training Docs Selma Police Department Archived Link Policy Docs Not Available Shafter Police Department Archived Link Policy Docs Not Available Shasta County Sheriff’s Department Archived Link Policy Docs Not Available Sierra County Sheriff’s Office Archived Link Policy Docs Not Available Sierra Madre Police Department Archived Link Policy Docs Not Available Signal Hill Police Department Archived Link Policy Docs Not Available Simi Valley Police Department Archived Link Policy Docs Training Docs Siskiyou County Sheriff’s Department Archived Link Policy Docs Not Available Solano County Sheriff’s Department Archived Link Policy Docs Training Docs Soledad Police Department Archived Link Policy Docs Training Docs Sonoma County Probation Department Archived Link Policy Docs Training 
Docs Sonoma County Sheriff’s Office Archived Link Policy Docs Training Docs Sonoma Police Department Archived Link Policy Docs Training Docs Sonoma State University Police and Parking Services Archived Link Policy Docs Training Docs Sonora Police Department Archived Link Policy Docs Not Available South Gate Police Department Archived Link Policy Docs Not Available South Lake Tahoe Police Department Archived Link Policy Docs Not Available South Pasadena Police Department Archived Link Policy Docs Training Docs South San Francisco Police Department Archived Link Policy Docs Not Available Southwestern Community College Police Department Archived Link Policy Docs Not Available Stanford University Department of Public Safety Archived Link Policy Docs Not Available Stanislaus County Sheriff’s Department Archived Link Policy Docs Training Docs Stockton Police Department Archived Link Policy Docs Not Available Suisun City Police Department Archived Link Policy Docs Training Docs Sunnyvale Department of Public Safety Archived Link Policy Docs Not Available Sutter County Sheriff’s Department Archived Link Policy Docs Not Available Taft Police Department Archived Link Policy Docs Not Available Tehachapi Police Department Archived Link Policy Docs Not Available Tehama County Sheriff’s Department Archived Link Policy Docs Not Available Tiburon Police Department Archived Link Policy Docs Not Available Torrance Police Department Archived Link Policy Docs Not Available Tracy Police Department Archived Link Policy Docs Training Docs Trinity County Sheriff’s Department Archived Link Policy Docs Not Available Truckee Police Department Archived Link Policy Docs Training Docs Tulare County Sheriff’s Department Archived Link Policy Docs Training Docs Tulare Police Department Archived Link Policy Docs Not Available Tuolumne County Sheriff’s Department Archived Link Policy Docs Not Available Turlock Police Department Archived Link Policy Docs Training Docs Tustin Police Department 
Archived Link Policy Docs Training Docs Twin Rivers Unified School District Police Services Archived Link Policy Docs Not Available UC Berkeley Police Department Archived Link Policy Docs Not Available UC Davis Police Department Archived Link Policy Docs Not Available UC Irvine Police Department Archived Link Policy Docs Training Docs UC Los Angeles Police Department Archived Link Policy Docs Not Available UC Merced Police Department Archived Link Policy Docs Not Available UC Riverside Police Department Archived Link Policy Docs Not Available UC San Diego Police Department Archived Link Policy Docs Not Available UC San Francisco Police Department Archived Link Policy Docs Not Available UC Santa Cruz Police Department Archived Link Policy Docs Not Available Ukiah Police Department Archived Link Policy Docs Not Available Union City Police Department Archived Link Policy Docs Not Available Upland Police Department Archived Link Policy Docs Training Docs Vacaville Police Department Archived Link Policy Docs Training Docs Vallejo Police Department Archived Link Policy Docs Training Docs Ventura County District Attorney Archived Link Policy Docs Not Available Ventura County Sheriff’s Department Archived Link Policy Docs Training Docs Ventura Police Department Archived Link Policy Docs Training Docs Vernon Police Department Archived Link Policy Docs Training Docs Victor Valley College Police Department Archived Link Policy Docs Not Available Visalia Police Department Archived Link Policy Docs Not Available Walnut Creek Police Department Archived Link Policy Docs Not Available Watsonville Police Department Archived Link Policy Docs Not Available Weed Police Department Archived Link Policy Docs Not Available West Cities Police Communications Center Archived Link Policy Docs Not Available West Covina Police Department Archived Link Policy Docs Not Available West Sacramento Police Department Archived Link Policy Docs Not Available West Valley-Mission Community College 
District Police Department Archived Link Policy Docs Not Available Westminster Police Department Archived Link Policy Docs Not Available Wheatland Police Department Archived Link Policy Docs Not Available Whittier Police Department Archived Link Policy Docs Not Available Williams Police Department Archived Link Policy Docs Not Available Willits Police Department Archived Link Policy Docs Not Available Windsor Police Department Archived Link Policy Docs Not Available Winters Police Department Archived Link Policy Docs Not Available Woodland Police Department Archived Link Policy Docs Training Docs Yolo County District Attorney Archived Link Policy Docs Not Available Yolo County Sheriff’s Department Archived Link Policy Docs Not Available Yreka Police Department Archived Link Policy Docs Not Available Yuba City Police Department Archived Link Policy Docs Training Docs Yuba County Sheriff’s Department Archived Link Policy Docs Not Available

  • Your Service Provider’s Terms of Service Shouldn’t Overrule Your Fourth Amendment Rights
    by Jennifer Lynch on April 24, 2021 at 10:41 pm

Last week, EFF, ACLU, and ACLU of Minnesota filed an amicus brief in State v. Pauli, a case in the Minnesota Supreme Court, where we argue that cloud storage providers’ terms of service (TOS) can’t take away your Fourth Amendment rights. This is the first case on this important issue to reach a state supreme court. If the lower courts’ rulings stand, anyone in Minnesota who violated any term of a provider’s TOS could lose Fourth Amendment protections over all the files in their account. The facts of the case are a little hazy, but at some point, Dropbox identified video files in Mr. Pauli’s account as child pornography and submitted the files to the National Center for Missing and Exploited Children (NCMEC), a private, quasi-governmental entity created by statute that works closely with law enforcement on child exploitation issues. After viewing the files, an NCMEC employee forwarded them with a report to the Minnesota Bureau of Criminal Apprehension. This ultimately led to Pauli’s indictment on child pornography charges. Pauli challenged the search, but the trial court held that Dropbox’s TOS—which notified Pauli that Dropbox could monitor his account and disclose information to third parties if it believed such disclosure was necessary to comply with the law—nullified Pauli’s expectation of privacy in the video files. After the appellate court agreed, Pauli petitioned the state supreme court for review. The lower courts’ analysis is simply wrong. Under this logic, your Fourth Amendment rights rise or fall based on unilateral contracts with your service providers—contracts that none of us read or negotiate but all of us must agree to so that we can use services that are a necessary part of daily life. As we argued in our brief, a company’s TOS should not dictate your constitutional rights, because terms of service are rules about the relationship between you and your service provider—not between you and the government. 
Companies draft terms of service to govern how their platforms may be used, and the terms of these contracts are extremely broad. Companies’ TOS control what kind of content you can post, how you can use the platform, and how platforms can protect themselves against fraud and other damage. Actions that could violate a company’s TOS include not just criminal activity, such as possessing child pornography, but also—as defined solely by the provider—actions like uploading content that defames someone or contains profanity, sharing a copyrighted article without permission from the copyright holder, or marketing your small business to all of your friends without their advance consent. While some might find activities such as these objectionable or annoying, they shouldn’t justify the government ignoring your Fourth Amendment right to privacy in your files simply because you store them in the cloud. Given the vast amount of storage many service providers offer (most offer up to 2 terabytes for a small annual fee), accounts can hold tens of thousands of private and personal files, including photos, messages, diaries, medical records, legal data, and videos—each of which could reveal intimate details about our private and professional lives. Storing these records in the cloud with a service provider allows users to free up space on their personal devices, access their files from anywhere, and share (or not share) their files with others. The convenience and cost savings offered by commercial third-party cloud-storage providers mean that very few of us would go to the trouble of running our own servers to do privately everything we can do with a commercial service. But the only way to take advantage of this convenience is to agree to the company’s TOS. And several billion of us do agree every day. Since its launch in 2007, Dropbox’s user base has soared to more than 700 million registered users. 
Apple offers free iCloud storage to users of its more than 1.5 billion active phones, tablets, laptops, and other devices around the world. And Google’s suite of cloud services—which includes both Gmail and Google Drive (offering access to stored and shareable documents, spreadsheets, photos, slide presentations, videos, and more)—serves 2 billion monthly active users. These users would be shocked to discover that by agreeing to their providers’ TOS, they could be giving up an expectation of privacy in their most private records. In 2018, in Carpenter v. United States, all nine justices on the Supreme Court agreed that even if we store electronic equivalents of our Fourth Amendment-protected “papers” and “effects” with a third-party provider, we still retain privacy interests in those records. These constitutional rights would be meaningless, however, if they could be ignored simply because a user agreed to and then somehow violated their provider’s TOS. The appellate court’s ruling in Pauli allows private agreements to trump bedrock Fourth Amendment guarantees for private communications and cloud-stored records. The ruling affects far more than child pornography cases: anyone who violated any term of a provider’s TOS could lose Fourth Amendment protections over all the files in their account. We hope the Minnesota Supreme Court will reject such a sweeping invalidation of constitutional rights. We look forward to the court’s decision.

  • Canada’s Attempt to Regulate Sexual Content Online Ignores Technical and Historical Realities
    by Daly Barnett on April 23, 2021 at 8:41 pm

Canadian Senate Bill S-203, AKA the “Protecting Young Persons from Exposure to Pornography Act,” is another woefully misguided proposal aimed at regulating sexual content online. To say the least, this bill fails to understand how the internet functions and would be seriously damaging to online expression and privacy. It’s bad in a variety of ways, but three specific problems need to be laid out: 1) technical impracticality, 2) competition harms, and 3) privacy and security. First, S-203 would make any person or company criminally liable any time an underage user engages with sexual content through its service. The law applies even if the person or company believed the user to be an adult, unless the person or company “implemented a prescribed age-verification method.” Second, the bill seemingly imposes this burden on a broad swath of the internet stack. S-203 would criminalize the acts of independent performers, artists, blogs, social media, message boards, email providers, and any other intermediary or service in the stack that is in some way “for commercial purposes” and “makes available sexually explicit material on the Internet to a young person.” The only meaningful defense a person or company could assert against the financial penalties would be to verify the legal adult age of every user and then store that data. The sheer amount of technical infrastructure it would take for such a vast portion of the internet to “implement a prescribed age-verification method” would be costly and overwhelmingly complicated, and it would introduce many security concerns that don’t exist today. Even if every platform had server-side storage with a robust security posture, processing highly sensitive personally identifiable information (PII) on the client side would create a treasure trove for anyone with basic app-exploitation skills. 
And then if this did create a market space for third-party proprietary solutions to take care of a secure age verification system, the financial burden would only advantage the largest players online. Not only that, it’s ahistorical to assume that younger teenagers wouldn’t figure out ways to hack past whatever age verification system is propped up. Then there’s the privacy angle. It’s ludicrous to expect all adult users to provide private personal information every time they log onto an app that might contain sexual content. The implementation of verification schemes in contexts like this may vary on how far privacy intrusions go, but it generally plays out as a cat and mouse game that brings surveillance and security threats instead of responding to initial concerns. The more that a verification system fails, the more privacy-invasive measures are taken to avoid criminal liability. Because of the problems of implementing age verification, the bill would likely force many companies to simply eliminate sexual content instead of carrying the huge risk that an underage user will access it. But even a company that wanted to eliminate prohibited sexual content would face significant obstacles in doing so if they, like much of the internet, host user-generated content. It is difficult to detect and define the prohibited sexual content, and even more difficult when the bill recognizes that the law is not violated if such material “has a legitimate purpose related to science, medicine, education or the arts.” There is no automated tool that can make such distinctions; the inevitable result is that protected materials will be removed out of an abundance of caution. And history teaches us that the results are often sexist, misogynist, racist, LGBT-phobic, ableist, and so on. It is a feature, not a bug, that there is no one-size-fits-all way to neatly define what is and isn’t sexual content. 
Ultimately, Canadian Senate Bill S-203 is another in a long line of morally patronizing legislation that doesn’t understand how the internet works. Even if there were a way to keep minors away from sexual content, there is no way to do it without vast collateral damage. Sen. Julie Miville-Dechêne, who introduced the bill, stated, “it makes no sense that the commercial porn platforms don’t verify age. I think it’s time to legislate.” We gently recommend that next time her first thought be to consult with experts.

  • EFF and ACLU Ask Supreme Court to Review Case Against Warrantless Searches of International Travelers’ Phones and Laptops
    by Rebecca Jeschke on April 23, 2021 at 3:52 pm

    Border Officers Accessing Massive Amounts of Information from Electronic Devices

    Washington, D.C.—The Electronic Frontier Foundation (EFF), the American Civil Liberties Union, and the ACLU of Massachusetts today filed a petition for a writ of certiorari, asking the Supreme Court to hear a challenge to the Department of Homeland Security’s policy and practice of warrantless and suspicionless searches of travelers’ electronic devices at U.S. airports and other ports of entry. The lawsuit, Merchant v. Mayorkas, was filed in September 2017 on behalf of several travelers whose cell phones, laptops, and other electronic devices were searched without warrants at the U.S. border. In November 2019, a federal district court in Boston ruled that border agencies’ policies on electronic device searches violate the Fourth Amendment, and required border officers to have reasonable suspicion of digital contraband before they can search a traveler’s device. A three-judge panel at the First Circuit reversed this decision in February 2021. “Border officers every day make an end-run around the Constitution by searching travelers’ electronic devices without a warrant or any suspicion of wrongdoing,” said EFF Senior Staff Attorney Sophia Cope. “The U.S. government has granted itself unfettered authority to rummage through our digital lives just because we travel internationally. This egregious violation of privacy happens with no justification under constitutional law and no demonstrable benefit. The Supreme Court must put a stop to it.” “This case raises pressing questions about the Fourth Amendment’s protections in the digital age,” said Esha Bhandari, deputy director of the ACLU’s Speech, Privacy, and Technology Project. 
“When border officers search our phones and laptops, they can access massive amounts of sensitive personal information, such as private photographs, health information, and communications with partners, family, and friends—including discussions between lawyers and their clients, and between journalists and their sources. We are asking the Supreme Court to ensure that we don’t lose our privacy rights when we travel.” Every year, a growing number of international travelers are subject to warrantless and suspicionless searches of their personal electronic devices at the U.S. border. These searches are often conducted for reasons that have nothing to do with stopping the importation of contraband or determining a traveler’s admissibility. Border officers claim the authority to search devices for a host of reasons, including enforcement of tax, financial, consumer protection, and environmental laws—all without suspicion of wrongdoing. Border officers also search travelers’ devices if they are interested in information about someone other than the traveler—like a business partner, family member, or a journalist’s source. The petitioners in this case—all U.S. citizens—include a military veteran, journalists, an artist, a NASA engineer, and a business owner. Several are Muslims and people of color, and none were accused of any wrongdoing in connection with their device searches. “It’s been frustrating to be subjected to this power-grab by the government,” said Diane Zorri, a college professor, former U.S. Air Force captain, and a plaintiff in the case. “My devices are mine, and the government should need a good reason before rifling through my phone and my computer. 
I’m proud to be part of this case to help protect travelers’ rights.” The certiorari petition asks the Supreme Court to overturn the First Circuit’s decision and hold that the Fourth Amendment requires border officers to obtain a warrant based on probable cause before searching electronic devices, or at the least have reasonable suspicion that the device contains digital contraband. For more information about Merchant v. Mayorkas go to: For the full petition for writ of certiorari: Contact: Rebecca Jeschke, Media Relations Director and Digital Rights, [email protected]; Kate Lagreca, ACLU of Massachusetts, [email protected]; Aaron Madrid Aksoz, ACLU, [email protected]

  • Tell Congress: Federal Money Shouldn’t Be Spent On Breaking Encryption
    by Joe Mullin on April 22, 2021 at 8:47 pm

    We don’t need government minders in our private conversations. That’s because private conversations, whether they happen offline or online, aren’t a public safety menace. They’re not an invitation to criminality, or terrorism, or a threat to children, no matter how many times those tired old lines get repeated.  TAKE ACTION TELL CONGRESS: DON’T SPEND TAX MONEY TO BREAK ENCRYPTION Unfortunately, federal law enforcement officials have not stopped asking for backdoor access to Americans’ encrypted messages. FBI Director Christopher Wray did it again just last month, falsely claiming that end-to-end encryption and “user-only access” have “negligible security advantages” but have a “negative effect on law enforcement’s ability to protect the public.” This year, there’s something we can do about it. Rep. Tom Malinowski (D-NJ) and Rep. Peter Meijer (R-MI) have put forward language that would ban federal money from being used to weaken security standards or introduce vulnerabilities into software or hardware. Last year, the House of Representatives inserted an amendment in the Defense Appropriations bill that prohibits the use of funds to insert security backdoors. That provision targeted the NSA. This year’s proposal will cover a much broader range of federal agencies. It also includes language that would prevent the government from engaging in schemes like client-side scanning or a “ghost” proposal, which would undermine encryption without technically decrypting data. Secure and private communications are the backbone of democracy and free speech around the world. If U.S. law enforcement is able to compel private companies to break encryption, criminals and authoritarian governments will be eager to use the same loopholes. There are no magic bullets, and no backdoors that will only get opened by the “good guys.” It’s important that as many members of Congress as possible sign on as supporters of this proposal. 
We need to send a strong signal to federal law enforcement that they should, once and for all, stop insisting they should scan all of our messages. To get there, we need your help. TAKE ACTION TELL CONGRESS: DON’T SPEND TAX MONEY TO BREAK ENCRYPTION

  • Data Driven 2: California Dragnet—New Data Set Shows Scale of Vehicle Surveillance in the Golden State
    by Dave Maass on April 22, 2021 at 8:41 pm

    This project is based on data processed by student journalist Olivia Ali, 2020 intern JJ Mazzucotelli, and research assistant Liam Harton, based on California Public Records Act requests filed by EFF and dozens of students at the University of Nevada, Reno Reynolds School of Journalism.  Tiburon, California: a 13-square-mile peninsula town in Marin County, known for its glorious views of the San Francisco Bay and its eclectic retail district.  What the town’s tourism bureau may not want you to know: from the moment you drive into the city limits, your vehicle will be under extreme surveillance. The Tiburon Police Department has the dubious distinction of collecting, mile-for-mile, more data on drivers than any other agency surveyed for a new EFF data set.  Today, EFF is releasing Data Driven 2: California Dragnet, a new public records collection and data set that shines light on the massive amount of vehicle surveillance conducted by police in California using automated license plate readers (ALPRs)—and how very little of this surveillance is actually relevant to an active public safety interest.  Download the Data Driven 2: California Dragnet data set. In 2019 alone, just 82 agencies collected more than 1 billion license plate scans using ALPRs. Yet, 99.9% of this surveillance data was not actively related to an investigation when it was collected. Nevertheless, law enforcement agencies stockpile this data, often for years, and often share the data with hundreds of agencies around the country.   This means that law enforcement agencies have built massive databases that document our travel patterns, regardless of whether we’re under suspicion. With a few keystrokes, a police officer can generate a list of places a vehicle has been seen, with few safeguards and little oversight.   
EFF’s dataset also shows for the first time how some jurisdictions—such as Tiburon and Sausalito in Northern California, and Beverly Hills and Laguna Beach in Southern California—are scanning drivers at a rate far higher than the statewide mean. In each of those cities, an average vehicle will be scanned by ALPRs every few miles it drives. Tiburon first installed Vigilant Solutions ALPRs at the town’s entrance and exit points and downtown about a decade ago. Today, with just six cameras, the program has evolved into a massive surveillance operation: on average, a vehicle will be scanned by cops once for every 1.85 miles it drives. Tiburon Police stockpile about 7.7 million license plate scans annually, and yet only 0.01%, or 1 in 10,000, of those records were related to a crime or other public safety interest when they were collected. The data is retained for a year.  ALPRs are a form of location surveillance: the data they collect can reveal our travel patterns and daily routines, the places we visit, and the people with whom we associate. In addition to the civil liberties threat, these data systems also create great security risks, with multiple known breaches of ALPR data and technology occurring over the last few years.  EFF sought comment from Tiburon Police Chief Ryan Monaghan, who defended the program via email. “Since the deployment of the ALPRs, our crime data from the five years prior to having the ALPRs as well as the five years after and beyond have shown marked reductions in stolen vehicles and thefts from vehicles and an increase in the recovery of stolen vehicles,” he wrote.   EFF’s public records data set, which builds on a 2016-2017 survey (Data Driven 1), aims to provide journalists, policymakers, researchers, and local residents with data to independently evaluate and understand the state of California’s ALPR dragnet.  What Are Automated License Plate Readers? A fixed ALPR and a mobile ALPR. 
Credit: Mike Katz-Lacabe (CC BY) ALPRs are cameras that snap photographs of license plates and then upload the plate numbers, time/date, and GPS coordinates to a searchable database. This allows police to identify and track vehicles in real time, search the historical travel patterns of any vehicle, and identify vehicles that have been spotted near certain locations.  Cops attach these cameras to fixed locations, like highway overpasses or traffic lights. Law enforcement agencies also install ALPRs on patrol vehicles, allowing police to capture data on whole neighborhoods by driving block-by-block, a tactic known as “gridding.” In 2020, the California State Auditor issued a report that found that agencies were collecting large amounts of data without following state law and without addressing some of the most basic cybersecurity and civil liberties concerns when it comes to Californians’ data.  About the Data Set The Data Driven 2: California Dragnet data set is downloadable as an Excel (.xlsx) file, with the data analysis broken into various tabs. We have also presented selections from the data as a table below.  The dataset is based on dozens of California Public Records Act requests filed by EFF and students at the Reynolds School of Journalism at the University of Nevada, Reno in collaboration with MuckRock News. Data Driven 2 is a sequel to EFF and MuckRock’s 2018 Data Driven report.  To create the data set, we filed more than 100 requests for information under the California Public Records Act. We sought the following records from each agency:  The number of license plate scans captured by ALPRs per year in 2018 and 2019. These are also called “detections.”  The number of scanned license plates each year for 2018 and 2019 that matched a “hot list,” the term of art for a list of vehicles of interest. These matches are called “hits.”  The list of other agencies that the law enforcement agency is exchanging data with, including both ALPR scans and hot lists. 
 Most of these public records requests were filed in 2020. For a limited number of requests filed in 2021, we also requested detection and hit information for 2020. The agencies were selected because they had previously provided records for the original report or were known to use an ALPR system that could export the type of aggregate data required for this analysis. Not all agencies provided records in response to our requests.  The spreadsheet includes links to public records for each agency, along with a table of their statistics. In addition, we have included “Daily Vehicle Miles Travelled” from the California Department of Transportation for help in comparing jurisdictions.  The dataset covers 89 agencies from all corners of the state. However, the data was not always presented by the agencies in a uniform manner. Only 63 agencies provided comprehensive and separated data for both 2018 and 2019. Other agencies either produced data for incomparable time periods or provided incomplete or clearly erroneous data. In some cases, agencies did not have ALPRs for the full period, having either started or ended their programs mid-year.  (Note: More than 250 agencies use ALPR in California). In general, our analysis below only includes agencies that provided both 2018 and 2019 data, which we then averaged together. However, we are including all the data we received in the spreadsheet. Hit Ratio: Most ALPR Data Involves Vehicles Not Under Active Suspicion  One way to examine ALPR data is to ask whether the data collected is relevant to an active investigation or other public safety interest at the time it is collected.  Law enforcement agencies create “hot lists” of license plates, essentially lists of vehicles they are actively looking for, for example, because they’re stolen, are suspected of being connected to a crime, or belong to an individual under state supervision, such as a sex offender. 
When an ALPR scans a license plate that matches a hot list, the system issues an alert to the law enforcement agency that the vehicle was sighted.  Data that is not on a hot list is still stored, often for more than a year depending on the agency’s policy, despite a lack of relevance to an active public safety interest. Police have argued they need this data in case one day you commit a crime, at which point they can look back at your historical travel patterns. EFF and other privacy organizations argue that this is a fundamental violation of the privacy of millions of innocent drivers, as well as an enormous cybersecurity risk.  The 63 agencies that provided us 2018-2019 data collected a combined average of 840,000,000 plate images each year. However, only 0.05% of the data matched a hot list.  Some agencies only provided us data for three months or one year. Other agencies lumped all the data together. While we have left them out of this analysis, their hit ratios closely followed what we saw in the agencies that provided us consistent data.  The top 15 data-collecting law enforcement agencies accounted for 1.4 billion license plate scans over two years. On average, those 15 law enforcement agencies reported that only 0.05% of the data was on a hot list.  
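The hot-list arithmetic above can be reproduced in a couple of lines. A minimal sketch using the report's figures (the function name is ours, not part of the data set):

```python
def hit_ratio_percent(hits: int, detections: int) -> float:
    """Percentage of ALPR detections that matched a hot list."""
    return 100.0 * hits / detections

# Per the report: ~840,000,000 scans per year across 63 agencies,
# of which only ~0.05% matched a hot list. That works out to roughly
# 420,000 hot-list hits, with the remaining ~839.6 million scans
# collected from drivers under no active suspicion.
annual_scans = 840_000_000
estimated_hits = round(annual_scans * 0.05 / 100)

print(estimated_hits)                                    # 420000
print(hit_ratio_percent(estimated_hits, annual_scans))   # 0.05
```

The same helper applies row-by-row to the per-agency tables below (hits divided by detections, times 100).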
Agency/Records Link | 2018-2019 License Plate Scans | Hit Ratio (Percentage on a Hot List)
San Bernardino County Sheriff’s Office | 439,272,149 | 0.05%
Carlsbad Police Department | 161,862,285 | 0.02%
Sacramento Police Department | 142,170,129 | 0.08%
Torrance Police Department | 132,904,262 | 0.04%
Chino Police Department | 83,164,449 | 0.05%
Beverly Hills Police Department | 67,520,532 | 0.03%
Fontana Police Department | 66,255,835 | 0.06%
Contra Costa County Sheriff’s Office | 65,632,313 | 0.11%
Claremont Police Department (Link 2) | 45,253,735 | 0.04%
Long Beach Police Department | 44,719,586 | 0.09%
Livermore Police Department | 39,430,629 | 0.04%
Laguna Beach Police Department | 37,859,124 | 0.04%
Pleasant Hill Police Department | 27,293,610 | 0.03%
Merced Police Department | 25,895,158 | 0.04%
Brentwood Police Department | 25,440,363 | 0.07%
In his written response to our finding that Tiburon Police store data for a year, even though 99.9% of the data was not tied to an alert, Chief Monaghan wrote: “Our retention cycle for the scan data is in line with industry-wide standards for ALPRs and does not contain any personal identifying information. Like other agencies that deploy ALPRs, we retain the information to use for investigative purposes only as many crimes are reported after the fact and license plate data captured by the ALPRs can be used as an investigative tool after the fact.”  Monaghan presents a few misconceptions worth dispelling. First, while many agencies do store data for one year or more, there is no industry standard for ALPR retention. For example, Flock Safety, a vendor that provides ALPRs to many California agencies, deletes data after 30 days. The California Highway Patrol is only allowed to hang onto data for 60 days. According to the National Conference of State Legislatures, Maine has a 21-day retention period and Arkansas has a 150-day retention period. In New Hampshire, the law requires deletion after three minutes if the data is not connected to a crime. 
Finally there is a certain irony when law enforcement claims that ALPR data is not personally identifying information, when one of the primary purposes of the data is to assist in identifying suspects. In fact, California’s data breach laws explicitly name ALPR as a form of personal information when it is combined with a person’s name. It is very easy for law enforcement to connect ALPR data to other data sets, such as a vehicle registration database, to determine the identity of the owner of the vehicle. In addition, ALPR systems also store photographs, which can potentially capture images of drivers’ faces.  Indeed, Tiburon’s own ALPR policy says that raw ALPR data cannot be released publicly because it may contain confidential information. This is consistent with a California Supreme Court decision that found that the Los Angeles Police Department and Los Angeles County Sheriff’s Department could not release unredacted ALPR data in response to a CPRA request because “the act of revealing the data would itself jeopardize the privacy of everyone associated with a scanned plate. Given that real parties each conduct more than one million scans per week, this threat to privacy is significant.” The Supreme Court agreed with a lower court ruling that “ALPR data showing where a person was at a certain time could potentially reveal where that person lives, works, or frequently visits. ALPR data could also be used to identify people whom the police frequently encounter, such as witnesses or suspects under investigation.” Scans Per Mile: Comparing the Rate of ALPR Surveillance  While the agencies listed in the previous section are each collecting a massive amount of data, it can be difficult to interpret how  law enforcement agencies’ practices compare to one another. Obviously, some cities are bigger than others, and so we sought to establish a way to measure the proportionality of the ALPR data collected.  
One way to do this is to compare the number of license plate scans to the size of the jurisdiction’s population. However, this method may not provide the clearest picture, since many commuters, tourists, and rideshare drivers cross city lines many times a day. In addition, the larger and denser a population is, the fewer people may own vehicles, with more relying on public transit instead. So we ruled out that method.  Another method is to compare the license plate scans to the number of vehicles registered or owned in a city. That runs into a similar problem: people don’t always work in the same city where their car is registered, particularly in the San Francisco Bay Area and Los Angeles County.  So we looked for a metric that would allow us to compare the number of license plate scans to how much road traffic there is in a city.  Fortunately, for each city and county in the state, the California Department of Transportation compiles annual data on “Vehicle Miles Traveled” (VMT), a number representing the average number of miles driven by vehicles on roads in the jurisdiction each day. VMT is becoming a standard metric in urban and transportation planning. By comparing license plate scans to VMT, we can begin to address the question: are some cities disproportionately collecting more data than others? The answer is yes: many cities are collecting data at a far higher rate than others. There are a few different ways to interpret the rate. For example, in Tiburon, police are collecting on average one license plate scan for every 1.85 miles driven. That means that the average vehicle will be captured every 1.85 miles. To put that in perspective, a driver from outside of town who commutes to and from downtown Tiburon (about four miles each way to the town limits), five days a week, 52 weeks a year, should expect their license plate to be scanned on average 1,124 times annually.   
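The commuter estimate above follows directly from the miles-per-scan figure. A quick sketch, using the numbers given in the text (the helper name is ours):

```python
def expected_annual_scans(miles_per_scan: float, round_trip_miles: float,
                          days_per_week: int = 5,
                          weeks_per_year: int = 52) -> float:
    """Expected ALPR scans per year for a regular commuter, given the
    jurisdiction's average number of miles driven per scan."""
    annual_miles = round_trip_miles * days_per_week * weeks_per_year
    return annual_miles / miles_per_scan

# Tiburon: one scan per 1.85 miles; about 4 miles each way to the town
# limits, so an 8-mile round trip, five days a week, 52 weeks a year.
print(round(expected_annual_scans(1.85, 8.0)))  # 1124
```

Note this is an average over all road miles; as the caveats below explain, a commuter passing fixed cameras at the town's entrance would likely be scanned at an even higher rate.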
Another way to look at it is that for every 100 cars that drive one mile, Tiburon ALPRs on average will scan 54 license plates.  Via email, Tiburon Chief Monaghan responded: “In terms of the number of scans compared to VMT, our ALPRS are strategically placed on two main arterial roadways leading in and out of Tiburon, Belvedere, and incorporated sections of Tiburon. As Tiburon is located on a peninsula, there are limited roadways in and out. The roadways where the ALPRs are deployed are not only used by those who live in the area, but also by commuters who use the ferries that operate out of Tiburon, those who work in Tiburon and Belvedere, parents taking their kids to schools in the area, and those visiting.” Tiburon isn’t the only agency collecting data at a high rate.  In Sausalito, on average police are capturing a plate for every 2.14 miles a car drives. That’s the equivalent of 46 scans per 100 vehicles that drive one mile.  In Laguna Beach, it’s one ALPR scan for every three miles. On average, ALPRs will scan 33 plates for every 100 cars that drive a single mile.  In Beverly Hills, it’s one ALPR scan for every 4.63 miles, or 21 scans per 100 cars that drive one mile.  In comparison: across 60 cities, police collectively scanned on average one plate for every 48 miles driven by vehicles. Tiburon scanned license plates at more than 25 times that rate.  For this analysis, we only included municipal police that provided data for both 2018 and 2019. We then used those figures to find an average number of daily plates scanned, which we then compared to the cities’ average daily VMTs for 2018-2019.  Here are the top municipal police departments ranked by miles per scan.  
Agency/Records Link | Average Number of Vehicle Miles Traveled Per Scan (2018-2019) | Average Number of Scans Per 100 Vehicle Miles Traveled (2018-2019)
Tiburon Police Department (Link 2) | 1.85 miles | 54.11 scans
Sausalito Police Department (Link 2) | 2.14 miles | 46.65 scans
Laguna Beach Police Department | 3.02 miles | 33.12 scans
Beverly Hills Police Department | 4.63 miles | 21.60 scans
Claremont Police Department (Link 2) | 5.99 miles | 16.69 scans
La Verne Police Department | 7.91 miles | 12.64 scans
Carlsbad Police Department | 8.19 miles | 12.21 scans
Chino Police Department | 8.46 miles | 11.83 scans
Torrance Police Department | 9.74 miles | 10.27 scans
Clayton Police Department | 10.22 miles | 9.79 scans
Pleasant Hill Police Department | 10.93 miles | 9.15 scans
Oakley Police Department | 10.97 miles | 9.12 scans
Brentwood Police Department | 11.42 miles | 8.76 scans
Martinez Police Department | 13.59 miles | 7.36 scans
A few caveats about this analysis:  We must emphasize the term “average.” Road traffic is not even across every street, nor are ALPRs distributed evenly across a city or county. A driver who only drives a half mile along backroads each day may never be scanned. Or if a city installs ALPRs at every entrance and exit to town, as Tiburon has, a driver who commutes to and from the city every day would likely be scanned at a much higher rate.  In addition, many police departments attach ALPRs to their patrol cars. This means they are capturing data on parked cars they pass. Your risk of being scanned by an ALPR does not increase linearly with driving—someone who leaves their car parked all year may still be scanned several times. In many jurisdictions, both the city police and the county sheriff use ALPRs; our data analysis does not cover overlapping data collection. Our intention is not to help drivers determine exactly how often they’ve been scanned but to compare the volume of data collection across municipalities of different sizes.  
Finally, VMT is not an exact measurement but rather an estimate that is based on measuring roadway traffic on major arteries and projecting across the total number of miles of road in the city.  As such, this ratio presents a useful metric to gauge proportionality broadly and should be interpreted as an estimate or a projection.  What to Do With This Data The Data Driven 2 dataset is designed to give a bird’s eye view of the size and scope of data collection through ALPRs.  However, it does not dive into more granular variations in ALPR programs between agencies, such as the number or type of ALPR cameras used by each agency. For example, some agencies might use 50 stationary cameras, while others may use three mobile cameras on patrol cars and still others use a combination of both. Some agencies may distribute data collection evenly across a city, and others may target particular neighborhoods.  In many cases, agencies provided us with “Data Sharing Reports” that list everyone with whom they are sharing data. This can be useful for ascertaining whether agencies are sharing data broadly with agencies outside of California or even with federal agencies, such as immigration enforcement. Please note that the data sharing list does change from day-to-day as agencies join and leave information exchange networks.  Journalists and researchers can and should use our dataset as a jumping-off point to probe deeper.  By filing public records requests or posing questions directly to police leaders, we can find out how agencies are deploying ALPRs and how that impacts the amount of data collected, the usefulness of the data, and the proportionality of the data. Under a California Supreme Court ruling, requesters may also be able to obtain anonymized data on where ALPR data was collected. 
Conclusion

Law enforcement often tries to argue that using ALPR technology to scan license plates is no different than a police officer on a stakeout outside a suspected criminal enterprise who writes down the license plates of every car that pulls up.  But let’s say you lived in a place like Brentwood, where police collect on average 25 million license plate scans a year. Let’s assume that a police observer is able to snap a photo and scribble down the plate number, make, model, and color of a vehicle once every minute (which is still pretty superhuman).  Brentwood would have to add 200 full-time employees to collect as much data manually as they can with their ALPRs.  By comparison: the Brentwood Police Department currently has 71 officers, 36 civilian support workers, and 20 volunteers. The entire city of Brentwood has only 283 employees.  If 200 human data recorders were positioned throughout a city, spying on civilians, it’s unlikely people would stand for it. This report illustrates how that level of surveillance is indeed occurring in many cities across California. Just because the cameras are more subtle doesn’t make them any less creepy or authoritarian.   
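The staffing comparison above is back-of-envelope arithmetic. A rough sketch under the same assumptions stated in the text (one plate recorded per minute, and a standard 40-hour, 52-week work year, which is our assumption):

```python
# How many full-time human observers would it take to match Brentwood's
# ~25 million ALPR scans per year, recording one plate per minute?
SCANS_PER_YEAR = 25_000_000
PLATES_PER_MINUTE = 1
HOURS_PER_FTE_YEAR = 40 * 52  # assumed full-time work year: 2,080 hours

# One full-time recorder manages 60 plates/hour * 2,080 hours = 124,800/year.
plates_per_fte = PLATES_PER_MINUTE * 60 * HOURS_PER_FTE_YEAR

fte_needed = SCANS_PER_YEAR / plates_per_fte
print(round(fte_needed))  # 200
```

That rounds to the roughly 200 full-time employees cited in the conclusion, before accounting for breaks, shift changes, or night coverage.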
2018-2019 Detections with Hit Ratio
Agency/Records Link | 2018-2019 Detections Combined | Hit Ratio (Percentage on a Hot List)
Antioch Police Department | 22,524,415 | 0.10%
Auburn Police Department | 3,190,715 | 0.03%
Bakersfield Police Department | 370,635 | 0.11%
Bell Gardens Police Department | 9,476,932 | 0.02%
Belvedere Police Department | 4,089,986 | 0.01%
Beverly Hills Police Department | 67,520,532 | 0.03%
Brawley Police Department | 870,011 | 0.03%
Brentwood Police Department | 25,440,363 | 0.07%
Buena Park Police Department | 854,156 | 0.04%
Carlsbad Police Department | 161,862,285 | 0.02%
Cathedral City Police Department | 104,083 | 0.06%
Chino Police Department | 83,164,449 | 0.05%
Chula Vista Police Department | 672,599 | 0.03%
Citrus Heights Police Department | 18,804,058 | 0.04%
Claremont Police Department (Link 2) | 45,253,735 | 0.04%
Clayton Police Department | 9,485,976 | 0.02%
Contra Costa County Sheriff’s Office | 65,632,313 | 0.11%
Cypress Police Department | 288,270 | 0.04%
Emeryville Police Department | 1,579,100 | 0.05%
Fairfield Police Department | 785,560 | 0.06%
Folsom Police Department | 14,624,819 | 0.03%
Fontana Police Department | 66,255,835 | 0.06%
Fresno Police Department | 3,673,958 | 0.15%
Fullerton Police Department | 742,996 | 0.05%
Galt Police Department | 23,478 | 0.02%
Garden Grove Police Department | 332,373 | 0.24%
Gardena Police Department | 5,762,032 | 0.05%
Imperial Police Department | 23,294,978 | 0.03%
Irvine Police Department | 651,578 | 0.05%
La Habra Police Department | 888,136 | 0.05%
La Mesa Police Department | 1,437,309 | 0.05%
La Verne Police Department | 24,194,256 | 0.03%
Laguna Beach Police Department | 37,859,124 | 0.04%
Livermore Police Department | 39,430,629 | 0.04%
Lodi Police Department | 3,075,433 | 0.05%
Long Beach Police Department | 44,719,586 | 0.09%
Marin County Sheriff’s Office | 1,547,154 | 0.04%
Merced Police Department | 25,895,158 | 0.04%
Mill Valley Police Department | 529,157 | 0.12%
Monterey Park Police Department | 2,285,029 | 0.04%
Newport Beach Police Department (Link 2) | 772,990 | 0.04%
Orange County Sheriff’s Office | 2,575,993 | 0.09%
Palos Verdes Estates Police | 16,808,440 | 0.03%
Pasadena Police Department | 3,256,725 | 0.03%
Pleasant Hill Police Department | 27,293,610 | 0.03%
Pomona Police Department | 11,424,065 | 0.10%
Redondo Beach Police Department (Link 2) | 18,436,371 | 0.04%
Sacramento Police Department | 142,170,129 | 0.08%
San Bernardino County Sheriff’s Office | 439,272,149 | 0.05%
San Diego County Sheriff’s Office | 13,542,616 | 0.04%
San Diego Police Department | 138,146 | 0.07%
San Mateo County Sheriff’s Office (Link 2) | 4,663,684 | 0.02%
Sausalito Police Department (Link 2) | 15,387,157 | 0.02%
Simi Valley Police Department | 480,554 | 0.11%
Stanislaus County Sheriff’s Office | 6,745,542 | 0.07%
Stockton Police Department | 1,021,433 | 0.09%
Tiburon Police Department (Link 2) | 15,424,890 | 0.01%
Torrance Police Department | 132,904,262 | 0.04%
Tracy Police Department | 1,006,393 | 0.06%
Tustin Police Department (Link 2) | 1,030,106 | 0.04%
West Sacramento Police Department | 2,337,027 | 0.05%
Westminster Police Department | 1,271,147 | 0.05%
Yolo County Sheriff’s Office | 3,049,884 | 0.02%
Irregular Agencies
These agencies responded to our records requests, but did not provide complete, reliable, or directly comparable information.   
Agency/Records Link | Detection Years Used in Hit Ratio | Detections | Hit Ratio (Percentage on a Hot List)
American Canyon Police Department | 2018 | 394,827 | 0.18%
Beaumont Police Department | 2019 | 83,141 | 0.10%
Bell Police Department | 2018 | 806,327 | Data Not Available
Burbank Police Department | 2020 | 364,394 | 0.05%
Coronado Police Department | 2019 | 616,573 | 0.07%
CSU Fullerton Police Department | 2019 | 127,269 | 0.05%
Desert Hot Springs Police Department | Data Not Available | Data Not Available | Data Not Available
El Segundo Police Department | 2020 | 24,797,764 | 0.07%
Fountain Valley Police Department | 2018-2020 | 780,940 | Data Not Available
Glendale Police Department | 2019 | 119,356 | 0.04%
Hemet Police Department | 2019 | 84,087 | 0.10%
Hermosa Beach Police Department | 2019 | 274,577 | 0.51%
Martinez Police Department | 2019 | 12,990,796 | Data Not Available
Modesto Police Department | 2019 | 10,262,235 | 0.06%
Oakley Police Department | 2019 | 8,057,003 | Data Not Available
Ontario Police Department | Jan 19 – Feb 17, 2021 | 2,957,671 | 0.07%
Orange Police Department | 2018 | 387,592 | 0.06%
Palm Springs Police Department | 2019 | 58,482 | 0.29%
Redlands Police Department | 2019 | 4,027,149 | 0.08%
Ripon Police Department | 2019 | 2,623,741 | 0.05%
Roseville Police Department | 2019 | 3,733,042 | 0.03%
San Joaquin County Sheriff’s Office | 2018-2020 | 155,105 | 0.04%
San Jose Police Department | 2020 | 1,686,836 | 0.09%
Seal Beach Police Department | 2018 | 38,247 | 0.49%
Woodland Police Department | 2019 | 1,382,297 | 0.05%
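The “hit ratio” in these tables is simply hot-list matches divided by total detections, expressed as a percentage. A minimal sketch of the arithmetic (the Antioch figures come from the table above; the per-hit count is derived from the reported ratio, not separately reported by the agency):

```python
def hit_ratio(hits: int, detections: int) -> float:
    """Percentage of ALPR detections that matched a hot list."""
    return 100 * hits / detections

# Antioch PD reported 22,524,415 detections at a 0.10% hit ratio,
# implying roughly 22,524 hot-list matches -- and roughly 22.5 million
# scans of vehicles that were not on any list.
detections = 22_524_415
hits = round(detections * 0.10 / 100)
print(f"{hits:,} hits, {hit_ratio(hits, detections):.2f}% hit ratio")
```

The point the table makes is in the denominator: even the highest hit ratios here mean that well over 99% of recorded plates belonged to drivers suspected of nothing.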

  • No Digital Vaccine Bouncers
    by Alexis Hancock on April 22, 2021 at 6:37 pm

The U.S. is distributing more vaccines, and the population is gradually becoming vaccinated. Returning to normal activity and movement has become the main focus for many Americans who want to travel or see family. An increasingly common proposal to get there is digital proof-of-vaccination, sometimes called “Vaccine Passports.” On the surface, this may seem like a reasonable solution. But to “return to normal,” we also have to consider that inequity and problems with access are a part of that normal. These proposals also require a new infrastructure and culture of doorkeeping, with public places regularly requiring visitors to display a token as a condition of entry. This would be a giant step towards pervasive tracking of our day-to-day movements. And these systems would create new ways for corporations to monetize our data and for thieves to steal it. That’s why EFF opposes new systems of digital proof-of-vaccination as a condition of going about our day-to-day lives. They’re not “vaccine passports” that will speed our way back to normal. They’re “vaccine bouncers” that will unnecessarily scrutinize us at doorways and unfairly turn many of us away. What Are Vaccine Bouncers? So-called “vaccine passports” are digital credentials proposed as convenient, accessible ways to store and present your medical data: in this case, proof that you have been vaccinated. These are not actual passports for international travel, nor are they directly related to the systems we already have in place to prove vaccination. Though the proposals vary, all of them would display medical data in a way that is not typical for our society as a whole. These schemes require the creation of a vast new electronic gatekeeping system. People will need to download a token to their phone, or in some cases may print that token and carry it with them. Public places will need to acquire devices that can read these tokens. 
To enter public places, people will need to display their token to a doorkeeper. Many people will be bounced away at the door, because they are not vaccinated, or they left their phone at home, or the system is malfunctioning. This new infrastructure and culture will be difficult to dismantle when we reach herd immunity. We already have vaccination documents we need to obtain for international travel to certain countries. But even the World Health Organization (W.H.O.), the entity that issues Yellow Cards to determine if one has had a Yellow Fever vaccine, has come out against vaccine passports. Requiring people to present their medical data to go to the grocery store, access public services, and other vital activities calls into question who will be ultimately barred from coming in. A large number of people not only in the U.S., but worldwide, do not have access to any COVID vaccines. Many others do not have access to mobile phones, or even to the printers required to create the paper QR code that is sometimes suggested as the supposed work-around. Also, many solutions will be built by private companies offering smartphone applications. Meaning, they will give rise to new databases of information not protected by any privacy law and transmitted on a daily basis far more frequently than submitting a one-time paper proof-of-vaccination to a school. Since we have no adequate federal data privacy law, we are relying on the pinky-promises of private companies to keep our data private and secure. We’ve already seen mission creep with digital bouncer systems. Years ago, some bars deployed devices that scanned patrons’ identification as a condition of entry. The rationale was to quickly ascertain, and then forget, a narrow fact about patrons: whether they are old enough to buy alcohol, and thus enter the premises. Then these devices started to also collect information from patrons, which bars share with each other. 
Thus, we are not comforted when we hear people today say: “don’t worry, digital vaccine bouncers will only check whether a person was vaccinated, and will not also collect information about them.” Once the infrastructure is built, it requires just a few lines of code to turn digital bouncers into digital panopticons. Temporary Measures with Long Term Consequences When we get to an approximation of normal, what is the plan for vaccine passports? Most proposals are not clear on this point. What will become of that medical data? Will there be a push for making this a permanent part of life? As with any massive new technological system, it will take significant time and great effort to make the system work. We’ve already seen how easy it is to evade New York’s new vaccine bouncer system, and how other digital COVID systems, due to their flaws, fail to advance public health. Even with the best efforts, by the time the bugs are worked out of a new digital vaccine system for COVID, it may not be helpful to combat the pandemic. There’s no need to rush into building a system that will only provide value to the companies that profit by building it. Instead, our scarce resources should go to getting more people vaccinated. We are all in this together, so we should be opening up avenues of access for everyone to a better future in this pandemic. We should not be creating more issues, concerns, and barriers with experimental technology that needs to be worked out during one of the most devastating modern global crises of our time.

  • EFF Sues Proctorio on Behalf of Student It Falsely Accused of Copyright Infringement to Get Critical Tweets Taken Down
    by Karen Gullo on April 21, 2021 at 11:17 pm

Links to Software Code Excerpts in Tweets Are Fair Use

Phoenix, Arizona—The Electronic Frontier Foundation (EFF) filed a lawsuit today against Proctorio Inc. on behalf of college student Erik Johnson, seeking a judgment that he didn’t infringe the company’s copyrights when he linked to excerpts of its software code in tweets criticizing the software maker.

Proctorio, a developer of exam administration and surveillance software, misused the copyright takedown provisions of the Digital Millennium Copyright Act (DMCA) to have Twitter remove posts by Johnson, a Miami University computer engineering undergraduate and security researcher. EFF and co-counsel Osborn Maledon said in a complaint filed today in U.S. District Court, District of Arizona, that Johnson made fair use of excerpts of Proctorio’s software code, and that the company’s false claims of infringement interfered with Johnson’s First Amendment right to criticize the company.

“Software companies don’t get to abuse copyright law to undermine their critics,” said EFF Staff Attorney Cara Gagliano. “Using pieces of code to explain your research or support critical commentary is no different from quoting a book in a book review.”

Proctoring apps like Proctorio’s are privacy-invasive software that “watches” students through eye-tracking and face detection for supposed signs of cheating as they take tests or complete schoolwork. The use of these “disciplinary technology” programs has skyrocketed amid the pandemic, raising questions about the extent to which they threaten student privacy and disadvantage students without access to high-speed internet and quiet spaces.

Proctorio has responded to public criticism by attacking people who speak out. The company’s CEO released on Reddit the contents of a student’s chat log, captured by Proctorio, after the student posted complaints about the software on the social network. The company has also sued a remote learning specialist in Canada for posting links to Proctorio’s publicly available YouTube videos in a series of tweets showing that the software tracks “abnormal” eye and head movements it deems suspicious.

Concerned about how much private information Proctorio collects from students’ computers, Johnson, whose instructors have given tests using Proctorio, examined the company’s software, including the files that are downloaded to any computer where the software is installed. He published a series of tweets in September critiquing Proctorio, linking in three of those tweets to short software code excerpts that demonstrate the extent of the software’s tracking and access to users’ computers. In another tweet, Johnson included a screenshot of a video illustrating how the software is able to create a 360-degree image of students’ rooms that is accessible to teachers and, seemingly, Proctorio’s agents.

“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said Gagliano. “We’re asking the court for a declaratory judgment that there is no infringement, to prevent further legal threats and takedown attempts against Johnson for using code excerpts and screenshots to support his comments.”

For the complaint:
For more on proctoring surveillance:
Contact: Cara Gagliano, Staff Attorney, [email protected]

  • Fighting FLoC and Fighting Monopoly Are Fully Compatible
    by Cory Doctorow on April 21, 2021 at 9:06 pm

    Are tech giants really damned if they do and damned if they don’t (protect our privacy)? That’s a damned good question that’s been occasioned by Google’s announcement that they’re killing the invasive, tracking third-party cookie (yay!) and replacing it with FLoC, an alternative tracking scheme that will make it harder for everyone except Google to track you (uh, yay?)  (You can find out if Google is FLoCing with you with our Am I FLoCed tool). Google’s move to kill the third-party cookie has been greeted with both cheers and derision. On the one hand, some people are happy to see the death of one of the internet’s most invasive technologies. We’re glad to see it go, too – but we’re pretty upset to see that it’s going to be replaced with a highly invasive alternative tracking technology (bad enough) that can eliminate the majority of Google’s competitors in the data-acquisition and ad-targeting sectors in a single stroke (worse).  It’s no wonder that so many people have concluded that privacy and antitrust are on a collision course. Google says nuking the third-party cookie will help our privacy, specifically because it will remove so many of its (often more unethical) ad-tech competitors from the web.  But privacy and competition are not in conflict.  As EFF’s recent white paper demonstrated, we can have Privacy Without Monopoly. In fact, we can’t settle for anything less. FLoC is quite a power-move for Google. Faced with growing concerns about privacy, the company proposes to solve them by making itself the protector of our privacy, walling us off from third-party tracking except when Google does it. All the advertisers that rely on non-Google ad-targeting will have to move to Google, and pay for their services, using a marketplace that they’ve rigged in their favor.  To give credit where it is due, the move does mean that some bad actors in the digital ad space may be thwarted. But it’s a very cramped view of how online privacy should work. 
Google’s version of protecting our privacy is appointing itself the gatekeeper who decides when we’re spied on while skimming from advertisers with nowhere else to go. Compare that with Apple, which just shifted the default to “no” for all online surveillance by apps, period (go, Apple!). And while here we think Apple is better than Google, that’s not how any of this should work. The truth is, despite occasional counter-examples, the tech giants can’t be relied on to step up to provide real privacy for users when it conflicts with their business models.  The baseline for privacy should be a matter of law and basic human rights, not just a matter of a corporate whim. America is long, long overdue for a federal privacy law with a private right of action. Users must be empowered to enforce privacy accountability, instead of relying on the largesse of the giants or on overstretched civil servants.  Just because FLoC is billed as pro-privacy and also criticized as anti-competitive, it doesn’t mean that privacy and competition aren’t compatible.  To understand how that can be, first remember the reason to support competition: not for its own sake, but for what it can deliver to internet users. The benefit of well-thought-through competition is more control over our digital lives and better (not just more) choices. Competition on its own is meaningless or even harmful: who wants companies to compete to see which one can trick or coerce you into surrendering your fundamental human rights, in the most grotesque and humiliating ways at the least benefit to you? To make competition work for users, start with Competitive Compatibility and interoperability – the ability to connect new services to existing ones, with or without permission from their operators, so long as you’re helping users exercise more choice over their online lives.  
A competitive internet – one dominated by interoperable services – would be one where you didn’t have to choose between your social relationships and your privacy. When all your friends are on Facebook, hanging out with them online means subjecting yourself to Facebook’s totalizing, creepy, harmful surveillance.  But if Facebook was forced to be interoperable, then rival services that didn’t spy on you could enter the market, and you could use those services to talk to your friends who were still on Facebook (for reasons beyond your understanding).  This done poorly could be worse for privacy, but done well, it does not have to be. Interoperability is key to smashing monopoly power, and interoperability’s benefits depend on strong laws protecting privacy. With or without interoperability, we need a strong privacy law. Tech companies unilaterally deciding what user privacy means is dangerous, even when they come up with a good answer (Apple) but especially not when their answer comes packaged in a nakedly anticompetitive power-grab (Google). Of course, it doesn’t help that some of the world’s largest, most powerful corporations depend on this unilateral power, and use some of their tremendous profits to fight every attempt to create a strong national privacy law that empowers users to hold them accountable. Competition and privacy reinforce each other in technical ways, too: lack of competition is the reason online tracking technologies all feed the same two companies’ data warehouses. These companies dominate logins, search, social media and the other areas that the people who build and maintain our digital tools need to succeed. A diverse and competitive online world is one with substantial technical hurdles to building the kinds of personal dossiers on users that today’s ad-tech companies depend on for their profitability.  
The only sense in which “pro-privacy” and “competition” are in tension is the twisted sense implied by FLoC, where “pro-privacy” means “only one company gets to track you and present who you are to others.”   Of course that’s incompatible with competition. (What’s more, FLoC won’t even deliver that meaningless assurance. As we note in our original post, FLoC also creates real opportunities for fingerprinting and other forms of re-identification. FLoC is anti-competitive and anti-privacy.) Real privacy—less data-collection, less data-retention and less data-processing, with explicit consent when those activities take place—is perfectly compatible with competition. It’s one of the main reasons to want antitrust enforcement. All of this is much easier to understand if you think about the issues from the perspective of users, not corporations. You can be pro-Apple (when Apple is laying waste to Facebook’s ability to collect our data) and anti-Apple (when Apple is skimming a destructive ransom from software vendors like Hey). This is only a contradiction if you think of it from Apple’s point of view – but if you think of it from the users’ point of view, there’s no contradiction at all. We want competition because we want users to be in control of their digital lives – to have digital self-determination and choices that support that self-determination. Right now, that means that we need a strong privacy law and a competitive landscape that gives breathing space to better options than Google’s “track everything but in a slightly different way” FLoC.   As always, when companies have their users’ backs, EFF has the companies’ backs. And as always, the reason we get their backs is because we care about users, not companies. We fight for the users.
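The fingerprinting risk mentioned above can be made concrete with a back-of-the-envelope entropy calculation. The sketch below is illustrative only: the cohort count is an assumed round number for the example, not Chrome’s actual figure.

```python
import math

def identifying_bits(num_cohorts: int) -> float:
    """Bits of identifying entropy a cohort ID adds to a browser
    fingerprint, assuming users are spread evenly across cohorts."""
    return math.log2(num_cohorts)

# Under an assumed 32,768 cohorts, the cohort ID alone narrows a user
# down by 15 bits. Combined with a few other common signals (timezone,
# fonts, screen size), that can be enough to re-identify someone.
print(f"{identifying_bits(32_768):.0f} bits")
```

This is why exposing any stable cohort label to every site a user visits works against, not for, the privacy it claims to protect.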

  • Indian Government’s Plans to Ban Cryptocurrency Outright Are A Bad Idea
    by Sasha Mathew on April 20, 2021 at 4:30 pm

While Turkey hit the headlines last week with a ban on paying for items with cryptocurrency, the government of India appears to be moving towards outlawing cryptocurrency completely. An unnamed senior government official told Reuters last month that a forthcoming bill this parliamentary session would include the prohibition of the “possession, issuance, mining, trading and transferring [of] crypto-assets.” Officials have subsequently done little to dispel the concern that they are seeking a full cryptocurrency ban: in response to questions by Indian MPs about the timing and the content of a potential Cryptocurrency Act, the Finance Ministry was non-committal, beyond stating that the bill would follow “due process.” If the Indian government plans to effectively police its own draconian rules, it would need to seek to block, disrupt, and spy on Internet traffic. If rumors of a complete ban accurately describe the bill, it would be a drastic and over-reaching prohibition that would require draconian oversight and control to enforce. But it would also be in keeping with previous overreactions to cryptocurrency by regulators and politicians in India. Indian regulators’ involvement with cryptocurrency began four years ago with concerns about consumer safety in the face of scams, Ponzi schemes, and the unclear future of many blockchain projects. The central bank issued a circular prohibiting all regulated entities, including banks, from servicing businesses dealing in virtual currencies. Nearly two years later, the ban was overturned by the Indian Supreme Court on the ground that it amounted to disproportionate regulatory action in the absence of evidence of harm caused to the regulated entities. A subsequent report in 2019 by the Finance Ministry proposed a draft bill that would have led to a broad ban on the use of cryptocurrency. It’s this bill that commentators suspect will form the core of the new legislation. 
The Indian government is worried about the use of cryptocurrency to facilitate illegal activity, but this ignores the many entirely legal uses for cryptocurrencies that already exist and that will continue to develop in the future. Cryptocurrency is naturally more censorship-resistant than many other forms of financial instruments currently available. It provides a powerful market alternative to the existing financial behemoths that exercise control over much of our online transactions today, so that websites engaged in legal (but controversial) speech have a way to receive funds when existing financial institutions refuse to serve them. Cryptocurrency innovation also holds the promise of righting other power imbalances: it can expand financial inclusion by lowering the cost of credit, offering instant transaction resolution, and enhancing customer verification processes. Cryptocurrency can help unbanked individuals get access to financial services. If the proposed cryptocurrency bill does impose a full prohibition, as rumors suggest, the Indian government should consider, too, the enforcement regime it would have to create. Many cryptocurrencies, including Bitcoin, offer some privacy-enhancing features which make it relatively easy for the geographical location of a cryptocurrency transaction to be concealed, so while India’s cryptocurrency users would be prohibited from using local, regulated cryptocurrency services, they could still covertly join the rest of the world’s cryptocurrency markets. As the Internet and Mobile Association of India has warned, the result would be that Indian cryptocurrency transactions would move to “illicit” sites that would be far worse at protecting consumers. Moreover, if the Indian government plans to effectively police its own draconian rules, it would need to seek to block, disrupt, and spy on Internet traffic to detect or prevent cryptocurrency transactions. 
Those are certainly powers that the past and present Indian administrations have sought: but unless they are truly necessary and proportionate to a legitimate aim, such interference will violate international law, and, if India’s Supreme Court decides they are unreasonable, will fail once again to pass judicial muster. The Indian government has claimed that it does want to support blockchain technology in general. In particular, the current government has promoted the idea of a “Digital Rupee”, which it expects to be placed on a statutory footing in the same bill that bans private cryptocurrencies. It’s unclear what the two actions have in common. A centrally-run digital currency has no reason to be implemented on a blockchain, a technology that is primarily needed for distributed trust consensus, and has little applicability when the government itself is providing the centralized backstop for trust. Meanwhile, legitimate companies and individuals exploring the blockchain for purposes for which it is well-suited will always fear falling afoul of the country’s criminal sanctions—which will, Reuters’ source claims, include ten-year prison sentences in its list of punishments. Such liability would be the severest disincentive to any independent investor or innovator, whether they are commercial or working in the public interest. Addressing potential concerns around cryptocurrency by banning the entire technology would be excessive and unjust. It denies Indians access to the innovations that may come from this sector, and, if enforced at all, would require prying into Indians’ digital communications to an unnecessary and disproportionate degree.

  • Senators Demand Answers on the Dangers of Predictive Policing
    by Matthew Guariglia on April 19, 2021 at 7:30 pm

Predictive policing is dangerous and yet its use among law enforcement agencies is growing. Predictive policing advocates, and companies that make millions selling technology to police departments, like to say the technology is based on “data” and therefore cannot be racially biased. But this technology will disproportionately hurt Black and other overpoliced communities, because the data was created by a criminal punishment system that is racially biased. For example, a data set of arrests, even if nominally devoid of any racial information, can still be dangerous by virtue of the fact that police make a disparately high number of arrests in Black neighborhoods. Technology can never predict crime. Rather, it can invite police to regard with suspicion those people who were victims of crime, or who live and work in places where crime has been committed in the past. For all these reasons and more, EFF has argued that the technology should be banned from use by law enforcement agencies, and some cities across the United States have already begun to do so. Now, a group of our federal elected officials is raising concerns about the dangers of predictive policing. Sen. Ron Wyden penned a probing letter to Attorney General Garland asking how the technology is used. He is joined by Rep. Yvette Clarke, Sen. Ed Markey, Sen. Elizabeth Warren, Sen. Jeff Merkley, Sen. Alex Padilla, Sen. Raphael Warnock, and Rep. Sheila Jackson Lee. They ask, among other things, whether the U.S. Department of Justice (DOJ) has done any legal analysis to see if the use of predictive policing complies with the 1964 Civil Rights Act. 
It’s clear that the Senators and Representatives are concerned with the harmful legitimizing effects “data” can have on racially biased policing: “These algorithms, which automate police decisions, not only suffer from a lack of meaningful oversight over whether they actually improve public safety, but are also likely to amplify prejudices against historically marginalized groups.” The elected officials are also concerned about how many jurisdictions the DOJ has helped to fund predictive policing in, and the data collection required to run such programs, as well as whether these programs are measured in any real way for efficacy, reliability, and validity. This is important considering that many of the algorithms being used are withheld from public scrutiny on the assertion that they are proprietary and operated by private companies. Recently, an audit by the state of Utah found that the state had contracted with a company for surveillance, data analysis, and predictive AI, yet the company actually had no functioning AI and was able to hide that fact inside the black box of proprietary secrets. You can read more of the questions the elected officials asked of the Attorney General in the full letter, which you can find below. 
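The amplifying feedback loop described above can be illustrated with a toy model. All numbers here are invented for illustration, and this is not any vendor’s actual algorithm: the sketch assumes patrols go wherever the arrest data points, and that new arrests can only be recorded where patrols are sent, so an initial disparity confirms itself.

```python
# Toy feedback loop: two neighborhoods with identical true offense
# rates, but "A" starts with more recorded arrests (for example, as a
# legacy of past over-policing).
arrests = {"A": 120, "B": 100}

for year in range(5):
    # "Data-driven" allocation: patrol the neighborhood the data flags.
    hot_spot = max(arrests, key=arrests.get)
    # Arrests can only be recorded where officers are sent, so the
    # flagged neighborhood accumulates yet more arrest data.
    arrests[hot_spot] += 50

print(arrests)  # {'A': 370, 'B': 100}
```

After five iterations the model has tripled the recorded gap without ever observing neighborhood "B" again: the data "validates" the allocation precisely because the allocation generated the data.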

  • Video Hearings Tuesday and Wednesday: EFF Will Tell Copyright Office That Consumers Should Have the Freedom to Fix, Modify Digital Devices They Own
    by Karen Gullo on April 19, 2021 at 3:36 pm

DMCA Blocks Consumers from Downloading Apps That Big Tech Companies Don’t Approve Of

San Francisco—On Tuesday, April 20, and Wednesday, April 21, experts from the Electronic Frontier Foundation (EFF) fighting copyright abuse will testify at virtual hearings held by the Copyright Office in favor of exemptions to the Digital Millennium Copyright Act (DMCA) so people who have purchased digital devices—from cameras and e-readers to smart TVs—can repair or modify them, or download new software to enhance their functionality.

The online hearings are part of a rulemaking process held every three years by the Copyright Office to determine whether people are harmed by DMCA “anti-circumvention” provisions, which prohibit anyone from bypassing or disabling access controls built into products by manufacturers to lock down the software that runs them. These provisions are often abused by technology companies to control how their devices are used and to stop consumers, innovators, competitors, researchers, and everyday repair businesses from offering new, lower-cost, and creative services.

EFF Staff Attorney Cara Gagliano will testify Tuesday in support of a universal DMCA exemption for the repair and modification of any software-enabled device, including everything from digital cameras and e-readers to automated litterboxes and robotic pets. The Copyright Office’s existing policy of granting exemptions in piecemeal fashion for certain devices every three years is unjustified and completely inadequate—the legal analysis for the exemption, that it’s needed to allow noninfringing uses, is the same across all devices, Gagliano will testify.

EFF Senior Staff Attorney Mitch Stoltz will testify Wednesday in support of expanding the Copyright Office’s “jailbreaking” exception to the anti-circumvention law. In past years, EFF has fought for and won the right to “jailbreak” or “root” personal computing devices including smartphones, tablets, wearables, smart TVs, and smart speakers, allowing people to install the software of their choice on the devices they own without the manufacturer’s permission. This year, Stoltz will urge the Copyright Office to expand that exemption to cover “streaming boxes” and “streaming sticks”—devices that add “smart TV” functionality to an ordinary TV.

WHAT: Virtual Hearings on DMCA Rulemaking

WHEN AND WHERE:
April 20, 7:30 am – 9:30 am (device repair and modification). To stream via Zoom:
April 21, 7:30 am – 9:30 am (“jailbreaking” streaming boxes). To stream via Zoom:

For EFF comments to the Copyright Office:
For full hearing agendas:
For more about DMCA rulemaking and copyright abuse:

Contact: Cara Gagliano, Staff Attorney, [email protected]; Mitch Stoltz, Senior Staff Attorney, [email protected]

  • Proctoring Tools and Dragnet Investigations Rob Students of Due Process
    by Jason Kelley on April 15, 2021 at 8:21 pm

    Update, April 16, 2021: The Foundation for Individual Rights in Education (FIRE) points out that Dartmouth has publicly expressed a commitment to upholding free speech and dissent on campus.  The medical school should strive to uphold these policies, and as FIRE argues, they may even be considered contracts that the school has breached with its social media policy that prohibits “disparaging” and “inappropriate” online speech. Like many schools, Dartmouth College has increasingly turned to technology to monitor students taking exams at home. And while many universities have used proctoring tools that purport to help educators prevent cheating, Dartmouth’s Geisel School of Medicine has gone dangerously further. Apparently working under an assumption of guilt, the university is in the midst of a dragnet investigation of complicated system logs, searching for data that might reveal student misconduct, without a clear understanding of how those logs can be littered with false positives. Worse still, those attempting to assert their rights have been met with a university administration more willing to trust opaque investigations of inconclusive data sets rather than their own students. The Boston Globe explains that the medical school administration’s attempts to detect supposed cheating have become a flashpoint on campus, exemplifying a worrying trend of schools prioritizing misleading data over the word of their students. The misguided dragnet investigation has cast a shadow over the career aspirations of over twenty medical students. Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating What’s Wrong With Dartmouth’s Investigation In March, Dartmouth’s Committee on Student Performance and Conduct (CSPC) accused several students of accessing restricted materials online during exams. 
These accusations were based on a flawed review of an entire year’s worth of the students’ log data from Canvas, the online learning platform that contains class lectures and information. This broad search was instigated by a single incident of confirmed misconduct, according to a contentious town hall between administrators and students (we’ve re-uploaded this town hall, as it is now behind a Dartmouth login screen). These logs show traffic between students’ devices and specific files on Canvas, some of which contain class materials, such as lecture slides. At first glance, the logs showing that a student’s device connected to class files would appear incriminating: timestamps indicate the files were retrieved while students were taking exams. But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could easily have been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them. Much like a cell phone pinging a tower, the logs show files being pinged in short time periods and sometimes being accessed at the exact second that students are also entering information into the exam, suggesting a non-deliberate process. The logs also reveal that the files accessed are largely irrelevant to the tests in question, further indicating an automated, random process. 
A UCLA statistician wrote a letter explaining that even an automated process can result in multiple false-positive outcomes. Canvas’ own documentation explicitly states that the data in these logs “is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples.” Given the technical realities of how these background refreshes take place, the log data alone should be nowhere near sufficient to convict a student of academic dishonesty. Along with The Foundation for Individual Rights in Education (FIRE), EFF sent a letter to the Dean of the Medical School on March 30th, explaining how these background connections work and pointing out that the university has likely turned random correlations into accusations of misconduct. The Dean’s reply was that the cases are being reviewed fairly. We disagree. It appears that the administration is the victim of confirmation bias, turning flawed evidence into accusations of cheating. The school has admitted in some cases that the log data appeared to have been created automatically, acquitting some students who pushed back. But other students have been sanctioned, apparently entirely based on this spurious interpretation of the log data. Many others are anxiously waiting to hear whether they will be convicted so they can begin the appeal process, potentially with legal counsel. These convictions carry heavy weight, leaving permanent marks on student transcripts that could make it harder for them to enter residencies and complete their medical training. At this level of education, this is not just about being accused of cheating on a specific exam. Being convicted of academic dishonesty could derail an entire career. 
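The false-positive problem can be made concrete with a toy example. The sketch below uses an entirely hypothetical, simplified log format (real Canvas logs differ): when a file access is stamped at the very second a student was typing an answer into the exam tool itself, a background sync is a far more plausible explanation than deliberate access.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified access-log entries -- real Canvas logs differ.
access_log = [
    {"time": datetime(2021, 3, 1, 10, 15, 42), "file": "lecture_07.pdf"},
    {"time": datetime(2021, 3, 1, 10, 15, 42), "file": "syllabus.pdf"},
]

# Times at which the student submitted answers in the exam tool (hypothetical).
exam_inputs = [datetime(2021, 3, 1, 10, 15, 42)]

def likely_automated(entry, inputs, window=timedelta(seconds=1)):
    """Flag an access as likely background sync when it coincides with
    active exam input: a student cannot plausibly type an answer and
    deliberately open unrelated course files in the same second."""
    return any(abs(entry["time"] - t) <= window for t in inputs)

flags = [likely_automated(e, exam_inputs) for e in access_log]
```

This is only a sketch of the reasoning, not an auditing tool; the point is that without this kind of cross-referencing, raw access timestamps alone cannot separate deliberate access from automated refreshes.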
University Stifles Speech After Students Express Concerns Online Worse still, following posts from an anonymous Instagram account apparently run by students concerned about the cheating accusations and how they were being handled, the Office of Student Affairs introduced a new social media policy. The policy was emailed to students on April 7 but backdated to April 5—the day the Instagram posts appeared. The new policy states that, “Disparaging other members of the Geisel UME community will trigger disciplinary review.” It also prohibits social media speech that is not “courteous, respectful, and considerate of others” or speech that is “inappropriate.” Finally, the policy warns, “Students who do not follow these expectations may face disciplinary actions including dismissal from the School of Medicine.” One might wonder whether such a policy is legal. Unfortunately, Dartmouth is a private institution and so not prohibited by the First Amendment from regulating student speech. If it were a public university with a narrower ability to regulate student speech, the school would be stepping outside the bounds of its authority if it enforced the social media policy against medical school students speaking out about the cheating scandal. On the one hand, courts have upheld the regulation of speech by students in professional programs at public universities under codes of ethics and other established guidance on professional conduct. 
For example, in a case about a mortuary student’s posts on Facebook, the Minnesota Supreme Court held that a university may regulate students’ social media speech if the rules are “narrowly tailored and directly related to established professional conduct standards.” Similarly, in a case about a nursing student’s posts on Facebook, the Eleventh Circuit held that “professional school[s] have discretion to require compliance with recognized standards of the profession, both on and off campus, so long as their actions are reasonably related to legitimate pedagogical concerns.” On the other hand, the Sixth Circuit has held that a university can’t invoke a professional code of ethics to discipline a student when doing so is clearly a “pretext” for punishing the student for her constitutionally protected speech. Although the Dartmouth medical school is immune from a claim that its social media policy violates the First Amendment, it seems that the policy might unfortunately be a pretext to punish students for legitimate speech. Although the policy states that the school is concerned about social media posts that are “lapses in the standards of professionalism,” the timing of the policy suggests that the administrators are sending a message to students who dare speak out against the school’s dubious allegations of cheating. This will surely have a chilling effect on the community to the extent that students will refrain from expressing their opinions about events that occur on campus and affect their future careers. The Instagram account was later taken down, indicating that the chilling effect on speech may have already occurred. (Several days later, a person not affiliated with Dartmouth, and therefore protected from reprisal, reposted many of the original account’s posts.) Students are at the mercy of private universities when it comes to whether their freedom of speech will be respected. 
Students select private schools based on their academic reputation and history, and don’t necessarily think about a school’s speech policies. Private schools shouldn’t take advantage of this, and should instead seek to sincerely uphold free speech principles. Investigations of Students Must Start With Concrete Evidence Though this investigation wasn’t the result of proctoring software, it is part and parcel of a larger problem: educators using the pandemic as an excuse to comb for evidence of cheating in places that are far outside their technical expertise. Proctoring tools and investigations like this one flag students based on flawed metrics and misunderstandings of technical processes, rather than concrete evidence of misconduct. Proctoring software that assumes all students take tests the same way—for example, in rooms that they can control, their eyes straight ahead, fingers typing at a routine pace—puts a black mark on the record of students who operate outside the norm. One problem that has been widely documented with proctoring software is that students with disabilities (especially those with motor impairment) are consistently flagged as exhibiting suspicious behavior by software suites intended to detect cheating. Other proctoring software has flagged students for technical snafus such as device crashes and Internet connections cutting out, as well as completely normal behavior that could indicate misconduct if you squint hard enough. For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software. Across the country, thousands of students, and some parents, have created petitions against the use of proctoring tools, most of which (though not all) have been ignored. 
Students taking the California and New York bar exams—as well as several advocacy organizations and a group of deans—advocated against the use of proctoring tools for those exams. As expected, many of those students then experienced “significant software problems,” specifically with the ExamSoft proctoring software, causing some students to fail. Many proctoring companies have defended their dangerous, inequitable, privacy-invasive, and often flawed software tools by pointing out that humans—meaning teachers or administrators—usually have the ability to review flagged exams to determine whether or not a student was actually cheating. That defense rings hollow when those reviewing the results don’t have the technical expertise—or in some cases, the time or inclination—to properly examine them. Similar to schools that rely heavily on flawed proctoring software, Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating. Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. The Dartmouth faculty has stated that they will not continue to look at Canvas logs in the future for violations (51:45 into the video of the town hall). That’s a good step forward. We insist that the school also look beyond these logs for the students currently being investigated, and end this dragnet investigation entirely, unless additional evidence is presented.

  • EFF Partners with DuckDuckGo to Enhance Secure Browsing and Protect User Information on the Web
    by Karen Gullo on April 15, 2021 at 1:33 pm

DuckDuckGo Smarter Encryption Will Be Incorporated Into HTTPS Everywhere San Francisco, California—Boosting protection of Internet users’ personal data from snooping advertisers and third-party trackers, the Electronic Frontier Foundation (EFF) today announced it has enhanced its groundbreaking HTTPS Everywhere browser extension by incorporating rulesets from DuckDuckGo Smarter Encryption. The partnership represents the next step in the evolution of HTTPS Everywhere, a collaboration with The Tor Project and a key component of EFF’s effort to encrypt the web and make the Internet ecosystem safe for users and website owners. “DuckDuckGo Smarter Encryption has a list of millions of HTTPS-encrypted websites, generated by continually crawling the web instead of through crowdsourcing, which will give HTTPS Everywhere users more coverage for secure browsing,” said Alexis Hancock, EFF Director of Engineering and manager of the HTTPS Everywhere and Certbot web encryption projects. “We’re thrilled to be partnering with DuckDuckGo as we see HTTPS become the default protocol on the net and contemplate HTTPS Everywhere’s future.” “EFF’s pioneering work with the HTTPS Everywhere extension took privacy protection in a new and needed direction, seamlessly upgrading people to secure website connections,” said Gabriel Weinberg, DuckDuckGo founder and CEO. “We’re delighted that EFF has now entrusted DuckDuckGo to power HTTPS Everywhere going forward, using our next generation Smarter Encryption dataset.” When EFF launched HTTPS Everywhere over a decade ago, the majority of web servers used the non-secure HTTP protocol to transfer web pages to browsers, rendering user content and information vulnerable to attacks. EFF began building and maintaining a crowd-sourced list of encrypted HTTPS versions of websites for a free browser extension—HTTPS Everywhere—which automatically takes users to them. 
That keeps users’ web searching, pages visited, and other private information encrypted and safe from trackers and data thieves that try to intercept and steal personal information in transit from their browser. Fast forward ten years—the web is undergoing a massive change to HTTPS. Mozilla’s Firefox has an HTTPS-only mode, while Google Chrome is slowly moving towards HTTPS mode. DuckDuckGo, a privacy-focused search engine, also joined the effort with Smarter Encryption to help users browse securely by detecting unencrypted, non-secure HTTP connections to websites and automatically upgrading them to encrypted connections. With more domain coverage in Smarter Encryption, HTTPS Everywhere users are provided even more protection. HTTPS Everywhere rulesets will continue to be hosted through this year, giving our partners who use them time to adjust. We will stop taking new requests for domains to be added at the end of May. To download HTTPS Everywhere: more on encrypting the web: more from DuckDuckGo: Contact: Alexis Hancock, Director of Engineering, Certbot [email protected] [email protected]
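The core upgrade mechanism described above is simple, and a rough sketch can illustrate it. Assuming a precomputed set of hosts known to support HTTPS (an assumption standing in for the real Smarter Encryption dataset, which contains millions of entries and drives much more involved extension logic), an upgrade amounts to rewriting a URL’s scheme when the host is on the list:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical excerpt of an upgradable-domains list; the real
# Smarter Encryption dataset contains millions of entries.
UPGRADABLE = {"example.com", "www.example.com"}

def upgrade(url: str) -> str:
    """Rewrite an http:// URL to https:// when the host is known to
    support encrypted connections; leave all other URLs untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in UPGRADABLE:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url
```

For example, `upgrade("http://example.com/page")` yields the https:// version, while URLs for hosts not on the list pass through unchanged, which is why broader domain coverage directly translates into more protected connections.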

  • HTTPS Everywhere Now Uses DuckDuckGo’s Smarter Encryption
    by Alexis Hancock on April 15, 2021 at 12:58 am

Over the last few months, the HTTPS Everywhere project has been deciding what to do in the new landscape of HTTPS support in major browsers. Encrypted web traffic has increased in the last few years, and major browsers have made strides toward making HTTPS the default. This project has shepherded a decade of encrypted web traffic, and we look forward to directing our efforts toward protecting people as new developments occur. That said, we’d like to announce that we have partnered with the DuckDuckGo team to use their Smarter Encryption rulesets in the HTTPS Everywhere web extension. This is happening for several reasons: Firefox now has an HTTPS-Only Mode. Chrome doesn’t use HTTPS by default, but is slowly moving toward that goal by trying HTTPS first in the navigation bar before falling back to HTTP. DuckDuckGo’s Smarter Encryption covers more domains than our current model. Browsers and websites are moving away from the issues that created a need for granular ruleset maintenance: mixed content is now blocked in major browsers, and separate domains for secure connections are an older habit, further removing the need for granular maintenance of HTTPS Everywhere rulesets. Chrome’s Manifest V3 declarativeNetRequest API will force web extensions to observe a ruleset cap. Instead of competing with other extensions, users who prefer HTTPS Everywhere or DuckDuckGo’s Privacy Essentials will receive virtually the same coverage either way. We don’t want to create confusion for users about “who to choose” when it comes to getting the best coverage. As HTTPS Everywhere goes into “maintenance mode,” users will have the opportunity to move to DuckDuckGo’s Privacy Essentials or use a browser that has HTTPS by default. More info on DuckDuckGo’s Smarter Encryption here: Phases for HTTPS Everywhere’s Rulesets DuckDuckGo Update Channel with Smarter Encryption Rulesets [April 15, 2021]. 
Continue to accept HTTPS Everywhere ruleset changes in the GitHub repo until the end of May 2021. Continue to host HTTPS Everywhere rulesets until the various partners and downstream channels that use our current rulesets make the needed changes and decisions. Sunset HTTPS Everywhere rulesets [late 2021]. Afterwards, the HTTPS Everywhere web extension will begin its EoL (End of Life) stage, on a timeline to be determined after the ruleset sunset is complete. By adding the DuckDuckGo Smarter Encryption update channel we can give everyone time to adjust and plan. Thank you for contributing to and using this project through the years. We hope you will celebrate with us the monumental effort HTTPS Everywhere has accomplished.

  • Congress, Don’t Let ISP Lobbyists Sabotage Fiber for All
    by Ernesto Falcon on April 14, 2021 at 7:16 pm

For the first time, an American president has proposed a plan that wouldn’t just make a dent in the digital divide; it would end it. By deploying federal resources at a level and scale this country has not seen since electrification nearly 100 years ago, the U.S. would again connect every resident to a necessary service. As with water and electricity, robust internet access, as the pandemic has proven, is an essential service, and so the effort and resources expended are well worth it. The president’s plan, which matches well with the current Congressional efforts of Representative James Clyburn and Senator Klobuchar, is welcome news and a boost to efforts by Congress to get the job done. The plan draws a necessary line: government should only be investing its dollars in “future-proof” (i.e. fiber) broadband infrastructure, something we have failed to do for years as subsidies flowed to networks meeting persistently low metrics for what qualifies as broadband. Historically, the low expectations pushed by the telecom industry have resulted in a lot of wasted tax dollars. Every state (barring North Dakota) has wasted billions propping up outdated networks. Americans are left with infrastructure unprepared for the 21st century, a terrible ratio of price to internet speed, and one of the largest private broadband provider bankruptcies in history. Policymakers are learning from these mistakes as well as from the demand shown by the COVID-19 crisis, and this is the year we chart a new course. Now is the time for people to push their elected representatives in Congress to pass the Accessible, Affordable Internet for All Act. Take Action Tell Congress: Americans Deserve Fast, Reliable, and Affordable Internet What Is “Future-Proof” and Why Does Fiber Meet the Definition? Fiber is the superior medium for 21st-century broadband, and policymakers need to understand that reality when making decisions about broadband policy. 
No other data transmission medium has the inherent capacity and future potential of fiber, which is why all 21st-century networks, from 5G to Low Earth Orbit satellites, depend on fiber. However, parts of the broadband access industry keep trying to sell state and federal legislatures on the need to subsidize slower, outdated infrastructure, which diverts money away from fiber and puts it into their pockets. What allows for that type of grifting on a massive scale is the absence of a government mandate to future-proof its infrastructure investments. Without a future-proof requirement in law, the government subsidizes whatever meets the lowest speed required today without a thought about tomorrow. This is arguably one of the biggest reasons so many of our broadband subsidies have gone to waste over the decades: the money flowed to any qualifying service with no thought toward the underlying infrastructure. The President’s endorsement of future-proofing broadband infrastructure is a significant and necessary course correction, and it needs to be codified into the infrastructure package Congress is contemplating. If every dollar the government spends on building broadband infrastructure had to go to infrastructure that met the needs of today and far into the future, a lot of older, slower, more expensive broadband options would no longer be eligible for government money or qualify as sufficient. At the same time, that same fiber infrastructure can be leveraged to enable dozens of services beyond broadband access, including other data-intensive services, which makes the Accessible, Affordable Internet for All Act’s preference for open access important, and it likely should be expanded on, given how fiber lifts all boats. One possible solution is to require government bidders to design networks around enabling wireless and wireline services instead of just one broadband service. 
Notably, the legislation currently lacks a future-proof requirement for government investments, but given the President’s endorsement, it is our hope that one will be included in any final package that passes Congress, to avoid wastefully enriching capacity-limited (and future-limited) broadband services. Building the roads to enable commerce should be how the government views broadband infrastructure. We Need Fast Upload Speeds and Fiber, No Matter What Cable Lobbyists Try to Claim The cable industry unsurprisingly hates the President’s proposal to get fiber to everyone, because cable companies are the high-speed monopolists for a vast majority of us and nothing forces them to upgrade other than fiber. Fiber has orders of magnitude greater capacity than cable systems and will be able to deliver cheaper high-speed access than cable systems as demand grows. In the 2020 debate over California’s broadband program, when the state was deciding whether or not to continue subsidizing DSL copper broadband, the cable lobby regularly argued that no one needs the fast uploads fiber provides because, look, no one who uses cable broadband uploads much (especially the throttled users). There are a lot of flaws with the premise that asymmetric use of the internet is a user preference rather than the result of current standards and cable packages. But the most obvious one is the fact that you need a market of symmetric gigabit and higher users for the technology sector to develop widely used applications and services. Just like with every other innovation we have seen on the internet, if the capacity is delivered, someone comes up with ways to use it. And that multi-gigabit market is coming online, but it will start in China, where an estimated 57% of all multi-gigabit users will reside by 2023 under current projections. China has in fact been laying fiber optics nine times faster than the United States since 2013, and that is a problem for American competitiveness. 
If we are not building fiber networks everywhere soon to catch up in the next five years, the next Silicon Valley built around the gigabit era of broadband access will be in China and not here. It will also mean that next-generation applications and services will simply be unusable by Americans stuck on upload-throttled cable systems. The absence of a major fiber infrastructure investment by the government effectively means many of us will be stuck in the past while paying monopoly rents to cable. Fiber Is Not “Too Expensive,” Nor Should Investment Go to Starlink (SpaceX) No Matter What Its Lobbyists Say The current FCC is still reeling from the fact that the outgoing FCC, at the end of its term, granted nearly $1 billion to Starlink to cover a lot of questionable deployments, despite the company having historically said it did not need a dollar of subsidy (and the fact that it really does not). But if government money is on the table, it appears clear that Starlink will deploy its lobbyists to argue for its share, even when it needs none of it. That should be a clear concern to any congressional office that will be lobbied on the broadband infrastructure package. Under the current draft of the Accessible, Affordable Internet for All Act, it seems very unlikely that Starlink’s current deployment would qualify for any money, and certainly not if future-proofing were included as a condition of receiving federal dollars. This is due to the fact that satellites are inherently capacity-constrained as a means of delivering broadband access. Each satellite must maintain line of sight and can carry only so much traffic; more and more satellites are needed to share the burden as capacity grows; and the constellation ultimately needs to beam down to fiber-connected base stations. This capacity constraint is why Starlink will never be competitive in cities. 
But Starlink does benefit from a focus on fiber: the more places its base stations can connect between fiber and satellites, the more robust the network itself becomes. In the end, though, there is no way for those satellites to keep up with the expected increases in capacity that fiber infrastructure will yield, nor are they as long-lasting an investment: each satellite must be replaced fairly frequently as new ones are launched, while fiber, once laid, remains useful for decades. While Starlink’s lobbyists will argue that it is a cheaper solution for rural Americans, the fact is the number of Americans who cannot feasibly get a fiber line is extremely small. Basically, if you can get an electrical line to a house, then you can get a fiber line to it. For policymakers, Starlink’s service is best understood as the equivalent of a universal basic income of broadband access: it reaches far and wide and establishes a robust floor. That on its own has a lot of value and is a reason why its effort to expand to boats, trucks, and airplanes is a good thing. But this is not the tool of long-term economic development for rural communities. It should be understood as a lifeline when all other options are exhausted, rather than the government’s primary solution for ending the digital divide. Lastly, given that the satellite constellation is meant to serve customers globally and not just in the United States, it makes no sense for the United States to subsidize a global infrastructure to enrich one private company. The investments need to be in domestic infrastructure. The Future Is Not 5G, No Matter What Wireless Carrier Lobbyists Say For years, the wireless industry hyped 5G broadband, resulting in a lot of congressional hearings, countless hours of the FCC focusing its regulatory power on it, a massive merger between T-Mobile and Sprint, and very little to actually show for it today. 
In fact, early market analysis has found that 5G broadband is making zero profits, mostly because people are not willing to pay much more on their wireless bill for the new network. The reality is that 5G’s future likely lies in non-broadband markets that have yet to emerge. But most importantly, national 5G coverage does not happen if you don’t have dense fiber networks everywhere. Any infrastructure plan that comes out of the government should avoid making 5G part of its core focus, given that 5G is a derivative benefit of fiber. You can have fiber without 5G, but you can’t have 5G without the fiber. Even as companies like AT&T argue for 5G to be the infrastructure plan, the wireless industry has slowly started to come around to the fact that it’s actually the fiber that matters in the end. Last year, the wireless industry acknowledged that 5G and fiber are linked, and even AT&T is now emphasizing that the future is fiber. The risks of putting wireless on par with wires in infrastructure policy are great, as we are seeing now with the Rural Digital Opportunity Fund giving money to potentially speculative gigabit wireless bids instead of proven fiber to the home. This has prompted a lot of Members of Congress to ask the FCC to double-check the applicants before money goes out the door. Rather than repeat the RDOF mistakes, it’s best to understand that infrastructure means the foundation that delivers these services, not the services themselves. We Can Get Fiber to Everyone If We Embrace the Public Model of Broadband Access The industry across the board refuses to acknowledge that the private model of broadband has failed many communities before and during the pandemic, whereas the public model of broadband has soared in the areas where it exists. Both President Biden’s plan and the Clyburn/Klobuchar legislation emphasize embracing local government and local community broadband networks. 
The Accessible, Affordable Internet for All Act outright bans states from preventing local governments from building broadband service. The state laws that would be repealed by the bill were primarily driven by a cable lobby afraid of cheap public fiber access. Today we know their opposition to community fiber is premised on keeping broadband prices exceedingly high. We know now that if you eliminate the profit motive in delivering fiber, there is almost no place in this country that can’t be connected to fiber. When we have cooperatives delivering gigabit fiber at $100 a month to an average of 2.5 people per mile, one is hard-pressed to find what areas are left out of reach. But equally important is making access affordable to low-income people, and given that we’ve seen municipal fiber offer ten years of free 100/100 Mbps service to 28,000 students from low-income families at a subsidy cost of $3.50 a month, it seems clear that public fiber is an essential piece of solving both the universal access challenge and making sure all people can afford to connect to the internet. Whatever infrastructure bill passes Congress, it must fully embrace the public model of access for fiber for all to be a reality.  
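For a sense of scale, the municipal subsidy figures quoted above imply a modest total outlay. A quick back-of-the-envelope calculation, using only the numbers given in the text:

```python
# Figures from the text: $3.50/month subsidy per student,
# 28,000 students, ten years of service.
monthly_per_student = 3.50
students = 28_000
years = 10

# Total subsidy cost over the full decade.
total = monthly_per_student * 12 * years * students  # about $11.8 million
```

Roughly $11.8 million over ten years to keep 28,000 low-income students connected illustrates why public fiber is framed here as an affordable path to universal access.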

  • Forced Arbitration Thwarts Legal Challenge to AT&T’s Disclosure of Customer Location Data
    by Aaron Mackey on April 14, 2021 at 4:26 pm

Location data generated from our cell phones paints an incredibly detailed picture of our movements and private lives. Despite the sensitive nature of this data and a federal law prohibiting cellphone carriers from disclosing it, repeated unauthorized disclosures over the last several years show that carriers will sell this sensitive information to almost any willing buyer. With cellphone carriers brazenly violating their customers’ privacy and the Federal Communications Commission moving slowly to investigate, it fell to consumers to protect themselves. That’s why in June 2019 EFF filed a lawsuit representing customers challenging AT&T’s unlawful disclosure of their location data. Our co-counsel are lawyers at Hagens Berman Sobol Shapiro LLP. The case, Scott v. AT&T, alleged that AT&T had violated a federal privacy law protecting cellphone customers’ location data, among other protections. How AT&T Compelled Arbitration That legal challenge, however, quickly ran into an all-too-familiar roadblock: the arbitration agreements AT&T forces its customers to sign every time they buy a cellphone or new service from the company. AT&T claimed that this clause prevented the Scott case from proceeding. The court ended up dismissing the plaintiffs’ lawsuit earlier this year. The way it did so demonstrates why Congress needs to change federal law so that the public can meaningfully protect themselves from companies’ abusive practices. In response to the lawsuit, AT&T first moved to compel the plaintiffs to arbitration, arguing that because they had signed arbitration agreements—buried deep within an ocean of contract terms—they had no right to sue. But under California law, AT&T cannot enforce contracts, like those at issue here, which prevent people from seeking court orders, called “public injunctions,” to prevent future harm to the public. 
California law also recognizes that these one-sided “contracts of adhesion” can sometimes be so unfair that they cannot be enforced. We argued that both of these principles voided AT&T’s contracts, emphasizing that our clients sought to prevent AT&T from disclosing all customers’ location data without their notice and consent to protect the broader public’s privacy and to prevent AT&T from publicly misrepresenting its practices moving forward. AT&T responded by moving to dismiss the public injunction claims, asserting that because the company stopped disclosing customer location data to certain third parties identified in media reports, plaintiffs had no legal basis–known as standing–to seek a public injunction. AT&T’s strategy was clear: rather than admit that it had done anything wrong in the past, the company argued that because it had stopped disclosing customer location data, there was no future public harm that the court needed to prohibit via a public injunction. Because no public injunction was necessary, AT&T argued, California’s rule against the arbitration agreements did not apply and the plaintiffs remained subject to them. We did not trust AT&T’s representations that it stopped disclosing customer location data, particularly because the company had previously promised to stop disclosing the same data, only for media reports to later show that the disclosures were ongoing. Additionally, AT&T was not clear about whether it had other programs or services that disclosed the same location data without customers’ knowledge and consent. The plaintiffs spent months trying to learn more about AT&T’s location data sharing practices in the face of AT&T’s stonewalling. What we found was concerning: AT&T continued to disclose customer location data, including to enable commercial call routing by third-party services, without customers’ notice and consent. 
    We asked the court to let the case proceed, arguing that this information undercut AT&T’s claims that it had stopped its harmful practices. The court sided with AT&T. It ruled that the evidence did not establish an ongoing risk that AT&T would disclose customer location data in the future, and that the plaintiffs therefore lacked standing to seek a public injunction. Next, the court upheld the legality of AT&T’s one-sided contracts and ruled that plaintiffs could be forced into arbitration. We disagree with the court’s ruling in multiple respects. The court largely ignored evidence in the record showing that AT&T continues to disclose customer location data, putting all of its customers’ privacy at risk. It also mischaracterized plaintiffs’ allegations, allowing it to avoid wrestling with AT&T’s ongoing privacy failures. Finally, the court failed to protect consumers subject to AT&T’s one-sided arbitration agreements—these contracts are fundamentally unfair, and their continued enforcement is unjust. Importantly, the court did not rule on the merits: it did not decide whether AT&T’s disclosure of customer location data was lawful. Instead, it sidestepped that question by deciding that the plaintiffs’ case didn’t belong in federal court.

    Next Steps: Legislative Reform of Arbitration Agreements

    The court’s decision to enforce AT&T’s arbitration agreement is problematic because it prevents consumers from vindicating their rights under a longstanding federal privacy law written to protect them. Unlike other areas of consumer privacy where comprehensive federal legislation is sorely needed, Congress has already prohibited phone services like AT&T from disclosing customer location data without notice and consent. The legislative problem this case highlights is different: rather than writing a new law, Congress needs to amend an existing one—the Federal Arbitration Act. 
Arbitration was originally intended to allow large, sophisticated entities like corporations to avoid expensive legal fights. Today, however, it is used to prevent consumers, employees, and anyone with less bargaining power from having any meaningful redress in court. Congress can easily fix this injustice by prohibiting forced arbitration in one-sided contracts of adhesion, and it’s past time that they did so. Likewise, when Congress enacts a comprehensive consumer data privacy law, it must bar enforcement of arbitration agreements that unfairly limit user enforcement of their legal rights in court. The better proposed bills do so. Despite the federal court’s dismissal of the case against AT&T, we remain hopeful that the FCC will take action against the company for its disclosure of location data. The agency began an enforcement proceeding last year, and we hope that once President Biden appoints new FCC leadership, the agency will move quickly to hold AT&T accountable. Related Cases: Geolocation Privacy

  • California: Demand Broadband for All
    by Chao Liu on April 13, 2021 at 11:11 pm

    From the pandemic to the Frontier bankruptcy to the ongoing failures in remote learning, we’ve seen now more than ever how current broadband infrastructure fails to meet the needs of the people. This pain is particularly felt in already under-served communities—urban and rural—where poverty and lack of choice leave millions at the mercy of monopolistic Internet Service Providers (ISPs) who have functionally abandoned them.

    Take Action: Tell Your Senators to Support S.B. 4

    This is why EFF is part of a coalition of nonprofits, private-sector companies, and local governments in support of S.B. 4. Authored by California State Senator Lena Gonzalez, the bill would promote construction of the 21st-century infrastructure necessary to finally put a dent in, and eventually close, the digital divide in California. S.B. 4 passed out of the California Senate Energy, Utilities, and Communications Committee by a vote of 11-1 on April 12. This demonstrates that lawmakers who understand these issues recognize it is vital for communities suffering at the hands of ISP monopolies to have greater opportunities to get the Internet access they need. If the monopolistic ISPs didn’t deliver adequate service during a time when many Californians’ entire lives depended on the quality of their broadband, they aren’t coming now. It is high time local communities were allowed to take the future into their own hands and build out what they need. S.B. 4 is California’s path to doing so.

  • Why EFF Supports Repeal of Qualified Immunity
    by Adam Schwartz on April 12, 2021 at 9:28 pm

    Our digital rights are only as strong as our power to enforce them. But when we sue government officials for violating our digital rights, they often get away with it because of a dangerous legal doctrine called “qualified immunity.” Do you think you have a First Amendment right to use your cell phone to record on-duty police officers, or to use your social media account to criticize politicians? Do you think you have a Fourth Amendment right to privacy in the content of your personal emails? Courts often protect these rights. But some judges invoke qualified immunity to avoid affirmatively recognizing them, or if they do recognize them, to avoid holding government officials accountable for violating them. Because of these evasions of judicial responsibility to enforce the Constitution, some government officials continue to invade our digital rights. The time is now for legislatures to repeal this doctrine.

    What is Qualified Immunity?

    In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark law empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983. In 1967, the U.S. Supreme Court first created a “good faith” defense against claims for damages (i.e., monetary compensation) under this law. In 1982, the Court broadened this defense to create immunity from damages if the legal right at issue was not “clearly established” at the time the official violated it. Thus, even if a judge holds that a constitutional right exists, and finds that a government official violated this right, the official nonetheless is immune from paying damages—if that right was not “clearly established” at the time. Qualified immunity directly harms people in two ways. 
    First, many victims of constitutional violations are not compensated for their injury. Second, many more people suffer constitutional violations, because the doctrine removes an incentive for government officials to follow the Constitution. The consequences are shocking. For example, though a judge held that these abusive acts violated the Constitution, the perpetrators evaded responsibility through qualified immunity when:

    • Jail officials subjected a detainee to seven months of solitary confinement because he asked to visit the commissary.
    • A police officer pointed a gun at a man’s head, though the man had already been searched, was calmly seated, and was being guarded by a second officer.

    It gets worse. Judges had been required to engage in a two-step qualified immunity analysis. First, they determined whether the government official violated a constitutional right—that is, whether the right in fact exists. Second, they determined whether that right was clearly established at the time of the incident in question. But in 2009, the U.S. Supreme Court held that a federal judge may skip the first step, grant an official qualified immunity, and never rule on what the law is going forward. As a result, many judges shirk their responsibility to interpret the Constitution and protect individual rights. This creates a vicious cycle, in which legal rights are not determined, allowing government officials to continue harming the public because the law is never “clearly established.” For example, judges declined to decide whether these abuses were unconstitutional:

    • A police officer attempted to shoot a nonthreatening pet dog while it was surrounded by children, and in doing so shot a child.
    • Police tear-gassed a home, rendering it uninhabitable for several months, after a resident consented to police entry to arrest her ex-boyfriend.
    In the words of one frustrated judge: The inexorable result is “constitutional stagnation”—fewer courts establishing law at all, much less clearly doing so. Section 1983 meets Catch-22. Plaintiffs must produce precedent even as fewer courts are producing precedent. Important constitutional questions go unanswered precisely because no one’s answered them before. Courts then rely on that judicial silence to conclude there’s no equivalent case on the books. No precedent = no clearly established law = no liability. An Escherian Stairwell. Heads government wins, tails plaintiff loses.

    Qualified Immunity Harms Digital Rights

    Over and over, qualified immunity has undermined judicial protection of digital rights. This is not surprising. Many police departments and other government agencies use high-tech devices in ways that invade our privacy or censor our speech. Likewise, when members of the public use novel technologies in ways government officials dislike, officials often retaliate. Precisely because these abuses concern cutting-edge tools, there might not be clearly established law. This invites qualified immunity defenses against claims of digital rights violations. Consider the First Amendment right to use our cell phones to record on-duty police officers. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right. (EFF has advocated for this right in many amicus briefs.) Yet last month, in a case called Frasier v. Evans, the Tenth Circuit held that this digital right was not clearly established. Frasier had used his tablet to record Denver police officers punching a suspect in the face as his head bounced off the pavement. Officers then retaliated against Frasier by detaining him, searching his tablet, and attempting to delete the video. The court granted the officers qualified immunity, rejecting Frasier’s claim that the officers violated the First Amendment. 
    Even worse, the Tenth Circuit refused to rule on whether, going forward, the First Amendment protects the right to record on-duty police officers. The court wrote: “we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.” But a key function of judicial precedent is to protect the public from further governmental abuses. Thus, when the Third Circuit reached this issue in 2017, while it erroneously held that this right was not clearly established, it properly recognized this right going forward. Qualified immunity has harmed other EFF advocacy for digital rights. To cite just two examples:

    • In Rehberg v. Paulk, we represented a whistleblower subjected to a bogus subpoena for his personal emails. The court erroneously held it was not clearly established that the Fourth Amendment protects email content, and declined to decide this question going forward.
    • In Hunt v. Regents, we filed an amicus brief arguing that a public university violated the First Amendment by disciplining a student for their political speech on social media. The court erroneously held that the student’s rights were not clearly established, and declined to decide the issue going forward.

    The Movement to Repeal Qualified Immunity

    A growing chorus of diverse stakeholders, ranging from the Cato Institute and the Institute for Justice to the ACLU, is demanding legislation to repeal this destructive legal doctrine. A recent “cross-ideological” amicus brief brought together the NAACP and the Alliance Defending Freedom. Activists against police violence also demand repeal. This movement is buoyed by legal scholars who show the doctrine has no support in the 1871 law’s text and history. Likewise, judges required to follow the doctrine have forcefully condemned it. Congress is beginning to heed the call. Last month, the U.S. House of Representatives passed the George Floyd Justice in Policing Act (H.R. 
    1280), which would repeal qualified immunity as to police. Even better, the Ending Qualified Immunity Act (S. 492) would repeal it as to all government officials. It was originally introduced by Rep. Ayanna Pressley (D-Mass.) and Rep. Justin Amash (L-Mich.). States and cities are doing their part, too. Colorado, New Mexico, and New York City recently enacted laws to allow lawsuits against police misconduct, with no qualified immunity defense. A similar bill is pending in Illinois.

    Next Steps

    EFF supports legislation to repeal qualified immunity—a necessary measure to ensure that when government officials violate our digital rights, we can turn to the courts for justice. We urge you to do the same.

  • After Cookies, Ad Tech Wants to Use Your Email to Track You Everywhere
    by Bennett Cyphers on April 12, 2021 at 8:41 pm

    Cookies are dying, and the tracking industry is scrambling to replace them. Google has proposed Federated Learning of Cohorts (FLoC), TURTLEDOVE, and other bird-themed tech that would have browsers do some of the behavioral profiling that third-party trackers do today. But a coalition of independent surveillance advertisers has a different plan. Instead of stuffing more tracking tech into the browser (which they don’t control), they’d like to use more stable identifiers, like email addresses, to identify and track users across their devices. There are several proposals from ad tech providers to preserve “addressable media” (read: individualized surveillance advertising) after cookies die off. We’ll focus on just one: Unified Identifier 2.0, or UID2 for short, developed by independent ad tech company The Trade Desk. UID2 is a successor to The Trade Desk’s cookie-based “unified ID.” Much like FLoC, UID2 is not a drop-in replacement for cookies, but aims to replace some of their functionality. It won’t replicate all of the privacy problems of third-party cookies, but it will create new ones.  There are key differences between UID2 and Google’s proposals. FLoC will not allow third-party trackers to identify specific people on its own. There are still big problems with FLoC: it continues to enable auxiliary harms of targeted ads, like discrimination, and it bolsters other methods of tracking, like fingerprinting. But FLoC’s designers intend to move towards a world with less individualized third-party tracking. FLoC is a misguided effort with some laudable goals. In contrast, UID2 is supposed to make it easier for trackers to identify people. It doubles down on the track-profile-target business model. If UID2 succeeds, faceless ad tech companies and data brokers will still track you around the web—and they’ll have an easier time tying your web browsing to your activity on other devices. 
    UID2’s proponents want advertisers to have access to long-term behavioral profiles that capture nearly everything you do on any Internet-connected device, and they want to make it easier for trackers to share your data with each other. Despite its designers’ ill-founded claims about “privacy” and “transparency,” UID2 is a step backward for user privacy.

    How Does UID2 Work?

    In a nutshell, UID2 is a series of protocols for collecting, processing, and passing around users’ personally-identifying information (“PII”). Unlike cookies or FLoC, UID2 doesn’t aim to change how browsers work; rather, its designers want to standardize how advertisers share information. The UID2 authors have published a draft technical standard on GitHub. Information moves through the system like this:

    1. A publisher (like a website or app) asks a user for their PII, like an email address or a phone number.
    2. The publisher shares that PII with a UID2 “operator” (an ad tech firm). The operator hashes the PII to generate a “Unified Identifier” (the UID2). This is the number that identifies the user in the system.
    3. A centralized administrator (perhaps The Trade Desk itself) distributes encryption keys to the operator, who encrypts the UID2 to generate a “token.” The operator sends this encrypted token back to the publisher.
    4. The publisher shares the token with advertisers. Advertisers who receive the token can freely share it throughout the advertising supply chain.
    5. Any ad tech firm that is a “compliant member” of the ecosystem can receive decryption keys from the administrator. These firms can decrypt the token into a raw identifier (a UID2).

    The UID2 serves as the basis for a user profile, and allows trackers to link different pieces of data about a person together. Raw UID2s can be shared with data brokers and other actors within the system to facilitate the merging of user data. The description of the system raises several questions. 
    For example:

    • Who will act as an “administrator” in the system? Will there be one or many, and how will this impact competition on the Internet?
    • Who will act as an “operator”?
    • Outside of operators, who will the “members” of the system be? What responsibilities towards user data will these actors have?
    • Who will have access to raw UID2 identifiers? The draft specification implies that publishers will only see encrypted tokens, but most advertisers and data brokers will see raw, stable identifiers.

    What we do know is that a new identifier, the UID2, will be generated from your email address. This UID2 will be shared among advertisers and data brokers, and it will anchor their behavioral profiles about you. And your UID2 will be the same across all your devices.

    How Does UID2 Compare With Cookies?

    Cookies are associated with a single browser. This makes it easy for trackers to gather browsing history. But they still need to link cookie IDs to other information—often by working with a third-party data broker—in order to connect that browsing history to activity on phones, TVs, or in the real world. UID2s will be connected to people, not devices. That means an advertiser who collects a UID2 from a website can link it to the UID2s it collects through apps, connected TVs, and connected vehicles belonging to the same person. That’s where the “unified” part of UID2 comes in: it’s supposed to make cross-device tracking as easy as cross-site tracking used to be. UID2 is not a drop-in replacement for cookies. One of the most dangerous features of cookies is that they allow trackers to stalk users “anonymously.” A tracker can set a cookie in your browser the first time you open a new window; it can then use that cookie to start profiling your behavior before it knows who you are. 
    This “anonymous” profile can then be used to target ads on its own (“we don’t know who this person is, but we know how they behave”) or it can be stored and joined with personally-identifying information later on. In contrast, the UID2 system will not be able to function without some kind of input from the user. In some ways, this is good: it means if you refuse to share your personal information on the Web, you can’t be profiled with UID2. But this will also create new incentives for sites, apps, and connected devices to ask users for their email addresses. The UID2 documents indicate that this is part of the plan: Addressable advertising enables publishers and developers to provide the content and services consumers have come to enjoy, whether through mobile apps, streaming TV, or web experiences. … [UID2] empowers content creators to have the value exchange conversations with consumers while giving them more control and transparency over their data. The standard’s authors take for granted that “addressable advertising” (and the tracking and profiling behind it) is necessary to keep publishers in business (it’s not). They also make it clear that under the UID2 framework, publishers are expected to demand PII in exchange for content.

    [Diagram: How UID2 will work on websites, according to the documentation.]

    This creates bad new incentives for publishers. Some sites already require log-ins to view content. If UID2 takes off, expect many more ad-driven websites to ask for your email before letting you in. With UID2, advertisers are signaling that publishers will need to acquire, and share, users’ PII before they can serve the most lucrative ads.

    Where Does Google Fit In?

    In March, Google announced that it “will not build alternate identifiers to track individuals as they browse across the web, nor… use them in [its] products.” Google has clarified that it won’t join the UID2 coalition, and won’t support similar efforts to enable third-party web tracking. 
    This is good news—it presumably means that advertisers won’t be able to target users with UID2 in Google’s ad products, the most popular in the world. But UID2 could succeed despite Google’s opposition. Unified ID 2.0 is designed to work without the browser’s help. It relies on users sharing personal information, like email addresses, with the sites they visit, and then uses that information as the basis for a cross-context identifier. Even if Chrome, Firefox, Safari, and other browsers want to rein in cross-site tracking, they will have a hard time preventing websites from asking for a user’s email address. Google’s commitment to eschew third-party identifiers doesn’t mean such identifiers are going away. And it doesn’t justify creating new targeting tech like FLoC. Google may try to present these technologies as alternatives, and force us to choose: see, FLoC doesn’t look so bad when compared with Unified ID 2.0. But this is a false dichotomy. It’s more likely that, if Google chooses to deploy FLoC, it will complement—not replace—a new generation of identifiers like UID2. UID2 focuses on identity, while FLoC and other “privacy sandbox” proposals from Google focus on revealing trends in your behavior. UID2 will help trackers capture detailed information about your activity on the apps and websites to which you reveal your identity. FLoC will summarize how you interact with the rest of the sites on the web. Deployed together, they could be a potent surveillance cocktail: specific, cross-context identifiers connected to comprehensive behavioral labels.

    What Happens Next?

    UID2 is not a revolutionary technology. It’s another step in the direction the industry has been headed for some time. Using real-world identifiers has always been more convenient for trackers than using pseudonymous cookies. Ever since the introduction of the smartphone, advertisers have wanted to link your activity on the Web to what you do on your other devices. 
Over the years, a cottage industry has developed among data brokers, selling web-based tracking services that link cookie IDs to mobile ad identifiers and real-world info.  The UID2 proposal is the culmination of that trend. UID2 is more of a policy change than a technical one: the ad industry is moving away from the anonymous profiling that cookies enabled, and is planning to demand email addresses and other PII instead.  The demise of cookies is good. But if tracking tech based on real-world identity replaces them, it will be a step backward for users in important ways. First, it will make it harder for users in dangerous situations—for whom web activity could be held against them—to access content safely. Browsing the web anonymously may become more difficult or outright impossible. UID2 and its ilk will likely make it easier for law enforcement, intelligence agencies, militaries, and private actors to buy or demand sensitive data about real people. Second, UID2 will incentivize ad-driven websites to erect “trackerwalls,” refusing entry to users who’d prefer not to share their personal information. Though its designers tout “consent” as a guiding principle, UID2 is more likely to force users to hand over sensitive data in exchange for content. For many, this will not be a choice at all. UID2 could normalize “pay-for-privacy,” widening the gap between those who are forced to give up their privacy for first-class access to the Internet, and those who can afford not to.
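
    To make the hash-then-tokenize flow described in this article concrete, here is a minimal Python sketch of how such a system could work. Everything here is an illustrative assumption rather than The Trade Desk’s actual implementation: the function names, the salt, and especially the toy XOR keystream “cipher” (a real deployment would specify its own normalization, hashing, and a vetted encryption scheme).

    ```python
    import hashlib
    import secrets

    # Illustrative sketch of the UID2-style data flow. Function names, the
    # salt, and the stand-in XOR keystream cipher are assumptions for
    # demonstration -- not the actual UID2 specification.

    def make_uid2(email: str, salt: bytes) -> str:
        """Operator step: normalize the PII, then hash it into a stable ID."""
        normalized = email.strip().lower()
        return hashlib.sha256(salt + normalized.encode()).hexdigest()

    def _keystream(key: bytes, length: int) -> bytes:
        # Stand-in keystream; a real system would use a vetted cipher.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt_token(uid2: str, key: bytes) -> bytes:
        """Operator step: wrap the raw UID2 in a token the publisher can hold."""
        data = uid2.encode()
        return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

    def decrypt_token(token: bytes, key: bytes) -> str:
        """Member step: anyone holding the key recovers the raw UID2."""
        return bytes(a ^ b for a, b in zip(token, _keystream(key, len(token)))).decode()

    key = secrets.token_bytes(32)      # administrator-distributed key
    salt = b"operator-salt"            # hypothetical operator-side salt

    uid2 = make_uid2("  Alice@Example.com", salt)  # publisher collected this email
    token = encrypt_token(uid2, key)               # publisher only holds the token
    assert decrypt_token(token, key) == uid2       # keyed "members" recover the UID2
    # The same email yields the same UID2 on every device -- that's the point:
    assert make_uid2("alice@example.com", salt) == uid2
    ```

    Note how everything privacy-relevant in this sketch hinges on key distribution: any party the administrator deems a “compliant member” can turn publisher-held tokens back into the raw, cross-context identifier.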

  • Deceptive Checkboxes Should Not Open Our Checkbooks
    by Shirin Mori on April 9, 2021 at 11:18 pm

    Last week, the New York Times highlighted the Trump 2020 campaign’s use of deceptive web design to trick supporters into donating far more money than they had intended. The campaign’s digital donation portal hid an unassuming but unfair method for siphoning funds: a pre-checked box to “make a monthly recurring donation.” This caused weekly withdrawals from supporters’ bank accounts, in some cases depleting them. The checkbox in question, from the New York Times’ April 3rd piece. A pre-checked box to donate more than you intended is just one example of a “dark pattern”—a term coined by user experience (UX) designer Harry Brignull to describe tricks used in websites and apps that make you do things you didn’t mean to, such as buying a service. Unfortunately, dark patterns are widespread. Moreover, the pre-checked box is a particularly common way to subvert our right to consent to serious decisions, or to withhold our consent. This ruse dupes us into “agreeing” to be signed up for a mailing list, having our data shared with third-party advertisers, or paying recurring donations. Some examples are below. A screenshot of the November 3rd, 2020 donation form from WinRed, which shows two pre-checked boxes: one for monthly donations, and one for an additional automatic donation of the same amount on an additional date. The National Republican Congressional Committee, which uses the same WinRed donation flow that the Trump campaign utilizes, displays two instances of the pre-checked boxes. A screenshot of the National Republican Congressional Committee donation site, from the Wayback Machine’s crawl on November 3rd, 2020. The Democratic Congressional Campaign Committee’s donation site, using ActBlue software, shows a pre-selected option for monthly donations. The placement is larger and the language is much clearer about what users should expect around monthly contributions. 
    However, this may also require careful observation from users who intend to donate only once. A screenshot from August 31, 2020 of a pre-selected option for monthly contributions on the Democratic Congressional Campaign Committee site.

    What’s Wrong with a Dark Pattern Using Pre-Selected Recurring Options?

    Pre-selected options, such as pre-checked boxes, are common and not limited to the political realm. Organizations understandably seek financial stability by asking their donors for regular, recurring contributions. However, pre-selecting a recurring contribution can deprive donors of choice and undermine their trust. At best, this stratagem manipulates a user’s emotions by suggesting they are supposed to give more than once. More maliciously, it preys on the likely chance that a user passively skimming won’t notice a selected option. Requiring a user to click an option to consent to contribute on a recurring basis, on the other hand, puts the user in an active position of decision-making. Defaults matter: whether monthly giving is set to “yes, count me in” by default or “no, donate once” by default. So, does a pre-selected option indicate consent? A variety of laws across the globe have aimed to minimize the use of these pre-selected checkboxes, but at present, most U.S. users are protected by no such law. Unfortunately, some U.S. courts have even ruled that pre-selected boxes (or “opt-out” models) do represent express consent. By contrast, Canadian spam laws require a separate box, not pre-checked, for email opt-ins. Likewise, the European Union’s GDPR has banned the use of pre-selected checkboxes for allowing cookies on web pages. But for now, many of the world’s users are at the whim of deceptive product teams when it comes to pre-selected checkboxes like these. Are there instances in which it’s okay to use a pre-selected option as a design element? 
    For options that don’t carry much weight beyond what the user expects (that is, consistent with their expectations of the interaction), a pre-selected option may be appropriate. One example might be if a user clicks a link with language like “become a monthly donor,” and ends up on a page with a pre-selected monthly contribution option. It also might be appropriate to use a pre-selected option to send a confirmation email of the donation. This is very different than, for example, adding unexpected items onto a user’s cart before processing a donation that unexpectedly shows up on their credit card bill later.

    How Do We Better Protect Users and Financial Contributors?

    Dark patterns are ubiquitous in websites and apps, and aren’t limited to financial contributions or email signups. We must build a new landscape for users. UX designers, web developers, and product teams must ensure genuine user consent when designing interfaces. A few practices for avoiding dark patterns include:

    • Present opt-in, rather than opt-out, flows for significant decisions, such as whether to share data or to donate monthly (e.g. no pre-selected options for recurring contributions).
    • Avoid manipulative language. Options should tell the user what the interaction will do, without editorializing (e.g. avoid “if you UNCHECK this box, we will have to tell __ you are a DEFECTOR”).
    • Provide explicit notice for how user data will be used.
    • Strive to meet web accessibility practices, such as aiming for plain, readable language (for example, avoiding the use of double-negatives).
    • Only use a pre-selected option for a choice that doesn’t obligate users to do more than they are comfortable with. For example, EFF doesn’t assume all of our donors want to become EFF members: users are given the option to uncheck the “Make me a member” box. Offering this choice allows us to add a donor to our ranks as a member, but doesn’t obligate them to anything.

    We also need policy reform. 
As we’ve written, we support user-empowering laws to protect against deceptive practices by companies. For example, EFF supported regulations to protect users against dark patterns, issued under the California Consumer Privacy Act. 
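
    The opt-in principle above has a simple technical grounding: in HTML, an unchecked checkbox is simply absent from the submitted form data, so a server can unambiguously treat absence as “no.” Here is a minimal Python sketch of server-side handling for an opt-in donation form; the field names (“amount”, “recurring”) are hypothetical, invented for illustration.

    ```python
    # Hypothetical server-side handling of an opt-in (unchecked-by-default)
    # donation form. Field names are made up for illustration.

    def parse_donation(form_data: dict) -> dict:
        """Interpret a submitted donation form. An unchecked HTML checkbox
        never appears in the POST body, so absence means 'donate once'."""
        return {
            "amount_cents": int(form_data["amount"]),
            # Only an explicit, affirmative tick enables recurring charges
            # (browsers submit "on" as a checked checkbox's default value):
            "recurring": form_data.get("recurring") == "on",
        }

    # A user who skims past the checkbox is charged exactly once:
    assert parse_donation({"amount": "2500"})["recurring"] is False
    # Recurring giving requires the user's affirmative action:
    assert parse_donation({"amount": "2500", "recurring": "on"})["recurring"] is True
    ```

    A dark-pattern version inverts this design choice: the box arrives pre-checked, so the user’s inaction, rather than their action, is what triggers recurring charges.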

  • EFF Challenges Surreptitious Collection of DNA at Iowa Supreme Court
    by Jennifer Lynch on April 9, 2021 at 7:35 pm

    Last week, EFF, along with the ACLU and the ACLU of Iowa, filed an amicus brief in the Iowa Supreme Court challenging the surreptitious collection of DNA without a warrant. We argued this practice violates the Fourth Amendment and Article I, Section 8 of the Iowa state constitution. This is the first case to reach a state supreme court involving such a challenge after results of a genetic genealogy database search linked the defendant to a crime. The case, State v. Burns, involves charges from a murder that occurred in 1979. The police had no leads in the case for years, even after modern technology allowed them to extract DNA from blood left at the crime scene and test it against DNA collected in government-run arrestee and offender DNA databases like CODIS.  In 2018, the police began working with a company called Parabon Nanolabs, which used the forensic DNA profile to predict the physical appearance of the alleged perpetrator and to generate an image that the Cedar Rapids Police Department released to the public. That image did not produce any new leads, so the police worked with Parabon to upload the DNA profile to a consumer genetic genealogy database called GEDMatch, which we’ve written about in the past. Through GEDMatch, the police linked the crime scene DNA to three brothers, including the defendant in this case, Jerry Burns. Police then surveilled Mr. Burns until they could collect something containing his DNA. The police found a straw he used and left behind at a restaurant, extracted a profile from DNA left on the straw, matched it to DNA found at the crime scene, and arrested Mr. Burns. The State claims that the Fourth Amendment doesn’t apply in this context because Mr. Burns abandoned his privacy interest in his DNA when he left it behind on the straw. However, we argue the Fourth Amendment creates a high bar against collecting DNA from free people, even if it’s found on items the person has voluntarily discarded. 
In 1988, the Supreme Court ruled that the Fourth Amendment does not protect the contents of people’s trash left for pickup because they have “abandoned” an expectation of privacy in the trash. But unlike a gum wrapper or a cigarette butt or the straw in this case, our DNA contains so much private information that the data contained in a DNA sample can never be “abandoned.” Even if police don’t need a warrant to rummage through your trash (and many states disagree on this point), police should need a warrant to rummage through your DNA.  A DNA sample—whether taken directly from a person or extracted from items that person leaves behind—contains a person’s entire genetic makeup. It can reveal intensely sensitive information about us, including our propensities for certain medical conditions, our ancestry, and our biological familial relationships. Some researchers have also claimed that human behaviors such as aggression and addiction can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply; to discovering the likely migration patterns of our ancestors and the identities of family members we never even knew we had. Despite the uniquely revealing nature of DNA, we cannot avoid leaving behind the whole of our genetic code wherever we go. Humans are constantly shedding genetic material; in less time than it takes to order a coffee, most humans lose nearly enough skin cells to cover an entire football field. The only way to avoid depositing our DNA on nearly every item we touch out in the world would be to never leave our homes. For these reasons, as we argue in our brief, we can never abandon a privacy interest in our DNA. 
The Burns case also raises thorny Fourth Amendment issues related to law enforcement use of consumer genetic genealogy databases. We’ve written about these issues before, and, unfortunately, the process of searching genetic genealogy databases in criminal investigations has become quite common. Estimates are that genetic genealogy sites were used in around 200 cases in 2018 alone. This is because more than 26 million people have uploaded their genetic data to sites like GEDmatch to try to identify biological relatives, build a family tree, and learn about their health. These sites are available to anyone and are relatively easy to use. And many sites, including GEDmatch, lack any technical restrictions that would keep the police out. As a result, law enforcement officers have been capitalizing on all this freely available data in criminal investigations across the country. And in none of the cases we’ve reviewed, including Burns, have officers ever sought a warrant or any legal process at all before searching the private database.  Police access to this data creates immeasurable threats to our privacy. It also puts us at much greater risk of being accused of crimes we didn’t commit. For example, in 2015, a similar forensic genetic genealogy search led police to suspect an innocent man. Even without genetic genealogy searches, DNA matches may lead officers to suspect—and jail—the wrong person, as happened in a California case in 2012. That can happen because our DNA may be transferred from one location to another, possibly ending up at the scene of a crime, even if we were never there. Even if you yourself never upload your genetic data to a genetic genealogy website, your privacy could be impacted by a distant family member’s choice to do so. Although GEDmatch’s 1.3 million users only encompass about 0.5% of the U.S. adult population, research shows that their data alone could be used to identify 60% of white Americans. 
And once GEDmatch’s users encompass just 2% of the U.S. population, 90% of white Americans will be identifiable. Other research has shown that adversaries may be able to compromise these databases to put many users at risk of having their genotypes revealed, either at key positions or at many sites genome-wide.  This is why this case and others like it are so important—and why we need strong rules against police access to genetic genealogy databases. Our DNA can reveal so much about us that our genetic privacy must be protected at all costs.  We hope the Iowa Supreme Court and other courts addressing this issue will recognize that the Fourth Amendment protects us from surreptitious collection and searches of our DNA. Related Cases: People v. Buza

  • Am I FLoCed? A New Site to Test Google’s Invasive Experiment
    by Andrés Arrieta on April 9, 2021 at 7:22 pm

    Today we’re launching Am I FLoCed, a new site that will tell you whether your Chrome browser has been turned into a guinea pig for Federated Learning of Cohorts or FLoC, Google’s latest targeted advertising experiment. If you are a subject, we will tell you how your browser is describing you to every website you visit. Am I FLoCed is part of an effort to bring to light the invasive practices of the adtech industry—Google included—with the hope we can create a better internet for all, where our privacy rights are respected regardless of how profitable they may be to tech companies. FLoC is a terrible idea that should not be implemented. Google’s experimentation with FLoC is also deeply flawed. We hope that this site raises awareness about where the future of Chrome seems to be heading, and why it shouldn’t go there. FLoC takes most of your browsing history in Chrome, and analyzes it to assign you to a category or “cohort.” This identification is then sent to any website you visit that requests it, in essence telling them what kind of person Google thinks you are. For the time being, this ID changes every week, thereby leaking new information about you as your browsing habits change. You can read a more detailed explanation here. Because this ID changes, you will want to visit the site often to see those changes. Why is this happening? Users have been demanding more and more respect from big business for their online privacy, realizing that the false claim “privacy is dead” was nothing but a marketing campaign. The biggest players that stand to profit from privacy invasion are those from the behavioural targeting industry. Some companies and organizations have listened to users’ requests and improved some of their practices, giving more security and privacy assurances to their users. But most have not. 
This entire industry sells its intricate knowledge about people in order to target them for advertisement, most notably Google and Facebook, but also many other data brokers with names you’ve probably never heard before. The most common way these companies identify you is by using “cookies” to track every movement you make on the internet. This relies on a tracking company convincing as many sites as possible to install their tracking cookie. But with tracking protections being deployed via browser extensions like Privacy Badger, or in browsers like Firefox and Safari, this has become more difficult. Moreover, stronger privacy laws are coming. Thus, many in the adtech industry have realized that the end is near for third-party tracking cookies. While some cling to the old ways, others are trying to find new ways to keep tracking users, monetizing their personal information, without third-party cookies. These companies will use the word “privacy” in their marketing, and try to convince users, policy makers, and regulators that their solutions are better for users and the market. Or they will claim the other solutions are worse, creating a false impression that users have to choose between “bad” and “worse.” But our digital future should not be one where an industry keeps profiting from privacy violations, but one where our rights are respected. The Google Proposal Google announced the launch of its FLoC test with a recent blogpost. It contains lots of mental gymnastics to twist this terrible idea into the semblance of a privacy-friendly endeavour. Perhaps most disturbing is the notion that FLoC’s cohorts are not based on who you are as an individual. The reality is FLoC uses your detailed and unique browsing history to assign you to a cohort. The number of people in a cohort is tailored to still be useful to advertisers, and according to some of Google’s own research it is 95% as effective as cookie-based targeting, meaning cohorts offer only a marginal privacy improvement over cookies. 
FLoC might not share your detailed browsing history. But we reject the notion of “because it’s in your device it’s private.” If data is used to infer something about you, about who you are, and how you can be targeted, and then shared with other sites and advertisers, then it’s not private at all. And let’s not forget that Google Sync already shares your detailed Chrome browsing history with Google when it is enabled. The sole intent of FLoC is to keep the status quo of surveillance capitalism, with a vague appearance of user choice. It further cements our dependence on “Google’s benevolence” for access to the internet. A misguided belief that Google is our friendly corporate overlord, that they know better, and that we should sign away our rights in exchange for crumbs so the internet can survive. Google has also made unsubstantiated statements like “FLoC allows you to remain anonymous as you browse across websites and also improves privacy by allowing publishers to present relevant ads to large groups (called cohorts),” but as far as we can tell, FLoC does not make you anonymous in any way. Only a few browsers, like Tor, can accurately make such difficult claims. Now with FLoC, your browser is still telling sites something about your behavior. Google cannot equate grouping users into advertising cohorts with “anonymity.” This experiment is irresponsible and antagonistic to users. FLoC, with marginal improvements on privacy, is riddled with issues, and yet is planned to be rolled out to millions of users around the world with no proper notification, opt-in consent, or meaningful individual opt-out at launch. This is not just one more Chrome experiment. This is a fundamental change to the browser and how people are exploited for their data. After all the pushback, concerns, and issues, the fact that Google has chosen to ignore the warnings is telling of where the company stands with regard to our privacy. Try it!
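For the technically curious, the cohort ID described above was exposed to sites through a single JavaScript call, `document.interestCohort()`. The sketch below shows roughly what that looked like during the origin trial; since the API existed only in trial builds of Chrome, `document` is mocked here with hypothetical values so the example runs anywhere, and the exact return shape (an `id` plus a `version` string) follows the draft FLoC spec as we understand it and may differ in practice.

```javascript
// Sketch: how a site could read a visitor's FLoC cohort during the
// origin trial. The real API lived on the browser's `document`; we
// mock it here (hypothetical values) so the example runs anywhere.
const document = {
  // In trial builds of Chrome, interestCohort() returned a promise
  // resolving to an object like { id: "14159", version: "chrome.1.0" }.
  interestCohort: async () => ({ id: "14159", version: "chrome.1.0" }),
};

async function readCohort() {
  try {
    const { id, version } = await document.interestCohort();
    // Every site the user visits can read the same weekly ID, which
    // is what lets ad networks correlate behavior across the web.
    return `cohort ${id} (${version})`;
  } catch (err) {
    // The call rejects when FLoC is unavailable or blocked.
    return null;
  }
}

readCohort().then((cohort) => console.log(cohort));
```

Notably, no user prompt gated this call in the trial; as far as we can tell, a site could only refuse to participate by sending the `interest-cohort` Permissions-Policy header.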

  • What Movie Studios Refuse to Understand About Streaming
    by Katharine Trendacosta on April 7, 2021 at 7:57 pm

    The longer we live in the new digital world, the more we are seeing it replicate systemic issues we’ve been fighting for decades. In the case of movie studios, what we’ve seen in the last few years in streaming mirrors what happened in the 1930s and ‘40s, when a small group of movie studios also controlled the theaters that showed their films. And by 1948, the actions of the studios were deemed violations of antitrust law, resulting in a consent decree. The Justice Department ended that decree in 2019 under the theory that the remaining studios could not “reinstate their cartel.” Maybe not in physical movie theaters. But online is another story. Back in the ‘30s and ‘40s, the problem was that the major film studios—including Warner Bros. and Universal, which exist to this day—owned everything related to the movies they made. They had everyone involved on staff under exclusive and restrictive contracts. They owned the intellectual property. They even owned the places that processed the physical film. And, of course, they owned the movie theaters. In 1948, the studios were forced to sell off their stakes in movie theaters and chains, having lost in the Supreme Court. The benefits for audiences were pretty clear. The old system had theaters scheduling showings so that they wouldn’t overlap with each other, meaning you could not see a movie at the theater and time most convenient for you. Studios were also forcing theaters to buy their entire slates of movies without seeing them (called “blind buying”), instead of picking, say, the ones of highest quality or interest—the ones that would bring in audiences. And, of course, the larger chains and the theaters owned by the studios would get preferential treatment. There is a reason the courts stopped this practice. 
For audiences, separating theaters from studios meant that their local theaters now had a variety of films, were more likely to have the ones they wanted to see, and would be showing them at the most convenient times. So they didn’t have to search listings for some arcane combination of time, location, and movie. And now it is 2021. If you consume digital media, you may have noticed something… familiar. The first wave of streaming services—Hulu, Netflix, iTunes, etc.—had a diversity of content from a number of different studios. And for the back catalog, the things that had already aired, services had all of the episodes available at once. Binge-watching was ascendant. The value of these services to the audience was, like your local theater, convenience. You pay a set price and can pick from a diverse catalog to watch what you wanted, when you wanted, from the comfort of your home. As they did almost 100 years ago, studios suddenly realized the business opportunity presented in owning every step of the process of making entertainment. It’s just that those steps look different today than they did back then. Instead of owning the film processing labs, they now own the infrastructure in the form of internet service providers (ISPs). AT&T owns Warner Brothers and HBO. Comcast owns Universal and NBC. And so on. Instead of having creative people on restrictive contracts they… well, that they still do. Netflix regularly makes headlines for signing big names to exclusive deals. And studios buy up other studios and properties to lock down the rights to popular media. Disney in particular has bought up Star Wars and Marvel in a bid to put as much “intellectual property” under its exclusive control as possible, owning not just movie rights but every revenue stream a story can generate. As the saying goes, no one does to Disney what Disney did to the Brothers Grimm. Instead of owning theater chains, studios have all launched their own streaming services. 
And as with theaters, a lot of the convenience has been stripped. Studios saw streaming making money and did not want to let others reap the rewards, so they’ve been pulling their works and putting them under the exclusive umbrella of their own streaming services. Rather than having a huge catalog of diverse studio material, which is what made Netflix popular to begin with, convenience has been replaced with exclusivity. Of course, much like in the old days, the problem is that people don’t want everything a single studio offers. They want certain things. But a subscription fee isn’t for just what you want, it’s for everything. Much like the old theater chains, we are now blind buying the entire slate of whatever Disney, HBO, Amazon, etc. are offering. And, of course, they can’t take the chance that we’ll pay the monthly fee once, watch what we’re looking for, and cancel. So a lot of these exclusives are no longer released in binge-watching mode, but staggered to keep us paying every month for new installments. Which is how the new world of streaming is becoming a hybrid of the old world of cable TV and movie theaters. To watch something online legally these days is a very frustrating search of many, many services. You hope the thing you want is on one of the services you already pay for, and not on yet another new one. Sometimes, you’re looking for something that was on a service you paid for, but has now been locked into another one. Instead of building services that provide the convenience audiences want—with competition driving services to make better and better products for audiences—the value is now in making something rare. Something that can only be found on your service. And even if it is not good, it is at least tied to a popular franchise or some other thing people do not want to be left out of. Instead of building better services—faster internet access, better interfaces, better content—the model is all based on exclusive control. 
Many Americans don’t have a choice in their broadband provider, a monopoly ISPs jealously guard rather than building a service so good we’d pick it on purpose. Instead of choosing the streaming service with the best price or library or interface, we have to pay all of them. Our old favorites are locked down, so we can’t access everything in one place anymore. New things set in our favorite worlds are likewise locked down to certain services, and sometimes even to certain devices. And creators we like? Also locked into exclusive contracts at certain services. And the thing is, we know from history that this isn’t what consumers want. We know from the ‘30s and ‘40s that this kind of vertical integration is not good for creativity or for audiences. We know from the recent past that convenient, reasonably-priced, and legal internet services are what users want and will use. So we very much know that this system is untenable and anticompetitive, that it encourages copyright infringement and drives the growth of reactionary, draconian copyright laws that hurt innovators and independent creators. We also know what works. Antitrust enforcers back in the ‘30s and ‘40s recognized that a system like this should not exist and put a stop to it. Breaking the studios’ cartel in the ‘40s led to more independent movie theaters, more independent studios, and more creativity in movies in general. So why have we let this system regrow itself online?

  • Organizations Call on President Biden to Rescind President Trump’s Executive Order that Punished Online Social Media for Fact-Checking
    by Aaron Mackey on April 7, 2021 at 6:01 pm

    President Joe Biden should rescind a dangerous and unconstitutional Executive Order issued by President Trump that continues to threaten internet users’ ability to obtain accurate and truthful information online, six organizations wrote in a letter sent to the president on Wednesday. The organizations, Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, pressed Biden to remove his predecessor’s “Executive Order on Preventing Online Censorship” because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.” The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is currently pending in the U.S. Court of Appeals for the D.C. Circuit. (Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Trump.) As the letter explains, Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking May 2020 tweets spreading false information about mail-in voting. The Executive Order issued two days later sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and kickstarting a federal rulemaking to re-interpret Section 230. From the letter: His actions made clear that the full force of the federal government would be brought down on those whose speech he did not agree with, in an effort to coerce adoption of his own views and to prevent the dissemination of accurate information about voting. 
As the letter notes, despite President Biden eliminating other Executive Orders issued by Trump, the order targeting online services remains active. Biden’s inaction is troubling because the Executive Order “threatens democracy and the voting rights of people who have been underrepresented for generations,” the letter states. “Thus, your Administration is in the untenable position of defending an unconstitutional order that was issued with the clear purpose of chilling accurate information about the 2020 election and undermining public trust in the results,” the letter continues. The letter concludes: Eliminating this egregiously unconstitutional hold-over from the prior Administration vindicates the Constitution’s protections for both online services and the users who rely on them for accurate, truthful information about voting rights. Related Cases: Rock the Vote v. Trump

  • India’s Strict Rules For Online Intermediaries Undermine Freedom of Expression
    by Katitza Rodriguez on April 7, 2021 at 3:06 pm

    India has introduced draconian changes to its rules for online intermediaries, tightening government control over the information ecosystem and what can be said online. It has created rules that seek to restrict social media companies and other content hosts from coming up with their own moderation policies, including those framed to comply with international human rights obligations. The new “Intermediary Guidelines and Digital Media Ethics Code” (2021 Rules) have already been used in an attempt to censor speech about the government. Within days of being published, the rules were used by a state in which the ruling Bharatiya Janata Party is in power to issue a legal notice to an online news platform that has been critical of the government. The legal notice was withdrawn almost immediately after public outcry, but served as a warning of how the rules can be used. The 2021 Rules, ostensibly created to combat misinformation and illegal content, substantially revise India’s intermediary liability scheme. They were notified as rules under the Information Technology Act 2000, replacing the 2011 Intermediary Rules. New Categories of Intermediaries The 2021 Rules create two new subsets of intermediaries: “social media intermediaries” and “significant social media intermediaries,” the latter of which are subject to more onerous regulations. The due diligence requirements for these companies include having proactive speech monitoring, compliance personnel who reside in India, and the ability to trace and identify the originator of a post or message. 
“Social media intermediaries” are defined broadly, as entities which primarily or solely “enable online interaction between two or more users and allow them to create, upload, share, disseminate, modify or access information using its services.” Obvious examples include Facebook, Twitter, and YouTube, but the definition could also include search engines and cloud service providers, which are not social media in a strict sense. “Significant social media intermediaries” are those with registered users in India above a 5 million threshold. But the 2021 Rules also allow the government to deem any “intermediary” – including telecom and internet service providers, web-hosting services, and payment gateways – a ‘significant’ social media intermediary if it creates a “material risk of harm” to the sovereignty, integrity, and security of the state, friendly relations with Foreign States, or public order. For example, a private messaging app can be deemed “significant” if the government decides that the app allows the “transmission of information” in a way that could create a “material risk of harm.” The power to deem ordinary intermediaries as significant also encompasses ‘parts’ of services, which are “in the nature of an intermediary” – like Microsoft Teams and other messaging applications. New  ‘Due Diligence’ Obligations The 2021 Rules, like their predecessor 2011 Rules, enact a conditional immunity standard. They lay out an expanded list of due diligence obligations that intermediaries must comply with in order to avoid being held liable for content hosted on their platforms. Intermediaries are required to incorporate content rules—designed by the Indian government itself—into their policies, terms of service, and user agreements. 
The 2011 Rules contained eight categories of speech that intermediaries must notify their users not to “host, display, upload, modify, publish, transmit, store, update or share.” These include content that violates Indian law, but also many vague categories that could lead to censorship of legitimate user speech. By complying with government-imposed restrictions, companies cannot live up to their responsibility to respect international human rights, in particular freedom of expression, in their daily business conduct.  Strict Turnaround for Content Removal The 2021 Rules require all intermediaries to remove restricted content within 36 hours of obtaining actual knowledge of its existence, taken to mean a court order or notification from a government agency. The law gives non-judicial government bodies great authority to compel intermediaries to take down restricted content. Platforms that disagree with or challenge government orders face penal consequences under the Information Technology Act and criminal law if they fail to comply. The Rules impose strict turnaround timelines for responding to government orders and requests for data. Intermediaries must provide information within their control or possession, or ‘assistance,’ within 72 hours to government agencies for a broad range of purposes: verification of identity, or the prevention, detection, investigation, or prosecution of offenses or for cybersecurity incidents. In addition, intermediaries are required to remove or disable, within 24 hours of receiving a complaint, non-consensual sexually explicit material or material in the “nature of impersonation in an electronic form, including artificially morphed images of such individuals.” The deadlines do not provide sufficient time to assess complaints or government orders. To meet them, platforms will be compelled to use automated filter systems to identify and remove content. 
These error-prone systems can filter out legitimate speech and are a threat to users’ rights to free speech and expression. Failure to comply with these rules could lead to severe penalties, such as a jail term of up to seven years. In the past, the Indian government has threatened company executives with prosecution – as, for instance, when they served a legal notice on Twitter, asking the company to explain why recent territorial changes in the state of Kashmir were not reflected accurately on the platform’s services. The notice threatened to block Twitter or imprison its executives if a “satisfactory” explanation was not furnished. Similarly, the government threatened Twitter executives with imprisonment when they reinstated content about farmer protests that the government had ordered them to take down. Additional Obligations for Significant Social Media Intermediaries On a positive note, the Rules require significant social media intermediaries to have transparency and due process rules in place for content takedowns. Companies must notify users when their content is removed, explain why it was taken down, and provide an appeals process. On the other hand, the 2021 Rules compel providers to appoint an Indian resident “Chief Compliance Officer,” who will be held personally liable in any proceedings relating to non-compliance with the rules, and a “Resident Grievance Officer” responsible for responding to users’ complaints and government and court orders. Companies must also appoint a resident employee to serve as a contact person for coordination with law enforcement agencies. With more executives residing in India, where they could face prosecution, intermediaries may find it difficult to challenge or resist arbitrary and disproportionate government orders. 
Proactive Monitoring Significant social media intermediaries are called on to “endeavour to deploy technology-based measures,” including automated tools or other mechanisms, to proactively identify certain types of content. This includes information depicting rape or child sexual abuse and content that has previously been removed for violating rules. The stringent provisions in the 2021 Rules already encourage over-removal of content; requiring intermediaries to deploy automated filters will likely result in more takedowns. Encryption and Traceability Requirements The Indian government has been wrangling with messaging app companies—most famously WhatsApp—for several years now, demanding “traceability” of the originators of forwarded messages. The demand first emerged in the context of a series of mob lynchings in India, triggered by rumors that went viral on WhatsApp. Subsequently, petitions were filed in Indian courts seeking to link social networking accounts with the users’ biometric identity (Aadhar) numbers. Although the court ruled against the proposal, expert opinions supplied by a member of the Prime Minister’s scientific advisory committee suggested technical measures to enable traceability on end-to-end encrypted platforms. Because of their privacy and security features, some messaging systems don’t learn or record the history of who first created particular content that was then forwarded by others, a state of affairs that the Indian government and others have found objectionable. The 2021 Rules represent a further escalation of this conflict, requiring private messaging intermediaries to “enable the identification of the first originator of the information” upon a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource). 
If the first originator of a message is located outside the territory of India, the private messaging app will be compelled to identify the first originator of that information within India. The 2021 Rules place various limitations on these court orders, namely, that they can only be issued for serious crimes. However, limitations will not solve the core problem with this proposal: a technical mandate for companies to reengineer or re-design messaging services to comply with the government’s demand to identify the originator of a message. Conclusion The 2021 Rules were fast-tracked without public consultation or a pre-legislative consultation, where the government seeks recommendations from stakeholders in a transparent process. They will have profound implications for the privacy and freedom of expression of Indian users. They restrict companies’ discretion in moderating their own platforms and create new possibilities for government surveillance of citizens. These rules threaten the idea of a free and open internet built on a bedrock of international human rights standards.

  • The EU Online Terrorism Regulation: a Bad Deal
    by Jillian C. York on April 7, 2021 at 7:00 am

    On 12 September 2018, the European Commission presented a proposal for a regulation on preventing the dissemination of terrorist content online—dubbed the Terrorism Regulation, or TERREG for short—that contained some alarming ideas. In particular, the proposal included an obligation for platforms to remove potentially terrorist content within one hour, following an order from national competent authorities. Ideas such as this one have been around for some time already. In 2016, we first wrote about the European Commission’s attempt to create a voluntary agreement for companies to remove certain content (including terrorist expression) within 24 hours, and Germany’s Network Enforcement Act (NetzDG) requires the same. NetzDG has spawned dozens of copycats throughout the world, including in countries like Turkey with far fewer protections for speech, and human rights more generally. Beyond the one-hour removal requirement, the TERREG also contained a broad definition of terrorist content as “material that incites or advocates committing terrorist offences, promotes the activities of a terrorist group or provides instructions and techniques for committing terrorist offences”. Furthermore, it introduced a duty of care for all platforms to avoid being misused for the dissemination of terrorist content. This includes the requirement of taking proactive measures to prevent the dissemination of such content. These rules were accompanied by a framework of cooperation and enforcement. These aspects of the TERREG are particularly concerning, as research we’ve conducted in collaboration with other groups demonstrates that companies routinely make content moderation errors that remove speech that parodies or pushes back against terrorism, or documents human rights violations in countries like Syria that are experiencing war. 
TERREG and human rights

TERREG was created without real consultation of free expression and human rights groups, and it has serious repercussions for online expression. Even worse, the proposal was adopted based on political spin rather than evidence. Notably, in 2019, the EU Fundamental Rights Agency (FRA)—tasked with an opinion by the EU Parliament—expressed concern about the regulation. In particular, the FRA noted that the definition of terrorist content had to be modified, as it was too wide and would interfere with freedom of expression rights. Also, “According to the FRA, the proposal does not guarantee the involvement by the judiciary and the Member States’ obligation to protect fundamental rights online has to be strengthened.” Together with many other civil society groups, we voiced our deep concern over the proposed legislation and stressed that the new rules would pose serious threats to the fundamental rights of privacy and freedom of expression. The message to EU policymakers was clear:

  • Abolish the one-hour time frame for content removal, which is too tight for platforms and will lead to over-removal of content;
  • Respect the principles of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders;
  • Ensure due process and clarify that the legality of content be determined by a court or independent administrative authority;
  • Don’t impose the use of upload or re-upload filters (automated content recognition technologies) on services under the scope of the Regulation;
  • Exempt certain protected forms of expression, such as educational, artistic, journalistic, and research materials.
However, while the responsible committees of the EU Parliament showed willingness to take the concerns of civil society groups into account, things looked grimmer in the Council, where government ministers from each EU country meet to discuss and adopt laws. During the closed-door negotiations between the EU institutions to strike a deal, different versions of TERREG were discussed, culminating in further letters from civil society groups urging lawmakers to ensure key safeguards for freedom of expression and the rule of law. Fortunately, civil society groups and fundamental rights-friendly MEPs in the Parliament were able to achieve some of their goals. For example, the agreement reached by the EU institutions includes exceptions for journalistic, artistic, and educational purposes. Another major improvement concerns the definition of terrorist content (now matching the narrower definition of the EU Directive on combating terrorism) and the option for host providers to invoke technical and operational reasons for not complying with the strict one-hour removal obligation. And most importantly, the deal states that authorities cannot impose upload filters on platforms.

The Deal Is Still Not Good Enough

While civil society intervention has resulted in a series of significant improvements to the law, there is more work to be done. The proposed regulation still gives broad powers to national authorities, without judicial oversight, to censor online content that they deem to be “terrorism” anywhere in the EU, within a one-hour timeframe, and it incentivizes companies to delete more content of their own volition. It further encourages the use of automated tools, without any guarantee of human oversight. Now, a broad coalition of civil society organizations is voicing its concerns to the Parliament, which must agree to the deal for it to become law. EFF and others suggest that the Members of the European Parliament should vote against the adoption of the proposal.
We encourage our followers to raise awareness about the implications of TERREG and reach out to their national members of the EU Parliament.

  • Victory for Fair Use: The Supreme Court Reverses the Federal Circuit in Oracle v. Google
    by Michael Barclay on April 6, 2021 at 12:34 am

In a win for innovation, the U.S. Supreme Court has held that Google’s use of certain Java Application Programming Interfaces (APIs) is a lawful fair use. In doing so, the Court reversed the previous rulings by the Federal Circuit and recognized that copyright only promotes innovation and creativity when it provides breathing room for those who are building on what has come before. This decision gives more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day. To briefly summarize over ten years of litigation: Oracle claims a copyright on the Java APIs—essentially names and formats for calling computer functions—and claims that Google infringed that copyright by using (reimplementing) certain Java APIs in the Android OS. When it created Android, Google wrote its own set of basic functions similar to Java (its own implementing code). But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs (sometimes called the “declaring code”). APIs provide a common language that lets programs talk to each other. They also let programmers operate with a familiar interface, even on a competitive platform. It would strike at the heart of innovation and collaboration to declare them copyrightable. EFF filed numerous amicus briefs in this case explaining why the APIs should not be copyrightable and why, in any event, it is not infringement to use them in the way Google did. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its first decision—that APIs are entitled to copyright protection—ran contrary to the views of most other courts and the long-held expectations of computer scientists.
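The distinction between “declaring code” and “implementing code” can be sketched in a few lines of Java. This is a hypothetical illustration only; the class and method below are invented for the example and are not actual Java SE source:

```java
// Hypothetical example of the two kinds of code at issue in the case.
public class MathUtils {
    // "Declaring code": the method's name, parameter types, and return type.
    // This is the part of the Java API specification that Google reused in
    // Android, so programmers could keep calling familiar functions the
    // familiar way.
    public static int max(int a, int b) {
        // "Implementing code": the body that actually performs the task.
        // Google wrote its own, independent versions of this part.
        return (a >= b) ? a : b;
    }
}
```

A competing platform that copies only the declaration can let developers reuse their existing knowledge while supplying entirely new code behind it, which is essentially what Google did when building Android.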
Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the internet. Then the second decision made things worse. The Federal Circuit’s first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury’s verdict and held that Google had not engaged in fair use as a matter of law. Fortunately, the Supreme Court agreed to review the case. In a 6-2 decision, Justice Breyer explained why Google’s use of the Java APIs was a fair use as a matter of law. First, the Court discussed some basic principles of the fair use doctrine, writing that fair use “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.” Furthermore, the court stated: Fair use “can play an important role in determining the lawful scope of a computer program copyright . . . It can help to distinguish among technologies. It can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms in other markets or to the development of other products.” In doing so, the decision underlined the real purpose of copyright: to incentivize innovation and creativity. When copyright does the opposite, fair use provides an important safety valve. Justice Breyer then turned to the specific fair use statutory factors. Appropriately for a functional software copyright case, he first discussed the nature of the copyrighted work.
The Java APIs are a “user interface” that allow users (here the developers of Android applications) to “manipulate and control” task-performing computer programs. The Court observed that the declaring code of the Java APIs differs from other kinds of copyrightable computer code—it’s “inextricably bound together” with uncopyrightable features, such as a system of computer tasks and their organization and the use of specific programming commands (the Java “method calls”). As the Court noted: Unlike many other programs, its value in significant part derives from the value that those who do not hold copyrights, namely, computer programmers, invest of their own time and effort to learn the API’s system. And unlike many other programs, its value lies in its efforts to encourage programmers to learn and to use that system so that they will use (and continue to use) Sun-related implementing programs that Google did not copy. Thus, since the declaring code is “further than are most computer programs (such as the implementing code) from the core of copyright,” this factor favored fair use. Justice Breyer then discussed the purpose and character of the use. Here, the opinion shed some important light on when a use is “transformative” in the context of functional aspects of computer software, creating something new rather than simply taking the place of the original. Although Google copied parts of the Java API “precisely,” Google did so to create products fulfilling new purposes and to offer programmers “a highly creative and innovative tool” for smartphone development. Such use “was consistent with that creative ‘progress’ that is the basic constitutional objective of copyright itself.” The Court discussed “the numerous ways in which reimplementing an interface can further the development of computer programs,” such as allowing different programs to speak to each other and letting programmers continue to use their acquired skills.
The jury also heard that reuse of APIs is common industry practice. Thus, the opinion concluded that the “purpose and character” of Google’s copying was transformative, so the first factor favored fair use. Next, the Court considered the third fair use factor, the amount and substantiality of the portion used. As a factual matter in this case, the 11,500 lines of declaring code that Google used were less than one percent of the total Java SE program. And even the declaring code that Google used was to permit programmers to utilize their knowledge and experience working with the Java APIs to write new programs for Android smartphones. Since the amount of copying was “tethered” to a valid and transformative purpose, the “substantiality” factor favored fair use. Finally, several reasons led Justice Breyer to conclude that the fourth factor, market effects, favored Google. Independent of Android’s introduction in the marketplace, Sun didn’t have the ability to build a viable smartphone. And any sources of Sun’s lost revenue were a result of the investment by third parties (programmers) in learning and using Java. Thus, “given programmers’ investment in learning the Sun Java API, to allow enforcement of Oracle’s copyright here would risk harm to the public. Given the costs and difficulties of producing alternative APIs with similar appeal to programmers, allowing enforcement here would make of the Sun Java API’s declaring code a lock limiting the future creativity of new programs.” This “lock” would interfere with copyright’s basic objectives. The Court concluded that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.” The Supreme Court left for another day the issue of whether functional aspects of computer software are copyrightable in the first place.
Nevertheless, we are pleased that the Court recognized the overall importance of fair use in software cases, and the public interest in allowing programmers, developers, and other users to continue to use their acquired knowledge and experience with software interfaces in subsequent platforms. Related Cases: Oracle v. Google

  • 553,000,000 Reasons Not to Let Facebook Make Decisions About Your Privacy
    by Cory Doctorow on April 6, 2021 at 12:03 am

Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that the real problem is bad third-party actors, and that we should trust Facebook to make more decisions about our data to protect against them. If history is any indication, that’ll work. But if we finally wise up, we’ll respond to this latest crisis with serious action: passing America’s long-overdue federal privacy law (with a private right of action) and forcing interoperability on Facebook so that its user-hostages can escape its walled garden. Facebook created this problem, but that doesn’t make the company qualified to fix it, nor does it mean we should trust them to do so. In January 2021, Motherboard reported on a bot that was selling records from a 500 million-plus person trove of Facebook data, offering phone numbers and other personal information. Facebook said the data had been scraped using a bug that was available as early as 2016, and which the company claimed to have patched in 2019. Last week, a dataset containing 553 million Facebook users’ data—including phone numbers, full names, locations, email addresses, and biographical information—was published for free online. (It appears this is the same dataset Motherboard reported on in January.) More than half a billion current and former Facebook users are now at high risk of various kinds of fraud. While this breach is especially ghastly, it’s also just another scandal for Facebook, a company that spent decades pursuing deceptive and anticompetitive tactics to amass largely nonconsensual dossiers on its 2.6 billion users, as well as on billions of people who have never had a Facebook, Instagram, or WhatsApp account.
Based on past experience, Facebook’s next move is all but inevitable: after expressing regret over this irreversible data breach, the company will double down on the tactics that lock its users into its walled gardens, in the name of defending their privacy. That’s exactly what the company did during the Cambridge Analytica fiasco, when it used the pretense of protecting users from dangerous third parties to lock out competitors, including those who use Facebook’s APIs to help users part ways with the service without losing touch with their friends, families, communities, and professional networks. According to Facebook, the data in this half-billion-person breach was harvested thanks to a bug in its code. We get that. Bugs happen. That’s why we’re totally unapologetic about defending the rights of security researchers and other bug-hunters who help discover and fix those bugs. The problem isn’t that a Facebook programmer made a mistake: the problem is that this mistake was so consequential. Facebook doesn’t need all this data to offer its users a social networking experience: it needs that data so it can market itself to advertisers, who paid the company $84.1 billion in 2020. It warehoused that data for its own benefit, in full knowledge that bugs happen, and that a bug could expose all of that data, permanently. Given all that, why do users stay on Facebook? For many, it’s a hostage situation: their friends, families, communities, and professional networks are on Facebook, so that’s where they have to be. Meanwhile, those friends, family members, communities, and professional networks are stuck on Facebook because their friends are there, too. Deleting Facebook comes at a very high cost. It doesn’t have to be this way.
Historically, new online services—including, at one time, Facebook—have smashed big companies’ walled gardens, allowing those former user-hostages to escape from dominant services but still exchange messages with the communities they left behind, using techniques like scraping, bots, and other honorable tools of reverse-engineering freedom fighters.  Facebook has gone to extreme lengths to keep this from ever happening to its services. Not only has it sued rivals who gave its users the ability to communicate with their Facebook friends without subjecting themselves to Facebook’s surveillance, the company also bought out successful upstart rivals specifically because it knew it was losing users to them. It’s a winning combination: use the law to prevent rivals from giving users more control over their privacy, use the monopoly rents those locked-in users generate to buy out anyone who tries to compete with you. Those 553,000,000 users whose lives are now an eternal open book to the whole internet never had a chance. Facebook took them hostage. It harvested their data. It bought out the services they preferred over Facebook.  And now that 553,000,000 people should be very, very angry at Facebook, we need to watch carefully to make sure that the company doesn’t capitalize on their anger by further increasing its advantage. As governments from the EU to the U.S. to the UK consider proposals to force Facebook to open up to rivals so that users can leave Facebook without shattering their social connections, Facebook will doubtless argue that such a move will make it impossible for Facebook to prevent the next breach of this type. Facebook is also likely to weaponize this breach in its ongoing war against accountability: namely, against a scrappy group of academics and Facebook users. 
Ad Observer and Ad Observatory are a pair of projects from NYU’s Online Transparency Project that scrape the ads their volunteers are served by Facebook and place them in a public repository, where scholars, researchers, and journalists can track how badly Facebook is living up to its promise to halt paid political disinformation. Facebook argues that any scraping—even highly targeted, careful, publicly auditable scraping that holds the company to account—is an invitation to indiscriminate mass-scraping of the sort that compromised the half-billion-plus users in the current breach. Instead of scraping its ads, the company says that its critics should rely on a repository that Facebook itself provides, and trust that the company will voluntarily reveal any breaches of its own policies. From Facebook’s point of view, a half-billion-person breach is a half-billion excuses not to open its walled garden or permit accountability research into its policies. In fact, the worse the breach, the more latitude Facebook will argue it should get: “If this is what happens when we’re not being forced to allow competitors and critics to interoperate with our system, imagine what will happen if these digital trustbusters get their way!” Don’t be fooled. Privacy does not come from monopoly. No one came down off a mountain with two stone tablets, intoning “Thou must gather and retain as much user data as is technologically feasible!” The decision to gobble up all this data and keep it around forever has very little to do with making Facebook a nice place to chat with your friends and everything to do with maximizing the company’s profits. Facebook’s data breach problems are the inevitable result of monopoly, in particular the knowledge that it can heap endless abuses on its users and retain them.
Even if they resign from Facebook, they’re going to end up on acquired Facebook subsidiaries like Instagram or WhatsApp, and even if they don’t, Facebook will still get to maintain its dossiers on their digital lives. Facebook’s breaches are proof that we shouldn’t trust Facebook—not that we should trust it more. Creating a problem in no way qualifies you to solve that problem. As we argued in our January white paper, Privacy Without Monopoly: Data Protection and Interoperability, the right way to protect users is with a federal privacy law with a private right of action. Right now, Facebook’s users have to rely on Facebook to safeguard their interests. That doesn’t just mean crossing their fingers and hoping Facebook won’t make another half-billion-user blunder—it also means hoping that Facebook won’t intentionally disclose their information to a third party as part of its normal advertising activities. Facebook is not qualified to decide what the limits on its own data-processing should be. Those limits should come from democratically accountable legislatures, not autocratic billionaire CEOs. America is sorely lacking a federal privacy law, particularly one that empowers internet users to sue companies that violate their privacy. A privacy law with a private right of action would mean that you wouldn’t be hostage to the self-interested privacy decisions of vast corporations, and it would mean that when they did you dirty, you could get justice on your own, without having to convince a District Attorney or Attorney General to go to bat for you. A federal privacy law with a private right of action would open a vast possible universe of new interoperable services that plugged into companies like Facebook, allowing users to leave without cancelling their lives; these new services would have to play by the federal privacy rules, too.
That’s not what we’re going to hear from Facebook, though: in Facebookland, the answer to their abuse of our trust is to give them more of our trust; the answer to the existential crisis of their massive scale is to make them even bigger. Facebook created this problem, and they are absolutely incapable of solving it.

  • First Circuit Upholds First Amendment Right to Secretly Audio Record the Police
    by Sophia Cope on April 5, 2021 at 9:52 pm

    EFF applauds the U.S. Court of Appeals for the First Circuit for holding that the First Amendment protects individuals when they secretly audio record on-duty police officers. EFF filed an amicus brief in the case, Martin v. Rollins, which was brought by the ACLU of Massachusetts on behalf of two civil rights activists. This is a victory for people within the jurisdiction of the First Circuit (Massachusetts, Maine, New Hampshire, Puerto Rico and Rhode Island) who want to record an interaction with police officers without exposing themselves to possible reprisals for visibly recording. The First Circuit struck down as unconstitutional the Massachusetts anti-eavesdropping (or wiretapping) statute to the extent it prohibits the secret audio recording of police officers performing their official duties in public. The law generally makes it a crime to secretly audio record all conversations without consent, even where participants have no reasonable expectation of privacy, making the Massachusetts statute unique among the states. The First Circuit had previously held in Glik v. Cunniffe (2011) that the plaintiff had a First Amendment right to record police officers arresting another man in Boston Common. Glik had used his cell phone to openly record both audio and video of the incident. The court had held that the audio recording did not violate the Massachusetts anti-eavesdropping statute’s prohibition on secret recording because Glik’s cell phone was visible to officers. Thus, following Glik, the question remained open as to whether individuals have a First Amendment right to secretly audio record police officers, or if instead they could be punished under the Massachusetts statute for doing so. (A few years after Glik, in Gericke v. Begin (2014), the First Circuit held that the plaintiff had a First Amendment right to openly record the police during someone else’s traffic stop to the extent she wasn’t interfering with them.) 
The First Circuit in Martin held that recording on-duty police officers, even secretly, is protected newsgathering activity similar to that of professional reporters that “serve[s] the very same interest in promoting public awareness of the conduct of law enforcement—with all the accountability that the provision of such information promotes.” The court further explained that recording “play[s] a critical role in informing the public about how the police are conducting themselves, whether by documenting their heroism, dispelling claims of their misconduct, or facilitating the public’s ability to hold them to account for their wrongdoing.” The ability to secretly audio record on-duty police officers is especially important given that many officers retaliate against civilians who openly record them, as happened in a recent Tenth Circuit case. The First Circuit agreed with the Martin plaintiffs that secret recording can be a “better tool” to gather information about police officers, because officers are less likely to be disrupted and, more importantly, secret recording may be the only way to ensure that recording “occurs at all.” The court stated that “the undisputed record supports the Martin Plaintiffs’ concern that open recording puts them at risk of physical harm and retaliation.” Finally, the court was not persuaded that the privacy interests of civilians who speak with or near police officers are burdened by secretly audio recording on-duty police officers. The court reasoned that “an individual’s privacy interests are hardly at their zenith in speaking audibly in a public space within earshot of a police officer.” Given the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has been recognized by a growing number of federal jurisdictions. 
In addition to the First Circuit, federal appellate courts in the Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right. Disappointingly, the Tenth Circuit recently dodged the question. For all the reasons in the First Circuit’s Martin decision, the Tenth Circuit erred, and the remaining circuits must recognize the First Amendment right to record on-duty police officers as the law of the land.

  • Maine Should Take this Chance to Defund the Local Intelligence Fusion Center
    by Matthew Guariglia on April 2, 2021 at 6:18 pm

    Maine state representative Charlotte Warren has introduced LD1278 (HP938), or An Act To End the Maine Information and Analysis Center Program, a bill that would defund the Maine Information and Analysis Center (MIAC), also known as Maine’s only fusion center. EFF is happy to support this bill in hopes of defunding an unnecessary, intrusive, and often-harmful piece of the U.S. surveillance regime. You can read the full text of the bill here.  Fusion centers are yet another unnecessary cog in the surveillance state—and one that serves the intrusive function of coordinating surveillance activities and sharing information between federal law enforcement, the national security surveillance apparatus, and local and state police. Across the United States, there are at least 78 fusion centers that were formed by the Department of Homeland Security in the wake of the war on terror and the rise of post-9/11 mass surveillance. Since their creation, fusion centers have been hammered by politicians, academics, and civil society groups for their ineffectiveness, dysfunction, mission creep, and unregulated tendency to veer into political policing. As scholar Brendan McQuade wrote in his book Pacifying the Homeland: Intelligence Fusion and Mass Supervision, “On paper, fusion centers have the potential to organize dramatic surveillance powers. In practice however, what happens at fusion centers is circumscribed by the politics of law enforcement. The tremendous resources being invested in counterterrorism and the formation of interagency intelligence centers are complicated by organization complexity and jurisdictional rivalries. The result is not a revolutionary shift in policing but the creation of uneven, conflictive, and often dysfunctional intelligence-sharing systems.” But in recent months, the dysfunction of fusion centers and the ease with which they sink into policing First Amendment-protected activities have been on full display. 
After a series of leaks that revealed communications from inside police departments, fusion centers, and law enforcement agencies across the country, MIAC came under particular scrutiny for sharing dubious intelligence generated by far-right wing social media accounts with local law enforcement. Specifically, the Maine fusion center helped perpetuate disinformation that stacks of bricks and stones had been strategically placed throughout a Black Lives Matter protest as part of a larger plan for destruction, and caused police to plan and act accordingly. This was, to put it plainly, a government intelligence agency spreading fake news that could have gotten people exercising their First Amendment rights hurt. This is in addition to a whistleblower lawsuit from a state trooper alleging that the fusion center routinely violated civil rights. The first decades of the twenty-first century have been characterized by a blank check to grow and expand the infrastructure that props up mass surveillance. Fusion centers are at the very heart of that excess. They have proven themselves to be unreliable and even harmful to the people the national security apparatus claims to want to protect. Why do states continue to fund intelligence fusion when, at its best, it enacts political policing that poses an existential threat to immigrants, activists, and protestors—and at worst, it actively disseminates false information to police? We echo the sentiments of Representative Charlotte Warren and other dedicated Maine residents who say it’s time to shift MIAC’s nearly million-dollar per year budget towards more useful programs. Maine, pass LD1278 and defund the Maine Information and Analysis Center.

  • Ethos Capital Is Grabbing Power Over Domain Names Again, Risking Censorship-For-Profit. Will ICANN Intervene?
    by Mitch Stoltz on April 2, 2021 at 5:15 am

    Ethos Capital is at it again. In 2019, this secretive private equity firm that includes insiders from the domain name industry tried to buy the nonprofit that runs the .ORG domain. A huge coalition of nonprofits and users spoke out. Governments expressed alarm, and ICANN (the entity in charge of the internet’s domain name system) scuttled the sale. Now Ethos is buying a controlling stake in Donuts, the largest operator of “new generic top-level domains.” Donuts controls a large swathe of the domain name space. And through a recent acquisition, it also runs the technical operations of the .ORG domain. This acquisition raises the threat of increased censorship-for-profit: suspending or transferring domain names against the wishes of the user at the request of powerful corporations or governments. That’s why we’re asking the ICANN Board to demand changes to Donuts’ registry contracts to protect its users’ speech rights. Donuts is big. It operates about 240 top-level domains, including .charity, .community, .fund, .healthcare, .news, .republican, and .university. And last year it bought Afilias, another registry company that also runs the technical operations of the .ORG domain. Donuts already has questionable practices when it comes to safeguarding its users’ speech rights. Its contracts with ICANN contain unusual provisions that give Donuts an unreviewable and effectively unlimited right to suspend domain names—causing websites and other internet services to disappear. Relying on those contracts, Donuts has cozied up to powerful corporate interests at the expense of its users. In 2016, Donuts made an agreement with the Motion Picture Association to suspend domain names of websites that MPA accused of copyright infringement, without any court process or right of appeal. These suspensions happen without transparency: Donuts and MPA haven’t even disclosed the number of domains that have been suspended through their agreement since 2017. 
Donuts also gives trademark holders the ability to pay to block the registration of domain names across all of Donuts’ top-level domains. In effect, this lets trademark holders “own” words and prevent others from using them as domain names, even in top-level domains that have nothing to do with the products or services for which a trademark is used. It’s a legal entitlement that isn’t part of any country’s trademark law, and it was considered and rejected by ICANN’s multistakeholder policy-making community. These practices could accelerate and expand with Ethos Capital at the helm. As we learned last year during the fight for .ORG, Ethos expects to deliver high returns to its investors while preserving its ability to change the rules for domain name registrants, potentially in harmful ways. Ethos refused meaningful dialogue with domain name users, instead proposing an illusion of public oversight and promoting it with a slick public relations campaign. And private equity investors have a sordid record of buying up vital institutions like hospitals, burdening them with debt, and leaving them financially shaky or even insolvent. Although Ethos’s purchase of Donuts appears to have been approved by regulators, ICANN should still intervene. Like all registry operators, Donuts has contracts with ICANN that allow it to run the registry databases for its domains. ICANN should give this acquisition as much scrutiny as it gave Ethos’s attempt to buy .ORG. And to prevent Ethos and Donuts from selling censorship as a service at the expense of domain name users, ICANN should insist on removing the broad grants of censorship power from Donuts’ registry contracts. ICANN did the right thing last year when confronted with the takeover of .ORG. We hope it does the right thing again by reining in Ethos and Donuts.  

  • Content Moderation Is A Losing Battle. Infrastructure Companies Should Refuse to Join the Fight
    by Corynne McSherry on April 2, 2021 at 12:44 am

    It seems like every week there’s another Big Tech hearing accompanied by a flurry of mostly bad ideas for reform. Two events set last week’s hubbub apart, both involving Facebook. First, Mark Zuckerberg took a new step in his blatant effort to use 230 reform to entrench Facebook’s dominance. Second, new reports are demonstrating, if further demonstration were needed, how badly Facebook is failing at policing the content on its platform with any consistency whatsoever. The overall message is clear: if content moderation doesn’t work even with the kind of resources Facebook has, then it won’t work anywhere.

    Inconsistent Policies Harm Speech in Ways That Are Exacerbated the Further Along the Stack You Go

    Facebook has been swearing for many months that it will do a better job of rooting out “dangerous content.” But a new report from the Tech Transparency Project demonstrates that it is failing miserably. Last August, Facebook banned some militant groups and other extremist movements tied to violence in the U.S. Yet Facebook is still helping expand the groups’ reach by automatically creating new pages for them and directing people who “like” certain militia pages to check out others, effectively helping these movements recruit and radicalize new members. These groups often share images of guns and violence, misinformation about the pandemic, and racist memes targeting Black Lives Matter activists. QAnon pages also remain live despite Facebook’s claim to have taken them down last fall. Meanwhile, a new leak of Facebook’s internal guidelines shows how much it struggles to come up with consistent rules for users living under repressive governments. 
    For example, the company forbids “dangerous organizations”—including, but not limited to, designated terrorist organizations—but allows users in certain countries to praise mass murderers and “violent non-state actors” (designated militant groups that do not target civilians) unless their posts contain an explicit reference to violence. A Facebook spokesperson told the Guardian: “We recognise that in conflict zones some violent non-state actors provide key services and negotiate with governments – so we enable praise around those non-violent activities but do not allow praise for violence by these groups.” The problem is not that Facebook is trying to create space for some speech – it should probably do more of that. But the current approach is simply incoherent. Like other platforms, Facebook does not base its guidelines on international human rights frameworks, nor do the guidelines necessarily adhere to local laws and regulations. Instead, they seem to be based on what Facebook policymakers think is best. The capricious nature of the guidelines is especially clear with respect to LGBTQ+ content. For example, Facebook has limited use of the rainbow “like” button in certain regions, including the Middle East, ostensibly to keep users there safe. But in reality, this denies members of the LGBTQ+ community there the same range of expression as other users, and it is hypocritical given that Facebook refuses to bend its “authentic names” policy to protect those same users. Whatever Facebook’s intent, in practice it is taking sides in a region that it doesn’t seem to understand. 
    Or as Lebanese researcher Azza El Masri put it on Twitter: “The directive to let pro-violent/terrorist content up in Myanmar, MENA, and other regions while critical content gets routinely taken down shows the extent to which [Facebook] is willing to go to appease our oppressors.” This is not the only example of a social media company making inconsistent decisions about what expression to allow. Twitter, for instance, bans alcohol advertising from every Arab country, including several (such as Lebanon and Egypt) where the practice is perfectly legal. Microsoft Bing once limited sexual search terms from the entire region, despite not being asked by governments to do so. Now imagine the same kinds of policies being applied to internet access. Or website hosting. Or cloud storage.

    All the Resources in the World Can’t Make Content Moderation Work at Scale

    Facebook’s lopsided policies are deserving of critique and point to a larger problem that too much focus on specific policies misses: if Facebook, with the money to hire thousands of moderators, implement filters, and fund an Oversight Board, can’t manage to develop and implement a consistent, coherent, and transparent moderation policy, maybe we should finally admit that we can’t look to social media platforms to solve deep-seated political problems – and we should stop trying. Even more importantly, we should call a halt to any effort to extend this mess beyond platforms. If two decades of experience with social media has taught us anything, it is that these companies are bad at creating and implementing consistent, coherent policies. At least when a social media company makes an error in judgment, its impact is relatively limited. At the infrastructure level, however, those decisions necessarily hit harder and wider. If an internet service provider (ISP) shut off access for LGBTQ+ individuals based on the same capricious whims as Facebook, it would be a disaster. 
    What Infrastructure Companies Can Learn

    The full infrastructure of the internet, or the “full stack,” is made up of a range of companies and intermediaries, ranging from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services. For most of us, most of the stack is invisible. We send email, tweet, post, upload photos, and read blog posts without thinking about all the services that have to function to get the content from the original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about AWS at all. We are more aware of the content moderation decisions—and mistakes—made by the consumer-facing platforms. We have detailed many times the chilling effect and other problems with opaque, bad, or inconsistent content moderation decisions from companies like Facebook. But when ISPs or intermediaries decide to wade into the content moderation game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. For example, too many people in the United States and overseas have only one choice for an ISP. If the only broadband provider in your area cuts you off because they (or your government) didn’t like what you said online—or what someone else whose name is on the account said—how can you get back online? Further, at the infrastructure level, services usually cannot target their response narrowly. 
    Twitter can shut down individual accounts; when those users migrate to Parler and continue to engage in offensive speech, AWS can only deny service to the entire site, including speech that is entirely unobjectionable. And that is exactly why ISPs and intermediaries need to stay away from this fight entirely. The risks of getting it wrong at the infrastructure level are far too great. It is easy to understand why repressive governments (and some advocates) want to pressure ISPs and intermediaries in the stack to moderate content: it is a broad, blunt, and effective way to silence certain voices. Some intermediaries might also feel compelled to moderate aggressively in the hopes of staving off criticism down the line. As last week’s hearing showed, this tactic will not work. The only way to avoid the pressure is to stake out an entirely different approach. To be clear, in the United States, businesses have a constitutional right to decide what content they want to host. That’s why lawmakers who are tempted to pass laws punishing intermediaries beyond platforms in the stack for their content moderation decisions would face the same kind of First Amendment problems as any bill attempting to meddle with speech rights. But just because something is legally permissible does not mean it is the right thing to do, especially when implementation will vary depending on who is asking for it, and when. Content moderation is empirically impossible to do well at scale; given the impact of the inevitable mistakes, ISPs and infrastructure intermediaries should not try. Instead, they should reject pressure to moderate like platforms, and clarify that they are much more like the local power company. If you wouldn’t want the power company shutting off service to a house just because someone doesn’t like what’s going on inside, you shouldn’t want a domain name registrar freezing a domain name because someone doesn’t like a site, or an ISP shutting down an account. 
    And if you wouldn’t hold the power company responsible for behavior you don’t like just because that behavior relied on electricity, you shouldn’t hold an ISP, a domain name registrar, a CDN, or any other intermediary responsible for behavior or speech that relies on their services either. If more than two decades of social media content moderation has taught us anything, it is that we cannot tech our way out of a fundamentally political problem. Social media companies have tried and failed to do so; beyond the platform, companies should refuse to replicate those failures.

  • The FCC Wants Your Broadband Horror Stories: You Know What to Do
    by Chao Liu on April 1, 2021 at 9:41 pm

    At long last, the Federal Communications Commission (FCC) is asking for your broadband experiences. When you submit your experiences here, you will let the FCC know whether you have been adequately served by your internet service provider (ISP). The feedback you provide informs future broadband availability as well as funding, standards, and federal policy. Traditionally, the FCC credulously relied on monopolistic ISPs to self-report coverage and service, which allowed these giant businesses to paint a deceptive, deeply flawed portrait of broadband service where everything was generally just fine. It was not fine. It is not fine. The pandemic demonstrated how millions are left behind or stuck with second-rate service, in a digital age where every aspect of a thriving, prosperous life turns on the quality of your broadband. Just look at the filings from Frontier’s recent bankruptcy and see how mismanagement, misconduct, and poor service are standard industry practice. It’s not just Frontier, either: recurring horror stories of ISPs not delivering upon their basic promise of service by upload throttling customers or even harassing customers seeking to cancel service demonstrate that ISPs don’t think of us as customers, but rather as captives to their monopolies. Last Wednesday, the White House announced a plan to invest $100 billion in building and improving high-speed broadband infrastructure. It’s overdue. Last February, Consumer Reports released a survey which found that 75% of Americans say they rely on the internet to carry out their daily activities seven days a week. EFF has long advocated for broadband for all, and today we are part of a mass movement demanding universal and affordable access for all people so that they may be full participants in twenty-first century society. Trump’s FCC, under the chairmanship of former Verizon executive Ajit Pai, threw away citizen comments opposing the 2017 net neutrality repeal. 
    It’s taken years to learn what was in those comments. Now that the FCC is finally asking seriously for citizen comments, this is your chance to let them know just how badly you’ve been treated, and to demand an end to the long, miserable decades in which monopolistic ISPs got away with charging sky-high rates for some of the worst service among advanced broadband markets. We all deserve better.

    Submit Your Comments

  • Tenth Circuit Misses Opportunity to Affirm the First Amendment Right to Record the Police
    by Sophia Cope on April 1, 2021 at 5:47 pm

    We are disappointed that the U.S. Court of Appeals for the Tenth Circuit this week dodged a critical constitutional question: whether individuals have a First Amendment right to record on-duty police officers. EFF had filed an amicus brief in the case, Frasier v. Evans, asking the court to affirm the existence of the right to record the police in the states under the court’s jurisdiction (Colorado, Oklahoma, Kansas, New Mexico, Wyoming, and Utah, and those portions of Yellowstone National Park extending into Montana and Idaho). Frasier had used his tablet to record Denver police officers engaging in what he believed to be excessive force: the officers repeatedly punched a suspect in the face to get drugs out of his mouth as his head bounced off the pavement, and they tripped his pregnant girlfriend. Frasier filed a First Amendment retaliation claim against the officers for detaining and questioning him, searching his tablet, and attempting to delete the video.

    Qualified Immunity Strikes Again

    In addition to refusing to affirmatively recognize the First Amendment right to record the police, the Tenth Circuit held that even if such a right did exist today, the police officers who sought to intimidate Frasier could not be held liable for violating his constitutional right because they had “qualified immunity”—that is, because the right to record the police wasn’t clearly established in the Tenth Circuit at the time of the incident in August 2014. The court held not only that the right had not been objectively established in federal case law, but also that it was irrelevant that the officers subjectively knew the right existed based on trainings they received from their own police department. Qualified immunity is a pernicious legal doctrine that often allows culpable government actors to avoid accountability for violations of constitutional rights. 
    Thus, the police officers who clearly retaliated against Frasier are off the hook, even though “the Denver Police Department had been training its officers since February 2007” that individuals have a First Amendment right to record them, and even though “each of the officers in this case had testified unequivocally that, as of August 2014, they were aware that members of the public had the right to record them.”

    Recordings of Police Officers Are Critical for Accountability

    As we wrote last year in our guide to recording police officers, “[r]ecordings of police officers, whether by witnesses to an incident with officers, individuals who are themselves interacting with officers, or by members of the press, are an invaluable tool in the fight for police accountability. Often, it’s the video alone that leads to disciplinary action, firing, or prosecution of an officer.” The murder of George Floyd by former Minneapolis police officer Derek Chauvin is a stark example. Chauvin’s criminal trial began this week, and the fact that he is being prosecuted at all is due in large part to the brave bystanders who recorded the scene. Notwithstanding the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has not been recognized by all federal jurisdictions. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right. We had hoped that the Tenth Circuit would join this list. Instead, the court stated, “because we ultimately determine that any First Amendment right that Mr. Frasier had to record the officers was not clearly established at the time he did so, we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.” This statement by the court is surprisingly dismissive given the important role courts play in upholding constitutional rights. 
    Even with the court’s holding that the police officers had qualified immunity against Frasier’s First Amendment claim, a declaration that the right to record the police does, in fact, exist within the Tenth Circuit would unequivocally have helped protect the millions of Americans who live within the court’s jurisdiction from police misconduct. But the Tenth Circuit refused to make one, leaving this critical question to another case and another appellate panel.

    All is Not Lost in Colorado

    Although the Tenth Circuit refused to recognize that the right to record the police exists as a matter of constitutional law throughout its jurisdiction, it is comforting that the Colorado Legislature passed two statutes in the wake of the Frasier case. The first law created a statutory right for civilians to record police officers (Colo. Rev. Stat. § 16-3-311). The second created a civil cause of action against police officers who interfere with an individual’s lawful attempt to record an incident involving a police officer, or who destroy, damage, or seize a recording or recording device (Colo. Rev. Stat. § 13-21-128). Additionally, the Denver Police Department revised its operations manual to prohibit punching a suspect to get drugs out of his mouth (Sec. 116.06(3)(b)), and to explicitly state that civilians have a right to record the police and that officers may not infringe on this right (Sec. 107.04(3)).

  • EFF to Court: Don’t Let Pseudo-IP Thwart Speech, Innovation, and Competition
    by Corynne McSherry on March 31, 2021 at 11:46 pm

    The threats to online expression and innovation keep coming. One that’s flown under the radar is a misguided effort to convince the Third Circuit Court of Appeals to allow claims based on the “right of publicity” (i.e., the right to control the commercial exploitation of your persona), because some people think of this right as a form of “intellectual property.” State law claims are normally barred under Section 230, a law that has enabled decades of innovation and online expression. But Section 230 doesn’t apply to “intellectual property” claims, so if publicity rights are intellectual property (“IP”), the theory goes, intermediaries can be sued for any user content that might evoke a person. That interpretation of Section 230 would effectively eviscerate its protections altogether. Good news: it’s wrong. Bad news: the court might not see that, which is why EFF, along with a group of small tech companies and advocates, filed an amicus brief to help explain the law and the stakes of the case for the internet. The facts here are pretty ugly. The plaintiff, Hepp, is a reporter who discovered that an image of her caught on a surveillance camera was being used in ads and shared on social media without her permission. She’s suing Facebook, Reddit, Imgur, and a porn site for violating her publicity rights. The district court dismissed the case on Section 230 grounds, following strong precedent from the Ninth Circuit holding that the IP carveout doesn’t include state law publicity claims. Hepp appealed. As we explain in our brief, the court should start by looking at the text of Section 230 itself. Generally, if the wording of a law makes sense to a general reader, a court will keep things simple and assume the straightforward meaning. But if the words are unclear or have multiple meanings, the court has to dig deeper. In this case, the meaning of the term at issue, “intellectual property,” varies widely depending on context. 
    The term didn’t even come into common use until the latter half of the 20th century, but it’s now used loosely to refer to everything from trade secrets to unfair competition. Given that ambiguity, the court should look beyond the text of the law and consider Congress’s intent. Within the context of Section 230, construing the term to include publicity rights is simply nonsensical. Congress passed Section 230 so that new sites and services could grow and thrive without fear that a failure to perfectly manage content that might run afoul of 50 different state laws would lead to crippling liability. Thanks to Section 230, we have an internet that enables new forms of collaboration and cultural production; allows ordinary people to stay informed, organize, and build communities in new and unexpected ways; and, especially in a pandemic, helps millions learn, work, and serve others. And new platforms and services emerge every day because they can afford to direct their budgets toward innovation, rather than legal fees. Excluding publicity rights claims from the immunity afforded by Section 230 would put all of that in jeopardy. In California, publicity rights protections apply to virtually anything that evokes a person, and endure for 70 years after that person’s death. In Virginia, a publicity rights violation can result in criminal penalties. Alaska doesn’t recognize a right of publicity at all. Faced with a panoply of standards, email providers, social media platforms, and any site that supports user-generated content will be forced to tailor their sites and procedures to ensure compliance with the most expansive state law, or risk liability and potentially devastating litigation costs. For all their many flaws, copyright and patent laws are relatively clear, relatively knowable, and embody a longstanding balance between rightsholders, future creators and inventors, and the public at large. Publicity rights are none of these things. 
    Instead, whatever we call them, they look a lot more like other torts, like privacy violations, that are included within Section 230’s traditional scope. Ms. Hepp has good reason to be angry, and we didn’t file our amicus brief because we are concerned about the effects of an adverse ruling on Facebook in particular, which can doubtless afford any liability it might incur. The problem is everyone else: the smaller entities that cannot afford that risk, or even the costs of defending a lawsuit; and the users who rely on intermediaries to communicate with family, friends, and the world, and who will be unable to share content that might include an image, likeness, or phrase associated with a person should those intermediaries be saddled with defending against state publicity claims based on their users’ speech. What is worse, such a ruling would help entrench the current dominant players. Section 230 led to the emergence of all kinds of new products and forums but, equally importantly, it has also kept the door open for competitors to follow. Today, social media is dominated by Twitter, Facebook, and YouTube, but dissatisfied users can turn to Discord, Parler, Clubhouse, TikTok, and Rumble. Dissatisfied Gmail users can turn to Proton, Yahoo!, Riseup, and many others. None of these entities, entrenched or emergent, would exist without Section 230. Hepp’s theory raises that barrier to entry back up, particularly given that intermediaries would face potential liability not only for images and video, but for mere text as well. To mitigate that liability risk, any company that relies on ads will be forced to try to screen potentially unlawful content, rewrite its terms of service, and/or require consent forms for any use of anything that might evoke a persona. But even strict terms of service, consent forms, and content filters would not suffice: many services would be swamped by meritless claims or shaken down for nuisance settlements. 
Tech giants like Facebook and Google might survive this flood of litigation, but nonprofit platforms and startups – like the next competitors to Facebook and Google – would not. And investors who would rather support innovation than lawyers, filtering technologies, and content moderators, will choose not to fund emerging alternative services at all.

  • Schools Can’t Punish Students for Off-Campus Speech, Including Social Media Posts, EFF Tells Supreme Court
    by Karen Gullo on March 31, 2021 at 6:33 pm

    Online Comments Made Outside School Are Fully Protected by the First Amendment

    Washington, D.C.—The Electronic Frontier Foundation (EFF) urged the Supreme Court to rule that when students post on social media or speak out online while off campus, they are protected from punishment by school officials under the First Amendment—an important free speech principle amid unprecedented, troubling monitoring and surveillance of students’ online activities outside the classroom. EFF, joined by the Brennan Center for Justice and the Pennsylvania Center for the First Amendment, said in a brief filed today that a rule the Supreme Court established in the 1960s allowing schools to punish students for what they say on campus in some limited circumstances should not be expanded to let schools regulate what students say in their private lives outside of school, including on social media. “Like all Americans, students have free speech protections from government censorship and policing,” said EFF Stanton Fellow Naomi Gilens. “In the 1969 case Tinker v. Des Moines, the Supreme Court carved out a narrow exception to this rule, allowing schools to regulate some kinds of speech on campus only in limited circumstances, given the unique characteristics of the school environment. Interpreting that narrow exception to let schools punish students for speech uttered outside of school would dramatically expand schools’ power to police students’ private lives.” In B.L. v. Mahanoy Area School District, the case before the court, a high school student who failed to make the varsity cheerleading squad posted a Snapchat selfie with text that said, among other things, “fuck cheer.” She shared the post over the weekend and outside school grounds—but one of her Snapchat connections took a screen shot and shared it with the cheerleading coaches, who suspended B.L. from the J.V. squad. The student and her family sued the school. In a victory for free speech, the U.S. 
    Court of Appeals for the Third Circuit issued a historic decision in the case, holding that the school’s limited power to punish students for disruptive speech doesn’t apply to off-campus speech, even if that speech is shared on social media and finds its way into school via other students’ smartphones or devices. EFF also explained that protecting students’ off-campus speech, including on social media, is critical given the central role that the internet and social media play in young people’s lives today. Not only do students use social media to vent their daily frustrations, as the student in this case did, but students also use social media to engage in politics and advocacy, from promoting candidates during the 2020 election to advocating for action on climate change and gun violence. Expanding schools’ ability to punish students would chill students from engaging online with issues they care about—an outcome that is antithetical to the values underlying the First Amendment. “The Supreme Court should uphold the Third Circuit ruling and guarantee that schools can’t chill children and young people from speaking out in their private lives, whether at a protest, in an op-ed, in a private conversation, or online, including on social media,” said Gilens. Contact: Naomi Gilens, Frank Stanton Fellow, [email protected]
