EFF’s Deeplinks Blog: Noteworthy news from around the internet

  • EFF Asks Appeals Court to Rule DMCA Anti-Circumvention Provisions Violate First Amendment
    by Karen Gullo on January 13, 2022 at 8:55 pm

Lawsuit Filed on Behalf of Computer Scientist and Security Researcher Seeks to Bar Enforcement of Section 1201 Provisions

Washington, D.C.—The Electronic Frontier Foundation (EFF) asked a federal appeals court to block enforcement of onerous copyright rules that violate the First Amendment and criminalize certain speech about technology, preventing researchers, tech innovators, filmmakers, educators, and others from creating and sharing their work.

EFF, with co-counsel Wilson Sonsini Goodrich & Rosati, asked the U.S. Court of Appeals for the District of Columbia yesterday to reverse a district court decision in Green v. DOJ, a lawsuit we filed in 2016 challenging the anti-circumvention and anti-trafficking provisions of the Digital Millennium Copyright Act (DMCA) on behalf of security researcher Matt Green and technologist Andrew “bunnie” Huang. Both are pursuing projects highly beneficial to the public and perfectly lawful except for the DMCA’s anti-speech provisions.

These provisions—contained in Section 1201 of the DMCA—make it unlawful for people to get around the software that restricts access to lawfully purchased copyrighted material, such as films, songs, and the computer code that controls vehicles, devices, and appliances. This ban applies even where people want to make noninfringing fair uses of the materials they are accessing. The only way to challenge the ban is to go through an arduous, cumbersome process, held every three years, to petition the Library of Congress for an exemption.

While enacted to combat music and movie piracy, Section 1201 has long served to restrict people’s ability to access, use, and even speak out about copyrighted materials—including the software that is increasingly embedded in everyday things. 
Our rights to tinker with or repair the devices we own are under threat from the law, which makes it a crime to create or share tools that could, for example, allow people to convert their videos so they can play on multiple platforms, or conduct independent security research to find dangerous flaws in vehicles or medical devices.

Green, a computer security researcher at Johns Hopkins University, works to make Apple messaging and financial transaction systems more secure by uncovering software vulnerabilities, an endeavor that requires finding and exploiting weaknesses in code. Green seeks to publish a book about his work but fears that it could invite criminal charges under Section 1201.

Meanwhile Huang, a prominent computer scientist and inventor, and his company Alphamax LLC, are developing devices for editing digital video streams that would enable people to make innovative uses of their paid video content, such as captioning a presidential debate with a running Twitter comment field or enabling remixes of high-definition video. But using or offering this technology could also run afoul of Section 1201.

Ruling on the government’s motion to dismiss the lawsuit, a federal judge said Green and Huang could proceed with claims that Section 1201 violated their First Amendment rights to pursue their projects, but dismissed the claim that the section was itself unconstitutional. The court also refused to issue an injunction preventing the government from enforcing Section 1201.

“Section 1201 makes it a federal crime for our clients, and others like them, to exercise their right to free expression by engaging in research, creating software, and publishing their work,” said EFF Senior Staff Attorney Kit Walsh. “This creates a censorship regime under the guise of copyright law that cannot be squared with the First Amendment.”

Contact:
Corynne McSherry, Legal Director, [email protected]
Kit Walsh, Senior Staff Attorney, [email protected]

  • EFF Threat Lab’s “apkeep” APK Downloader, Now More Capable and Available in More Places
    by Bill Budington on January 13, 2022 at 8:21 pm

In September, we introduced EFF Threat Lab’s very own APK Downloader, apkeep. It is a tool that makes our job of tracking state-sponsored malware and combatting the stalkerware of abusive partners easier. Since that time, we’ve added some additional functionality that we’d like to share.

F-Droid

In addition to the ability to download Android packages from the Google Play Store and APKPure, we’ve added support for downloading from the free and open source app repository F-Droid. Packages downloaded from F-Droid are checked against the repository maintainers’ signing key, just like in the F-Droid app itself. The package index is also cached, which makes it easy to run multiple subsequent requests for downloads.

Versioning

You can now download specific versions of apps from either the apk-pure app store, which mirrors the Google Play Store, or from f-droid. To try it, issue the following command to see which versions are available:

apkeep -l -a -d apk-pure

Once you’ve picked a desired version, download it with this command:

apkeep -a [email protected] -d apk-pure .

Keep in mind not all versions will be retained by these download sources, so only recent versions may be available.

Additional Platform Support

On initial launch, we supported only 6 platforms:

GNU/Linux x86_64, i686, aarch64, and armv7
Android aarch64 and armv7

We have been quickly building our platform support to bring the current tally to 9:

GNU/Linux x86_64, i686, aarch64, and armv7
Android x86_64, i686, aarch64, and armv7
Windows x86_64

and we plan to continue to build out to more platforms in the future.

Termux Repositories

The Android terminal application Termux now makes it easy to install apkeep. 
We have added our package to their repository, so Termux users now only need to issue a simple command to install the latest version:

pkg install apkeep

Future Plans

In addition to continuing to build out to additional platforms, we would also like to add more Android markets to download from, such as the Amazon Appstore. Have any suggestions for features or new platforms you’d like to see supported? Let us know by opening an issue on our GitHub page!

Special Thanks

We would like to thank the F-Droid and Termux communities for their assistance in this build-out, and thank our users for their feedback and support.
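As a small sketch of the version-pinning workflow described above (the helper function, the org.example.app package id, and the version numbers are placeholders of ours, not part of apkeep):

```shell
# Hypothetical wrapper around the apkeep commands shown above.
# Builds the download command for a pinned app version from a given source;
# it prints the command rather than running it, so apkeep need not be installed.
build_apkeep_cmd() {
  local app="$1" version="$2" source="${3:-apk-pure}" outdir="${4:-.}"
  printf 'apkeep -a %s@%s -d %s %s\n' "$app" "$version" "$source" "$outdir"
}

# Example: the command that would fetch a placeholder package at a
# pinned version from apk-pure into the current directory.
build_apkeep_cmd "org.example.app" "1.0.0"
```

In a real script you would pipe the printed command to `sh` (or call apkeep directly) once you have confirmed the version is still retained by the download source.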

  • San Francisco Police Illegally Used Surveillance Cameras at the George Floyd Protests. The Courts Must Stop Them
    by Nathan Sheard on January 13, 2022 at 6:51 pm

Update: This post has been updated to reflect that the hearing date in this case has been moved to January 21.

By Hope Williams, Nathan Sheard, and Nestor Reyes

The authors are community activists who helped organize and participated in protests against police violence in San Francisco after the murder of George Floyd. A hearing in their lawsuit against the San Francisco Police Department over surveillance of Union Square protests is scheduled for Friday. This article was first published in the San Francisco Standard.

A year and a half ago, the San Francisco Police Department illegally spied on us and thousands of other Bay Area residents as we marched against racist police violence and the murder of George Floyd. Aided by the Electronic Frontier Foundation (EFF) and the ACLU of Northern California, we have taken the SFPD to court. Our lawsuit defends our right to organize protests against police violence without fear of illegal police surveillance.

After the police murdered George Floyd, we coordinated mass actions and legal support and spent our days leading the community in chants, marches, and protests demanding an end to policing systems that stalk and kill Black and Brown people with impunity. Our voice is more important than ever as the mayor and Chris Larsen, the billionaire tech executive funding camera networks across San Francisco, push a false narrative about our lawsuit and the law that the SFPD violated.

In 2019, the city passed a landmark ordinance that bans the SFPD and other city agencies from using facial recognition and requires them to get approval from the Board of Supervisors for other surveillance technologies. This transparent process sets up guardrails, allows for public input, and empowers communities to say “no” to more police surveillance on our streets.

But the police refuse to play by the rules. 
EFF uncovered documents showing that the SFPD violated the 2019 law and illegally tapped into a network of more than 300 video cameras in the Union Square area to surveil us and our fellow protesters. Additional documents and testimony in our case revealed that an SFPD officer repeatedly viewed the live camera feed, which directly contradicts the SFPD’s prior statements to the public and the city’s Board of Supervisors that “the feed was not monitored.” Larsen has also backpedaled. Referencing the network, he previously claimed that “the police can’t monitor it live.” Now, Larsen is advocating for live surveillance and criticizing us for defending our right under city law to be free from unfettered police spying. He even suggests that we are to blame for recent high-profile retail thefts at San Francisco’s luxury stores.

As Black and Latinx activists, we are outraged—but not surprised—by rich and powerful people supporting illegal police surveillance. They are not the ones targeted by the police and won’t pay the price if the city rolls back hard-won civil rights protections.

Secret surveillance will not protect the public. What will actually make us safer is to shift funding away from the police and toward housing, healthcare, violence interruption programs, and other services necessary for racial justice in the Bay Area. Strong and well-resourced communities are far more likely to be safe than they would be with ever-increasing surveillance. As members of communities that are already overpoliced and underserved, we know that surveillance is a trigger that sets our most violent and unjust systems in motion. Before the police kill a Black person, deport an immigrant, or imprison a young adult for a crime driven by poverty, chances are the police surveilled them first. That is why we support democratic control over police spying and oppose the surveillance infrastructure that Larsen is building in our communities. 
We joined organizations like the Harvey Milk LGBTQ Democratic Club in a successful campaign against Larsen’s plan to fund more than 125 cameras in San Francisco’s Castro neighborhood. And we made the decision to join forces with the EFF and the ACLU to defend our rights in court after we found out the SFPD spied on us and our movement. On January 21, we will be in court to put a stop to the SFPD’s illegal spying and evasion of democratic oversight. We won’t let the police or their rich and powerful supporters intimidate activists into silence or undermine our social movements.

Related Cases: Williams v. San Francisco

  • Nearly 130 Public Interest Organizations and Experts Urge the United Nations to Include Human Rights Safeguards in Proposed UN Cybercrime Treaty
    by Katitza Rodriguez on January 13, 2022 at 4:35 pm

(UPDATE: Due to the ongoing situation concerning the coronavirus disease (COVID-19), the Ad Hoc Committee won’t hold its first session from 17 to 28 January 2022 in New York, as planned. Further information will be provided in due course.)

EFF and Human Rights Watch, along with nearly 130 organizations and academics working in 56 countries, regions, or globally, urged members of the Ad Hoc Committee responsible for drafting a potential United Nations Cybercrime Treaty to ensure human rights protections are embedded in the final product. The first session of the Ad Hoc Committee is scheduled to begin on January 17th.

The proposed treaty will likely deal with cybercrime, international cooperation, and access to potential digital evidence by law enforcement authorities, as well as human rights and procedural safeguards. UN member states have already written opinions discussing the scope of the treaty, and their proposals vary widely. In a letter to the committee chair, EFF and Human Rights Watch, along with partners across the world, asked that members include human rights considerations at every step in the drafting process. We also recommended that cross-border investigative powers include strong human rights safeguards, and that global civil society be provided opportunities to participate robustly in the development and drafting of any potential convention.

Failing to prioritize human rights and procedural safeguards in criminal investigations can have dire consequences. As many countries have already abused their existing cybercrime laws to undermine human rights and freedoms and punish peaceful dissent, we have grave concerns that this Convention might become a powerful weapon for oppression. We also worry that cross-border investigative powers without strong human rights safeguards will sweep away progress on protecting people’s privacy rights, creating a race to the bottom among jurisdictions with the weakest human rights protections. 
We hope the Member States participating in the development and drafting of the treaty will recognize the urgency of the risks we mention, commit to include civil society in their upcoming discussions, and take our recommendations to heart. Drafting of the letter was spearheaded by EFF, Human Rights Watch, Access Now, ARTICLE 19, Association for Progressive Communications, CIPPIC, European Digital Rights, Privacy International, Derechos Digitales, Data Privacy Brazil Research Association, European Center For Not-For-Profit Law, IT-Pol – Denmark, SafeNet South East Asia, Fundación Karisma, Red en Defensa de los Derechos Digitales, and OpenNet Korea, among many others. The letter is available in English and Spanish, and will be available in other UN languages in due course.

The full text of the letter and list of signatories are below:

December 22, 2021

H.E. Ms. Faouzia Boumaiza Mebarki
Chairperson
Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes

Your Excellency,

We, the undersigned organizations and academics, work to protect and advance human rights, online and offline. Efforts to address cybercrime are of concern to us, both because cybercrime poses a threat to human rights and livelihoods, and because cybercrime laws, policies, and initiatives are currently being used to undermine people’s rights. We therefore ask that the process through which the Ad Hoc Committee does its work include robust civil society participation throughout all stages of the development and drafting of a convention, and that any proposed convention include human rights safeguards applicable to both its substantive and procedural provisions. 
Background

The proposal to elaborate a comprehensive “international convention on countering the use of information and communications technologies for criminal purposes” is being put forward at the same time that UN human rights mechanisms are raising alarms about the abuse of cybercrime laws around the world. In his 2019 report, the UN special rapporteur on the rights to freedom of peaceful assembly and of association, Clément Nyaletsossi Voule, observed, “A surge in legislation and policies aimed at combating cybercrime has also opened the door to punishing and surveilling activists and protesters in many countries around the world.” In 2019, and once again this year, the UN General Assembly expressed grave concerns that cybercrime legislation is being misused to target human rights defenders or hinder their work and endanger their safety in a manner contrary to international law. This follows years of reporting from non-governmental organizations on the human rights abuses stemming from overbroad cybercrime laws.

When the convention was first proposed, over 40 leading digital rights and human rights organizations and experts, including many signatories of this letter, urged delegations to vote against the resolution, warning that the proposed convention poses a threat to human rights. In advance of the first session of the Ad Hoc Committee, we reiterate these concerns. If a UN convention on cybercrime is to proceed, the goal should be to combat the use of information and communications technologies for criminal purposes without endangering the fundamental rights of those it seeks to protect, so people can freely enjoy and exercise their rights, online and offline. Any proposed convention should incorporate clear and robust human rights safeguards. A convention without such safeguards, or one that dilutes States’ human rights obligations, would place individuals at risk, make our digital presence even more insecure, and threaten fundamental human rights. 
As the Ad Hoc Committee commences its work drafting the convention in the coming months, it is vitally important to apply a human rights-based approach to ensure that the proposed text is not used as a tool to stifle freedom of expression, infringe on privacy and data protection, or endanger individuals and communities at risk. The important work of combating cybercrime should be consistent with States’ human rights obligations set forth in the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and other international human rights instruments and standards. In other words, efforts to combat cybercrime should also protect, not undermine, human rights. We remind States that the same rights that individuals have offline should also be protected online.

Scope of Substantive Criminal Provisions

There is no consensus on how to tackle cybercrime at the global level, nor a common understanding or definition of what constitutes cybercrime. From a human rights perspective, it is essential to keep the scope of any convention on cybercrime narrow. Just because a crime might involve technology does not mean it needs to be included in the proposed convention. For example, expansive cybercrime laws often simply add penalties due to the use of a computer or device in the commission of an existing offense. The laws are especially problematic when they include content-related crimes. Vaguely worded cybercrime laws purporting to combat misinformation and online support for or glorification of terrorism and extremism can be misused to imprison bloggers or block entire platforms in a given country. As such, they fail to comply with international freedom of expression standards. Such laws put journalists, activists, researchers, LGBTQ communities, and dissenters in danger, and can have a chilling effect on society more broadly. Even laws that focus more narrowly on cyber-enabled crimes are used to undermine rights. 
Laws criminalizing unauthorized access to computer networks or systems have been used to target digital security researchers, whistleblowers, activists, and journalists. Too often, security researchers, who help keep everyone safe, are caught up in vague cybercrime laws and face criminal charges for identifying flaws in security systems. Some States have also interpreted unauthorized access laws so broadly as to effectively criminalize any and all whistleblowing; under these interpretations, any disclosure of information in violation of a corporate or government policy could be treated as “cybercrime.” Any potential convention should explicitly include a malicious intent standard, should not transform corporate or government computer use policies into criminal liability, should provide a clearly articulated and expansive public interest defense, and should include clear provisions that allow security researchers to do their work without fear of prosecution.

Human Rights and Procedural Safeguards

Our private and personal information, once locked in a desk drawer, now resides on our digital devices and in the cloud. Police around the world are using an increasingly intrusive set of investigative tools to access digital evidence. Frequently, their investigations cross borders without proper safeguards and bypass the protections in mutual legal assistance treaties. In many contexts, no judicial oversight is involved, and the role of independent data protection regulators is undermined. National laws, including cybercrime legislation, are often inadequate to protect against disproportionate or unnecessary surveillance. Any potential convention should detail robust procedural and human rights safeguards that govern criminal investigations pursued under such a convention. 
It should ensure that any interference with the right to privacy complies with the principles of legality, necessity, and proportionality, including by requiring independent judicial authorization of surveillance measures. It should also not forbid States from adopting additional safeguards that limit law enforcement uses of personal data, as such a prohibition would undermine privacy and data protection. Any potential convention should also reaffirm the need for States to adopt and enforce “strong, robust and comprehensive privacy legislation, including on data privacy, that complies with international human rights law in terms of safeguards, oversight and remedies to effectively protect the right to privacy.”

There is a real risk that, in an attempt to entice all States to sign a proposed UN cybercrime convention, bad human rights practices will be accommodated, resulting in a race to the bottom. Therefore, it is essential that any potential convention explicitly reinforce procedural safeguards to protect human rights and resist shortcuts around mutual assistance agreements.

Meaningful Participation

Going forward, we ask the Ad Hoc Committee to actively include civil society organizations in consultations—including those dealing with digital security and groups assisting vulnerable communities and individuals—which did not happen when this process began in 2019 or in the time since. Accordingly, we request that the Committee:

Accredit interested technological and academic experts and nongovernmental groups, including those with relevant expertise in human rights but that do not have consultative status with the Economic and Social Council of the UN, in a timely and transparent manner, and allow participating groups to register multiple representatives to accommodate remote participation across different time zones. 
Ensure that modalities for participation recognize the diversity of non-governmental stakeholders, giving each stakeholder group adequate speaking time, since civil society, the private sector, and academia can have divergent views and interests.

Ensure effective participation by accredited participants, including timely access to documents, interpretation services, the opportunity to speak at the Committee’s sessions (in person and remotely), and the ability to submit written opinions and recommendations.

Maintain an up-to-date, dedicated webpage with relevant information, such as practical information (details on accreditation, time/location, and remote participation), organizational documents (e.g., agendas and discussion documents), statements and other interventions by States and other stakeholders, background documents, working documents and draft outputs, and meeting reports.

Countering cybercrime should not come at the expense of the fundamental rights and dignity of those whose lives this proposed Convention will touch. States should ensure that any proposed cybercrime convention is in line with their human rights obligations, and they should oppose any proposed convention that is inconsistent with those obligations. We would be highly appreciative if you could kindly circulate the present letter to the Ad Hoc Committee Members and publish it on the website of the Ad Hoc Committee. 
Signatories,* Access Now – International Alternative ASEAN Network on Burma (ALTSEAN) – Burma Alternatives – Canada Alternative Informatics Association – Turkey AqualtuneLab – Brazil ArmSec Foundation – Armenia ARTICLE 19 – International Asociación por los Derechos Civiles (ADC) – Argentina Asociación Trinidad / Radio Viva – Trinidad Asociatia Pentru Tehnologie si Internet (ApTI) – Romania Association for Progressive Communications (APC) – International Associação Mundial de Rádios Comunitárias (Amarc Brasil) – Brazil ASEAN Parliamentarians for Human Rights (APHR)  – Southeast Asia Bangladesh NGOs Network for Radio and Communication (BNNRC) – Bangladesh BlueLink Information Network  – Bulgaria Brazilian Institute of Public Law – Brazil Cambodian Center for Human Rights (CCHR)  – Cambodia Cambodian Institute for Democracy  –  Cambodia Cambodia Journalists Alliance Association  –  Cambodia Casa de Cultura Digital de Porto Alegre – Brazil Centre for Democracy and Rule of Law – Ukraine Centre for Free Expression – Canada Centre for Multilateral Affairs – Uganda Center for Democracy & Technology – United States Civil Society Europe Coalition Direitos na Rede – Brazil Collaboration on International ICT Policy for East and Southern Africa (CIPESA) – Africa CyberHUB-AM – Armenia Data Privacy Brazil Research Association – Brazil Dataskydd – Sweden Derechos Digitales – Latin America Defending Rights & Dissent – United States Digital Citizens – Romania DigitalReach – Southeast Asia Digital Security Lab – Ukraine Državljan D / Citizen D – Slovenia Electronic Frontier Foundation (EFF) – International Electronic Privacy Information Center (EPIC) – United States Elektronisk Forpost Norge – Norway for digital rights – Austria European Center For Not-For-Profit Law (ECNL) Stichting – Europe European Civic Forum – Europe European Digital Rights (EDRi) – Europe ​​eQuality Project – Canada Fantsuam Foundation – Nigeria Free Speech Coalition  – United States Foundation for Media 
Alternatives (FMA) – Philippines Fundación Acceso – Central America Fundación Ciudadanía y Desarrollo de Ecuador Fundación CONSTRUIR – Bolivia Fundación Karisma – Colombia Fundación OpenlabEC – Ecuador Fundamedios – Ecuador Garoa Hacker Clube  –  Brazil Global Partners Digital – United Kingdom GreenNet – United Kingdom GreatFire – China Hiperderecho – Peru Homo Digitalis – Greece Human Rights in China – China  Human Rights Defenders Network – Sierra Leone Human Rights Watch – International Igarapé Institute — Brazil IFEX – International Institute for Policy Research and Advocacy (ELSAM) – Indonesia The Influencer Platform – Ukraine INSM Network for Digital Rights – Iraq Internews Ukraine Instituto Beta: Internet & Democracia (IBIDEM) – Brazil Instituto Brasileiro de Defesa do Consumidor (IDEC) – Brazil Instituto Educadigital – Brazil Instituto Nupef – Brazil Instituto de Pesquisa em Direito e Tecnologia do Recife (IP.rec) – Brazil Instituto de Referência em Internet e Sociedade (IRIS) – Brazil Instituto Panameño de Derecho y Nuevas Tecnologías (IPANDETEC) – Panama Instituto para la Sociedad de la Información y la Cuarta Revolución Industrial – Peru International Commission of Jurists – International The International Federation for Human Rights (FIDH) IT-Pol – Denmark JCA-NET – Japan KICTANet – Kenya Korean Progressive Network Jinbonet – South Korea Laboratorio de Datos y Sociedad (Datysoc) – Uruguay  Laboratório de Políticas Públicas e Internet (LAPIN) – Brazil Latin American Network of Surveillance, Technology and Society Studies (LAVITS) Lawyers Hub Africa Legal Initiatives for Vietnam Ligue des droits de l’Homme (LDH) – France Masaar – Technology and Law Community – Egypt Manushya Foundation – Thailand  MINBYUN Lawyers for a Democratic Society – Korea Open Culture Foundation – Taiwan Open Media  – Canada Open Net Association – Korea OpenNet Africa – Uganda Panoptykon Foundation – Poland Paradigm Initiative – Nigeria Privacy International – International Radio 
Viva – Paraguay Red en Defensa de los Derechos Digitales (R3D) – Mexico Regional Center for Rights and Liberties  – Egypt Research ICT Africa  Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC) – Canada Share Foundation – Serbia Social Media Exchange (SMEX) – Lebanon, Arab Region SocialTIC – Mexico Southeast Asia Freedom of Expression Network (SAFEnet) – Southeast Asia Supporters for the Health and Rights of Workers in the Semiconductor Industry (SHARPS) – South Korea Surveillance Technology Oversight Project (STOP)  – United States Tecnología, Investigación y Comunidad (TEDIC) – Paraguay Thai Netizen Network  – Thailand Unwanted Witness – Uganda Vrijschrift – Netherlands  West African Human Rights Defenders Network – Togo World Movement for Democracy – International 7amleh – The Arab Center for the Advancement of Social Media  – Arab Region Individual Experts and Academics Jacqueline Abreu, University of São Paulo Chan-Mo Chung, Professor, Inha University School of Law Danilo Doneda, Brazilian Institute of Public Law David Kaye, Clinical Professor of Law, UC Irvine School of Law, former UN Special Rapporteur on Freedom of Opinion and Expression (2014-2020) Wolfgang Kleinwächter, Professor Emeritus, University of Aarhus; Member, Global Commission on the Stability of Cyberspace Douwe Korff, Emeritus Professor of International Law, London Metropolitan University Fabiano Menke, Federal University of Rio Grande do Sul Kyung-Sin Park, Professor, Korea University School of Law Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Public Policy at the University of Toronto Marietje Schaake, Stanford Cyber Policy Center Valerie Steeves, J.D., Ph.D., Full Professor, Department of Criminology University of Ottawa *List of signatories as of January 13, 2022

  • VICTORY: Google Releases “disable 2g” Feature for New Android Smartphones
    by Cooper Quintin on January 12, 2022 at 9:50 pm

Last year Google quietly pushed a new feature to its Android operating system allowing users to optionally disable 2G at the modem level in their phones. This is a fantastic feature that will provide some protection from cell-site simulators, an invasive police surveillance technology employed throughout the country. We applaud Google for implementing this much-needed feature. Now Apple needs to implement this feature as well, for the safety of their customers.

What is 2G and why is it vulnerable?

2G is the second generation of mobile communications, created in 1991. It’s an old technology from a time when standards bodies did not account for certain risk scenarios, such as rogue cell towers and the need for strong encryption. As years have gone by, many vulnerabilities have been discovered in 2G.

There are two main problems with 2G. First, it uses weak encryption between the tower and device that can be cracked in real time by an attacker to intercept calls or text messages. In fact, the attacker can do this passively without ever transmitting a single packet. The second problem with 2G is that there is no authentication of the tower to the phone, which means that anyone can seamlessly impersonate a real 2G tower, and a phone using the 2G protocol will never be the wiser.

Cell-site simulators sometimes work this way. They can exploit security flaws in 2G in order to intercept your communications. Even though many of the security flaws in 2G have been fixed in 4G, more advanced cell-site simulators can downgrade your connection to 2G, making your phone susceptible to the above attacks. This makes every user vulnerable—from journalists and activists to medical professionals, government officials, and even law enforcement.

What you can do to protect yourself now

If you have a newer Android phone (such as a Pixel, or a newer Samsung phone) you can disable 2G right now by going to Settings > Network & Internet > SIMs > Allow 2G and turning that setting off.  
2G_on.png: Here, by default, 2G is enabled.
2G_off.png: Now 2G is disabled.

If you have an older Android phone, these steps may or may not work. Unfortunately, due to limitations of old hardware, Google was only able to implement this feature on newer phones. If you have a newer Samsung phone, you may also be able to shut off 2G support the same way; unfortunately, this is not supported on all networks or all Samsung phones. For iPhone owners, Apple unfortunately does not support this feature, but you can tweet at them to demand it!

Take action: Tell Apple: Let us turn off 2G!

We are very pleased with the steps that Google has taken here to protect users from vulnerabilities in 2G, and though there is a lot more work to be done, this will ensure that many people can finally receive a basic level of protection. We strongly encourage Google, Apple, and Samsung to invest more resources into radio security so they can better protect smartphone owners.
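As a convenience sketch (ours, not an EFF-documented procedure): if your phone is connected to a computer with adb, you can jump straight to the mobile-network settings screen where the toggle lives on supported phones, using Android's public `android.settings.NETWORK_OPERATOR_SETTINGS` intent action. The helper below just prints the command, so it can be run later with a device attached; the exact menu layout you land in varies by vendor and Android version.

```shell
# Print the adb command that opens Android's mobile-network settings
# screen, where the "Allow 2G" toggle appears on supported phones.
# Printing (rather than executing) means no device is required here.
network_settings_cmd() {
  printf 'adb shell am start -a %s\n' 'android.settings.NETWORK_OPERATOR_SETTINGS'
}

network_settings_cmd
```

Run the printed command with your phone plugged in and USB debugging enabled, then turn the Allow 2G setting off as described above.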

  • Livestreamed Hearing Moved to Jan. 21: EFF Will Ask Court to Issue Judgment Against SFPD for Illegally Spying on Protesters Marching in Support of Black Lives
    by Karen Gullo on January 12, 2022 at 6:29 pm

    San Francisco Police Violated City Law in Using Private Camera Network

Update: The hearing has been moved to January 21.

San Francisco—On Friday, Jan. 21, at 9:30 am, the Electronic Frontier Foundation (EFF) and the ACLU of Northern California will ask a California state court to find that the San Francisco Police Department (SFPD) violated city law when it used a network of non-city surveillance cameras to spy on Black-led protests in 2020 against police violence in the wake of George Floyd’s murder.

EFF and the ACLU of Northern California sued the City and County of San Francisco in October 2020 on behalf of three activists of color for violating the city’s landmark Surveillance Technology Ordinance, which prohibits city departments from using surveillance technology without first putting it before the Board of Supervisors, which would have to pass an ordinance allowing it. The SFPD flouted a law meant to bring democratic control over government access to privacy-intrusive camera networks that can be used, as they were here, to spy on people exercising their First Amendment right to protest.

EFF uncovered evidence showing the SFPD broke the law when it obtained and used a business district’s network of more than 300 video surveillance cameras to conduct remote, live surveillance of Black-led protests for eight days in May and June 2020 without supervisors’ approval. Demonstrators marched through San Francisco’s Union Square business and shopping district to protest Floyd’s murder and racist police violence. At a hearing scheduled for Jan. 21 that will be livestreamed for public viewing, EFF Staff Attorney Saira Hussain will tell the court that the evidence supports a judgment, without trial, against the SFPD and in favor of plaintiffs Hope Williams, Nathan Sheard, and Nestor Reyes.
They are Black and Latinx activists who participated in and organized numerous protests that crisscrossed San Francisco in 2020.

The SFPD initially denied its officers viewed the camera feed during the eight days that it had access to the camera network. EFF and the ACLU of Northern California obtained documents and deposition testimony showing at least one officer viewed the feed repeatedly over that time.

SFPD’s unlawful actions have made plaintiffs fearful of attending future protests, and will make it harder for them to recruit people for future demonstrations, EFF and the ACLU of Northern California wrote in a brief for the case Williams v. San Francisco.

Who: EFF Staff Attorney Saira Hussain
What: Oral arguments on motion for summary judgment in Williams v. San Francisco
When: Friday, Jan. 21, 2022, at 9:30 am PT
Where: San Francisco Superior Court (livestreamed)

Contact: Karen Gullo, Analyst, Senior Media Relations, [email protected]; Saira Hussain, Staff Attorney, [email protected]

  • Court Orders Authorizing Law Enforcement To Track People’s Air Travels In Real Time Must Be Made Public
    by Aaron Mackey on January 11, 2022 at 10:54 pm

    The public should get to see whether a court that authorized the FBI to track someone’s air travels in real time for six months also analyzed whether the surveillance implicated the Fourth Amendment, EFF argued in a brief filed this week.

In Forbes Media LLC v. United States, the news organization and its reporter are trying to make public a court order and related records concerning an FBI request to use the All Writs Act to compel a travel data broker to disclose people’s movements. Forbes reported on the FBI’s use of the All Writs Act to force the company, Sabre, to disclose a suspect’s travel data in real time after one of the agency’s requests was unsealed.

The All Writs Act is not a surveillance statute, though authorities frequently seek to use it in their investigations. Perhaps most famously, the FBI in 2016 sought an order under the statute to require Apple to decrypt an iPhone by writing custom software for the phone.

But when Forbes sought to unseal court records related to the FBI’s request to obtain data from Sabre, two separate judges ruled that the materials must remain secret. Forbes appealed to the U.S. Court of Appeals for the Ninth Circuit, arguing that the public has a presumptive right to access the court records under both the First Amendment and common law. EFF, along with the ACLU, the ACLU of Northern California, and Riana Pfefferkorn, filed a friend-of-the-court brief in support of Forbes’ effort to unseal the records.

EFF’s brief argues the public has the right to see the court decisions and any related legal arguments made by the federal government in support of its requests because court decisions have historically been public under our transparent, democratic traditions. But the public has a particular interest in these orders sought against Sabre for several reasons.

First, the disclosure of six months’ worth of travel data implicates the Fourth Amendment’s privacy protections, just as the U.S.
Supreme Court recently recognized in Carpenter v. United States. “Just like in that case, air travel data creates ‘a detailed chronicle of a person’s physical presence’ that goes well beyond knowing a person’s location at a particular time,” the brief argues. The public has a legitimate interest in seeing the court’s ruling to learn whether it grappled with the Fourth Amendment questions raised by the FBI’s request.

Second, because federal law enforcement often requests secrecy regarding its requests under the All Writs Act, the public has very little understanding of the legal limits on when it can use the statute to require third parties to disclose private data about people’s movements. The brief argues: “This ongoing secrecy violates the public’s right of access to judicial records and, critically, it also frustrates public and congressional oversight of law enforcement surveillance, including whether the Executive Branch is evading legislative limits on its surveillance authority.”

Third, the broad law enforcement effort to seal its requests under the All Writs Act and surveillance statutes frustrates the public’s ability to know what authorities are doing and whether they are violating people’s privacy rights. From the brief: “This results in the public lacking even basic details about how frequently law enforcement requests orders under the AWA or other statutes such as the SCA [Stored Communications Act] and PRA [Pen Register Act]. This is problematic because, without public access to dockets and orders reflecting authorities’ surveillance activities, there are almost no opportunities for public oversight or intervention by Congress.”

Fourth, because Sabre collects data about the public’s travels without most people’s knowledge or consent, public disclosure is crucial so that people can understand whether the company is protecting their privacy.
The brief argues: “Disclosure of the judicial records at issue here is thus crucial because the public has no way to avoid Sabre’s collection of their location data and has almost no information about when and how Sabre discloses their data. Court records reflecting law enforcement demands for people’s data are thus likely to be the only records of when and how Sabre responds to law enforcement requests.”

  • Standing Up For Privacy In New York State
    by Hayley Tsukayama on January 11, 2022 at 7:38 pm

    New York’s legislature is open for business in the new year, and we’re jumping in to renew our support for two crucial bills that protect New Yorkers’ privacy rights. While very different, both pieces of legislation would uphold a principle we hold dear: people should not worry that their everyday activities will fuel unnecessary surveillance.

The first piece of legislation is A. 7326/S. 6541—New York bills must have identical versions in each house to pass—which protects the confidentiality of medical immunity information. It does this in several key ways, including: limiting the collection, use, and sharing of immunity information; expressly prohibiting such information from being shared with immigration or child services agencies; and requiring that those asking for immunity information also accept an analog credential, such as a paper record.

As New Yorkers present information about their immunity—vaccination records, for example, or test results—to get in the door at restaurants or gyms, they shouldn’t have to worry that that information will end up in places they never expected. They shouldn’t have to worry that a company working with the government on an app to present these records will keep them to track their movements. And they should not have to worry that this information will be collected for other purposes by companies or government agencies. Assuring people that their information will not be used in unauthorized ways increases much-needed trust in public health efforts.

The second piece of legislation, A. 84/S. 296, also aims to stop unnecessary intrusion on people’s everyday lives. This legislation would stop law enforcement from conducting a particularly troubling type of dragnet surveillance on New Yorkers, by stopping “reverse location” warrants. Such warrants—sometimes also called “geofence” warrants—allow law enforcement agencies to conduct fishing expeditions and access data about dozens, or even hundreds, of devices at once.
Government use of this surveillance tactic is incredibly dangerous to our freedoms, and has been used to disproportionately target marginalized communities. Unfortunately, courts have rubber-stamped these warrant requests without questioning their broad scope. This has shown that requiring warrants alone is not enough to protect our privacy; legislatures must act to stop these practices.

Location data is highly sensitive, and can reveal information not only about where we go, but about whom we associate with, the state of our health, or how we worship. Reverse location warrant searches implicate innocent people and have a real impact on people’s lives. Even if you are later able to clear your name, if you spend any time at all in police custody, this could cost you your job, your car, and your ability to get back on your feet after the arrest.

We urge the New York Legislature to pass these bills, stand up for their constituents’ privacy, and stand against creeping surveillance that disrupts the lives of people just trying to get through the day.

  • Podcast Episode: Algorithms for a Just Future
    by rainey Reitman on January 11, 2022 at 10:41 am

    Episode 107 of EFF’s How to Fix the Internet

Modern life means leaving digital traces wherever we go. But those digital footprints can translate to real-world harms: the websites you visit can impact the mortgage offers, car loans, and job options you see advertised. This surveillance-based, algorithmic decision-making can be difficult to see, much less address. These are the complex issues that Vinhcent Le, Legal Counsel for the Greenlining Institute, confronts every day. He has some ideas and examples about how we can turn the tables—and use algorithmic decision-making to help bring more equity, rather than less.

EFF’s Cindy Cohn and Danny O’Brien joined Vinhcent to discuss our digital privacy and how U.S. laws haven’t kept up with safeguarding our rights when we go online.

You can find the MP3 of this episode on the Internet Archive.

The United States already has laws against redlining, where financial companies engage in discriminatory practices such as preventing people of color from getting home loans. But as Vinhcent points out, we are seeing lots of companies use other data sets—including your zip code and online shopping habits—to make massive assumptions about the type of consumer you are and what interests you have. These groupings, even though they are often inaccurate, are then used to advertise goods and services to you—which can have big implications for the prices you see.

But, as Vinhcent explains, it doesn’t have to be this way. We can use technology to increase transparency in online services and ultimately support equity.

In this episode you’ll learn about:

Redlining—the pernicious system that denies historically marginalized people access to loans and financial services—and how modern civil rights laws have attempted to ban this practice.
How the vast amount of our data collected through modern technology, especially browsing the Web, is often used to target consumers for products, and in effect recreates the illegal practice of redlining.

The weaknesses of the consent-based models for safeguarding consumer privacy, which often mean that people are unknowingly waiving away their privacy whenever they agree to a website’s terms of service.

How the United States currently has an insufficient patchwork of state laws that guard different types of data, and how a federal privacy law is needed to set a floor for basic privacy protections.

How we might reimagine machine learning as a tool that actively helps us root out and combat bias in consumer-facing financial services and pricing, rather than exacerbating those problems.

The importance of transparency in the algorithms that make decisions about our lives.

How we might create technology to help consumers better understand the government services available to them.

Vinhcent Le serves as Legal Counsel with the Greenlining Institute’s Economic Equity team. He leads Greenlining’s work to close the digital divide, protect consumer privacy, ensure algorithms are fair, and insist that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion, as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards, including the California Privacy Protection Agency. Learn more about the Greenlining Institute.

Resources

Data Harvesting and Profiling:
Ricci v.
DeStefano (Britannica)
Data Brokers are the Problem (EFF)
‘GMA’ Gets Answers: Some Credit Card Companies Financially Profiling Customers (ABC News)

Automated Decision Systems (Algorithms):
States’ Automated Systems Are Trapping Citizens in Bureaucratic Nightmares With Their Lives on the Line (Time)
The Automated Decision Systems Accountability Act (AB 13) Explained (Greenlining Institute)
The Administrative Procedures Act (APA) (EPIC)

Community Control and Consumer Protection:
Community Control of Police Surveillance (CCOPS) (EFF)
Ten Questions-And Answers-About the California Consumer Privacy Act (EFF)

Racial Discrimination and Data:
Investing in Disadvantaged Communities (Greenlining Institute)
EFF to HUD: Algorithms Are No Excuse for Discrimination (EFF)

Fintech Industry and Advertising IDs:
Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance (EFF)
Visa Wants to Buy Plaid, and With It, Transaction Data for Millions of People (EFF)

Transcript

Vinhcent: When you go to the grocery store and you put in your phone number to get those discounts, that’s all getting recorded, right? It’s all getting attached to your name, or at least an ID number. Data brokers purchase that from people, they aggregate it, they attach it to your ID, and then they can sell that out. There, there was a website where you could actually look up a little bit of what folks have on you. And interestingly enough, they had all my credit card purchases; they thought I was a middle-aged woman that loved antiques, ‘cause I was going to TJ Maxx a lot.

Cindy: That’s the voice of Vinhcent Le. He’s a lawyer at the Greenlining Institute, which works to overcome racial, economic, and environmental inequities. He is going to talk with us about how companies collect our data, what they do with it once they have it, and how too often that reinforces those very inequities.
Danny: That’s because some companies look at the things we like, who we text, and what we subscribe to online to make decisions about what we’ll see next, what prices we’ll pay, and what opportunities we have in the future.

THEME MUSIC

Cindy: I’m Cindy Cohn, EFF’s Executive Director.

Danny: And I’m Danny O’Brien. And welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation. On this show, we help you to understand the web of technology that’s all around us and explore solutions to build a better digital future.

Cindy: Vinhcent, I am so happy that you could join us today because you’re really in the thick of thinking about this important problem.

Vinhcent: Thanks for having me.

Cindy: So let’s start by laying a little groundwork and talk about how data collection and analysis about us is used by companies to make decisions about what opportunities and information we receive.

Vinhcent: It’s surprising, right? Pretty much all of the decisions that companies encounter today are increasingly being turned over to AI and automated decision systems to be made. Right. The FinTech industry is determining what rates you pay, whether you qualify for a loan, based on, you know, your internet data. It determines how much you’re paying for car insurance. It determines whether or not you get a good price on your plane ticket, or whether you get a coupon in your inbox, or whether or not you get a job. It’s pretty widespread. And, you know, it’s partly driven by the need to save costs, but also by this idea that these AI automated algorithmic systems are somehow more objective and better than what we’ve had before.

Cindy: One of the dreams of using AI in this kind of decision making is that it was supposed to be more objective and less discriminatory than humans are. The idea was that if you take the people out, you can take the bias out. But it’s very clear now that it’s more complicated than that.
The data has bias baked in, in ways that are hard to see, so walk us through that from your perspective.

Vinhcent: Absolutely. The Greenlining Institute, where I work, was founded to essentially oppose the practice of redlining and close the racial wealth gap. And redlining is the practice where banks refuse to lend to communities of color, and that meant that access to wealth and economic opportunity was limited for, you know, decades. Redlining is now illegal, but the legacy of that lives on in our data. So they look at the zip code and all of the data associated with that zip code, and they use that to make the decisions. They use that data, and they’re like, okay, well, this zip code, which so often happens to be full of communities of color, isn’t worth investing in because poverty rates are high or crime rates are high, so let’s not invest in this. So even though redlining is outlawed, these computers are picking up on these patterns of discrimination, and they’re learning that, okay, that’s what humans in the United States think about people of color and about these neighborhoods; let’s replicate that kind of thinking in our computer models.

Cindy: The people who design and use these systems try to reassure us that they can adjust their statistical models, change their math, surveil more, and take these problems out of the equation. Right?

Vinhcent: There’s two things wrong with that. First off, it’s hard to do. How do you determine how much of an advantage to give someone? How do you quantify what the effect of redlining is on a particular decision? Because there are so many factors: decades of neglect and discrimination, and that’s hard to account for.

Cindy: It’s easy to envision this based on zip codes, but that’s not the only factor. So even if you control for race or you control for zip codes, there are still multiple factors that are going into this, is what I’m hearing.

Vinhcent: Absolutely.
When they looked at discrimination in algorithmic lending, they found out that essentially there was discrimination: people of color were paying more for the same loans as similarly situated white people. It wasn’t because of race, but it was because they were in neighborhoods that have less competition and choice. The other problem with fixing it with statistics is that it’s essentially illegal, right? If you find out, in some sense, that people of color are being treated worse under your algorithm, and you correct it on racial terms, like, okay, brown people get a specific bonus because of the past redlining, that’s disparate treatment; that’s illegal under our anti-discrimination law.

Cindy: We all want a world where people are not treated adversely because of their race, but it seems like we are not very good at designing that world, and for the last 50 years, in the law at least, we have tried to avoid looking at race. Chief Justice Roberts famously said “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” But it seems pretty clear that hasn’t worked; maybe we should flip that approach and actually take race into account?

Vinhcent: Even if an engineer wanted to fix this, right, their legal team would say, no, don’t do it, because there was a Supreme Court case, Ricci, a while back, where a fire department thought that its test for promoting firefighters was discriminatory. They wanted to redo the tests, and the Supreme Court said that trying to redo that test to promote more people of color was disparate treatment. They got sued, and now no one wants to touch it.

MUSIC BREAK

Danny: One of the issues here, I think, is that as the technology has advanced, we’ve shifted from, you know, just having an equation to calculate these things, which we can kind of understand. Where are they getting that data from?

Vinhcent: We’re leaving little bits of data everywhere.
And those little bits of data may be what website we’re looking at, but it’s also things like how long you looked at a particular piece of the screen, or did your mouse linger over this link, or what did you click? So it gets very, very granular. So what data brokers do is, you know, they have tracking software, they have agreements, and they’re able to collect all of this data from multiple different sources, put it all together, and then put people into what are called segments. And they had titles like “single and struggling” or “urban dweller down on their luck.” So they have very specific segments that put people into different buckets. And then what happens after that is advertisers will be like, we’re trying to look for people that will buy this particular product. It may be innocuous, like I want to sell someone shoes in this demographic. Where it gets a little bit more dangerous and a little bit more predatory is if you have someone that’s selling payday loans or for-profit colleges saying, hey, I want to target people who are depressed or recently divorced or are in segments that are associated with various other emotional states that make their products more likely to be sold.

Danny: So it’s not just about your zip code. It’s like they just decide, oh, everybody who goes and eats at this particular place, it turns out nobody is giving them credit. So we shouldn’t give them credit. And that begins to build up; it just re-enacts that prejudice.

Vinhcent: Oh my gosh, there was a great example of exactly that happening with American Express. A gentleman, Wint, was traveling, and he went to a Walmart in, I guess, a bad part of town, and American Express reduced his credit limit because of the shopping behavior of the people that went to that store. American Express was required under the Equal Credit Opportunity Act to give him a reason, right, why his credit limit changed.
That same level of transparency and accountability doesn’t exist for a lot of these algorithmic decisions that do the same thing; because they’re not as well regulated as more traditional banks, they don’t have to do that. They can just silently change your terms or what you’re going to get, and you might not ever know.

Danny: You’ve talked about how redlining was a problem that was identified, and there was a concentrated effort to try and fix that, both in the regulatory space and in the industry. Also we’ve had a stream of privacy laws, again, sort of in this area, roughly around consumer credit. In what ways have those laws failed to keep up with what we’re seeing now?

Vinhcent: I will say the majority of our privacy laws, for the most part, that maybe aren’t specific to the financial sector, they fail us because they’re really focused on this consent-based model where we agree in these giant terms of service to give away all of our rights. Putting guardrails up so predatory use of data doesn’t happen hasn’t been a part of our privacy laws. And then with regards to our consumer protection laws, perhaps around FinTech, our civil rights laws, it’s because it’s really hard to detect algorithmic discrimination. You have to provide some statistical evidence to take a company to court, proving that, you know, their algorithm was discriminatory. We really can’t do that because the companies have all that data, so our laws need to shift away from this race-blind strategy that we’ve had for the last, you know, 50, 60 years, where, like, okay, let’s not consider race, let’s just be blind to it, and that’s our way of fixing discrimination. With algorithms, where you don’t need to know someone’s race or ethnicity to discriminate against them based on those terms, that needs to change. We need to start collecting all that data.
It can be anonymous, and then we can test the results of these algorithms to see whether or not there’s a disparate impact happening: aka, are people of color being treated significantly worse than, say, white people, or are women being treated worse than men? If we can get that right, we get that data, we can see that these patterns are happening, and then we can start digging into where this bias arises. You know, where is this vestige of redlining coming up in our data or in our model?

Cindy: I think transparency is especially difficult in this question of machine learning decision-making because, as Danny pointed out earlier, often even the people who are running it don’t know what it’s picking up on all that easily.

MUSIC BREAK

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: We understand that different communities are being impacted differently. Companies are using these tools and we are seeing the disparate impacts. What happens when those situations end up in the courts? Because from what I’ve seen, the courts have been pretty hostile to the idea that companies need to show their reasons for those disparate impacts.

Vinhcent: Yeah. So, you know, my idea, right, is that if we get the companies on record showing that, oh, you’re causing disparate impact, it’s their responsibility to provide a reason, a reasonable business necessity, that justifies that disparate impact. And that’s what I really want to know. What reasons are all these companies using to charge people of color more for loans or insurance, right? It’s not based off their driving record or their income. So what is it?
And once we get that information, right, we can begin to have a conversation as a society around what the red lines are for us around the use of data: what particular uses, say targeting predatory ads toward depressed people, should be banned. We can’t get there yet because all of those cards are being held really close to the vest of the people who are designing the AI.

Danny: I guess there is a positive side to this, in that I think at a society level we recognize that this is a serious problem. That excluding people from loans, excluding people from a chance to improve their lot, is something that we’ve recognized that racism plays a part in and we’ve attempted to fix, and that machine learning is contributing to this. I play around with some of the more trivial versions of machine learning; I play around with things like GPT-3. What’s fascinating about that is that it draws from the Internet’s huge well of knowledge, but it also draws from the less salubrious parts of the internet. And you can see that it is expressing some of the prejudices that it’s been fed with. My concern here is that what we’re going to see is a percolation of that kind of prejudice into areas where we’ve never really thought about the nature of racism. And if we can get transparency in that area and we can tackle it here, maybe we can stop this from spreading to the rest of our automated systems.

Vinhcent: I don’t think all AI is bad. Right? There’s a lot of great stuff happening; Google Translate, I think, is great. I think in the United States, what we’re going to see is, at least with housing and employment and banking, those are the three areas where we have strong civil rights protections in the United States. I’m hoping and pretty optimistic that we’ll get action, at least in those three sectors, to reduce the incidence of algorithmic bias and exclusion.
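The outcome testing discussed above, comparing how an algorithm treats different groups once demographic data is collected, can be sketched in a few lines. This hypothetical audit uses made-up approval data and the 80% ("four-fifths") threshold that U.S. employment regulators use as a rule of thumb for adverse impact; it is an illustration of the idea, not a legal test:

```python
# Toy disparate-impact audit: compare approval rates across groups.
# All data below is fabricated for illustration; real audits need
# careful sampling and statistical significance testing.

def approval_rate(decisions):
    # decisions: 1 = approved, 0 = denied
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    # Ratio of the protected group's approval rate to the reference
    # group's; values below 0.8 are a conventional red flag
    # (the "four-fifths rule").
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical audit sample of loan decisions
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approved
group_b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"ratio = {ratio:.2f}")                      # ratio = 0.38
print("flag for review" if ratio < 0.8 else "no flag")
```

A flagged ratio doesn't prove discrimination by itself; as the conversation notes, it is the starting point for digging into where the bias in the data or model arises.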
Cindy: What are the kinds of things you think we can do that will make a better future for us with these systems, and pull out the good of machine learning and less of the bad?

Vinhcent: I think we’re at the early stage of algorithmic regulation and kind of reining in the free hand that tech companies have had over the past decade or so. I think what we need to have is an inventory of AI systems as they’re used in government, right? Is your police department using facial surveillance? Is your court system using criminal sentencing algorithms? Is your social service department determining your access to healthcare or food assistance using an algorithm? We need to figure out where those systems are, so we can begin to know, all right, where do we ask for more transparency? When we’re using taxpayer dollars to purchase an algorithm, that algorithm is going to make decisions for millions of people. For example, Michigan purchased the MiDAS algorithm, which was, you know, over $40 million, and it was designed to send out unemployment checks to people who recently lost their job. It accused 40,000 people of fraud. Many people went bankrupt, and the algorithm was wrong. So when you’re purchasing these expensive systems, there needs to be a risk assessment done around who could be impacted negatively by them; it obviously wasn’t tested enough in Michigan. Specifically in the finance industry, right, banks are allowed to collect race and ethnicity data on mortgage loans. I think we need to expand that, so that they are allowed to collect that data on small personal loans, car loans, small business loans. That type of transparency, allowing regulators, academia, folks like that to study the decisions these systems have made and essentially hold those companies accountable for the results of their systems, is necessary.
Cindy: That’s one of the things: you think about who is being impacted by the decisions that the machine is making and what control they have over how this thing is working, and it can give you kind of a shortcut for how to think about these problems. Is that something that you’re seeing as well?

Vinhcent: I think what is missing actually is that, right? There is a strong desire for public participation, at least from advocates, in the development of these models. But none of us, including me, have figured out what that looks like. Because the tech industry has pushed off any oversight by saying, this is too complicated, this is too complicated. And having delved into it, a lot of it is too complicated, right. But I think people have a role to play in setting the boundaries for these systems. Right? When does something make me feel uncomfortable? When does this cross the line from being helpful to being manipulative? So I think that’s what it should look like, but how does that happen? How do we get people involved in these opaque tech processes when they’re working on a deadline, when the engineers have no time to care about equity and deliver a product? How do we slow that down to get community input, ideally in the beginning, right, rather than after it’s already baked?

Cindy: That’s what government should be doing. I mean, that’s what civil servants should be doing. Right. They should be running processes, especially around tools that they are going to be using. And the misuse of trade secret law and confidentiality in this space drives me crazy. If this is going to be making decisions that have impact on the public, then a public servant’s job ought to be making sure that the public’s voice is in the conversation about how this thing works, where it works, where you buy it from. And that’s just missing right now.

Vinhcent: Yeah, that, that was AB 13, what we tried to do last year.
And there was a lot of hand-wringing about putting that responsibility onto public servants, because now they're worried that they'll get in trouble if they didn't do their job right. But that's your job, you know? That's government's role, to protect citizens from this kind of abuse.

MUSIC BREAK

Danny: I also think there's a sort of new and emerging disparity and inequity in the fact that we're constantly talking about how large government departments and big companies use these machine learning techniques, but I don't get to use them. I would love, as you said, Vinhcent, a machine learning thing that could tell me what government services are out there based on what it knows about me, and it doesn't have to share that information with anyone else. It should be my little pet AI, right?

Vinhcent: Absolutely. The public use of AI is so far limited to things like putting a filter on your face, right? Let's give us real power over, you know, our ability to navigate this world and get opportunities. How to flip that is a great question, and something, you know, I think I'd love to tackle with you all.

Cindy: I also think about things like the Administrative Procedure Act, getting a little lawyerly here, but this idea of notice and comment, you know, before something gets purchased and adopted. That's something we've done in the context of law enforcement purchases of surveillance equipment in the CCOPS ordinances that EFF has helped pass in many places across the country. And as you point out, disclosure of how things are actually going after the fact isn't new either; it's something we've done in key areas around civil rights in the past and could do in the future.
But it really does point out how important transparency is, both evaluation before and transparency after, as a key to getting at least enough of a picture of this so we can begin to solve it.

Vinhcent: I think we're almost there, where governments are ready. We tried to pass a risk assessment and inventory bill in California, AB 13, this past year, and, as with what you mentioned in New York, what it came down to was that government agencies didn't even know how to define what an automated decision system was. So there's a little bit of reticence. And I think, as we get more stories about Facebook or abuses in banking, that will eventually get our legislators and government officials to realize that this is a problem, stop fighting over these little things, and see the bigger picture: we need to start moving on this, and we need to start figuring out where this bias is arising.

Cindy: We would be remiss if we were talking about solutions and we didn't talk about, you know, a baseline strong privacy law. I know you think a lot about that as well. We don't have a real comprehensive approach, and we also really don't have a way to create accountability when companies fall short.

Vinhcent: I am a board member of the California Privacy Protection Agency. California has what is really the strongest privacy law in the United States, at least right now. Part of that agency's mandate is to require folks that have automated decision systems that include profiling to give people the ability to opt out, and to give customers transparency into the logic of those systems. We still have to develop those regulations. What does that mean? What does "logic" mean? Are we going to get people answers that they can understand? Who is subject to those disclosure requirements? But that's really exciting, right?
Danny: Isn't there a risk that this is the same kind of piecemeal solution that we've described in the rest of the privacy space? I mean, do you think there's a need to put this into a federal privacy law?

Vinhcent: Absolutely, right. So what California does will hopefully influence an overall federal one. I do think that the development of regulations in the AI space will happen, in a lot of instances, in a piecemeal fashion. We're going to have different rules for healthcare AI, different rules for housing and employment, maybe lesser rules for advertising, depending on what you're advertising. So to some extent these rules will always be sector-specific; that's just how the United States legal system has developed rules for all these sectors.

Cindy: We think of three things, and the California law has a bunch of them. You know, we think of a private right of action, so actually empowering consumers to do something if this doesn't work for them; that's something we weren't able to get in California. We also think about non-discrimination, so if you opt out of tracking, you know, you still get the service. That kind of fixes the situation we talked about a little earlier, where we pretend consumers have given consent, but the reality is they really haven't. And then of course, for us, no preemption, which is really just a tactical and strategic recognition that if we want the states to experiment with stuff that's stronger, we can't have the federal law come in and undercut them, which is always a risk. We need the federal law to hopefully set a very high baseline, but given the realities of our Congress right now, we need to make sure that it doesn't become a ceiling when it really needs to be a floor.
Vinhcent: It would be a shame if California put out strong rules on algorithmic transparency and risk assessments and then the federal government said, no, you can't do that, you're preempted.

Cindy: As new problems arise, I don't think we know all the ways in which racism, or other societal problems, are going to pop up in all the places. And so we do want the states to be free to innovate where they need to.

MUSIC BREAK

Cindy: Let's talk a little bit about what the world looks like if we get it right and we've tamed our machine learning algorithms. What does our world look like?

Vinhcent: Oh my gosh, it's such a paradise, right? Because that's why I got into this work. When I first got into AI, I was sold that promise. I was like, this is objective, this is going to be data-driven, things are going to be great. We can use these services, right, this micro-targeting: let's not use it to sell predatory ads, but let's find the people that need government assistance programs. California has all these great programs that pay for your internet, they pay for your cell phone bill, and enrollment is at 34%. We have a really great example of where this worked in California. As you know, California has cap and trade. So you're taxed on your carbon emissions, and that generates billions of dollars in revenue for California. We got into a debate, you know, a couple years back about how that money should be spent, and what California did was create an algorithm, with the input of a lot of community members, that determined which cities and regions of California would get that funding. We didn't use any racial terms, but we used data sources that are associated with redlining. Are you next to pollution? Do you have high rates of asthma, heart attacks? Does your area have higher unemployment rates?
So we took all of those categories that banks are using to discriminate against people in loans, and we're using those same categories to determine which areas of California get more access to cap and trade reinvestment funds. And that's being used to build electric vehicle charging stations, affordable housing, parks, trees, and all these things to abate the impact of the environmental discrimination that these neighborhoods faced in the past.

Vinhcent: So I think in that sense, you know, we could use algorithms for greenlining, right? Not redlining, but to drive equitable outcomes. And that, you know, doesn't require us to change all that much. We're just using the tools of the oppressor to drive change and to drive, you know, equity. So I think that's really exciting work. We saw it work in California, and I'm hoping we see it adopted in more places.

Cindy: I love hearing a vision of the future where the individual decisions made about us are things that lift us up rather than crushing us down. That's a pretty inviting way to think about it.

Danny: Vinhcent Le, thank you so much for coming and talking to us.

Vinhcent: Thank you so much. It was great.

MUSIC BREAK

Cindy: Well, that was fabulous. I really appreciate how he articulates the dream of machine learning: that we would get rid of bias and discrimination in official decisions, and instead, you know, we've basically reinforced it. And how, you know, it's hard to correct for these historical wrongs when they're based in so many different places. Just removing the race of the people involved doesn't get at all the ways discrimination creeps into society.

Danny: Yeah, I guess the lesson that a lot of people have learned in the last few years, and everyone else has kind of known, is that this sort of prejudice is wired into so many systems.
And it's kind of inevitable that algorithms that are based on drawing on all of this data and coming to conclusions are going to end up recapitulating it. I guess one of the solutions is this idea of transparency. Vinhcent was very honest that we're just in our infancy of learning how to make sure we know how algorithms make decisions. But I think that has to be part of the research and where we go forward.

Cindy: Yeah. And, you know, at EFF we've spent a little time trying to figure out what transparency might look like with these systems, because at the center of these systems it's very hard to get the kind of transparency we usually think about. But there's transparency in all the other places, right? He started off talking about an inventory of just all the places it's being used. Then looking at what the algorithms are putting out: looking at the results across the board, not just for one person but for a lot of people, to try to see if there's a disparate impact. And then running dummy data through the systems to try to see what's going on.

Danny: Sometimes we talk about algorithms as though we've never encountered them in the world before, but in some ways governance itself is this incredibly complicated system, and we don't know why that system works the way it does. But what we do is build accountability into it, right? And we build transparency around the edges of it, so we know how the process, at least, is going to work. And we have checks and balances. We just need checks and balances for our sinister AI overlords.

Cindy: And of course we just need better privacy law. We need to set the floor a lot higher than it is now. That's a drum we beat all the time at EFF, and it certainly seems very clear from this conversation as well.
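The kind of audit Cindy describes, comparing a system's outputs across many people and running dummy data through it to look for disparate impact, can be sketched in a few lines. This is purely an illustrative sketch, not anything built or endorsed in the episode: the group labels, the synthetic data, and the 0.8 threshold (the "four-fifths rule" heuristic regulators sometimes use for adverse impact) are all assumptions for the example.

```python
# Illustrative sketch: auditing a black-box system's decisions for
# disparate impact. All names and data here are hypothetical.

def selection_rates(decisions):
    """Approval rate per group; decisions is a list of (group, approved) pairs."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# "Dummy data" audit: feed synthetic applicants through the system and
# compare the approval rates it produces across groups.
dummy = [("A", True)] * 50 + [("A", False)] * 50 + \
        [("B", True)] * 30 + [("B", False)] * 70

ratio = disparate_impact_ratio(dummy, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule heuristic
    print("flag: possible disparate impact, investigate further")
```

The arithmetic is trivial; the point is what it presupposes. An auditor can only run this kind of check if the deployments are inventoried and the decision outputs are disclosed, which is exactly the transparency the conversation is arguing for.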
What was interesting is that, you know, Vinhcent comes out of the world of home mortgages and banking, and greenlining itself, you know, who gets to buy houses where, and on what terms. That world already has a lot of mechanisms in place both to protect people's privacy and to provide more transparency. So it's interesting to talk to somebody who comes from a world where we're a little more familiar with that kind of transparency, and how privacy plays a role in it, than in the general uses of machine learning or on the tech side.

Danny: It's funny, because when you talk to tech folks about this, we're kind of pulling our hair out because this is so new and we don't understand how to handle this kind of complexity. And it's very nice to have someone come from a policy background and go, you know what? We've seen this problem before. We pass regulations, we change policies to make this better; you just have to do the same thing in this space.

Cindy: And again, there's still a piece that's different, but it's far less than I think people sometimes assume. The other thing I really loved is that he gave us such a beautiful picture of the future, right? It's one where we still have algorithms, we still have machine learning, we may even get all the way to AI, but it's empowering people and helping people. And I love the idea of better being able to identify people who might qualify for public services that we're not finding right now. That's a great version of a future where these systems serve the users rather than the other way around.

Danny: Our friend Cory Doctorow always has this banner headline of "seize the methods of computation." And there's something to that, right?
There's something to the idea that we don't need to use these things as tools of law enforcement or retribution or rejection or exclusion. We have an opportunity to put this in the hands of people so that they feel more empowered. And they're going to need to be that empowered, because we're going to need a little AI of our own to be able to really work with these big machine learning systems that will become such a big part of our lives going forward.

Cindy: Well, big thanks to Vinhcent Le for joining us to explore how we can better measure the benefits of machine learning, and use it to make things better, not worse.

Danny: And thanks to Nat Keefe and Reed Mathis of Beat Mower for making the music for this podcast. Additional music is used under a Creative Commons license from ccMixter. You can find the credits and links to the music in our episode notes. Please visit where you'll find more episodes, learn about these issues, and donate to become a member of EFF, as well as lots more. Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat, or an EFF hoodie, or an EFF camera cover for your laptop camera. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Danny O'Brien.

  • “Worst in Show Awards” Livestreams Friday: EFF’s Cindy Cohn and Cory Doctorow Will Unveil Most Privacy-Defective, Least Secure Consumer Tech Products at CES
    by Karen Gullo on January 6, 2022 at 11:38 pm

    “Cool” Products That Collect Your Data, Lock Out Users. Las Vegas—On Friday, January 7, at 9:30 am PT, Electronic Frontier Foundation (EFF) Executive Director Cindy Cohn and EFF Special Advisor and sci-fi author Cory Doctorow will present the creepiest, most privacy-invasive, and least secure consumer tech devices debuting at this year’s Consumer Electronics Show (CES). EFF, in partnership with iFixit, USPIRG, and Repair.Org, will unveil their 2022 Worst in Show picks, an annual award given at CES, the massive trade show in Las Vegas where vendors demonstrate the coolest in purportedly life-changing tech gadgets and devices (think movie-streaming sunglasses and color-changing cars). Not all these products will change our lives for the better. A panel of judges will present the least secure, safe, repairable, and eco-friendly gadgets from the show. Doctorow will emcee the event and will join Cohn and guest judges Nathan Proctor (USPIRG), Gay Gordon-Byrne, Paul Roberts (securepairs), and Kyle Wiens (iFixit) to discuss their selections. To watch the presentation live, before it goes on YouTube, fill out this form to request access. You’ll be sent a Zoom link to join the event (no-video/audio-only is fine). Who: EFF’s Cindy Cohn and Cory Doctorow. What: Annual CES Worst in Show Awards. When: Friday, January 7, 2022, 9:30 am PT / 12:30 pm ET. Check out last year’s winners, and more on Right to Repair. Contact: Karen Gullo, Analyst, Media Relations, [email protected]

  • How are Police Using Drones?
    by Matthew Guariglia on January 6, 2022 at 8:32 pm

    Across the country, police departments are using myriad means and resources at their disposal to stock up on drones. According to the most recent tally on the Atlas of Surveillance (a project of EFF and the University of Nevada), at least 1,172 police departments nationwide are using drones. And over time, we can expect more law enforcement agencies to deploy them. A flood of COVID relief money, civil asset forfeiture money, federal grants, and military surplus transfers enables more departments to acquire these flying spies. But how are police departments using them? A new law in Minnesota mandates the yearly release of information related to police use of drones, and gives us a partial window into how law enforcement uses them on a daily basis. The 2021 report released by the Minnesota Bureau of Criminal Apprehension documents use of drones in the state during the year 2020. According to the report, 93 law enforcement agencies from across the state deployed drones 1,171 times in 2020—with a cumulative price tag of almost $1 million. The report shows that the vast majority of drone deployments were not for the public safety disasters that so many departments cite to justify acquiring drones. Rather, almost half (506) were just for the purpose of “training officers.” Other uses included information collection based on reasonable suspicion of unspecified crimes (185), requests from other government agencies unrelated to law enforcement (41), road crash investigation (39), and preparation for and monitoring of public events (6 and 12, respectively). There were zero deployments to counter the risk of terrorism. Police deployed drones 352 times in the aftermath of an “emergency” and 27 times for “disaster” response. This data isn’t terribly surprising. After all, we’ve spent years seeing police drones deployed in more and more mundane policing situations and in punitive ways. 
After the New York City Police Department accused one racial justice activist, Derrick Ingram, of injuring an officer’s ears by speaking too loudly through his megaphone at a protest, police flew drones by his apartment window—a clear act of intimidation. The government also flew surveillance drones over multiple protests against police racism and violence during the summer of 2020. When police fly drones over a crowd of protestors, they chill free speech and political expression through fear of reprisal and retribution from police. Police could easily apply face surveillance technology to footage collected by a surveillance drone that passed over a crowd, creating a preliminary list of everyone who attended that day’s protest. As we argued back in May 2020, drones don’t disappear once the initial justification for purchasing them no longer seems applicable. Police will invent ways to use their invasive toys, which means that drones find their way into situations where they are not needed, including everyday policing and the surveillance of First Amendment-protected activities. In the case of Minnesota’s drone deployments, police can try to hide behind their use of drones as a glorified training tool, but the potential for their invasive use will always hang over the heads (literally) of state residents. 

  • EFF Condemns the Unjust Conviction and Sentencing of Activist and Friend Alaa Abd El Fattah
    by Karen Gullo on January 4, 2022 at 6:37 pm

    EFF is deeply saddened and angered by the news that our friend, Egyptian blogger, coder, and free speech activist Alaa Abd El Fattah, long a target of oppression by Egypt’s successive authoritarian regimes, was sentenced to five years in prison by an emergency state security court just before the holidays. According to media reports and social media posts of family members, Fattah, human rights lawyer Mohamed el-Baqer, and blogger Mohamed ‘Oxygen’ Ibrahim were convicted on December 20 of  “spreading false news undermining national security” by the court, which has extraordinary powers under Egypt’s state of emergency. El-Baqer and Ibrahim received four-year sentences.  A trial on the charges held in November was a travesty, with defense lawyers denied access to case files or a chance to present arguments. At least 48 human rights defenders, activists, and opposition politicians in pre-trial detention for months and years were referred to the emergency courts for trial just before Egyptian President Abdel Fattah El Sisi lifted the state of emergency in October, Human Rights Watch reported. The profoundly unjust conviction and years-long targeting of Fattah and other civil and human rights activists is a testament to the lengths the Egyptian government will go to attack and shut down, through harassment, physical violence, arrest, and imprisonment, those speaking out for free speech and expression and sharing information. In the years since the revolution, journalists, bloggers, activists and peaceful protestors have been arrested and charged under draconian press regulations and anti-cybercrime laws being used to suppress dissent and silence those criticizing the government. A free speech advocate and software developer, Fattah, who turned 40 on November 18, has repeatedly been targeted and jailed for working to ensure Egyptians and others in the Middle East and North Africa have a voice, and privacy, online. 
He has been detained under every Egyptian head of state in his lifetime, and has spent the last two years at a maximum-security prison in Tora, 12 miles south of Cairo, since his arrest in 2019. It’s clear that Egypt has used the emergency courts as another tool of oppression to punish Fattah and other activists and government critics. We condemn the government’s actions and call for Fattah’s conviction to be set aside and his immediate release. We stand in solidarity with #SaveAlaa, and Fattah’s family and network of supporters. Fattah has never stopped fighting for free speech, and the idea that through struggle and debate, change is possible. In his own words (from a collection of Fattah’s prison writings, interviews, and articles, entitled “You Have Not Yet Been Defeated,” order here or here): I’m in prison because the regime wants to make an example of us. So let us be an example, but of our own choosing. The war on meaning is not yet over in the rest of the world. Let us be an example, not a warning. Let’s communicate with the world again, not to send distress signals nor to cry over ruins or spilled milk, but to draw lessons, summarize experiences, and deepen observations, may it help those struggling in the post-truth era. …every step of debate and struggle in society is a chance. A chance to understand, a chance to network, a chance to dream, a chance to plan. Even if things appear simple and indisputable, and we aligned – early on – with one side of a struggle, or abstained early from it altogether, seizing such opportunities to pursue and produce meaning remains a necessity. Without it we will never get past defeat. Fattah’s lawyer said in September that Fattah was contemplating suicide because of the conditions under which he is being held. 
He has been denied due process, with the court refusing to give his lawyers access to case files or evidence against him, and jailed without access to books or newspapers, exercise time, or time out of the cell and—since COVID-19 restrictions came into play—with only one visit, for twenty minutes, once a month. Laila Soueif, a mathematics professor and Fattah’s mother, wrote in a New York Times op-ed just days before his sentencing that her son’s crime “is that, like millions of young people in Egypt and far beyond, he believed another world was possible. And he dared to try to make it happen.” He is charged with spreading false news, she said, “for retweeting a tweet about a prisoner who died after being tortured, in the same prison where Alaa is now held.” Fattah himself addressed the court at trial: “The truth is, in order for me to understand this, I must understand why I am standing here,” he said, according to an English translation of a Facebook post of his statement. “My mind does not accept that I am standing here for the sake of sharing.” We urge everyone to order Fattah’s book and send a message to the Egyptian government and all authoritarian regimes that his fight for human rights, and our support for this courageous activist, will never be defeated.

  • Cross-Border Access to User Data by Law Enforcement: 2021 Year in Review
    by Katitza Rodriguez on January 3, 2022 at 7:29 pm

    Law enforcement around the world is apparently getting its holiday wish list, thanks to the Council of Europe’s adoption of a flawed new protocol to the Budapest Convention, a treaty governing procedures for accessing digital evidence across borders in criminal investigations. The Second Additional Protocol (“the Protocol”) to the Budapest Convention, which will reshape how police in one country access data from internet companies based in another country, was heavily influenced by law enforcement and mandates new intrusive police powers without adequate protections for privacy and other fundamental rights. It was approved on November 17, 2021—a major disappointment that can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections, and weaken everyone’s right to privacy and free expression across the globe. Following the decision by the CoE’s Committee of Ministers, the Protocol will open for signatures to countries that have ratified the Budapest Convention (currently 66 countries) around May 2022. It’s been a long fight and a very busy year. EFF, along with CIPPIC, European Digital Rights (EDRi), and other allies, fought to let the CoE and the world know that the Protocol was being pushed through without adequate human rights protections. We sounded warnings in February about the problem and noted that draft meetings to finalize the text were held in closed session, excluding civil society and even privacy regulators. After the draft protocol was approved in May by the CoE’s Cybercrime Committee, EFF and 40 organizations urged the Committee of Ministers, which also reviews the draft, to allow more time for suggestions and recommendations so that human rights are adequately protected in the protocol. 
In August, we submitted 20 solid, comprehensive recommendations to strengthen the Protocol, including requiring law enforcement to garner independent judicial authorization as a condition for cross-border requests for user data, prohibiting police investigative teams from bypassing privacy safeguards in secret data transfer deals, and deleting provisions mandating that internet providers directly cooperate with foreign law enforcement orders for user data, even where local laws require them to have independent judicial authorization for such disclosures. We then defended our position at a virtual hearing before the Parliamentary Assembly of the Council of Europe (PACE), which suggested amendments to the Protocol text. Sadly, PACE did not take all of our concerns to heart. While some of our suggestions were acted on, the core of our concerns about weak privacy standards went unaddressed. PACE’s report and opinion on the matter responds to our position by noting a “difficult dilemma” about the goal of international legal cooperation given significantly inconsistent laws and safeguards in countries that will sign on to the treaty. PACE fears that “higher standards [could] jeopardize” the goal of effectively fighting cybercrime and concludes that it would be unworkable to make privacy-protective rules stronger. Basically, PACE is willing to sacrifice human rights and privacy to get more countries to sign on to the treaty. This position is unfortunate, since many parts of the Protocol are a law enforcement wish list—not surprising since it was mainly written by prosecutors and law enforcement officials. Meanwhile, gaps in human rights protections under some participating countries’ laws are deep. As EFF told PACE in testimony at its virtual hearing, detailed international law enforcement powers should come with robust legal safeguards for privacy and data protection. 
“The Protocol openly avoids imposing strong harmonized safeguards in an active attempt to entice states with weaker human rights records to sign on,” EFF stated. “The result is a net dilution of privacy and human rights on a global scale. But the right to privacy is a universal right.” PACE suggested a few privacy-protecting changes to the Committee of Ministers—some of them based on our suggestions—but the Committee did not take these into account. For example, PACE agreed that the Protocol ought to incorporate new references to proportionality as a requirement in privacy and data protection safeguards (Articles 13 and 14). It also said that “immunities of certain professions, such as lawyers, doctors, journalists, religious ministers or parliamentarians” should be explicitly respected, and that there ought to be public statistics about how the powers created by the Protocol were used and how many people were affected. Other civil society concerns were left unaddressed; among several examples, PACE did not propose changes to a provision that prohibits states from maintaining adequate standards for access to biometric data. The Committee of Ministers then tied up a holiday gift to law enforcement by adopting the Protocol as-is, without any of the improvements that PACE suggested. As a result, applying human rights safeguards will be up to the broad range of individual countries that will now sign onto the treaty in the near future. Further Fights on The Horizon For 2022 With the Protocol’s adoption, there will now be debates in national Parliaments across the world about its ratification and what standards countries adopt as they implement it. There will be an opportunity for countries to declare reservations when acceding to the treaty. That means numerous chances at the domestic level to influence how governments act on the Protocol throughout 2022. 
People—and national data protection authorities—in countries with strong protections for personal information should demand that those safeguards not be circumvented by implementation of the Protocol. This is notably the case for European Union countries. Despite strong criticism of the Protocol by the European Data Protection Board, which represents all 27 national data protection authorities in the EU, the European Commission advised Member States to join the Protocol with as few reservations as possible. Latin American countries should also be cautious and aware of their particular challenges. Law enforcement pushed for quick adoption of the Protocol, but that push should not override current legal safeguards or impair national debates toward adequate minimum standards. Data protection and privacy advocates around the world should be ready for the fight. CoE’s Secretary-General welcomed the Protocol’s adoption “in the context of a free and open internet where restrictions apply only as a means to tackle crime”—an optimistic view, to be sure, given the recent spate of intense internet crackdowns by governments, including some Budapest Convention signatories. Part of the impetus for rushing the adoption of the Protocol in the first place was to forestall efforts to create a more intrusive framework for cross-border policing. Specifically, a new international cybercrime treaty, first proposed by Russia, is gaining support at the United Nations. The UN cybercrime treaty would address many of the same investigative powers as the Protocol and the Budapest Convention in ways that could be even more concerning for human rights. (As background, Russia has been promoting its cybercrime treaty for at least a decade.) Unfortunately, the adoption of the Protocol has not staved off those efforts. Not only are these efforts actively moving forward, but the Protocol has now created a new baseline of privacy-invasive police powers that the UN treaty can expand upon. 
Negotiations on the UN treaty will begin in January.  EFF and its civil society allies are already advocating for a human rights-based approach to drafting the proposed UN treaty, and pushing for a more active role in the UN negotiations than was afforded by the CoE. Our focus in the coming year will be on working with our allies across the world to ensure that any new data-access rules incorporate clear and robust human rights safeguards. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Fighting For A More Open, Balanced Patent System: 2021 in Review
    by Joe Mullin on January 2, 2022 at 9:54 pm

    At EFF, we’ve always stood up for the freedom to tinker and innovate. Unfortunately, our patent system doesn’t promote those freedoms. In some areas, like software, it’s causing much more harm than good. And the system is rife with patent trolls: companies that are focused on licensing and litigating patents, instead of making things. In 2021, the majority of all patent lawsuits were filed by these trolls. In fact, patent trolls have filed the majority of patent lawsuits for many years now.  But there’s reason for hope. Patent trolls have finally been seen as the problem they are, and both courts and Congress seem to be moving away from simplistic misconceptions like believing they can create more innovation just by handing out more patents.  This year, EFF fought hard for increased transparency in the patent system that will allow us to call out the worst actors, and ultimately get a more balanced patent system. We also worked to defend and strengthen patent review systems that allow the worst patents to be kicked out of the system more efficiently.  Open Records in Courts, and at the Patent Office Patent cases in particular suffer from a problem of overzealous secrecy. In 2019, EFF intervened in a court case called Uniloc v. Apple to defend the public’s right to know the details of what’s going on in patent cases. This case was an egregious one, in which a patent troll that had sued hundreds of companies was sealing up court records showing whether it even had the right to sue at all.  Turns out, Uniloc didn’t have the legal right, known as standing, to sue over this patent. By intervening in the case, EFF was able to get the whole story showing that Uniloc did not have the ability to litigate the case, and vindicate the public’s right to access court records.   
Although EFF won the right for the public to read nearly all of the court records in this case, Uniloc has continued to argue for keeping a small but critical portion of the evidence in this case hidden—the documents that show how much Uniloc was paid by the companies that took licenses to its patents. These license fees were generally paid due to litigation, or the threat of litigation. The sums were critical to whether Uniloc had a right to sue at all, as the court’s ruling dismissing Uniloc’s suit hinged on the fact that Uniloc had not made enough licensing revenue to have the right to bring the patent infringement claims. We won a powerful decision in February that ordered Uniloc to disclose all of the remaining information at issue, including the licensing information that was central to the district court’s dismissal of the patent suit. Uniloc appealed again, and in December we argued before the U.S. Court of Appeals for the Federal Circuit that the public had a right to access the records. We’ll continue to defend the public’s right to open courts in patent litigation. The Uniloc case isn’t the only place where we’re fighting for the public’s right to a more open patent system. We’re also continuing to push for real accountability and openness in Congress. Very often, victims of patent troll lawsuits don’t even know the identities of the people who sued them and stand to profit from the lawsuit. EFF is supporting a new bill in Congress that would remedy this unacceptable situation. The bill, called the “Pride in Patent Ownership Act” (S. 2774), would require patent owners to record their ownership at the U.S. Patent and Trademark Office (USPTO). The bill suffers from a very weak enforcement mechanism, in that the penalties for noncompliance are much too light. Still, we’re glad to see the issue of bringing more transparency to the patent system is getting some public attention.  
Fighting for Strong Defenses Against Bad Patents The USPTO grants hundreds of thousands of patents each year, and examiners don’t have enough time to get it right. That’s why it’s critical that we have a robust patent review system, which gives people and companies threatened over patents a chance to get a patent reviewed by professionals—without spending the millions of dollars that a jury trial can cost.  The best system for this so far is inter partes review, or IPR, a system that Congress set up 10 years ago to weed out some of the worst patents. IPR isn’t perfect, but it’s thrown out thousands of bad patents over the years and is a big improvement over the previous review systems that were used by the patent office.  That’s why we’re supporting the “Restore America Invents Act,” (S. 2891), which was introduced in September and closes some big loopholes that certain patent owners have used to avoid IPR challenges. While other reforms are needed, the Restore AIA bill takes some important steps that will make clear a strong IPR system is here to stay.  We also fought off an attempt to overthrow the IPR system altogether. Unsurprisingly, patent owners have tried repeatedly to convince the Supreme Court that post-grant challenges such as IPR are unconstitutional. This year, they failed again, when the Supreme Court declined to throw out the IPR system in U.S. v. Arthrex. As EFF explained in our brief for that case, filed together with Engine Advocacy, the IPR system has driven down the number of patent infringement lawsuits that clog federal courts, raise prices, and smother innovation.  Speaking Up for Users at the Patent Office  Finally, at two different times this year, EFF filed comments with the U.S. Patent and Trademark Office expressing our opposition to the agency’s continued efforts to increase the number of patent monopolies that are created at the public’s expense.  
First, we spoke out against proposed regulations that would have opened the floodgates to new and unnecessary types of design patents on computer-generated images. Design patents on the whole are a terrible deal for the public: they give rights holders the power to limit competition, like utility patents, but in return the patent owner provides almost nothing to the public realm.  Later in the year, we also spoke up about a planned USPTO study of patent eligibility that looks to be rigged in favor of patent owners from the get-go. The “study” is a list of loaded questions proposed by U.S. senators who have made it clear they want to revoke important legal precedents, including Alice v. CLS Bank, the landmark decision that bars so many abstract “do-it-on-a-computer” style patents.  In 2020, the great majority of software-related appeals where patent eligibility was at issue ended up with the patents being found invalid. That’s happening because of the Alice precedent, and we won’t let that progress get rolled back.  This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Fighting For You From Coast to Coast: 2021 In Review
    by Hayley Tsukayama on January 2, 2022 at 9:54 pm

    EFF makes its presence known in statehouses across the country to advocate for strong privacy laws and broadband access, and to protect and advance your digital rights. The pandemic changed a lot about how state legislators operated in 2021, but one thing has remained the same: EFF steps up to fight for you from coast to coast. Golden Opportunities in the Golden State We helped win a huge victory for all Californians this year, finally securing a historic $6 billion investment in broadband infrastructure for the state of California. Building on the community support and groundwork we laid in 2020 for investment to close the digital divide, we helped bring those efforts across the finish line. EFF has vocally supported efforts to expand and improve broadband infrastructure to bring access to 21st-century broadband technology to every community. For years, internet service providers have carved up the state, neglecting low-income and rural communities. It has become abundantly clear that the market alone will not close the digital divide; that’s why we need policy. The struggles many people faced while learning and working remotely in the pandemic made it clearer than ever that California needed to change the status quo. California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. Through a combination of new construction, low-interest loans, and grants, this money will allow communities to have more input on where and how networks are built. This victory came from a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, investment funding from the American Rescue Plan passed by Congress, and a multi-billion-dollar broadband plan included in Governor Newsom’s budget. 
In addition to our broadband work, we also collaborated with other civil liberties groups in California on a pair of bills to improve privacy around genetic data. S.B. 41, authored by Sen. Tom Umberg, adds privacy requirements for direct-to-consumer (DTC) genetic testing companies such as 23andMe. It gives consumers more transparency about how these companies use their information and more control over how it’s shared and retained, and it establishes explicit protections against discrimination based on genetic data. A.B. 825, authored by Assemblymember Marc Levine, expanded the definition of personal information, for the purposes of the state’s data security and data breach notification laws, to include genetic data. That means that if a company is irresponsible with your genetic data, it can be held accountable. We were pleased that Governor Newsom signed both bills into law. Make no mistake: our victories are yours, too. We thank every person who picked up the phone or sent an email to their California representative or senator. We could not have done this without that support. Across The Country Of course, California is not the only state where we fight for your digital rights. We’ve advocated across the country—from Washington to Virginia—to fight the bad bills and support the good ones in partnership with friends in those states. In Washington, we joined a coalition to help pass Rep. Drew Hansen’s HB 1336, which expanded broadband choice. Signed into law by Washington’s Gov. Jay Inslee, HB 1336 will improve access not only for rural parts of the state, but also for underserved urban communities. Of course, we haven’t won every fight. Over our opposition, Virginia’s legislature passed an empty privacy law—weak, underfunded, and not designed with consumers in mind—that puts the desires of companies over the needs of consumers. As Reuters reported, lobbyists for Amazon handed the bill to its author and pushed hard for it to pass. Virginians deserved better. 
Privacy will continue to be a hot topic in legislatures across the country next year. We urge lawmakers not to treat weak bills, such as Virginia’s or the recent “model bill” put forward by the Uniform Law Commission, as examples to follow. Instead, we urge them to consider EFF’s top priorities for privacy legislation, including strong enforcement. Looking Ahead Our state legislative work is as busy as it’s ever been. We’re working with more partners on the ground in states across the country—especially those in our local Electronic Frontier Alliance—to connect with our fellow advocates and fight together for everyone’s digital rights. We look forward to being just as busy in 2022. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.  

  • Police Use of Artificial Intelligence: 2021 in Review
    by Matthew Guariglia on January 1, 2022 at 9:49 am

    Decades ago, when imagining the practical uses of artificial intelligence, science fiction writers imagined autonomous digital minds that could serve humanity. Sure, sometimes a HAL 9000 or WOPR would subvert expectations and go rogue, but that was very much unintentional, right? And for many aspects of life, artificial intelligence is delivering on its promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop more accurate and faster ways to predict the weather. But when it comes to policing, the reality is much less optimistic. Our HAL 9000 does not assert its own decisions on the world—instead, programs which claim to use AI for policing just reaffirm, justify, and legitimize the opinions and actions already being undertaken by police departments. AI presents two problems: tech-washing, and a classic feedback loop. Tech-washing is the process by which proponents of the outcomes can defend those outcomes as unbiased because they were derived from “math.” And the feedback loop is how that math continues to perpetuate historically-rooted harmful outcomes. “The problem of using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating these same biases,” as one philosopher of science notes. Far too often, artificial intelligence in policing is fed data collected by police, and therefore can only predict crime based on data from neighborhoods that police are already policing. But crime data is notoriously inaccurate, so policing AI not only misses the crime that happens in other neighborhoods, it reinforces the idea that the neighborhoods that are already over-policed are exactly the neighborhoods where police should direct patrols and surveillance. How AI tech-washes unjust data created by an unjust criminal justice system is becoming more and more apparent. 
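The feedback loop described above is easy to make concrete. Below is a toy simulation (our own illustration with made-up numbers, not any vendor's actual algorithm): two neighborhoods have identical underlying crime rates, but a historical patrol bias means one neighborhood's crime gets recorded more often, and a "predictive" allocator that follows recorded incidents locks that bias in place.

```python
# Toy model of the predictive-policing feedback loop: recorded crime depends
# on where police patrol, and future patrols depend on recorded crime.
# All numbers are hypothetical illustrations.

def simulate(steps=10):
    true_crime_rate = [0.5, 0.5]   # both neighborhoods have IDENTICAL crime rates
    patrols = [0.8, 0.2]           # historical bias: neighborhood 0 is over-policed
    recorded = [0.0, 0.0]          # cumulative recorded incidents

    for _ in range(steps):
        # Crime is only recorded where officers are present to observe it.
        for i in range(2):
            recorded[i] += true_crime_rate[i] * patrols[i]
        # The "predictive" step: allocate patrols in proportion to past records.
        total = sum(recorded)
        patrols = [r / total for r in recorded]
    return patrols

# Despite equal crime rates, the 80/20 patrol split never corrects itself.
print(simulate())
```

Feeding the allocator its own outputs reproduces the historical 80/20 split indefinitely; the bias is perpetuated, exactly as the quoted philosopher of science warns.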
In 2021, we got a better glimpse into what “data-driven policing” really means. An investigation conducted by Gizmodo and The Markup showed that the software that put PredPol, now called Geolitica, on the map disproportionately predicts that crime will be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular. You can read here about the technical and statistical analysis they did in order to show how these algorithms perpetuate racial disparities in the criminal justice system. Gizmodo reports that, “For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions.” This is precisely why so-called predictive policing or any data-driven policing schemes should not be used. Police patrol neighborhoods inhabited primarily by people of color–that means these are the places where they make arrests and write citations. The algorithm factors in these arrests and determines these areas are likely to see crime in the future, thus justifying heavy police presence in Black neighborhoods. And so the cycle continues again. This can occur with other technologies that rely on artificial intelligence, like acoustic gunshot detection, which can send false-positive alerts to police signifying the presence of gunfire. This year we also learned that at least one so-called artificial intelligence company that received millions of dollars and untold amounts of government data from the state of Utah could not actually deliver on its promises to help direct law enforcement and public services to problem areas. This is precisely why a number of cities, including Santa Cruz and New Orleans, have banned government use of predictive policing programs. 
As Santa Cruz’s mayor said at the time, “If we have racial bias in policing, what that means is that the data that’s going into these algorithms is already inherently biased and will have biased outcomes, so it doesn’t make any sense to try and use technology when the likelihood that it’s going to negatively impact communities of color is apparent.” Next year, the fight against irresponsible police use of artificial intelligence and machine learning will continue. EFF will continue to support local and state governments in their fight against so-called predictive or data-driven policing. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • 2021 Year in Review: EFF Graphics
    by Hugh D'Andrade on December 31, 2021 at 1:33 pm

    EFF’s small design team sometimes struggles to keep up with the frenetic pace of our activist, legal, and development colleagues. Whenever EFF launches a new legal case, activism campaign, tech project, or development campaign, we try to create unique and inspiring graphics to promote it. At EFF, we find that the ability to visualize the issues at hand encourages supporters to engage more fully with our work, to learn more and share more about what we do, and to donate to our cause. All the graphics we create are original and free for the public to use under a Creative Commons Attribution license. That means that if you are fighting to stop police misuse of surveillance technology in your community, promoting free expression online, or simply looking for a way to share your love for EFF and digital rights with the world, you are free to download our graphics and use them for your own purposes without permission. It’s our way of seeding the Commons! Below is a selection of graphics we produced this year. We hope you enjoy perusing them! To learn more about each project, click the image to visit its page. Don’t forget: many of our graphics are gifted to you in t-shirt or sticker form when you join EFF. And for a limited time, you can purchase postcard versions of some of our graphics in our shop. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • In 2021, the Police Took a Page Out of the NSA’s Playbook: 2021 in Review
    by Jennifer Lynch on December 31, 2021 at 11:57 am

    With increasing frequency, law enforcement has been using unconstitutional, suspicionless digital dragnet searches in an attempt to identify unknown suspects in criminal cases. Whether these searches are for everyone who was near a building where a crime occurred or who searched for a keyword like “bomb” or who shares genetic data with a crime scene DNA sample, 2021 saw more and more of these searches—and more attempts to push back and rein in unconstitutional law enforcement behavior.  While dragnet searches were once thought to be just the province of the NSA, they are now easier than ever for domestic law enforcement to conduct as well. This is because of the massive amounts of digital information we share—knowingly or not—with companies and third parties. This data, including information on where we’ve been, what we’ve searched for, and even our genetic makeup, is stored in vast databases of consumer-generated information, and law enforcement has ready access to it—frequently without any legal process. All of this consumer data allows police to, essentially, pluck a suspect out of thin air. EFF has been challenging unconstitutional dragnet searches for years, and we’re now seeing greater awareness and pushback from other organizations, judges, legislators, and even some companies. This post will summarize developments in 2021 on one type of dragnet suspicionless search—reverse location data searches.  Reverse Location Searches: Geofence Warrants & Location Data Brokers Reverse location searches allow the police to identify every device in a given geographic area during a specific period of time in the past as well as to track people’s paths of travel. Geographic areas can include city blocks full of people unconnected to the crime, including those living in private residences and driving on busy streets.  
Unlike ordinary searches for electronic records, which identify a suspect, account, or device in advance of the search, reverse location searches essentially work backward by scooping up the location data from every device in hopes of finding one that might be linked to the crime. The searches therefore allow the government to examine the data from potentially hundreds or thousands of individuals wholly unconnected to any criminal activity and give law enforcement unlimited discretion to try to pinpoint suspect devices—discretion that can be deployed arbitrarily and invidiously. Two main types of reverse location searches gained prominence in 2021: Geofence warrants and searches of location data generated by device applications and aggregated and sold by data brokers.   Geofence Warrants The first type of search, a “geofence warrant,” is primarily directed at Google. Through these warrants, police are able to access precise location data on the vast majority of Android device users and other Google account holders (including iPhone users). This data comes from wifi connections, GPS and Bluetooth signals, and cellular networks and allows Google to estimate a device’s location to within 20 meters or less. Using the data, Google can infer where a user has been, the path they took to get there, and what they were doing at the time. Google appears to be disclosing location data only in response to court-authorized warrants.  In 2021, we learned more about just how prevalent the use of geofence warrants has become. Over the summer, Google released a transparency report showing it had received approximately 20,000 geofence warrants between 2018 and 2020. The vast majority of these warrants (95.6%) came from state and local police agencies, with nearly 20% of all state requests coming solely from agencies in California. 
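At its core, the data production a geofence warrant demands is a radius-and-time-window filter over stored location pings. The sketch below is our simplified illustration (the record layout and function names are hypothetical; Google's actual multi-step warrant process and data model are more involved):

```python
# Simplified sketch of a geofence query: which device pings fall inside a
# radius around a point during a time window? Illustration only -- not
# Google's actual schema or process.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_geofence(pings, center, radius_m, t_start, t_end):
    """pings: iterable of (device_id, lat, lon, unix_ts). Returns matching ids."""
    lat0, lon0 = center
    return {
        dev for dev, lat, lon, ts in pings
        if t_start <= ts <= t_end and haversine_m(lat0, lon0, lat, lon) <= radius_m
    }

# Hypothetical pings against a 100 m fence and a 0-2000 time window.
pings = [
    ("a", 38.8895, -77.0353, 1000),  # at the center, inside the window
    ("b", 38.8897, -77.0350, 1500),  # roughly 35 m away, inside the window
    ("c", 38.9000, -77.0500, 1200),  # over a kilometer away: outside the fence
    ("a", 38.8895, -77.0353, 5000),  # inside the fence but outside the window
]
print(devices_in_geofence(pings, (38.8895, -77.0353), 100, 0, 2000))  # devices 'a' and 'b'
```

Note that the filter sweeps in every device in the area; identification happens after collection, not before, which is exactly the overbreadth problem courts have flagged.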
The report also shows that many states have ramped up their use of geofence warrants exponentially over the last couple years—in 2018, California issued 209 geofence warrant requests, but in 2020, it issued 1,909. Each of these requests can reveal the location of thousands of devices. Geofence requests now constitute more than a quarter of the total number of all warrants Google receives. This is especially concerning because police are continuing to use these warrants even for minor crimes. And, as The Markup discovered following Google’s report, agencies have been less than transparent about their use of this search technique—there are huge discrepancies between Google’s geofence warrant numbers and the data that California police agencies are disclosing to the state—data that they are explicitly required to report under state law. Aggregated App-Generated Location Data In 2021, we also learned more about searches of aggregated app-generated location data. With this second type of reverse location search, the government is able to access location data generated by many of the applications on users’ phones. This data is purportedly deidentified and then aggregated and sold to various shady and secretive data brokers who re-sell it to other data brokers and companies and to the government. Unlike geofence warrants directed to Google, neither the data brokers nor the government seem to think any legal process at all is required to access these vast treasure troves of data—data that the New York Times described as “accurate to within a few yards and in some cases updated more than 14,000 times a day.” And although the data brokers argue the data has been anonymized, data like this is notoriously easy to re-identify.  In 2020, we learned that several federal agencies, including DHS, the IRS, and the U.S. military, purchased access to this location data and used it for law enforcement investigations and immigration enforcement. 
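Why is "anonymized" location data so easy to re-identify? Because behavior itself is identifying: a device that pings from the same coordinates every night almost certainly belongs to whoever lives there. The sketch below uses hypothetical data and a hypothetical field layout (not any broker's real schema) to infer a probable home location from nighttime pings:

```python
# Sketch of a basic re-identification step: estimate each "anonymized"
# device's home as the average of its nighttime coordinates. Hypothetical
# data; cross-referencing the result with property records would yield a name.
from collections import defaultdict
from datetime import datetime, timezone

def likely_home(pings):
    """pings: iterable of (device_id, lat, lon, unix_ts).
    Returns {device_id: (lat, lon)} averaged over pings from 00:00-05:00 UTC."""
    night = defaultdict(list)
    for dev, lat, lon, ts in pings:
        if datetime.fromtimestamp(ts, tz=timezone.utc).hour < 5:
            night[dev].append((lat, lon))
    return {
        dev: (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        for dev, pts in night.items()
    }

pings = [
    ("8f3a", 41.8781, -87.6298, 1609462800),  # 01:00 UTC: a nighttime ping
    ("8f3a", 41.8781, -87.6298, 1609549200),  # next night, same spot
    ("8f3a", 41.8900, -87.6200, 1609502400),  # midday ping elsewhere, ignored
]
print(likely_home(pings))  # {'8f3a': (41.8781, -87.6298)}
```

Adding a daytime "work" cluster makes the home/work pair nearly unique per person, which is why researchers consider this kind of data effectively pseudonymous at best.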
In 2021, we started to learn more about how this data is shared with state and local agencies as well. For example, data broker Veraset shared raw, individually-identifiable GPS data with the Washington DC local government, providing the government with six months of regular updates about the locations of hundreds of thousands of people as they moved about their daily lives. Ostensibly, this data was meant to be used for COVID research, but there appears to have been nothing that truly prevented the data from ending up in the hands of law enforcement. We also learned that the Illinois Department of Transportation (IDOT) purchased access to precise geolocation data about over 40% of the state’s population from Safegraph, a controversial data broker later banned from Google’s app store. For just $49,500, the agency got access to two years’ worth of raw location data. The dataset consisted of over 50 million “pings” per day from over 5 million users, and each data point contained a precise latitude and longitude, a timestamp, a device type, and a so-called “anonymized” device identifier. We expect to find many more examples of this kind of data sharing as we further pursue our location data transparency work in 2022. Location Data Has Been Used to Target First Amendment-Protected Activity There is more and more evidence that data available through reverse location searches can be used to track protestors, invade people’s privacy, and falsely associate people with criminal activity. In 2021, we saw several examples of law enforcement trawling Google location data to identify people in mass gatherings, including many who were likely engaged in First Amendment-protected political protests. The FBI requested geofence warrants to identify individuals involved in the January 6 riot at the U.S. Capitol. 
Minneapolis police used geofence warrants around the time of the protests following the police killing of George Floyd, catching an innocent bystander who was filming the protests. And ATF used at least 12 geofence warrants to identify people in the protests in Kenosha, Wisconsin following the police shooting of Jacob Blake. One of these warrants encompassed a third of a major public park for a two-hour window during the protests. Efforts to Push Back on Reverse Location Searches In 2021, we also saw efforts to push back on the increasingly indiscriminate use of these search techniques. We called on Google to both fight geofence warrants and be much more transparent about the warrants it’s receiving, as did the Surveillance Technology Oversight Project and a coalition of 60 other organizations. Both Google and Apple pushed back on shady location data aggregators by banning certain SDKs from their app stores and kicking out at least one location data broker entirely. There were other efforts in both the courts and legislatures. While we are still waiting on rulings in two criminal cases involving geofence warrants, People v. Dawes (in which we filed an amicus brief) and United States v. Chatrie (a case being litigated by the National Association of Criminal Defense Lawyers), judges in other parts of the country have been proactive on these issues. In 2021, a Kansas federal magistrate judge issued a public order denying a government request for a geofence warrant, joining several other judges from Illinois who issued a series of similar orders in 2020. 
All of these judges held that the government’s geofence requests were overbroad and failed to meet the Fourth Amendment’s particularity and probable cause requirements, and one judge chided the government publicly, stating:  [t]he government’s undisciplined and overuse of this investigative technique in run-of-the-mill cases that present no urgency or imminent danger poses concerns to our collective sense of privacy and trust in law enforcement officials.  We’re hoping the judges in Dawes and Chatrie follow these magistrate judges and find those respective geofence orders unconstitutional as well.  In the meantime, however, the Fourth Circuit Court of Appeals, sitting en banc, issued a great ruling over the summer in a case that could have ramifications for reverse location searches. In Leaders of a Beautiful Struggle v. Baltimore Police Department, the court held that Baltimore’s use of aerial surveillance that could track the movements of every person and vehicle across the city violated the Fourth Amendment. We filed an amicus brief in the case. The court recognized that, even if the surveillance program only collected data in “shorter snippets of several hours or less,” that was “enough to yield ‘a wealth of detail’ greater than the sum of the individual trips” and to create an “encyclopedic” record of where those people came and went. Also, crucially, the court recognized that, even if people were not directly identifiable from the footage alone, police could distinguish individuals and deduce identity from their patterns of travel and through cross-referencing other surveillance footage like ALPR and security cameras. This was sufficient to create a Fourth Amendment violation. Like the reverse location searches discussed in this post, police could have used the Baltimore program to identify everyone who was in a given area in the past, so the ruling in this case will be important for our legal work in 2022 and beyond. 
Finally, in 2021 we also saw legislative efforts to curb the use of reverse location search techniques. We strongly supported the federal “Fourth Amendment is Not For Sale Act,” introduced by Senator Ron Wyden, which would close loopholes in existing surveillance laws to prohibit federal law enforcement and intelligence agencies from purchasing location data (and other types of data) on people in the United States and Americans abroad. The bill has bipartisan support and 20 co-sponsors in the Senate, and a companion bill has been introduced in the House. We also supported a state bill in New York that would outright ban all reverse location searches and reverse keyword searches. This bill was reintroduced for the 2021-2022 legislative session and is currently in committee. We’re waiting to see what happens with both of these bills, and we hope to see more legislation like this introduced in other states in 2022. The problem of suspicionless dragnet searches is not going away anytime soon, unfortunately. Given this, we will continue our efforts to hold the government and companies accountable for unconstitutional data sharing and digital dragnet searches throughout 2022 and beyond. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Every State Has a Chance to Deliver a “Fiber for All” Broadband Future: 2021 in Review
    by Ernesto Falcon on December 30, 2021 at 6:23 pm

    This year’s passage of the Infrastructure Investment and Jobs Act (IIJA)—also known as the bipartisan infrastructure package—delivered on a goal EFF has sought for years. It finally creates a way for people to remedy a serious problem: a severe lack of fiber-to-the-home connectivity. Fiber optics lie at the core of all future broadband access options because fiber is the superior medium for moving data, with no close comparison. As a result, global demand for fiber infrastructure is extremely high: China is seeking to connect 1 billion of its citizens to symmetrical gigabit access, and many advanced EU and Asian nations are rapidly approaching near-universal deployment. Crucially, these countries did not reach these outcomes naturally through market forces alone, but rather by passing infrastructure policies much like the IIJA. Now it’s up to elected officials in states, from governors to state legislators, to work to ensure the federal infrastructure program delivers 21st-century-ready infrastructure to all people. Some states are ahead of the curve. In 2021, California embraced a fiber-infrastructure-for-all effort, with the legislature unanimously passing a historic investment in public fiber. State Senator Lena Gonzalez led this effort by introducing the first fiber-broadband-for-all bill; EFF was a proud sponsor of this bill in Sacramento. Other states are behind the curve, overly restricting the ability of local governments and utilities to plug the gaps that private internet service providers (ISPs) have left for sixteen years and counting. (2005 was when private fiber-to-the-home deployment really kicked off.) Maintaining those barriers, even as federal dollars are finally released, guarantees those states will fail to deliver universal fiber; the federal law, while important, isn’t sufficient on its own. Success requires maximum input from local efforts to make the most of this funding. What Is in the Federal Infrastructure Law? 
    Understanding what progress we’ve made this year—and what still needs to be done—requires understanding the IIJA itself. The basic structure of the law is a collaboration between the federal government’s National Telecommunications and Information Administration (NTIA), the Federal Communications Commission (FCC), and the states and territories. Congress appropriated $65 billion in total: $45 billion for construction funds and $20 billion for efforts promoting affordability and digital inclusion. This money can be paired with state money, which will be essential in many states facing significant broadband gaps. Responsibility for different parts of this plan falls to different actors. The NTIA will set up a grant program, provide technical guidance to the states, and oversee state efforts. The FCC will issue regulations that require equal access to the internet, produce mapping data that will identify eligible zones for funding, and implement a new five-year subsidy of $30 per month to improve broadband access for low-income Americans. Both agencies will be resources to the states, each of which will be responsible for creating its own multi-year action plan that must be approved by the NTIA. The timelines for these steps vary. The NTIA’s grant program must be established by around May 2022; states will then take in that guidance and develop their own action plans. Every state will receive $100 million plus additional funding reflecting its share of the national unserved population—a statistic that the FCC will estimate. Congress also ordered the FCC to issue “digital discrimination” (also known as digital redlining) rules that ban deployment decisions based on income, race, ethnicity, color, religion, or national origin. EFF and many others have sought such digital redlining bans. Without these kinds of rules, we risk cementing first- and second-class internet infrastructure based on income status.
    Currently, companies offer high-income earners ever-cheaper and faster broadband, while middle- and low-income users are stuck on legacy infrastructure that grows more expensive to maintain and, as broadband needs expand, effectively slower. The digital discrimination provisions do allow carriers to exempt themselves from the rules if they can show economic and technical infeasibility of building in a particular area, which will limit the impact of these rules in rural markets. But make no mistake: there is no good excuse for discriminatory deployment decisions in densely populated urban markets. These areas are fully profitable to serve, which is why the major ISPs that don’t want to serve everyone, such as AT&T and Comcast, fought so hard to remove these provisions from the bipartisan agreement. But this rulemaking is how we fix the access problem. It is time to move past a world where kids go to fast-food parking lots to do their homework and where school districts’ only solution is to rent a slow mobile hotspot. This rulemaking is how we change things for those kids and for all of us.
    Local Choice and Open Access Are Necessary If States Want to Reach Everyone with Fiber
    The states are going to need to embrace new models of deployment that focus on fostering the development of local ISPs, as well as openly accessible fiber infrastructure. The federal law explicitly prioritizes projects that can “easily scale” speeds over time to “meet evolving connectivity needs” and “support 5G [and] successor wireless technologies.” Any objective reading of this leads to the conclusion that pushing fiber optics deep into a community should lie at the core of every project, whether delivery at the end is wired or wireless (both satellite and 5G rely on fiber optics). A key challenge will be how to build one infrastructure to serve all of these needs. The answer is to deploy the fiber and make it accessible to all players.
    Shared fiber infrastructure is going to be essential to extending fiber’s reach far and wide. EFF has produced cost-model data demonstrating that the most efficient means to reach the most people with fiber connections is deploying it on an open-access basis. This makes sense when you consider that all 21st-century broadband options, from satellite to 5G, rely on fiber optics, but not all carriers intend to build redundant, overlapping fiber networks anywhere other than urban markets. The shared infrastructure approach is already happening in Utah, where local governments are deploying fiber infrastructure and enabling several small private ISPs to offer competitive gigabit fiber services. Similarly, California’s rural county governments have banded together to jointly build open-access fiber to all people through the EFF-supported state infrastructure law. Needless to say, states have to move past the idea that a handful of grants and subsidies will fix their long-term infrastructure problems. They have to recognize that we’ve tried that already and understand the mistakes of the past. This is, in fact, the second wave of $45 billion in federal broadband funding. The previous $45 billion was spent on slow speeds and non-future-proofed solutions, which is why most states have little to show for it. Only fully embracing future-proofed projects with fiber optics at their core is going to deliver the long-term value Congress is seeking with the priority provisions written into statute.
    States Must Resist the Slow Broadband Grift
    Here is a fact: it is unlikely Congress will come around again to produce a national broadband infrastructure fund. A number of states will do it right this time, which will alleviate the political pressure for Congress to act again. Those states will apply the lessons of 2021, and of the past, when planning how to spend their infrastructure funding.
    In a handful of years, those states will probably have a super-majority of their residents connected to fiber. But, unfortunately, it’s possible some states will fall for the lie—often pushed by big ISPs—that slow networks save money. We know that the “good enough for now” mindset doesn’t work. Taking this path will waste every dollar, with nothing to show for it. Networks good enough for 2021 will look slow by 2026, forcing communities to replace them to remain economically competitive. The truth is, speed-limited networks cost a fortune in the long run because they quickly face obsolescence as needs grow. On average, we use about 21% more data each year, and that trend has been with us for decades. Furthermore, with the transition toward distributed work and the increasingly remote delivery of services such as healthcare and education, the need for ever-increasing upload speeds and symmetrical speeds will continue to grow. The slow broadband grift will come from industry players who are over-invested in “good enough for now” deployment strategies. It is worth billions of dollars to them for states to get this wrong. And so they will repeat their 2021 playbook and deploy their lobbyists to the states, just as they did with Congress (where they mostly failed). Industry players failed to sway Congress because everyone understands the simple fact that we will need more and more broadband with each passing year. Any ISP that comes to a state leader with a suggested plan needs to have its suggestions scrutinized against the technical objectives Congress laid out this year. Can its deployment plan “easily scale” to ever-increasing speeds? Will it meet community needs and enable 5G and successor wireless services? And, most importantly, will it deliver low-cost, high-quality broadband access? Many of these questions are answerable with proper technical vetting. There are no magical secrets of technology, just physics and financial planning.
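    That roughly 21% annual growth figure compounds quickly, which is the arithmetic behind networks "good enough for 2021" looking slow by 2026. A quick sketch of that compounding (using the article's long-run average as a constant rate, not a projection for any particular network):

```python
# If broadband demand grows ~21% per year (the long-run average cited
# above), a network sized for today's demand falls behind fast.

growth_rate = 1.21  # 21% more data used each year
demand = 1.0        # demand in 2021, normalized to 1x

for year in range(2022, 2027):
    demand *= growth_rate
    print(f"{year}: {demand:.2f}x 2021 demand")

# By 2026, demand is roughly 2.6x what a "good enough for 2021"
# network was provisioned for.
```

Five years of 21% growth multiplies demand by about 2.6, which is why a speed-limited build faces a forklift upgrade within a single funding cycle.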
    But it remains to be seen whether the states will allow politically well-connected legacy industries to make the call for them or rely on objective analysis focused on long-term value to their citizens. EFF worked hard in 2021 to make 21st-century-ready broadband for all a reality for every community. We will continue to do everything we can to ensure the best long-term outcome for people. If you need help convincing your local leadership to do the right thing for the public—connecting everyone to 21st-century internet access through fiber optics laid deep into your community—you have a partner in EFF. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Shining a Light on Black Box Technology Used to Send People to Jail: 2021 Year in Review
    by Kit Walsh on December 30, 2021 at 12:32 pm

    If you’re accused of a crime based on an algorithm’s analysis of the evidence, you should have a right to refute the assumptions, methods, and programming of that algorithm. Building on previous wins, EFF and its allies turned the tide this year on the use of these secret programs in criminal prosecutions. One of the most common forms of forensic programs is probabilistic genotyping software. It is used by the prosecution to examine DNA mixtures, where an analyst doesn’t know how many people contributed to the sample (such as a swab taken from a weapon). These programs are designed to make choices about how to interpret the data, what information to disregard as likely irrelevant, and how to compute statistics based on how often the different genes appear in different populations—and all of the different programs do it differently. These assumptions and processes are subject to challenge by the person accused of a crime. For that challenge to be meaningful, the defense team must have access to source code and other materials used in developing the software. The software vendors claim both that the software contains valuable secrets that must not be disclosed and that their methods are so well-vetted that there’s no point letting a defendant question them. Obviously, both can’t be true, and in fact it’s likely that neither is true. When a defense team was finally able to access one of these programs, the Forensic Statistical Tool (FST), they discovered an undisclosed function and shoddy programming practices that could lead the software to implicate an innocent person. The makers of FST had submitted sworn declarations about how they thought it worked and noted that it had been subject to ‘validation’ studies, where labs test some set of inputs to see if the results seem right, and so on. But any programmer knows that programs don’t always do what you think you programmed them to do, and so it was with FST: in trying to fix one bug, the developers unwittingly introduced another serious error.
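    The role of population frequencies described above can be illustrated with a deliberately naive sketch. Real probabilistic genotyping tools (FST, STRmix, TrueAllele) model mixtures, allele dropout, and lab artifacts with far more machinery; the loci counts and frequency values below are invented purely for illustration:

```python
import math

# Deliberately naive "product rule" match statistic for a clean,
# single-contributor DNA profile: multiply the population frequency of
# the observed allele at each locus. All numbers here are hypothetical.

def naive_match_probability(allele_freqs):
    # Assumes loci are statistically independent -- a modeling choice
    # that real tools hard-code and that defendants should be able
    # to examine and challenge.
    return math.prod(allele_freqs)

# The same observed profile scored against two hypothetical reference
# populations with different allele frequency tables:
freqs_population_a = [0.10, 0.05, 0.20]
freqs_population_b = [0.30, 0.15, 0.25]

print(naive_match_probability(freqs_population_a))  # about 0.001
print(naive_match_probability(freqs_population_b))  # about 0.011
```

Even this toy version shows an order-of-magnitude swing from changing the assumed frequency tables alone; which population data, independence assumptions, and thresholds a real tool builds in is exactly the kind of thing the defense needs to see.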
    That’s why there’s no substitute for independent review of the actual code that performs the analysis. Fortunately, this year saw two very significant wins for the right to challenge secret software. First, in a detailed and thoughtful opinion, a New Jersey appellate court explained in plain language why forensic software isn’t above the law and isn’t exempt from being analyzed by a defense expert to make sure it’s reliable and does what it says it does. Then, the first federal court to consider the issue also ordered disclosure. But that’s not the end of the story. In the New Jersey case, the prosecution decided to withdraw the evidence to avoid disclosure. And in the federal case, the defense says that the prosecution handed over unusable and incomplete code fragments. The defense is continuing to fight to get meaningful transparency into the software used to implicate the defendant. With the battle ongoing, we’re also continuing to brief the issue in other courts. Most recently, we filed an amicus brief in NY v. Easely, where the defendant was assaulted by a half dozen people and then arrested and accused of unlawful possession of a firearm based solely on the fact that he was near the gun and the DNA software said the DNA mixture on it likely contained some of his DNA. To make matters worse, the software at issue is closely related to the version of FST that was found to contain serious flaws. Given the history of junk science being used in courtrooms, we must be vigilant in protecting the rights of defendants to challenge the evidence used against them. We also fight to protect the public’s interest in fair judicial proceedings, and that means no convictions based on the say-so of secret software programs. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • 2021 Year In Review: Sex Online
    by Daly Barnett on December 29, 2021 at 4:26 pm

    We don’t trust Internet companies to be arbiters of morality. We shouldn’t hand them the responsibility of making broad cultural decisions about the forms sexuality is allowed to take online. And yet, this last year has been a steady and consistent drumbeat of Internet companies doing just that. Rather than seeking consensus among users, Apple, Mastercard, Amazon, eBay, and others chose to impose their own values on the world, negatively affecting a broad range of people. The ability to express oneself fully—including the expression of one’s sexuality—is a vital part of freedom of expression, and without that ability, free speech is an incomplete and impotent concept. To be clear, we are talking here about legal sexual expression, fully protected in the U.S. by the First Amendment, and not the limited category of sexual expression, called “obscenity” in U.S. law, the distribution of which may be criminalized. Here is a tiring and non-exhaustive list of the ways Internet platforms took it upon themselves to undermine free expression in 2021.
    Prologue: 2018, FOSTA
    It’s apt to look back at the events leading up to SESTA/FOSTA, the infamously harmful carveout to Section 230, as it set the stage for platforms to adopt severe moderation policies regarding sexual content. Just before SESTA/FOSTA passed, the adult classified listing site Backpage was shut down by the U.S. government and two of its executives were indicted under pre-SESTA/FOSTA legal authority. SESTA/FOSTA sought to curb sex trafficking by removing legal immunities for platforms that hosted content that could be linked to it. In a grim slapstick routine of blundering censorship, platforms were overly broad in disappearing any questionable content, since the law had no clear and granular rules about what is and isn’t covered. Another notable casualty was Craigslist’s Personals section, whose shutdown, unlike Backpage’s, was a direct result of FOSTA.
    As predicted by many, the bill itself did not prevent any trafficking, but actually increased it—enough so that a follow-up bill, the SAFE SEX Workers Study Act, was introduced as a federal research project to analyze the harms that occur to people in the sex industry when resources and platforms are removed online. The culture of fear around hosting sexual expression has only increased since, and we continue to track the ramifications of SESTA/FOSTA.
    Jump ahead to December 2020: Visa, Mastercard, Mindgeek
    Late last year, Pornhub announced that it would purge all unverified content from its platform. Shortly after the news broke, Visa and Mastercard revealed they were cutting ties with Pornhub. As we have detailed, payment processors are a unique infrastructural chokepoint in the ecosystem of the Internet, and it sets a dangerous precedent when they decide what types of content are allowed. A few months after the public breakup with Pornhub, Mastercard announced it would require “clear, unambiguous and documented consent” for all content on adult sites it partners with, as well as age and identity verification of all performers in said content. Mastercard claimed this was an effort to protect us, demonstrating little regard for whether smaller companies could meet these demands while their revenue stream is held ransom. Not long after that, Mindgeek, parent company of Pornhub, closed the doors for good on its other property, Xtube, which was burdened with the same demands. As documented by the Daily Beast and other publications, this campaign against Mindgeek was a concerted effort by the evangelical group Exodus Cry and its supporters, though that detail was absent from the preceding New York Times piece, which seemed to garner support from the general public. This moral policing on behalf of financial institutions set a precedent for the rest of 2021.
    AVN
    Just this month, December 2021, AVN Media Network announced that because of pressure from banking institutions, it will discontinue all monetization features on its sites AVN Stars and GayVN Stars. Both sites are platforms for performers to sell video clips. As of January 1st, 2022, all content on those platforms will be free, and performers cannot be paid directly for their labor on them.
    Twitch
    In May, Twitch revoked the ability of hot tub streamers to make money off advertisements. Although these streamers weren’t in clear violation of any points in Twitch’s community guidelines, it was more of a “you know it when you see it” type of violation. It’s not a far leap to draw a connection between this and the “brand safety score” that a cybersecurity researcher discovered in Twitch’s internal APIs. The company responded to that revelation by claiming the mechanism was simply a way to ensure the right advertisements were “appropriately matched” with the right communities, then said in its follow-up statement: “Sexually suggestive content—and where to draw the line—is an area that is particularly complex to assess, as sexual suggestiveness is a spectrum that involves some degree of personal interpretation of where the line falls.” After this inconsistent enforcement of unclear policies, Twitch added a special category for hot tub streamers. No word yet on its follow-up promise to make clearer community standards policies.
    Apple App Store
    During this year’s iteration of the WWDC conference, where Apple unveils new features and changes to its products, a quiet footnote to the announcements was a change to the App Store Review Guidelines: “hookup apps” that include pornography would not be allowed on the App Store. Following outcries that this would have a disproportionate impact on LGBTQ+ apps, Apple clarified to reporters that those apps, such as Grindr and Scruff, wouldn’t be affected.
    Apple wanted to make it clear that only apps that featured pornography would be banned. It did not comment on whether, or how, it had cracked the code of determining what is and isn’t porn.
    Discord
    Discord describes itself as “giving the people the power to create space to find belonging in their lives”—that is, unless Apple thinks it’s icky. Discord now prohibits all iOS users from accessing NSFW servers, regardless of user age. Much as Tumblr did in 2018, Discord is likely responding to the pressure of the strict Apple App Store policies mentioned above. This means that adult users are no longer allowed to view entirely legal NSFW content on Discord through iOS, even though these servers remain accessible on Android and desktop.
    OnlyFans
    In August, OnlyFans declared that it would ban explicit content starting in October. Given the platform’s reputation, this was confusing. Following significant pushback and negative press, OnlyFans backtracked on the decision just a week later.
    eBay
    With just a month’s notice, eBay revised its guidelines to ban adult items starting in June. Offending material includes movies, video games, comic books, manga, magazines, and more. Confusing matters even more, eBay took care to note that nudist publications (also known as naturist publications, usually non-sexual media representing the lifestyle of those who choose not to wear clothes) would not be allowed, but risqué sexually explicit art from before 1940 and some pin-up art from before 1970 are allowed. Many have pointed out that this change will endanger the archival capabilities and preservation of LGBTQ history.
    Instagram
    Instagram, a platform often criticized for its opaque restrictions on sexual content, stands out in this list as the only example here that puts an option in the user’s hands to see what they wish. The new “Sensitive Content Control” was released in July.
    It is a feature that lets users choose how restrictively the content they view on the app is moderated. Although Instagram still has many, many, many issues when it comes to regulating sexual content on its platform, a feature like this, or at the very least this type of interface, is a step in the right direction. Perhaps Instagram is paying attention to the petition with over 120,000 signatures pleading with it to stop unfairly censoring sexuality. Given that no two user groups will agree on what material is too sexually explicit to be on social media, and that Instagram itself can’t agree with the professional art world on what is pornography versus art, the obvious choice is to let the users decide. Let this “Sensitive Content Control” be a proof of concept for how to appropriately implement a moderation feature. Anything beyond what is already illegal should be up to users—and not advertisers, shareholders, developers, or biased algorithms—to decide whether or not they want to see it.
    Internet for All, Not Few
    The Internet is a complex amalgam of protocols, processes, and patchwork feature-sets constructed to accommodate all users. Like scar tissue, the layers have grown out of a need to represent us, a reflection of our complexities in the real world. Unfortunately, the last few years have been regressive to that growth; what we’ve seen instead is a pruning of that accommodation, a narrowing of the scope of possibility. Rather than a complex ecosystem that contains edges, like the offline world, those edges are being shaved down and eliminated to make the ecosystem more child-proof. Research shows that overly restrictive moderation is discriminatory and destructive to non-normative communities, communities that, because of their marginalized status, might exist in some of the edges these platforms deem too dangerous to exist.
    Laying unnecessary restrictions on how marginalized people get to exist online, whether intentional or not, has real-world effects. It widens the margins that keep people from living in safety, with dignity, and with free expression and autonomy. If we take the proof of concept from the Instagram example above, we can imagine a way to accommodate more possibilities without sanding down the edges for all. And if we’ve learned anything from the few positive changes made by companies this year, it’s that these platforms occasionally do listen to their user base. They’re more likely to listen when reminded that people, not the homogeneous monopolies they’ve constructed for themselves, hold the power. That’s why we’ll continue to work with diverse communities to hold these companies’ feet to the fire and help ensure a future where free expression is for everyone. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Students Are Learning To Resist Surveillance: Year in Review 2021
    by Jason Kelley on December 29, 2021 at 9:17 am

    It’s been a tough year for students – but a good one for resistance. As schools have shuffled students from in-person education to at-home learning and testing, then back again, the lines between “school” and “home” have been blurred. This has made it increasingly difficult for students to protect their privacy and to freely express themselves, as online proctoring and other sinister forms of surveillance and disciplinary technology have spread. But students have fought back, and often won, and we’re glad to have been on their side.
    Dragnet Cheating Investigations Rob Students of Due Process
    Early in the year, medical students at Dartmouth’s Geisel School of Medicine were blindsided by an unfounded dragnet cheating investigation conducted by the administration. The allegations were based on a flawed review of an entire year’s worth of student log data from Canvas, the online learning platform that contains class lectures and other substantive information. After a technical examination, EFF determined that the logs easily could have been generated by the automated syncing of course material to devices logged into Canvas. When EFF and FIRE reached out to Dartmouth and asked the school to more carefully review the logs—which Canvas’ own documentation explicitly states should not be used for high-stakes analysis—we were rebuffed. With the medical careers of seventeen students hanging in the balance, the students began organizing. At first, the on-campus protest, the letter to school administrators, and the complaints of unfair treatment from the student government didn’t make much of an impact. In fact, the university administration dug in, instituting a new social media policy that seemed aimed at chilling the anonymous speech that had appeared on Instagram detailing concerns students had with how the cheating allegations were being handled.
    But shortly after news coverage of the debacle appeared in the Boston Globe and the New York Times, the administration, which had failed to offer even a hint of proper due process to the affected students, admitted it had overstepped and dropped its allegations. This was a big victory, and it helped show that with enough pushback, students can help schools understand the right and wrong ways to use technology in education. Students from all over the country have now reached out to EFF and other advocacy organizations because teachers and administrators have made flimsy claims about cheating based on digital logs from online learning platforms that don’t hold up to scrutiny. We’ve created a guide for anyone whose school is using such logs for disciplinary purposes, and we welcome any students in a similar position to reach out to us.
    Online Proctoring Begins to Back Down—But the Fight Isn’t Over
    During 2020 and 2021, online proctoring tools saw upwards of a 500% increase in usage. But legitimate concerns about the invasiveness, potential bias, and efficacy of these tools were also widespread, as more people became aware of the ways that automated proctoring software, which purports to flag cheating, often flags normal test-taking behavior—and may even flag the behavior of some marginalized groups more often. During the October 2020 bar exam, ExamSoft flagged more than one-third of test-takers, or almost 3,200 people. Human review by the Bar removed nearly all of the flags, leaving only 47 examinees with sanctions. In December of 2020, the U.S. Senate even requested detailed information from three of the top proctoring companies—Proctorio, ProctorU, and ExamSoft—which combined have proctored at least 30 million tests over the course of the pandemic. This year, we continued the fight to protect private student data from proctoring companies and to ensure students get due process when their behavior is flagged.
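    The bar exam numbers above imply a striking precision figure for the automated flags. A quick back-of-the-envelope check, using only the figures cited in this article:

```python
# October 2020 bar exam figures cited above: roughly 3,200 examinees
# flagged by ExamSoft's automated system, of which only 47 flags
# survived human review and led to sanctions.

flagged = 3200
upheld_after_review = 47

precision = upheld_after_review / flagged
print(f"Share of automated flags upheld: {precision:.1%}")  # ~1.5%
```

In other words, by these figures roughly 98.5% of the software's flags did not survive human review, which is the core of the due-process concern.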
    We took a close look at the companies’ replies to the Senate and offered our own careful interpretation of how they missed the mark. In particular, we continue to take significant issue with the companies’ doublespeak: they claim that their services don’t flag cheating, just aberrant behavior, and that human review is required before any cheating can be determined. Why, then, do many of the companies offer an automation-only service? You simply can’t have it both ways. After coming under fire, ProctorU, one of the largest online proctoring companies, announced in May that it would no longer sell fully-automated proctoring services. The company admitted that “only about 10 percent of faculty members review the video” for students who are flagged by the automated tools—leaving the grades of the vast majority of test takers at the whims of biased and faulty algorithms. This is a big win, but it doesn’t solve every problem. Human review on the company side may simply result in teachers and administrators ignoring even more potential false flags, as they further trust the companies to make the decisions for them. We must continue to carefully scrutinize the danger to students whenever schools outsource academic responsibilities to third-party tools, algorithmic or otherwise. And we hope legislators begin to rein in unnecessary data collection by proctoring companies with some common-sense legislation in the new year.
    The New Future of Privacy Forum “Student Privacy Pledge” Has New Problems (and Old Ones)
    The Future of Privacy Forum (FPF) originally launched the Student Privacy Pledge in 2014 to encourage edtech companies, which often collect very sensitive data on K-12 students, to take voluntary steps to protect privacy. In 2016, we criticized the legacy pledge after it reached 300 signatories—to FPF’s dismay. This year, we carefully reviewed the new pledge and found it equally lacking.
    This matters because schools, students, and parents may believe that a company which abides by the pledge is protecting privacy in ways that it is not. The Student Privacy Pledge is a self-regulatory program, but those who choose to sign are committing to public promises that are enforceable by the Federal Trade Commission (FTC) and state attorneys general under consumer protection laws. That is cold comfort, though, when the pledge falls so short and when enforcement actions against edtech companies for violating students’ privacy have been few and far between. The new pledge stumbles in a variety of ways. In sum: it is filled with inconsistent terminology and fails to define material terms; it lacks clarity on which parts of a company that signs the pledge must abide by it; it leaves open the question of whether companies that update certain privacy policies must notify schools; it provides a variety of unclear exceptions for activities undertaken for “authorized educational/school purposes”; it does not define any sort of minimum standard for the resources companies must offer to schools about using their tools in a privacy-protective way; and it does not give any guidance on the privacy-by-design requirements it otherwise expresses a company should engage in. EFF is not opposed to voluntary mechanisms like the Student Privacy Pledge to protect users—if they work. But the FTC rarely brings enforcement actions focused on student privacy, and the gaps in the pledge don’t help. We hope the FTC and state attorneys general are willing to enforce it, but so far its usefulness has been underwhelming.
    Disciplinary Technology Isn’t Going Away
    While we’ve made some headway in protecting student privacy during the pandemic, the threats aren’t going away. Petitions and other campaigns have helped individual schools and students, but we are still pushing for Canvas, Blackboard, and other learning tools to clarify the accuracy of their logs.
    And we are glad that the California Bar this year is offering free re-dos and adjusted scores to those affected by 2021’s glitch-filled experience—but that comes on the heels of the Bar also signing a lengthy agreement with ExamSoft. Proctoring must be reined in and used more carefully, and the only data collected from students should be what is required to offer proctoring services. EFF devoted additional resources to student privacy this year, and we’re glad we did. We’ve learned a lot about what it takes to resist school surveillance, defend young people’s free expression, and protect student privacy—and we’ll continue the fight in 2022. If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Where Net Neutrality Is Today and What Comes Next: 2021 in Review
    by Ernesto Falcon on December 28, 2021 at 11:12 am

When all is said and done—and there are some major steps to take in 2022—the United States will mark 2021 as the last year without federal net neutrality protections. Next year is when we will undo the 2017 repeal and once again put the Federal Communications Commission (FCC) back to work doing its job: protecting consumers from bad actors, working towards universal and net-neutral internet access, and accurately assessing the playing field in telecommunications. With President Biden’s appointments of Chairwoman Jessica Rosenworcel and Gigi Sohn, a net neutrality pioneer, to staff the FCC’s leadership team, we can usher in a better era. Both appointees made clear their support for the 2015 Open Internet Order and their belief that the FCC should begin a process to re-establish federal authority over broadband carriers, including network neutrality rules. More fights lie ahead once the new federal rules are established, but let’s review what’s happened so far and what it means for protecting your access to the internet. The Pandemic Has Changed How We Use the Internet At its core, the necessity for net neutrality protections rests on one simple fact: people don’t want their broadband provider to dictate their experience online. It’s a need that only grew during the pandemic. As the country rapidly transitioned education, social activities, and jobs to rely on a persistent, open, and non-discriminatory connection to the world, views of access have shifted. Today, an eye-popping 76% of American internet users consider internet service to be as important as water and electricity in their daily life. But unlike those utility services, internet access is subject to the whims of private carriers for a large number of American users. People do not like that power imbalance, and they should not settle for it.
They pay for access, the providers are exceedingly well-compensated for access, and Congress set aside nearly $20 billion in funding to help people afford broadband access. Yet major broadband providers such as AT&T, Comcast, and Verizon still resist the notion that their role as essential service providers means rules that protect consumers should apply to them. California’s Law and the Role of State Power to Protect Consumers Right now California’s net neutrality law (SB 822) is being reviewed by the Ninth Circuit after the state’s Attorney General prevailed in the lower court. The law is now in effect in California, forcing carriers to abandon practices that contradicted net neutrality, such as AT&T self-preferencing its online streaming service HBO Max. We were glad to see the law get rid of a business practice that has generally been shown to make broadband access more expensive while negatively impacting the competitive landscape among services and products. No one likes it when a broadband carrier decides the products it owns should run “cheaper” by simply making alternatives on the internet more expensive to use, but that was exactly what AT&T was doing. If the 2015 Open Internet Order were still in effect, the federal rules would have blocked this practice; the FCC was investigating it as a net neutrality violation. The battle over California’s law makes clear that ISPs like AT&T, Verizon, and Comcast didn’t ask Ajit Pai’s FCC to abolish net neutrality protections because it was an overreach of the federal government or because the FCC didn’t have the authority. It was because they wanted to be free of any consumer protections, at any level. They know they sit on an essential service that people literally cannot live without, but they want to be in complete control over what you have to pay, how you get it, and how you are treated. But it doesn’t work that way.
The ISPs can’t have the FCC give up its authority and also prevent the states from stepping in on behalf of their residents. Remember, California was the state where Verizon was caught throttling a firefighter command center during a wildfire. California has a demonstrated need to regulate ISPs in the interest of public safety. In fact, the state passed AB 1699 by Assembly Member Marc Levine the year after SB 822 to explicitly ban Verizon from throttling first responder access during emergencies. This law, too, was opposed by CTIA, which represents Verizon: even though the carriers know they were completely wrong, they don’t want to be regulated at any level. The importance broadband access has for health, education, work, economic activity, public safety, and nearly every facet of everyday life cannot be overstated. That makes the legal question of whether states can protect their citizens in the absence of federal protections an extremely important one, and we at EFF hope California prevails. If California prevails, consumers will no longer need to rely exclusively on the FCC to protect their access to the internet when they can go to their governors and state elected officials. Victory at the Ninth Circuit would not only enshrine net neutrality for the 5th-largest economy in the world, it would inoculate California citizens from the whims of DC. Furthermore, it would likely protect federal net neutrality, because reversing it at the federal level would then have less of an impact on broadband access and would be less attractive to the major ISPs that started us down this path in the first place. We Will Fight to Push the FCC to Adopt New 21st Century Net Neutrality Rules in 2022 Net neutrality will always have champions so long as the public continues to want it and fight for it.
Much to the chagrin of ISP lobbyists (who are paid by their employers to perpetually oppose net neutrality), no one intends to let net neutrality just go away. EFF represents the public’s desire for the FCC to begin the process of restoring the rules. Chairwoman Rosenworcel stated clearly that she intends to revisit the reinstatement of net neutrality rules in 2022. Once the Senate confirms Gigi Sohn as the 5th Commissioner to the FCC, the work will begin. At a minimum, California’s state law establishes the basic floor of what net neutrality should look like federally, but even those rules were written in a pre-pandemic world. When broadband access is on par with access to electricity and water for most people, the FCC’s rules should reflect that importance. In fact, hundreds of organizations petitioned the incoming Biden Administration at the start of this year to ensure that rules prohibiting disconnection from critical services such as water and electricity would also cover broadband access. Furthermore, when Americans were forced to switch to remote access to engage in social and economic activity, ISPs that still retained data caps opted to lift them. But less than a year into the pandemic, with vaccinations just starting to come into circulation, these ISPs reversed themselves and restored artificial scarcity schemes despite home usage skyrocketing due to realities on the ground. In other words, despite the fact that internet usage was necessarily rising due to remote work and remote education, and despite solid profits, companies like AT&T decided they needed to make broadband access even more expensive for users. This is despite the fact that a multi-billion-dollar emergency benefit program had come online to provide generous subsidies of $50 a month, ensuring that no one would miss their bill and disrupt the carriers’ revenues.
Should the power remain completely in the hands of the ISP to decide the entirety of your future connection to the internet? EFF does not believe so, and we will fight for consumers next year at the FCC to ensure that the rules firmly empower users, not ISPs. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • In 2021, We Told Apple: Don’t Scan Our Phones
    by Joe Mullin on December 28, 2021 at 10:53 am

    Strong encryption provides privacy and security for everyone online. We can’t have private conversations, or safe transactions, without it. Encryption is critical to democratic politics and reliable economic transactions around the world. When a company rolls back its existing commitments to encryption, that’s a bad sign.  In August, Apple made a startling announcement: the company would be installing scanning software on all of its devices, which would inspect users’ private photos in iCloud and iMessage.  This scanning software, intended to protect children online, effectively abandoned Apple’s once-famous commitment to security and privacy. Apple has historically been a champion of encryption, a feature that would have been undermined by its proposed scanning software. But after years of pressure from law enforcement agencies in the U.S. and abroad, it appears that Apple was ready to cave and provide a backdoor to users’ private data, at least when it comes to photos stored on their phones.  At EFF, we called out the danger of this plan the same day it was made public. There is simply no way to apply something like “client-side scanning” and still meet promises to users regarding privacy.  Apple planned two types of scanning. One would have used a machine learning algorithm to scan the phones of many minors for material deemed to be “sexually explicit,” then notified the minors’ parents of the messages. A second system would have scanned all photos uploaded to iCloud to see if they matched a photo in a government database of known child sexual abuse material (CSAM). Both types of scanning would have been ripe for abuse both in the U.S., and particularly abroad, in nations where censorship and surveillance regimes are already well-established.  EFF joined more than 90 other organizations to send a letter to Apple CEO Tim Cook asking him to stop the company’s plan to weaken security and privacy on Apple’s iPhones and other products. 
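Hash-matching schemes like the proposed iCloud scanner compare a fingerprint of each photo against a database of fingerprints of known images. The sketch below is a deliberately simplified illustration, not Apple's design: the real proposal used a perceptual hash ("NeuralHash") with cryptographic blinding, while this toy version uses a plain cryptographic hash, and the function names and database contents are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Toy stand-in for a perceptual hash: a SHA-256 digest of the raw bytes.
    # A real perceptual hash matches visually similar images, not identical files.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of fingerprints of known prohibited images.
KNOWN_FINGERPRINTS = {
    fingerprint(b"known-image-a"),
    fingerprint(b"known-image-b"),
}

def scan_before_upload(photo: bytes) -> bool:
    """Return True if the photo would be flagged as a database match."""
    return fingerprint(photo) in KNOWN_FINGERPRINTS
```

Even in this sketch, the civil-liberties concern is visible: whoever controls the fingerprint database controls what gets flagged, and nothing in the matching mechanism itself limits the database to CSAM.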
We also created a petition where users from around the world could tell Apple our message loud and clear: don’t scan our phones!  More than 25,000 people signed EFF’s petition. Together with petitions circulated by Fight for the Future and OpenMedia, nearly 60,000 people told Apple to stop its plans to install mass surveillance software on its devices.  In September, we delivered those petitions to Apple. We held protests in front of Apple stores around the country. We even flew a plane over Apple’s headquarters during its major product launch to make sure its employees and executives got our message. After the unprecedented public pushback, Apple agreed to delay its plans.  A Partial Victory  In November, we got good news: Apple agreed to cancel its plan to send notifications to parent accounts after scanning iMessages. We couldn’t have done this without the help of tens of thousands of supporters who spoke out and signed the petitions. Thank you.  Now we’re asking Apple to take the next step and not break its privacy promise with a mass surveillance program to scan user phones for CSAM. Apple’s recent ad campaigns, with slogans like “Privacy: That’s iPhone,” have sent powerful messages to its more than one billion users worldwide. From Detroit to Dubai, Apple has said it in dozens of languages: the company believes privacy is “a fundamental human right.” It has sent this message not just to liberal democracies, but also to people who live in authoritarian regimes, and countries where LGBTQ people are criminalized.  It’s understandable that companies don’t want users to misuse their cloud-based systems, including using them to store illegal images. No one wants child exploitation material to spread. But rolling back commitments to encryption isn’t the answer. Abandoning encryption to scan images against a government database will do far more harm than good.  
As experts from around the world explained at the EFF-hosted Encryption and Child Safety event, once backdoors to encryption exist, governments can and will use them to go well beyond scanning for CSAM. These systems can and will be used against dissidents and minorities. We hope Apple will sidestep this dangerous pressure, stand with users, and cancel its photo scanning plans.  This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • We Encrypted the Web: 2021 Year in Review
    by Alexis Hancock on December 27, 2021 at 12:40 pm

In 2010, EFF launched its campaign to encrypt the entire web—that is, move all websites from non-secure HTTP to the more secure HTTPS protocol. Over 10 years later, 2021 has brought us even closer to achieving that goal. With various measurement sources reporting over 90% of web traffic encrypted, 2021 saw major browsers deploy key features to put HTTPS first. Thanks to Let’s Encrypt and EFF’s own Certbot, HTTPS deployment has become ubiquitous on the web. Default HTTPS in All Browsers For more than 10 years, EFF’s HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical argument is now a mainstream standard offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let’s Encrypt. We’re proud of EFF’s own Certbot tool, which is Let’s Encrypt’s software complement that helps web administrators automate HTTPS for free. The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode. In 2020, Firefox announced an “HTTPS-only” mode that all users can turn on, signaling that HTTPS adoption was substantial enough to implement such a feature. 2021 was the year the other major browsers followed suit, starting with Chrome defaulting to HTTPS when a user types an address without specifying either insecure HTTP or secure HTTPS. Then in June, Microsoft’s Edge announced an “automatic HTTPS” feature that users can opt into.
Then later in July, Chrome announced its “HTTPS-first mode”, which attempts to automatically upgrade all pages to HTTPS or display a warning if HTTPS isn’t available. Given Chrome’s dominant share of the browser market, this was a huge step forward in web security. Safari 15 also implemented an HTTPS-first mode, though it does not block insecure requests as Firefox, Chrome, and Edge do.  With these features rolled out, HTTPS is truly everywhere, accomplishing the long-standing goal to encrypt the web. SSL/TLS Libraries Get A Critical Update SSL/TLS libraries are heavily used in everyday critical components of our security infrastructure, like the transport of web traffic. These tools are primarily built in the C programming language, which has a long history of memory safety vulnerabilities. So the Internet Security Research Group has led the development of an alternative to libraries like OpenSSL, written in Rust, a modern, memory-safe programming language. The TLS library built in Rust is named “Rustls,” and it has been integrated into popular networking command-line utilities such as curl. With Rustls, important tools that use TLS can gain memory safety and make networks ever more secure and less vulnerable. Making Certbot More Accessible Since 2015, EFF’s Certbot tool has helped millions of web servers deploy HTTPS by making the certificate process free and easy. This year we significantly updated the user experience of Certbot’s command-line output for clarity. We also translated parts of the website into Farsi in response to user requests, and the Instructions Generator is now available in that language. We hope to add more languages in the future and make TLS deployment even more accessible across the globe.
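The upgrade behavior these browser modes share can be sketched in a few lines. This is a simplification under stated assumptions: real browsers also handle fallback timers, warning interstitials, and per-site exceptions, none of which appear here.

```python
from urllib.parse import urlsplit, urlunsplit

def https_first(typed: str) -> str:
    """Rewrite a typed address the way an HTTPS-first mode does:
    default bare hostnames to https:// and upgrade explicit http://."""
    if "://" not in typed:
        typed = "https://" + typed              # no scheme given: assume HTTPS
    parts = urlsplit(typed)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")  # upgrade the insecure scheme
    return urlunsplit(parts)

print(https_first("example.com"))              # https://example.com
print(https_first("http://example.com/page"))  # https://example.com/page
```

The interesting design choice in the real browser features is what happens when the HTTPS attempt fails: Firefox, Chrome, and Edge block or warn, while Safari 15 silently falls back to HTTP, which is why the article distinguishes it from the others.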
On The Horizon Even as we see positive movement by major browsers—from the HTTPS-by-default victories above to ending insecure FTP support and even Chrome adopting a Root Store program—we are also watching the potential dangers to these gains. Encrypting the net means sustaining the wins and fighting for tighter controls across all devices and major services.  HTTPS is ubiquitous on the web in 2021, and this victory is the result of over a decade of work by EFF, our partners, and the supporters who have believed in the dream of encrypting the web every step of the way. Thank you for your support in fighting for a safer and more secure internet. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.  

  • The Battle for Communications Privacy in Latin America: 2021 in Review
    by Veridiana Alimonti on December 27, 2021 at 11:49 am

    Uncovering government surveillance and fighting for robust and effective legal safeguards and oversight is a continuous battle in Latin American countries. Surveillance capabilities and technologies are becoming more intrusive and prevalent, surrounded by a culture of secrecy and entrenched views that pit security against privacy. There are several challenges to face. Alongside growing resistance against government biometric surveillance, the long-standing problem of unfettered communications surveillance persists and presents new troublesome trends. Both appear tied together, for example, in renewed attempts to compel individuals to give their biometric data in order to access mobile phone services, as we saw in México and Paraguay in 2021, with fierce opposition from civil society. The Supreme Court in Mexico indefinitely suspended the creation of the Padrón Nacional de Usuarios de Telefonía Móvil (PANAUT), a national registry of mobile users associated with their biometric data, after the federal agency assigned to implement the registry filed a constitutional complaint affirming its budgetary autonomy and its duty to ensure users’ rights to privacy, data protection, and access to information. In Paraguay, the bill forcing users to register their biometrics to enable a mobile telephone service was rejected by a parliamentary commission and has been halted in Congress since then.  This post highlights a few relevant developments this year regarding communications privacy in Latin America in its relation with other rights, such as freedom of expression and assembly. #ParoNacional in Colombia: Patrolling Phones and the Web In the wake of Colombia’s tax reform proposal, demonstrations spread over the country in late April, reviving the social unrest and socio-economic demands that led people to the streets in 2019. Media has reported on government crackdowns against the protestors, including physical violence, missing persons, and deaths. 
Fundación Karisma has also stressed implications for the right to protest online and the violation of rights due to internet shutdowns and online censorship and surveillance. Amid the turmoil, EFF has put together a set of resources to help people navigate digital security in protest settings. As seen in the 2019 protests, Colombian security forces once again abused their powers searching people’s phones at their discretion in 2021. Relying on controversial regulation that allows law enforcement agents to check the IMEI of mobile devices to curb cell phone thefts, police officers compelled protesters to hand over their passwords or unlock their phones, even though neither of these is needed to verify the IMEI of a device. As Fundación Karisma pointed out, like the search of a house, police can only seize a cell phone with a court order. Otherwise, such searches interfere with people’s fundamental rights to privacy, a fair trial, and the presumption of innocence. Over the years, the IMEI regulation has led to cases where the police reviewed people’s social networks or deleted potential evidence of police brutality and abuses. Colombian police “patrolling” on the web has also reinforced concerns over its invasive nature. Karisma points out that a 2015 Colombian police resolution authorizing law enforcement “cyber patrolling” is unclear about its specific scope, procedures, tools, and limits. Yet, a June 2021 Ministry of Defense report of their activities during the national strike indicates that digital patrolling served to detect cyberthreats, profile suspicious people and activities related to “vandalism” acts, and combat what the government deemed online disinformation. In the latter case, cyber patrolling was combined with a narrative dispute over the truth about reports, images, and videos on the excessive use of police force that have gained national and international attention.
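The point about the IMEI bears emphasizing: an IMEI is a 15-digit identifier whose last digit is a standard Luhn check digit, and it can be read from the device label or by dialing *#06#, with no password or unlocking required. A minimal validity check, shown here purely as illustration, looks like this:

```python
def imei_is_valid(imei: str) -> bool:
    """Validate a 15-digit IMEI with the standard Luhn checksum."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        digit = int(ch)
        if i % 2 == 1:        # double every second digit (0-indexed from the left)
            digit *= 2
            if digit > 9:
                digit -= 9    # equivalent to summing the two digits of the product
        total += digit
    return total % 10 == 0

print(imei_is_valid("490154203237518"))  # a commonly cited example IMEI: True
```

Nothing in this check touches the phone’s contents, which is exactly why demanding passwords or unlocked devices to “verify an IMEI” exceeds what the regulation’s stated purpose requires.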
Karisma’s report shed light on government campaigns framing publications critical of the army or the police as fake news or digital terrorism. The report concludes that such prejudicial framing served as the first step in a government strategy to stigmatize protests online and offline and to encourage censorship of critical content. Following the Inter-American Commission on Human Rights (IACHR) mission to Colombia in June, the IACHR expressed concern about Colombian security forces taking on a fact-checking role, especially on information related to their own actions. The IACHR has also highlighted the importance of the internet as a space for protest during the national strike, taking into account evidence of restrictions presented by Colombian groups, like Fundación Karisma, Linterna Verde, and FLIP.  Threats and Good News for Encryption  In Brazil, legislative discussions on draft bill 2630/2020, the so-called “Fake News” bill, continued throughout 2021. Concerns around disinformation gained new strength with the propagation of a narrative in favor of ineffective methods to tackle the COVID pandemic, promoted by President Jair Bolsonaro and his supporters. Despite its legitimate concerns with the disproportionate effects of disinformation campaigns, the text approved by the Senate back in 2020 contained serious threats to privacy, data protection, and free expression. Among them, the traceability mandate for instant messaging applications stood out.  EFF, along with civil society groups and activists on the ground, stayed firm in opposing the traceability rule. This rule compelled instant messaging applications to massively retain the chain of forwarded communications, undermining users’ expectation of privacy and strong end-to-end encryption safeguards and principles.
In August, we testified in Brazil’s Congress, stressing how massive data retention obligations and pushes to move away from robust end-to-end encryption implementations not only erode rights of privacy and free expression, but also impair freedom of assembly and association. As a piece of good news, the traceability mandate was dropped in the latest version of the bill, though perils to privacy remain in other provisions.  Also in the Brazilian Congress, a still-pending threat to encryption lies in a proposed obligation for internet applications to assist law enforcement in the telematic interception of communications. The overbroad language of this assistance obligation endangers the rights and security of users of end-to-end encrypted services. Coalizão Direitos na Rede, a prominent digital rights coalition in Brazil, underlined this and other dangerous provisions in this bill that changes the country’s Criminal Procedure Code. The coalition pointed out serious concerns and even setbacks in regard to legal safeguards for law enforcement access to communications data.  Gathering forces to coordinate efforts in advancing a proactive agenda to promote and defend encryption, another piece of great news took place regionally with the launch of the Alliance for Encryption in Latin America and the Caribbean (AC-LAC). EFF is a member of the Alliance, which so far comprises over 20 organizations throughout the region.  Pegasus Project: New Revelations, Persistent Rights Violations  Last, but not least, one of the most remarkable developments of 2021 on the communications privacy front was the Pegasus Project revelations. In July, the Pegasus Project unveiled governments’ espionage on journalists, activists, opposition leaders, judges, and others based on a list of more than 50,000 smartphone numbers of possible targets of the spyware Pegasus since 2016.
As reported by The Washington Post, the leaked phone numbers were concentrated in countries known to engage in surveillance against their citizens and to have been clients of NSO Group, the Israeli company which develops and sells the spyware. The list of possible targets as well as confirmed attacks through forensic analysis contradict NSO Group’s claims that their surveillance software is used only against terrorism and serious crimes. Phone numbers in the revealed list spanned more than 45 countries across the globe, but the greatest chunk of them related to Mexican phones—over 15,000 numbers in the leaked data.  Among them were people from the inner circle of President López Obrador, including close political allies and family members, dating from when he was an opposition leader still aspiring to the country’s presidency. Human rights defenders, researchers from the Inter-American Commission on Human Rights, and journalists were not spared on Mexico’s list. Cecilio Pineda Brito, a freelance reporter, was shot dead in 2017 just a few weeks after he was selected as a possible target for surveillance. When a mobile device is infected with Pegasus, messages, photographs, emails, call logs, and location data can be extracted, and microphones and cameras can be activated, giving full access to people’s private information and lives. The revelations confirmed findings published in 2017 by joint investigations held by R3D, Article 19, SocialTIC, and Citizen Lab about attacks carried out during former President Peña Nieto’s administration. Since then, the country’s Attorney General’s Office has initiated an investigation that is still open and has seen limited developments. Yet, the new leaked data has spurred advances in shedding light on government contracts related to Pegasus and in detaining and prosecuting, within the Attorney General’s investigations, a key person in the complex political and business scheme of the spyware’s use in Mexico.
Revelations of the Pegasus Project have raised a red flag regarding ongoing government negotiations with NSO Group in other Latin American countries, like Uruguay and Paraguay. They have also reinforced concerns around a troublesome procurement procedure involving Pegasus spyware in Brazil, firmly challenged by a group of human rights organizations, including Conectas and Transparency International Brazil. In El Salvador, Apple warned journalists from the well-known independent digital news outlet El Faro of possible targeting of their iPhones by state-sponsored attackers. Similar warnings were sent to Salvadoran leaders of civil society organizations and opposition political parties. At the regional level, leading digital rights groups in Latin America requested a thematic hearing before the Inter-American Commission on Human Rights to discuss surveillance risks for human rights. During the October 2021 hearing, they stressed serious concerns with various surveillance technologies employed in countries in the region without proper controls, legal basis, and safeguards aligned with international human rights standards. They urged the Commission to start a regional consultation process to establish a set of inter-American guidelines for the acquisition and use of technologies with surveillance capabilities, based on the principles of legality, necessity, and proportionality, which should be the baseline parameters of surveillance policies. In fact, the widespread use of malicious software by Latin American governments generally occurs with no clear and precise legal authorization, much less strict necessity and proportionality standards or strong due process safeguards. The call for a global moratorium on the use of malware technology until states have adopted robust legal safeguards and effective controls to ensure the protection of human rights—voiced by United Nations experts, the U.N.
High Commissioner, and dozens of organizations across the globe, including EFF—is the culmination of persistent human rights abuses and arbitrary violence related to government use of spyware. Moreover, as we said, outrage will continue until governments recognize that intelligence agency and law enforcement hostility to device security puts us all in danger. Instead of taking advantage of system weaknesses and bugs, governments should align in favor of strong cybersecurity for everyone. Conclusion Communications surveillance continues to be a pervasive problem in Latin America. Feeble legal safeguards and unfettered surveillance practices erode our ability to speak up against abuses, organize resistance, and fully enjoy a set of fundamental rights. Throughout 2021 and for years prior, EFF has been working with partners in Latin America to foster stronger human rights standards for government access to data. Along with robust safeguards and controls, governments must commit to promote and protect strong encryption and device security—they are two sides of the same coin. And we’ll keep joining forces to push for advances and uphold victories on this front in 2022 and the years to come. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Vaccine Passports: 2021 in Review
    by Jon Callas on December 26, 2021 at 10:04 am

2021 has been the year of vaccines, in light of the continuing worldwide pandemic. It has also been the year of vaccine passports. To fully tell this story, let’s go back to 2020, because the term vaccine passport as many people use it has changed since then. Early in the pandemic, there were discussions of “immunity passports” that would declare that someone had recovered from COVID-19, and which we thought were a bad idea. We, along with other civil liberties organizations, are against creating a new surveillance system that will be hard to remove and that will be inequitable for people who have health issues or do not have smartphones. In those days, before there was even a timeline for vaccines, immunity passports also created perverse incentives for people who could not shelter in place. Fortunately for us all, events kept immunity passports from gaining wide adoption. As the hope for vaccines in 2020 became a certainty in 2021, attention shifted from immunity passports to vaccine passports. Our stance remained the same: we want equity for all and no surveillance. Thus, we raised concerns about “vaccination passports,” by which we meant efforts to digitize credentials, rather than using the tried and true mechanism of vaccination documents. Digital, scannable credentials, we said, are hard to separate from the introduction of new systems used to track our movements. As we said in that post, “immunizations and providing proof of immunizations are not new. However, there’s a big difference between utilizing existing systems to adapt to a public health crisis and vendor-driven efforts to deploy new, potentially dangerous technology under the guise of helping us all move past this pandemic.” In 2021, though, language shifted. The term vaccine passport shifted from meaning frequent, active document checks to meaning simply vaccination records.
In April, therefore, we wrote about our opposition to digital “vaccine bouncers”—proposals that required a new tracking infrastructure and normalized a culture of doorkeepers to public places. We opposed regularly requiring visitors to display a digital token as a condition of entry. We also called for equitable distribution of vaccines. In the middle part of 2021, we continued our skepticism of active, frequent checks, especially when they were outsourced to private companies that have a financial interest in surveillance and in acting as vaccine bouncers. We also analyzed systems in Germany, California, New York, Illinois, Colorado, and other places. The spread of the Delta variant, anti-vaccination movements, and forgeries of both paper and digital documents have made the situation more confusing, especially as vaccine mandates have followed around the world. Our position has remained the same: we are against surveillance and in favor of equity for all. In 2021, that continued to mean strong support for paper documents over digital ones, because of the obvious links between digital documents and surveillance systems. As this year closes out, we are all now concerned about the Omicron variant and how it will affect next year’s handling of the pandemic. This very article has been rewritten more than once because of it. As we move into 2022, we expect more surprises in our continued pandemic-influenced life. We expect that companies selling surveillance will continue to exploit the moment. We continue to advocate for measures that do not create surveillance and that treat everyone equitably. This is apt to become more complex as digital documents are used in more contexts. Summing up, we will continue to advocate against pandemic-related surveillance. We don’t believe such surveillance will help us out of the pandemic. We also continue to advocate for equitable treatment of marginalized people. 
Above all, we hope we won’t be writing a similar year-end post next year. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • 2021 Was the Year Lawmakers Tried to Regulate Online Speech
    by Joe Mullin on December 26, 2021 at 9:23 am

    On the biggest internet platforms, content moderation is bad and getting worse. It’s difficult to get it right, and at the scale of millions or billions of users, it may be impossible. It’s hard enough for humans to sift between spam, illegal content, and offensive but legal speech. Bots and AI have also failed to rise to the task. So, it’s inevitable that services make mistakes—removing users’ speech that does not violate their policies, or terminating users’ accounts with no explanation or opportunity to appeal. And inconsistent moderation often falls hardest on oppressed groups. The dominance of a handful of online platforms like Facebook, YouTube, and Twitter increases the impact of their content moderation decisions and mistakes on internet users’ ability to speak, organize, and participate online. Bad content moderation is a real problem that harms internet users. There’s no perfect solution to this issue. But U.S. lawmakers seem enamored with trying to force platforms to follow a government-mandated editorial line: host this type of speech, take down this other type of speech. In Congressional hearing after hearing, lawmakers have hammered executives of the largest companies over what content stayed up, and what went down. The hearings ignored smaller platforms and services that could be harmed or destroyed by many of the new proposed internet regulations. Lawmakers also largely ignored worthwhile efforts to address the outsized influence of the largest online services—like legislation supporting privacy, competition, and interoperability. Instead, in 2021, many lawmakers decided that they themselves would be the best content moderators. So EFF fought off, and is continuing to fight off, repeated government attempts to undermine free expression online.

The Best Content Moderators Don’t Come From Congress

It’s a well-established part of internet law that individual users are responsible for their own speech online. 
Users and the platforms distributing users’ speech are generally not responsible for the speech of others. These principles are embodied in a key internet law, 47 U.S.C. § 230 (“Section 230”), which prevents online platforms from being held liable for most lawsuits relating to their users’ speech. The law applies to small blogs and websites, to users who republish others’ speech, and to the biggest platforms alike. In Congress, lawmakers have introduced a series of bills that suggest online content moderation will be improved by removing these legal protections. Of course, it’s not clear how a barrage of expensive lawsuits targeting platforms will improve online discourse. In fact, having to potentially litigate every content moderation decision will make hosting online speech prohibitively expensive, meaning that there will be strong incentives to censor user speech whenever anyone complains. Anyone who’s not a Google or a Facebook will have a very hard time affording to run a legally compliant website that hosts user content. Nevertheless, we saw bill after bill that actively sought to increase the number of lawsuits over online speech. In February, a group of Democratic senators took a shotgun-like approach to undermining internet law: the SAFE Tech Act. This bill would have stopped Section 230 from applying to speech for which “the provider or user has accepted payment” to create the speech. If it had passed, SAFE Tech would have both increased censorship and hurt data privacy, as more online providers switched to invasive advertising and away from “accepting payment,” which would cause them to lose protections. The following month, we saw the introduction of a revised PACT Act. Like the SAFE Tech Act, PACT would reward platforms for over-censoring user speech. The bill would require a “notice and takedown” system in which platforms remove user speech when a requestor provides a judicial order finding that the content is illegal. 
That sounds reasonable on its face, but the PACT Act failed to provide safeguards, and would have allowed would-be censors to delete speech they don’t like by getting preliminary or default judgments. The PACT Act would also mandate certain types of transparency reporting, an idea that we expect to see come back next year. While we support voluntary transparency reporting (in fact, it’s a key plank of the Santa Clara Principles), we don’t support mandated reporting that’s backed by federal law enforcement, or the threat of losing Section 230’s protections. Besides being bad policy, these regulations would intrude on services’ First Amendment rights. Last but not least, later in the year we grappled with the Justice Against Malicious Algorithms, or JAMA, Act. This bill’s authors blamed problematic online content on a new mathematical boogeyman: “personalized recommendations.” The JAMA Act would remove Section 230 protections from platforms that use a vaguely defined “personal algorithm” to suggest third-party content. JAMA would make it almost impossible for a service to know what kind of curation of content might render it susceptible to lawsuits. None of these bills have been passed into law—yet. Still, it was dismaying to see Congress continue down repeated dead-end pathways this year, trying to create some kind of internet speech-control regime that wouldn’t violate the Constitution or produce widespread public dismay. Even worse, lawmakers seem completely uninterested in exploring real solutions, such as consumer privacy legislation, antitrust reform, and interoperability requirements, that would address the dominance of online platforms without violating users’ First Amendment rights. 
State Legislatures Attack Free Speech Online

While Democrats in Congress expressed outrage at social media platforms for not removing user speech quickly enough, Republicans in two state legislatures passed laws to address the platforms’ purported censorship of conservative users’ speech. First up was Florida, where Gov. Ron DeSantis decried Twitter’s ban of President Donald Trump and other “tyrannical behavior” by “Big Tech.” The state’s legislature passed a bill this year that prohibits social media platforms from banning political candidates or deprioritizing any posts by or about them. The bill also prohibits platforms from banning large news sources or posting an “addendum” (i.e., a fact check) to the news sources’ posts. Noncompliant platforms can be fined up to $250,000 per day, unless the platform also happens to own a large theme park in the state. A Florida state representative who sponsored the bill explained that this exemption was designed to allow the Disney+ streaming service to avoid regulation. This law is plainly unconstitutional. Under the First Amendment, the government can no more require a service to let a political candidate speak on its website than it can require traditional radio, TV, or newspapers to host the speech of particular candidates. EFF, together with Protect Democracy, filed a friend-of-the-court brief in a lawsuit challenging the law, Netchoice v. Moody. We won a victory in July, when a federal court blocked the law from going into effect. Florida has appealed the decision, and EFF has filed another brief in the U.S. Court of Appeals for the Eleventh Circuit. Next came Texas, where Governor Greg Abbott signed a bill to stop social media companies that he said “silence conservative viewpoints and ideas.” The bill prohibits large online services from moderating content based on users’ viewpoints. The bill also requires platforms to follow transparency and complaint procedures. 
These requirements, if carefully crafted to accommodate constitutional and practical concerns, could be appropriate as an alternative to editorial restrictions. But in this bill, they are part and parcel of a retaliatory, unconstitutional law. This bill, too, was challenged in court, and EFF again weighed in, telling a Texas federal court that the measure is unconstitutional. The court recently blocked the law from going into effect, including its transparency requirements. Texas is appealing the decision.

A Path Forward: Questions Lawmakers Should Ask

Proposals to rewrite the legal underpinnings of the internet came up so frequently this year that at EFF, we’ve drawn up a more detailed process of analysis. Having advocated for users’ speech for more than 30 years, we’ve developed a series of questions lawmakers should ask as they put together any proposal to modify the laws governing speech online. First, we ask: what is the proposal trying to accomplish? If the answer is something like “rein in Big Tech,” the proposal shouldn’t impede competition from smaller companies, or actually cement the largest services’ existing dominance. We also look at whether the legislative proposal is properly aimed at internet intermediaries. If the goal is something like stopping harassment, abuse, or stalking, those activities are often already illegal, and the problem may be better solved with more effective law enforcement, or civil actions targeting the individuals perpetrating the harm. We’ve also heard an increasing number of calls to impose content moderation at the infrastructure level: in other words, shutting down content by getting an ISP, a content delivery network (CDN), or a payment processor to take action. These intermediaries are potential speech “chokepoints,” and there are serious questions that policymakers should think through before attempting infrastructure-level moderation.  
We hope 2022 will bring a more constructive approach to internet legislation. Whether it does or not, we’ll be there to fight for users’ right to free expression. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Stalkerware: 2021 in Review
    by Eva Galperin on December 25, 2021 at 9:45 am

    Stalkerware—that is, commercially available apps that can be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent—is nothing new, but 2021 has underscored just how prevalent and dangerous these apps continue to be and how important it is for companies and governments to take action to rein them in. 2021 saw the two-year anniversary of the Coalition Against Stalkerware, of which EFF is a founding member. In 2021, the Coalition continued to provide training, published tools and research, and worked directly with survivors of domestic abuse and intimate partner violence and the organizations that support them. EFF also took part in dozens of awareness-raising events, including EFF at Home’s Fighting Stalkerware edition in May and a talk on the state of stalkerware in the Apple ecosystem at 2021’s Objective by the Sea. A 2021 NortonLifeLock survey of 10,000 adults across ten countries found that almost 1 in 10 respondents who had been in a romantic relationship admitted to using a stalkerware app to monitor a current or former partner’s device activity. The same report indicates that the problem may be worsening. Norton Labs found that “the number of devices reporting stalkerware samples on a daily basis increased markedly by 63% between September 2020 and May 2021,” with the 30-day moving average jumping from 48,000 to 78,000 detections. Norton Labs reported that 250,000 devices were compromised with more than 6,000 stalkerware variants in May 2021 alone, with many devices infected with multiple stalkerware apps. Meanwhile, antivirus vendor Kaspersky reported that in the first ten months of 2021, almost 28,000 of its mobile users were affected by the threat of stalkerware. 
The range in numbers between these two antivirus companies suggests that we may be comparing apples to oranges, but even Kaspersky’s significantly lower number of detections indicates that stalkerware remained a significant threat in 2021. 2021 was also the year that Apple chose to enter the physical tracker market, debuting the AirTag. Apple used all of the existing iPhones to create a powerful network that gave it a major advantage over Tile and Chipolo in location tracking, but it had also created a powerful tool for stalkers with insufficient mitigations. Aside from an easily muffled beep after 36 hours (shortened after our criticism to 24), there was no way for users outside of the Apple ecosystem to know that they were being tracked. In December, Apple introduced an Android app called Tracker Detect to allow Android users to scan for AirTags, but there is still a long way to go before Android users have the same notification abilities as iPhone users. 2021 also continued the trend of stalkerware data leaks. In February, developer Till Kottmann discovered that stalkerware app KidsGuard, which markets itself both as a stealthy way for parents to monitor their children and also as a useful tool to “catch a cheating spouse,” was leaking victims’ data by exfiltrating it to an unprotected Alibaba cloud bucket. And in September, security researcher Jo Coscia found that stalkerware app pcTattletale left screenshots of victims’ phones entirely exposed and visible to anyone who knew the URL to go to. Coscia also showed that pcTattletale failed to delete screenshots made during the stalkerware’s 30-day trial after that trial had expired, even though the company explicitly claimed otherwise. The FTC also cracked down on a stalkerware app maker, issuing its very first outright ban on Support King, maker of the SpyFone stalkerware app, and its CEO Scott Zuckerman. 
The FTC took action against SpyFone, which it says “harvested and shared data on people’s physical movements, phone use and online activities through a hidden device hack,” not just because the app facilitated illegal surveillance, but because, like KidsGuard and pcTattletale, the product leaked the data collected from victims. The FTC described SpyFone’s security as “slipshod,” stated its intention to “be aggressive about seeking surveillance bans when companies and their executives egregiously invade our privacy,” and cited our advocacy as inspiration. We hope this means we will see more bans in 2022. In 2020, Google banned stalkerware ads in its Play store. The result has been the occasional purge of stalkerware ads, including one in October 2021. While many ads were purged, TechCrunch journalist Zack Whittaker found that “several stalkerware apps used a variety of techniques to successfully evade Google’s ban on advertising apps for partner surveillance and were able to get Google ads approved.” The whack-a-mole continues. With your support, we can move beyond whack-a-mole and continue to fight stalkerware through policy, education, and detection in 2022. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.
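The "detection" prong mentioned above generally works the way antivirus products do: identifiers of installed apps are compared against a curated signature list. Here is a minimal sketch of that approach, with every package ID invented for illustration (none is a real stalkerware package name):

```python
# Hypothetical signature list; real antivirus vendors curate and constantly
# update thousands of entries like these. All IDs below are made up.
STALKERWARE_SIGNATURES = {
    "com.example.stalker1",
    "com.example.stalker2",
}

def flag_stalkerware(installed_packages):
    """Return the installed package IDs that match known signatures."""
    return sorted(set(installed_packages) & STALKERWARE_SIGNATURES)

installed = ["com.android.chrome", "com.example.stalker2", "org.mozilla.firefox"]
print(flag_stalkerware(installed))  # ['com.example.stalker2']
```

Signature matching is cheap but reactive, which is one reason the whack-a-mole continues: a renamed or repackaged variant slips past until the list is updated.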

  • The Atlas of Surveillance Turns the Dragnet on Police Tech: 2021 Year in Review
    by Dave Maass on December 25, 2021 at 9:09 am

    This past year, EFF’s Atlas of Surveillance project mobilized hundreds of student journalists and volunteer researchers to turn the tables on police spying by building the largest-ever public-facing database of police surveillance technology. As EFF has long documented, local law enforcement agencies around the United States are amassing arsenals of surveillance technology to gather as much data as possible on the public. From automated license plate readers (ALPRs) that track our vehicles to real-time crime centers (RTCCs), where police analysts use algorithms to mine live camera streams and social media feeds, this technology has been spreading into communities, often under the radar. EFF and the Reynolds School of Journalism at the University of Nevada, Reno launched the Atlas of Surveillance in July 2020 as a literal effort to watch the watchers. Combining a variety of newsgathering tools—crowdsourcing, data journalism, and public records requests—the Atlas of Surveillance is an interactive database and map that reveals what surveillance tech is used by more than 4,500 law enforcement agencies nationwide. The Atlas of Surveillance has two main aims. The first is to create a searchable inventory of police tech that can be used by journalists, researchers, and members of the public to better understand what spy tools police have deployed in their communities and how individual technologies, such as face recognition and body-worn cameras, are spreading across the country. The second is to involve as many people as possible in the information-gathering process. To achieve this, we developed a crowdsourcing tool called Report Back that allows us to assign small research tasks (e.g., “Spend up to 20 minutes searching the internet for information about drones in Phoenix, Arizona”). 
By working with journalism classes and volunteers, we are not only creating a greater resource, but we are also growing the body of people who know how to investigate surveillance technology. As we enter the holiday season and the end of our sixth semester with UNR, we’d like to share our achievements and milestones in this largest-of-its-kind effort to document police surveillance: As of December 2021, the Atlas of Surveillance contains more than 8,100 data points — each representing a technology acquired or used by a police agency. That’s a roughly 50% increase in data collection since the launch. We also increased the number of agencies covered from 3,000 in mid-2020 to about 4,500 agencies today, including law enforcement from every U.S. state and territory. However, with approximately 18,000 law enforcement agencies in the U.S., we still have a long way to go. We estimate around 1,000 students and volunteers have completed at least one assignment in Report Back, resulting in more than 2,000 completed research mini-tasks. In addition, UNR data journalism students and EFF interns acquired government surveillance datasets on body-worn cameras and other technologies, and converted them so that thousands of pieces of data could be added to the Atlas. In 2021 we also taught classes using the Atlas at Arizona State University, Temple University, and Harvard University. The Atlas of Surveillance has also been recognized for its innovation. The Society of Professional Journalists Northern California Chapter presented the team with a prestigious James Madison Freedom of Information Award for its achievements in electronic access to information. The Atlas of Surveillance is also featured in Indiana University’s Places & Spaces: Mapping Science exhibition, which is currently on display at the University of Notre Dame through March 2022. EFF’s new investigative researcher Beryl Lipton also presented a lightning talk on the Atlas at the Cato Surveillance Conference. 
In March 2021, EFF and UNR student journalist Hailey Rodis published “Scholars Under Surveillance: How Campus Police Use High Tech to Spy on Students,” a deep dive into what the Atlas of Surveillance tells us about university police departments. As a result of this report, we were invited to present our work to journalists covering higher ed at an online training seminar organized by the Education Writers Association. In fact, the Atlas was regularly used as a training tool for journalists and other watchdogs covering surveillance technology. At the Investigative Reporters & Editors’ NICAR conference, we led a panel discussing reporting techniques with several journalists whose work is featured in the Atlas, including Neil Bedi, whose reporting on the Pasco County Sheriff’s predictive policing program later won a Pulitzer Prize for the Tampa Bay Times. The Global Investigative Journalism Network featured the Atlas in its “Tips to Uncover the Spy Tech Your Government Buys.” We also trained members of the National Association for Civilian Oversight of Law Enforcement, who serve on police review boards, using the Atlas in a lesson about surveillance technologies and the types of civil rights abuses they can facilitate. That in-depth training is now available to all on YouTube. Freedom of the Press Foundation incorporated the Atlas of Surveillance into its new curriculum for teaching digital security at journalism schools. Our partnership with UNR was highlighted as one of only two digital security elective courses offered at journalism schools in the U.S. Reporters and researchers frequently use the Atlas when covering surveillance tech. For example, you’ll find the Atlas referenced in national reporting from organizations such as The Guardian, Axios, and OneZero, and also in stories reaching communities through local publications such as Bethesda Magazine, Mendocino Voice, Phoenix New Times, the Merced Sun-Star, News 13 Orlando, The Lens, and the Pioneer Press. 
The Atlas has also fueled academic and advocacy research, such as in papers published by the Berkeley Political Review, the Belfer Center at Harvard University, and the Immigrant Defense Project. We can’t close out the year without giving credit to the many people outside of EFF who’ve collaborated with us on the project. Many thanks to the faculty at the UNR Reynolds School, especially Associate Dean Gi Yun and professors Patrick File, Ran Duan and Paro Pain, and 2021 student researchers Jayme Sileo, Dylan Kubeny, and Taylor Johnson. We are also very grateful to Paul Tepper, a volunteer who is single-handedly responsible for hundreds of datapoints in the Atlas. We are also proud to collaborate with Data 4 Black Lives on its #NoMoreDataWeapons campaign to increase awareness of surveillance tech in regions with large Black populations and a history of over-policing. In the coming year, we will continue to grow not only the Atlas but the body of contributors to the project. To learn more about opportunities to collaborate, don’t hesitate to reach out. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • The Future is in Interoperability Not Big Tech: 2021 in Review
    by Cory Doctorow on December 24, 2021 at 4:26 pm

    2021 was not a good year for Big Tech: a flaming cocktail of moderation failings, privacy breaches, leaked nefarious plans, illegal collusion, and tone-deaf, arrogant pronouncements stoked public anger and fired up the political will to do something about the unaccountable power and reckless self-interest of the tech giants. We’ve been here before. EFF’s been fighting tech abuses for 30 years, and we’re used to real tech problems giving rise to nonsensical legal “solutions” that don’t address the problem – or that make it worse. There’s been some of that (okay, there’s been a lot of that). But this year, something new happened: lawmakers, technologists, public interest groups, and regulators around the world converged on an idea we’re very fond of around here: interoperability. There’s a burgeoning, global understanding that the internet doesn’t have to be five giant websites, each filled with text from the other four. Sure, tech platforms have “network effects” on their side – meaning that the more they grow, the more useful they are. Every iPhone app is a reason to buy an iPhone; every person who buys an iPhone is a reason to create a new iPhone app. Likewise, every Facebook user is a reason to join Facebook (in order to socialize with them), and every time someone joins Facebook, they become a reason for more people to join. But tech has had network effects on its side since the earliest days, and yet the web was once a gloriously weird and dynamic place, where today’s giant would become tomorrow’s punchline – when was the last time you asked Jeeves anything, and did you post the results to your Friendster page? Network effects aren’t anything new in tech. What is new are the legal strictures that prevent interoperability: new ways of applying cybersecurity law, copyright, patents, and other laws and regulations that make it illegal (or legally terrifying) to make new products that plug into existing ones. 
That’s why you can’t leave Facebook and still talk to your Facebook friends. It’s why you can’t switch mobile platforms and take your apps with you. It’s why you can’t switch audiobook providers without losing your audiobooks, and why your local merchants don’t just give you a browser plugin that replaces Amazon’s “buy” buttons with information about which store near you has the item you’re looking for on its shelves. These switching costs are wholly artificial. By their very nature, computers and networks are flexible enough to allow new services to piggyback on existing ones. That’s the secret history of all the tech we love today. Interoperability – whether through legally mandated standards or guerrilla reverse-engineering – is how we can deliver technological self-determination to internet users today. It’s how we can give users the power to leave the walled gardens where they are tormented by the indifference, incompetence, and malice of tech platforms, and relocate to smaller, more responsive alternatives that are operated by co-ops, nonprofits, startups, or hobbyists. Which is why this year’s progress on interoperability has been so heartening. It represents a break from the dismal policy syllogism of “Something must be done. There, I did something.” It represents a chance to free the hostages of Big Tech’s walled gardens. Here’s the interop news that excited us this year:

  • The US Congress took up the ACCESS Act, a law that would require the largest platforms to open up APIs to their rivals.
  • The EU launched the Digital Markets Act (DMA), a sweeping pro-competition proposal. The initial draft had a lot of stuff we loved on interop, which was removed from subsequent drafts; then, in a victory for common sense and good policy, the European Parliament put all the interop stuff back in, and more besides!

That’s not all, of course! 
There’s also pro-interop action that’s more of a mixed bag: for example, China’s new “cyberspace regulations” (which ban Chinese tech giants from blocking interoperability) and the policy recommendations from the UK Competition and Markets Authority’s report on ad-tech, which leans heavily on interop to encourage competition (but is intended, in part, to improve the market for commercial surveillance of internet users). Beyond state action, there are independent interop efforts from big companies and individual tinkerers alike. On the corporate side, Twitter continues to make progress on its “Bluesky” project, billed as “an app store for social media algorithms.” On the tinkerer side, we’re delighted to see the guardians of the Public Interest Internet continue to fight for the user by creating DIY glue that sticks together all kinds of messenger apps, like Pidgin and Matterbridge. Interoperability is a technical solution to a technical problem, but it’s not just a nerdy answer to a social conundrum. By changing the law to make it easier for users to walk away from Big Tech silos, we change what kind of technology can be built, what kinds of businesses can be operated, and what kind of lives digital users can make. 2021 was a landmark year for interoperability – and 2022 is shaping up to be even better. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.
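The "DIY glue" that projects like Matterbridge provide reduces, at its core, to a relay loop: read a message from one service, tag its origin, and repost it to every other connected service. A toy sketch of that loop (the `EchoService` class is a stand-in invented here, not a real messenger API):

```python
class EchoService:
    """Stand-in for a messenger client; real bridges wrap e.g. IRC or Matrix."""
    def __init__(self, name):
        self.name = name
        self.outbox = []  # messages this service would display to its users

    def post(self, author, text):
        self.outbox.append(f"[{author}] {text}")

def bridge(message, source, services):
    """Relay one message from `source` to every other connected service."""
    for dest in services:
        if dest is not source:
            dest.post(f"{message['author']}@{source.name}", message["text"])

irc = EchoService("irc")
matrix = EchoService("matrix")
bridge({"author": "alice", "text": "hi all"}, irc, [irc, matrix])
print(matrix.outbox)  # ['[alice@irc] hi all']
```

Real bridges swap `EchoService` for per-protocol adapters, but the relay logic stays the same, which is exactly why artificial legal barriers, not technical ones, are what keep services siloed.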

  • Pushing Back on Police Surveillance: 2021 Year in Review
    by Mukund Rathi on December 24, 2021 at 4:04 pm

    A year after the police murder of George Floyd, Black-led protests against police violence continue, as does resistance to police departments across the country growing their surveillance toolbelts and unnecessarily amassing troves of personal data. EFF stands with protesters against police abuse, and stands up for the core rights to privacy, speech, and protest threatened by police surveillance. This year we have gone to court to hold police accountable, endorsed regulatory and defunding proposals, and published records shedding light on police surveillance.

Surveillance in San Francisco

The San Francisco Board of Supervisors kicked off the year by voting unanimously to require special business districts—such as the Union Square Business Improvement District (USBID)—to disclose any new surveillance plans to the Board. The Board acted in the wake of an EFF investigation and lawsuit that exposed the San Francisco Police Department’s (SFPD) spying on last year’s Black-led protests against police violence. The SFPD monitored the demonstrations by using the USBID’s camera network. EFF welcomes the Board’s small step toward transparency, but the city continues to defend the SFPD’s unlawful surveillance. In October 2020, EFF sued the SFPD on behalf of three activists who helped organize last year’s protests in the city. This fall, EFF asked the court to rule that the SFPD violated the city’s landmark surveillance technology ordinance and to prohibit the SFPD from using the USBID cameras without prior Board approval. While the SFPD initially claimed it did not monitor the camera feed, an SFPD officer admitted during a deposition that she repeatedly looked at the camera feed during the eight days that the department had access.

Privacy on the Road

EFF is also in court to protect your privacy from Automated License Plate Readers (ALPRs), which police use to amass large databases of location and other sensitive information on millions of drivers. 
In October, we filed a lawsuit on behalf of immigrant rights activists to stop the Marin County Sheriff in California from sharing its ALPR data with over 400 out-of-state agencies and 18 federal agencies, including Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), sharing that violates two state laws. Earlier in the year, EFF released Data Driven 2: California Dragnet, a new public records collection and data set that shines a light on police ALPR use across California. In 2019 alone, just 82 agencies collected more than 1 billion license plate scans using ALPRs. Yet 99.9% of this surveillance data was not actively related to an investigation when it was collected. In Tiburon and Sausalito in Northern California, and Beverly Hills and Laguna Beach in Southern California, an average vehicle will be scanned by ALPRs every few miles it drives. EFF supports state legislation that imposes shorter retention periods on ALPR data, annually audits searches of the data, and strengthens other regulations.

The Police Look Down on You

In a major victory this summer, the Fourth Circuit Court of Appeals blocked Baltimore from using data from its aerial surveillance of people’s movements throughout the city. For a six-month pilot period, the surveillance planes continuously captured an estimated 12 hours of coverage every day of 90 percent of the city. While Baltimore’s spending board ended the surveillance contract early, the city retained some of the location data and asserted a right to search it. Joined by several other organizations, EFF filed an amicus brief with the court arguing that Baltimore’s detailed tracking of the population of an entire city violated the Fourth Amendment and disparately impacted communities of color. 
The court agreed, and Chief Judge Gregory, in a powerful concurring opinion, emphasized that because Black communities “are over-surveilled, they tend to be over-policed, resulting in inflated arrest rates and increased exposure to incidents of police violence.” EFF also joined other fights against aerial surveillance. Early this year, St. Louis rejected a program similar to Baltimore’s after an education campaign and pressure from EFF and several local community organizations. We also endorsed Rep. Ayanna Pressley’s legislation to greatly curtail the amount of dangerous military equipment, including surveillance drones, that the Department of Defense can transfer to local and state law enforcement agencies. And in November, the ACLU of Northern California published records and footage of the California Highway Patrol’s extensive aerial video surveillance of last year’s Black-led protests against police violence.

The Surveillance Grab-bag

EFF pushed back this year on other police surveillance tools too. At the beginning of the year, Oakland’s City Council voted unanimously to strengthen its Surveillance and Community Safety Ordinance by prohibiting government use of “predictive policing” algorithms and a range of biometric surveillance, such as voice recognition. EFF endorsed a Maine bill that would defund the state’s “fusion center,” which coordinates surveillance and information sharing between federal law enforcement, intelligence agencies, and local and state police, and often threatens people’s free speech and right to protest. We also continued calling on Google to stand up for its users against geofence warrants and to be more transparent about the warrants it receives and how it handles them.
During racial justice protests in Kenosha, Wisconsin, federal police used at least 12 geofence warrants to force Google to hand over data on people who were in the vicinity of—but potentially as far as a football field away from—property damage incidents. We made a lot of progress this year to protect your privacy and free expression from police surveillance, but the fight continues. As the new year approaches, the coming weeks are an opportune time to contact your local representatives. Ask them to stand with you and your neighbors in the fight against government surveillance. This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021. Related Cases: Williams v. San Francisco

  • 2021 Year in Review
    by Cindy Cohn on December 23, 2021 at 3:28 pm

2021 ended up being a year in which we dug into our new realities of distributed work and the ever-changing COVID news. At the same time, news continued to come fast and furious, with the events of one week often obliterating memories of the week before. So it’s helpful for all of us to look back at the last year and remember just what we accomplished. Looking at what we did—at what you, our supporters, helped us do—we can be confident that whatever changes continue to roll in, we will continue our vital work. We’re thankful for our roughly 38,000 members, who not only support us financially but spring into action whenever needed. That support allowed us to build on what we did in 2020 and to meet the new challenges brought by this new era. Our biggest action this year was a powerful pushback against Apple when it announced that it was reneging on its promise to provide us with secure devices. In the summer, Apple announced it would be scanning some images on our devices in a poorly conceived strategy aimed at child safety. With 25,000 of your signatures, we delivered a single, simple message to Apple: don’t scan our phones. We sponsored a protest at Apple stores and an alternative event to make sure that Apple heard from those, especially children, who have first-hand experience with the real dangers of device insecurity. We even flew a plane over Apple’s headquarters during its major product launch to make sure its employees and executives got our message. Our message was received. Apple first delayed its plans, then agreed not to scan iMessage messages or send notifications to parents. This was a first victory, but a big one, and it was only made possible by your contributions. Of course, we’ll keep pushing until all your devices are secure and answer only to you. We also stood up with parents and students against the increased surveillance of students. This year, Dartmouth accused medical students of cheating based on a flawed understanding of how technology works.
Our experts dug into the data and showed that what looked like cheating was just applications working as they should. Dartmouth first doubled down, even instituting a policy preventing students from speaking out on social media, but news coverage fueled by EFF’s technical and activism work finally convinced the school to admit its error and drop its allegations. We also brought litigation to protect a student who faced copyright claims after demonstrating the extent of surveillance conducted by student surveillance company Proctorio. We also continued our work to hold police responsible for illegally spying on protestors. In the fall, EFF and the ACLU, representing three activists of color, asked the court to declare without a trial that the San Francisco Police Department had violated the law. Documents and testimony gathered by EFF in 2021 proved that, as our 2020 investigation had theorized, the police had accessed a local business district’s security cameras during the 2020 Black Lives Matter protests. San Francisco law prohibits the use of any surveillance tech by city departments like the police without approval from the Board of Supervisors. By accessing these cameras without that permission, the police violated the law. EFF also partnered with the ACLU to challenge Marin County for sharing Automated License Plate Reader information with ICE and CBP. 2021 wasn’t entirely a year about surveillance. In March, EFF urged the Supreme Court to rule that when students post on social media or speak out online while off campus, they are protected from punishment by school officials under the First Amendment—an important free speech principle amid the increasing surveillance of students’ online activities outside the classroom.
In September, we were victorious: the Court held that public high school officials violated a student’s First Amendment rights when they suspended her from cheerleading for posting a vulgar Snapchat selfie over the weekend and off school grounds (yes, this is the infamous “fuck cheer” case). We also stood up against efforts in Texas and Florida to require platforms to host speech they do not want to host. We also continued our focus on breaking the internet out of the grip of the five tech giants. After we analyzed and gave feedback to Congress on a package of antitrust reform bills, those bills moved forward following a marathon hearing in the House Judiciary Committee. And we didn’t just do work in the U.S.: we also worked tirelessly to reform the EU’s Digital Markets Act so it would create actual competition in the online marketplace. Also in the spirit of standing up to the tech giants, after much international consultation and feedback, we updated the Santa Clara Principles on Transparency and Accountability in Content Moderation to better match the global landscape and current issues with the platforms that host so much of our speech. Finally, on the state and federal level, we pushed for, and successfully obtained, much more governmental support for universal, affordable, high-speed internet access. Even with all of those things listed, we’re leaving much out. Season 2 of our podcast premiered. We published papers on interoperability and on the future of high-speed internet in the United States. That’s in addition to countless briefs filed, testimony given to legislators, and activism campaigns launched. Please consider joining EFF. None of this is possible without our supporters. EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2021 every day between now and New Year’s Day.
Donate to EFF: Support Digital Freedom

2021 in Review Articles:
Pushing Back on Police Surveillance
Electronic Frontier Alliance Defending Local Communities
The Future is in Interoperability, Not Big Tech
Stalkerware
The Atlas of Surveillance Turns the Dragnet on Police Tech
Vaccine Passports
2021 Was the Year Lawmakers Tried to Regulate Online Speech
We Encrypted the Web
The Battle for Communications Privacy in Latin America
In 2021, We Told Apple: Don’t Scan Our Phones
Where Net Neutrality Is Today and What Comes Next
2021 Year In Review: Sex Online
Students Are Learning To Resist Surveillance
Every State Has a Chance to Deliver a “Fiber for All” Broadband Future: 2021 in Review
Shining a Light on Black Box Technology Used to Send People to Jail: 2021 Year in Review
In 2021, the Police Took a Page Out of the NSA’s Playbook
2021 Year in Review: EFF Graphics
Police Use of Artificial Intelligence
Cross-Border Access to User Data by Law Enforcement
Our Fight From Coast to Coast
A More Balanced and Open Patent System

  • Electronic Frontier Alliance Defending Local Communities: 2021 in Review
    by Rory Mir on December 23, 2021 at 1:56 pm

In another year of masking up, local communities have found enough footing to push back on surveillance tech and fight for our digital rights. Members of the Electronic Frontier Alliance have continued to innovate by organizing workshops and trainings for neighbors, overwhelmingly online, and have made important headway on issues like more equitable broadband access, surveillance oversight, and even banning government use of face recognition. The Electronic Frontier Alliance (EFA) is an information-sharing network of local groups that span a range of organizational models. Some are fully volunteer-run, some are affiliated with a broader institution (such as student groups), and others are independent non-profit organizations. What all these groups share is an investment in local organizing, a not-for-profit model, and a passion for five guiding principles:

Free Expression: People should be able to speak their minds to whomever will listen.
Security: Technology should be trustworthy and answer to its users.
Privacy: Technology should allow private and anonymous speech, and allow users to set their own parameters about what to share with whom.
Creativity: Technology should promote progress by allowing people to build on the ideas, creations, and inventions of others.
Access to Knowledge: Curiosity should be rewarded, not stifled.

Since first forming in 2016, the alliance has grown to 73 member groups across 26 states. It’s not possible to review everything these grassroots groups have accomplished over the last year, but this post highlights a number of exemplary victories. We hope they will inspire others to take action in the new year.

Advocacy

Pushing Back on Police Surveillance

EFA members have been vital in the fight against government use of face recognition technology. This type of biometric surveillance comes in many forms, and is a special menace to civil liberties.
Since 2019, when San Francisco became the first city to ban government use of this technology, more than a dozen municipalities nationwide have followed suit, including Portland and Boston last year. In 2021, these victories continued with the passage of bans in Minneapolis and King County, Washington, won through close collaboration between EFA members, local ACLU chapters, other local community groups, and the support of EFF. Alliance member Restore the Fourth Minnesota (RT4MN), and the rest of the Twin Cities-based Safety Not Surveillance (SNS) coalition, successfully advocated to pass their ban on government use of face recognition technology in Minneapolis. During the year-long fight for the ban, the coalition built widespread community support, took the argument to the local press, and won with a unanimous vote from the city council. The SNS coalition didn’t rest on its laurels after this victory, but instead went on to mobilize against increased state funding to the local fusion center, and to continue to advocate for a Community Control Over Police Surveillance (CCOPS) ordinance. These campaigns and other impressive work coming out of Minnesota are covered in more detail in EFF’s recent interview with an RT4MN organizer. In California, Oakland Privacy won one of the first victories of the year, when Oakland’s City Council voted in January to strengthen the city’s anti-surveillance ordinance. The Citizens Privacy Coalition of Santa Clara County has been organizing for CCOPS policies across the San Francisco Bay Area, fighting for democratic control over the acquisition and use of surveillance tech by local government agencies. In Missouri, Privacy Watch St. Louis has taken a leadership role in pushing for a CCOPS bill that was introduced in the city council earlier this year.
The group also worked with the ACLU of Missouri to educate lawmakers and their constituents about the dangers and unconstitutionality of another bill, Board Bill 200, which would have implemented aerial surveillance (or “spy planes”) similar to a Baltimore program. Early this year, the city’s Rules Committee unanimously voted against the bill. EFA members also targeted another dangerous form of police surveillance: acoustic gunshot detection, the most popular brand of which is ShotSpotter. One of the most prominent voices is Chicago-based Lucy Parsons Labs, which has brought the harms to light through its research and its use of Freedom of Information Act (FOIA) requests. Lucy Parsons Labs discusses this and more of its incredible work in its own year in review post. The group went on to coordinate with Oakland Privacy and other EFA members to organize protests against another ShotSpotter program. In New York City, alliance member the Surveillance Technology Oversight Project (STOP) uncovered a secret NYPD slush fund used to purchase invasive surveillance technology with no public oversight. In collaboration with Legal Aid NYC, STOP blew the whistle on $159 million of unchecked surveillance spending, ranging from face recognition to x-ray vans. STOP, the Brennan Center, EFF, and other leading civil society advocates also held the NYPD accountable for its inadequate compliance with the POST Act, a 2020 law that requires greater NYPD transparency about its use of surveillance technologies.

Defending User Rights

In addition to protecting privacy from state surveillance, EFA members also turned out to ensure users’ rights were protected from unfair and shady business practices. In July, the Biden Administration instructed the Federal Trade Commission (FTC) to advance Right to Repair policies, leading to a rare public hearing and vote.
Called on by fellow repair advocates such as iFixit, USPIRG, and other members of the Repair Association, EFA members rapidly mobilized to submit public comments. Following the outpouring of support, the FTC unanimously voted to enforce Right to Repair laws, defending consumers’ rights to repair their own devices without the threat of being sued by the manufacturer or patent holder. The fight for Right to Repair is far from over for local advocates, with state legislation still being considered nationwide. Back in Oakland, organizers successfully ensured the passage of a service-provider choice ordinance by unanimous vote. The new law makes sure that Oakland renters are not constrained to their landlord’s internet service provider (ISP), but can instead freely choose their own provider. This blocks the kickback schemes many landlords enjoy, in which they share revenue with Big ISPs or receive other benefits in exchange for denying competitors physical access to rented apartments. As a result of such schemes, residents are stuck with whatever quality and cost the incumbent ISP cares to offer. This win in Oakland replicates an earlier success in San Francisco, giving tenants a choice and smaller local ISPs an opportunity to compete. In the fight for internet access, EFA members like the Pacific Northwest Rural Broadband Alliance have also been working to set up smaller local options to extend broadband access in Montana without relying on Big ISPs that often ignore rural areas. Electronic Frontier Alliance members were also active in advocacy campaigns to press corporations to change policies that restrict consumer access and privacy. Several groups signed onto a letter calling on PayPal to offer transparency and due process when deciding which accounts to restrict or close. And earlier this year, when Apple revealed plans to build an encryption backdoor into its data storage and messaging systems, many EFA groups leapt into action.
They helped collect over 25,000 signatures in opposition. In Portland, Boston, Chicago, San Francisco, and New York, alliance members joined EFF and Fight for the Future in a nationwide series of rallies demanding Apple cancel plans that could be disastrous for user privacy. This wave of pressure led Apple to retract parts of its phone-scanning plans and pause its planned scanning of users’ iCloud Photos libraries.

Building community

While we celebrate each time alliance members make headlines, we also recognize the extensive work they pour into strengthening their coalitions and building strong community defense. This is, of course, particularly difficult when we cannot safely come together in person, and organizers deal with extra hurdles to rebuild their work in an accessible online format. Fortunately, in 2021 many allies hit their stride and found opportunity in adversity. With so many local events going virtual, local groups leaned on their relationships in the EFA despite being in different parts of the country. These are just a few of the unique event collaborations we saw this year:

Aspiration Tech again hosted its annual collaborative gathering of grassroots activists and software developers in its unique co-created convening format.
Canal Alliance hosted a panel of partners, including EFF, Digital Marin, the Institute for Local Self-Reliance, and Media Alliance, to discuss how communities can take action on the digital divide issues exacerbated by the pandemic.
CyPurr Collective maintained its monthly Brooklyn Public Library events, connecting the community to digital security experts such as EFF’s Eva Galperin, Albert Fox Cahn from EFA member S.T.O.P., and 2021 Pioneer Award winner Matt Mitchell.
EFF-Austin held many online workshops, including one featuring Vahid Razavi from Ethics in Tech for an event discussing ethical issues with companies in Silicon Valley.
Ethics in Tech hosted several all-day events featuring other EFA members, including a recent event with Kevin Welch from EFF-Austin.
Portland’s Techno-Activism Third Mondays hosted a number of great workshops, including a three-part panel on online privacy, why people need it, and how to fight for it.
RT4MN hosted a number of workshops throughout the year, including a recent panel on drone and aerial surveillance.
S.T.O.P. held great online panels in collaboration with NYC partners, tackling topics that included: face recognition and predictive policing; how AI training causes law enforcement biases; how artists can organize against police surveillance; and punitive workplace surveillance faced by warehouse workers.

In addition to events hosted by EFA members, the EFF organizing team held space for EFA groups to collaborate remotely, including our first EFA Virtual Convening in August. In lieu of regular in-person meet-ups, which are essential to creating opportunities for mutual support, EFF hosted a virtual “World Café” style break-out session where EFA members and EFF staff could learn from each other’s work and brainstorm future collaborative projects.

New members

This past year we also had the opportunity to expand the alliance and establish a new presence in Montana, North Carolina, and Tennessee, by welcoming six impressive new members:

Calyx Institute, New York, NY: A technology non-profit with the mission of developing, testing, and distributing free privacy software, as well as working to bridge the digital divide.
Canal Alliance, San Rafael, CA: Advocates for digital equity for immigrant communities.
DEFCON Group 864, Greenville, NC: The newest DEFCON group in the alliance, with a mission to provide learning opportunities and resources for everyone interested in information security.
Devanooga, Chattanooga, TN: A non-profit community group for current or aspiring developers and designers.
Pacific Northwest Rural Broadband Alliance, Missoula, MT: A non-profit foundation dedicated to building fast, affordable, community-powered broadband networks.
PrivaZy Collective, Wellesley, MA: A community-centered student group addressing the online privacy issues faced by Gen Zers.

Looking forward

The fight for our digital rights continues, and maintaining a robust and vigilant network of organizers is essential to that struggle. EFF will continue to work with groups dedicated to promoting digital rights in their communities, and offer support whenever possible. To learn more about how the EFA works, check out our FAQ page, and consider joining the fight by having your group apply to join us. Learn more about some of our EFA members in these profiles:

PDX Privacy interview
Restore the Fourth MN interview
Future ADA interview
Personal Telco Project interview
CyPurr Collective interview
Cryptoparty Ann Arbor interview

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.

  • Support a Better Web for Everyone & Unlock Grants for EFF
    by Aaron Jue on December 22, 2021 at 5:25 pm

During the holiday season you’ll see lots of appeals from worthy causes. Wherever your heart is, there’s little doubt that technology amplifies voices and helps build community around the issues that matter most to you. That’s why the Electronic Frontier Foundation fights for your right to express yourself, connect with friends, and explore ideas online. And it’s also why EFF needs your help during our Year-End Challenge. Digital privacy, security, and free speech lift up all the efforts to make the dark corners of the world a little brighter. Will you help EFF work toward a better digital future for everyone?

Donate by December 31 to Unlock Bonus Grants!

Count yourself in during the Year-End Challenge and you’ll help EFF receive up to $46,500 in grants gathered by EFF’s board of directors. As the number of online rights supporters grows (see the counter!), EFF can unlock a series of seven challenge grants—from $500 to $20,000—that grow larger after we reach each milestone. No matter the size of the donation, every supporter counts.

Your Support Means a Better Web for Everybody

EFF owes every success to members around the world backing tech users’ rights. This year EFF fended off Apple’s plan for dangerous device-scanning; celebrated one of the largest state investments in public fiber broadband; helped convince the U.S. Supreme Court to reject an overbroad interpretation of the Computer Fraud and Abuse Act; took legal action to stop police surveillance of protesters; and kept sanity in conversations about online content moderation and censorship. And that’s just a few highlights! It doesn’t matter if you’re an engineer, a caretaker, an artist, or a political activist—you can power the work necessary for a brighter future together (and unlock additional grants before the year ends!). Please consider making a contribution today.

EFF is a member-supported U.S. 501(c)(3) organization with a top rating from the nonprofit watchdog Charity Navigator. Donations are tax-deductible as allowed by law. Make membership even easier with an automatic monthly or annual contribution!

  • Podcast Episode: The Life of the (Crypto) Party
    by Jason Kelley on December 21, 2021 at 8:10 am

Episode 106 of EFF’s How to Fix the Internet

Surveillance is always problematic, but it isn’t neutral—it is more often deployed in communities of color than elsewhere. And surveillance technology isn’t objective, either—it often magnifies the biases of its users and creators, affecting already-marginalized individuals far more heavily than others. Matt Mitchell, founder of CryptoHarlem, has an exciting solution for helping undo the damage that pervasive surveillance has done to those who are most profoundly impacted by it. Join EFF’s Cindy Cohn and Danny O’Brien as they talk with Matt, who has worked as a data journalist, a software engineer, a security researcher, a trainer, and a hacker—and learn more about how education, transparency, and building trust can increase privacy and safety for everyone. And best of all, you get to go to a party while you’re doing it. Click below to listen to the episode now, or choose your podcast player.

You can’t fight back against surveillance unless you recognize it. CryptoHarlem, which Matt Mitchell founded, provides workshops on digital surveillance and a space for Black people in Harlem, who are over-policed and heavily surveilled, to learn about digital security, encryption, privacy, cryptology tools, and more. Matt talks with Cindy and Danny about how living under pervasive surveillance dehumanizes us, why you have to meet people where they are to mobilize them, and how education is the first step to protecting your privacy—and the privacy of a community. But overall, he shows us how fun and exciting it can be to help empower and organize your community. You can also find the MP3 of this episode on the Internet Archive.
In this episode you’ll learn about:

Cryptoparties organized by volunteers to educate people about what surveillance technology looks like, how it works, and who installed it
How working within your own community can be an extremely effective (and fun) way to push back against surveillance
How historically surveilled communities have borne the brunt of new, digital forms of surveillance
The ineffectiveness and bias of much new surveillance technology, and why it’s so hard to “surveil yourself to safety”
Why and how heavily surveilled communities are taking back their privacy, sometimes using new technology
The ways that Community Control of Police Surveillance (CCOPS) legislation can benefit communities by offering avenues to learn about and discuss surveillance technology before it’s installed
How security and digital privacy have improved, with new options, settings, and applications that offer more control over our online lives

Matt Mitchell is the founder of CryptoHarlem and a tech fellow for the BUILD program at the Ford Foundation. As a technology fellow at the Ford Foundation, Mitchell develops digital security training, technical assistance offerings, and safety and security measures for the foundation’s grantee partners. Mitchell has also worked as an independent digital security/countersurveillance trainer for media and humanitarian-focused private security firms. His personal work focuses on marginalized, aggressively monitored, over-policed populations in the United States. Previously, Mitchell worked as a data journalist at The New York Times and a developer at CNN, Time Inc., NewsOne/InteractiveOne/TVOne/RadioOne, AOL/Huffington Post, and Essence Magazine. Last year he was selected for the WIRED 25, a list of scientists, technologists, and artists working to make things better. In 2017 he was selected as a Vice Motherboard Human of the Year for his work protecting marginalized groups.
Resources:

Pioneer Awards Ceremony 2021 Recap: Privacy Defenders Unite (EFF)
Surveillance Technologies: Surveillance Cameras (EFF)
Chicago Inspector General: Using ShotSpotter Does Not Justify Crime Fighting Utility (EFF)
A Guide to Law Enforcement Spying Technology (EFF)
CCOPS and Community Action: Community Control of Police Spy Tech (EFF)
Asset Forfeiture and the Cycle of Electronic Surveillance Funding (EFF)
Seattle and Portland: Say No to Public-Private Surveillance Networks (EFF)

Transcript:

Matt Mitchell: “Privacy is not secrecy. Privacy is saying, ‘I got a door, I got a door so I could open it. I got a room that no one knows about so I could invite the people who I wanna share this with, and I could say, welcome, we’re friends now. I wanna show you something I don’t share with everybody.’ That’s a beautiful thing, but you only can do it when you’re given the agency to do it.”

Cindy Cohn: That’s Matt Mitchell. He’s working in his community to get people to understand more about their digital privacy and security, and more importantly, how they can take steps to protect it. On today’s episode of How to Fix the Internet, Matt will tell us how he marshaled his neighborhood of Harlem in New York City. And he’ll help us think about how we can all reach out to those in our own communities who might need a little help understanding what their digital footprint looks like and how to make our online lives more secure. I’m Cindy Cohn.

Danny O’Brien: And I’m Danny O’Brien. Welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation. Today: how to build a movement, one person, one checkbox, and one security setting at a time.

Danny O’Brien: Matt Mitchell is someone who does many things. He’s worked as a data journalist, a software engineer, a security researcher, a trainer, a hacker…

Cindy Cohn: And we were thrilled to give him an EFF Pioneer Award for his work in his community.
Matt founded CryptoHarlem, which hosts parties that teach people about protecting their digital privacy and educates them about modern digital surveillance. Matt, welcome to How to Fix the Internet.

Matt Mitchell: Hey, thanks for having me. It’s great to be here.

Cindy Cohn: When you accepted your award from EFF, you dedicated it to Jelani Henry. Can you tell us his story?

Matt Mitchell: Jelani grew up in Harlem and one day the police came to his door. They said, “We have reason to believe, by looking at social media, by looking at people’s contacts and their phones, that you were involved in a crime.” A crime that Jelani did not commit. And they took him from his home and he did not see home again for 14 months. He spent that time in one of the worst prisons in the world, Rikers Island in New York. And all of this is because of social media surveillance, something that happens every day in the inner city.

Cindy Cohn: I think people who don’t live in areas like Harlem sometimes have a hard time visualizing the pervasive surveillance that happens there. Can you give us some of what you see in your community?

Matt Mitchell: You know, uh, living in the inner city, anywhere in the United States, in a place like Harlem, you’ll be surveilled from the minute you wake up, you know? A lot of folks live in some kind of government-subsidized housing. There’s a lot of CCTV recording of these properties ’cause they’re technically government properties, and there’s different rules that apply to you, and what you can do and what you can’t do, because this is technically, like, state or partly state owned. Then you’ll walk out into your courtyard and you’ll see these large floodlights that are either solar powered or gasoline-generator powered. And it looks kind of like a guard tower in a prison movie, right? Where light is constantly shed and shined upon you. It goes through your window. People have, like, many layers of blackout shades.
You see people just staple-gun comforters to their window at this point ’cause it’s so bright. And that light doesn’t turn off until, um, the sun rises, right? So you also have this, like, constant gaze, this constant watching. And then, you know, if you were to go just for a walk outside the courtyard, around the corner, if you look up, you’ll see there’s cameras, surveillance cameras, in the lamps. Also, on that same corner there’ll be a box that says Property of NYPD, and on it will be many different things: from, uh, a camera that can be controlled and turned and tweaked, to a flat box which maybe looks almost like an access point for WiFi or something, but it’s actually a microphone. It’s part of the ShotSpotter system. But for this apparatus to work, the microphones have to be on the entire time, so they’re always listening. And then when they hear the waveform, or they match that pattern to what they believe is a database of firearms being fired, then it triggers. Or it’s supposed to work that way. And so that’s another element of the surveillance that’s around you.

Danny O’Brien: Right.

Matt Mitchell: Furthermore, you’ll have surveillance from, uh, the city level and on the corporate level as well. A lot of the folks who own a bodega, or what we call, like, our little smoke shop and candy shop and grocery stores, they’ll get caught up in this pressure from law enforcement, where it’s like, “Look, we want you to share, not just the footage ’cause someone was in here yesterday who matched the description of a criminal or there was a crime that was committed, but we want you to kinda, like, join our network of cameras so we can pull video when we need to,” which includes a lot of folks just walking around and just doing their thing.

Cindy Cohn: It really does take eyes, you know, kind of careful eyes, to see all of this surveillance, because as you know, the public isn’t notified when this stuff is rolled out.
Matt Mitchell: I always say, like, we’re not really against all of this surveillance tech. Bring it. Bring all these layers of surveillance tech to every neighborhood, to the suburbs, you know, to downtown, to the tourist area. Bring it, because on that day, everyone will be up in arms. On that day, everyone will be like, “What is going on? This is not the society I wanna live in.” But unfortunately, the inner city is like a Petri dish, it’s like a beta test for a lot of this stuff. And you know, there’s a lot of commercial interest there. A company might say, “Well, we’ll basically give you the tech so we can say year over year we’ve proven that this thing does something, or it’s being used by…” Like, if you say it’s being used by Chicago, New York or LAPD, everyone wants that thing, regardless of whether it works. And what we’ve seen is that ShotSpotter goes off and police are dispatched to an area expecting a firearm, and there’s brutality because of that. Danny O’Brien: When it happened in sort of upper-class neighborhoods, there would be a big discussion and eventually it would be decided not to roll it out. And then you watch the whole thing roll out elsewhere. Matt Mitchell: Where communities have been subjugated to such huge amounts of surveillance generation after generation, people come to the CryptoHarlem event and they’re like, “Yo, we took some pictures. What is this thing? What is that thing?” That’s often a thing that happens in our meetings with folks. And we go through it, we’re like, “Well, this is this piece of technology. This is who put it there, this is how it works, this is how it fails.” We have this thing called CompStat in New York, which is police data where you can look back at, like, every crime that’s been properly filed as of yesterday, you know what I’m saying? So it doesn’t matter that crime in this country is going down, the surveillance of the country is always going up. 
It’s not like there’ll be a day where you’ll see people on that ladder, taking down an automatic license plate reader or a ShotSpotter. So it’s all about, like, “Can we gain ground? And then we’ll just upgrade the tech.” It just looks ridiculous after a while, but no one ever takes it away. Danny O’Brien: So you founded CryptoHarlem, what, around 2012? What was your first meeting like? Matt Mitchell: I would love to say it was like, “Nobody showed up, but we did it anyway.” But it was packed, the first meeting was packed. I mean, I was shocked. I could barely get in the room. It was people on the sidewalk looking into the building, into the community center, you know what I’m saying? ’Cause, you know, in some communities, you have to deal with the whole nothing-to-hide argument. But in these neighborhoods it’s like, “Oh, I know. I already know that I’m being criminalized. My identity’s criminalized, my existence is criminalized. And my grandma’s existence was criminalized, and her great-great grandma’s existence was criminalized.” So it’s a history of surveillance, right? And to say I have a remedy for this thing that’s been plaguing you, people will show up like you’re giving out medicine, right? So, you know, that’s how that works when you do a crypto party or anything like this in these neighborhoods, a marginalized community, it’s actually pretty easy.  Danny O’Brien: Do you have a remedy though? Is there something that people can practically do in a situation like that? Matt Mitchell: Community organizers in the hood, they’ll sit down with folks, they’ll be like, “Okay, listen, you’re a laborer, you’re a worker, you’re undocumented, whatever your situation is. Let’s make a list of what’s plaguing you, things that you would change. Blue sky, dream a better world, right? 
For you and your kids.” And you take that list, all these things that are bothering them, and then you find what you know secretly is actually an easy win, a quick fix, you know, like, “And that pothole down the street,” right? You go to the nice neighborhoods, there’s no potholes, you know?  So, you just teach folks, like, “Look, this is how you show up. This is how you complain. This is how you go to this meeting. This is how the city ordinance is set up,” and then you patch that pothole. And with that win, they will work for years on the hardest thing on that list. You know, you have folks who have been subjugated to a huge amount of digital surveillance; we already know the panopticon. We already know what that feels like. When they have a win and they’re armed with that, they will fight forever.  Danny O’Brien:  Right. Cindy Cohn: Privacy really isn’t about secrecy. It’s about control, right? And whether you have control or someone else has control, I think that’s exactly right. And what I’m hearing about your community is, you don’t have to convince people that they’re being surveilled or that this stuff can be used against them. In some ways it’s a difficult community to work in because there’s so much surveillance and it can feel overwhelming. But what I hear from you is, in some ways it’s an easier community to work with because you don’t have to convince people that there’s a problem. Matt Mitchell: You don’t have to convince them. And, uh, because there’s so many layers of surveillance, there’s an additive effect that’s so ridiculous [laughs]… It’s Terminator II world, right? And to just have any solution offered, that’s a community that’s ready, so ready for this. There’s a history of surveillance, so they don’t need to be convinced about things, but also, it’s gotten so out of hand that it’s really hard to justify, right? It’s really hard for anyone who’s got their eyes open to justify. 
If you only put the cameras out, you’re like, “Well, it’s not that many cameras. They’re only here.” And if you only put the microphones out, you’re like, “It’s not that many microphones. They’re over here.” But when you see everything, then you’re like, “Okay, we went too far.” Cindy Cohn: You actually go out on the street and talk to people and try to encourage them to come to these parties. So can you tell me some stories about how you’ve convinced people to come in?  Matt Mitchell: Yeah, definitely.  I mean, the first thing is, you know, you can have the hottest thing in your mind, but no one’s going to show up, so you gotta be about it, like, “Hey, my life depends on it.” So if you don’t get 10 people in this room, like this is your last day, you’re gonna run around, you’re gonna grab everybody, you’re gonna go through the bus, you’re gonna run through the subway. Like, that’s the passion you need to have when you’re starting a movement, and you also have to meet people where they are. So you have to contextualize solutions for folks. So I’m like, “Hey, you know, nice phone. What’s your favorite app?” And they’ll be like, “Oh, I like this dating app,” let’s say. So you’re like, “Oh, okay, cool. You know, did you know that people can just find you anywhere you are, like, down to the second, like, down to, like, a split step away from you with that app?” And they’re like, “What?” And then you’ll be like, “Yeah, Kaspersky did this study. Let me show you this article real quick.” And then they’re like, “Damn,” right? And, like, “How do I stop that?” I’m like, “Oh, just turn the setting off.” And they’re like, “Whoa.” And then you’re like, “Hey, we got more of these, come through next week, this is the spot.” And people will show up, because that’s how easy it can be if you meet people where they are, where their points of pain are, right?  
Cindy Cohn: We talk a lot about your work helping people with their technologies, but you’ve also been helping a bit with policies and laws as well.  Matt Mitchell: I was involved with this thing called CCOPS back in the day, which was Community Control Over Police Surveillance. With that project, it was like, “Look, we should at least talk about this stuff.” Oftentimes, law enforcement will say, “Well, we don’t wanna share the information ’cause it gives the upper hand to criminals.” Well, you know, when the French were like, “Why are y’all measuring heads? You should be using this thing called fingerprints,” it was a new technology back in the day. Well, we all know that there’s a thing called a fingerprint and you leave it; it doesn’t stop you from getting caught with your fingerprints, right? It’s just, this is the science, we understand it. With a lot of this new technology, it’s not well researched. The efficacy is questionable, and it’s secret, right? It’s secret. So we’re like, “Look, just tell us about it. Let’s look at the civil liberties, privacy and other problems that might come from it and let’s figure out what we can do to mitigate that. You’re gonna use it anyway, you might as well just at least give us that, ’cause it’s our money. It’s our taxpayer money, or it’s civil asset forfeiture, where you’re literally stopping people and taking their money to buy stuff to watch them with, which is totally messed up.” Now, in New York, we ended up having something that’s called the POST Act, the Public Oversight of Surveillance Technology Act. But the NYPD has been so reticent to lay out exactly what they’re using. But even what they have given us is mind-blowing [laughs]. Like, I was like, “Okay, we have x-ray vans that drive around and can see through buildings to watch people.” Like, “What? How many of these things? When did we get this?” And they’re, like, “We won’t say. We’re only gonna use it in case of terrorism,” right? 
They said, “We have drones, and the drones will only be used by the state police, right? You know, we use helicopters to look at traffic accidents; we’re only gonna use drones ’cause it’s cheaper and safer.” Then it’s like, “Okay, and maybe to help if someone has dementia, they’re older, they get lost.” And then it’s like, “Oh, maybe it’s for the children.” And then next thing you know, NYPD is just flying drones around. Danny O’Brien: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.  Danny O’Brien: We’ve talked a lot about how technology can really harm people’s privacy and security. Is there any way in this sort of situation that you can use technology to help people? Matt Mitchell: Yeah. I mean, I love technology. That’s why I got into this game, right? So as a hacker, I would say, like, if you look at the apps that I tell people to use, they’re all tech that was made by people, right? I think what we really need, though, is technology that’s developed and created by people who are directly impacted by it. So people always talk about Signal, which is dope, right? It’s like the gold standard of encryption, but, you know, homie came from a tech company and he’s not about this life, this stuff didn’t really happen to him, and look what he made. So imagine what the next person who actually is like, “Hey, I wanna solve this for what’s going on with my community, what’s going on with what I see, from the viewpoint of my mosque or my street corner or my teenage runaway center,” or whatever, you know what I’m saying? We need to be able to support those technologies. A lot of times, they’re open source technologies. A lot of times, there’s people who, you know, give grants to support that stuff. 
And I think that’s the win right there. Anybody, you can literally take something that’s designed for you to consume technology all day, like a mobile phone, which Black and brown folks over-index on, we got mad glittered-out phones, you know what I’m saying? If there’s no keyboard on it, you can’t write code on that, but you can actually now program on these things. So I’ve taught folks, like, “Yo, let’s take a look at this Python, let’s look at this algorithmic bias on your phone,” and we did it. It’s not the most fun thing in the world, but you could do it. And, I mean, when you’re time rich and the surveillance is on you, I think that’s the future: building tech solutions and hacking our way out of these tech problems, right?  Cindy Cohn: Wonderful. Oh, and I also really love the idea here that, like, we don’t need one killer product. We need a million killer products, because every community is different. There are some things everybody needs. I certainly would put encryption on that list, but how that gets deployed could come up in a million ways, depending on what the communities need. And that we shouldn’t be thinking of a one-size-fits-all killer app, we should be thinking about a whole universe of tools that people are empowered to build themselves.  Matt Mitchell: I feel like directly impacted communities, they’re going to solve their own problems and they don’t really need anyone to come save them. They just need the tools, they need the resources. Maybe you need to know how to exactly write that code or do that thing, you know what I’m saying? Um, so yeah, like, the Glover Center is this spot in Oakland, little homies just coming in and coding VR. Like, it’s amazing, right? So, like, you know, we need more of that. Danny O’Brien: There’s another element of this, which is, 
You know, building capacity within the community, educating people about how the technology works so that people can understand better how they’re being impacted and how to mitigate it too, right? Is that part of your mission?  Matt Mitchell: This stuff is seriously hidden, right? So, um, and there are allies in strange places. Like, we got people who came in, like, “Hey, I work at the precinct down the street. This is so messed up. I gotta tell you, like, something has to be done. Blah, blah, blah,” right? Or people will be like, “Yo, here’s a tweet. Like, y’all see this? Why is there a dog walking into this housing project behind these cops? Like, what’s this thing, blah, blah, blah.” It’s a robot. So we’ll research it. We’ll look into it. We’ll talk to people, like, this is what this is. This is how it works. But without action, why? You’re just, you know, you’re just scaring people, right? The first time you go to a dentist, they don’t show you videos of people with, like, advanced gum disease. You know, when you go to the doctor talking about your health, they don’t tell you about illnesses you could do nothing about that keep the doctor up at night. No, they don’t. So we don’t do that either. We keep it hopeful, we keep it actionable, and people want that message, right? But we also are realistic, and we map it out: there’s many ways to stop these things. We should be fighting on every single front, right? If you’re someone who’d throw a party in your living room and say, “Look, we should know about this,” do that. If you’re someone who would, you know, throw a ballot in a box and vote against something, do that. So, like, we’re just gonna lay out every real thing, and we’re gonna tell you how to protect yourself, your family, your neighborhood, and everything like that, right?  So I think that’s a message that most people don’t normally get. They get told this kind of one-way message, right? 
And it doesn’t always make sense for them, because of what they’re dealing with, that is the path. And then real change actually happens, and that’s the surprising thing. I had someone ask me, like, “Aren’t y’all just sticking your finger in a dam? And then you’re sticking your finger in another dam? And then you’re sticking your tongue, your nose, you’re trying to just stop this flood from happening?” And I’m like, “Yeah, welcome to being Black in America. Like, that’s how we fix all our stuff,” you know what I’m saying? Like, it has to be this way. In that space, you can raise a child. In that space, you can go to college. In that space of just a little freedom and safety, you can better yourself. And that’s what we’re about. We’re about winning through those margins. Cindy Cohn: That’s great. Hey, do you have a favorite story of somebody you worked with through CryptoHarlem, where you kind of saw the light go off and saw them take control over things? Matt Mitchell: Oh, yeah, yeah, that happens all the time actually. But, yeah, I got this one story, right? You know, there was this young homie who came in, and I was like, “Are you even lost?” Like, you know, maybe this person was in the wrong place. And so he shows up and he does this whole thing. He goes and checks it out, he comes back again. He talks to me afterwards. There’s a place next to the spot, the Harlem Business Alliance, where we have the community center, which is on the corner of Malcolm X and Martin Luther King, which is pretty poetic and cool, but I didn’t even decide that. And next door, this place called Harlem Shake, we usually just break bread and I’ll just buy some fries and just talk to people, ’cause you really… After the event, people wanna process and talk. And he had so much to say, and I was like, “Well, check out this video. Check out the…” We have a lot of checklists and one-pagers. 
And he went through everything, and, um, he was like, “Hey, man. I wanna do more.” We had this thing called the Glass Room, which was in San Francisco at one point, but then it was in New York. And he ended up working in the Glass Room as one of the Ingeniuses there. We had this whole setup, it looked like an electronics store but it’s really talking about surveillance. And he became one of the most, like, passionate anti-surveillance speakers on this issue.  And then he was like, “Hey, um, I think I might try to apply for this job.” And he applied for this job at a tech spot and he got this job, and it’s like, you know, this is a brother from Harlem, you know what I’m saying? Every time I see a CryptoHarlem video or picture, he’s in that background. And he writes me, he’s just like, “You changed my life. You changed my family’s life. You changed my friends’ lives.” And he’s just like, you know, that’s what’s up. We need the younger, cooler, next version of me, you know what I’m saying? Whoever they are out there, you know what I’m saying, whoever she or he is out there, you know, we need that, like, cyberpunk, Afro-futuristic baby Matt to come up, you know? So I wanna see that. And that’s just one story. I mean, I could tell you more.  Another thing that we do, which we think is quite important, is bring it back to the people. So if we get a donation, if we get an opportunity, we bring it back to the people. So I’m like, “Listen, how’s it gonna look if I’m telling you this stuff?” But the reality is, you know, maybe you need money, maybe you need a job opportunity, maybe you need some kind of thing. So we always make sure we do that, you know what I’m saying? Like, another thing we do is we teach folks cybersecurity stuff on a path to get certified, because there’s more jobs than people in that space. 
It’s relevant because there reaches a point where you need to, like, know how to read code to find the bias in the thing, you know what I’m saying? And they’re like, “I could do that?” And I’m like, “You could do that too, let me show you how to do that.” We need to understand high-level cybersecurity stuff; get your CompTIA certification, you know, Security+ certification, so you can understand how to push back and have that authority. People will look at you, they will underestimate you based on your identity, so you need to come up with this stuff.  Cindy Cohn: That’s such a key thing about building a movement, right? You’re not just treating these people as spectators, right? Like they’re just watching a show about digital security. You’re bringing them in, you’re supporting them. You’re building the kind of, um, network and community. And you know, we’ve talked a lot about Harlem and your home communities, but I also know that you think a lot about how to do this kind of organizing in communities that are not at all like the community that you came from, places that are rural or that are more distributed. Can you talk a little bit about that? Matt Mitchell: I’ve worked globally, you know, I worked in private security, I worked for non-profits. I worked for NGOs, you know what I’m saying? And, you know, whoever is on the margins might look different, but the treatment’s the same and the playbook’s the same.  So you might be, like, an ethnic Russian in Estonia, right? You might be in a place where I can’t even tell, I’m like, “Yo, what’s the difference between this Albanian brother here and this person there? ’Cause, like, you know… Or that German and this German, where you’re telling me it’s a class difference.” But then you’ll quickly see that, “Oh, that neighborhood has more surveillance, because I don’t trust you. I don’t trust you. I won’t ask. 
I will find out by watching, by surveilling,” right? What you look like and what the community looks like might be different, but how this rolls out is painfully, obviously the same. And therefore, the conditions we can use to organize and push back against it are equally the same. So, you know what? It works in our favor too, ’cause if they could use the same playbook to hurt, we could use the same playbook to succeed and win and help.  If you live in the United States in a rural area, there’s a lot of farmland, not a lot of opportunity. Even the cost of just getting on the internet is unfair, there’s not even equity there, right? Your options and choices are not fair, right? And when you speak to people, I think it’s about that common ground, and that’s how you build a nationwide movement, right? But I love working in the inner city, because I figure, like, “Look, if we could stop it there, we could keep all these pains from being experienced by much larger numbers of humans.” Danny O’Brien: Yeah. No community is monolithic, right? And I know that in situations and places where people suffer from a lot of crime, which is where the first argument about this kind of pervasive surveillance takes place, there’s a lot of people who live there who want some help and see surveillance as a solution, right? Rather than a problem. They’re more scared of the crime than they are of the potential of the technology. What’s your way of interacting and bringing them into this discussion?  Matt Mitchell: Well, I think it’s about, fear is used as a tactic. You know, obviously, your rational mind and your emotional mind are in a constant battle, right? And if you’re in a moment where you’re panicked and you’re stressed and you’re just reacting, fight or flight, like that animal trying to just survive, that’s the moment where you’re not making the best choices, you know? 
So if I just tell you to do some simple math, one plus one is two, two plus two is four, you start using your rational mind, you start getting outta that place. So, that’s what we try to do. We try to show up and just respect the fear, right? For example, look at the Asian communities, right? With Asian hate, people won’t even call a hate crime a hate crime. It’s so obvious, right? I have to respect that this is a reality, right? And you work with the people who are trusted in that community. If I wanna talk to the Korean community in Queens, a lot of folks are religious, so I need to go to the Korean church, right? A lot of folks are gamers, I need to go to the Korean internet cafe. I gotta find folks who, you know, um, represent the community, not just as in literally, like, “Yo, I’m from here, this is my neighborhood,” but also the way they stand up in the community, right? They’re the most popular, uh, cosplayer or whatever, you know what I’m saying? So you bring them in, and, um, that is your entry point to fighting against fear, because of that level of trust, that level of friendship, that level of, like, “I share your pain, I share the history of surveillance that happened to you. I share trying to make you safer.” Who doesn’t want a safer day? Who doesn’t want a safer street for their kid? Everybody does, right? Matt Mitchell: So then explaining that these things are actually dangerous, they’re not safe. That’s the kicker. And that’s what does it for folks, because all of these technologies come with a pro and a con, like every tech, right? Every tech has a little positive or negative, but surveillance tech, the negative is killer. The negative is so bad that once you break it down, nobody wants to go near it. Cindy Cohn: If we’ve gotten a handle on this stuff and we’ve, you know, shrunk surveillance to the limited place that it ought to have in our society, how does that look from where you sit?  
Matt Mitchell: I’m a dreamer, I’m a blue-sky person, so I’m a 100% abolitionist. Like, zero surveillance everywhere, right? Surveillance has always been used as a tool to hurt people, so… But I think it looks like little things, you know? I also believe in celebrating every victory; that comes from the community organizer in me, you know? So, you know, like, when I open up my phone and there’s a settings area and it says Privacy, like, that wasn’t there before. I’m an old-school nerd, there were no security settings before. That’s a win. And the more settings in there, each line they add is a win. And, you know, like, when people are like, “Oh, see this camera up here? Woo, boo,” whatever, right? That’s a win, right? Matt Mitchell: Every animal wants to be free, right? Every human being wants to be free. Every child wants to be loved. All these things, surveillance is the opposite of, you know what I’m saying? I think, like, what’s that future look like? No cameras, no sensors, none of this stuff on the corner, a little bit more trust, a little more acceptance that technology can’t protect us from the things that we created, the evil things that we’ve created.  Cindy Cohn: The stories just keep coming. You’re right. You know, every day it gets clearer and clearer. Um, and now we need, you know, whether it’s people in the ballot box or people on the policy side, or people who build the technologies to decide, “Hey, I’m not going to be part of this.” Matt Mitchell: In some parts of the world, we see protests where they’re just like, “Hey, that hammer and that camera need to meet, you know, they need to get lunch together.” So [laughs], you know, like, whatever it takes. And I wanna see it done the right way, and I wanna see it done through policy and through law, because that’s the best thing. Like, civil rights taught us that, right? When you have the laws, they might not be respected today, but those are what you stand on to change the future tomorrow. 
Danny O’Brien: Matt Mitchell, thanks very much. Matt Mitchell: Thank you.  Cindy Cohn: That was terrific. And just so inspiring. I really appreciate that. Matt really just took us on a walk through his neighborhood to show us all the surveillance. I mean, it was chilling, and of course it just drives home how marginalized communities are disproportionately targeted by surveillance. And I kind of appreciate the silver lining of that, which is that he doesn’t have to convince his community that surveillance is a bad thing. They already know it, and they know it from generations.  Danny O’Brien: Yeah, surveillance can often be so sort of out of sight, out of mind. It’s this strange contradiction where it becomes invisible just as it’s making you visible to whoever is out there spying on you. Those walks, just as a practical community organizing method, those walks and tours, I know Oakland Privacy does this, of just pointing out where the cameras are, are incredibly effective. The lesson that I think we all learned is that fear just blocks learning, right? It just paralyzes you. And what you need to do is respect the fear, or understand that people often come to you out of fear and concern, but you want to get rid of that fear and then add a historical lens that makes them understand why this is happening and gives them the possibility that things could change. Cindy Cohn: I think the thing that really shines through in this is that, you know, Matt’s not just teaching security, he’s building a movement. He’s empowering people. He doesn’t do security training as if people are just passive listeners or students. He’s really working on giving people the tools they need to protect themselves, but also turning them into leaders. Especially because this is about cybersecurity, 
I mean, people having real careers and the ability to feed their family from this work. I mean, that’s how you build a robust movement.  Danny O’Brien: And I think that builds on the idea that technology isn’t just an enemy here. It can also be part of the solution. It can also be one of the tools that you use to fix things. And I love the idea that the ultimate solutions are going to be built by the communities who are impacted by them.  Cindy Cohn: Yes. I really love his version of the future when we get it right. Not only that every little community is going to build the technology that they need, because they’re the ones impacted by it, they’re the ones who ought to build the technology that best serves them, right? But also, again on the movement side, you know, recognizing that small steps matter, that we have victories every single day, and that we celebrate those victories. And then of course the bigger long-term vision, you know, finally living in a world where we realize that we cannot surveil ourselves to safety. That’s a world I want to live in. Danny O’Brien: Me too. Well, that’s it for this week on How to Fix the Internet. Check out the notes on this episode for some of the resources Matt and CryptoHarlem built and recommend, so you can learn more about your digital security or pass it on to someone that you know.  Cindy Cohn: Thanks to our guest, Matt Mitchell, for sharing his optimism and vision for a future with less surveillance and more humanity. Danny O’Brien: If you like what you hear, follow us on your favorite podcast player. We’ve got lots more episodes in store with smart people who will tell you how to fix the internet.  Nat Keefe and Reed Mathis of Beat Mower made the music for this podcast, with additional music and sounds used under Creative Commons license from ccMixter. You can find the credits for each of the musicians and links to the music in our episode notes. 
Thanks again for joining us, if you have any feedback on this episode, please email [email protected], we read every email. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. I’m Danny O’Brien. Cindy Cohn: And I’m Cindy Cohn. 

  • EFF Continues Legal Fight to Release Records Showing How Law Enforcement Uses Cell-site Simulators
    by Aaron Mackey on December 17, 2021 at 5:35 pm

Four years ago, EFF set out on a mission to chase down the paper trail left behind when cops in California use cell-site simulators. This trail has led us to a California appellate court, where next spring we will face off with San Bernardino County law enforcement over whether they can keep search warrants authorizing electronic surveillance secret from the public indefinitely. Cell-site simulators (CSSs) mimic cell-phone towers to trick any nearby phones into connecting with them. Police use this technology to gather information on people’s phones or to track people in real time. Whenever police use surveillance tools like CSSs, they inevitably sweep up innocent people’s private data. Because these tools are so invasive, there are legitimate questions about whether law enforcement should have deployed them in a particular investigation. The public should be able to answer those questions by reviewing the public records that reflect how law enforcement justified using CSSs, what types of crimes merited their use, and what training and expertise officers had when deploying them. But in San Bernardino County, Calif., the public has been shut out of accessing these details despite EFF’s effort to make court records public.

The long fight to make search warrant records public

Since 2018, EFF has been trying to pry loose search warrant filings, including affidavits filed by law enforcement officers seeking court approval to use cell-site simulators, after we suspected officials were improperly sealing these records at local courthouses. Our concerns turned out to be correct. Law enforcement agencies nationwide have a history of shielding the use of this technology from public scrutiny, with prosecutors going so far as to dismiss cases rather than reveal information about how the technology works. However, in 2015, two new California laws changed the game in the Golden State. 
    First, SB 741 required California law enforcement agencies to post their cell-site simulator usage policies online. Second, and more importantly, the California Electronic Communications Privacy Act (CalECPA) ensured that the existence of search warrants involving cell-site simulators would be disclosed via the California Attorney General’s OpenJustice website and that the warrants would be available to the public via the courts. From the San Bernardino County Sheriff’s SB 741 policy, we learned that the agency keeps an annual tally of CSS use: in 2017, deputies deployed the technology 231 times, including 20 times in “emergency circumstances.” Via the Attorney General’s CalECPA dataset, we were able to identify several search warrants we believed were related to CSSs, because they mentioned the term “cell-site stimulators,” an unintentional misspelling. However, when we tried to obtain the six search warrants themselves, we hit a wall: San Bernardino County law enforcement refused to turn over the records. The agency claimed that our request was “vague, overly broad,” didn’t describe an identifiable record, and would be exempt from disclosure as investigative records. And so we sued in October 2018. San Bernardino continued to refuse to provide the records, claiming they could not make them public because the files remained indefinitely under seal. Rather than give up, we expanded our request to cover 22 search warrants that we believe could shed light on the use of cell-site simulators or other forms of electronic surveillance used by the San Bernardino Sheriff’s Office. After asking the court to unseal those records, we filed another lawsuit demanding that they be made public, arguing that the indefinite sealing violated the public’s right to access judicial records under the First Amendment.
    An incremental victory for public access, then a new roadblock

    The second lawsuit demonstrated that law enforcement had been oversealing these records in ways that were not warranted under the law. Authorities subsequently made public large portions of the search warrant applications and related documents. And in one case, law enforcement released one search warrant in its entirety after learning that a judge had rejected its sealing request and never made the documents secret. But authorities claimed that eight law enforcement affidavits filed with search warrants must remain entirely under seal indefinitely, and moved to dismiss EFF’s lawsuit. These records are important and should be public because they likely contain the justification officials provided for using an invasive tool like a cell-site simulator, as well as the type of crime being investigated. Disclosure of that information will enable the public to see whether police are reserving use of cell-site simulators for when they are truly needed or routinely deploying them in non-violent or other low-level investigations. Despite the high public interest in the affidavits, the trial court hearing the case agreed with law enforcement and ruled in January 2021 that the affidavits must be kept secret for the foreseeable future.

    EFF appeals to defend the public’s right to court records

    With the trial court’s ruling in favor of secrecy, the stakes were raised. It is no longer just about cell-site simulators, but about whether the government has the power to keep search warrant records secret indefinitely. Our appeal argues that the trial court’s ruling makes it very difficult for the public to understand basic facts about how police are using cell-site simulators and other invasive technologies that sweep up innocent people’s personal data.
    Moreover, EFF argued that the First Amendment and California laws and rules do not allow authorities to keep every word of the affidavits under seal forever. Instead, the trial court should have made the documents public and redacted any specific information that could justifiably remain secret. We completed briefing in the case earlier this fall and were happy to receive support, via friend-of-the-court briefs, from the First Amendment Coalition and two local journalists who report on law enforcement’s impact on the local community. We anticipate the appeals court will hear arguments in the case in spring 2022.

    EFF’s case shows police fail to follow CalECPA

    The biggest problem with EFF’s four-year battle to pry the search warrant affidavits loose is that they should never have been kept secret for so long in the first place. Under CalECPA, law enforcement is required to make public their surveillance orders as soon as the underlying surveillance period ends. So if the cops sought a pen register and trap-and-trace order for two months, under CalECPA the records reflecting that surveillance generally must be made public after two months. And another California law has long required authorities to make public search warrant filings 10 days after they have been executed. San Bernardino County authorities’ excessive secrecy violates CalECPA and is in clear conflict with governing law in the state. The fact that EFF continues to have to push in litigation for compliance with these laws highlights how far law enforcement will go to keep their surveillance activity secret. But EFF remains undeterred. In the meantime, the case shows that CalECPA is only as effective as its enforcement. This is why we need the California Department of Justice to bring San Bernardino authorities and other agencies across the state into compliance with CalECPA’s transparency mandates. Related Cases: EFF v. San Bernardino County Superior Court (stingray transparency)

  • EFF to Court: Deny Foreign Sovereign Immunity to DarkMatter for Hacking Journalist
    by Mukund Rathi on December 16, 2021 at 10:27 pm

    When governments or private companies target someone with malware and facilitate the abuse of their human rights, the victim must be able to hold the bad actors accountable. That’s why, in October, EFF requested that a federal court consider its amicus brief in support of journalist Ghada Oueiss in her lawsuit against DarkMatter, a notorious cyber-mercenary company based in the United Arab Emirates. Oueiss is suing the company and high-level Saudi government officials for allegedly hacking her phone and leaking her private information as part of a smear campaign. EFF’s brief argues that private companies should not be protected by foreign sovereign immunity, which limits when foreign governments can be sued in U.S. courts. Hundreds of technology companies sell surveillance and hacking as a product and service to governments around the world. Some companies sell surveillance tools to governments—in 45 of the 70 countries that are home to 88% of the world’s internet users—and others, like DarkMatter, do the surveillance and hacking themselves. DarkMatter’s hacking has serious consequences. In her lawsuit, Oueiss recounts being targeted by thousands of tweets attacking her, with accounts posting stolen personal photos and videos, some of which were doctored to further humiliate her. And earlier this month, EFF filed a lawsuit against DarkMatter because the company hacked Saudi human rights activist Loujain AlHathloul, leading to her kidnapping by the UAE and extradition to Saudi Arabia, where she was imprisoned and tortured. U.S. companies are on both ends of DarkMatter’s misconduct—some are targets, like Apple and iPhone users, and other companies are vendors. Two U.S. companies sold zero-click iMessage exploits to DarkMatter, which it used to create a hacking system that could infiltrate iPhones around the world without the targets knowing a thing. Human rights principles must be enforced, and voluntary mechanisms have failed these victims. U.S. 
courts should be open to journalists and activists to vindicate their rights, especially when there is a connection to this country—the smear campaign against Oueiss occurred here in part. EFF welcomed the Ninth Circuit Court of Appeals’ recent ruling that spyware vendor NSO Group, as a private company, did not have foreign sovereign immunity from WhatsApp’s lawsuit alleging hacking of the app’s users. Courts should similarly deny immunity to DarkMatter and other surveillance and hacking companies who directly harm Internet users around the world.

  • EU’s Digital Identity Framework Endangers Browser Security
    by Alexis Hancock on December 15, 2021 at 10:46 pm

    If a proposal currently before the European Parliament and Council passes, the security of HTTPS in your browser may get a lot worse. A proposed amendment to Article 45 in the EU’s Digital Identity Framework (eIDAS) would have major, adverse security effects on millions of users browsing the web. The amendment would require browsers to trust third parties designated by the government, without necessary security assurances. But trusting a third party that turns out to be insecure or careless could mean compromising user privacy, leaking personal or financial information, being targeted by malware, or having one’s web traffic snooped on.

    What is a CA?

    Certificate Authorities (CAs) are trusted notaries that underpin the main transport security model of the Web and other internet services. When you visit an HTTPS site, your browser needs to know that you are communicating with the site you requested, and that trust is ultimately anchored by the CA. CAs issue digital certificates that certify the ownership and authenticity of a public encryption key; the CA verifies that this key really does belong to that website. For a certificate to be valid in a browser, it must be signed by a CA. The fundamental duty of the CA is to verify certificate requests submitted to it, and sign only those that it can verify as legitimate.

    What is a Root Store?

    Operating systems and browsers choose which CAs meet their standards and provide benefits to their users. They store those CAs’ root certificates in their root store. A CA that does not meet these rigid requirements is not allowed in these root stores.

    The Dangers of Requiring Government-Mandated CAs

    The proposed amendment would require all major root stores to include CAs nationally approved by EU member countries. The amendment offers no assurance that these CAs must meet the root stores’ security requirements, lists no mechanism to challenge their inclusion, and requires no transparency.
    This can lead to issues beyond poorly managed practices from a faulty or careless CA. If browsers can’t revoke a CA that has been flagged by their standards, their response to a security incident will be delayed. This setup could also tempt governments to try “Machine-in-the-Middle” (MITM) attacks on people. In August 2019, the government of Kazakhstan tried to require installation of a certificate to scan citizen traffic for “security threats.” Google Chrome, Mozilla Firefox, and Apple Safari blocked this certificate. They were able to take this stand because they run independent root stores with proper security controls. Under this new regulation, this would not be as easy to do. The EU has much more reach and impact than one country. Even though eIDAS wasn’t intended to be anti-democratic, it could open the path to more authoritarian surveillance. If adopted, the amendment would roll back security gains that so many worked hard to achieve in the past decade. The amendment should be dropped. Instead, these CAs should be pushed to meet requirements for transparency, security, and incident response.
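The root store model described above can be illustrated with a deliberately simplified sketch. Real CAs sign certificates with asymmetric keys (RSA or ECDSA) and real root stores hold CA certificates; here an HMAC stands in for a signature purely to show how trust is anchored: a certificate is accepted only if its issuer is a CA the root store has admitted. All names and keys below are hypothetical.

```python
import hashlib
import hmac

# Toy "root store": CA name -> key used to verify that CA's signatures.
# Real root stores hold CA public keys/certificates, and real CAs sign
# with asymmetric keys; HMAC is a stand-in purely for illustration.
ROOT_STORE = {"Trusted Root CA": b"root-ca-secret"}

def sign(issuer_key: bytes, subject: str) -> str:
    """Toy 'signature' over the certificate's subject name."""
    return hmac.new(issuer_key, subject.encode(), hashlib.sha256).hexdigest()

def make_cert(subject: str, issuer: str, issuer_key: bytes) -> dict:
    """Issue a toy certificate binding a subject to an issuer."""
    return {"subject": subject, "issuer": issuer,
            "signature": sign(issuer_key, subject)}

def validate(cert: dict, root_store: dict) -> bool:
    """Trust a certificate only if its issuer is in the root store
    and the signature verifies against that issuer's key."""
    key = root_store.get(cert["issuer"])
    if key is None:
        return False  # issuer was never admitted to the root store
    expected = sign(key, cert["subject"])
    return hmac.compare_digest(expected, cert["signature"])

site_cert = make_cert("example.com", "Trusted Root CA", b"root-ca-secret")
rogue_cert = make_cert("example.com", "Government CA", b"gov-secret")

print(validate(site_cert, ROOT_STORE))   # True: anchored in the root store
print(validate(rogue_cert, ROOT_STORE))  # False: issuer not in the root store
```

In this toy model, the eIDAS amendment's effect is to force entries into ROOT_STORE that the browser vendor never vetted and could not remove.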

  • Apple’s Android App to Scan for AirTags is a Necessary Step Forward, But More Anti-Stalking Mitigations Are Needed
    by Karen Gullo on December 15, 2021 at 6:49 pm

    This post has been updated to say that Tracker Detect is available internationally. We’re pleased to see Apple has come out with an Android app called Tracker Detect that addresses some of the serious threats to privacy and safety we identified with Apple AirTags when they debuted. Quarter-sized Bluetooth-enabled homing beacons marketed as a way to track lost luggage or keys, AirTags can easily be exploited by stalkers to track and locate their victims. As we explained in May, AirTags can be slipped into a target’s bag or car, allowing abusers to follow their every move. While there are other physical trackers such as Tile and Chipolo on the market, AirTags are an order of magnitude more dangerous because Apple has made every iPhone that doesn’t specifically opt out into a part of the Bluetooth tracking network that AirTags use to communicate, meaning AirTags’ reach is much greater than other trackers’. Nearly all of us cross paths with Bluetooth-enabled iPhones multiple times a day, even if we don’t know it. We called on Apple to create an Android app that alerts users to nearby trackers. Tracker Detect, released on the Google Play store this week, allows people using Android devices to find out if someone is tracking them with an AirTag or other devices equipped with sensors compatible with the Apple Find My network. If the app detects an unexpected AirTag nearby, it will show up in the app as “Unknown AirTag.” The app has an “alarm” of sorts—it will play a sound within 10 minutes of identifying the tracker, a big improvement over the time it takes for an AirTag to start beeping when it’s out of range of the iPhone it’s tethered to: up to 24 hours, plenty of time for a stalker to track a victim without their knowledge. While not perfect, Tracker Detect is a win for privacy. It gives victims of domestic and intimate partner abuse who exist outside of the Apple ecosystem a fighting chance to learn if they are being tracked or followed.
    EFF supports a harm reduction approach to privacy, and this app fills an important gap. It will be important to do outreach to domestic violence shelters and other service providers to familiarize them with AirTags and how to use Tracker Detect to run a scan. The app is available in the U.S. and internationally. But it’s only available for Android 9 and higher, which rules out many of the devices at the cheap end of the Android ecosystem that are often used by vulnerable populations. In September, researchers at the Technical University of Darmstadt’s Secure Mobile Networking Lab released an app called AirGuard, which is available in the Google Play Store. The app claims to be able to detect AirTags on an Android device while running in the background, but EFF has not yet tested AirGuard’s functionality. This may be an additional option for Android users who are concerned about physical trackers. Having an app to download is a step forward, but it is not enough. We’re calling on Google to take this one step further and incorporate background detection of AirTags and other physical trackers into the Android OS. Unlike the functionality Apple has incorporated into the iPhone, which operates constantly in the background, Tracker Detect requires the user to run a scan. Having your device automatically detect trackers will put it on par with the stalking mitigations that iPhone users already have. That mitigation can only be accomplished if Apple and Google cooperate to protect users against physical trackers. We hope that both Apple and Google take the threat of ubiquitous, cheap, and powerful physical trackers seriously enough to work together to help their users know when they’re being stalked.
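The core of what a scanner app like Tracker Detect or AirGuard must do is inspect Bluetooth Low Energy advertisements for Apple "Find My" (offline finding) frames, the broadcasts AirTags emit. The frame layout sketched below follows public reverse engineering of the protocol (for example, the OpenHaystack project), not an official Apple specification, so treat the exact constants as assumptions. The payload bytes in the examples are fabricated for illustration.

```python
# Sketch of the detection logic behind AirTag-scanner apps: given the
# manufacturer-specific data from a BLE advertisement, decide whether it
# looks like an Apple Find My network broadcast.

APPLE_COMPANY_ID = 0x004C    # Bluetooth SIG company identifier for Apple
OFFLINE_FINDING_TYPE = 0x12  # payload type reported for Find My broadcasts
                             # (per public reverse engineering; an assumption)

def is_find_my_advertisement(mfg_data: bytes) -> bool:
    """Return True if BLE manufacturer-specific data resembles a Find My
    broadcast: Apple's company ID (little-endian, first two bytes)
    followed by the offline-finding payload type byte."""
    if len(mfg_data) < 3:
        return False
    company_id = int.from_bytes(mfg_data[:2], "little")
    return company_id == APPLE_COMPANY_ID and mfg_data[2] == OFFLINE_FINDING_TYPE

# Crafted example frames (payload bytes are hypothetical):
find_my_frame = bytes([0x4C, 0x00, 0x12, 0x19]) + bytes(25)  # Find My-style
ibeacon_frame = bytes([0x4C, 0x00, 0x02, 0x15]) + bytes(21)  # iBeacon-style

print(is_find_my_advertisement(find_my_frame))  # True
print(is_find_my_advertisement(ibeacon_frame))  # False
```

A real app would feed frames from a BLE scanning library into a check like this and track repeated sightings of the same tracker over time before alerting, since a single passing AirTag is not evidence of stalking.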

  • YouTube’s New Copyright Transparency Report Leaves a Lot Out
    by Katharine Trendacosta on December 15, 2021 at 5:42 pm

    YouTube recently released a transparency report on the status of copyright claims for the first half of 2021. It says it will release these numbers biannually from now on. We applaud this move towards transparency, since it gives researchers a better look at what’s happening on the world’s largest streaming video platform. What is less welcome is the spin. The major thrust of this report is to calm the major studios and music labels. Those huge conglomerates have consistently pushed for more and more restrictions on the use of copyrighted material, at the expense of fair use and, as a result, free expression. YouTube has plenty of incentives to try to avoid the wrath of these deep-pocketed companies by showing how it polices alleged copyright infringement and generates money for creators. The secondary goal of the report is to claim that YouTube is adequately protecting its creators. That rings hollow, since every user knows what it’s like to actually live in this ecosystem. And they have neither the time nor money to lobby YouTube for improvements. Worse, as a practical matter, YouTube is the only game in town, so they can’t make their anger heard by leaving. Here are the big numbers YouTube just released for the first half of 2021:

    • 772 million copyright claims were made through Content ID
    • 99% of all copyright claims were Content ID claims, meaning only 1% were DMCA or other forms of complaint
    • 6 million removal requests were made with YouTube’s Copyright Match Tool
    • Fewer than 1% of Content ID claims were disputed
    • When claims were disputed, 60% of the time the dispute was resolved in favor of those contesting the claims

    YouTube argues that by transferring large sums to music labels and movie studios from internet creators, its ecosystem is, to borrow a phrase, fair and balanced. YouTube basically claims that because rightsholders use Content ID to make a lot of claims and online creators continue to upload new videos, it must be working.
    That conclusion ignores a few key realities.

    Monopoly: “Where Am I Supposed to Go?”

    The creators who post videos to YouTube don’t do so because they like YouTube. They do it because they believe they have no choice. We have heard “I am on YouTube for lack of any better option,” “Where am I supposed to go?” and “For what I do, there is nowhere else.” One creator, asked if he could leave YouTube, bluntly answered, “No, obviously not.” It’s not that internet creators like what Content ID does for them, it’s that they have to agree to it in order to survive. They have to use YouTube because of its size. Since most who create videos for a living rely on sponsorships and/or memberships via platforms like Patreon, they need to reach as many people as possible to sell these services. YouTube gives them that power, far more than any other existing platform.

    The Number of Disputes Is Hiding a Lot

    YouTube’s dispute claims don’t add up. First, the idea that the low number of disputes shows Content ID is working to catch infringement is laughable. On page 10 of the report, YouTube admits that there are errors, but claims they are few and far between, based on the low dispute rate. It states that “When disputes take place, the process provided by YouTube provides real recourse,” which runs counter to much of what creators actually say they experience. They feel pressured, by YouTube, not to dispute Content ID. They fear disputing Content ID and losing their channel as a result. YouTube’s suggestion that the relatively high percentage of disputes resolving in favor of the video creator means that there is a functioning appeals process is also dubious. Disputing Content ID is a confusing mess that often scares creators into accepting whatever punishment the system has levied against them. The alternative—as YouTube tells them over and over—is losing their account due to accumulating copyright strikes.
    Absent alternative platforms, no one who makes videos for a living can afford to lose their YouTube channel. One creator, Chris Person, runs a channel of video game clips called “Highlight Reel.” It was an incredibly popular show when Person edited it for the website Kotaku. When Person was let go, he was allowed to continue the show independently. But he had to rebuild the entire channel, which was a frustrating process. Having done that, he told us he would do anything to avoid having to do it again. As would most creators. Creators have reported that they tell fellow creators to dispute matches on material they have the right to use, only to be met by fear. Too many are too afraid of losing their channel, their only access to an audience and therefore their income, to challenge a match. One music reviewer simply accepts them all, losing most or all direct income from the videos, rather than spend months fighting. Furthermore, creators report that YouTube ignores its own rules, taking far longer than the 30 days it claims must pass before it acts to either release a claim or repost a video. When delays happen, there are no helplines staffed by actual human beings that might do something about it. There is a terrible, circular logic that traps creators on YouTube. They cannot afford to dispute Content ID matches because that could lead to DMCA notices. They cannot afford DMCA notices because those lead to copyright strikes. They cannot afford copyright strikes because that could lead to a loss of their account. They cannot afford to lose their account because they cannot afford to lose access to YouTube’s giant audience. And they cannot afford to lose access to that audience because they cannot count on making money from YouTube’s ads alone, partially because Content ID often diverts advertising money to rightsholders when there is a Content ID match. Which they cannot afford to dispute.

  • Have an Open Records Horror Story? Shine a Light by Nominating an Agency for The Foilies 2022
    by Dave Maass on December 14, 2021 at 10:13 pm

    This post is crossposted at MuckRock and was co-written by Michael Morisy. We are now accepting submissions for The Foilies 2022, the annual project to give tongue-in-cheek awards to the officials and institutions that behave badly—or ridiculously—when served with a request for public records. Compiled by the Electronic Frontier Foundation (EFF) and MuckRock, The Foilies run as a cover feature in alternative newsweeklies across the U.S. during Sunshine Week (March 13-19, 2022), through a partnership with the Association of Alternative Newsmedia. In 2021, we saw agencies fight to keep secrets large and small, and we saw officials withhold and obfuscate critical information the public needs and is entitled to by law. But even as we’ve kept a running tally of Freedom of Information Act (FOIA) fumbles, we still miss many of the transparency horror stories out there, especially those that go unreported. If you’ve seen a story about an agency closing off important access or simply redacting ad absurdum, this is your chance to highlight it and let the world know—and hopefully help push all agencies to be a little more open. EFF and MuckRock will solicit, vet, and judge submissions, but folks from across the transparency community—journalists, researchers, local and international gadflies, and more—are encouraged to submit both their own run-ins with opaque intransigence and items that have been reported on elsewhere. We’ll be accepting nominations until January 3, so please submit early and often! Submit a Nomination. Note: MuckRock’s privacy policy applies to submissions. We’re looking for examples at all levels of government, including state, local, and national, and while we’re primarily focused on U.S. incidents, we welcome submissions about global phenomena. You can also review The Foilies archives, dating back to 2015, for ideas of what we’re looking for this year.

    Who Can Win?

    The Foilies are not awarded to people who filed FOIA requests.
    These are not a type of recognition anyone should actually covet. There’s no physical trophy or other tangible award, just a virtual distinction of demerit issued to government agencies and public officials (plus the odd rock star) who thumbed their nose at transparency. If you filed a FOIA request with the Ministry of Silly Walks for a list of grant recipients, and a civil servant in a bowler hat told you to take a ludicrous hike, then the ministry itself would be eligible for The Foilies.

    What Are the Categories?

    For the most part, we do not determine the categories in advance. Rather, we look at the nominations we receive, winnow them down to the most outrageous, then come up with fitting tributes, such as “Most Expensive FOIA Fee Estimate” and “Sue the Messenger Award.” That said, there are a few things we’re looking for in particular, such as extremely long processing times and surreal redactions.

    Who Can Nominate?

    Anyone, regardless of whether you were involved in the issue or just happened to read about it on Twitter. Send as many nominations as you like!

    Eligibility

    All nominations must have had some event happen during calendar year 2021. For example, you can nominate something related to a FOIA request filed in 1994 if you finally received a rejection in 2021.

    Deadline

    All nominations must be received by January 3, 2022.

    How to Submit a Nomination

    Click here to submit your nominations. You can nominate multiple entries by just returning to that page as many times as needed. Each entry should include the following information:

    • Category: One-line suggested award title. We reserve the right to ignore or alter your suggestion.
    • Description: Succinct explanation of the public records issue and why it deserves recognition.
    • Links/References: Include any links to stories, records, photos, or other information that will help us better understand the issue.
    • Email address: Include a way for us to reach you with further questions.
This information will remain confidential. If we short-list your nomination, we may be in touch to request more information.

  • Victory! Federal Court Blocks Texas’ Unconstitutional Social Media Law
    by Mukund Rathi on December 14, 2021 at 9:30 pm

    On December 1, hours before Texas’ social media law, HB 20, was slated to go into effect, a federal court in Texas blocked it for violating the First Amendment. Like a similar law in Florida, which was blocked and is now pending before the Eleventh Circuit Court of Appeals, the Texas law will go to the Fifth Circuit. These laws are retaliatory, obviously unconstitutional, and EFF will continue advocating that courts stop them. In October, EFF filed an amicus brief against HB 20 in Netchoice v. Paxton, a challenge to the law brought by two associations of tech companies. HB 20 prohibits large social media platforms from removing or moderating content based on the viewpoint of the user. We argued, and the federal court agreed, that the government cannot regulate the editorial decisions made by online platforms about what content they host. As the judge wrote, platforms’ right under the First Amendment to moderate content “has repeatedly been recognized by courts.” Social media platforms are not “common carriers” that transmit speech without curation. Moreover, Texas explicitly passed HB 20 to stop social media companies’ purported discrimination against conservative users. The court explained that this “announced purpose of balancing the discussion” is precisely the kind of government manipulation of public discourse that the First Amendment forbids. As EFF’s brief explained, the government can’t retaliate against disfavored speakers and promote favored ones. Moreover, HB 20 would destroy or prevent the emergence of even large conservative platforms, as they would have to accept user speech from across the political spectrum. HB 20 also imposed transparency requirements and user complaint procedures on large platforms. While these kinds of government mandates might be appropriate when carefully crafted—and separated from editorial restrictions or government retaliation—they are not here. 
    The court noted that companies like YouTube and Facebook remove millions of pieces of user content a month. It further noted Facebook’s declaration in the case that it would be “impossible” to establish a system by December 1 compliant with the bill’s requirements for that many removals. Platforms would simply stop removing content to avoid violating HB 20, an impermissible chill of First Amendment rights.

  • Google’s Manifest V3 Still Hurts Privacy, Security, and Innovation
    by Alexei Miagkov on December 14, 2021 at 6:09 pm

    It’s been over two years since our initial response to Google’s Manifest V3 proposal. Manifest V3 is the latest set of changes to the Chrome browser’s rules for browser extensions. Each extension manifest version update introduces backwards-incompatible changes to ostensibly move the platform forward. In 2018, Manifest V3 was framed as a proposal, with Google repeatedly claiming to be listening to feedback. Let’s check in to see where we stand as 2021 wraps up. Since announcing Manifest V3 in 2018, Google has launched Manifest V3 in Chrome, started accepting Manifest V3 extensions in the Chrome Web Store, co-announced joining the W3C WebExtensions Community Group (formed in collaboration with Apple, Microsoft, and Mozilla), and, most recently, laid out a timeline for Manifest V2 deprecation. New Manifest V2 extensions will no longer be accepted as of January 2022, and Manifest V2 will no longer function as of January 2023. According to Google, Manifest V3 will improve privacy, security, and performance. We fundamentally disagree. The changes in Manifest V3 won’t stop malicious extensions, but will hurt innovation, reduce extension capabilities, and harm real-world performance. Google is right to ban remotely hosted code (with some exceptions for things like user scripts), but this is a policy change that didn’t need to be bundled with the rest of Manifest V3. Community response to Manifest V3, whether in the Chromium extensions Google group or the W3C WebExtensions Community Group, has been largely negative. Developers are concerned about Manifest V3 breaking their extensions, confused by the poor documentation, and frustrated by the uncertainty around missing functionality coupled with the Manifest V2 end-of-life deadline.
Google has been selectively responsive, filling in some egregious gaps in functionality and increasing their arbitrary limits on declarative blocking rules. However, there are no signs of Google altering course on the most painful parts of Manifest V3. Something similar happened when Chrome announced adding a “puzzle piece” icon to the Chrome toolbar. All extension icons were to be hidden inside the puzzle piece menu (“unpinned”) by default. Despite universally negative feedback, Google went ahead with hiding extensions by default. The Chrome puzzle piece experience continues to confuse users to this day. The World Wide Web Consortium’s (W3C) WebExtensions Community Group is a welcome development, but it won’t address the power imbalance created by Chrome’s overwhelming market share: over two-thirds of all users globally use Chrome as their browser. This supermajority of web users is not likely to migrate away because of a technical squabble about extension APIs. No matter what Google decides to do, extension developers will have to work around it—or lose most of their users. And since developers are unlikely to want to maintain separate codebases for different browsers, other browsers will be heavily incentivized to adopt whatever set of extension APIs that Google ends up implementing. Instead of working in true collaboration on the next iteration of browser extensions, Google expects Manifest V3 to be treated as a foregone conclusion. Participation in the WebExtensions group gives Google the veneer of collaboration even as it continues to do what it was going to do anyway. In short, Google enters the room as an 800-pound gorilla unwilling to listen or meaningfully work with the community. 
    Forcing all extensions to be rewritten for Google’s requirements without corresponding benefits to users is a fundamentally user-hostile move by Google. Manifest V3 violates the “user-centered,” “compatibility,” “performance,” and “maintainability” design principles of the WebExtensions group charter. While Google’s response to community feedback has been tweaks and fixes around the margins, we have been paying attention to what developers are saying. The shortcomings of Manifest V3 have come into focus.

    Requiring service workers for extensions is harmful

    Most browser extensions are built around a background page, a place where all sorts of work happens out of sight as the user browses. With today’s Manifest V2, extensions in Chrome have the choice to opt into using an ephemeral “event”-based background page, or to use a persistent background page. Ephemeral pages get shut down and restarted repeatedly, whenever Chrome decides to do so. Persistent pages continue running as long as the browser is open. In addition to extension APIs, both kinds of extension background pages have access to the standard set of website APIs. Manifest V3 removes the choice, instead requiring all extensions to be based on “service workers.” Service workers are ephemeral, event-based, and do not have access to the standard set of website APIs. Along with removing the “blocking webRequest” mechanism, which we talk about below, rebasing all extensions on service workers is one of the most damaging changes in Manifest V3. Service workers are JavaScript scripts that run in the background, independent of the website that launched them.
Service workers are meant to enable websites to perform previously hard or impossible tasks that optimize website performance or provide offline functionality. For example, the first time you visit such a site, the website installs a service worker in your browser. The service worker stays installed, and may continue to perform tasks, even if you lose network connectivity or navigate away from the site. Service workers give websites superpowers, providing web apps functionality that is otherwise difficult or impossible. But service workers don’t have the same freedom to execute code that websites do, and there are limits to how long service workers live. Each service worker listens for messages from its website, performs its tasks, and shuts down shortly after. This makes sense, as the website is the main actor that calls upon its service worker for help. But this model doesn’t translate well to browser extensions. Service workers were designed to work with websites, and they are a standardized part of the Web Platform. But there is no equivalent service worker standard for WebExtensions. Since extensions enhance the browser, applying the execution limits of website service workers makes no sense, and yet this is exactly what Google has done. Websites and their service workers are developed by the same teams, and are meant to work in tandem. But browsers and browser extensions are built by different teams with different goals. Extensions are supposed to add new functionality that browser developers didn’t think of or intentionally left out. Sometimes, extensions do things that explicitly act against the intentions of the browser developers, such as when tracker blockers restrict the information flowing out of Chrome.
Chrome continues to be the only major browser without meaningful built-in tracking protection. Web extensions need more freedom to operate on their own, which means first-class access to browser APIs and persistent memory. Take a look at the long list of known use cases harmed by requiring service workers. Seamlessly playing audio, parsing HTML, requesting geolocation, communicating via WebRTC data channels, and the ability to start a separate service worker are all broken under the new paradigm. Under Manifest V2, extensions are treated like first-class applications with their own persistent execution environment. But under V3, they are treated like accessories, given limited privileges and only allowed to execute reactively. According to feedback from Mozilla engineers, one legitimate benefit of service workers may be getting extensions to gracefully handle early termination on Android. But there are ways of achieving this goal that don’t involve this degree of harm. And if one of Google’s aims for Manifest V3 is to help bring extensions to Chrome on Android, Google failed to communicate this. How can browser and extension developers collaborate on moving extensions forward when it appears that Google is unwilling to share all of the reasons behind Manifest V3?

declarativeNetRequest alone is inadequate

Besides proposing to move extensions to an ill-fitting service worker foundation, Google’s Manifest V3 changes the way content-blocking extensions can function. Extensions based on Manifest V2 use webRequest, a flexible API that lets extensions intercept and block or otherwise modify HTTP requests and responses. Manifest V3 drops the blocking and modification capabilities of webRequest in favor of the new declarativeNetRequest API.
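To make the contrast concrete, here is a hedged sketch of the two styles. The listener mirrors the kind of function a Manifest V2 extension hands to `chrome.webRequest.onBeforeRequest.addListener` with the `"blocking"` option, and the rule object follows the declarativeNetRequest rule shape; both are shown standalone, and `tracker.example` is a made-up domain:

```javascript
// Manifest V2 style: imperative logic runs for every request and can
// decide what to do on the fly. In a real extension this function is
// registered via chrome.webRequest.onBeforeRequest.addListener(fn,
// { urls: ["<all_urls>"] }, ["blocking"]).
function onBeforeRequest(details) {
  const host = new URL(details.url).hostname;
  // Arbitrary code may run here: regexes, counters, lists updated at
  // runtime, heuristics...
  if (host === "tracker.example" || host.endsWith(".tracker.example")) {
    return { cancel: true }; // block this request
  }
  return {}; // let it through
}

// Manifest V3 style: a static rule handed to the browser ahead of
// time. The browser does the matching, and only patterns expressible
// in the rule language are possible.
const blockRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: { urlFilter: "||tracker.example^", resourceTypes: ["script"] }
};
```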
The interception-only or “observational” webRequest API—which allows extensions to monitor, though not modify, requests—will supposedly remain in Manifest V3, although the API is broken in Manifest V3 at this time, with the relevant bug report open for over two years. As the name suggests, the new declarativeNetRequest API is declarative. Today, extensions can intercept every request that a web page makes, and decide what to do with each one on the fly. But a declarative API requires developers to define what their extension will do with specific requests ahead of time, choosing from a limited set of rules implemented by the browser. Gone is the ability to run sophisticated functions that decide what to do with each individual request. If your extension needs to process requests in a way that isn’t covered by the existing rules, you just can’t do it. From this follows the main problem with requiring a declarative API for blocking. Advertising technology evolves rapidly, and privacy extension developers need to be able to change their approaches over time. To make matters worse, extension developers can’t depend on Google browser engineers to react in a timely manner, or at all: Google abandoned extension API development for years before Manifest V3. For example, while extensions have had the ability to “uncloak” CNAME domains in Firefox for over three years now, Chrome still lacks support for CNAME uncloaking. And while this support may come at some point in the future as part of declarativeNetRequest, many years behind Firefox, what about uncloaking CNAMEs elsewhere, such as in observational webRequest? As we wrote in 2019, “For developers of ad- and tracker-blocking extensions, flexible APIs aren’t just nice to have, they are a requirement. When particular privacy protections gain popularity, ads and trackers evolve to evade them.
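To illustrate why CNAME uncloaking matters, here is a toy sketch. All domain names are made up, and `dnsTable` stands in for a real DNS lookup (on Firefox, an extension can perform that lookup with the `browser.dns` API; Chrome offers no equivalent):

```javascript
// A tracker hides behind a first-party subdomain: the request URL
// shows metrics.news.example, but DNS quietly maps that name (via a
// CNAME record) to a tracker-controlled host.
const dnsTable = { "metrics.news.example": "collector.tracker.example" };

// URL-only matching, as in declarativeNetRequest, sees only the
// first-party name and misses the cloaked tracker.
function urlOnlyBlocked(url) {
  return new URL(url).hostname.endsWith("tracker.example");
}

// An uncloaking blocker follows the CNAME before matching.
function uncloakedBlocked(url) {
  const host = new URL(url).hostname;
  const canonical = dnsTable[host] ?? host; // stand-in for a DNS query
  return canonical.endsWith("tracker.example");
}
```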
As a result, the blocking extensions need to evolve too, or risk becoming irrelevant. […] If Google decides that privacy extensions can only work in one specific way, it will be permanently tipping the scales in favor of ads and trackers.” We have many questions about how the declarative API will interact with other Google projects. Will Google’s Privacy Sandbox technologies be exposed to declarativeNetRequest? If declarativeNetRequest works exclusively on the basis of URL pattern matching, how will extensions block subresources that lack meaningful URLs, facilitated by another Google effort called WebBundles? As more tracking moves to the server, will Manifest V3 extensions be able to keep up? Is Manifest V3 a step down a path where the Google parts of the Web become unblockable by extensions? We reject declarativeNetRequest as a replacement for blocking webRequest. Instead, Google should let developers choose to use either API. Making both APIs available can still fulfill Google’s stated goals of making extensions safer and more performant. Google could use the Chrome Web Store to guide extensions that don’t actually need blocking webRequest toward the declarative API. Google could also provide extension developer tools that automatically analyze your extension for potential improvements, like the audit tools provided to promote best practices to website developers. In addition, extensions that use webRequest should get flagged for additional review; this should be clearly communicated to extension developers.

Google’s performance claims

Google has claimed that part of the reason for its Manifest V3 restrictions is to improve performance. If extensions are allowed to have persistent background pages, the argument goes, then those pages will sit idle and waste memory.
In addition, Google claims webRequest is an inefficient API because of how it traverses browser internals and extension code, and because it makes it possible for poorly implemented extensions to slow down Chrome. Google has provided no evidence to back these claims. In fact, many of the most popular extensions drastically speed up regular browsing by blocking resource-hogging ads and trackers. On the other hand, the restraints imposed by Manifest V3 will cause broken functionality and degraded performance for common extension tasks. While a persistent extension background page will continue to use memory as long as your browser is open, try opening Chrome’s Task Manager sometime. Compare the memory consumed by each and every website you have open to the memory consumed by your (presumably far fewer) extensions. Then, if you use privacy or ad-blocking extensions, try disabling them and reloading your websites. This exercise should quickly put the lie to Google’s claims. The memory consumed by your various open websites—especially without the help of privacy and security extensions to block memory-intensive trackers and advertisers—should dwarf the memory consumed by the extensions themselves. Furthermore, repeatedly starting up and tearing down service worker-based extensions will lead to greater CPU load. For example, an extension using the tabs, webNavigation, or observational webRequest APIs will get invoked constantly during browsing until either the user stops browsing or the five-minute time limit is reached. When the user resumes browsing, the service worker will have to be restarted immediately. Imagine how many times such an extension will get restarted during a typical day, and to what end?
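A toy model makes the lifecycle problem concrete: everything in a service worker's memory is rebuilt from scratch each time the browser revives it. Here `startWorker` is a stand-in for the worker lifecycle, not a real browser API:

```javascript
// Count how often the "expensive" startup work runs, and show that
// in-memory state does not survive a teardown/restart cycle.
let startupCount = 0;

function startWorker() {
  startupCount += 1;   // one-time setup (parsing rules, loading models...)
  let config = null;   // in-memory state starts out empty every time
  return {
    init: (c) => { config = c; },
    get: () => config
  };
}

let worker = startWorker();
worker.init({ blocklistVersion: 42 });

// The browser kills the idle worker; the next browsing event revives it:
worker = startWorker();
const afterRestart = worker.get(); // null: configuration must be reloaded
```

Real Manifest V3 extensions have to work around this by persisting state with every change and reloading it on every wake-up, paying the startup cost again and again.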
Any extension that depends on relatively expensive one-time processing on startup (for example, machine learning models or WebAssembly) is an especially poor fit for service workers’ ephemeral nature. Beyond harming performance, arbitrarily shutting down extension service workers will break functionality. The user may be in the middle of interacting with extension-provided functionality on some web page when the extension’s service worker gets shut down. After a service worker restart, the extension may have stale or missing configuration data and won’t work properly without the user knowing to reload the page. The additional delay caused by service worker startup will break use cases that depend on speedy messaging between the web page and the extension. For example, an extension that dynamically modifies the right-click menu based on the type of clicked element is no longer able to communicate within itself in time to modify the menu before it opens.

Regressions and bugs

On top of everything else, Google’s rollout of Manifest V3 has been rushed and buggy. While you will no longer be able to upload new Manifest V2 extensions to the Chrome Web Store as of January 2022 (next month!), entire classes of existing extensions are completely broken in Manifest V3. As previously mentioned, observational webRequest is still broken, and so is native messaging. Manipulating web pages in the background, WebSockets, user script extensions, WebAssembly: all broken. Injecting scripts into page contexts before anything else happens (document_start “main world” injection) is also broken. This is critical functionality for privacy and security extensions. Extension developers have had to resort to ugly hacks to accomplish this injection with configuration parameters, but those are all broken in Manifest V3, and the promised Manifest V3 replacement is still not available.
Meanwhile, early adopters of Manifest V3 are running into bugs that cause their extensions to stop working when new extension versions are released. Even something as basic as internationalization is broken inside service workers.

Mozilla’s disappointing response

Mozilla, apparently forced to follow in Google’s wake for compatibility reasons, announced it will also require extensions to switch to service workers. While Mozilla will continue to support the blocking capabilities of webRequest, in addition to implementing declarativeNetRequest, it framed this as a temporary reprieve “until there’s a better solution which covers all use cases we consider important.” Recently, in a belated sign of community feedback finally having some effect, a Mozilla engineer proposed a compromise in the form of “limited event pages”. Limited event pages would lessen the pain of Manifest V3 by restoring the standard set of website APIs to extension background pages. An Apple representative expressed support on the part of Safari. Google said no. Instead of following Google into Manifest V3, Mozilla should be fighting tooth and nail against Google’s proposal. It should be absolutely clear that Google acts alone despite overwhelmingly negative community feedback. A proposal cannot become a standard when everyone else stands in opposition. Mozilla’s behavior is obscuring Google’s betrayal of the extensions ecosystem. Moreover, it gives a false sense of competition and consensus when in reality this is one of the prime examples of Google’s market dominance and anti-competitive behavior.

Conclusion

What is the future of extensions? As we explained in our 2019 response, removing blocking webRequest won’t stop abusive extensions, but it will harm privacy and security extensions. If Manifest V3 is merely a step on the way toward a more “safe” (i.e., limited) extensions experience, what will Manifest V4 look like?
If the answer is fewer, less-powerful APIs in service of “safety”, users will ultimately suffer. The universe of possible extensions will be limited to what Google explicitly chooses to allow, and creative developers will find they lack the tools to innovate. Meanwhile, extensions that defend user privacy and safety against various threats on the Web will be stuck in the past, unable to adapt as the threats evolve. The WebExtensions standard is what we all make it to be. If we are to take the WebExtensions Community Group at face value, we should be making extensions more capable together. We should indeed be making it easier to write secure, performant, privacy-respecting extensions, but not at the cost of losing powerful privacy-preserving functionality. We should make it easier to detect abuse, but not at the cost of losing the ability to innovate. We shouldn’t rely on browser developers to think of all the needs of the diverse Web, and we don’t have to: that’s the beauty of extensions. The next extensions manifest version update should be opening doors to empower all of us, unconstrained by whether you can convince a few browser engineers of the validity of your needs. Google needs to cancel moving to service workers, restore blocking webRequest, and halt Manifest V2 deprecation until all regressions in functionality are addressed. Anything short of that is at best an insincere acknowledgment of developers’ shared concerns, and at worst outright hostility to the extensions community at large.

More Information

  • Chrome Users Beware: Manifest V3 is Deceitful and Threatening
  • Manifest V3: Open Web Politics in Sheep’s Clothing
  • Google’s Plans for Chrome Extensions Won’t Really Help Security

  • Podcast Episode: A Better Future Starts with Secret Codes
    by Christian Romero on December 14, 2021 at 9:00 am

Podcast Episode 105. Law enforcement wants to force companies to build a backdoor to the software that runs on your phones, tablets, and other devices. This would allow easier access to the information on your device and the information that flows through it, including your private communications with others, the websites you visit, and all the information from your applications. Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Riana Pfefferkorn, a lawyer and research scholar at the Stanford Internet Observatory, about the dangers of law enforcement trying to get these backdoors built and how users’ lives are better without them. More than ever before, users—from everyday people to CEOs to even high-ranking government officials—have troves of personal and work-related information on their devices. With so much data stored by such a wide variety of users, including government officials, why would law enforcement want to create a vulnerability in the devices’ software? Riana Pfefferkorn guides us toward an internet that prioritizes users over the state, and how that would lead to individuals having the ability to express themselves openly and have safe, private conversations. In this episode you’ll learn about:
  • Different types of data law enforcement tries to gather, including “at rest” and “in transit” data.
  • The divide between the law enforcement, national security, and intelligence communities regarding their stance on strong encryption and backdoors on devices.
  • How the First Amendment plays a role in cryptography and in law enforcement’s attempts to force companies to build certain code into their software.
  • How strong encryption and device security empowers users to voice their thoughts freely.

Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. She focuses on investigating and analyzing the U.S. and other governments’ policies and practices for forcing decryption and/or influencing crypto-related design of online platforms and services via technical means and through courts and legislatures. Riana also researches the benefits and detriments of strong encryption on free expression, political engagement, and more. You can find Riana Pfefferkorn on Twitter @Riana_Crypto. If you have any feedback on this episode, please email [email protected] You can find a copy of this episode on the Internet Archive. Below, you’ll find legal resources—including links to important cases, books, and briefs discussed in the podcast—as well as a full transcript of the audio.

Resources

Encryption and Exceptional Access:
  • Encryption and “Exceptional Access” (EFF)
  • Deep Dive into Crypto “Exceptional Access” Mandates: Effective or Constitutional—Pick One (EFF)
  • The Communications Assistance for Law Enforcement Act (CALEA) of 1994 (EFF)
Apple and the FBI:
  • The FBI Could Have Gotten Into the San Bernardino Shooter’s iPhone, But Leadership Didn’t Say That (EFF)
  • The Burr-Feinstein Proposal Is Simply Anti-Security (EFF)
  • Apple, Americans, and Security vs. FBI (EFF)
  • If You Build It, They Will Come: Apple Has Opened the Backdoor to Increased Surveillance and Censorship Around the World (EFF)
  • The 90s and Now: FBI and Its Inability to Cope with Encryption (EFF)
Code as First Amendment Speech:
  • EFF at 25: Remembering the Case that Established Code as Speech (EFF)
  • Surveillance Chills Free Speech—As New Studies Show—And Free Association Suffers (EFF)

Transcript:

Riana Pfefferkorn: The term backdoor is one that the government doesn’t like to use. Sometimes they just want to call it the front door, to just walk right on in to your encrypted communications or your devices.
But, nevertheless, they tend to prefer phrases like an exceptional access mechanism. The problem being that when you are building an exceptional access mechanism, it’s a hole. And so we have likened it to drilling a hole in a windshield, where the windshield is supposed to protect you, but now you have a hole that’s been drilled in the middle of it. Not only could bugs get in through that hole, but it also might spider cracks out throughout the rest of the windshield.

Danny O’Brien: That’s Riana Pfefferkorn. She’s a research scholar at the Stanford Internet Observatory and she’s also a lawyer. We’re going to talk to her today about why backdoors into our devices are a bad idea.

Cindy Cohn: And we’re also going to talk about a future in which we have privacy while also giving the police the tools they do need and not the ones that they don’t. Welcome to EFF’s How to Fix the Internet.

Cindy Cohn: Welcome to the show. I’m Cindy Cohn, EFF’s executive director.

Danny O’Brien: And I’m Danny O’Brien, and I’m a special advisor to EFF.

Cindy Cohn: Today we’re going to dig into device encryption and backdoors.

Danny O’Brien: Riana’s been researching forced decryption and the influence the US government and law enforcement have had on technology and platform design. She’ll take us through what is at stake, how we can approach the problem, and what is standing in the way of the solutions. Riana, thanks for joining us.

Riana Pfefferkorn: Thank you for having me today.

Cindy Cohn: We’re so excited to have you here, Riana. Of course, as you know, talking about encryption is near and dear to all of our hearts here at EFF. We think most people first recognized the idea that the FBI was seeking a backdoor into their devices and information in 2015, when it demanded that Apple build one into the iPhone after a horrible incident in San Bernardino.
Now Apple pushed back with help from a lot of us, both you and the EFF, and the government ended up getting the information another way and the case was dropped. Bring us up to speed, what’s happened since then?

Riana Pfefferkorn: Following the Apple versus FBI dispute in San Bernardino, we saw the almost-introduction of a bill by our very own here in California, Senator Dianne Feinstein, together with Senator Richard Burr, that would have imposed penalties on smartphone manufacturers that did not find a way to comply with court orders by unlocking phones for law enforcement. That was in 2016. That bill was so roundly ridiculed that it never actually even got formally introduced in any committees or anything, much less went anywhere further beyond that. Then in the next few years, as law enforcement started being able to, with fair regularity, get into devices the way they had done in the San Bernardino dispute, we saw the debate shift, at least in the United States, from a focus on device encryption to a focus on end-to-end encryption for our communications, for our messages, particularly in end-to-end encrypted chat apps.

Cindy Cohn: I also remember that there was another incident in Pensacola, Florida a few years ago, where the FBI once again tried to push Apple into it. Once again, the FBI was able to get the information without Apple having to hurt the security of the phones. So it seems to me that the FBI can get into our devices otherwise. So why do they keep pushing?

Riana Pfefferkorn: It used to be that the rationale was encryption is wholly precluding us from getting access to evidence.
But as it’s become more and more obvious that they can open phones, as in the Pensacola shooting, as in the San Bernardino shooting, the way they speak about it has changed slightly to, “Well, we can’t get into these phones quickly enough, as quickly as we would like.” Therefore, it seems that now the idea is that it is an impediment to the expeditiousness of an investigation rather than to being able to do the investigation at all. And so, if there were guaranteed access by just being able to make sure that, by design, our devices were made to provide a ready-made backdoor for governments, then they wouldn’t have to go through all of the pesky work of either having to use their own in-house tools in order to crack into phones, as the FBI has done with its own in-house capabilities, or purchase them, or seek out a vendor that has the capability of building exploits to allow them to get into those phones, which is what happened in the San Bernardino situation.

Danny O’Brien: So I think this leaves people generally confused as to what data is protected and from whom on their phones. I think you’ve talked about two things here. One is protecting data that’s on the phone, people’s contacts, stuff like that. Then there’s the content of communications, where you have this end-to-end encryption. But what is the government able to access? Who is the encryption supposed to protect people against? What’s the current state of play?

Riana Pfefferkorn: We can think about whether data is at rest or in transit. So when the government seeks to get access to messages as they are live passing over the wire, over the airwaves between two people, that requires them to get a wiretap order that lets them see basically in real time the contents of communications that are going back and forth.
Once those have been received, once they are in the string of chat messages that I have in my chat app on my phone, or other messages or information that you might have locally on your device, or remotely in the cloud—we could talk about that—then that’s a situation where there’s a different mechanism that law enforcement would have to get. They would need a warrant in order to be able to search it and seize data off of your phone. So we’re looking at two different points in time, potentially, for what might be the same conversation. In terms of accessibility, I think if your device is encrypted, then that impedes law enforcement from rapidly being able to get into your phone. But once they do, using the third-party or home-grown tools that they have for getting into your phone, then they can see any text messages or conversations that you’ve got, unless you have disappearing messages turned on in the apps that you use, in which case they will have vanished from your particular device, your endpoint. Whereas if law enforcement wants to get access to end-to-end encrypted communications as they’re in transit on the wire, they’re not going to be able to get anything other than gobbledygook, where they have to compel the provider to wiretap those conversations for them. And so, we’ve also seen some scattered efforts by US law enforcement to try and force the providers of end-to-end encrypted apps to remove or weaken that encryption in order to enable wiretapping.

Danny O’Brien: So the data that’s stored on the phone, this is the data that’s encrypted and at rest, and the idea behind devices and companies like Apple encrypting that is just a general protection. So if someone steals my phone, they don’t get what I have, right?

Riana Pfefferkorn: Yeah. It’s funny, we used to see from the same heads of police agencies who subsequently got angry at Apple for having stronger encryption, they used to be mad about the rate at which phones were getting stolen.
It wasn’t so much that criminals wanted to steal a several-hundred-dollar hunk of metal and glass. It was what they could get into by being able to easily get into your phone, before the prevalence of strong default passcodes and stronger encryption. There was a treasure trove of information that you could get. Once you were in somebody’s phone, you could get access to their email, you could get access to anything else that they were logged into, or have ways of resetting their logins and getting into those services, all because you’d been able to steal their phone or their iPad or what have you. And so, the change to making it harder to unlock phones wasn’t undertaken by Apple, or subsequently by Google for Android phones, in order to stick it to law enforcement. It was to cut down on that particular angle of attack for security and privacy invasions that criminal rings or hackers or even abusive spouses or family members might use to undermine your own interests in the thing that has been called basically an appendage of the human body by none other than our own Supreme Court.

Cindy Cohn: In some ways it’s the cops versus the cops on this, because the cops who are interested in helping protect us from crime in the first place want us to have our doors locked, want us to have things locked down so that we’re protected if somebody comes and steals from us. By the way, that’s how most people feel as well. Whereas the cops who want to solve crimes want to make it as easy as possible for them to get access to information. And so, in some ways, it’s cop versus cop about this. If you’re asking me, I want to side with the cop who wants to make sure I don’t get robbed in the first place. So I think it’s a funny conversation to be in.
Riana Pfefferkorn: But it’s exactly as you say, Cindy, that there are several different components of the government whose interests are frequently at odds when it comes to issues of security and privacy, in as much as not only is there a divide between law enforcement and the national security and intelligence communities when it comes to encryption, where the folks who come out of working at the NSA then turn around and say, “We try and push for stronger encryption because we know that one part of our job in the intelligence community is to try and ensure the protection of vital information and state secrets and economic protection and so forth,” as opposed to law enforcement who have been the more vocal component of government in trying to push for undermining or breaking encryption. Not only is there this divide between national security and the intelligence community and law enforcement, there’s also a divide between law enforcement and consumer protection agencies, because I think that we find a lot of providers that have sensitive information and incentives to protect it by using strong encryption are in a bind, where on the one hand, they have law enforcement saying, “You need to make it easier for us to investigate people and to conduct surveillance,” and on the other hand, they have state attorneys general, they have the Federal Trade Commission, and other authorities breathing down their necks saying, “You need to be using stronger encryption. You need to be taking other security measures in order to protect your customers’ data and their customers’ data.”

Danny O’Brien: So the argument seems to be from law enforcement, “Well, okay, stop here. No further. We don’t want this to get any better protected.” What are the arguments on the other side? What are the arguments for not only keeping the protections that we have already, but not stopping and continuing to make this stuff safer and more secure?
Riana Pfefferkorn: There are a variety of different pain points. We can look at national security interests. There’s the concept of commercial off-the-shelf software and hardware products, where people in the military or people in government are using the same apps and the same phones that you or I use, sometimes with additional modifications to try and make them more secure. But to the extent that everybody is using the same devices, and that includes CEOs and the heads of financial institutions and high-ranking government officials, then we want those devices to be as secure as they can be right off the line, given that variety of use cases. That’s not to say that average people like you or me, that our interests aren’t important as well, as we continue to face a growing ransomware pandemic and other cybersecurity and data breach and hacking incidents that seem to dominate the headlines. Encryption isn’t necessarily a cure-all for all of those ills, but nevertheless, the greater the degree to which we can encrypt more data in more places, the more difficult it is for attackers to get anything useful in the event that they are able to access information, whether that’s on a device or on a server somewhere. All of this, of course, has been exacerbated by the COVID-19 pandemic. Now that all of us are at home, we’re doing things over electronic mediums that we previously did face-to-face. We deserve just as much privacy and we deserve just as much security as we ever had when we were having those communications and meetings and doctor appointments and therapist appointments face-to-face. And so, it’s important, I think, to continue expanding security protections, including encryption, in order to maintain those expectations, now that so much of what we do has had to be online for the past 18 months in order to protect our own physical health and safety.
Cindy Cohn: Your answer points out that a focus on “it’s just you” and “you have nothing to hide” misses that, on one hand, we’re not all the same. We have very different threats. Some of us are journalists. Some of us are dissidents. Some of us are people who are facing partner abuse. One way or another, we all have a need to be secure and to be able to have a private conversation these days.

Riana Pfefferkorn: One of the areas where I think we frequently undermine people’s interests is children, both the privacy and speech interests of children, and the idea that children somehow deserve less privacy. We have restrictions. Parents have a lot of control over their children’s lives. But children are tomorrow’s adults. And so, I think there’s also been a lot of concern about not normalizing surveillance of children, whether that’s doing school from home over the laptop, a context, again, that we’ve had over the last 18 months, with surveillance of children who are trying to do schoolwork or attend classes online. There has been some concern expressed, for example, by Susan Landau, who’s a computer science professor at Tufts, saying we need to not normalize surveillance and intrusion upon children’s ability to grow, have private thoughts, and become who they’re going to become, so that tomorrow’s adults don’t grow up into a world where they just think that extremely intrusive level of surveillance is normal or desirable in society.

Danny O’Brien: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
Cindy Cohn: As long as we’re talking encryption, I want to talk about the First Amendment, because, of course, that is the tool that we used in the 1990s to free up encryption from government regulation, with the observation that cryptography is just applied math and you can’t regulate math without meeting the First Amendment tests. Is the First Amendment playing a role in the conversation today, or how do you think it’ll play as this conversation goes forward?

Riana Pfefferkorn: It’s a great question, because we have multiple court cases now that recognize that code is speech and is protected speech. To the degree that this is going to come into play in government efforts to either pass a law restricting the ability of providers of devices or communication services to offer strong encryption to their users, or to the degree it comes up in court orders to try and undermine existing strong protections, I think there are First Amendment arguments to be made. When we were doing the friend-of-the-court briefing in the Apple versus FBI case, that came up multiple times, including in Apple’s own briefing, to say, look, we have a First Amendment right not to be forced to hold out a piece of software that the government’s trying to compel us to write that would roll back security protections for this particular phone. We have a First Amendment right not to be forced to stand by something that we don’t actually stand by. We don’t want to stand by that piece of software. We don’t want to have to pretend like we do. That’s a little bit of a nuanced point, I think, to make.
I think often when we talk about the First Amendment in the context of the internet, and this goes into current debates around content moderation as well, it becomes really easy to forget that we’re not necessarily only talking about the First Amendment rights of end users as people who want to speak and receive information, receiving information also being a First Amendment protected right. It’s also the rights of the companies and organizations that build these tools to be able to write and disseminate code as they wish, and not to be forced into cryptographically signing a piece of code that they don’t stand by, such as custom iOS software.

Cindy Cohn: What Apple was saying here is, “Look, we’re selling a secure tool. We are telling people this is a secure tool, and you’re basically making us into liars.” It’s one thing to think about speech rights in the abstract when it comes to corporations, but the government forcing a company to lie about the security of its product does, I think, even if you’re not a fan of corporations, feel like something that the government shouldn’t be able to do. It shouldn’t be able to force me to lie, and it shouldn’t be able to force Apple to lie.

Danny O’Brien: So one of the things that fascinates me about the idea of compelling backdoors into the software produced by companies like Facebook, with WhatsApp and so forth, is what happens next? We’ve seen this a little bit in the rest of the world, because the UK and Australia have effectively introduced laws that are like this. But then they’ve had to hold off on actually introducing those backdoors because the pushback from the companies and from the public has been so huge. So I guess what I’m asking here is, in the nightmare scenario where somebody does introduce this stuff, what happens? Does everybody around the world suddenly write software that has backdoors in it?
Does it really solve the problem that they’re trying to solve here?

Riana Pfefferkorn: It really does feel like a time machine in some ways, that we would end up maybe going right back to the 1990s, when there were two different versions of the Netscape browser. One was domestic-grade crypto and one was export-grade crypto. Do we end up with a world where services have different versions with weaker or stronger encryption, depending on what market they’re being offered in? It seems like we’re in a place right now where if you regulate and mandate weaker encryption at one level, then the encryption moves to a different level. So, for example, WhatsApp just announced that they are allowing users to opt in to end-to-end encrypting their messaging backups. If you don’t trust your cloud service provider not to be somehow scanning for verboten or disfavored content, then you could choose to turn on end-to-end encryption for your WhatsApp backups. Or will people be scared of having them at all? I think one of the nightmare scenarios is that we go from this place where people finally have means of communicating openly and honestly and sincerely with each other, secure in the knowledge that their communications have protection, thanks to end-to-end encryption, or that they are able to encrypt their devices in such a way that they can’t easily be accessed by others. Instead they get chilled into not saying what they want to say or fully realizing and self-actualizing themselves in the way that they would, which is the grim 1984 version. But we’ve seen something of that in terms of the research that’s already come out saying that people self-censor their search queries when they think that their search queries are going to somehow be monitored or logged by the government.
You could envision a world where, if people think they no longer have privacy and security over their communications with each other, or the files that they have on their phone or remotely, they just stop thinking or saying or doing controversial things. That would be a huge loss.

Cindy Cohn: So let’s flip it around a little bit, because we’re fixing the internet here, not celebrating its brokenness. So, what are the values that we’re going to get if we get this right?

Riana Pfefferkorn: When we’re talking about data security, I think we often think of it as: does this protect my private thoughts or less popular opinions? But it would protect everything. That’s all the good and bad stuff that people do online. And I think there will be side benefits to improving the security of everything from e-commerce and online banking to the communications we have in privileged contexts. If you are an attorney, there’s a lot to be said for having stronger encryption for your communications with your clients, and the same goes for other privileged contexts, whether that is with your online therapist or an e-health provider. Instead of “move fast and break things,” it’s almost “move fast and fix things,” where encryption has become more and more ubiquitous simply by being turned on by default, thanks to choices that have been made by the same large providers that, while they are rightly subject to critique for their privacy practices or antitrust practices, or what have you, nevertheless have, because they have such massive user bases, done a lot for security simply by stepping their game up when it comes to their users.

Danny O’Brien: Yeah. We have this phrase at the EFF, which is the tyranny of the defaults, where you get stuck in a particular world, not because you don’t have the freedom to change it, but because everyone gets the default settings, which exclude it.
It would be great to flip that around in this utopia so that the defaults actually are on the side of the user rather than the people who want to peer into this. What discussions would we be having if that was all behind us? What are the subtler debates that you want to get on to and that we would have in our beautiful utopia?

Riana Pfefferkorn: I mean, one thing would be just: what does a world look like where we are not privileging and centering the interests of the state above all others? What does it look like to have an internet and devices and a digital environment that centers users and individual dignity? What does that mean when individual dignity means protection from harm or protection from abuse? What does it mean when individual dignity means the ability to express yourself openly, or to have privacy in your communications with other people? Right now, I think we’re in a place where law enforcement interests always get centered in these discussions. I think also, at least in the United States, there’s been a dawning recognition that the state is not necessarily the one that has a monopoly on public safety, on protection, on justice, and in fact has often been an exponent of injustice and less safety for individuals, particularly people from communities of color and other marginalized groups. And so, if we’re looking at a world that strikes the right balance of safety and freedom and liberty and dignity, there are a lot of different directions that you could go, I think, in exploring what that means that do not play into old assumptions that it means total access by police or other surveilling authorities to everything that we do.

Cindy Cohn: Oh, what a good world. We’re still safe. We have safety. As I’ve said for a long time, you can’t surveil yourself to safety. In that world, we’ve recognized that, and we’ve shifted towards asking how we give people the privacy and dignity they need to have their conversations.
I think I’d be remiss if I didn’t point out that police solved crimes before the internet. They solve crimes now without the ability to break encryption. And I think we said this at the beginning: it’s not like this is blocking police. It might be making things just slightly slower, but at the cost of, again, our security and our dignity. So I think in our future, we still solve crimes. I would love to have a future without crimes, but I think we’re going to have them. But we still solve crimes. But our focus is on how do we empower users?

Riana Pfefferkorn: Right. I think it’s also easy in these discussions to fall into a technological solutionism mindset, where it’s not going to be about only having more police powers for punitive and investigative purposes, or more data being collected or more surveillance being conducted by the companies and other entities that provide technology to us, that provide these media and devices to us, but also about the much harder societal questions of how do we fix misogyny and child abuse? How do we achieve economic safety and environmental justice and all of these other things? Those are much harder questions, and we can’t just expect a handful of people in Silicon Valley or a handful of people in DC to solve our way out of them. I think it almost makes the encryption debate look like the simpler avenue by comparison, because looking solely towards technological and surveillance-based answers allows us the illusion of avoiding those harder questions about how to build a better society.

Cindy Cohn: I think that’s so right. We circle back to why encryption has been interesting to those of us who care about making the world better for a long time, because if you can’t have a private conversation, you can’t take the first step towards making the change you need to make in the world. Well, thank you so much for coming on our podcast, Riana. It’s just been wonderful unpacking all of this with you. We’re huge fans over here at EFF.
Riana Pfefferkorn: Oh, the feeling is mutual. It’s been such a pleasure. This has been a great conversation. Thanks for having me.

Danny O’Brien: Well, as ever, I really enjoyed that conversation with one of the key experts in this area, Riana Pfefferkorn. One of the things I liked is that we touched on some of the basics of this discussion about government access to communications and devices, which is really this idea of the backdoor.

Cindy Cohn: Yeah, but it’s just inevitable, right? I love the image of a crack in the windshield. I mean, once you have the crack in there, you really can’t control what’s going to come through. You just can’t build a door that only good guys can get in and bad guys can’t. I think that came through really clearly. The other thing that became really clear in listening to Riana is how law enforcement’s a bit alone in this. As she pointed out, the national security folks want strong security; they want it for themselves and for the devices that they rely on when they buy them off the shelf. And consumer protection agencies want strong security in our devices and our systems, because we’ve got this cybersecurity nightmare going on right now with data breaches and other kinds of things. And so, all of us who want strong security are standing on the one side, with law enforcement really the lone voice on the other side, wanting us to have weaker security. It really does make you wonder why we keep having this conversation, given how lopsided the two sides are.

Danny O’Brien: What did you think of the First Amendment issues here? Because I mean, you pioneered this analysis and this protection for encryption, that code is speech and that trying to compel people to weaken encryption is like compelling them to lie, or at least compelling them to say what the government wants. How does that fit in now, do you think, based on what Riana was saying?
Cindy Cohn: Well, I think that it’s not central to the policy debates. A lot of this is policy debates. It becomes very central when you start writing these things down into law, because then you’re starting to tell people you can code like this, but you can’t code like that, or you need government permission to be able to code in a particular way. That’s where we started in the Bernstein case. I think, once again, the First Amendment will end up being a backstop to some of the things that law enforcement is pushing for here that really try to control how people speak to each other in this rarefied language of computer code.

Danny O’Brien: We always like talking about the better future that we can get to on the show. I liked Riana’s couching of that in terms of, first of all, the better future happens when we finally realize this conversation is going around in circles and there are more important things to discuss, like actually solving those problems, problems that are really deep and embedded in society and that law enforcement is really chasing after. I like the way that she conveyed that the role of technology is to help us communicate and work together to fix those problems. It can’t be a solution in its own right. It’s not often that people really manage to successfully convey that, because to people outside, I think, it all looks like tech solutions, and there are just some things it works for and some it doesn’t.

Cindy Cohn: Yeah. I really appreciated the vision she gave us of what it looks like if we get this all right. That’s the world I want to live in. Thank you so much to Riana Pfefferkorn for coming on the show and giving us her vision of the utopia we could all have.

Danny O’Brien: And thanks to Nat Keefe and Reed Mathis of Beat Mower for making the music for this podcast. Additional music is used under a Creative Commons license from ccMixter. You can find the credits for each of the musicians and links to the music in our episode notes.
Please visit our website, where you can find more episodes, learn about these issues, and donate to become a member, as well as lots more. Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie, or even an EFF camera cover for your laptop. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. I’m Danny O’Brien.

Cindy Cohn: And I’m Cindy Cohn.

  • Digital Services Act: EU Parliament’s Key Committee Rejects a Filternet But Concerns Remain
    by Christoph Schmon on December 14, 2021 at 8:00 am

Fix What is Broken vs Break What Works: Oscillating Between Policy Choices

The European Union’s Digital Services Act (DSA) is a big deal. It’s the most significant reform of Europe’s internet platform legislation in twenty years, and the EU Commission has proposed multiple new rules to address the challenges brought by the increased use of services online. While the draft proposal got many things right — for example by setting standards on transparency, proposing limits on content removal, and allowing users to challenge censorship decisions — signals from the EU Parliament showed that we were right to worry about where the DSA might be headed. Leading politicians suggested a dystopian set of rules that promote the widespread use of error-prone upload filters. Proponents of rules that would make any “active platform” (which one isn’t?) potentially liable for the communications of its users showed that not everyone in Parliament had learned from the EU’s controversial Copyright Directive, which turns online platforms into the internet police, with a special license to scan and filter users’ content. Together with our allies, we called on the EU Parliament to reject the idea of a filternet, made in Europe, and to refrain from undermining pillars of the e-Commerce Directive that are crucial to a free and democratic society. As any step in the wrong direction could reverberate around the world, we also added international voices to the debate, such as the Digital Services Act Human Rights Alliance, which stands for transparency, accountability, and human rights-centered lawmaking.
Committee Vote: Parliament Listened to Civil Society Voices

In this week’s vote, EU members of parliament (MEPs) showed that they listened to civil society voices: Even though the key committee on internal market affairs (IMCO) did not follow in the footsteps of the ambitious DSA reports from last year, MEPs took a stance for the protection of fundamental rights and agreed to:

– Preserve the liability exemptions for internet companies: Online intermediaries will continue to benefit from the “safe harbor” rules, which ensure that they cannot be held liable for content provided by users unless they know it is illegal and don’t act against it (Art 5);

– Uphold and strengthen the ban on mandated monitoring: Under current EU internet rules, general monitoring of information transmitted or stored by intermediary service providers is banned, guaranteeing users’ freedom of expression and their rights to personal data as memorialized in the Fundamental Rights Charter, which enshrines the fundamental rights people enjoy in the EU. MEPs preserved this important key principle and clarified that monitoring should be imposed neither by law nor de facto, through automated or non-automated means (Art 7(1));

– Abstain from introducing short deadlines for content removals: The Committee recognized that strict and short time frames for content removals, as imposed under dangerous internet legislation like Germany’s NetzDG or the EU’s controversial Copyright Directive, will lead to removals of legitimate speech and opinion, impinging on the right to freedom of expression;

– Not interfere with private communication: The Committee acknowledged the importance of privacy online and rejected measures that would force companies to analyze and indiscriminately monitor users’ communications on private messaging services like WhatsApp or Signal.
Even though Parliamentarians were not ambitious enough to call for a general right to anonymity, they agreed that Member States should neither prevent providers of intermediary services from offering end-to-end encrypted services nor impose monitoring measures that could limit the anonymous use of internet services (Art 7(1b)(1c)). We welcome these agreements and appreciate the committee’s commitment to approving important procedural justice rights for users as recommended by EFF, such as the reinstatement of content and accounts that have been removed by mistake (Art 17(3)), and the important territorial limitation of take-down orders by courts or authorities (Art 8(2b)), which makes clear that one country’s government shouldn’t dictate what residents of other countries can say, see, or share online. We also applaud the Committee’s strong focus on transparency—platforms must explain how content moderation works and the number of content moderators allocated for each official language—and for strengthening risk assessment mechanisms—platforms must take into account language- and region-specific risks when assessing systemic risks resulting from their service. Lastly, we commend the Committee’s inclusion of an ambitious dark patterns prohibition: Platforms are banned from using misleading design or functionality choices that impair users’ ability to control and protect their internet experience.

Concerns Remain: Enforcement Overreach, Trust Issues, and Walled Gardens

However, we continue to be worried that the DSA could lead to enforcement overreach and assign trust to entities that shouldn’t necessarily be trusted. If the DSA becomes law, online platforms would be required to hand over sensitive user information to non-judicial authorities at their request. While we acknowledge the introduction of procedural safeguards—platforms would be granted the right to lodge an effective remedy—essential human rights guarantees are still missing.
Other sections of the bill, even though they come with a number of positive changes compared to the EC’s original proposal, still favor the powerful. The text still comes with the option of awarding the status of a “trusted flagger” to law enforcement agencies or profit-seeking industry organizations, whose notices must be given priority over notifications submitted by users. Even though conditions for becoming a trusted flagger were tightened and accompanied by comprehensive reporting obligations, further improvements are necessary. Parliamentarians also failed to follow the lead of their colleagues, who recently took a first step towards a fair and interoperable market in their vote on the Digital Markets Act (DMA). Whereas DMA amendments called for functional interaction between messaging services and social networks, MEPs did not back key provisions that would ensure interoperability of services and instead went with a lofty, non-binding political declaration of intent in the DSA. Only incremental improvements were put in place on the limits of surveillance advertising and on a requirement that platforms appoint in-country legal representatives, a requirement that is unaffordable for many small non-EU providers. We also agree with the criticism that centralizing enforcement power in the hands of the EU Commission comes with democratic deficits and could lead to corporate capture. There are further aspects of the committee’s position that require re-working, such as the last-minute approval of mandatory cell phone registration for pornographic content creators, which poses a threat to digital privacy and could risk exposing sensitive information of vulnerable content creators to data leaks. We will analyze the details of the committee position in the coming weeks and will work to ensure that EU lawmakers agree on a text that preserves what works and fixes what is broken.

  • Dream Job Alert: Senior Fellow for Decentralization at EFF
    by Jon Callas on December 13, 2021 at 10:20 pm

We have an amazing opportunity to join the EFF team. We are hiring a Senior Fellow of Decentralization, a public advocacy position that will help establish EFF as a leader on the civil liberties implications of decentralizing the Internet. You’ll help chart a course for EFF to have real impact in the public conversations about decentralization of the Internet as it fits into our mission of ensuring that technology supports freedom, justice, and innovation for all people of the world. Through fierce and informed public advocacy, blogging, and social media, this Fellow will help us create a future internet that is more decentralized, resilient, and protective of both civil liberties and innovation. Apply today. Note that this is a two-year fellowship, with the possibility of extending for one to two additional years depending on the outcomes of the first two years and EFF’s needs.

The landscape of decentralization is broad. Technologies that can re-decentralize the internet, provide for increased competition, and provide resources to those who are not being served properly by the existing order point to the future we would like to help bring about. There are three major areas of activity where we expect you to work:

– New protocols that provide internet services facilitating competition, freedom of expression, and privacy, and that anyone can set up and use effectively. For example, Mastodon, Blue Sky, Diaspora, Tor onion services, and Manyverse.

– “Web3” or “DWeb” technologies that provide a decentralized infrastructure using blockchain technology, especially as they support a more privacy-protective experience compared to today’s surveillance-facilitating technologies.

– “DeFi” technologies, including cryptocurrencies and DAOs, that can potentially reorganize finance and payment systems for a more privacy-protective and equitable use of money.

We will prioritize the work in that order. We are open to many different types of candidates for this role.
We’re interested in candidates who span advocacy and implementation. It’s most important for candidates to be able to explain, think about, and advocate for resilient decentralized systems that protect and preserve civil liberties and innovation. Secondarily, someone with the technical skills to use these systems and to help us use them ourselves is helpful, but not required. And we value diversity in background and life experiences. EFF is committed to supporting our employees. That’s why we’ve got competitive salaries, incredible benefits (including rental assistance, student loan reimbursement, and fantastic healthcare), and ample policies for paid time off and holidays. Please check out our job description and apply today! And if you have a question about this role, please email [email protected] Even if this job isn’t the right fit for you, please take a moment to spread the word on social media. We’re looking for an unusual person who is excited to help bring about a decentralized, locally empowered digital world.

  • EFF to Federal Appeals Courts: Hold Police Accountable for Violating Civilians’ Right to Record
    by Mukund Rathi on December 13, 2021 at 6:36 pm

You have the right under the First Amendment to livestream and record on-duty police officers, and officers who interfere with that right should be held accountable. That’s what EFF told the Fourth and Tenth Circuit Courts of Appeals in amicus briefs filed in November. EFF is supporting the plaintiffs in those cases, Sharpe v. Winterville and Irizarry v. Yehia, who are suing police officers for violating their right to record. After police officers beat Dijon Sharpe during a traffic stop, he decided that the next time he was in a car that was pulled over, he would livestream and record the police. So in October 2018, Sharpe, sitting in the passenger seat of a stopped car, took out his phone and started livestreaming on Facebook. When an officer saw that he was livestreaming, he grabbed Sharpe and tried to take the phone. Sharpe filed a civil rights lawsuit for the interference; a federal district court dismissed his claims in two opinions, and he appealed. Abade Irizarry was recording a traffic stop as a bystander when another police officer interfered. The officer shined lights into Irizarry’s phone camera, stood between the camera and the traffic stop, menaced Irizarry with his car, and blasted him with an air horn. Irizarry appealed after a federal district court dismissed his lawsuit. EFF thinks these officers should be held accountable. The First Amendment protects people who gather and share information, especially when it is about official misconduct and advances government accountability. Police body cameras point towards the public, effectively surveilling those already being policed. The civilian’s camera, by contrast, is appropriately pointed towards the officer. Ordinary people’s livestreams and recordings of the police have always been necessary to inform the public—before the police murder of George Floyd went viral in June 2020, there was the beating of Rodney King in March 1991.
But in the digital age, with the proliferation of smartphones with cameras and access to social media platforms, the right to record has become even more important, powerful, and accessible. Earlier this year, the Pulitzer Prize board awarded a special citation to Darnella Frazier, who recorded the shocking police murder of George Floyd on her phone. The board commended her for “courageously recording the murder of George Floyd, a video that spurred protests against police brutality around the world, highlighting the crucial role of citizens in journalists’ quest for truth and justice.” Recording the police, while constitutionally protected, does have risks. Unfortunately, police officers often retaliate against people who exercise their right to record. In those situations, it is particularly important for people to be able to livestream their encounters, as Sharpe did. If a person livestreams an encounter with a police officer, they can publish at least part of the encounter even if the officer retaliates and forces them to stop. The officers in Sharpe’s case claim, without evidence, that because livestreaming gives viewers real-time information about where officers are, it poses a greater risk to officer safety than recording. However, a bifurcated right to record but not livestream would confuse people, leave officers in the impractical position of having to verify what a person is doing with their phone, and stifle police accountability. Finally, EFF argued that qualified immunity should not protect officers who violate someone’s clearly established right to livestream and record. Not only did the Supreme Court long ago decide that the First Amendment protects gathering and publishing information, but several federal circuits have specifically applied this to recording the police. Police officers know that when they use their extraordinary powers, the public has the right to watchdog and record them.
When the police violate that right, the public must be able to hold them accountable.

  • This Is Not the Privacy Bill You’re Looking For
    by Hayley Tsukayama on December 13, 2021 at 5:58 pm

    Lawmakers looking for a starting place on privacy legislation should pass on the Uniform Law Commission’s Uniform Personal Data Protection Act (UPDPA). The Uniform Law Commission (ULC) seeks to write model legislation that can be adopted in state legislatures across the country to set national standards. Sadly, the ULC has fumbled its consumer privacy bill and created, in the UPDPA, a model bill that is weak, confusing, and toothless.

A strong privacy bill must place consumers first. EFF has laid out its top priorities for privacy laws, which include a full private right of action that allows people to act as their own privacy enforcers, and measures that prevent companies from discriminating (by charging more or offering less) against those who wish to protect their privacy by exercising their rights. EFF also advocates for an opt-in consent model that requires companies to obtain a person’s permission before they collect, share, or sell their data, rather than an opt-out model.

The UPDPA falls short on many of these fronts. And why? Because, despite years of evidence that companies will not protect consumer privacy on their own, the UPDPA defers to company complaints that respecting people’s privacy is a burden. In fact, UPDPA Committee Chairman Harvey Perlman openly admitted that one of the drafting committee’s main goals was to lower business compliance costs. By lowering its standards to coax companies into compliance, the UPDPA leaves consumers twisting in the wind.

By seeking a middle path on some of the biggest disagreements between consumer advocates and companies looking to do as little as possible to change their practices, the UPDPA has come up with “compromises” that work for no one. Company advocates find its suggestions confusing, as it sets up yet another framework for compliance. Consumer advocates find the “protections” in the bill hollow.
It’s no surprise that one Oklahoma legislator told the International Association of Privacy Professionals the bill was “empty.” “There appears to be nothing else substantive in this bill besides an obligation for the data company to provide a voluntary consent standard,” he said. “Essentially those in control of the data get to decide what their policies and procedures are going to be. So this law is empty because it’s saying [businesses] have to come up with something to address privacy, but we’re not telling you exactly what it is.”

Consumer Rights, But Defined by Companies

At its core, the bill hinges on whether a company uses your information for purposes that are either “compatible” or “incompatible” with the reasons the company originally collected the information. So, for example, you might allow a company to collect your location information if it’s going to do something for you related to where you are, such as identify certain restaurants near you. This kind of guardrail might sound good at first blush; in fact, it’s in line with an important privacy principle: companies should only use a consumer’s information for the kinds of purposes the consumer gave permission for in the first place.

However, the UPDPA undermines the meaning of “compatible purpose,” providing no real protections for ordinary people. First, individuals have no say over whether the purposes companies ultimately use their data for are “compatible” with the original purpose of collection; that determination is left entirely up to companies. This gives a company wide latitude to process people’s information for whatever reason it deems in keeping with the reason it collected it, including processing that a person wouldn’t want at all.
For example, if the company collecting your location information to tell you about nearby restaurants decided it also wanted to use that data to track your regular travel patterns, it could unilaterally classify that new use as supposedly “compatible” with the original use, without asking you to approve it. The UPDPA also defines targeted advertising as a “compatible purpose” that requires no extra consent, despite targeted ads being one of the most commonly derided uses of personal information. In fact, when consumers are given the choice, they overwhelmingly choose not to participate in advertising that tracks their behavior. This contorts ideas meant to protect privacy and lets an unwanted privacy invasion slip under the lowest bar possible.

Furthermore, when a company uses a consumer’s data for an incompatible purpose, the bill only requires the company to give the consumer notice and an opportunity to opt out. In other words, if a weather app had your permission to collect your location information for the purpose of locally accurate forecasts, but then decided to share it with a bunch of advertisers, it wouldn’t have to ask for your permission first. It would simply have to give you a heads-up that “we share with advertisers” and the option to opt out, likely in a terms-and-conditions update that no one ever reads.

Other rights in this bill, including those EFF supports such as the right to access one’s data and the right to correct it, are severely limited. For example, the bill gives companies permission to ignore correction requests that they deem “inaccurate, unreasonable, or excessive.” They can decide which requests meet these criteria without providing justification. That gives companies far too much leeway to ignore what their customers want. And while the bill gives consumers the right to access their data, it does not give them the right to a machine-readable electronic copy, what is often called the right to data portability.
The UPDPA also comes up short on one of EFF’s most important privacy principles: making sure that consumers aren’t punished for exercising their privacy rights. Even in cases where the bill requires a company to get permission to use data for an “incompatible data practice,” companies can offer a “reward or discount” in exchange for that permission. In other words, you can have your human right to privacy only if you’re willing and able to pay for it. As we have said before, this type of practice frames our privacy as a commodity to be traded away, rather than a fundamental right to be protected. This is wrong. Someone who values their privacy but is struggling to make ends meet will feel pressured to surrender their rights for a very small gain, maybe $29 off a monthly phone bill. Privacy legislation should rebalance power in favor of consumers, not double down on a bad system of corporate overreach.

The UPDPA Has Big Blind Spots…

The UPDPA also fails to address how data flows between private companies and government. It’s not alone in this regard: while the European General Data Protection Regulation (GDPR) covers both government and private entities, many state privacy laws in the United States focus on just one or the other. However, there is a growing need to address the ways that data flows from private entities to government, and the UPDPA largely turns a blind eye to this threat. For example, the bill considers data “publicly available,” and therefore exempt from its protections, if it is “observable from a publicly accessible location.” That would seem to exempt, for example, footage from Ring cameras that people place on their doors, which document what is happening on adjacent public sidewalks. Information from Ring and other private cameras needs to be protected, particularly against indiscriminate sharing with law enforcement agencies. This is yet another example of how the model legislation ignores pressing privacy concerns.
The definition of publicly available information would also seemingly exempt information posted on limited-access social media sites such as Facebook entirely, including from the bill’s requirements to adhere to privacy policies and security practices. Specifically, the UPDPA exempts “a website or other forum with restricted access if the information is available to a broad audience.” That is far too broad, and willfully ignores the ways private companies feed information from social media and other services into the hands of government agencies.

…And No Teeth

Finally, the UPDPA has gaping holes in its enforcement provisions. Privacy laws are only as good as their teeth. That means strong public enforcement and a strong private right of action. This bill has neither. Worst of all, it expressly creates no private right of action, cutting people off from the most obvious avenue for defending themselves against a company that abuses their privacy: a lawsuit. Many privacy statutes contain a private right of action, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, credit reporting, and cable subscriptions. So do many other kinds of laws that protect the public, including federal laws on clean water, employment discrimination, and access to public records. There’s no reason consumer privacy should be any different. By denying people this obvious and powerful tool to enforce the few protections they gain under this law, the UPDPA fails the most crucial test.

State attorneys general do have the power to enforce the bill, but they have broad discretion to choose not to. That’s too big a gamble to take with privacy. Attorneys general may be understaffed or suffer regulatory capture; in those cases, consumers have no recourse whatsoever to be made whole for violations of the few privacy protections this bill provides.
Don’t Duplicate This Bill

While the UPDPA wrestles with many of the most controversial discussions in privacy legislation today, it falls short of providing a meaningful resolution to any of them. It grossly fails to address the privacy problems ordinary people face (invasive data collection, poor control over how their information is used, no clear means to fight for themselves) that put data privacy on the agenda in the first place. Lawmakers, federal or state, should not duplicate this hollow bill and lower the bar on privacy.

  • EFF, on Behalf of a Saudi Human Rights Activist, Sues Spyware Maker DarkMatter for Violating U.S. Anti-Hacking Laws and International Human Rights Law
    by Carlos Wertheman on December 10, 2021 at 4:02 am

    Loujain AlHathloul, a prominent women’s rights defender, filed a complaint against DarkMatter and former U.S. intelligence operatives who designed malware to hack her phone.

Portland, Oregon: The Electronic Frontier Foundation (EFF) today filed a lawsuit on behalf of prominent Saudi human rights activist Loujain AlHathloul against spyware company DarkMatter and three of its former executives for illegally and covertly hacking her iPhone to track her communications and whereabouts.

AlHathloul is among the victims of an illegal spying program created and run by former U.S. intelligence operatives, including the three defendants named in the lawsuit, who worked for a U.S. company hired by the United Arab Emirates (UAE) in the wake of the Arab Spring protests to identify and monitor activists, journalists, rival foreign leaders, and perceived political enemies.

Reuters broke the news of the hacking program, called Project Raven, in 2019, reporting that after the UAE moved the surveillance work to the Emirati firm DarkMatter, American operatives who had learned their tradecraft at the NSA and other U.S. intelligence agencies ran DarkMatter’s hacking program, which targeted human rights activists like AlHathloul, political dissidents, and even Americans residing in the United States.

DarkMatter executives Marc Baier, Ryan Adams, and Daniel Gericke, working for their client the UAE, which was in turn acting on behalf of Saudi Arabia, oversaw the hacking project, which exploited a security vulnerability in the iMessage app to locate and monitor targets. Baier, Adams, and Gericke, all former members of U.S. intelligence or military agencies, designed and operated the UAE’s cyber-surveillance program, also known as Project DREAD (Development Research Exploitation and Analysis Department), using malicious code purchased from a U.S. company.

Baier, who resides in the UAE, Adams, who lives in Oregon, and Gericke, who lives in Singapore, admitted in September to violating the Computer Fraud and Abuse Act (CFAA) and the prohibition on selling sensitive military technology, under a non-prosecution agreement with the U.S. Department of Justice.

“Companies that provide surveillance software and services to repressive governments, resulting in human rights abuses, must be held accountable,” said EFF Civil Liberties Director David Greene. “The harm done to Loujain AlHathloul cannot be erased, but this lawsuit is a step toward accountability.”

AlHathloul, whose statement on the case appears below, is a pioneer of the movement to advance women’s rights in Saudi Arabia, where women were banned from driving until 2018, are required by law to obtain a male guardian’s permission to work or travel, and suffer discrimination and violence. She rose to prominence through her advocacy for women’s right to drive, and put herself at great risk in 2014 by announcing her intention to drive across the border from the UAE into Saudi Arabia and filming herself behind the wheel. She was stopped at the Saudi border and jailed for 73 days. Undeterred, AlHathloul continued to advocate for women’s rights, and continued to be a target of the kingdom’s efforts to quash dissent.

DarkMatter intentionally directed code at Apple’s servers in the United States to access and place malware on AlHathloul’s iPhone, a violation of the CFAA, EFF says in the complaint, filed in federal court in Oregon. The phone was initially hacked in 2017, giving access to her text messages, emails, and real-time location data. Later, AlHathloul was driving on a highway in Abu Dhabi when Emirati security services arrested her and forcibly flew her to Saudi Arabia, where she was imprisoned twice, including in a secret prison where she was subjected to electric shocks, flogging, and threats of rape and death.

“Project Raven went beyond the behavior we have seen from NSO Group, which has repeatedly been revealed to have sold software to authoritarian governments that use its tools to spy on journalists, activists, and dissidents,” said EFF Cybersecurity Director Eva Galperin. “DarkMatter didn’t just provide the tools; they oversaw the surveillance program themselves.”

While EFF has long pressed for the need to reform the CFAA, this case represents a straightforward application of the statute to exactly the kind of egregious violation of users’ security that everyone agrees the law was intended to address.

“This is a clear-cut case of device hacking, in which DarkMatter operatives broke into AlHathloul’s iPhone without her knowledge to insert malware, with horrific consequences,” said EFF attorney and Stanton Fellow Mukund Rathi. “This kind of crime is what the CFAA was meant to punish.”

In addition to the CFAA violations, the complaint alleges that Baier, Adams, and Gericke aided and abetted crimes against humanity, because the hacking of AlHathloul’s phone was part of the UAE’s widespread and systematic attack against human rights defenders, activists, and other critics of the UAE and Saudi governments. The law firms Foley Hoag LLP and Boise Matthews LLP are serving as co-counsel with EFF in the case.

Statement of Loujain AlHathloul on the case:

“I never imagined I would be celebrated for defending what I believed was right. My early realization of the privilege of speaking out, loudly and openly, on behalf of women and myself drove me to join the ranks of human rights defenders.

“In a 2018 article titled ‘Hijacked Freedoms,’ I expressed my understanding of freedom as safety and peace: ‘Safety in expression, in feeling protected, in living and loving. [And] peace to reveal the purest and most sincere humanity rooted deep in our souls and minds, without facing unforgivable consequences. Denied safety and peace, I lost my freedom. Forever?’

“Previously, I had only limited knowledge of the kinds of harm a human rights defender, or any individual standing up for their rights, could face, particularly in the online world. Today, my understanding of safety includes advocating for online security, as well as protection from abuses of power by technology companies. The latter must be considered a basic and natural right in our digital reality.

“No government or individual should tolerate the misuse of malicious spyware to deter human rights or endanger the voice of the human conscience. That is why I have chosen to stand up for our collective right to remain safe online and to limit government-backed cyber abuses of power. I remain aware of the privilege I may have to act on my beliefs.

“I hope this case inspires others to confront all kinds of cybercrime and to create a safer space for all of us to grow, share, and learn.”

For the complaint: For more information about state-sponsored malware: Contact: Karen Gullo
