EFF's Deeplinks Blog: Noteworthy news from around the internet

  • Coalition Against Stalkerware Expands Membership
    by Eva Galperin on July 6, 2020 at 10:08 pm

    Privacy and security are both team sports: no one person or organization changes the landscape alone. This is why coalition-building is often at the heart of activism. In 2019, EFF was one of the ten organizations that founded the Coalition Against Stalkerware, a group of security companies, non-profit organizations, and academic researchers that supports survivors of domestic abuse by working together to address technology-enabled abuse and raise awareness about the threat posed by stalkerware. Among the coalition's early achievements are creating an industry-wide definition of stalkerware, encouraging research into its proliferation, and convincing anti-virus companies to detect and report stalkerware as malicious or unwanted programs.

Stalkerware is the class of apps sold commercially for the purpose of covertly spying on another person's device. They can be blatantly marketed as tools for "catching a cheating spouse," or they may euphemistically describe themselves as tools for tracking your children's or employees' devices. The key defining feature of stalkerware is that it is designed to operate covertly, tricking the person being monitored into believing that they are not.

Less than a year after its founding, the coalition has more than doubled in size. The original ten partners (Avira, Electronic Frontier Foundation, the European Network for the Work with Perpetrators of Domestic Violence, G DATA Cyber Defense, Kaspersky, Malwarebytes, The National Network to End Domestic Violence, NortonLifeLock, Operation Safe Escape, and WEISSER RING) have been joined by AEquitas with its Stalking Prevention, Awareness, and Resource Center (SPARC), Anonyome Labs, AppEsteem Corporation, bff Bundesverband Frauenberatungsstellen und Frauennotrufe, Centre Hubertine Auclert, Copperhead, Corrata, Commonwealth Peoples' Association of Uganda, Cyber Peace Foundation, F-Secure, and Illinois Stalking Advocacy Center.
The coalition is especially excited about adding organizations in India and Uganda, because stalkerware is a global problem that requires global solutions beyond the countries and regions represented by the coalition’s founding organizations. The Coalition has also produced an explanatory video, which describes common indicators to check for if a user thinks their device has been infected with stalkerware. The video is available in six languages: English, Spanish, Italian, German, French, and Portuguese. The Coalition’s website also contains resources detailing what stalkerware is, how it works, how to detect it, and how to protect your devices, as well as contact information for many local victims’ services organizations. Coalition members have also released documentation and evidence collection apps (DocuSAFE and NO STALK), for use on trusted devices to collect, store, and share evidence of abuse with law enforcement and survivor support organizations.  The fight is just beginning, but norms are already changing.

  • EFF Joins Coalition Calling On the EU to Introduce Interoperability Rules
    by Svea Windwehr on July 6, 2020 at 8:57 pm

    Today, EFF sent a joint letter to European Commission Executive Vice-President Margrethe Vestager, highlighting the enormous potential of interoperability to help achieve the EU's goals for Europe's digital future. EFF joins a strong coalition representing European civil society organizations, entrepreneurs, and SMEs. We are calling on the European Commission to consider the role interoperability can play in ensuring that technology creates a fair and competitive economy and strengthens an open, democratic, and sustainable society. Specifically, we urge the Commission to include specific measures requiring interoperability of large Internet platforms in the forthcoming Digital Services Act package. This will strengthen user empowerment and competition in the European digital single market.

Interoperability mandates will enable users to exercise greater control over their online experiences. No longer confronted with the binary choice of either staying on dominant platforms that do not serve their needs or losing access to their social network, users will be able to choose freely the tools that best respect their privacy, security, or accessibility preferences. Interoperability rules will also be crucial to ensure a dynamic market in which new entrants and innovative business models have a fair shot at convincing users of their value. The upcoming Digital Services Act is a crucial, and rare, opportunity to achieve these goals. This is not just any regulatory reform, but the most significant reform project the European Union has undertaken in two decades. And we intend to fight for users' rights, transparency, anonymity, and limited liability for online platforms every step of the way.
Guided by our policy principles, we will work with and support the Commission in its efforts to develop a Digital Services Act that best addresses the challenges for Europe's digital future, and we will submit detailed responses to the public consultation now underway. The full text of the letter to Executive Vice-President Margrethe Vestager is available here.

  • EFF Files Amicus Brief Arguing Geofence Warrants Violate the Fourth Amendment
    by Jennifer Lynch on July 2, 2020 at 10:53 pm

    Should the police be able to force Google to turn over identifying information on every phone within a certain geographic area—potentially hundreds or thousands of devices—just because a crime occurred there? We don’t think so. As we argued in an amicus brief filed recently in People v. Dawes, a case in San Francisco Superior Court, this is a general search and violates the Fourth Amendment. The court is scheduled to hear the defendant’s motion to quash and suppress evidence on July 7, 2020. In 2018, police in San Francisco were trying to figure out who robbed a house in a residential neighborhood. They didn’t have a suspect. Instead of using traditional investigative techniques to find the culprit, they turned to a new surveillance tool that’s been gaining interest from police across the country—a “geofence warrant.” Unlike traditional warrants for electronic records, a geofence warrant doesn’t start with a suspect or even an account; instead it directs Google to search a vast database of location history information to identify every device (for which Google has data) that happened to be in the area around the time of the crime, regardless of whether the device owner has any link at all to the crime under investigation. Because these investigations start with a location before they have a suspect, they are also frequently called “reverse location” searches. Google has a particularly robust, detailed, and searchable collection of location data, and, to our knowledge, it is the only company that complies with these warrants. Much of what we know about the data Google provides to police and how it provides that data comes from a declaration and an amicus brief it filed in a Virginia case called United States v. Chatrie. 
According to Google, the data it provides to police comes from a database called "Sensorvault," where it stores location data for a service called "Location History." Google collects Location History data from several sources, including Wi-Fi connections, GPS and Bluetooth signals, and cellular networks. This makes it much more precise than cell-site location information and allows Google to estimate a device's location to within 20 meters or less. This precision also allows Google to infer where a user has been (such as a ski resort), what they were doing at the time (such as driving), and the path they took to get there. Location History is offered to users on both Android and iOS devices, but users must opt in to data collection. Google states that only about one-third of its users have opted in to Location History, but this represents "numerous tens of millions of Google users."

Police have been increasingly seeking access to this treasure trove of data over the last few years via geofence warrants. These warrants reportedly date to 2016, but Google states that it received 1,500% more geofence warrants in 2018 than in 2017, and 500% more in 2019 than in 2018. According to the New York Times, the company received as many as 180 requests in a single week in 2019. Geofence warrants typically follow a similar multi-stage process, which appears to have been created by Google. In the first stage, law enforcement identifies one or more geographic areas and time periods relevant to the crime. The warrant then requires Google to provide information about any devices, identified by a numerical identifier, that happened to be in the area within the given time period. Google says that, to comply with this first stage, it must search through its entire store of Location History data to identify responsive records: data on tens of millions of users, nearly all of whom are located well outside the geographic scope of the warrant.
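That first stage is, in effect, a dragnet query over the whole database. A minimal sketch of the stage-one filtering step (all data, field names, and coordinates here are hypothetical illustrations; Google's actual systems are far more complex):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    device_id: str       # pseudonymous numerical identifier
    lat: float
    lon: float
    timestamp: datetime

def stage_one(records, lat_range, lon_range, start, end):
    """Return the de-identified device IDs seen inside the geofence
    during the warrant's time window. Note that every record in the
    store must be examined to answer the query."""
    hits = set()
    for r in records:
        if (lat_range[0] <= r.lat <= lat_range[1]
                and lon_range[0] <= r.lon <= lon_range[1]
                and start <= r.timestamp <= end):
            hits.add(r.device_id)
    return hits

# Hypothetical example: two devices, only one inside the fence.
records = [
    LocationRecord("dev-001", 37.7750, -122.4194, datetime(2018, 12, 1, 21, 5)),
    LocationRecord("dev-002", 40.7128, -74.0060, datetime(2018, 12, 1, 21, 5)),
]
ids = stage_one(records, (37.77, 37.78), (-122.42, -122.41),
                datetime(2018, 12, 1, 21, 0), datetime(2018, 12, 1, 22, 0))
```

The point of the sketch is structural: the query has no suspect as input, only a box and a time window, so every user's data is touched regardless of any connection to the crime.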
Google has also said that the volume of data it produces at this stage depends on the size and nature of the geographic area and the length of time covered by the warrant, which vary considerably from one request to another; the company once provided the government with identifying information for nearly 1,500 devices. After Google releases the initial de-identified pool of responsive data, police then, in the second stage, demand that Google provide additional location history, outside the initially defined geographic area and time frame, for a subset of users that the officers, at their own discretion, determine are "relevant" to their investigation. Finally, in the third stage, officers demand that Google provide identifying information for a smaller subset of devices, including the user's name, email address, device identifier, phone number, and other account information. Again, officers rely solely on their own discretion to determine this second subset and which devices to target for further investigation.

There are many problems with this kind of search. First, most of the information provided to law enforcement in response to a geofence warrant does not pertain to individuals suspected of the crime. Second, as not all device owners have opted in to Location History, search results are both over- and under-inclusive. Finally, Google has said there is only an estimated 68% chance that a user is actually where Google thinks they are, so the users Google identifies in response to a geofence warrant may not even be within the geographic area defined by the warrant (and are therefore outside its scope). Unsurprisingly, these problems have led to investigations that ensnare innocent individuals. In one case, police sought detailed information about a man in connection with a burglary after seeing his travel history in the first step of a geofence warrant.
However, the man's travel history was part of an exercise tracking app he used to log months of bike rides—rides that happened to take him past the site of the burglary. Investigators eventually acknowledged he should not have been a suspect, but not until after the man hired an attorney and after his life was upended for a time. This example shows why geofence warrants are so pernicious and why they violate the Fourth Amendment. They lack particularity because they don't properly and specifically describe an account or a person's data to be seized, and they result in overbroad searches that can ensnare countless people with no connection to the crime. These warrants leave it up to the officers to decide for themselves, based on no concrete standards, who is a suspect and who isn't. The Fourth Amendment was written specifically to prevent these kinds of broad searches. As we argued in Dawes, a geofence warrant is a digital analog to the "general warrants" issued in England and Colonial America that authorized officers to search anywhere they liked, including people or homes, simply on the chance that they might find someone or something connected with the crime under investigation. The chief problem with searches like this is that they leave too much of the search to the discretion of the officer and can too easily result in general exploratory searches that unreasonably interfere with a person's right to privacy. The Fourth Amendment's particularity and probable cause requirements, as well as its requirement of judicial oversight, were designed to prevent this. Reverse location searches are the antithesis of how our criminal justice system is supposed to work.
As with other technologies that purport to pull a suspect out of thin air—like face recognition, predictive policing, and genetic genealogy searches—there's just too high a risk they will implicate an innocent person, shifting the burden of proving guilt from the government to the individual, who now has to prove their innocence. We think these searches are unconstitutional, even with a warrant. The defendant's motion to quash the geofence warrant and motion to suppress the evidence will be heard in San Francisco Superior Court on July 7, 2020.

Related case: Carpenter v. United States

  • The New EARN IT Bill Still Threatens Encryption and Free Speech
    by Joe Mullin on July 2, 2020 at 9:50 pm

    The day before a committee debate and vote on the EARN IT Act, the bill’s sponsors replaced their bill with an amended version. Here’s their new idea: instead of giving a 19-person federal commission, dominated by law enforcement, the power to regulate the Internet, the bill now effectively gives that power to state legislatures.  And instead of requiring that Internet websites and platforms comply with the commission’s “best practices” in order to keep their vital legal protections under Section 230 for hosting user content, it simply blows a hole in those protections. State lawmakers will be able to create new laws allowing private lawsuits and criminal prosecutions against Internet platforms, as long as they say their purpose is to stop crimes against children.  The whole idea behind Section 230 is to make sure that you are responsible for your own speech online—not someone else’s. Currently, if a state prosecutor wants to bring a criminal case related to something said or done online, or a private lawyer wants to sue, in nearly all cases, the prosecutor has to seek out the actual speaker. They can’t just haul a website owner into court because of the user’s actions. But that will change if EARN IT passes. That’s why we sent a letter [PDF] yesterday to the Senate Judiciary Committee opposing the amended EARN IT bill. Section 230 protections enabled the Internet as we know it. Despite the politicized attacks on Section 230 from both left and right, the law actually works fine. It’s not a shield for Big Tech—it’s a shield for everyone who hosts online conversations. It protects small messaging and email services, and every blog’s comments section.  Once websites lose Section 230 protections, they’ll take drastic measures to mitigate their exposure. That will limit free speech across the Internet. They’ll shut down forums and comment sections, and cave to bogus claims that particular users are violating the rules, without doing a proper investigation. 
We've seen false accusations succeed in silencing users time and again in the copyright space, and even used to harass innocent users. If EARN IT passes, the range of possibilities for false accusations and censorship will expand.

EARN IT Still Threatens Encryption

When we say the original EARN IT was a threat to encryption, we're not guessing. We know that a commission controlled by Attorney General William Barr will try to ban encryption, because Barr has said many times that he thinks encrypted services should be compelled to create backdoors for police. The Manager's Amendment, approved by the Committee today, doesn't eliminate this problem; it just empowers more than 50 jurisdictions to follow Barr's lead in banning encryption. An amendment by Sen. Patrick Leahy (D-VT), also voted into the bill, purports to put encryption outside the states' reach. It's certainly an improvement, but we're still concerned that the amended bill could be used to attack encryption. Sen. Leahy's amendment prohibits holding companies liable because they use "end-to-end encryption, device encryption, or other encryption services." But the bill still encourages state lawmakers to look for loopholes to undermine end-to-end encryption, such as demanding that messages be scanned on the local device before they are encrypted and sent along to their recipient. We think that would violate the spirit of Sen. Leahy's amendment, but the bill opens the door for that question to be litigated over and over, in courts across the country. And this isn't a theoretical problem: the idea of using "client-side scanning" to allow certain messages to be selected and sent to the government, circumventing the protections of end-to-end encryption, is one we've heard a lot of talk about in the past year. Despite the testimonials of certain experts who have sided with law enforcement, the fact is that client-side scanning breaks the protections of encryption.
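To see why, consider a deliberately simplified sketch of what client-side scanning looks like on the sender's device. The blocklist, hash matching, and "encryption" below are illustrative stand-ins (real proposals involve perceptual hashing and more elaborate reporting protocols), but the structure is the same: the message is inspected before it is ever encrypted.

```python
import hashlib

# Hypothetical blocklist of content hashes the scanner checks against.
BLOCKLIST = {hashlib.sha256(b"forbidden content").hexdigest()}

def send_message(plaintext: bytes, encrypt, report):
    """Client-side scanning in miniature: the plaintext is scanned on
    the sender's device *before* end-to-end encryption is applied, so
    the scan sees exactly the content the encryption was meant to protect."""
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        report(plaintext)          # plaintext leaves the E2EE boundary
    return encrypt(plaintext)      # ciphertext goes on to the recipient

reported = []
ciphertext = send_message(
    b"forbidden content",
    encrypt=lambda m: m[::-1],     # stand-in for real encryption
    report=reported.append,
)
```

Whatever the matching technique, the design choice is the same: a third party decides what gets reported, and the "end-to-end" guarantee no longer covers the endpoints.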
The EARN IT Act doesn't stop client-side scanning, which is the most likely strategy for state lawmakers who want to use this bill to expand police powers in order to read our messages. And it will only take one state to inspire a wave of prosecutions and lawsuits against online platforms. Just as some federal law enforcement agencies have declared their opposition to encryption, so have some state and local police.

The previous version of the bill suggested that if online platforms wanted to keep their Section 230 immunity, they would need to "earn it" by following the dictates of an unelected government commission. But the new text doesn't even give them that chance; the bill's sponsors simply dropped the "earn" from EARN IT. Website owners, especially those that enable encryption, just can't "earn" their immunity from liability for user content under the new bill. They'll have to defend themselves in court as soon as a single state prosecutor, or even just a lawyer in private practice, decides that offering end-to-end encryption was a sign of indifference toward crimes against children. Offering users real privacy, in the form of end-to-end encrypted messaging, and robust platforms for free speech shouldn't produce lawsuits and prosecutions. Under the new EARN IT bill it will, and the bill should be opposed.

TAKE ACTION: STOP THE EARN IT BILL BEFORE IT BREAKS ENCRYPTION

  • Google’s AMP, the Canonical Web, and the Importance of Web Standards
    by Alexis Hancock on July 2, 2020 at 9:11 pm

    Have you ever clicked on a link after googling something, only to find that Google didn't take you to the actual webpage but to some weird Google-fied version of it? Instead of the web address being the source of the article, the address bar on your phone still says "google"? That's what's known as Google Accelerated Mobile Pages (AMP), and Google has now announced that AMP has graduated from the OpenJS Foundation Incubation Program. The OpenJS Foundation is a merged effort between major projects in the JavaScript ecosystem, such as NodeJS and jQuery, whose stated mission is "to support the healthy growth of the JavaScript and web ecosystem." But instead of a standard starting with the web community, a giant company is coming to the community after it has already built a large part of the mobile web, asking for a rubber stamp. Web community discussion should be the first step in making web standards, not a last-minute hurdle for Google to clear.

What Is AMP?

This Google-backed, stripped-down HTML framework was created with the promise of faster web pages for a better user experience, cutting out slower-loading content such as heavy JavaScript. At a high level, AMP works by fast-loading stripped-down versions of full web pages for mobile viewing. The Google AMP project was announced in late 2015 with the promise of providing publishers a faster way of serving and distributing content to their users; it was also marketed as a more adaptable approach than Apple News and Facebook Instant Articles. AMP pages began making an appearance by 2016. But right away, many observed that AMP encroached on the principles of the open web. The web was built on open standards, developed through consensus, that small and large actors alike can use; in this case, that entails keeping open web standards in the forefront and discouraging proprietary, closed ones.
Instead of standard HTML markup tags, a developer uses AMP tags. For example, here's what an embedded image looks like in classic HTML versus AMP:

HTML image tag: <img src="src.jpg" alt="src image" />
AMP image tag: <amp-img src="src.jpg" width="900" height="675" layout="responsive"></amp-img>

Since launch, page speeds have proven to be faster when using AMP, so the technology's promises aren't necessarily bad from a speed perspective alone. Of course, there are other ways of improving performance, such as minifying files, building lighter code, CDNs (content delivery networks), and caching. There are also other Google-backed frameworks like PWAs (progressive web applications) and service workers. AMP has been around for four years now, and the criticisms still carry into today with AMP's latest progressions around a very important part of the web: the URL.

Canonical URLs and AMP URLs

When you visit a site, say your favorite news site, you would normally see the original domain along with the path to the page you are on. This, along with the site's SSL certificate, gives you a good amount of trust that you are seeing web content served from this site at this URL. This is what's considered the canonical URL. An AMP URL, however, prefixes the publisher's address with Google's own domain, such as google.com/amp/. Using canonical URLs, users can easily verify that the site they're on is the one they're trying to visit. AMP URLs muddied the waters and forced users to learn new ways to verify the origins of original content.

Whose Content?

One step further is AMP's structure for pre-rendered pages served from cached content. The cache URL is never in view of the user; rather, the content (text, images, etc.) served onto the cached page comes from a Google-run cache host under the cdn.ampproject.org domain.
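AMP cache content is served from Google-run hosts under cdn.ampproject.org, where the cache hostname is derived mechanically from the publisher's domain. A simplified sketch of the documented URL scheme (real caches also handle punycode, overlong domains, and other edge cases this omits):

```python
from urllib.parse import urlparse

def amp_cache_url(url: str) -> str:
    """Simplified version of the AMP cache URL scheme: the publisher's
    domain becomes a subdomain of cdn.ampproject.org (existing hyphens
    doubled, dots replaced with hyphens), followed by a content prefix
    and the original host and path."""
    parts = urlparse(url)
    cache_host = parts.netloc.replace("-", "--").replace(".", "-")
    prefix = "/c/s/" if parts.scheme == "https" else "/c/"
    return "https://{}.cdn.ampproject.org{}{}{}".format(
        cache_host, prefix, parts.netloc, parts.path)

cached = amp_cache_url("https://example.com/article")
# cached == "https://example-com.cdn.ampproject.org/c/s/example.com/article"
```

The transformation is what severs the visible origin from the publisher: the browser's security context is cdn.ampproject.org, not example.com.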
The final URL, the one shown in the URL bar for a cached AMP page, would be a Google address (such as a google.com/amp/ URL) rather than the publisher's own. This cache model does not follow the web-origin concept and creates a new framework and structure to adhere to. The promise is better performance and experience for users; yet the approach is implementation first, web standards later. Since Google has become such an ingrained part of the modern web for so many, any technology it deploys immediately gains a large share of users and adopters. This is paired with arguments other product teams within Google have made to reshape the URL as we know it. This fundamentally changed the way the mobile web is served for many users.

Another, more recent development is support for Signed HTTP Exchanges, or "SXG", a subset of the Web Packaging standard that allows further decoupling of the distribution of web content from its origin via cryptographically signed HTTP exchanges (a web page). This is supposed to address the problem, introduced by AMP, that the URL a user sees does not correspond to the page they're trying to visit. SXG allows the canonical URL (instead of the AMP URL) to be shown in the browser when you arrive, closing the loop back to the original publisher. The positive here is that a web standard was used; the negative is the speed of adoption without general consensus from other major stakeholders. Currently, SXG is only supported in Chrome and Chromium-based browsers.

Pushing AMP: How Did a New "Standard" Take Over?

News publishers were among the first to adopt AMP. Google even partnered with a major CMS (content management system), WordPress, to further promote AMP. Publishers use CMS services to upload, edit, and host content, and WordPress holds about 60% of the CMS market share. Publishers also compete on other Google products, such as Google Search.
So perhaps some publishers adopted AMP because they thought it would improve SEO (search engine optimization) on one of the web's most-used search engines. Google disputes this, maintaining that performance is prioritized no matter what technology is used to reach that performance measure. Since the Google Search algorithm is largely secret, we can only take these statements at their word. Tangentially, the "Top Stories" feature in Search on mobile recently dropped AMP as a requirement. The AMP project was closed off in terms of control at the beginning of its launch, despite promoting itself as an open source project. Publishers ended up reporting higher speeds, but this was left to a "time will tell" set of metrics. In short, the statement "you don't need AMP to rank higher" is often competing with "just use AMP and you will rank higher", which can be tempting to publishers trying to clear the performance bar that gets their content prioritized.

Web Community First

We should focus less on whether AMP is a good tool for performance, and more on how this framework was molded by Google's initial ownership. The cache layer is owned by Google, and even though it's not required, most common implementations use it. Concerns around analytics have been addressed, and Google has done the courtesy of allowing other major ad vendors into the AMP model for ad content. This is a mere concession, though, since Google Analytics has such a large share of the measured web. If Google were simply a web performance company, that would still be too much centralization of the web's decisions. But Google is not a one-function company; it is a giant conglomerate that already controls the largest mobile OS, web browser, and search engine in the world. Running the project through the OpenJS Foundation is a more welcome approach.
The new governance structure consists of working groups, an advisory committee, and a technical steering committee of people inside and outside of Google. This should bring more voices to the table and give AMP a better process for future decisions. The move will allegedly decouple the Google AMP Cache, which hosts pages, from the AMP runtime, the JavaScript source that processes AMP components on a page. However, this comes well after AMP has been integrated into major news sites, e-commerce, and even nonprofits, so the new model is not an even-ground, democratic approach. Whatever the intentions, good or bad, those who work with powerful entities need to check their power at the door if they want a more equitable and usable web. Not acknowledging the power one wields only reinforces a false sense of democracy that never existed.

Furthermore, the web standards process itself is far from perfect. Standards organizations are heavily dominated by corporate members, and connections to them confer immense social capital; less-represented people lack the social capital, membership fees, and time commitments these groups require, so a more equitable process for such organizations is still a long way off. These particular issues are not Google's fault, but Google has an immense amount of power when it comes to joining these groups: it's not a matter of earning its way up, but of deciding whether to loosen the reins. At this point, Google can't retroactively release the control it had over AMP's adoption, and we can't go back to a pre-AMP web to start over. The discussions about whether the AMP project should be removed, or traded for a different framework, have long passed. Whether or not users can opt out of AMP has been decided in many corners of the web.
All we can do now is learn from the process, and try to make sure AMP is developed in the best interests of users and publishers going forward. But the open web shouldn't have to weather lesson after lesson on power and control from big tech companies that seem to need to re-learn accountability with each new endeavor.

  • Amazon’s Ring Enables the Over-Policing Efforts of Some of America’s Deadliest Law Enforcement Agencies
    by Jason Kelley on July 2, 2020 at 3:46 pm

    Ring, Amazon's "smart" doorbell camera company, recently began sharing statistics on how many video requests police departments submit to users, and the numbers are staggering. In the first quarter of 2020 alone, police requested videos over 5,000 times, using their partnerships with the company to email users directly and ask them to share private videos from their Ring devices. It's unclear how many video requests were successful, as users have the option to deny them; however, even a small percentage of successful requests would mean potentially thousands of videos shared with police per year. With a warrant, police can also circumvent the device's owner and get footage straight from Amazon, even if the owner refused.

SIGN THE PETITION: TELL RING TO END ITS POLICE PARTNERSHIPS

But this isn't the only disturbing number: roughly half of the agencies that now partner with Ring have been responsible for at least one fatal encounter in the last five years, an analysis of data from Ring, Fatal Encounters, and Mapping Police Violence shows. In fact, those agencies have been responsible for over a third of fatal police encounters nationwide. If Amazon is truly concerned about "systemic racism and injustice," it must immediately recognize the danger of building a digital porch-to-police pipeline. Concerns about police violence must be tied to concerns about growing police surveillance and the lack of accountability that comes with it. Ring partnerships facilitate police access to video and audio footage from massive numbers of doorbell cameras aimed at public spaces such as sidewalks, a feature that could conceivably be used to identify participants in a protest as they move through a neighborhood. They create a high-speed digital mechanism by which users can make snap judgments about who does, and who does not, belong in their neighborhood, and (sometimes dangerously) summon police to confront them.
These partnerships also make it all too easy for law enforcement to harass, arrest, or detain someone who is simply exercising their legal rights to free expression, for example by walking through a neighborhood or canvassing for a political campaign.

Ring Is More Concerned With Profit Than Safety or Privacy

Alongside a growing number of civil liberties organizations and elected officials, EFF has expressed concern about these partnerships since they were initially reported two years ago. Since then, they have grown nearly exponentially, reaching over 1,400 agencies after adding 600 in the last six months alone. At a time when there's more criticism than ever of the dangerous ways in which law enforcement interacts with communities, and as citizens and companies alike question the ways they facilitate those interactions, why would Ring double down on partnerships with police? The answer is simple: profit. While at least some technology companies, and their employees, are reexamining their close relationships with law enforcement, Ring continues to intentionally blur the line between what's best for the safety of a community and what's best for its bottom line. Ring has gone so far as to draft press statements and social media posts for police to promote Ring cameras, and to write talking points to convince users to hand over their footage, creating a vicious cycle in which police promote the adoption of Ring, Ring terrifies people into thinking their homes are in danger, and Amazon sells more cameras. This arrangement makes salespeople out of what should be impartial and trusted protectors of our civic society. In some instances, local police departments even get credit toward buying cameras they can distribute to residents.
Every time a community app leads to a 911 call on a person of color—whether a neighbor or a visitor—it puts that person at risk of being harassed or even killed by the police. Research shows that users are more likely to report Black people as engaging in suspicious activities on social networks like Neighbors or Nextdoor. The dangers aren’t just theoretical: Ring and other neighborhood watch apps have been under fire for years for increasing paranoia, normalizing surveillance, and facilitating reporting of so-called “suspicious” behavior that ultimately amounts to racial profiling. A Black real estate agent was stopped by police because neighbors on Nextdoor thought it was “suspicious” for him to ring a doorbell. A man was shot and killed by sheriff’s deputies the same night a woman’s Ring camera captured video of him on her porch, footage she subsequently shared to the Neighbors app. These apps essentially streamline police escalation and increase the likelihood of violent interactions. But far from pushing users to be cautious and careful when reporting behavior, Ring has moved in the opposite direction, going so far as to gamify the reporting of suspicious activity and offer free products in exchange for reports.

Ring isn’t the only surveillance equipment used by these agencies, of course. A more detailed survey (which we are also working on in our Atlas of Surveillance) would no doubt indicate the use of other dangerous technologies, like face recognition, automated license plate readers, and drones—but Ring partnerships are particularly insidious. Importantly, Ring hasn’t been shown to decrease crime. But it does allow for the expansion of unaccountable mass surveillance by law enforcement. By asking residents (and private companies) to do the technological work of policing, Ring minimizes transparency and police accountability; you can’t send a public records request to Amazon. 
And if the owner of a Ring camera refuses to hand over their video to police, officers can still go straight to Amazon with a warrant and ask for the footage, circumventing the camera’s owner. A relationship with Ring, an increasing desire for door-front surveillance, and a responsiveness to paranoid reports from these apps are all indicators of a type of over-policing that puts officers more frequently into contact with vulnerable populations. Amazon Ring should not be partnering with any law enforcement agency—but it’s especially concerning how many of the agencies it has partnered with are responsible for deaths in their communities in the last few years.

Ring Must Admit the Danger of These Partnerships

In June, Nextdoor, the neighborhood watch and social networking app that competes with Ring’s Neighbors app, announced it would end its “Forward to Police” feature, which allowed users to share their posts or urgent alerts directly with law enforcement. The termination of this feature is part of the company’s “anti-racism work” and its “efforts to make Nextdoor a place where all neighbors feel welcome.” This change from Nextdoor is a good sign—but it is only a very small step in the right direction. Unlike Nextdoor’s Forward to Police feature, which Nextdoor says was only used by a small percentage of law enforcement agencies, the growth of police-Ring partnerships is not showing any signs of slowing down. This must change. Citing the current protests against policing, Amazon halted the sale of its face recognition tool, Rekognition, to police for one year—but that signal means nothing to the public if the company continues to expedite police access to home surveillance footage. There’s no evidence Ring surveillance makes neighborhoods safer, but there is evidence that it can make police less accountable, invade community privacy, and stoke the fires of racial prejudice. 
If Amazon is truly concerned about “systemic racism and injustice,” as the company claims, it must immediately recognize the danger of building a digital porch-to-police pipeline, and end these invasive, reckless Ring-police partnerships.

[1] Ring currently has partnerships with 1,403 law enforcement agencies. A cursory analysis of data from Mapping Police Violence (MPV) from 2015 to the present identified 559 of these agencies (40%) as having been responsible for at least one police-involved death; over the same time period, the Fatal Encounters dataset identified 695 such agencies (50%). Additionally, MPV reports 6,084 deaths, of which agencies with Ring partnerships accounted for 2,165, while Fatal Encounters reports 9,635 deaths, with 3,382 involving agencies with Ring partnerships (both roughly 35% of total deaths).

  • Hundreds of Police Departments with Deadly Histories Partner with Amazon’s Ring Surveillance Cameras
    by Rebecca Jeschke on July 2, 2020 at 3:46 pm

Partnerships Include Agencies Responsible for Over 30% of Fatal Encounters Over the Last Five Years

San Francisco – Research by the Electronic Frontier Foundation (EFF) shows that hundreds of U.S. police departments with deadly histories have official partnerships with Amazon’s Ring—a home-surveillance company that makes it easy to send video footage to law enforcement. Ring sells networked cameras, often bundled with doorbells or lighting, that record video when they register movement and then send notifications to owners’ cell phones. Ring’s partnerships allow police to seek access to private video footage directly from residents through a special web portal. Ring now works with over 1,400 agencies, adding 600 in the last six months alone. An analysis of data from Ring, Fatal Encounters, and Mapping Police Violence shows that roughly half of the agencies that Ring has partnered with had fatal encounters in the last five years. In fact, those departments have been responsible for over a third of fatal police encounters nationwide, including the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos, and Sean Monterrosa. “At a time when communities are more concerned than ever before about their relationship with law enforcement, these partnerships encourage an atmosphere of mistrust and could allow for near-constant surveillance by local police,” said EFF Digital Strategist Jason Kelley. “These partnerships make it all too easy for law enforcement to harass, arrest, or detain someone who is simply exercising their legal rights to free expression—for example, by walking through a neighborhood, protesting in their local community, or canvassing for a political campaign.” Recently, Nextdoor ended its Forward to Police feature, which allowed users to share their posts or urgent alerts directly with law enforcement, as part of the company’s “anti-racism” work. 
EFF calls on Ring to do the same by ending its partnerships with law enforcement agencies. “People across the nation are calling for policing reform,” said Kelley. “Amazon has acknowledged this important movement, and stopped selling its face recognition tool called Rekognition to police for one year. But that concession means nothing to the public if the company continues to expedite police access to home surveillance footage through Ring.”

For the full report:
EFF’s petition to Amazon:
Contact: Jason Kelley, Digital and Campaign Strategist, [email protected]

  • “Don’t Believe Proven Liars”: The Absolute Minimum Standard of Prudence in Merger Scrutiny
    by Cory Doctorow on July 1, 2020 at 11:07 pm

    “There’s an old saying in Tennessee — I know it’s in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can’t get fooled again.” -President George W Bush Anti-monopoly enforcement has seen a significant shift since the 1970s. Where the Department of Justice once routinely brought suits against anticompetitive mergers, today, that’s extremely rare, even between giant companies in highly concentrated industries. (The strongest remedy against a monopolist—breaking them up altogether—is a relic of the past). Regulators used to go to court to block mergers to prevent companies from growing so large that they could abuse their market power. In place of blocking mergers, today’s competition regulators like to add terms and conditions to them, exacting promises from companies to behave themselves after the merger is complete. This safeguard continues to enjoy popularity with competition regulators, despite the fact that companies routinely break their public promises to safeguard users’ privacy and rarely face consequences for doing so. (These two facts may be related!) When they do get sanctioned, the punishment almost never exceeds the profits from the broken promise. “A fine is a price.” Today, we’d like to propose a modest, incremental improvement to this underpowered deterrent: If a company breaks a promise, and then it makes the same promise when seeking approval for a merger, we should not believe it. Read on for three significant broken promises we’d be fools to believe again. “There’s a sucker born every minute” -Traditional (often misattributed to PT Barnum) Amazon promised not to use data from the sellers on its platform to launch competing products. It lied. 
In the summer of 2019, Amazon’s General Counsel Nate Sutton made the company’s position crystal clear when he told Congress, “We don’t use individual seller data directly to compete.” In April, the Wall Street Journal spoke to 20 former Amazon employees who said the company did exactly that, confirming the suspicions of Amazon sellers, who’d been told that it was just a coincidence that the world’s largest online retailer kept cloning the most successful products on its platform. “Insanity is doing the same thing over and over again, but expecting different results.” -Rita Mae Brown (often misattributed to Albert Einstein) In 2014, Facebook bought WhatsApp for $19 billion, and promised users that it wouldn’t harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for insights that ultimately help advertisers, political campaigns, and fraudsters find prospects for whatever they’re peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing so at any time. “Once is happenstance. Twice is coincidence. The third time it’s enemy action.” -Ian Fleming In 2007, Google bought DoubleClick, a web advertising network. It promised not to merge advertising data with Google user profiles. It lied. Like most Big Tech companies, much of Google’s growth comes from buying smaller companies. Because Google’s primary revenue source is targeted advertising, these mergers inevitably raise questions about data-mining. As Google’s role in online advertising comes under scrutiny, antitrust enforcers must not accept a new “We mean it, this time. Seriously, guys” promise to protect user data. It would be easy to argue that promises made in a formal settlement with antitrust enforcers will carry more weight than mere public statements about what a company will do. 
But those public promises can keep customers unaware, and enforcers away, as the company extends its dominance. Those promises have to mean something. Hiding privacy abuses behind false promises “effectively raises rivals’ costs, as they try to compete against what appears to be high quality, but is, in truth, low quality.” That could raise antitrust liability. An overhaul to competition enforcement is long past due, but while that’s happening, can we take the absolute smallest step toward prudent regulation and stop believing liars when they tell the same lies?

  • Wikileaks-Hosted “Most Wanted Leaks” Reflects the Transparency Priorities of Public Contributors
    by rainey Reitman on July 1, 2020 at 7:24 pm

    The government recently released a superseding indictment[1] against Wikileaks editor-in-chief Julian Assange, currently imprisoned and awaiting extradition in the United Kingdom. As we’ve written before, this prosecution poses a clear threat to journalism, and, whether or not Assange considers himself a journalist, the indictment targets routine journalistic practices such as working with and encouraging sources during an investigation. While considering the superseding indictment, it’s useful to look at some of the features carrying over from the previous version. Through much of the indictment, the government describes and refers back to a page on the Wikileaks website describing the “Most Wanted Leaks of 2009.”[2] The implication in the indictment is that Wikileaks was actively soliciting leaks with this Most Wanted Leaks list, but the government is leaving out a crucial piece of nuance: unlike much of the Wikileaks website, the Most Wanted Leaks page was actually a publicly editable wiki. Rather than viewing this document as a wishlist generated by Wikileaks staff or a reflection of Assange’s personal priorities, we must understand that this was a publicly generated list developed by contributors who felt each “wanted” document offered information that would be valuable to the public. Archives of the page show evidence of its editable nature: the page has visible “edit” links, similar to what one might find on any wiki page. Language on the page says that one can “securely and anonymously” add a nomination by directly editing the page, and goes on to encourage contributors to “simply click ‘edit’ on the country below” to make changes. While we don’t know how many people contributed to the page at the time, the different formatting and writing styles across the page support the idea that this page was edited by many different people. 
But the government’s indictment, which names this document no fewer than 14 times and dedicates multiple pages to describing it, never explains the crowd-sourced nature of the Most Wanted Leaks document. It’s easy to understand why. The government prosecutors are trying to paint a picture of Assange as a mastermind soliciting leaks, and are charging him with violating computer crime law and the Espionage Act. It doesn’t suit their narrative to show Wikileaks as a host for a crowdsourced page where activists, scholars, and government accountability experts from across the globe could safely and anonymously offer their feedback on the transparency failures of their own governments. But as we analyze the indictment, it’s important that we keep this context in mind. It’s overly simplistic to describe the Most Wanted Leaks list, as the government does in its indictment, as “ASSANGE’s solicitation of classified information made through the Wikileaks website” or a way “to recruit individuals to hack into computers and/or illegally obtain and disclose classified information” to Wikileaks. This framing excises the role of the untold number of contributors to this page, and lacks an understanding of how modern wikis and distributed projects work. We’ve long argued that working with sources to obtain classified documents of public importance is a well-established and integral part of investigative journalism, protected by the First Amendment. Even if Assange had himself written and posted everything on the Most Wanted Leaks page, the First Amendment would protect his right to do so. There is no law in the United States that would prevent a publisher from publicly listing the types of stories or leaks they would like to be made public. But that’s not what happened here—in this case, Wikileaks was providing a forum where contributors from around the world could identify documents and data they felt were important to be made public. 
And the First Amendment clearly protects the rights of websites to host a public forum of that nature. Many of the documents on the Most Wanted Leaks page are of clear public interest. Some of the documents requested by editors of the page include:

- Lists of domains that are censored (or on proposed or voluntary censorship lists) in China, Australia, Germany, and the U.K.
- In Austria, the source code for e-voting systems used in student elections.
- Documents detailing the Vatican’s interactions with Nazi Germany.
- Profit-sharing agreements between the Ugandan government and oil companies.
- PACER, the United States’ federal court record search database.

While today it’s in the government’s interest to paint Wikileaks as a rogue band of hackers directed by Assange, the Most Wanted Leaks page epitomizes one of the most important features of Wikileaks: that as a publisher, it served the public interest. Wikileaks served activists, human rights defenders, scholars, reformers, journalists, and other members of the public. With the Most Wanted Leaks page, it gave members of the public a platform to speak anonymously about documents they believed would further public understanding. It’s an astonishingly thoughtful and democratic way for the public to communicate their priorities to potential whistleblowers, those in power, and other members of the public. The ways Wikileaks served and furthered the public interest don’t fit the prosecution’s litigation strategy. If Assange goes to court to combat the espionage charges he is facing, he may well be prevented from discussing the public interest and impact of Wikileaks’ publication history. That’s because the Espionage Act, passed in 1917, pre-dated modern communications technology and was never designed as a tool to crack down on investigative journalists and their publishers. 
There’s no public interest defense to the Espionage Act, and those charged under it may have no chance to even explain their motivation or the impact—good or bad—of their actions. Assange’s arrest in April 2019 was based on a single charge, under the Computer Fraud and Abuse Act (CFAA), arising from a single, unsuccessful attempt to crack a password. At the time, it was clear to us that the government’s CFAA charge was being used as a thin cover for attacking journalism. The original May 2019 superseding indictment added 17 charges to the CFAA charge, and clarified that the government was charging both conspiracy and a direct violation. In the Second Superseding Indictment, however, the direct CFAA charge is gone, leaving the charge of Conspiracy to Commit Computer Intrusion. The government removed the paragraphs specifying the password cracking as the particular factual grounds, now basing this Count vaguely on the acts “described in the [27 page] General Allegations Section of this Indictment.” Removing the direct CFAA charge does not make this indictment any less dangerous as an attack on journalism. The General Allegations include many normal journalistic practices, all essential to modern reporting: communicating over a secure chat service, transferring files with cloud services, removing usernames and logs to protect a source’s identity, and, now in the Second Superseding Indictment, maintaining a crowd-sourced list of documents that contributors believed would further public understanding. By removing the factual specificity, the Second Superseding Indictment only deepens the chilling effect on journalists who use modern tools to report on matters of public interest. Regardless of how you feel about Assange as a person, we should all be concerned about his prosecution. 
If found guilty, the harm won’t be just to Assange himself—it will be to every journalist and news outlet that will face legal uncertainty for working with sources to publish leaked information. And a weakened press ultimately hurts the public’s ability to access truthful and relevant information about those in power. And that is directly against the public interest. Read the new charges against Assange. [1] A superseding indictment means that the government is replacing its original charges with new, amended charges. [2] The Most Wanted Leaks document was also submitted in Chelsea Manning’s trial.   Related Cases: Bank Julius Baer & Co v. Wikileaks

  • EFF to Court: Social Media Users Have Privacy and Free Speech Interests in Their Public Information
    by Sophia Cope on June 30, 2020 at 11:59 pm

    Special thanks to legal intern Rachel Sommers, who was the lead author of this post. Visa applicants to the United States are required to disclose personal information including their work, travel, and family histories. And as of May 2019, they are required to register their social media accounts with the U.S. government. According to the State Department, approximately 14.7 million people will be affected by this new policy each year. EFF recently filed an amicus brief in Doc Society v. Pompeo, a case challenging this “Registration Requirement” under the First Amendment. The plaintiffs in the case, two U.S.-based documentary film organizations that regularly collaborate with non-U.S. filmmakers and other international partners, argue that the Registration Requirement violates the expressive and associational rights of both their non-U.S.-based and U.S.-based members and partners. After the government filed a motion to dismiss the lawsuit, we filed our brief in district court in support of the plaintiffs’ opposition to dismissal.  In our brief, we argue that the Registration Requirement invades privacy and chills free speech and association of both visa applicants and those in their social networks, including U.S. persons, despite the fact that the policy targets only publicly available information. This is amplified by the staggering number of social media users affected and the vast amounts of personal information they publicly share—both intentionally and unintentionally—on their social media accounts. Social media profiles paint alarmingly detailed pictures of their users’ personal lives. By monitoring applicants’ social media profiles, the government can obtain information that it otherwise would not have access to through the visa application process. For example, visa applicants are not required to disclose their political views. However, applicants might choose to post their beliefs on their social media profiles. 
Those seeking to conceal such information might still be exposed by comments and tags made by other users. And due to the complex interactions of social media networks, studies have shown that personal information about users such as sexual orientation can reliably be inferred even when the user doesn’t expressly share that information. Although consular officers might be instructed to ignore this information, it is not unreasonable to fear that it might influence their decisions anyway. Just as other users’ online activity can reveal information about visa applicants, so too can visa applicants’ online activity reveal information about other users, including U.S. persons. For example, if a visa applicant tags another user in a political rant or posts photographs of themselves and the other user at a political rally, government officials might correctly infer that the other user shares the applicant’s political beliefs. In fact, one study demonstrated that it is possible to accurately predict personal information about those who do not use any form of social media based solely on personal information and contact lists shared by those who do. The government’s surveillance of visa applicants’ social media profiles thus facilitates the surveillance of millions—if not billions—more people. Because social media users have privacy interests in their public social media profiles, government surveillance of digital content risks chilling free speech. If visa applicants know that the government can glean vast amounts of personal information about them from their profiles—or that their anonymous or pseudonymous accounts can be linked to their real-world identities—they will be inclined to engage in self-censorship. Many will likely curtail or alter their behavior online—or even disengage from social media altogether. Importantly, because of the interconnected nature of social media, these chilling effects extend to those in visa applicants’ social networks, including U.S. 
persons. Studies confirm these chilling effects. Citizen Lab found that 62 percent of survey respondents would be less likely to “speak or write about certain topics online” if they knew that the government was engaged in online surveillance. A Pew Research Center survey found that 34 percent of respondents who were aware of the online surveillance programs revealed by Edward Snowden had taken at least one step to shield their information from the government, including using social media less often, uninstalling certain apps, and avoiding the use of certain terms in their digital communications. One might be tempted to argue that concerned applicants can simply set their accounts to private. But social media accounts reveal more than their owners intend: some users choose to share their personal information—including their names, locations, photographs, relationships, interests, and opinions—with the public writ large, while others do so unintentionally. Given the difficulties associated with navigating privacy settings within and across platforms, and the fact that privacy settings often change without warning, there is good reason to believe that many users publicly share more personal information than they think they do. Moreover, some applicants might fear that setting their accounts to private will negatively impact their applications. Others—especially those using social media anonymously or pseudonymously—might be loath to maximize their privacy settings because they use their platforms with the specific intention of reaching large audiences. These chilling effects are further strengthened by the broad scope of the Registration Requirement, which allows the government to continue surveilling applicants’ social media profiles once the application process is over. Personal information obtained from those profiles can also be collected and stored in government databases for decades. 
And that information can be shared with other domestic and foreign governmental entities, as well as current and prospective employers and other third parties. It is no wonder, then, that social media users might severely limit or change the way they use social media. Secrecy should not be a prerequisite for privacy—and the review and collection by the government of personal information that is clearly outside the scope of the visa application process creates unwarranted chilling effects on both visa applicants and their social media associates, including U.S. persons. We hope that the D.C. district court denies the government’s motion to dismiss the case and ultimately strikes down the Registration Requirement as unconstitutional under the First Amendment.
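The network-inference studies mentioned above exploit homophily: people tend to connect with others who share their attributes. As a toy illustration only (not any study's actual method; the graph, names, and labels below are all hypothetical), an undisclosed attribute can be guessed from the majority label among a user's contacts:

```python
from collections import Counter

# Toy social graph: user -> set of contacts (all names hypothetical).
contacts = {
    "alice": {"bob", "carol", "dan"},
    "eve": {"bob", "carol"},
}

# Attribute labels (e.g. a political leaning) publicly shared by some users.
known_labels = {"bob": "green", "carol": "green", "dan": "purple"}

def infer_label(user):
    """Guess an undisclosed attribute as the most common label among the
    user's contacts -- the homophily assumption behind the cited studies."""
    votes = Counter(
        known_labels[c] for c in contacts.get(user, set()) if c in known_labels
    )
    return votes.most_common(1)[0][0] if votes else None

# "alice" never disclosed a label, but her contacts give her away.
print(infer_label("alice"))  # → green
```

Even this crude majority vote succeeds whenever a user's network is lopsided, which is why merely declining to post an attribute does not keep it private.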

  • Inside the Invasive, Secretive “Bossware” Tracking Workers
    by Bennett Cyphers on June 30, 2020 at 11:38 pm

    COVID-19 has pushed millions of people to work from home, and a flock of companies offering software for tracking workers has swooped in to pitch their products to employers across the country. The services often sound relatively innocuous. Some vendors bill their tools as “automatic time tracking” or “workplace analytics” software. Others market to companies concerned about data breaches or intellectual property theft. We’ll call these tools, collectively, “bossware.” While aimed at helping employers, bossware puts workers’ privacy and security at risk by logging every click and keystroke, covertly gathering information for lawsuits, and using other spying features that go far beyond what is necessary and proportionate to manage a workforce. This is not OK. When a home becomes an office, it remains a home. Workers should not be subject to nonconsensual surveillance or feel pressured to be scrutinized in their own homes to keep their jobs. What can they do? Bossware typically lives on a computer or smartphone and has privileges to access data about everything that happens on that device. Most bossware collects, more or less, everything that the user does. We looked at marketing materials, demos, and customer reviews to get a sense of how these tools work. There are too many individual types of monitoring to list here, but we’ll try to break down the ways these products can surveil into general categories. The broadest and most common type of surveillance is “activity monitoring.” This typically includes a log of which applications and websites workers use. It may include who they email or message—including subject lines and other metadata—and any posts they make on social media. Most bossware also records levels of input from the keyboard and mouse—for example, many tools give a minute-by-minute breakdown of how much a user types and clicks, using that as a proxy for productivity. 
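The "minute-by-minute breakdown" of input levels is, mechanically, simple event bucketing. A minimal sketch (assuming timestamped input events have already been captured by some hook; the event format here is our own invention, not any vendor's):

```python
from collections import Counter
from datetime import datetime

# Hypothetical captured input stream: (timestamp, kind) pairs,
# where kind is "key" or "click".
events = [
    (datetime(2020, 6, 30, 9, 0, 12), "key"),
    (datetime(2020, 6, 30, 9, 0, 45), "click"),
    (datetime(2020, 6, 30, 9, 1, 3), "key"),
    (datetime(2020, 6, 30, 9, 1, 4), "key"),
]

def activity_by_minute(events):
    """Bucket input events into per-minute counts -- the 'productivity
    proxy' that activity-monitoring dashboards chart."""
    buckets = Counter()
    for ts, kind in events:
        minute = ts.replace(second=0, microsecond=0)  # truncate to the minute
        buckets[(minute, kind)] += 1
    return buckets

counts = activity_by_minute(events)
# e.g. 9:01 saw two keystrokes:
#   counts[(datetime(2020, 6, 30, 9, 1), "key")] == 2
```

Note how little judgment is in the metric itself: a minute spent reading or thinking counts as zero, which is exactly why this proxy is so crude.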
Productivity monitoring software will attempt to assemble all of this data into simple charts or graphs that give managers a high-level view of what workers are doing. Every product we looked at has the ability to take frequent screenshots of each worker’s device, and some provide direct, live video feeds of their screens. This raw image data is often arrayed in a timeline, so bosses can go back through a worker’s day and see what they were doing at any given point. Several products also act as a keylogger, recording every keystroke a worker makes, including unsent emails and private passwords. A couple even let administrators jump in and take over remote control of a user’s desktop. These products usually make no distinction between work-related activity and personal information such as account credentials, bank data, or medical records. InterGuard advertises that its software “can be silently and remotely installed, so you can conduct covert investigations [of your workers] and bullet-proof evidence gathering without alarming the suspected wrongdoer.” Some bossware goes even further, reaching into the physical world around a worker’s device. Companies that offer software for mobile devices nearly always include location tracking using GPS data. At least two services—StaffCop Enterprise and CleverControl—let employers secretly activate webcams and microphones on worker devices. There are, broadly, two ways bossware can be deployed: as an app that’s visible to (and maybe even controllable by) the worker, or as a secret background process that workers can’t see. Most companies we looked at give employers the option to install their software either way.

Visible monitoring

Sometimes, workers can see the software that is surveilling them. They may have the option to turn the surveillance on or off, often framed as “clocking in” and “clocking out.” Of course, the fact that a worker has turned off monitoring will be visible to their employer. 
For example, with Time Doctor, workers may be given the option to delete particular screenshots from their work session. However, deleting a screenshot will also delete the associated work time, so workers only get credit for the time during which they are monitored. Workers may be given access to some, or all, of the information that’s collected about them. Crossover, the company behind WorkSmart, compares its product to a fitness tracker for computer work. Its interface allows workers to see the system’s conclusions about their own activity presented in an array of graphs and charts. Different bossware companies offer different levels of transparency to workers. Some give workers access to all, or most, of the information that their managers have. Others, like Teramind, indicate that they are turned on and collecting data, but don’t reveal everything they’re collecting. In either case, it can often be unclear to the user what data, exactly, is being collected, without specific requests to their employer or careful scrutiny of the software itself.

Invisible monitoring

The majority of companies that build visible monitoring software also make products that try to hide themselves from the people they’re monitoring. Teramind, Time Doctor, StaffCop, and others make bossware that’s designed to be as difficult to detect and remove as possible. At a technical level, these products are indistinguishable from stalkerware. In fact, some companies require employers to specifically configure antivirus software before installing their products, so that the worker’s antivirus won’t detect and block the monitoring software’s activity.

[Screenshot from Time Doctor’s sign-up flow, which allows employers to choose between visible and invisible monitoring.]

This kind of software is marketed for a specific purpose: monitoring workers. However, most of these products are really just general-purpose monitoring tools. 
StaffCop offers a version of their product specifically designed for monitoring children’s use of the Internet at home, and ActivTrak states that their software can also be used by parents or school officials to monitor kids’ activity. Customer reviews for some of the software indicate that many customers do indeed use these tools outside of the office. Most companies that offer invisible monitoring recommend that it only be used for devices that the employer owns. However, many also offer features like remote and “silent” installation that can load monitoring software on worker computers, without their knowledge, while their devices are outside the office. This works because many employers have administrative privileges on the computers they distribute. But for some workers, the company laptop they use is their only computer, so company monitoring is ever-present. There is great potential for misuse of this software by employers, school officials, and intimate partners, and the victims may never know that they are subject to such monitoring.

The table below shows the monitoring and control features available from a small sample of bossware vendors. This isn’t a comprehensive list, and it may not be representative of the industry as a whole; we looked at companies that were referred to in industry guides and search results, and that had informative, publicly facing marketing materials.
Table: Common surveillance features of bossware products

| Product | Activity monitoring (apps, websites) | Screenshots or screen recordings | Keylogging | Webcam/microphone activation | Can be made “invisible” |
|---|---|---|---|---|---|
| ActivTrak | confirmed | confirmed | | | confirmed |
| CleverControl | confirmed | confirmed | confirmed | confirmed (1, 2) | confirmed |
| DeskTime | confirmed | confirmed | | | confirmed |
| Hubstaff | confirmed | confirmed | | | |
| Interguard | confirmed | confirmed | confirmed | | confirmed |
| StaffCop | confirmed | confirmed | confirmed | confirmed (1, 2) | confirmed |
| Teramind | confirmed | confirmed | confirmed | | confirmed |
| TimeDoctor | confirmed | confirmed | | | confirmed |
| Work Examiner | confirmed | confirmed | confirmed | | confirmed |
| WorkPuls | confirmed | confirmed | | | confirmed |

Features of several worker-monitoring products, based on the companies’ marketing material. 9 of the 10 companies we looked at offered “silent” or “invisible” monitoring software, which can collect data without worker knowledge.

How common is bossware?

The worker surveillance business is not new, and it was already quite large before the outbreak of a global pandemic. While it’s difficult to assess how common bossware is, it’s undoubtedly become much more common as workers are forced to work from home due to COVID-19. Awareness Technologies, which owns InterGuard, claimed to have grown its customer base by over 300% in just the first few weeks after the outbreak. Many of the vendors we looked at exploit COVID-19 in their marketing pitches to companies. Some of the biggest companies in the world use bossware. Hubstaff customers include Instacart, Groupon, and Ring. Time Doctor claims 83,000 users; its customers include Allstate, Ericsson, Verizon, and Re/Max. ActivTrak is used by more than 6,500 organizations, including Arizona State University, Emory University, and the cities of Denver and Malibu. Companies like StaffCop and Teramind do not disclose information about their customers, but claim to serve clients in industries like health care, banking, fashion, manufacturing, and call centers.
Customer reviews of monitoring software give more examples of how these tools are used. We don’t know how many of these organizations choose to use invisible monitoring, since the employers themselves don’t tend to advertise it. In addition, there isn’t a reliable way for workers themselves to know, since so much invisible software is explicitly designed to evade detection. Some workers have contracts that authorize certain kinds of monitoring or prevent others. But for many workers, it may be impossible to tell whether they’re being watched. Workers who are concerned about the possibility of monitoring may be safest to assume that any employer-provided device is tracking them.

What is the data used for?

Bossware vendors market their products for a wide variety of uses. Some of the most common are time tracking, productivity tracking, compliance with data protection laws, and IP theft prevention. Some use cases may be valid: for example, companies that deal with sensitive data often have legal obligations to make sure data isn’t leaked or stolen from company computers. For off-site workers, this may necessitate a certain level of on-device monitoring. But an employer should not undertake any monitoring for such security purposes unless they can show it is necessary, proportionate, and specific to the problems it’s trying to solve. Unfortunately, many use cases involve employers wielding excessive power over workers. Perhaps the largest class of products we looked at are designed for “productivity monitoring” or enhanced time tracking—that is, recording everything that workers do to make sure they’re working hard enough. Some companies frame their tools as potential boons for both managers and workers.
Collecting information about every second of a worker’s day isn’t just good for bosses, they claim—it supposedly helps the worker, too. Other vendors, like Work Examiner and StaffCop, market themselves directly to managers who don’t trust their staff. These companies often recommend tying layoffs or bonuses to performance metrics derived from their products.

Marketing material from Work Examiner’s home page.

Some firms also market their products as punitive tools, or as ways to gather evidence for potential worker lawsuits. InterGuard advertises that its software “can be silently and remotely installed, so you can conduct covert investigations [of your workers] and bullet-proof evidence gathering without alarming the suspected wrongdoer.” This evidence, it continues, can be used to fight “wrongful termination suits.” In other words, InterGuard can provide employers with an astronomical amount of private, secretly gathered information to try to quash workers’ legal recourse against unfair treatment. None of these use cases, even the less disturbing ones discussed above, warrant the amount of information that bossware usually collects. And nothing justifies hiding the fact that the surveillance is happening at all. Most products take periodic screenshots, and few of them allow workers to choose which ones to share. This means that sensitive medical, banking, or other personal information is captured alongside screenshots of work emails and social media. Products that include keyloggers are even more invasive, and often end up capturing passwords to workers’ personal accounts.

Work Examiner’s description of its keylogging feature, specifically highlighting its ability to capture private passwords.

Unfortunately, excessive information collection often isn’t an accident; it’s a feature. Work Examiner specifically advertises its product’s ability to capture private passwords.
Another company, Teramind, reports on every piece of information typed into an email client—even if that information is subsequently deleted. Several products also parse out strings of text from private messages on social media so that employers can know the most intimate details of workers’ personal conversations.

Let’s be clear: this software is specifically designed to help employers read workers’ private messages without their knowledge or consent. By any measure, this is unnecessary and unethical.

What can you do?

Under current U.S. law, employers have too much leeway to install surveillance software on devices they own. In addition, little prevents them from coercing workers to install software on their own devices (as long as the surveillance can be disabled outside of work hours). Different states have different rules about what employers can and can’t do. But workers often have limited legal recourse against intrusive monitoring software.

That can and must change. As state and national legislatures continue to adopt consumer data privacy laws, they must also establish protections for workers with respect to their employers. To start:

  • Surveillance of workers—even on employer-owned devices—should be necessary and proportionate.
  • Tools should minimize the information they collect, and avoid vacuuming up personal data like private messages and passwords.
  • Workers should have the right to know what exactly their managers are collecting.
  • And workers need a private right of action, so they can sue employers that violate these statutory privacy protections.

In the meantime, workers who know they are subject to surveillance—and feel comfortable doing so—should engage in conversations with their employers. Companies that have adopted bossware must consider what their goals are, and should try to accomplish them in less intrusive ways.
Bossware often incentivizes the wrong kinds of productivity—for example, forcing people to jiggle their mouse and type every few minutes instead of reading or pausing to think. Constant monitoring can stifle creativity, diminish trust, and contribute to burnout. If employers are concerned about data security, they should consider tools that are specifically tailored to real threats, and which minimize the personal data caught up in the process.

Many workers won’t feel comfortable speaking up, or may suspect that their employers are monitoring them in secret. If they don’t know the scope of the monitoring, they should assume that work devices may collect everything—from web history to private messages to passwords. If possible, they should avoid using work devices for anything personal. And if workers are asked to install monitoring software on their personal devices, they may be able to ask their employers for a separate, work-specific device instead, from which private information can be more easily siloed away. Finally, workers may not feel comfortable speaking up about being surveilled out of concern for staying employed at a time of record unemployment. A choice between invasive and excessive monitoring and joblessness is not really a choice at all.

COVID-19 has put new stresses on us all, and it is likely to fundamentally change the ways we work as well. However, we must not let it usher in a new era of even-more-pervasive monitoring. We live more of our lives through our devices than ever before. That makes it more important than ever that we have a right to keep our digital lives private—from governments, tech companies, and our employers.

  • EFF Successfully Defends Users’ Right to Challenge Patents and Still Recover Legal Fees
    by Alex Moss on June 30, 2020 at 8:59 pm

When individuals and companies are wrongly accused of patent infringement, they should be encouraged to stand up and defend themselves. When they win, the public does too. While the patent owner loses revenue, the rest of society gets greater access to knowledge, product choice, and space for innovation. This is especially true when defendants win by proving the patent asserted against them is invalid. In such cases, the patent gets cancelled, and the risk of wrongful threats against others vanishes. The need to encourage parties to pursue meritorious defenses is partly why patent law gives judges the power to force losers to pay a winner’s legal fees in “exceptional” patent cases. This fee-shifting is especially important because so many invalid patents are in the possession of patent trolls—entities that exploit the exorbitant costs of litigating in federal court to scare defendants into paid settlements. When patent trolls abuse the litigation system, judges have to make sure that they pay a price. However, proving invalidity in district court takes a lot of time and money. That’s why Congress created a faster, cheaper alternative when it passed the America Invents Act in 2011. That alternative is the IPR system, which allows parties to get a decision on a patent’s validity in less expensive, streamlined proceedings at the Patent Office. One benefit of this system is the huge savings to parties and courts of avoiding needless patent litigation. Another is that going to the Patent Office should, in theory, yield more accurate decisions. When Congress created the IPR system, the whole point was to encourage parties to use it, making patent litigation cheaper and faster while improving the quality of issued patents by allowing the Patent Office to weed out those it shouldn’t have granted.
Fee-shifting and IPR are both meant to deter meritless patent lawsuits. That’s why we weighed in last year in a case called Dragon Intellectual Property v. Dish Network, in which a federal district court refused to award defendant Dish Network its fees because of Dish’s success using the Patent Office’s inter partes review (IPR) system. That’s right—Dish was penalized for winning: the district court saw that a party was successful at proving invalidity in an IPR, but then actually held that against the winning party. In April, a panel of the Federal Circuit agreed with our position. It’s a win for properly applied fee-shifting in the patent system, and for every company that wants to fight back after being hit with a meritless patent threat. Dish Network was one of several defendants Dragon sued for infringement. After the suit was filed, Dish initiated IPR proceedings. But before those proceedings finished, the district court construed the patent’s claims in a way that required a finding of non-infringement. While that decision was on appeal, the Patent Office finished its review and found Dragon’s patent invalid. Yet when Dish tried to recover the cost of litigating Dragon’s invalid patent, the district court refused on the ground that Dish wasn’t the prevailing party. Oddly, its success proving invalidity at the Patent Office became grounds for stripping it of separately prevailing on non-infringement. That ruling made no sense; if anything, Dish’s success in proving invalidity should reinforce, rather than undo, its status as the prevailing party. It would have also created a big new downside for defendants considering IPR proceedings: if they won, they could have lost prevailing party status in district court, and thus the possibility of recovering the cost of paying their attorneys. So EFF weighed in, filing an amicus brief in support of Dish’s prevailing party status with the Federal Circuit in February of 2019.
More than a year later, on April 21, 2020, the Federal Circuit finally ruled, agreeing with EFF that the district court’s finding of non-infringement made Dish the prevailing party. Dish’s parallel success in proving invalidity at the Patent Office did not change that. The Federal Circuit’s decision makes clear that proving invalidity at the Patent Office doesn’t make an earlier non-infringement win—and thus the possibility of recovering attorneys’ fees—disappear. That principle is important: if patent owners could save themselves from fee awards by having their patents invalidated by the Patent Office, they would have a perverse incentive to assert the worst patents in litigation. But a month after the Federal Circuit issued its decision, Dragon filed a petition asking the full court to convene and re-hear the case. On June 24, the Federal Circuit finally denied that petition, making its decision final. Even though the Federal Circuit was skeptical that fees would ultimately be recoverable in this case, its decision will help protect the IPR system, as well as proper fee-shifting. Those who need an efficient way to challenge a wrongly-granted patent will have one, and it won’t make the cost of district court litigation even greater.

  • Tell Your Senator: Vote No on the EARN IT Act
    by Joe Mullin on June 30, 2020 at 7:40 pm

This month, Americans are out in the streets, demanding police accountability. But rather than consider reform proposals, a key Senate committee is focused on giving unprecedented powers to law enforcement—including the ability to break into our private messages by creating encryption backdoors.

TAKE ACTION: STOP THE EARN IT BILL BEFORE IT BREAKS ENCRYPTION

This Thursday, the Senate Judiciary Committee is scheduled to debate and vote on the so-called EARN IT Act, S. 3398. It’s a bill that would allow the government to scan every message sent online. The EARN IT Act creates a 19-person commission that would be dominated by law enforcement agencies, with Attorney General William Barr at the top. This unelected commission will be authorized to make new rules on “best practices” that Internet websites will have to follow. Any Internet platform that doesn’t comply with this law enforcement wish list will lose the legal protections of Section 230. The new rules that Attorney General Barr creates won’t just apply to social media or giant Internet companies. Section 230 is what protects owners of small online forums, websites, and blogs with comment sections from being punished for the speech of others. Without Section 230 protections, platform owners and online moderators will have every incentive to over-censor speech, since they could potentially be sued out of existence based on someone else’s statements. Proponents of EARN IT say that the bill isn’t about encryption or privacy. They’re cynically using crimes against children as an excuse to change online privacy standards. But it’s perfectly clear what the sponsors’ priorities are. Sen. Lindsey Graham (R-SC), one of EARN IT’s cosponsors, has introduced another bill that’s a direct attack on encrypted messaging. And Barr has said over and over again that encrypted services should be forced to offer police special access. The EARN IT Act could end user privacy as we know it.
Tech companies that provide private, encrypted messaging could have to re-write their software to allow police special access to their users’ messages. This bill is a power grab by police. We need your help to stop it today. Contact your Senator and tell them to oppose the EARN IT Act.

TAKE ACTION: STOP THE EARN IT BILL BEFORE IT BREAKS ENCRYPTION

  • 5 Serious Flaws in the New Brazilian “Fake News” Bill that Will Undermine Human Rights [UPDATED]
    by Katitza Rodriguez on June 29, 2020 at 11:33 pm

Update: On Tuesday night (6/30), the Brazilian Senate approved “PLS 2630/2020”, the so-called “Fake News” bill. A final amendment cut back on article 7 (“Account Registration”) so that mandatory identification no longer applies to all users and is, in principle, optional in general. Under the revised text, companies “may” demand identification from users where there are complaints of non-compliance with the “fake news” law, or when there is reason to suspect they are bots, are behaving inauthentically, or are assuming someone else’s identity. Social networks and private messaging apps are also expected to create some means of detecting fraud in account creation (Article 7, paragraph one). These new provisions seem to match most companies’ existing practices but may be expanded to also include the new obligations established in the “fake news” bill. An amendment narrowed article 8 so that it applies only to private messaging accounts “exclusively linked to cell phone numbers” (still potentially tremendously confusing in practice); private messaging services are ordered to check with mobile operators which numbers have had their contracts terminated, in order to suspend the related accounts in the app. This provision now excludes social networks. An amendment removed “within the scope of its service” from article 9, which says that private messaging services must limit the size of private groups and lists. This last change can potentially undermine innovation in future products based upon peer-to-peer messaging systems that, by design, cannot control the size of a group. Another amendment cut back on article 10, “the traceability provision”, forcing only private messaging applications to retain the chain of all communications that have been “massively forwarded”, for the purpose of potential criminal investigation or prosecution. Previous versions of the bill, as explained in our post below, also included social networks.
The virality of a message and the thresholds do not change the privacy and due process rights of the original sender. Forwarding a popular message does not mean you should automatically be under suspicion. The traceability provision is a “tech mandate” that compels private messaging apps to change their privacy-by-design platforms to weaken their privacy protections. All the other parts of the provisions discussed in this post remained intact.

—The original article was published on Monday morning, June 29th, 2020.

The Brazilian Senate is scheduled to vote this week on the most recent version of “PLS 2630/2020”, the so-called “Fake News” bill. This new version, supposedly aimed at safety and at curbing “malicious coordinated actions” by users of social networks and private messaging apps, will allow the government to identify and track countless innocent users who haven’t committed any wrongdoing in order to catch a few malicious actors. The bill creates a clumsy regulatory regime to intervene in the technology and policy decisions of both public and private messaging services in Brazil, requiring them to institute new takedown procedures, enforce various kinds of identification of all their users, and greatly increase the amount of information that they gather and store from and about their users. They also have to ensure that all of that information can be directly accessed by staff in Brazil, so it is directly and immediately available to their government—bypassing the strong safeguards for users’ rights of existing international mechanisms such as Mutual Legal Assistance Treaties. This sprawling bill is moving quickly, and it comes at a very bad time. Right now, secure communication technologies are more important than ever to cope with the COVID-19 pandemic, to collaborate and work securely, and to protest or organize online. It’s also really important for people to be able to have private conversations, including private political conversations.
There are many things wrong with this bill, far more than we could fit into one article. For now, we’ll do a deep dive into five serious flaws in the existing bill that would undermine privacy, expression, and security.

Flaw 1: Forcing Social Media and Private Messaging Companies to Collect Legal Identification of All Users

The new draft of Article 7 is both clumsy and contradictory. First, the bill (Article 7, paragraph 3) requires “large” social networks and private messaging apps (those that offer service in Brazil to more than two million users) to identify every account’s user by requesting their national identity cards. It’s a retroactive and general requirement, meaning that identification must be requested for each and every existing user. Article 7’s main provision is not limited to identification of a user under a court order; it also applies when there is a complaint about an account’s activity, or when the company finds itself unsure of a user’s identity. While users are explicitly permitted to use pseudonyms, they may not keep their legal identities confidential from the service provider. Compelling companies to identify an online user should only be done in response to a request by a competent authority, not a priori. In India, a similar proposal is expected to be released by the country’s IT Ministry, although reports indicate that ID verification would be optional. In 2003, Brazil made SIM card registration mandatory for prepaid cell phones, requiring prepaid subscribers to present proof of identity, such as their official national identity card, driver’s license, or taxpayer number. Article 39 of the new draft expands that law by creating new mandatory identification requirements for obtaining telephone SIM cards, and Article 8 explicitly requires private messaging applications that identify their users via an associated telephone number to delete accounts whenever the underlying telephone number is deregistered.
Telephone operators are required to help with this process by providing a list of numbers that are no longer used by the original subscriber. SIM card registration undermines people’s ability to communicate, organize, and associate with others anonymously. David Kaye, the United Nations’ Special Rapporteur on Freedom of Expression and Opinion, has asked states to refrain from making the identification of users a condition for access to digital communications and online services, and from requiring SIM card registration for mobile users. Even if the draft text eliminates Article 7, the draft remains dangerous to free expression because authorities will still be allowed to identify users of private messaging services by linking a cell phone number to an account. The Brazilian authorities will have to unmask the identity of the internet user by following domestic procedures for accessing such data from the telecom provider. Internet users will be obliged to hand over identifying information to big tech companies if Article 7 is approved as currently written, with or without paragraph 3. The compulsory identification provision is a blatant infringement on the due process rights of individuals. Countries like China and South Korea have mandated that users register their real names and identification numbers with online service providers. South Korea used to require users of websites with more than 100,000 visitors per day to authenticate their identities by entering their resident ID numbers. But South Korea’s Supreme Court struck down the law as unconstitutional, stating that “the [mandatory identification] system does not seem to have been beneficial to the public. Despite the enforcement of the system, the number of illegal or malicious postings online has not decreased.”

Flaw 2: Forcing Social Media and Private Messaging Companies to Track and Retain Immense Logs of User Communications

Man: What happened?
Police officer: You shared that message that went viral accusing someone of a corruption scheme. They’re saying that it’s a lie and is calúnia.

Descriptive text: It’s easy to imagine how the new traceability rule could be abused and make us all afraid to share content online. We can’t let that happen.

Article 10 compels social networks and private messaging applications to retain the chain of all communications that have been “massively forwarded”, for the purpose of potential criminal investigation or prosecution. The new draft requires three months of data storage of the complete chain of communication for such messages, including the date and time of forwarding and the total number of users who receive the message. These obligations are conditioned on virality thresholds: they apply when an instance of a message has been forwarded to groups or lists by more than 5 users within 15 days, and the message’s content has reached 1,000 or more users. The service provider is also apparently expected to temporarily retain this data for all forwarded messages during the 15-day period in order to determine whether or not the virality threshold for “massively forwarded” will be met. This provision blatantly infringes on due process rights by compelling providers to retain everyone’s communications before anyone has committed any legally defined offense. There have also been significant changes to how this text interacts with encryption and with communications providers’ efforts to know less about what their users are doing. This provision may create an incentive to weaken end-to-end encryption, because end-to-end encrypted services may not be able to comply with provisions requiring them to recognize when a particular message has been independently forwarded a certain number of times without undermining the security of their encryption.
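To make the bookkeeping burden concrete, here is a minimal sketch of the kind of record-keeping Article 10 would seem to require of a provider. The thresholds are the ones described above (more than 5 forwarding users within 15 days, reaching 1,000 or more recipients); the class names and data layout are our own hypothetical illustration, not anything specified in the bill or used by any real provider.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Thresholds as described in the bill text (all other names are hypothetical).
WINDOW = timedelta(days=15)
MIN_FORWARDERS = 5      # "forwarded ... by more than 5 users within 15 days"
MIN_RECIPIENTS = 1000   # "reached 1,000 or more users"

@dataclass
class ForwardEvent:
    user_id: str         # who forwarded the message
    when: datetime       # when they forwarded it
    recipients: int      # how many users received that forward

@dataclass
class MessageTrace:
    events: list = field(default_factory=list)

    def record(self, event: ForwardEvent) -> None:
        # The provider must keep this for EVERY forwarded message for at
        # least 15 days, just to find out whether the thresholds are met.
        self.events.append(event)

    def massively_forwarded(self, now: datetime) -> bool:
        recent = [e for e in self.events if now - e.when <= WINDOW]
        forwarders = {e.user_id for e in recent}
        reached = sum(e.recipients for e in recent)
        return len(forwarders) > MIN_FORWARDERS and reached >= MIN_RECIPIENTS
```

Note what this implies: even this toy version must log who forwarded what, when, and to how many people, for every forwarded message, before any offense has occurred. Once a message crosses the thresholds, the full chain would then have to be retained for three months.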
Although the current draft (unlike previous versions) does not create new crimes, it requires providers to trace messages before any crime has been committed, so the information can be used in the future in the context of a criminal investigation or prosecution for specific crimes defined in articles 138 to 140, or article 147, of Brazil’s Penal Code, such as defamation, threats, and calúnia. This means, for example, that if you share a message that denounces corruption of a local authority and it gets forwarded more than 1,000 times, authorities may criminally accuse you of calúnia against your local authority. Companies must limit the retention of personal data to what is reasonably necessary and proportionate to certain legitimate business purposes. This is “data minimization”: the principle that any company should minimize its processing of consumer data. Minimization is an important tool in the data protection toolbox. This bill goes against that, favoring dangerous big data collection practices.

Flaw 3: Banning Messaging Companies from Allowing Broadcast Groups, Even if Users Sign Up

Articles 9 and 11 require broadcast and discussion groups in private messaging tools to have a maximum membership limit (something that WhatsApp does today, but that not every communications tool necessarily does or will do), and require that the ability to reach mass audiences via private messaging platforms be strictly limited and controlled, even when those audiences opt in. The vision of the bill seems to be that mass discussion and mass broadcast are inherently dangerous and must only happen in public, and that no one should create forums or media for these interactions to happen in a truly private way, even with clear and explicit consent by the participants or recipients.
Suppose an organization like an NGO, a labor union, or a political party wanted to have a discussion forum among its whole membership, or to send its newsletter to all its members who’ve chosen to receive it. It wouldn’t be allowed to do this through a tool similar to WhatsApp—at least once some (unspecified) audience size limit was reached. Per articles 9 and 11, the organization would have to use another platform (not a private messaging tool), and so the content would be visible to and subject to the control of its operator.

Flaw 4: Forcing Social Media and Messaging Companies to Make Private User Logs Available Remotely

Article 37 compels large social networks and private messaging apps to appoint legal representatives in Brazil. It also forces those companies to provide remote access to their user databases and logs to their staff in Brazil, so the local employees can be directly forced to turn them over. This undermines user security and privacy. It increases the number of employees (and devices) that can access sensitive data and reduces the company’s ability to control vulnerabilities and unauthorized access—not least because this approach is global in scale: should it be adopted in Brazil, it could be replicated by other countries. Each new person and each new device adds a new security risk.

Flaw 5: No Limitations on Applying this Law to Users Outside of Brazil

Paragraphs 1 and 2 of Article 1 provide some jurisdictional exclusions, but all of these are applied at the company level—that is, a foreign company could be exempt if it is small (fewer than 2,000,000 users) or does not offer services to Brazil. None of these limitations, however, relate to the users’ nationality or location. Thus, the bill, by its terms, requires a company to create certain policies and procedures about content takedowns, mandatory identification of users, and other topics, which are not themselves in any way limited to people based in Brazil.
Even if the intent is only to force the collection of ID documents from users who are based in Brazil, the bill neglects to say so. Addressing “Fake News” Without Undermining Human Rights There are many innovative new responses being developed to help cut down on abuses of messaging and social media apps, through both policy responses and technical solutions. WhatsApp, for example, already limits the number of recipients a single message can be forwarded to at a time and shows users when messages were forwarded; viral messages are labeled with double arrows to indicate they did not originate from a close contact. However, shutting down bad actors cannot come at the expense of silencing millions of other users, invading their privacy, or undermining their security. To ensure that human rights are preserved, the Brazilian legislature must reject the current version of this bill. Moving forward, human rights such as privacy, expression, and security must be baked into the law from the beginning.
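The forwarding limits and viral labeling described above can be sketched as a toy policy check. (This is purely illustrative: WhatsApp’s real implementation is not public, and the thresholds and label text here are assumptions, not the product’s actual values.)

```python
# Toy sketch of per-forward recipient limits and viral-message labeling,
# loosely modeled on the behavior described above. The constants are
# illustrative assumptions, not WhatsApp's real values.

MAX_FORWARD_RECIPIENTS = 5   # assumed cap on recipients per forward action
VIRAL_THRESHOLD = 5          # assumed forward count that triggers the label

def forward_message(forward_count: int, recipients: list[str]):
    """Return (recipients, label) for one forward action, or raise if the
    forward exceeds the per-action recipient cap."""
    if len(recipients) > MAX_FORWARD_RECIPIENTS:
        raise ValueError("too many recipients for a single forward")
    if forward_count >= VIRAL_THRESHOLD:
        label = "Forwarded many times"   # shown with double arrows
    else:
        label = "Forwarded"
    return recipients, label
```

The point of a design like this is that it throttles virality without reading message content, which is why EFF points to it as an alternative to traceability mandates.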

  • Now Is The Time: Tell Congress to Ban Federal Use of Face Recognition
    by Matthew Guariglia on June 29, 2020 at 10:03 pm

    Cities and states across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, elected officials and activists rightfully know that face surveillance gives police the power to track us wherever we go, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment protected activities. That’s why we’re asking you to contact your elected officials and tell them to co-sponsor and vote yes on the Facial Recognition and Biometric Technology Moratorium Act of 2020. Take action TELL congress: END federal use of face surveillance Three companies—IBM, Amazon, and Microsoft—have recently ended or suspended sales of face recognition to police departments, acknowledging the harms that this technology causes. Police and other government use of this technology cannot be responsibly regulated. Face surveillance in the hands of the government is a fundamentally harmful technology. Congress, states, and cities should take this momentary reprieve, during which police will not be able to acquire new face surveillance technology from these companies, as an opportunity to ban government use of the technology once and for all. Face surveillance disproportionately hurts vulnerable communities. Recently the New York Times published a long piece on the case of Robert Julian-Borchak Williams, who was arrested by Detroit police after face recognition technology wrongly identified him as a suspect in a theft case.  The ACLU filed a complaint on his behalf with the Detroit police. The problem isn’t just that studies have found face recognition highly inaccurate when it comes to matching the faces of people of color. The larger concern is that law enforcement will use this invasive and dangerous technology, as it unfortunately uses all such tools, to disparately surveil people of color. 
This federal ban on face surveillance would apply to opaque and increasingly powerful agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection. The bill would ensure that these agencies cannot use this invasive technology to track, identify, and misidentify millions of people. Tell your senators and representatives they must co-sponsor and pass the Facial Recognition and Biometric Technology Moratorium Act of 2020, introduced by Senators Markey and Merkley and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke. This bill would be a critical step to ensuring that mass surveillance systems don’t use your face to track, identify, or incriminate you. The bill would ban the use of face surveillance by the federal government, as well as withhold certain federal funding streams from local and state governments that use the technology. That’s why we’re asking you to insist your elected officials co-sponsor and vote “Yes” on the Facial Recognition and Biometric Technology Moratorium Act of 2020, S.4084 in the Senate. Take action TELL congress: END federal use of face surveillance

  • Your Phone Is Vulnerable Because of 2G, But it Doesn’t Have to Be
    by Cooper Quintin on June 29, 2020 at 9:48 pm

Security researchers have been talking about the vulnerabilities in 2G for years. 2G technology, which at one point underpinned the entire cellular communications network, is widely known to be vulnerable to eavesdropping and spoofing. But even though its insecurities are well known and it has become archaic, many people still rely on it as their main mobile technology, especially in rural areas. Even as carriers start rolling out the fifth generation of mobile communications, known as 5G, 2G technology is still supported by modern smartphones. The manufacturers of operating systems for smartphones (e.g. Apple, Google, and Samsung) are in the perfect position to solve this problem by allowing users to switch off 2G. What is 2G and why is it vulnerable? 2G is the second generation of mobile communications, created in 1991. It is an old technology, designed at a time when certain risk scenarios were not considered in protecting its users. As the years have gone by, many vulnerabilities have been discovered in 2G and its companion signaling protocol, SS7. The primary problem with 2G stems from two facts. First, it uses weak encryption between the tower and the device that can be cracked in real time by an attacker to intercept calls or text messages. In fact, the attacker can do this passively, without ever transmitting a single packet. Second, 2G provides no authentication of the tower to the phone, which means that anyone can seamlessly impersonate a real 2G tower and your phone will never be the wiser. Cell-site simulators sometimes work this way. They can exploit security flaws in 2G in order to intercept your communications. Even though many of the security flaws in 2G have been fixed in 4G, more advanced cell-site simulators can take advantage of remaining flaws to downgrade your connection to 2G, making your phone susceptible to the above attacks.
This makes every user vulnerable—from journalists and activists to medical professionals, government officials, and law enforcement. How do we fix it? 3G, 4G, and 5G deployments fix the worst vulnerabilities in 2G that allow cell-site simulators to eavesdrop on SMS text messages and phone calls (though there are still some vulnerabilities left to fix). Unfortunately, many people worldwide still depend on 2G networks. As a result, brand-new, top-of-the-line phones on the market today—such as the Samsung Galaxy, Google Pixel, and iPhone 11—still support 2G technology. And the vast majority of these smartphones don’t give users any way to switch off 2G support. That means these modern 3G and 4G phones are still vulnerable to being downgraded to 2G. The simplest solution for users is to use encrypted messaging such as Signal whenever possible. But a better solution would be the ability to switch 2G off entirely so the connection can’t be downgraded. Unfortunately, this is not an option on iPhones or most Android phones. Apple, Google, and Samsung should let users switch 2G off so they can better protect themselves. Ideally, smartphone OS makers would block 2G by default and allow users to turn it back on if they need it for connectivity in a remote area. Either way, with this simple action, Apple, Google, and Samsung could protect millions of their users from the worst harms of cell-site simulators.
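The downgrade attack described above can be illustrated with a small sketch. (This is a toy model, not a real telephony API: on real phones the radio access technology is exposed through platform interfaces such as Android’s TelephonyManager, and the generation table and alerting logic here are assumptions for illustration only.)

```python
# Toy model of spotting a connection downgrade to 2G, the condition a
# cell-site simulator tries to force. The technology-to-generation
# mapping below is standard, but the detection logic is illustrative.

GENERATION = {
    "GPRS": 2, "EDGE": 2, "GSM": 2,   # 2G: weak crypto, no tower auth
    "UMTS": 3, "HSPA": 3,             # 3G
    "LTE": 4,                         # 4G
    "NR": 5,                          # 5G
}

def downgraded_to_2g(previous_rat: str, current_rat: str) -> bool:
    """True if the radio access technology dropped from 3G-or-better to 2G."""
    prev_gen = GENERATION.get(previous_rat, 0)
    curr_gen = GENERATION.get(current_rat, 0)
    return prev_gen > 2 and curr_gen == 2
```

A user-facing 2G kill switch, as the article asks for, would make this check unnecessary: the baseband would simply refuse to attach to a 2G tower at all.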

  • Egypt’s Crackdown on Free Expression Will Cost Lives
    by Jason Kelley on June 29, 2020 at 7:27 pm

For years, EFF has been monitoring a dangerous situation in Egypt: journalists, bloggers, and activists have been harassed, detained, arrested, and jailed, sometimes without trial, in increasing numbers by the Sisi regime. Since the COVID-19 pandemic began, these incidents have skyrocketed, affecting free expression both online and offline. As we’ve said before, this crisis makes it more important than ever for individuals to be able to speak out and share information with one another online. Free expression and access to information are particularly critical under authoritarian rulers and governments that dismiss or distort scientific data. But at a time when true information about the pandemic may save lives, the Egyptian government has instead expelled journalists from the country for their reporting on the pandemic, and arrested others on spurious charges for seeking information about prison conditions. Shortly after the coronavirus crisis began, a reporter for The Guardian was deported, while a reporter for The New York Times was issued a warning. Just last week the editor of Al Manassa, Nora Younis, was arrested on cybercrime charges (and later released). And the Committee to Protect Journalists reported today that at least four journalists arrested during the pandemic remain imprisoned. Social media is also being monitored more closely than ever, with disastrous results: the Supreme Council for Media Regulation has banned the publishing of any data that contradicts the Ministry of Health’s official figures, and has sent warning letters to news websites and social network accounts it claims are sharing false news, while individuals have been arrested for posting about the virus. The far-reaching ban, justified on national security grounds, also limits the use of pseudonyms by journalists and criminalizes discussion of other “sensitive” topics, such as Libya; it is rightfully being seen as censorship across the country.
At a moment when obtaining true information is extremely important, the Egyptian government’s escalating attack on free expression is especially dangerous. The government’s attacks on expression aren’t only damaging free speech online: rather than limiting the number of individuals in prison who are potentially exposed to the virus, Egyptian police have made matters worse by harassing, beating, and even arresting protestors who demand the release of prisoners in dangerously overcrowded cells or simply ask for information about their arrested loved ones. Just last week, the family of Alaa Abd El Fattah, a leading Egyptian coder, blogger, and activist whom we’ve profiled in our Offline campaign, was attacked by police while protesting in front of Tora Prison. The next day, Alaa’s sister, Sanaa Seif, was forced into an unmarked car in front of the Prosecutor-General’s office as she arrived to submit a complaint regarding the assault and Alaa’s detention. She is now being held in pre-trial detention on charges that include “broadcast[ing] fake news and rumors about the country’s deteriorating health conditions and the spread of the coronavirus in prisons” on Facebook—according to police, for a fifteen-day period, though there is no way to know for sure that it will end then. All of these actions put the health and safety of the Egyptian population at risk. We join the international coalition of human rights and civil liberties organizations demanding that both Alaa and Sanaa be released, and asking Egypt’s government to immediately halt its assault on free speech and free expression. We must lift up the voices of those who are being silenced to ensure the safety of everyone throughout the country. Banner image CC-BY, by Molly Crabapple.

  • Dutch Law Proposes a Wholesale Jettisoning of Human Rights Considerations in Copyright Enforcement
    by Cory Doctorow on June 29, 2020 at 5:33 pm

With the passage of last year’s Copyright Directive, the EU demanded that member states pass laws that reduce copyright infringement by internet users while also requiring that they safeguard the fundamental rights of users (such as the right to free expression) and the limitations to copyright. These safeguards must include protections for the new EU-wide exemption for commentary and criticism. Meanwhile, member states are also required to uphold the GDPR, which safeguards users against mass, indiscriminate surveillance, even while somehow monitoring everything every user posts to decide whether it infringes copyright. Serving these goals means that when EU member states turn the Directive into their national laws (the “transposition” process), their governments will have to decide how much weight to give to each part of the Directive, and courts will have to figure out whether the resulting laws pass constitutional muster while satisfying the requirement that EU members follow the Directive’s rules. The initial forays into transposition were catastrophic. First came France’s disastrous proposal, which “balanced” copyright enforcement with Europeans’ fundamental rights to fairness, free expression, and privacy by simply ignoring those public rights. Now, the Dutch Parliament has landed in the same untenable legislative cul-de-sac as its French counterparts, proposing a Made-in-Holland version of the Copyright Directive that omits: legally sufficient protections for users unjustly censored due to false accusations of copyright infringement; legally sufficient protection for users whose work makes use of the mandatory, statutory exemptions for parody and criticism; a ban on “general monitoring”—that is, continuous, mass surveillance; and legally sufficient protection for “legitimate uses” of copyright works. These are not optional elements of the Copyright Directive.
These protections were enshrined in the Directive as part of the bargain meant to balance the fundamental rights of Europeans against the commercial interests of entertainment corporations. The Dutch Parliament’s willingness to treat these human rights-preserving measures as legislative inconveniences, paying them mere lip service, is a grim harbinger of other EU nations’ pending lawmaking, and an indictment of the Dutch Parliament’s commitment to human rights. EFF was pleased to lead a coalition of libraries, human rights NGOs, and users’ rights organizations in an open letter to the EU Commission asking it to ensure that national implementations respect human rights. In April, we followed this letter with a note to the EC’s Copyright Stakeholder Dialogue Team, setting out the impossibility of squaring the Copyright Directive with the GDPR’s rules protecting Europeans from “general monitoring,” and calling on them to direct member states to create test suites that can evaluate whether companies’ responses to their laws live up to their human rights obligations. Today, we renew these and other demands, and we ask that Dutch Parliamentarians do their job in transposing the Copyright Directive, with the understanding that the provisions protecting Europeans’ rights are not mere ornaments, and that any law failing to uphold those provisions is on a collision course with years of painful, costly litigation.

  • Your Objections to the Google-Fitbit Merger
    by Cory Doctorow on June 25, 2020 at 10:21 pm

EFF Legal Intern Rachel Sommers contributed to this post. When Google announced its intention to buy Fitbit in April, we had deep concerns. Google, a notoriously data-hungry company with a track record of reneging on its privacy policies, was about to buy one of the most successful wearables companies in the world—after Google had repeatedly tried to launch a competing product, only to fail, over and over. Fitbit users give their devices extraordinary access to their sensitive personal details, from their menstrual cycles to their alcohol consumption. In many cases, these “customers” didn’t come to Fitbit willingly, but instead were coerced into giving the company their data in order to get the full benefit of their employer-provided health insurance. Companies can grow by making things that people love, or they can grow by buying things that people love. One produces innovation; the other produces monopolies. Last month, EFF put out a call for Fitbit owners’ own thoughts about the merger, so that we could tell your story to the public and to the regulators who will have the final say over the merger. You obliged with a collection of thoughtful, insightful, and illuminating remarks that you generously permitted us to share. Here’s a sampling from the collection: From K.H.: “It makes me very uncomfortable to think of Google being able to track and store even more of my information. Especially the more sensitive, personal info that is collected on my Fitbit.” From L.B.: “Despite the fact that I continue to use a Gmail account (sigh), I never intended for Google to own my fitness data and have been seeking an alternative fitness tracker ever since the merger was announced.” From B.C.: “I just read your article about this and wanted to say that while I’ve owned and worn a Fitbit since the Charge (before the HR), I have been looking for an alternative since I read that Google was looking to acquire Fitbit.
I really don’t want “targeted advertisements” based on my health data or my information being sold to the highest bidder.” From T.F.: “I stopped confirming my period dates, drinks and weight loss on my fitbit since i read about the [Google] merger. Somehow, i would prefer not to become a statistic on [Google].”  From D.M.: “My family has used Fitbit products for years now and the idea of Google merging with them, in my opinion, is good and bad. Like everything in the tech industry, there are companies that hog all of the spotlight like Google. Google owns so many smaller companies and ideas that almost every productivity and shopping app on any mobile platform is in some way linked or owned by them. Fitbit has been doing just fine making their own trackers and products without any help from the tech giants, and that doesn’t need to stop now. I’m not against Google, but they have had a few security issues and their own phone line, the pixel, hasn’t been doing that well anyway. I think Fitbit should stay a stand alone company and keep making great products.” From A.S.: “A few years back, I bought a Fitbit explicitly because they were doing well but didn’t seem to be on the verge of being acquired. I genuinely prefer using Android over iOS, and no longer want to take on the work of maintaining devices on third party OSes, so I wanted to be able to monitor steps without thinking it was all going to a central location. Upon hearing about the merger, I found myself relieved I didn’t use the Fitbit for long (I found I got plenty of steps already and it was just a source of anxiety) so that the data can’t be merged with my massive Google owned footprint.” From L.O.: “A few years ago, I bought a Fitbit to track my progress against weight-loss goals that I had established. Moreover, I have a long-term cardiac condition that requires monitoring by a third-party (via an ICD). So I wanted to have access to medical data that I could collect for myself. 
I had the choice to buy either an Apple Watch, Samsung Gear, Google Fit gear, or a Fitbit. I chose to purchase a Fitbit for one simple reason: I wanted to have a fitness device that did not belong to an OEM and/or data scavenger. So I bought a very expensive Fitbit Charge 2. I was delighted by the purchase. I had a top-of-the-line fitness device. And I had confidence that my intimate and personal data would be secure; I knew that my personal and confidential data would not be used to either target me or to include me in a targeted group. Now that Google has purchased Fitbit, I have few options left that will allow me to confidentially collect and store my personal (and private) fitness information. I don’t trust Google with my data. They have repeatedly lied about data collection. So I have no confidence in their assertions that they will once again “protect” my data. I trust that their history of extravagant claims followed by adulterous actions will be repeated. My fears concerning Google are well-founded. And as a result, I finally had to switch my email to an encrypted email from a neutral nation (i.e., Switzerland). And now, I have to spend even more money to protect myself from past purchases that are being hijacked by a nefarious content broker.  Why should I have to spend even more money in order to ensure my privacy? My privacy is guaranteed by the United States Constitution, isn’t it? And it in an inalienable right, isn’t it? Since when can someone steal my right to privacy and transform it into their right to generate even more money? As a citizen, I demand that my right to privacy be recognized and defended by local, state, and federal governments. And in the meantime, I’m hoping that someone will create a truly private service for collecting and storing my personal medical information.” From E.R.: “Around this time last year, I went to the Nest website. 
I am slowly making my condo a smart home with Alexa and I like making sure everything can connect to each other. I hopped on and was instantly asked to log in via Google. I was instantly filled with regret. I had my thermostat for just over a year and I knew that I hadn’t done my research and the Google giant had one more point of data collection on me – plus it was connected to my light bulbs and Echo. Great.  Soon, I learn the Versa 2 is coming out – best part? It has ALEXA! I sign up right away—this is still safe. Sure. Amazon isn’t that great at data secrets, but a heck of a lot better than Google connected apps. Then, I got the news of the merger. I told my boyfriend this would be the last FitBit I owned—but have been torn as it has been a motivating tool for me and a way to be in competition with my family now that we live in different states. But it would be yet another data point for Google, leaving me wondering when it will possibly end.  This may be odd coming from a Gmail account—but frankly, Gmail is the preferred UI for me. I tried to avoid Google search, but it proved futile when I just wasn’t getting the same results. Google slowly has more and more of my life—from YouTube videos, to email, to home heating, and now fitness… when is enough enough?” From J.R.: “My choice to buy a Fitbit device instead of using a GoogleFit related device/app is largely about avoiding giving google more data.  My choice to try Waze during its infancy was as much about its promise to the future as it was that it was not a Google Product and therefore google wouldn’t have all of my families sensitive driving data. Google paid a cheap 1 Billion to purchase all my data from Waze and then proceed to do nothing to improve the app. The app actually performs worse now on the same phone, sometimes taking 30 minutes to acquire GPS satellites that Google Maps (which i can’t uninstall) see immediately.  Google now has all my historic driving data for years…. 
besides the fact that there is no real competitor to Waze and it does not seem like any company will ever try to compete with Google again on Maps and traffic data… why not continue using it? from my history, they can probably predict my future better than me. The same with Fitbit… Now google will know every place I Run, Jog and walk…. not just where I park but exactly where i go…. is it not enough for them to know i went to the hospital but now they will know which floor (elevation), which wing (precise location data)…. they will get into mapping hospitals and other areas…. they will know exactly where we are and what we are doing….   They will also sell our health data to various types of insurance companies, etc. I believe Google should be broken up and not allowed to share data between the separate companies. I don’t believe google should be able to buy out companies that harvest data as part of their mission. If google buys fitbit, i will certainly close the account, delete what I can from it and sell the fitbit (if it has value left)….” While the overwhelming majority of comments sought to halt the merger, a few people wrote to us in support of it. Here’s one of those comments. From T.W.: “I’m really looking forward to the merger. I see the integration of Fitbit and Google Fit as a great bonus and hope to get far more insights than I get now. Hopefully the integration will progress really soon!” If you’re a Fitbit owner and you’re alarmed by the thought of your data being handed to Google, we’d love to hear from you. Write to us at [email protected], and please let us know: If we can publish your story (and, if so, whether you’d prefer to be anonymous); If we can share your story with government agencies; If we can share your email address with regulators looking for testimony.

  • EFF and Durie Tangri Join Forces to Defend Internet Archive’s Digital Library
    by Rebecca Jeschke on June 25, 2020 at 4:42 pm

Free, Public-Service Lending Program Threatened by Baseless Copyright Lawsuit
San Francisco – The Electronic Frontier Foundation (EFF) is joining forces with the law firm of Durie Tangri to defend the Internet Archive against a lawsuit that threatens their Controlled Digital Lending (CDL) program, which helps people all over the world check out digital copies of books owned by the Archive and its partner libraries. “Libraries protect, preserve, and make the world’s information accessible to everyone,” said Internet Archive Founder and Digital Librarian Brewster Kahle. “The publishers are suing to shut down a library and remove books from our digital shelves. This will have a chilling effect on a longstanding and widespread library practice of lending digitized books.” The non-profit Internet Archive is a digital library, preserving and providing access to cultural artifacts of all kinds in electronic form. CDL allows people to check out digital copies of books for two weeks or less, and only permits patrons to check out as many copies as the Archive and its partner libraries physically own. That means that if the Archive and its partner libraries have only one copy of a book, then only one patron can borrow it at a time, just like any other library. Four publishers sued the Archive earlier this month, alleging that CDL violates their copyrights. In their complaint, Hachette, HarperCollins, Wiley, and Penguin Random House claim CDL has cost their companies millions of dollars and is a threat to their businesses. “EFF is proud to stand with the Archive and protect this important public service,” said EFF Legal Director Corynne McSherry. “Controlled digital lending helps get books to teachers, children and the general public at a time when that is more needed and more difficult than ever. It is no threat to any publisher’s bottom line.” “Internet Archive is lending library books to one patron at a time,” said Durie Tangri partner Joe Gratz.
“That’s what libraries have done for centuries, and we’re proud to represent Internet Archive in standing up for the rights of libraries in the digital age.” Contact: Corynne McSherry, Legal Director, [email protected]; Joe Gratz, Partner at Durie Tangri, [email protected]; Chris Freeland, Internet Archive, [email protected]
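The own-to-loan rule at the heart of Controlled Digital Lending is simple enough to state in code. (A minimal sketch of the invariant as described above; the class and method names are our own, not the Internet Archive’s software.)

```python
# Minimal sketch of Controlled Digital Lending's core invariant:
# never lend more digital copies of a title than physical copies owned.

class CdlTitle:
    def __init__(self, copies_owned: int):
        self.copies_owned = copies_owned
        self.copies_loaned = 0

    def check_out(self) -> bool:
        """Lend one digital copy if an owned copy is available."""
        if self.copies_loaned < self.copies_owned:
            self.copies_loaned += 1
            return True
        return False  # all owned copies are out; the patron must wait

    def check_in(self) -> None:
        """Return a digital copy, freeing it for the next patron."""
        if self.copies_loaned > 0:
            self.copies_loaned -= 1
```

If a library owns one copy, only one patron can borrow it at a time, which is exactly how physical lending has always worked.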

  • California Agency Blocks Release of Police Use of Force and Surveillance Training, Claiming Copyright
    by Dave Maass on June 25, 2020 at 2:30 pm

Under a California law that went into effect on January 1, 2020, all law enforcement training materials must be “conspicuously” published on the California Commission on Peace Officer Standards and Training (POST) website. However, if you visit POST’s Open Data hub and try to download the officer training materials relating to face recognition technology or automated license plate readers (ALPRs), or the California Peace Officers Association’s course on use of force, you will receive only a Word document containing a single sentence claiming the material is copyrighted. This is unlawful, and unacceptable, EFF told POST in a letter submitted today. Under the new California law, SB 978, POST must post law enforcement training materials online if the materials would be available to the public under the California Public Records Act. Copyrighted material is available to the public under the California Public Records Act—in fact, EFF obtained a full, unredacted copy of POST’s ALPR training through a records request just last year. The company that creates POST’s courses on ALPR and face recognition is the same company that sells the technology: Vigilant Solutions (now a subsidiary of Motorola Solutions). This company has a long history of including non-publication clauses in its contracts with law enforcement as a means to control its intellectual property. But, as we explain in our letter, SB 978 is clear: copyright law is not a valid excuse for POST to evade its transparency obligations. Just as bad, even when copyright isn’t an issue, POST has released only training course outlines, not the training materials themselves. Indeed, POST has made no training materials about the use of force available, sharing only the outlines. With police use of force currently a hotly debated issue throughout the state and nation, it is all the more concerning that POST is unlawfully hiding this material.
SB 978 was sponsored by California State Senator Steven Bradford and supported by EFF and a number of civil rights groups in order to create a new level of transparency and public accountability.  When EFF obtained the ALPR training last year, we found that Vigilant Solutions’ training was gravely outdated, included incorrect information, and raised questions about whether the presentation served the company’s commercial interests more than the public’s. EFF called on POST to suspend the course. However, since POST has not published the current training materials, the public does not yet know whether these problems have been adequately addressed.  Our elected officials can pass laws regulating the police, and watchdog bodies can review law enforcement policies, but if training materials are kept secret, it provides a back door for manufacturers of surveillance technology and private organizations to influence police practices without oversight or accountability.  If California POST is going to set and uphold police standards, then it cannot ignore the law. POST must make its training materials available online immediately. Read EFF’s letter to POST on SB 978 violations.

  • The Senate’s New Anti-Encryption Bill Is Even Worse Than EARN IT, and That’s Saying Something
    by Andrew Crocker on June 24, 2020 at 11:10 pm

    Right now, we rely on secure technologies like never before—to cope with the pandemic, to organize and march in the streets, and much more. Yet, now is the moment some members of the Senate Judiciary and Intelligence Committees have chosen to try to effectively outlaw encryption in those very technologies. The new Lawful Access to Encrypted Data Act—introduced this week by Senators Graham, Blackburn, and Cotton—ignores expert consensus and public opinion, which is unfortunately par for the course. But the bill is actually even more out of touch with reality than many other recent anti-encryption bills. Since January, we’ve been fighting the EARN IT Act, a dangerous anti-speech and anti-security bill that would hand a government commission, led by the Attorney General, the power to determine “best practices” online. It’s easy to see how that bill would enable an attack on service providers who provide encrypted communications, because the commission would be headed by Attorney General William Barr, who’s made his opposition to encrypted communications crystal clear. The best that EARN IT’s sponsors can muster in defense is that the bill itself doesn’t use the word “encryption”—asking us to trust that the commission won’t touch encryption.  But if EARN IT attempts to avoid acknowledging the elephant in the room, the Lawful Access to Encrypted Data Act puts it at the center of a three-ring circus. The new bill doesn’t bother with commissions or best practices. Instead, it would give the Justice Department the ability to require that manufacturers of encrypted devices and operating systems, communications providers, and many others must have the ability to decrypt data upon request. In other words, a backdoor.  The bill is sweeping in scope. 
It gives the government the ability to demand these backdoors in connection with a wide range of surveillance orders in criminal and national security cases, including Section 215 of the Patriot Act, a surveillance law so controversial that Congress can’t agree whether it should be reauthorized. Worse yet, the bill requires companies to figure out for themselves how to comply with a decryption directive. Their only grounds for resisting are to show it would be “technically impossible.” While that might seem like a concession to the long-standing expert consensus that technologists simply can’t build a “lawful access” mechanism that only the government can use, the bill’s sponsors are nowhere near that reasonable. As a hearing led by Senator Graham last December demonstrated, many legislators and law enforcement officials believe that even though any backdoor could be exploited by bad actors and put hundreds of millions of ordinary users at risk, that doesn’t mean it’s “technically impossible.” In fact, even if decryption would be “impossible” because the system is designed to be secure against everyone except the user who holds the key—as with full-disk encryption schemes designed by Apple and Google—that’s likely not a defense. Instead, the government can require the system to be redesigned. Not only does the bill disregard the security of users, it allows the government to support its need for a backdoor with one-sided secret evidence, any time it feels a public court proceeding would harm national security or “enforcement of criminal law.” As we’ve seen, the government already attempts to stretch the limits of surveillance laws in secret to undermine the security of communications products. This bill would make that the norm. Finally, the bill makes almost no concession to the massive disruption it would cause to how people use technology.
Its limitations are almost laughable: any device that has more than a gigabyte of storage and sells more than a million units a year could have to build a government-required backdoor if it is subject to five warrants or other requests, as would any operating system or communication system with more than a million active users. Clearly the bill’s authors are attempting to target iPhones, Android phones, WhatsApp, and other popular technologies, but the bill would also sweep in many specialized operating systems as well as consumer devices like Fitbits, Rokus, and so on. It would also establish a sort of X-Prize for “secure backdoors,” rewarding researchers who manage to find “solutions providing law enforcement access to encrypted data pursuant to legal process.” But it is not a lack of resources or monetary incentives that has kept anyone from squaring that particular circle. Instead, it is simply the impossibility of designing a system that reliably allows access by the “good guys” without catastrophically weakening the security of the system. These concerns only scratch the surface of what’s wrong with this bill. As with EARN IT, we should take every opportunity to tell members of Congress to leave the secure technology we rely on alone. TAKE ACTION: STOP THE ATTACK ON ENCRYPTION. Related Cases: EFF, ACLU v. DOJ – Facebook Messenger unsealing; Apple Challenges FBI: All Writs Act Order (CA)
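The expert consensus the bill brushes aside rests on a simple property of well-designed encryption: the key is derived and held on the user’s device, so the provider never possesses anything it could hand over. Below is a minimal Python sketch of that design, using a toy SHA-256 counter-mode stream cipher purely for illustration; it is not production cryptography, and all function names are our own.

```python
import hashlib
import secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # The key is derived on the user's device; the service never sees it.
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

def keystream(key: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    ks = keystream(derive_key(passphrase, salt), len(plaintext))
    return salt, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(passphrase: bytes, salt: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(derive_key(passphrase, salt), len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```

The service only ever stores the salt and the ciphertext; a decryption directive leaves it nothing to decrypt, which is precisely why the bill empowers the government to demand the system be redesigned instead.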

  • Victory! Boston Bans Government Use of Face Surveillance
    by Matthew Guariglia on June 24, 2020 at 10:36 pm

The push to minimize the government’s power to track and spy on people with surveillance technology has picked up steam as the Black-led movement against racism and police brutality continues to push politicians to reconsider the role policing plays in our lives. Thanks to the tireless efforts of ACLU-Massachusetts and activists and organizations around the country, including EFF, this week Boston joins the ranks of cities that have banned government use of face surveillance. Boston will become the tenth city in the United States to ban government use of face recognition technology (FRT). Last year, the state of California passed a three-year moratorium on the use of FRT on police body-worn and hand-held cameras. The Boston ordinance [PDF] declares: Whereas, Governments around the world are responding to the COVID-19 pandemic with an unprecedented use of surveillance tools, including face surveillance technology, despite public health and privacy experts agreeing that public trust is essential to an effective response to the pandemic; and Whereas, Facial surveillance technology has been proven to be less accurate for African American and AAPI faces, and racial bias in facial surveillance has the potential to harm communities of color who are already facing increased levels of surveillance and harassment; and Whereas, Several municipalities in Massachusetts, including Springfield, Somerville, Brookline, and Cambridge, have passed local legislation to ban face surveillance… Nathan Sheard, EFF’s Associate Director of Community Organizing, testified before the Boston City Council on the hazards of face surveillance. “Face surveillance is profoundly dangerous,” he told the council. “First, in tracking our faces, a unique marker that we cannot change, it invades our privacy.
Second, government use of the technology in public places will chill people from engaging in First Amendment-protected activities… Third, surveillance technologies have an unfair, disparate impact against people of color, immigrants, and other vulnerable populations.” EFF sent a letter [PDF] to the Boston City Council suggesting three improvements to the ordinance. The Council adopted all three. One closed a loophole that might have allowed police to ask third parties to collect face recognition evidence for them. Another change provides attorney fees to a person who brings a successful suit against the City for violating this ban on government use of face surveillance. Otherwise, only well-funded organizations or wealthy individuals would be able to enforce this critical new ban. It is no coincidence that this ban passed unanimously through Boston’s city council on the same day that the New York Times published a long piece on the case of Robert Julian-Borchak Williams, who was arrested by Detroit police after face recognition technology wrongly identified him as a suspect in a theft case. The problem isn’t just that studies have found face recognition staggeringly and disparately inaccurate when it comes to matching the faces of people of color. The larger concern is that in the hands of police, the technology poses a threat to vulnerable communities by virtue of the fact that police departments themselves disproportionately surveil and patrol those neighborhoods. EFF has long advocated for a nationwide ban on the government’s use of face surveillance. You can help by joining our fight, telling your elected officials to ban the technology, or getting your local representatives to introduce our model bill that will ban face surveillance in your town.

  • Brazil’s Fake News Bill Would Dismantle Crucial Rights Online and is on a Fast Track to Become Law
    by Veridiana Alimonti on June 24, 2020 at 8:12 pm

Update: A new draft text was released shortly before the voting set for June 25th. It doesn’t include blocking and data localization measures, but the surveillance and identification rules remain. Read more in the analysis of a coalition of digital rights groups in Brazil. Despite widespread complaints about its effects on free expression and privacy, the Brazilian Congress is moving forward in its attempts to hastily approve a “Fake News” bill. We’ve already reported on some of the most concerning issues in previous proposals, but the draft text released this week is even worse. It will hinder users’ access to social networks and applications, require the construction of massive databases of users’ real identities, and oblige companies to keep track of our private communications online. It creates demands that disregard key characteristics of the Internet, like end-to-end encryption and decentralized tool-building, hampering innovation, and could criminalize the online expression of political opinions. Although the initial bill arose as an attempt to address legitimate concerns about the spread of online disinformation, it has opened the door to arbitrary and unnecessary measures that strike at settled privacy and freedom of expression safeguards. You can join the hundreds of other protestors and organizations telling Brazil’s lawmakers not to approve this Fake News bill. Here’s how the latest proposals measure up: Providers Are Required to Retain the Chain of Forwarded Communications Social networks and any other Internet application that allows social interaction would be obliged to keep the chain of all communications that have been forwarded, whether distribution of the content was done maliciously or not. This is a massive data retention obligation which would affect millions of innocent users instead of only those investigated for an illegal act.
Although Brazil already has obligations for retaining specific communications metadata, the proposed rule goes much further. Piecing together a communication chain may reveal highly sensitive aspects of individuals, groups, and their interactions — even when none are actually involved in illegitimate activities. The data will end up as a constantly updated map of connections and relations between nearly every Brazilian Internet user: it will be ripe for abuse. Furthermore, this obligation disregards the way more decentralized communication architectures work. It assumes that application providers are always able to identify and distinguish forwarded and non-forwarded content, and also able to identify the origin of a forwarded message. In practice, this depends on the design of the service and on the relationship between applications and services. When the two are independent, it is common that the service provider will not be able to differentiate between forwarded and non-forwarded content, and that the application does not store the forwarding history except on the user’s device. This architectural separation is traditional in Internet communications, including web browsers, FTP clients, email, XMPP, file sharing, etc. All of them allow actions equivalent to forwarding content or copying and pasting it, where the client application and its functions are technically and legally independent from the service to which it connects. The obligation would also negatively impact open source applications, designed to let end users not only understand but also modify and adapt the functioning of local applications. It Compels Applications to Collect All Users’ IDs and Cell Phone Numbers The bill creates a general monitoring obligation on users’ identities, compelling Internet applications to require all users to give proof of identity through a national ID or passport, as well as their phone number.
This requirement runs counter to the principles and safeguards set out in the country’s data protection law, which is yet to enter into force. A vast database of identity cards, held by private actors, is in no way aligned with the standards of data minimization, purpose limitation, and prevention of risks in processing and storing personal data that Brazil’s data protection law represents. Current versions of the “Fake News” Bill do not even ensure the use of pseudonyms for Internet users. As we’ve said many times before, there are myriad reasons why individuals may wish to use a name other than the one they have on their IDs and were born with. Women rebuilding their lives despite the harassment of domestic violence abusers, activists and community leaders facing threats, investigative journalists carrying out sensitive research in online groups, and transgender users affirming their identities are just a few examples of the need for pseudonymity in a modern society. Under the new bill, users’ accounts would be linked to their cell phone numbers, allowing — and in some cases requiring — telecom service providers and Internet companies to track users even more closely. Anyone without a mobile number would be prevented from using any social network — if users’ numbers are disabled for any reason, their social media accounts would be suspended. In addition to privacy harms, the rule creates serious hurdles to speaking, learning, and sharing online. Censorship, Data Localization, and Blocking These proposals seriously curb the online expression of political opinions and could quickly lead to political persecution. The bill sets high fines for online sponsored content that mocks electoral candidates or questions election reliability.
Although the trustworthiness of elections is crucial for democracy, and disinformation attempts to disrupt it should be properly tackled, a broad interpretation of the bill would severely endanger the vital work of e-voting security researchers in preserving that trustworthiness and reliability. Electoral security researchers already face serious harassment in the region. Other new and vague criminal offenses set by the bill are prone to silencing legitimate critical speech and could criminalize users’ routine actions without proper consideration of malicious intent. The bill revives the disastrous idea of data localization. One of its provisions would force social networks to store user data in a special database that would be required to be hosted in Brazil. Data localization rules such as this can make data especially vulnerable to security threats and surveillance, while also imposing serious barriers to international trade and e-commerce. Finally, as the icing on the cake of a raft of provisions that disregard the Internet’s global nature, providers that fail to comply with the rules would be subject to a suspension penalty. Such suspensions are unjustifiable and disproportionate, curtailing the communications of millions of Brazilians and incentivizing applications to overcomply to the detriment of users’ privacy, security, and free expression. EFF has joined many other organizations across the world calling on the Brazilian parliament to reject the latest version of the bill and stop the fast-track mode that has been adopted. You can also take action against the “Fake News” bill now, with our Twitter campaign aimed at senators of the National Congress.
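To make concrete the earlier point about retained forwarding chains, here is a toy Python sketch (all names and record fields below are hypothetical) of how even minimal retention records assemble into a map of who is connected to whom:

```python
from collections import defaultdict

# Hypothetical retention records of the kind the bill would mandate:
# (message_id, sender, recipient) for every forwarded message.
records = [
    ("m1", "ana", "bruno"),
    ("m1", "bruno", "carla"),   # bruno forwards ana's message to carla
    ("m1", "carla", "diego"),
    ("m2", "carla", "elisa"),
]

# Collapsing the chains yields a social graph, innocent users included.
graph = defaultdict(set)
for _, sender, recipient in records:
    graph[sender].add(recipient)

# Tracing message m1 back to its originator takes a single pass.
chain_m1 = [(s, r) for mid, s, r in records if mid == "m1"]
```

Even these four records reconstruct every hop m1 took back to its originator and sketch the relationships among all five users, regardless of whether anyone acted maliciously; scaled to every forwarded message in the country, that is the "constantly updated map" described above.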

  • The House Has a Universal Fiber Broadband Plan We Should Get Behind
    by Ernesto Falcon on June 24, 2020 at 5:32 pm

America is behind on its transition to a 21st-century, fiber-connected Internet, with no plan for how to fix the problem. Until today. For the first time, legislation that would begin a national transition of everyone’s Internet connection to multi-gigabit-capable fiber optics has been introduced, led by Majority Whip James Clyburn, and it is likely heading toward a vote on the House floor as part of the overall COVID-19 recovery effort. After that, its future remains in the hands of the Senate. The Accessible, Affordable Internet for All Act (H.R. 7302) would create an $80 billion fiber infrastructure program run by a new Office of Internet Connectivity and Growth that would coordinate all federal infrastructure efforts with state governments. Such an ambitious program would have the United States match China’s efforts to build universal fiber, with the U.S. completing its transition just a few short years after China. Without this law, the transition would take decades. This would ensure that multi-gigabit innovations in applications and services can be created in the United States and used by all Americans. A universal fiber program would also allow next-generation Wi-Fi and 5G, as well as any future iterations of wireless technology, to have national coverage. But perhaps most importantly of all, the digital divide would be solved in its entirety and properly relegated to the history books. Key Provisions of the Legislation Explained Fiber is a future-proofed infrastructure that is vastly superior to the current copper and cable networks available to most people. EFF’s technical analysis shows that it will continue to leapfrog past cable, wireless, and other transmission mediums. Because fiber can be useful for decades, sustaining older, slower networks with government funds will actually cost more in the long term.
While the core of the legislation is a massive fiber infrastructure program, several provisions are worth highlighting, as they each play an integral part in achieving universal, competitive, affordable broadband networks. The bill emphasizes open-access fiber networks that would replicate the success in Utah, where people are getting a dozen options for low-priced gigabit and ten-gigabit services, including in rural markets. Building these types of networks would reverse the nearly 15-year slide into the giant monopolies and duopolies that most Americans experience when trying to get high-speed Internet access. Instead, you could get Internet access from small businesses, non-profits, and even your local schools and libraries. The bill would also free up local governments to pursue community broadband. Removing the state laws, advocated for by the major national ISPs, that ban local communities from building their own broadband networks is long overdue. The public sector has proven essential to the effort to build universal fiber: rural cooperatives, small cities, and townships are building fiber networks in areas the private sector skipped long ago. While it seems small, the bill also fixes a long-standing problem of Internet access in the United States by updating what we mean by broadband. Today’s federal definition of broadband was established in 2015 and stands at 25 megabits per second download and 3 megabits per second upload. This 25/3 standard makes it appear as if there are more broadband options than there truly are, hiding the monopoly. And it is a standard that can’t handle what we actually need in the 21st century—the ability to work and study at home, for example. To fix this, the bill would establish that communities lacking 25/25 broadband are “unserved” and set a minimum standard of 100/100 megabits per second for federally funded projects.
These higher metrics are what make this a fiber infrastructure bill, as older legacy networks such as DSL cannot effectively deliver these speeds. Furthermore, the bill makes Internet access a right during COVID-19: it incorporates key provisions of the House-passed HEROES Act ensuring that if you lose Internet access due to COVID-19, the government will make sure you retain it. Given that broadband access is an essential service now more than ever, this type of emergency connectivity assistance is important during the pandemic. A Plan Is Finally Here, but Congress Must Be Forced to Act The big ISPs, which fail to deliver universal access but enjoy comfortable monopolies and charge prices 200% to 300% above competitive rates, will resist this effort. Even where delivering fiber would be profitable, the national ISPs have chosen not to build it, preferring short-term profits. A massive infrastructure program, the kind that helped countries like South Korea become global leaders in broadband, isn’t just desperately needed in the United States; it is a requirement. No other country on planet Earth has made progress in delivering universal fiber without an infrastructure policy of this type. So it is on each of us to commit to contacting our Member of the House and two Senators to demand they vote for universal fiber legislation as envisioned by the Accessible, Affordable Internet for All Act. Ending the digital divide for everyone without exception should be a key piece of Congress’s economic recovery effort. The hardships and pain so many people are facing across the country due to the failure of our past telecom policies to guarantee equal access should be the reason we pass a law to solve those problems. We deserve universal, affordable, high-speed Internet access. Tell Congress to Vote for the Accessible, Affordable Internet for All Act.

  • Groundbreaking Community-Building Technologists Join EFF’s Board of Directors
    by Rebecca Jeschke on June 24, 2020 at 1:42 pm

EFF Is Proud to Welcome Anil Dash and James Vasile. San Francisco – The Electronic Frontier Foundation (EFF) is honored to announce the two newest members of its Board of Directors: tech executive and advocate Anil Dash and free and open source software advocate James Vasile. Both Dash and Vasile have spent their careers blending technology and community, and both are dedicated to using technology to make the world better, online and off. Anil Dash is the CEO of Glitch, a coding community where millions of creators collaborate and create apps together. He has been a prominent and welcome voice advocating for a more humane, inclusive, and ethical technology industry through his work as an entrepreneur, activist, and writer. Dash also hosts Function, a podcast exploring how tech is shaping culture and society. Dash was an advisor to the Obama White House’s Office of Digital Strategy, and today advises major startups and non-profits including Medium, DonorsChoose, and Project Include. He also serves as a board member for Stack Overflow, the world’s largest community for computer programmers; the Data & Society Research Institute, which researches the cutting edge of tech’s impact on society; and the Lower East Side Girls Club, which serves girls and families in need in New York City. “EFF has risen to the moment—fighting for the most vulnerable and prioritizing those with the most to lose,” said Dash. “Technology can be a force for good in the world, and those of us who create it can be part of the fight for making the world more just. I want to lend my skills and voice to the essential efforts that EFF leads in the world.” James Vasile is a partner at Open Tech Strategies, a company that offers advice and services to organizations that make strategic use of free and open source software. He has 20 years’ experience as a user, developer, advocate, and advisor.
Vasile was also the founding director of the Open Internet Tools Project, which was the launching pad for community-based projects like the Circumvention Tech Festival, which later became the influential Internet Freedom Festival, and Techno-Activism Third Mondays, a meetup that gathered people in over 20 cities around the world every month. He serves on the boards of Brave New Software, which makes the Lantern censorship circumvention tool, downloaded 100 million times around the world, and Horizons Media, which supports the study of the artistic and scientific uses of psychedelics. Previously, Vasile was a Senior Fellow at the Software Freedom Law Center, a director of the FreedomBox Foundation, and a founding board member of Open Source Matters, the non-profit behind Joomla. “The challenges facing technologists and their users grow more complex every day, so we need to shore up our communities and fight for digital rights together,” said Vasile. “For example, an EFF project like Let’s Encrypt delivers first-class technical work, but that is not why it succeeds. Its outreach and coalition building have delivered its benefits to many more people than the technical work alone could reach. When we help people connect to each other—by providing enabling technology and removing barriers to free communication—we build communities that tackle our biggest problems.” “I am so proud that these two stellar technologists and community-builders are joining our Board of Directors,” said EFF Executive Director Cindy Cohn. “In the midst of this difficult time, we are constantly reminded of how important community is, and how much we need technology to maintain it.
Both Anil and James put their values to work every day, and we are eager for their insight and support.” In addition to Dash and Vasile, EFF’s Board of Directors includes Chair Pamela Samuelson, Vice Chair Brian Behlendorf, Sarah Deutsch, David Farber, John Gilmore, Brewster Kahle, Bruce Schneier, Gigi Sohn, Shari Steele, and Jonathan Zittrain. Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, [email protected]

  • Pride Resources for Activism in Digital and Physical Spaces
    by Shirin Mori on June 23, 2020 at 10:10 pm

In June, people honor one of the key events that ushered in the era of LGBTQIA+ Pride—Stonewall—during which Black and Brown trans and queer people led a riot in direct response to police brutality. This year, Pride occurs during national and global protests over the continued murder of Black people, and highlights disparities around race, gender, ability, and identity, with people at these intersections experiencing particular stress, such as the unprecedented dangers facing Black trans women. In this moment of solidarity with Black, trans, and disabled activists demanding justice for the killings of Tony McDade, Layleen Cubilette-Polanco, George Floyd, Breonna Taylor, and others from historically targeted communities, we are sharing resources to help activists and others protect their digital security. While there are known and established practices for protecting people against physical threats and harassment during strictly in-person gatherings, digital gatherings involve a different set of security considerations. This guide offers an overview and further reading that, we hope, will help activists think about how to adapt their work as digital considerations increasingly touch physical spaces. This guide is divided into three sections: Considerations for Physical Spaces; Considerations for Networks; and Considerations for Digital Spaces. Preface: Please note that the authors of this post come from primarily U.S.-based experiences and language. This blogpost is by no means comprehensive, and digital security risks (such as available surveillance equipment), and their mitigation, can vary depending on your location and other contexts. COVID-19 responses will also be relevant; we do not cover physical security or health considerations, such as caring for people, wearing masks, and social distancing. Please consult health organizations like the CDC, as well as Black disability-centered and trans-centered resources, for tips on being conscientious.
Additionally, your individual assessment and approach may be highly dependent on your unique risks. For example, for some people, being identified during their activism is less of a risk, while for others, being identified would cause significant stress and/or put them at risk of reprisal from employers, co-workers, family, or strangers. Identification and Harassment One of the consistent themes we discuss in our digital security work is how to protect your identity—whether that’s using a different name, preventing association while engaging in protests, deliberately complicating the data used to track you, or taking and sharing photos safely. Unfortunately, folks in targeted communities are painfully aware of the need to protect their identities from threats like doxxing (publishing private or identifying information about you on the Internet); surveillance by law enforcement; corporate surveillance; and digital, financial, and physical harassment. Let’s imagine one journey: you attend a protest with friends and take photos, and then you post the photos to a platform like Twitter, using a hashtag to promote their visibility. What security issues might you consider? Security for Physical Spaces, such as Protests and Rallies In order to assess how best to defend against surveillance at public gatherings, it’s important to know what surveillance tools are being used. At EFF, we spend a lot of time writing about and researching law enforcement surveillance. For information on what these surveillance technologies look like, check out our post on Identifying Visible (and Invisible) Surveillance at Protests.
Here’s a short overview of the problems presented by relevant technologies during a protest, and some ways people mitigate them: Automated License Plate Readers, or ALPRs [photo by Mike Katz-Lacabe (CC BY)]—ALPRs are computer-controlled cameras, frequently mounted on police cars and street poles, that are used to scan and track the license plates of vehicles present in a given area. They record the plate, time, date, and location for every vehicle that passes the camera. This is why many people opt not to drive a car to attend large in-person gatherings. You can see our Street-Level Surveillance resource for a primer on how ALPRs work. Image Recognition Technologies—facial recognition, iris recognition, and tattoo recognition technologies are used by law enforcement to analyze large image sets of people in public spaces (such as from networked camera systems) as well as on social media accounts. This is a reason why some people prefer to cover their faces and tattoos, or to limit identifiers, as well as one reason why people may have changes of clothing. Biased algorithms in image recognition technologies further exacerbate the experience of already-surveilled communities, such as the use of gender-identification facial recognition. For more information on image recognition technologies, check out our educational resource on Street-Level Surveillance [facial recognition image source: Arizona Department of Transportation]. Social media scraping and image recognition technologies are also a reason why citizen photographers ask for explicit consent when taking photos of protesters, and photograph people in less identifying ways (like from behind, or in a crowd shot). For specific scenarios, like ethical documentation of police brutality, read EFF’s post on First Amendment protections in filming the police and tips for journalists, as well as these tips from WITNESS.
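To see why the plate-time-date-location records described above are more than isolated snapshots, here is a small Python sketch (the plates, times, and locations are hypothetical) of how scattered ALPR hits reassemble into a timeline of one driver's movements:

```python
from datetime import datetime

# Hypothetical ALPR hits: each camera logs the plate, time, and
# location of every vehicle that passes, as described above.
hits = [
    ("7ABC123", "2020-06-20 09:12", "Main St & 1st Ave"),
    ("7XYZ999", "2020-06-20 09:30", "Main St & 1st Ave"),
    ("7ABC123", "2020-06-20 09:41", "Park St garage"),
    ("7ABC123", "2020-06-20 11:05", "Civic Center Plaza"),
]

def track(plate: str) -> list[tuple[str, str]]:
    """Reassemble one vehicle's movements from scattered camera hits."""
    rows = [(t, loc) for p, t, loc in hits if p == plate]
    return sorted(rows, key=lambda r: datetime.strptime(r[0], "%Y-%m-%d %H:%M"))

# A chronological timeline for a single plate falls out of a sort.
timeline = track("7ABC123")
```

A simple filter and sort turn camera logs collected for "everyone" into a movement history for anyone, which is the core of the concern about driving to a gathering.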
Credit and Debit Cards—credit cards are persistent long-term identifiers, which is why some people prefer to carry cash when attending a nearby gathering. This information can also be connected to other identifiers, like tickets for public transit. Phones—One big consideration is the little computer we carry with us: our mobile devices. For protests, phones can be an important tool for documentation, communication, and broadcasting; however, without proper protections, they can also be used by different groups—ranging from third parties, to law enforcement, to app companies—to track people. Mobile phone surveillance has a few overlapping areas of concern, relating to the hardware on the physical device, whether features like Bluetooth, Wi-Fi, and Location Services are on, what types of encryption are used, and so on. People make different choices based on who they believe may be surveilling them. International Mobile Subscriber Identity (IMSI) numbers are unique identifiers that point to the subscriber themselves, and they are shared with your cell provider every time you connect to a cell tower. As you move, your phone sends out pings to nearby towers to request information about the state of the network. Your phone carrier can use this information to track your location (to varying degrees of accuracy). There’s also an added risk—devices called cell-site simulators pretend to be network towers, and are employed by law enforcement to identify IMSIs in a given geographic area. For a primer on mobile device identifiers, check out this section of our illustrated third-party tracking white paper. For more information on how cell-site simulators work, check out EFF’s post on Cell Phone Surveillance at Protests or our illustrated white paper. It’s for these reasons that many people opt to leave their phones at home, to bring a burner device, or to set their phones to airplane mode.
Yet for many people, a phone may be their primary technology. Your choices depend on assessing your risks and needs, and making the choice that’s right for you. While we’re on the topic of mobile devices, there are a few other risks to be aware of: Location history features make a record of your device’s comings and goings and are susceptible to law enforcement requests—which is why people choose to turn off Location Services on their phones, as well as why many people choose to disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest. Mobile devices can be taken—which is why many people choose to encrypt and back up their phones before attending a protest. Check out our post on Protecting Your Privacy If Your Phone Is Taken Away for more information on how to protect your device. Mobile devices can be forcibly unlocked—which is why many people choose to password-protect their phones, rather than using pattern-based swiping, fingerprint unlock, or facial unlock. SMS and phone calls can be intercepted—which is why many people use end-to-end encryption for their communication, like Signal. For some people, giving away their phone number is an uncomfortable prospect, so they might use alternative workflows or apps like Wire. For more information on considerations for end-to-end encrypted phone apps, check out our lesson plan from the Security Education Companion. Unfortunately, there are significant examples of phones being seized. Many people choose to simply assume that their phone will be taken and unlocked. They prepare for that event by, for example, opting to log out of accounts or uninstall their apps, disable app notifications, take photos without unlocking their device, and set their photos to back up automatically. These considerations are especially important for folks who may be concerned about their app activity in precarious legal contexts.
Law enforcement has been known to use the presence of an app to target people (e.g., gay dating apps), and any app content officers see through notifications or within the app can be used as grounds for greater scrutiny or as evidence of alleged illegal activity (e.g., any client-screening or accounting information a sex worker stores on their phone). You can read more in our Surveillance Self-Defense guide, The Problem with Mobile Phones. For specific considerations on digital security during a protest, please read our comprehensive Surveillance Self-Defense guide on Attending a Protest for in-depth tips, as well as our companion post to this guide on protests during COVID-19.

Your Device, Your Network

With social movements, people may encounter censorship through passive or intentional network disruption. In various countries, governments have cracked down on people’s use of mobile phones by slowing or shutting down the internet during large gatherings, though it can be difficult to figure out how a network is being disrupted while it is happening. In the US, internet shutdowns are less of a possibility: network overload is more probable, due to the abnormally large number of people in a concentrated area connecting to nearby cell service at once. Regardless of the cause, the effect is that network disruption inhibits quick information sharing. Given this risk, it may be helpful to create a plan for backing up images and videos and sharing them later, on a faster connection. For more information on how censorship and network connectivity work, check out our Surveillance Self-Defense guide on Understanding and Circumventing Network Censorship, and EFF’s post on cell phone surveillance at protests. In other cases, especially in LGBTQIA+ communities of color and sex worker communities, censorship may occur on the platforms they use. This can take the form of censoring social media posts and hashtags, as documented in our Online Censorship project.
Additionally, some communities worry that platforms may “shadowban” content that’s deemed illicit or inappropriate. Though there is little evidence of deliberate platform shadowbanning, a Facebook patent has heightened concerns. In any event, there is ample evidence of private platform censorship as a general matter. In the next section, we’ll explore considerations for digital spaces, like sharing images from a protest, or creating digital spaces to come together.

Considerations for Digital Spaces, such as Posting on Social Media and Digital Gatherings

If you are concerned about protecting your digital information while participating in digital spaces and online activism, consider the following issues.

The Data You Include (or Don’t Mean to Include)—Posting content can be risky. For example, you might accidentally out someone’s participation in a protest or online gathering by mere association. An additional thing to consider is that LGBTQIA+-specific digital and physical gatherings not only provide space for those who are out, but also space for those who are not yet out to their families, workplaces, and so on. That is why many people are careful about their digital associations, such as being mindful of which account they’re logged into when RSVPing to events, or being careful not to be tagged in photos and posts. In many contexts (and particularly for immigrants and people who have to deal with punitive laws in their countries), it could be quite dangerous for someone to be recognized in photos with LGBTQIA+ symbols. Be mindful of the data you share, particularly as it relates to other people; it’s good practice to ask for consent when including other people in your posts. This is a reason citizen photographers obscure people’s faces when posting pictures of protests online (for example, using tools such as Signal’s in-app camera blur feature).
When posting, be mindful of metadata—additional information included along with your data that provides more details about your situation. The metadata of the videos and photos you post, as well as virtual “check-ins,” can include sensitive information such as location, the time the photo or video was taken, the equipment used, and so on. Make sure location sharing is disabled, and that you are not including sensitive metadata when posting videos from scenes like protests. For a good example of the EXIF data that accompanies photos and videos, check out Freedom of the Press Foundation’s primer on media metadata and their piece on redacting photos.

Public, Private—Privacy settings on your social media accounts can be helpful for controlling an account’s visibility. Switching an account to a private mode, limiting comments, or using platform-provided tools such as blocking and reporting mechanisms can provide some protection. You can follow Surveillance Self-Defense’s tips for protecting yourself on social media networks. A tricky consideration is that your account’s visibility may increase as you engage in activism. Keep this in mind when sharing content that features other people—they may have security considerations that are not immediately apparent, and what might be a safe activity for you may open someone else up to new and unnecessary targeting. For example, asking for consent before mentioning or tagging a friend in a public post is a considerate practice.

Being Sensitive to Misinformation—Unfortunately, it’s a pressing and difficult social challenge to recognize misinformation tactics, such as videos or sock-puppet accounts that use AI-generated faces to seem legitimate, or massive harassment campaigns targeting people based on their identities. A number of social media services have content moderation systems; however, as marginalized communities are particularly aware, these systems can be weaponized by those with ill intent.
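To make the photo-metadata concern concrete: EXIF data travels inside a JPEG file’s APP1 marker segment, so it can be removed by rewriting the file without that segment. The sketch below is a simplified illustration using only the Python standard library, not a vetted redaction tool—the function name is ours, it drops every APP1 segment (which also removes XMP data), and it ignores rarer JPEG edge cases such as fill bytes between markers.

```python
import struct

def strip_app1(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF/XMP) segments removed.

    Simplified sketch: assumes a well-formed header of marker segments
    (each: 0xFF, marker byte, 2-byte big-endian length that includes itself)
    up to the Start of Scan marker.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Unexpected byte: copy the remainder verbatim and stop parsing.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: image data follows; copy the rest.
            out += jpeg[i:]
            break
        length = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:  # Keep every segment except APP1, where EXIF lives.
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, re-encoding a screenshot of the photo or using a dedicated scrubbing tool is simpler and safer; the point of the sketch is only to show that metadata is ordinary bytes in the file, separate from the image itself.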
Misinformation is a shifting space—Data & Society has research on identifying and mitigating misinformation, which can be helpful for folks looking to recognize patterns in how misinformation tactics are used.

The Names You Use and the Spaces You’re In—Many LGBTQIA+ creators are already skilled at managing identities in physical spaces. To mitigate harassment and doxxing, it’s helpful to compartmentalize digital identities as well. For an excellent primer on LGBTQIA+ considerations for managing these identities, read Norman Shamas’s post on the topic. This might mean taking stock of the different services you use, which usernames and passwords are used for these services, and where your names are mentioned. A task that’s particularly tedious is cutting the tie from a legal birth name to a chosen name. For trans folks, removing all associations between a chosen name and a dead name is especially fraught. Legal name changes and their difficulties—such as the legal requirement to publish a name change—are one component; data brokers are another that makes this task incredibly daunting. Whether you’re doing this yourself or with friends, or considering paid options for opting out of data broker websites, journalist Yael Grauer maintains an updated document of resources. It’s incredibly stressful to be on the receiving end of harassment, which is why some people choose to get the support of friends who can help them with moderating comments and making appropriate choices. Where possible, consider taking steps to mitigate online harassment as a pre-emptive measure (for example, as part of your plan before going to a protest), rather than as a reactive step. For more information on mitigating online harassment, check out Access Now’s guide on Self-Doxxing, or Feminist Frequency’s guide, Speak Up and Stay Safe(r).
One opportunistic way people gain access to others’ accounts is to take a password leaked in a major breach and try it on other services—this is why it’s important to use a unique, random, and long password for every service. For tips on how to make stronger passwords, as well as how to keep track of all these unique passwords and accounts, check out our Surveillance Self-Defense guide on password managers.

The Tools You Use—As part of the work of compartmentalizing digital identities to mitigate harassment, you may want to consider the specific use cases for your devices, online accounts, and the browsers you use to access those accounts. For example, say you have a performance persona that’s geared toward a wide audience, and a private profile you use among friends. You might be careful to avoid cross-contamination of data between these profiles, like not reusing photos. Someone looking to compartmentalize their social media identities might only access their public performance account using a VPN and a specific browser, clearing the session afterwards, and use a different browser on a different device on their regular network for their personal private account.

Adding Barriers—Mitigating against someone who might be intent on targeting you with harassment is exhausting, and can feel like trying to outrun a bear. You can add barriers to make it harder to disrupt your online life, such as by adding passwords to ordinarily password-less services. For example, if you are running a large video call, consider enabling security settings and creating a process to remove opportunistic people disrupting the call. We have a guide on hardening security settings in Zoom, which might be useful for folks holding large virtual celebrations or actions on a video conferencing platform. Additionally, if two-factor authentication is available, use it, as it provides extra protection for your accounts.
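App-based authenticators generally implement TOTP, the time-based one-time password algorithm standardized in RFC 6238: a shared secret plus the current time yields a short-lived code, so there is nothing for an attacker to intercept in transit the way SMS codes can be. A minimal sketch using only the Python standard library (the function name and defaults are ours; real authenticator apps also handle clock drift and secure secret storage):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password: HOTP over a 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                      # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from the secret and the clock, the server can verify a code without it ever being sent ahead of time—which is also why keeping the initial secret (the QR code you scan) private matters so much.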
Two-factor authentication combines something you know with something you have: an adversary doesn’t just need to guess or obtain your password, but would also have to physically obtain something to get access to your account. Where possible, use app-based or hardware-based two-factor authentication rather than SMS-based. Learn more about how to enable two-factor authentication at Surveillance Self-Defense, check out an overview of which services offer two-factor authentication, or follow our Security Education Companion one-page handout on the topic.

Getting Help—For activists, journalists, bloggers, and civil society members around the world, Access Now has a 24/7 Helpline that provides digital security advice and recommendations. If you’re based in the US and are participating in protests, EFF has legal referral services for protesters. We’ll be putting together more posts in acknowledgment of Pride this month. Join us for the Pride edition of our At Home with EFF livestream on Thursday, June 25th, at 12pm PDT. We’d like to thank Slammer Musuta of Pumzi Code, Norman “grumpy pants” Shamas, Sarah Aoun at the Open Technology Fund, Sage Cheng at Access Now, Eriol Fox, and Martin Shelton at Freedom of the Press Foundation for their contributions and edits.

  • Victory: Indiana Supreme Court Rules that Police Can’t Force Smartphone User to Unlock Her Phone
    by Andrew Crocker on June 23, 2020 at 9:56 pm

    In courts across the country, EFF has been arguing that the police cannot constitutionally require you to unlock your phone or give them your password, and today the Indiana Supreme Court issued a strong opinion agreeing with us. In the case, Seo v. State, the court found that the Fifth Amendment privilege against self-incrimination protected a woman against unlocking her phone because complying with the order was a form of “testimony” under the Fifth Amendment. Indiana joins Pennsylvania, which ruled strongly in favor of the Fifth Amendment privilege in a compelled decryption case last year. Meanwhile, state supreme courts in New Jersey and Oregon are also considering this issue.  In Seo, the defendant reported to law enforcement outside of Indianapolis that she had been the victim of a rape and allowed a detective to examine her iPhone for evidence. But the state never filed charges against Seo’s alleged rapist, identified as “D.S.” Instead, the detective suspected that Seo was harassing D.S. with spoofed calls and texts, and she was ultimately arrested and charged with felony stalking. The state not only sought a search warrant to go through Seo’s phone, but a court order to force her to unlock it. Seo refused, invoking her Fifth Amendment rights. The trial court held her in contempt, but an intermediate appeals court reversed. In an amicus brief on behalf of EFF and ACLU and at oral argument in the Indiana Supreme Court, we explained that the compelled recollection and use of passwords to encrypted devices should be viewed as a modern form of “testimonial” communications, which are protected by the Fifth Amendment privilege. Although some courts have struggled with the concept of testimony in the context of compelled decryption, a 1957 U.S. 
Supreme Court case defines it as anything that requires a person to disclose “the contents of his own mind.” It’s also clear that nonverbal acts can be testimonial, such as being forced to respond truthfully to police questioning with a “nod or headshake,” or to produce a gun that police believe was used in a crime. And in a 1990 case, the U.S. Supreme Court found that a motorist suspected of drunk driving couldn’t be forced to tell police the date of his sixth birthday, even though officers clearly knew the answer and were simply trying to obtain evidence of his intoxication.  The Indiana Supreme Court agreed, writing that unlocking a phone “communicates a breadth of factual information,” since it allows the government to infer that the suspect knows the password to the device and thus possessed the files on the phone. This gives “the State information it did not previously know—precisely what the privilege against self-incrimination is designed to prevent.” In addition to the question of “testimony,” however, courts in compelled decryption cases have struggled with Fisher v. United States, a 1976 U.S. Supreme Court case that introduced the concept of a “foregone conclusion.” Fisher involved a subpoena for an individual’s tax documents, where the government could demonstrate that it already knew all of the information it would otherwise learn from a response to the subpoena. In other words, it was a “foregone conclusion” that the specific documents the government sought existed, were authentic, and belonged to the individual. Although the Supreme Court has never again relied on this foregone conclusion rationale, the government has built it into a full-blown “doctrine.” State and federal prosecutors have invoked it in nearly every forced decryption case to date. 
In Seo, the State argued that all that compelling the defendant to unlock her phone would reveal is that she knows her own passcode, which would be a foregone conclusion once it “has proven that the phone belongs to her.” In our amicus brief, we argued that this would be a dangerous rule for the Indiana Supreme Court to adopt. If all the government has to do to get you to unlock your phone is to show you know the password, it would have immense leverage to do so in any case where it encounters encryption. The Fifth Amendment is intended to avoid putting people to a “cruel trilemma”: self-incriminate, lie about knowing the password, or risk being held in contempt for refusing to cooperate. Instead, it’s clear from Fisher and later Supreme Court cases that the foregone conclusion rationale is very narrow. The Court has applied it in Fisher, a case involving business records, and only where the testimonial communication at issue was the act of providing specified documents. The Court has made clear there is no foregone conclusion exception where a person is required to use the contents of their mind, even in responding to a more open-ended document subpoena. So there should be no exception to the Fifth Amendment when the government compels disclosure or use of a passcode to unlock and decrypt a digital device.   In its opinion, the Indiana Supreme Court largely agreed. It rejected the state’s argument that it could invoke the foregone conclusion rationale if it could show that the defendant knew her password. 
Instead, it held that the state was “fishing for incriminating evidence” without any knowledge of what was on her phone, and that forcing her to unlock her phone under these circumstances would “sound the death knell for a constitutional protection against compelled self-incrimination in the digital age.” Although that resolved the case, the court also included a lengthy discussion of why the foregone conclusion rationale should probably never apply to compelled decryption cases. It noted that smartphones contain “far more private information than a personal diary or an individual tax return ever could,” a fact that has led the U.S. Supreme Court to reject the application of pre-digital caselaw to government searches of phones. The Indiana court wrote that applying Fisher’s foregone conclusion rationale “would mean expanding a decades-old and narrowly defined legal exception to dynamically developing technology that was in its infancy just a decade ago.” Finally, the court noted that police have many tools to investigate users of encrypted devices without compromising users’ constitutional rights. In light of these tools, compelling a user to unlock a phone would “tip the scales too far in the State’s favor, resulting in a seismic erosion of the Fifth Amendment’s privilege against self-incrimination.” We’re gratified by the ruling, and we’re watching for courts in New Jersey, Oregon and elsewhere to continue the trend of protecting against compelled decryption.

  • OTF’s Work Is Vital for a Free and Open Internet
    by Jillian C. York on June 23, 2020 at 8:48 pm

    Update 7/2: We have edited this post to add information about OTF’s Localization Lab. Keeping the internet open, free, and secure requires eternal vigilance and the constant cooperation of freedom defenders all over the web and the world. Over the past eight years, the Open Technology Fund (OTF) has fostered a global community and provided support—both monetary and in-kind—to more than four hundred projects that seek to combat censorship and repressive surveillance, enabling more than two billion people in over 60 countries to more safely access the open Internet and advocate for democracy. OTF has earned trust over the years through its open source ethos, transparency, and a commitment to independence from its funder, the US Agency for Global Media (USAGM), which receives its funding through Congressional appropriations. In the past week, USAGM has removed OTF’s leadership and independent expert board, prompting a number of organizations and individuals to call into question OTF’s ability to continue its work and maintain trust among the various communities it serves. USAGM’s new leadership has been lobbied to redirect funding for OTF’s open source projects to a new set of closed-source tools, leaving many well-established tools in the lurch. Why OTF Matters EFF has maintained a strong relationship with OTF since its inception. Several of our staff members serve or have served on its Advisory Council, and OTF’s annual summits have provided crucial links between EFF and the international democracy tech community. OTF’s support has been vital to the development of EFF’s software projects and policy initiatives. Guidance and funding from OTF have been foundational to Certbot, helping the operators of tens of millions of websites use EFF’s tool to generate and install Let’s Encrypt certificates. The OTF-sponsored fellowship for Wafa Ben-Hassine produced impactful research and policy analysis about how Arab governments repress online speech. 
OTF’s Localization Lab has provided translations for Surveillance Self-Defense and HTTPS Everywhere, helping bring EFF’s work to a global audience. OTF’s funding is focused on tools to help individuals living under repressive governments. For example, OTF-funded circumvention technologies including Lantern and Wireguard are widely used by people around the world. OTF also incubated and assisted in the initial development of the Signal Protocol, the encryption back-end used by both Signal and WhatsApp. By sponsoring Let’s Encrypt’s implementation of multi-perspective validation, OTF helped protect the 227 million sites using Let’s Encrypt from BGP attacks, a favorite technique of nation-states that hijack websites for censorship and propaganda purposes. While these tools are designed for users living under repressive governments, they are used by individuals and groups all over the world, and benefit movements as diverse as Hong Kong’s Democracy movement, the movement for Black lives, and LGBTQ+ rights defenders.  OTF requires public, verifiable security audits for all of its open-source software grantees. These audits greatly reduce risk for the vulnerable people who use OTF-funded technology. Perhaps more importantly, they are a necessary step in creating trust between US-funded software and foreign activists in repressive regimes.  Without that trust, it is difficult to ask people to risk their lives on OTF’s work. Help Us #SaveInternetFreedom It is not just OTF that is under threat, but the entire ecosystem of open source, secure technologies—and the global community that builds those tools. 
We urge you to join EFF and more than 400 other organizations in signing the open letter, which asks members of Congress to: Require USAGM to honor existing FY2019 and FY2020 spending plans to support the Open Technology Fund; Require all U.S.-Government internet freedom funds to be awarded via an open, fair, competitive, and evidence-based decision process; Require all internet freedom technologies supported with US-Government funds to remain fully open-source in perpetuity; Require regular security audits for all internet freedom technologies supported with US-Government funds; and Pass the Open Technology Fund Authorization Act. EFF is proud to join the voices of hundreds of organizations and individuals across the globe calling on USAGM and OTF’s board to recommit to the value of open source technology, robust security audits, and support for global Internet freedom. These core values—which have been a mainstay of OTF’s philanthropy—are vital to uplifting the voices of billions of technology users facing repression all over the world.

  • Apple’s Response to HEY Showcases What’s Most Broken About the Apple App Store
    by rainey Reitman on June 22, 2020 at 11:47 pm

Basecamp’s new paid email service, HEY, has been making headlines recently in a very public fight with Apple over the App Store terms of service. Just as the service was launching, the HEY developers found that a new release of the app—which included important security fixes—was held up over a purported violation of the App Store rules. Specifically, Developer Rule 3.1.1, which states that “If you want to unlock features or functionality within your app, (by way of example: subscriptions, in-game currencies, game levels, access to premium content, or unlocking a full version), you must use in-app purchase.” Apple alleged that HEY had violated this rule by pushing users to pay for its email service outside of the crystal prison of the App Store—a decision Basecamp CTO David Heinemeier Hansson publicly criticized on Twitter. But many apps—like Netflix and Amazon’s Kindle—follow this same payment pathway, with users setting up accounts directly through a website and then logging into those paid accounts via an app in the Apple App Store. And it’s no wonder that tech companies balk at the idea of following the App Store’s payment pathway—as the BBC reports, Apple takes a cut of all in-app payments, often as much as 30%. HEY announced that it had found a way forward: negative publicity and public pressure pushed Apple to keep HEY in the App Store, at least for now. Apple agreed to allow the new version of HEY with its security fixes, and HEY is seeking to release a new version of the app that it hopes will be more palatable to Apple long-term. But one-off exceptions don’t address the systemic problems with the App Store, and not every app developer can launch a high-profile publicity campaign to shame Apple into doing the right thing. HEY’s fight with Apple highlights what’s most broken about the App Store: our mobile technology environment is dictated by two tech behemoths that set the rules of innovation for billions of people.
And while the current system may benefit Apple, Google, and a small number of early-entrant technology companies, everyday technology users and small startups end up with the short end of the stick. Apple’s policies are opaque, arbitrarily applied, and byzantine. The company prioritizes its own apps in search results, a Wall Street Journal analysis found, so that users searching for “music,” “audiobooks,” or other categories will be shunted toward Apple products. Apple has also restricted and removed apps designed to help families limit the amount of time they spend using Apple products, according to an analysis by the New York Times and Sensor Tower. Apple also has a history of censoring apps in country-specific App Stores, including removing Chinese language podcasts from China’s App Store. More recently, two podcasting apps were removed from China’s App Store. The creators of one of those apps said, “The very small amount of warning we were given between there being a problem, and our app being completely removed from the Chinese app store was quite alarming.” It’s no wonder that the European Commission has launched an antitrust investigation into the App Store, and United States Representative David Cicilline (D-RI) described the App Store fees as “highway robbery, basically,” saying they are “crushing small developers who simply can’t survive with those kinds of payments.” Some may be celebrating HEY’s recent victory in getting the newest version of the app into the App Store, but we aren’t. For every app like HEY that can mount a publicity campaign and carve out an exception for itself, there are untold other app developers who tolerate exorbitant fees or have their apps banned with little notice. Exceptions to Apple’s policies don’t fix the root problem: a broken marketplace that is financially incentivized to limit user choice and competition. What might a real solution to this problem look like?
The first step is transparency: Apple should be up-front about its policies and appeals systems, so that developers can act with certainty. Transparency also includes a public accounting of how and why apps are banned from the App Store, released publicly so that scholars, lawmakers, and the general public can be informed about how and when tech users are denied innovative products. Second, policies must be fairly applied to all apps in the App Store; companies with big followings like HEY and Netflix shouldn’t get special deals that patch rather than fix the system. Third, Apple needs to commit to allowing updates that only fix security bugs to apps already in the store—we all need to know that our security is Apple’s top priority. Finally, Apple needs to revisit its pricing model and remove the requirement for every app to use Apple’s in-app payments. If app developers can choose whether or not to use Apple’s payment services, that competition will discipline Apple, so that it prices its service at a level that appeals to developers. Authorities are taking renewed interest in all forms of antimonopoly enforcement, and EFF is glad to see these remedies considered for the mobile OS duopoly. Apple’s high-handed and arbitrary treatment of software vendors didn’t occur in a vacuum: rather, it’s the toxic result of growth-through-acquisition, Digital Rights Management restrictions that prevent competing App Stores, and the creation of vertically integrated monopolies—where Apple provides a store to sell apps and then competes with the companies that must use that store. Companies that can use the law to accumulate market power create abusive, dysfunctional systems that limit public choice and stifle competition. And if Apple continues to leverage its total control over the software that runs on Apple devices to extract arbitrary tolls and impose censorship on developers, antitrust authorities have the power to order Apple to allow competing App Stores.
Walled gardens don’t have customers, they have hostages, and evenhandedness is the very least we should demand of those who take away our choice. Apple doesn’t have the best App Store for iOS users; it has the only one. Far better would be competing App Stores with their own policies, where software authors and their customers get to decide for themselves who’s earned their trust.

  • California Coalition Calls for Moratorium on State Gang Database
    by Dave Maass on June 22, 2020 at 9:38 pm

    EFF has joined a coalition of civil rights, immigration, and criminal justice reform organizations to demand the California Department of Justice (CADOJ) place an immediate moratorium on the use of the state’s gang database, also known as CalGang.  For years, EFF has stood beside many of these organizations to advocate for reforms to the CalGang system, which has tarnished the records of countless Californians—largely Black and Latinx—by connecting them to gangs based on the thinnest of evidence. Indeed, sometimes the information has been falsified, as was revealed to be the case with the Los Angeles Police Department (LAPD) earlier this year. In previous legislative sessions, we supported multiple pieces of legislation by Assemblymember Shirley Weber to overhaul CalGang. However, the CADOJ has missed many of the deadlines created by these statutes, so it’s clear that simply hoping for reform isn’t enough. Just as LAPD suspended its use of gang databases this month, use of CalGang must come to a dead stop until, at minimum, CADOJ fully implements reforms required by existing legislation. EFF also supports the abolition of CalGang altogether.  Below is the text of the letter signed by the coalition of gang-database activists, as well as members of the technical advisory board set up by the state to provide guidance on CalGang. Dear Attorney General Becerra,  Since the murder of George Floyd, Californians have taken to the streets to express their outrage and grief, but most importantly to state that Black Lives Matter. Millions of people exercising their First Amendment rights these past weeks have demonstrated the insufficiency of the California Department of Justice’s actions to meet their longstanding demand that law enforcement treat everyone with respect and dignity. Nowhere is that more true than in the Department’s ineffective effort to reform law enforcement’s use of the CalGang database.  
To remedy this, we the undersigned organizations and individuals demand the Department must immediately place a moratorium on the use of CalGang Database until (1) regulations for its use are adopted, (2) trainings are developed with purposeful community input and approved by the Department, and (3) all users have completed these trainings.  On October 12, 2017, the Legislature chaptered AB 90, assigning to the Department the task of regulating law enforcement agencies’ use of shared gang databases, with instruction to adopt rules that would address the racially-biased overinclusion of Black people and other people of color. The Legislature gave the Department a deadline of January 1, 2020 to accomplish this task. As of the date of this letter, regulations have yet to be adopted. The Department’s most recent Revisions to Proposed Regulations (OAL Register 2019-0430-06) delete even the Department’s revised deadline of July 1, 2020 without offering any new date. In the meantime, the CalGang database continues to be utilized by law enforcement agencies under the old policies and practices which the Legislature expressly found unacceptable in 2017. Furthermore, the currently proposed regulations have prioritized codifying the existing criteria and practices of law enforcement agencies that led to the criminalization of Black and Latino Californians over the Legislature’s stated goal of shielding communities from this unnecessary labeling and its dangerous effects.  CalGang is emblematic of the type of policing that has directly led to the recent unrest throughout the country. Gang units and other patrols fan out across communities, target Black people and other people of color, stop them under pretexts like a traffic stop or a supposedly “consensual” stop, use racist stereotypes to deem them “gang members,” and add their names and information into the database to be tracked. 
Police then use these gang labels to justify further stops, interrogations, uses of force, and enhanced punishment. The Department has yet to accomplish anything to eradicate this practice despite the Legislature’s instruction and statutory grant of authority to do exactly that. To the contrary, it has affirmatively rejected amendments grounded in empirical evidence and community input that would limit the likelihood of harm inflicted upon communities of color. While many of us believe that all law enforcement gang databases must be abolished, a moratorium until the Department has fully implemented reforms that are consistent with its legislative grant of power is a reasonable and appropriate step that the Department can immediately take while abolition of gang databases is considered by the broader group of stakeholders. The Department has little excuse not to take this step. In 2018, the legislature imposed a moratorium with no negative consequences. Currently, the Los Angeles Police Department, CalGang’s largest user, has imposed a second moratorium on its use of CalGang – spurred by the recognition that some entries lack a factual basis and are driven by harmful presumptions based at least in part upon race, and after finally acknowledging the damaging effects of continued use of this system, particularly upon the Black community. The Department can no longer allow CalGang to continue under the unacceptable old policies and procedures while the Department misses deadline after deadline, deferring reform. Now is the time for the Department to recognize that truth, and alongside community partners take concrete action by enacting an immediate moratorium on all use of CalGang. The Department should also take this moment to revisit the proposed regulations and adopt rules for the use of CalGang that provide real protections to the communities wrongly targeted by gang labeling for far too long. 
Sincerely,  ACLU of California Coalition for Humane Immigrant Rights (CHIRLA) Loyola Immigrant Justice Clinic Urban Peace Institute Sammy Nunez – Gang Database Technical Advisory Committee Chair Marissa Montes – Gang Database Technical Advisory Committee Jeremy Thornton – Gang Database Technical Advisory Committee Paul Carrillo – Gang Database Technical Advisory Committee Michael Scaffidi – Gang Database Technical Advisory Committee 2nd Call Anti-Recidivism Coalition Asian Americans Advancing Justice – CA Asian Americans Advancing Justice – Los Angeles Breaking Through Barriers to Success California Alliance for Youth and Community Justice (CAYCJ) California Attorneys for Criminal Justice Californians for Safety and Justice California Immigrant Policy Center Central American Resource Center (CARECEN) Chicanxs Unidos Chispa Community Advocates for Just and Moral Governance (MoGo) Consumer Attorneys of California Criminal Justice Clinic, UC Irvine School of Law Detours Mentoring Group, Inc. Electronic Frontier Foundation Ella Baker Center for Human Rights Fathers & Families of San Joaquin Friends Committee on Legislation of California Ground Game LA H.E.L.P.E.R. Foundation Healing Hearts Restoring Hope Homeboy Industries Homies Unidos Immigrant Defense Advocates Immigrant Legal Resource Center Latino Coalition for a Healthy California National Association of Social Workers (NASW) California Chapter PICO California Project Kinship Promesa Boyle Heights Public Counsel Restore Justice Southern California Cease Fire Committee Southern California Crossroads UC Irvine School of Law Immigrant Rights Clinic Young Visionaries Youth Leadership Academy For the full letter click here.

  • Staying Private While Using Google Docs for Legal & Mutual Aid Work
    by Bill Budington on June 19, 2020 at 1:31 am

    Regardless of your opinion about Google, their suite of collaborative document editing tools provides a powerful resource in this tumultuous time. Across the country, grassroots groups organizing mutual aid relief work in response to COVID-19 and legal aid as part of the recent wave of protests have relied on Google Docs to coordinate efforts and get help to those who need it. Alternatives to the collaborative tools either do not scale well, are not as usable or intuitive, or just plain aren’t available. Using Google Sheets to coordinate who needs help and how can provide much-needed relief to those hit hardest. But it’s easy to use these tools in a way Google didn’t envision, and trigger account security lockouts in the process. The need for privacy when doing sensitive work is often paramount, so it’s understandable that organizers often won’t want to use their personal Google accounts. But administering aid documents from a single centralized account and sharing the password amongst peers is not recommended. If one person accessing the account connects from an IP address Google has marked as suspicious, it may lock that account for some time (this can happen for a variety of reasons—a neighbor piggybacking off of your WiFi and using it to hack a website, for example). The bottom line is: the more IPs that connect to a single account, the more likely the account will be flagged as suspicious. In addition, sharing a password makes it easy for someone to change that password, locking everyone else out. It also means that you can’t protect the account with 2-step verification without a lot of difficulty. 2-step verification requires you to enter a temporary code from an authenticator app, or to use a hardware authentication key, every time you sign in. This protects the account from various password-stealing attacks. 
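The temporary codes that authenticator apps display are typically generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. As an illustrative sketch only (not Google's actual implementation), a TOTP code can be derived from a shared secret using nothing but Python's standard library:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    # Decode the base32-encoded shared secret, restoring padding if missing.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # Count how many 30-second windows have elapsed since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    # HMAC the big-endian counter with the shared secret.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects a 4-byte slice.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238's published test secret is the ASCII string "12345678901234567890",
# shown here in its base32 form.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> "287082"
```

Because each code depends on both the shared secret and the current 30-second window, an attacker who steals only the password cannot reproduce it—which is why sharing one account password among many organizers forfeits this protection.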
For any documents that you create, you’ll want clear attribution for any changes made, even if it is attributable only to a pseudonym. This helps ensure that if false or malicious data is introduced, you know where it came from. Google Docs and Sheets allow you to see the history of changes to a document, and who made those changes. You can also revert to a previous version of the document. Unfortunately, in our testing we found that Google requires a valid phone number to create and edit documents from an account. (Instead of Google Sheets to organize data, you might consider Google Forms, which allows you to build out a custom form that anyone can submit to, even without an account.) The author of a document can also share the document via a link with editor or commenter permissions, but this also requires a Google account. Google already has a mechanism for determining if a user is legitimate, via its reCAPTCHA service. Instead of requiring sensitive identifying information like phone numbers, it should allow users to create anonymous or pseudonymous accounts without having to link a phone number. There are a number of routes to getting a phone number that Google will accept and send you a verification code for. The best method for setting up your account depends on how private you want the account to be. Your real phone number is often easily linked back to your address. One step of removal is using a service that generates real phone numbers that can accept SMS messages. There are many such services out there, and most will have you sign up with your real phone number to generate those numbers. These include apps like Burner and full communications platforms such as Twilio. When you establish an account relying on a phone number generated by a third party (but, ultimately, connected to your phone number), linking a document to your identity will require information from both Google and the third-party service. 
For extra privacy, users should look into purchasing a prepaid SIM card and using a burner phone to receive the verification SMS. If you’re going down this route, you’ll probably also be interested in using a VPN or Tor Browser when collaborating. There is not a one-size-fits-all solution to collaborating privately with Google Docs. Your decisions on how private you want to be will depend on your own security plan, as well as that of your collaborators.

  • Two Different Proposals to Amend Section 230 Share A Similar Goal: Damage Online Users’ Speech
    by Aaron Mackey on June 18, 2020 at 11:41 pm

    Whether we know it or not, all Internet users rely on multiple online services to connect, engage, and express themselves online. That means we also rely on 47 U.S.C. § 230 (“Section 230”), which provides important legal protections when platforms offer their services to the public and when they moderate the content that relies on those services, from the proverbial cat video to an incendiary blog post. Section 230 is an essential legal pillar for online speech. And when powerful people don’t like that speech, or the platforms that host it, the provision becomes a scapegoat for just about every tech-related problem. Over the past few years, those attacks have accelerated; on Wednesday, we saw two of the most dangerous proposals yet, one from the Department of Justice, and the other from Sen. Josh Hawley. The proposals take different approaches, but they both seek to create new legal regimes that will allow public officials or private individuals to bury platforms in litigation simply because they do not like how those platforms offer their services. Basic activities like offering encryption, or editing, removing, or otherwise moderating users’ content could lead to years of legal costs and liability risk. That’s bad for platforms—and for the rest of us. DOJ’s Proposal Attacks Encryption and Would Make Everyone’s Internet Experience Less Secure The Department of Justice’s Section 230 proposal harms Internet users and gives the Attorney General more weapons to retaliate against online services he dislikes. It proposes four categories of reform to Section 230. First, it claims that platforms need greater incentive to remove illegal user-generated content and proposes that Section 230 should not apply to what it calls “Bad Samaritans.” Platforms that knowingly host illegal material or content that a court has ruled is illegal would lose protections from civil liability, including for hosting material depicting terrorism or cyber-stalking. 
The proposal also mirrors the EARN IT Act by attacking encryption: it conditions 230 immunity on whether the service maintains “the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).” Second, it would allow the DOJ and other federal agencies to initiate civil enforcement actions against online services that they believe are hosting illegal content. Third, the proposal seeks to “clarify that federal antitrust claims are not covered by Section 230 immunity.” Finally, the proposal eliminates key language from Section 230 that gives online services the discretion to remove content they deem to be objectionable and defines the statute’s “good faith” standard to require platforms to explain all of their decisions to moderate users’ content. The DOJ’s proposal would eviscerate Section 230’s protections and, much like the EARN IT Act introduced earlier this year, is a direct attack on encryption. Like EARN IT, the DOJ’s proposal does not use the word encryption anywhere. But in practice the proposal ensures that any platform providing secure end-to-end encryption would face a torrent of litigation—surely no accident given the Attorney General’s repeated efforts to outlaw encryption. Other aspects of the DOJ’s “Bad Samaritan” proposals are problematic, too. Although the proposal claims that bad actors would be limited to platforms that knowingly host illegal material online, the proposal targets other content that may be offensive but is nonetheless protected by the Constitution. Additionally, requiring platforms to take down content deemed illegal via a court order will result in a significant increase in frivolous litigation about content that people simply don’t like. Many individuals already seek to use default court judgments and other mechanisms as a means to remove things from the Internet. 
The DOJ proposal requires platforms to honor even the most trollish court-ordered takedown. Oddly, the DOJ also proposes punishing platforms for removing content from their services that is not illegal. Under current law, Section 230 gives platforms the discretion to remove harmful material such as spam, malware, or other offensive content, even if it isn’t illegal.  We have many concerns about those moderation decisions, but removing that discretion altogether could make everyone’s experiences online much worse and potentially less safe. It’s also unconstitutional: Section 230 notwithstanding, the First Amendment gives platforms the discretion to decide for themselves the type of content they want to host and in what form.  The proposal would also empower federal agencies, including the DOJ, to bring civil enforcement actions against platforms. Like  last month’s Executive Order targeting online services, this would give the government new powers to target platforms that government officials or the President do not like. It also ignores that the DOJ already has plenty of power. Because Section 230 exempts federal criminal law, it has never hindered the DOJ’s ability to criminally prosecute online services engaging in illegal activity. The DOJ would also impose onerous obligations that would make it incredibly difficult for any new platform to compete with the handful of dominant platforms that exist today. For example, the proposal requires all services to provide “a reasonable explanation” to every single user whose content is edited, deleted, or otherwise moderated. Even if services could reasonably predict what qualifies as a “reasonable explanation,” many content moderation decisions are not controversial and do not require any explanation, such as when services filter spam. Sen. Hawley’s Proposed Legislation Turns Section 230’s Legal Shield Into An Invitation to Litigate Every Platform’s Moderation Decisions Sen. 
Hawley’s proposed legislation, for its part, takes aim at online speech by fundamentally reversing the role Section 230 plays in how online platforms operate. As written, Section 230 generally protects platforms from lawsuits based either on their users’ content or actions taken by the platforms to remove or edit users’ content. Hawley’s bill eviscerates those legal protections for large online platforms (platforms that average more than 30 million monthly users or have more than $1.5 billion in global revenue annually), by replacing Section 230’s simple standard with a series of detailed requirements. Platforms that meet those thresholds would have to publish clear policies describing when and under what circumstances they moderate users’ content. They must then enforce those policies in good faith, which the bill defines as acting “with an honest belief and purpose,” observing “fair dealing standards,” and “acting without fraudulent intent.” A platform fails to meet the good faith requirement if it engages in “intentionally selective enforcement” of its policies or by failing to honor public or private promises it makes to users. Some of this sounds OK on paper—who doesn’t want platforms to be honest? In practice, however, it will be a legal minefield that will inevitably lead to overcensorship. The bill allows individual users to sue platforms they believe did not act in good faith and creates statutory damages of up to $5,000 for violations. It would also permit users’ attorneys to collect their fees and costs in bringing the lawsuits. In other words, every user who believes a platform’s actions were unfair, fraudulent, or otherwise not done in good faith would have a legal claim against a platform. And there would be years of litigation before courts would decide standards for what constitutes good faith under Hawley’s bill. 
Given the harsh reality that it is impossible to moderate user-generated content at scale perfectly, or even well, this bill means full employment for lawyers, but little benefit to users. As we’ve said repeatedly, moderating content on a platform with a large volume of users inevitably results in inconsistencies and mistakes, and it disproportionately harms marginalized groups and voices. Further, efforts to automate content moderation create additional problems because machines are terrible at understanding the nuance and context of human speech. This puts platforms in an impossible position: moderate as best you can, and get sued anyway—or drastically reduce the content you host in the hopes of avoiding litigation. Many platforms will choose the latter course, and avoid hosting any speech that might be controversial. Like the DOJ’s proposal,  the bill also violates the First Amendment. Here, it does so by making distinctions between particular speakers. That distinction would trigger strict scrutiny under the First Amendment, a legal test that requires the government to show that (1) the law furthers a compelling government interest and (2) the law is narrowly tailored to achieve that interest. Sen. Hawley’s bill fails both prongs: although there are legitimate concerns about the dominance of a handful of online platforms and their power to limit Internet users’ speech, there is no evidence that requiring private online platforms to practice good-faith content moderation represents a compelling government interest. Even assuming there is a compelling interest, the bill is not narrowly tailored. Instead, it broadly interferes with platforms’ editorial discretion by subjecting them to endless lawsuits from any individual claiming they were wronged, no matter how frivolous. As EFF told Congress back in 2019, the creation of Section 230 has ushered in a new era of community and connection on the Internet. 
People can find friends old and new over the Internet, learn, share ideas, organize, and speak out. Those connections can happen organically, often with no involvement on the part of the platforms where they take place. Consider that some of the most vital modern activist movements—#MeToo, #WomensMarch, #BlackLivesMatter—are universally identified by hashtags. Forcing platforms to overcensor their users, or worse, giving the DOJ more avenues to target platforms it does not like, is never the right decision. We urge Congress to reject both proposals.

  • Victory! French High Court Rules That Most of Hate Speech Bill Would Undermine Free Expression
    by Karen Gullo on June 18, 2020 at 11:09 pm

    EFF and Partners Said the Bill Would Have Empowered Government and Platforms to Censor Online Speech. Paris, France—In a victory for the free speech rights of French citizens, France’s highest court today struck down core provisions of a bill meant to curb hate speech, holding they would unconstitutionally sweep up legal speech. The decision comes as some governments across the globe, in seeking to stop hateful, violent, and extremist speech online, are considering overbroad measures that would silence legitimate speech. The French Supreme Court said the bill’s requirements—that online posts, comments, photos, and other content deemed hateful by potential plaintiffs must be taken down within 24 hours of being reported—would encourage social media platforms like Facebook and Twitter, in their haste to avoid hefty fines, to remove perfectly legal speech. The provisions “infringe on freedom of speech, and are not necessary, appropriate and proportionate,” the court said. It also rejected a provision that required speech related to terrorism and child pornography be removed within an hour of being flagged. The Electronic Frontier Foundation (EFF), Nadine Strossen, the John Marshall Harlan II Professor of Law, Emerita at New York Law School, and the French American Bar Association (FABA) urged the court in a brief submitted earlier this month to reject the bill. “We applaud the court for recognizing that citizens’ rights of free speech and expression are paramount in a democratic society, and the bill’s draconian deadlines for removal were so inflexible and extreme that those rights would be violated under France’s constitution,” said EFF International Policy Director Christoph Schmon. “Any government effort to censor objectionable content must be balanced with people’s rights to air their views on politics, the government, and the news. This bill failed to strike that balance. 
Its requirements would deputize platforms to police speech at the behest of the government, which is unacceptable in a free society.” In its filing with the court, EFF and its partners argued that the bill, known as the Avia Bill, would undermine European Union (EU) directives prioritizing users’ free speech rights when dealing with Internet activities. Instead of taking steps to foster innovation and encourage competition so that social media platforms would improve their speech removal practices or lose customers, lawmakers in the U.S., Europe, and elsewhere are pushing legislation that makes online platforms the new speech police. “Although the law’s anti-hatred goal is laudable, human rights activists around the world agree that the more effective strategy is to counter hateful ideas through education, and ensuring that everyone has meaningful access to online resources,” said Nadine Strossen, the John Marshall Harlan II Professor of Law, Emerita at New York Law School. “The Avia Bill would have forced social media platforms to single-handedly make an immediate determination as to the legal nature of the content,” said Thomas Vandenabeele and Pierre Ciric, president and vice president, respectively, at FABA. “We are pleased that the French Supreme Court adopted the position expressed in our joint June 1 amicus brief, whereby those take-down timing requirements will cause over-censorship of perfectly legal speech, and are therefore unconstitutional.” “As the European Union is gearing up for a major reform of key Internet regulation, the court’s decision is also a strong call that lawmakers should better focus on how to put users back in control of their online experience,” said Schmon. For the decision: Contact: Christoph Schmon, International Policy Director, [email protected]

  • Victory! New York’s City Council Passes the POST Act
    by Nathan Sheard on June 18, 2020 at 9:02 pm

    After three years of organizing by a broad coalition of civil society organizations and community members, New York’s City Council has passed the POST Act with an overwhelming—and veto-proof—majority supporting this common-sense transparency measure. The POST Act’s long-overdue passage came as part of a package of bills that many considered longshots before weeks of public protest calling attention to injustices in policing. However, in recent weeks many of the bill’s detractors, including New York City Mayor Bill de Blasio, came to see the measure as appropriate and balanced. The POST Act provides a much-needed first step toward transparency. Once signed into law, the act will require the NYPD to openly publish a use policy for each surveillance technology it intends to use. After this notice has been made publicly available, and members of the community have had an opportunity to voice their concerns to the department and City Council, the NYPD Commissioner will be required to provide a final version of the surveillance impact and use policy to the City Council, the mayor, and the public. The bill lacks the community control rules included in similar Surveillance Equipment Regulation Ordinances (SEROs) like Oakland’s Surveillance and Community Safety ordinance and San Francisco’s Stop Secret Surveillance Ordinance. In those communities, city agencies must get permission from their city councils before acquiring surveillance technologies. Still, the new transparency requirements of New York’s POST Act are an important step forward. With federal agencies expanding their spying programs against immigrants and political dissidents, and concern that the federal government will commandeer the surveillance programs of state and local governments, the police surveillance transparency movement continues to gain momentum at the local and state level. 
EFF will continue to work with our Electronic Frontier Alliance allies like New York’s Surveillance Technology Oversight Project and the Bay Area’s Oakland Privacy to develop and pass comprehensive legislation ensuring civil liberties and essential privacy. To find an Electronic Frontier Alliance member organization in your community, or to learn how your group can join the Alliance, visit

  • California Privacy Advocates Sue Vallejo Over Cell-Site Simulator
    by Dave Maass on June 18, 2020 at 6:34 pm

    Special thanks to legal intern Gillian Vernick, who was lead author of this post. The Vallejo Police Department was warned: by rushing to purchase a cell-site simulator without first crafting a use policy, the agency side-stepped its legal duty of transparency. Now, Oakland Privacy has filed a first-of-its-kind suit to ensure the public has a say in how this controversial surveillance technology is deployed in their communities. Cell-site simulators are devices that police use to gather information from cell phones, typically to locate, identify, and track people. Also known as IMSI catchers, these devices pretend to be cell phone towers in order to trick phones into connecting to them. The technology is so controversial that, in 2015, the California legislature stepped in and passed SB 741, a law that ensures a police department cannot acquire a cell-site simulator without a city council first approving a detailed policy that is “consistent with respect for an individual’s privacy and civil liberties.” In mid-March, amid COVID-19 shelter-in-place orders, the Vallejo City Council approved a $766,000 purchase of a cell-site simulator manufactured by KeyW. However, instead of holding a hearing on the policy, the council simply told the police that they could write a policy later. Oakland Privacy and EFF both sent letters to the city demanding an immediate halt on the purchase. Those letters went unheeded. In response, Oakland Privacy—a local member in the Electronic Frontier Alliance and a winner of the 2019 Pioneer Award given by EFF—and two residents filed suit on May 21 to demand that Vallejo Police follow the law. In a press release issued by Oakland Privacy, plaintiff Dan Rubins stated: “Now, during a severe health and economic crisis that is already causing a $12M budget shortfall, they want to spend almost $1M to buy a powerful and unnecessary surveillance device while they write its use policy in secret. 
Their actions flout transparency and procurement regulations that give people a forum to raise these issues that impact all of our civil liberties.” The complaint alleges the Vallejo Police Department violated SB 741 (California Government Code § 53166) by failing to comply with the requirement that the City Council approve a usage policy for the cell-site simulator before it is acquired and operated. Instead of adopting or reviewing a privacy policy before authorizing the purchase, the City Council simply authorized the chief of police to create a privacy policy behind closed doors, without public participation as required by the law. The complaint further contends that not only did the Vallejo Police Department fail to present a policy for approval, but the actual policy created by the police chief fails to comply with all the requirements for the construction of a policy under SB 741. The law has a long list of elements that must be included in a policy, many of which were addressed inadequately by the policy eventually released by the Vallejo Police Department.  First, the policy fails to include a description of the employees authorized to access information collected with the cell-site simulator. Second, it allows the police chief or his designee to authorize an unspecified employee to use the device, without a requirement of amending the policy to show this person has been authorized to use the device or access the information collected. Finally, the policy authorizes the use of the technology without prior judicial approval based on an imminent threat of generic “bodily injury” of any kind, which is inconsistent with the California Electronic Communications Privacy Act (CalECPA). 
Under CalECPA, the standard for using a cell-site simulator without prior judicial authorization is “danger of death or serious physical injury,” whereas the Vallejo Police Department policy leaves room for the technology to be deployed without a warrant for something as basic as a twisted ankle.  This battle is important. States, counties, cities, and transit agencies around the nation, particularly in California, are passing laws to ensure surveillance technology can’t be acquired or used before a policy is put in writing and approved by an elected body in a public hearing. We applaud Oakland Privacy for taking a stand against law enforcement circumventing transparency requirements intended to give the public a say in the surveillance technologies used in their communities. 

  • Our EU Policy Principles: Interoperability
    by Svea Windwehr on June 18, 2020 at 7:00 am

    As the EU is gearing up for a major reform of key Internet regulation, we are introducing the principles that will guide our policy work surrounding the Digital Services Act. In this post, we take a closer look at what we mean when we talk about interoperability obligations, and at some of the principles that should guide interoperability measures to make sure they serve users, not corporations. New Rules for Online Platforms The next few years will be decisive for Internet regulation in the EU and beyond as Europe is considering the most significant update to its regulatory framework for Internet platforms in two decades. In its political guidelines and a recent communication, the European Commission has pledged to overhaul the e-Commerce Directive, the backbone of the EU’s Internet regulation. A new legal act—the Digital Services Act—is supposed to update the legal responsibilities of online platforms. New competition-friendly rules that tackle unfair behavior of dominant platforms are another objective of the upcoming reform. EFF will work with EU institutions to advocate that users are put back in control of their online experiences through transparency and anonymity measures whilst preserving the backbone of innovation-focused Internet bills: immunity for online platforms from liability for user content and a ban on filtering and monitoring obligations. The reform of the e-Commerce Directive bears the risk that the EU could follow in the footsteps of Internet-hostile regulations that foster the privatization of enforcement, such as the Copyright Directive, the German NetzDG, or the French Avia Bill. On the other hand, it is also an opportunity to break open the walled gardens that many large platforms have become, and to put users’ rights to informational self-determination front and center. Interoperability Obligations We believe that interoperability obligations are an important tool to achieve these goals. 
Today, most elements of our online experiences are designed and regulated by large platform companies that hold significant market power. Many platforms take it upon themselves (or are required) to police expression and to arbitrate access to content, knowledge, and goods and services. They act as gatekeepers to most of our social, economic, and political interactions online. Platforms are powerful, and their power stems from many sources: most of today’s big tech players have a history of stifling competitors through technical measures, strategic lawsuits, and acquiring competitors. Over time, big platforms have become entrenched thanks to network effects, their sheer size, and the significant resources at their disposal. This is reinforced by regulation that has often become too difficult or expensive to implement for smaller competitors. The result: users become hostages, locked in a labyrinth of walled gardens. The solution to this situation is not to reinvent the wheel, but to take inspiration from what the Internet’s early days looked like. The ascent of many of today’s significant players was aided by interoperability—the ability to make a new product or service work with an existing product or service. Today’s incumbents made their fortunes by building their new ideas onto existing products or structures, thereby creating adversarial interoperability, often against the then-incumbents’ will. In those early days of the Internet, not only did start-ups and new market entrants flourish, but users also had much more choice and control over the services and products that created their experiences online. Principle 1: General Interoperability Obligations EFF’s vision is a legal regime that fosters innovation and puts users back in control of their data, privacy, and online experiences. 
We believe that interoperability has a major role to play in making this vision of a Public Interest Internet come to life, which is why we propose interoperability obligations for platforms with significant market power. What we mean by that is simple: platforms that control significant shares of a market, and act as gatekeepers to that market, must offer possibilities for competing, non-incumbent platforms to interoperate with their key features. While Europeans already have a right to data portability under the GDPR, this right comes with limits. It is not comprehensive (users cannot port all personal data), it is conditional (only possible where “technically feasible”), and it is not clear where users should port their data to. Interoperability is the missing piece needed to breathe life into the right to portability. Interoperability through technical interfaces would enable users to communicate with friends across platform boundaries, or to follow their favorite content across different platforms without having to create several accounts. Users would no longer be forced to stay on a platform that disregards their privacy, covertly collects their data, or jeopardizes their security, for fear of losing their social network. Instead, users would have the chance to make real and informed choices. Principle 2: Delegability But it doesn’t end here. Interoperability should also happen at the level of user interfaces, and should allow for as much flexibility and diversity as users want. Therefore, platforms with significant market power should also make it possible for competing third parties to act on users’ behalf. If users want to, they should be able to delegate elements of their online experience to different competent actors. For example, if you don’t like Facebook’s content moderation practices, you should be able to delegate that task to another organization, like a non-profit specializing in community-based content moderation. 
Principle 3: Limit Commercial Use of Data To avoid the exploitation of interoperability, any data made available through interoperability should not be available for general commercial use. Most major platforms are built on business models that rely on the (often covert) collection and sale of users’ data, thereby monetizing users’ attention and exploiting their personal data. Therefore, any data made available for the purpose of interoperability should only be used for maintaining interoperability, safeguarding users’ privacy, or ensuring data security. By prohibiting the commercial use of data used for implementing or maintaining interoperability, we also want to positively incentivize competitors with innovative, responsible, and privacy-protective business models. Principle 4: Privacy It is crucial to empower users to take control of how, when, why, and with whom their data is being shared. This means that key principles underpinning the GDPR and other applicable legislation—such as data minimization, privacy by design, and privacy by default—must be respected. This should also include easy-to-use interfaces through which users can give their explicit consent regarding any use of their data (as well as revoke that consent at any time). Principle 5: Security Users’ data and communications should be kept not only private, but also safe. Interoperability measures should always center on users’ security and should never be construed as a reason that prevents platforms from taking steps to keep users safe. However, if intermediaries do have to suspend interoperability to fix security issues, they should not exploit such situations to break interoperability permanently, but should rather communicate transparently, resolve the problem, and reinstate interoperability interfaces within a reasonable and clearly defined timeframe. 
Principle 6: Documentation and Non-Discrimination Finally, it is crucial to make sure that interoperability does not become a tool for powerful incumbents to act as gatekeepers and to further enshrine their dominant position. Our goal of user empowerment is served best when diversity and plurality are strongest, so interoperability should benefit as many competitors as possible, rather than just a few favored parties. To offer users more choice, access to interoperability interfaces should not discriminate between different competitors and should not come with strenuous obligations or content restrictions. Interoperability interfaces, such as APIs, must also be easy to find, well-documented, and transparent. Conclusion Requiring platforms with significant market power to allow interoperability with their services is an important first step to empower users to decide how they want to shape their online experiences. Data portability, interoperability, and delegability will allow users to make real choices regarding the people they want to interact with, the moderation of content they encounter, and the use of their data. Interoperability mandates, however, are not an easy or quick fix to the problems underlying the current landscape of dominant platforms. We must take a holistic view of digital policy, and take care that policymakers do not inadvertently give incumbents excuses to block their competitors from entering a market.

  • Germany’s Corona-Warn-App: Frequently Asked Questions
    by Svea Windwehr on June 17, 2020 at 9:35 pm

    This blog post is co-authored with Least Authority, a Berlin-based tech company committed to advancing digital security and preserving privacy as a fundamental human right. This week, Germany’s COVID tracing app finally went live. As governments around the world have been rushing to adopt contact tracing apps in their fight against the COVID-19 pandemic, their efforts have been accompanied by important debates regarding the safety, efficacy, and necessity of the technology. Germany’s approach to contact tracing apps has been a long and winding road, with many delays and shifts in course. Now that the “Corona-Warn-App” is available for download, we are answering some of the key questions surrounding topics like data protection, privacy, and the rules that govern the app. Do I have to have the app? No. The download and use of the app is voluntary. So far, however, there is no law governing the app, and critics have argued that the voluntary nature of the app should be legally protected. Additionally, social pressure or pressure from employers to install the app may undermine individuals’ ability to choose freely whether or not they want to download the app. Do I need to download a new contact tracing app every time I cross a European border? Probably not. Most countries that are part of the Schengen zone, in which EU citizens may cross borders without going through border controls, have eased their travel restrictions. EU governments that use “decentralized” apps have agreed to make their contact tracing apps interoperable across borders, but it is not clear when that solution will be in place. However it is unlikely that Germany’s decentralized app would be interoperable with, for example, France’s, which uses a “centralized” approach. It is worthwhile to check a country’s policy regarding contact tracing apps before crossing any borders—Re-open EU is a useful resource. 
What’s the difference between centralized and decentralized apps, and which approach does the German app follow? As governments around the world have become interested in contact tracing technologies, researchers have advocated different solutions. One important question in the design of contact tracing systems is whether they are “centralized” or “decentralized.” In the context of contact tracing apps, both centralized and decentralized models rely on an authority that processes data. The difference is what the authority (for example, a public health authority) knows. In the centralized model, the authority knows enough to contact the people who may have been near a person who later tests positive. This includes data about interpersonal associations, which can be quite sensitive. In the decentralized model, the authority usually only knows the identities of users who have been diagnosed with COVID-19. Under a decentralized model, the contact tracing app compares the list of IDs of people who tested positive with the list of IDs it has come in contact with locally, on the user’s phone. While centralized and decentralized systems can both have a host of privacy problems, centralized approaches rest on the dangerous assumption that one central authority can be trusted to keep vast quantities of sensitive data secure, and will not misuse it. As we have seen over and over again, such trust is often abused. Carefully constructed decentralized models are much less likely to harm civil liberties, and EFF has taken a clear stance against the use of centralized systems for contact tracing. In the EU, many governments—including Germany’s—started out with a centralized approach, but pivoted to a decentralized system after criticism from digital rights NGOs and researchers. Germany’s Corona-Warn-App is based on the decentralized framework developed by Apple and Google. While it is not perfect, it is a more privacy-friendly option. How does the app work? 
The goal of the Corona-Warn-App is to notify users when they have been in contact with other users who have tested positive. The underlying assumption is that many people own smartphones, and that most carry their phones with them. The majority of smartphones include Bluetooth technology, which allows the sharing of data across short distances. That technology is used for the contact tracing app. The app is built on Apple and Google’s exposure notification interface, which allows smartphones to exchange short Bluetooth signals that carry rotating identification numbers. Each phone shares its own identifier approximately every five minutes, and listens constantly for nearby devices doing the same. Phones use daily random keys to generate new identifiers every couple of minutes, and store them locally (i.e., on the user’s phone) for 14 days. When people who have downloaded the app are near each other for a given period of time, their phones exchange their IDs, and each saves the ID of the other phone. Alongside the ID, phones also save data about the date, time, and duration of the contact, as well as the strength of the signal, which will be important later on for assessing a user’s risk of infection. How does the app know whether I’m infected? When a person tests positive for COVID-19, they can—but are not obliged to—report their test result to the Corona-Warn-App. In such cases, the app will send all of the daily keys that it has used during the past 14 days to a server, after the infected user has given their consent to share that data. These keys let anyone who sees them generate the associated user’s rolling device IDs. Every phone that has the app installed regularly downloads the list of IDs of users who have tested positive. The app then compares that list with the list of IDs it has encountered during the past two weeks. This matching does not happen on a centralized server, but instead happens locally, on the user’s phone. 
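The decentralized matching step can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not the Corona-Warn-App’s actual implementation: the SHA-256 derivation, the function names, and the contact-metadata shape are all invented to show the structure of the comparison, which happens entirely on the user’s phone.

```python
import hashlib

def derive_rolling_ids(daily_key):
    """Illustrative stand-in: deterministically derive the day's rotating
    Bluetooth identifiers (144 ten-minute intervals) from one daily key.
    Anyone holding the daily key can regenerate all of them."""
    return {
        hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
        for i in range(144)
    }

def find_exposures(positive_daily_keys, local_contacts):
    """Compare published keys of diagnosed users against IDs seen locally.

    positive_daily_keys: daily keys downloaded from the server.
    local_contacts: dict mapping an observed rolling ID to the contact
                    metadata (date, duration, signal) stored on the phone.
    Returns the metadata of every matching encounter.
    """
    exposures = []
    for key in positive_daily_keys:
        for rolling_id in derive_rolling_ids(key):
            if rolling_id in local_contacts:
                exposures.append(local_contacts[rolling_id])
    return exposures
```

Note that the server only ever sees daily keys of users who reported a positive test; the contact list itself never leaves the phone, which is the property that makes this design “decentralized.”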
Users are also not informed that they had contact with a specific ID that is linked to an infected person, but are only told that contact has been made with an unspecified individual who has tested positive for COVID-19. Users are told the day on which they made contact with the infected person, but not the time, to help protect the identity of the patient. Once the app determines that it has been in contact with a person who is infected, it calculates the risk that its user has been infected with COVID-19. This is when the data regarding the date and duration of the contact, as well as the signal strength that the phone has collected alongside the ID, come into play. In conjunction with the patient’s transmission risk factor, determined by the health authority, the app informs the user about their aggregated infection risk. Users are not obliged to take any specific measures once they are informed about their risk, and do not have to report their risk factor to their local health authority. Users are thus free to make adjustments to their behavior based on their risk score (e.g. seek testing or self-quarantine) or to ignore the score. Does the app have my name? No. In line with Europe’s data protection law, the GDPR, the app minimizes the amount of personal data it requires users to share. Users only have to provide data for the following functions: consent to the use of the Exposure Notification framework, the API developed by Apple and Google that allows the app to communicate between iPhones and smartphones that run on Google’s Android operating system; transaction authentication numbers (TANs), through which users validate their test result; and consent for the upload of daily keys, which can be used to generate device IDs (after the user has submitted a positive test result). Why is this supposed to help me to know I was close to an infected person when I cannot get tested anyway? 
While it was difficult to get tested for COVID-19 during the first months of the pandemic, the situation in Germany has since improved. People who want to get tested should contact a local hospital, their general practitioner, or a testing center. Germany has also pledged to expand testing capabilities for asymptomatic persons. When informing users of their infection risk, the app also provides the contact details of local authorities and further information regarding the steps users can take. What if people feed the app with false information? To prevent users from submitting false test results, the app requires patients to confirm the authenticity of their test result. This can happen via a TAN or a QR code. The app will upload the daily keys it has used over the past 14 days only after a test result has been validated. Another potential source of false information is the Bluetooth technology on which the app is based. Bluetooth technology was not designed to support contact tracing efforts, and false positives, false negatives, or imperfect results are all possible. Is the data really anonymized? Yes. The data will be anonymized, meaning that your personal information will not be shared with the mobile devices that come in contact with yours, but that does not mean everyone’s identity is guaranteed to be unknown to absolutely everyone else and in every context. For example, if you do not leave your home for 14 days and only one person visits you during that time, and you are alerted to having been in contact with a person who has tested positive, you will be able to deduce that the individual who visited you was the person who tested positive. How is the data protected? The data collected by the app is stored on your mobile device. Within the app, all stored data is encrypted according to industry best practices. The stored data also includes one key per day (the “daily key”) that is used to generate the broadcasted identifiers. 
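The relationship between a daily key and the identifiers broadcast over Bluetooth can be illustrated with a simplified one-way derivation. The real Exposure Notification protocol uses HKDF and AES; this HMAC-based sketch is not that protocol, and only mirrors its structure: one random key per day, one identifier per roughly ten-minute interval, and no way to go backwards from an identifier to the key.

```python
import hmac
import hashlib

def rolling_identifier(daily_key, interval):
    """Derive the identifier broadcast during one ~10-minute interval.

    The derivation is one-way: observing an identifier over Bluetooth
    reveals nothing about the daily key. But publishing the daily key
    lets anyone regenerate all 144 identifiers for that day -- which is
    exactly what happens when a user reports a positive test.
    """
    message = b"interval:" + interval.to_bytes(4, "big")
    return hmac.new(daily_key, message, hashlib.sha256).digest()[:16]

# A phone cycles through a fresh identifier for each interval of the day.
day_key = bytes(16)  # example only; real daily keys are random
ids_for_day = [rolling_identifier(day_key, i) for i in range(144)]
```

Because neighboring phones only ever see these short-lived identifiers, two encounters with the same person on different days are unlinkable unless that person later publishes their daily keys.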
When a positive test result is confirmed, the previous 14 days of user keys stored in the device of that individual are voluntarily shared with the server. These keys are then broadcast to every device that is using the app. These devices use the keys to derive the rotating device IDs for the infected user and compare them against their local contact lists. The devices with matches will indicate that they have been in close proximity with an individual who has tested positive for COVID-19.  Does the government have access to the data? No. According to the design of the Corona-Warn-App, the government should not have access to contact logs stored on your device.  Mobile devices upload their daily keys (“Temporary Exposure Keys”), and other mobile devices download those, derive the 10-minute keys (“Rotating Proximity Identifiers”), and compare against their logged contacts. This means that the server, and anyone operating the server (like the government), doesn’t learn your or others’ contact graphs (the information about who you come in contact with and how that all connects together). The applications are only uploading keys from positive users and not the contact logs themselves. All of this assumes that the mobile devices function the same as is described in the documentation from Google and Apple. The Corona-Warn-App claims that it only adds data about which protocol version it is using and the strength of the signal, but it is not impossible for the app on your mobile device to attach additional data. Can and will I be tracked through the app? The Corona-Warn-App is intended to support the tracing of infection chains, and not to access or track the location of the user. Additionally, the developers seem to refrain from using analytics and telemetry tools in order to collect as little personally identifying data as possible. 
While it is possible that third-party listeners can learn some information from the data broadcast by the app, it is unlikely that the app can be used as a reliable location tracking mechanism, especially compared to the other digital trails already left by our devices. However, some risks remain. Is the app open-source? Yes. The code for the app is publicly available on GitHub, a software development platform. While it is technically possible for the app vendor to distribute a version of the code that was modified to collect more personal data, it is unlikely that such a manipulation would go unnoticed amidst the close scrutiny of the app in Germany. How many apps are there? Besides the Corona-Warn-App, there is also the “data donation” app of the Robert Koch Institute, Germany’s federal agency responsible for disease control and prevention. That app allows users to—voluntarily—share biometric data collected with wearables like Fitbits. The app has been criticized for its unclear data protection and privacy safeguards. EFF has cautioned against the negative consequences associated with the use of wearables to combat COVID-19. Will the app be sunsetted after the “end” of the crisis? The German government has not yet announced its criteria or timeline for sunsetting the app, and critics are calling for a fixed expiration date. Apple and Google have publicly committed to disabling their exposure notification system on a regional basis when it is no longer needed. Users are free to deactivate the exposure logging feature, through which phones receive the temporary IDs of other users, at any time. Users can also uninstall the app whenever they feel that their need for it has subsided. We know that, historically, governments often hang on to new powers they acquire during a crisis, so it is critical that the government make its timeline for this technology clear.

  • VICTORY: Zoom Will Offer End-to-End Encryption to All Its Users
    by Gennie Gebhart on June 17, 2020 at 9:00 pm

    We are glad to see Zoom’s announcement today that it plans to offer end-to-end encryption to all its users, not just those with paid subscriptions. Zoom initially stated it would develop end-to-end encryption as a premium feature. Now, after 20,000 people signed on to EFF and Mozilla’s open letter to Zoom, Zoom has done the right thing, changed course, and taken a big step forward for privacy and security. Other enterprise companies like Slack, Microsoft, and Zoom’s direct competitor Cisco should follow suit and recognize, in the Zoom announcement’s words, “the legitimate right of all users to privacy” on their services. Companies have a prerogative to charge more money for an advanced product, but best-practice privacy and security features should not be restricted to users who can afford to pay a premium. The pandemic has moved more activities online—and specifically onto Zoom—than ever before. For an enterprise tool like Zoom, that means new users that the company never expected and did not design for, and all the unanticipated security and privacy problems that come with that sudden growth. Zoom’s decision to offer end-to-end encryption more widely is especially important because the people who cannot afford enterprise subscriptions are often the ones who need strong security and privacy protections the most. For example, many activists rely on Zoom as an organizing tool, including the Black-led movement against police violence. To use Zoom’s end-to-end encryption, free users will have to provide additional information, like a phone number, to authenticate. As Zoom notes, this is a common method for mitigating abuse, but phone numbers were never designed to be persistent all-purpose individual identifiers, and using them as such creates new risks for users. In different contexts, Signal, Facebook, and Twitter have all encountered disclosure and abuse problems with user phone numbers. 
At the very least, the phone numbers that users give Zoom should be used only for authentication, and only by Zoom. Zoom should not use these phone numbers for any other purpose, and should never require users to reveal them to other parties. The early beta of end-to-end encryption on Zoom will arrive next month. Users should still take steps to harden their Zoom settings to defend against trolls and other privacy threats. In the meantime, we applaud Zoom’s decision to make these privacy and security enhancements available to all of their hundreds of millions of users.

  • A Quick and Dirty Guide to Cell Phone Surveillance at Protests
    by Cooper Quintin on June 16, 2020 at 11:38 pm

    As uprisings over police brutality and institutionalized racism have swept over the country, many people are facing the full might of law enforcement weaponry and surveillance for the first time. Whenever protesters, cell phones, and police are in the same place, protesters should worry about cell phone surveillance. Often, security practitioners or other protesters respond to that worry with advice about the use of cell-site simulators (also known as a CSS, IMSI catcher, Stingray, Dirtbox, Hailstorm, fake base station, or Crossbow) by local law enforcement. But often this advice is misguided or rooted in a fundamental lack of understanding of what a cell-site simulator is, what it does, and how often they are used. While it is possible that cell-site simulators are being or have been used at protests, that shouldn’t stop people from voicing their dissent. With a few easy precautions by protesters, the worst abuses of these tools can be mitigated. The bottom line is this: there is very little concrete evidence of cell site simulators being used against protesters in the U.S. The threat of cell site simulators should not stop activists from voicing their dissent or using their phones. On the other hand, given that more than 85 local, state, and federal law enforcement agencies around the country have some type of CSS (some of which are used hundreds of times per year), it’s not unreasonable to include cell site simulators in your security plan if you are going to a protest and take some simple steps to protect yourself. A CSS is a device that mimics a legitimate cellular tower. Police around the world use this technology primarily to locate a phone (and therefore a person) with a high degree of accuracy, or determine who is at a specific location. 
There have been reports in the past that advanced CSSs can intercept and record the contents and metadata of phone calls and text messages on 2G networks; however, there are no publicly known ways to listen to text messages and calls on 4G networks. Cell-site simulators can also disrupt cellular service in a specific area. However, it is very hard to confirm conclusively that a government is using a CSS, because many of the observable signs of CSS use—battery drain, service interruption, or network downgrades—can happen for other reasons, such as a malfunctioning cellular network. For more details on how cell-site simulators work, read our in-depth white paper “Gotta Catch ‘em All.” Interception of phone calls and text messages is the scariest potential capability of a CSS, but also perhaps the least likely. Content interception is technically unlikely because, as far as we know based on current security research (that is, research on 2G and LTE/4G networks that does not take into account any security flaws or fixes that might occur in the 5G standard), content interception can only be performed when the target is connected over 2G, rendering it somewhat “noisy” and easy for the user to become aware of. Cell-site simulators can’t read the contents of encrypted messages sent with apps such as Signal, WhatsApp, Wire, Telegram, or Keybase in any scenario. Police using a CSS to intercept content is legally unlikely as well because, in general, state and federal wiretap laws prohibit intercepting communications without a warrant. And if police were to get a wiretap order from the court, they could go directly to the phone companies to monitor phone calls, giving them the advantage of not having to be in physical proximity of the person and the ability to use the evidence gathered in court. 
One advantage law enforcement might get from using a CSS for content interception at a protest is being able to effectively wiretap several people without having to know who they are first. This would be advantageous if police didn’t know who was leading the protest beforehand. This type of mass surveillance without a warrant would be illegal. However, police have been known to use CSS without a warrant for tracking down suspects. So far, there is no evidence of police using this type of surveillance at protests. Locating a specific mobile device (and its owner) is anecdotally the most common use of cell-site simulators by law enforcement, but it may be the least useful at a protest. Locating a specific person is less useful at a protest because the police can usually already see where everyone is using helicopters and other visual surveillance methods. There are some situations, though, where police might want to follow a protester discreetly using a CSS rather than with an in-person team or a helicopter. If a CSS were to be used at a protest, the most likely use would be determining who is nearby. A law enforcement agency could theoretically gather the IMSI of everyone at a gathering point and send that to the phone company later for user identification, to prove that they were at the protest. There are other ways to accomplish this: law enforcement could ask phone companies for a “tower dump,” which is a list of every subscriber who was connected to a specific tower at a specific time. However, this would have the disadvantages of being slower, requiring a warrant, and having a wider radius, potentially gathering the IMSIs of many people who weren’t at the protest. Denial of service and signal jamming are additional capabilities of a CSS. In fact, the FBI has admitted that CSS use can cause signal disruption for people in the area. 
Unfortunately, for the same reasons it’s hard to detect CSS use, it’s hard to tell how often they are disrupting service, either purposefully or accidentally. What looks like signal jamming could also be towers getting overloaded and dropping connections. When many people suddenly gather in one place, it can overload the network with amounts of traffic it wasn’t designed for. How to protect yourself from a cell-site simulator As noted in our Surveillance Self-Defense guide for protesters, the best way to protect yourself from a cell-site simulator is to put your phone in airplane mode and disable GPS, wifi, and Bluetooth, as well as cellular data. (While GPS is “receive only” and does not leak any location information on its own, many apps track GPS location data, which ends up in databases law enforcement can search later.) We know that some IMSI catchers can also intercept content, but as far as we know none of them can do this without downgrading your cellular connection to 2G. If you are concerned about protecting your device against this attack, the best thing you can do is use encrypted messaging apps like Signal or WhatsApp, and put your phone in airplane mode if you see it drop down to 2G. (There are plenty of legitimate reasons your phone might downgrade part of its connection to 2G, but better safe than sorry.) However, an important part of protests can be streaming or recording and immediately uploading videos of police violence against protesters. This is at odds with the advice of keeping your phone off or in airplane mode. It’s up to you to decide what your priorities at a protest are, and to know that what’s important for you might not be someone else’s priority. Unfortunately, iOS and Android currently offer no easy way to force your phone to use only 4G, though this is something the developers could certainly add to their operating systems. If you can turn off 2G on your phone, it is a good precaution to take. 
How a cell-site simulator might be detected Unfortunately, cell-site simulators are very difficult to detect. Some of the signs one might interpret as evidence, such as downgrading to 2G or losing your connection to the cell network, are also common signs of an overloaded cell network. There are some apps that claim to be able to detect IMSI catchers, but most of them are either based on outdated information or have so many false positives that they are rendered useless. One potential way to detect cell-site simulators is to use a software-defined radio to map all of the cellular antennas in your area and then look for antennas that show up and then disappear, move around, show up in two or more places, or are especially powerful. There are several projects that attempt to do this, such as “Seaglass” and “SITCH” for 2G antennas, and EFF’s own “Crocodile Hunter” for 4G antennas. While it is possible that cell-site simulators are being or have been used at protests, that shouldn’t stop people from voicing their dissent. With a few easy precautions by protesters, the worst abuses of these tools can be mitigated. Nevertheless, we call on lawmakers and people at all levels of the cellular communications industry to take these issues seriously and work toward ending CSS use.
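The survey-and-compare approach behind projects like Seaglass and Crocodile Hunter can be sketched as a simple heuristic over repeated SDR scans. The scan format, thresholds, and flagging rules below are invented for illustration; real detectors use far richer signal data:

```python
def flag_suspicious_towers(scans):
    """Flag cell IDs that behave unlike fixed infrastructure.

    scans: a list of survey passes, each a dict mapping cell_id ->
           (latitude, longitude, signal_dbm) as observed by an SDR.
    Heuristics: a legitimate tower stays in one place and is present in
    most scans; a cell-site simulator may appear briefly, "move" between
    scans, or transmit with implausibly strong signal.
    """
    suspicious = set()
    positions = {}    # cell_id -> set of rounded (lat, lon) fixes
    appearances = {}  # cell_id -> number of scans it appeared in
    for scan in scans:
        for cell_id, (lat, lon, dbm) in scan.items():
            positions.setdefault(cell_id, set()).add(
                (round(lat, 3), round(lon, 3))
            )
            appearances[cell_id] = appearances.get(cell_id, 0) + 1
            if dbm > -40:  # implausibly strong for a distant fixed tower
                suspicious.add(cell_id)
    for cell_id, fixes in positions.items():
        if len(fixes) > 1:  # the "tower" moved between scans
            suspicious.add(cell_id)
        if appearances[cell_id] < len(scans) // 2:  # appeared, then vanished
            suspicious.add(cell_id)
    return suspicious
```

Each heuristic alone produces false positives (a van-mounted but legitimate cell on wheels also “moves”), which is why these projects combine many signals and still require human review.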

  • Nominations Open for 2020 Barlows!
    by Hannah Diaz on June 16, 2020 at 9:08 pm

    Nominations are now open for the 2020 Barlows, to be presented at EFF’s 29th Annual Pioneer Award Ceremony. Established in 1992, the Pioneer Award Ceremony recognizes leaders who are extending freedom and innovation in the realm of technology. In honor of Internet visionary, Grateful Dead lyricist, and EFF co-founder John Perry Barlow, recipients are awarded a “Barlow,” previously known as a Pioneer Award. The nomination window will be open until 11:59pm Pacific time on June 30, 2020. You could nominate the next Barlow winner today! What does it take to be a Barlow winner? Nominees must have contributed substantially to the health, growth, accessibility, or freedom of computer-based communications. Their contributions may be technical, social, legal, academic, economic, or cultural. This year’s winners will join an esteemed group of past award winners that includes the visionary activist Aaron Swartz, global human rights and security researchers The Citizen Lab, open-source pioneer Limor “Ladyada” Fried, and whistle-blower Chelsea Manning, among many remarkable journalists, entrepreneurs, public interest attorneys, and others. The Pioneer Award Ceremony depends on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the Pioneer Award Ceremony, please email [email protected]. Remember, nominations are due no later than 11:59pm PDT on June 30th! After you nominate your favorite contenders, we hope you will consider joining our virtual event this fall to celebrate the work of the 2020 winners. If you have any questions or if you’d like to receive updates about the event, please email [email protected]. Nominate your favorite digital rights hero now!

  • Bracelets, Beacons, Barcodes: Wearables in the Global Response to COVID-19
    by Katitza Rodriguez on June 15, 2020 at 11:38 pm

    In their efforts to contain the spread of the pandemic, governments around the world are rolling out body-worn devices (“wearables”) to assist in fighting the virus. Some governments want a technological silver bullet to solve the public health crisis. But many of the tools aimed at solving problems come with a host of other problems that will undermine the public health goals for which they are adopted, and create new unintended consequences for privacy, association, and freedom of expression. These electronic devices are usually worn on the wrist or ankle. Their use can be mandated by the government or voluntary (although users don’t always understand exactly what it is they’re being asked to wear). We might tend to associate the idea of a “wearable” with either a smartwatch or an ankle monitor, but governments are also using wrist-worn “bracelets” for a broad range of different purposes amid the COVID-19 pandemic.   Wearables may use an electronic sensor to collect health information from the wearer (by measuring vital signs) and act as an early warning to identify likely COVID-19 patients before they show any symptoms. They can also be used to detect or log people’s proximity to one another (to enforce social distancing) or between a person’s bracelet and that person’s own mobile phone or a stationary home beacon (to enforce home quarantine). For quarantine enforcement, the devices might also use a GPS receiver to inform authorities of the wearer’s location. Some use Bluetooth radio beacons to let authorities confirm when the wearer is within range of a phone that itself is running a contact tracing app (rather than leaving the phone at home and going outside in violation of a health order). And some may be low-tech wristbands that are no more than a piece of paper with a QR code, which authorities may regularly ask the user to photograph with a mobile app (among other uses of photo demands for quarantine enforcement). 
Like other technologies deployed for pandemic-related tasks, they vary along several dimensions, including whether they are voluntary and/or under control of the user, and whether they are used to surveil whether a person is doing what the state told them to do, or merely to provide the user with health information to assist the user’s decision-making. Some impose significant privacy risks. And, particularly because of the haste with which they’ve been deployed, they also vary in terms of their apparent suitability for their purpose. Here, we will highlight a range of devices that different governments are currently asking or telling people to put on their wrists or ankles to fight the pandemic. Early Warning System to Identify COVID-19 Patients In Liechtenstein, the Principality is financially supporting a medical study called “COVI-GAPP” by the Swiss medical testing firm Labormedizinisches Zentrum Dr. Risch. In this voluntary trial, 2,200 persons (about 5% of tiny Liechtenstein’s population) are being given “Ava”-brand bracelets to determine whether these wearables can identify COVID-19 pre-symptomatic cases (i.e. before the patient shows any symptoms). The bracelets, which were supplied by Swiss fertility start-up Ava, are worn at night and record biometric data such as movements, body temperature, blood flow, breath, and pulse rate. The clinical trial will study the biometric data to see whether an algorithm can spot indicators that a person might have developed COVID-19 symptoms—increased temperature, shortness of breath and cough—even before patients notice these themselves. Participation in the clinical trial is voluntary, and the collected data is pseudonymized. The collected data is still subject to Europe’s General Data Protection Regulation (GDPR), which applies in Liechtenstein. 
As a general rule, the processing of biometric data is strictly prohibited for the purpose of uniquely identifying a person, unless the person gives explicit consent to such processing. While the study is government-funded, the Principality stated that it does not have access to the research data. We should be careful not to lose sight of or take shortcuts on data protection principles for biometric data, such as express consent, data minimization, transparency, and security. Personal medical data gathered from wearables and machine learning should be used in a way that patients can understand and agree to, and should be deleted when it is no longer needed. Workplace Monitoring of Social Distancing Many employers are showing interest in making their staff wear electronic bracelets in the workplace, often to mitigate risks by enforcing social distancing rules. The port of Antwerp, Belgium, has started to use wristbands to enforce social distancing rules on the work floor, requiring a specific minimum distance between any two workers. The wearables, supplied by the Dutch company Rombit, are equipped with Bluetooth and ultra-wideband technology and give off warning signals when workers come within a specified distance from each other. But enforcing social distancing is not the only functionality of the bracelet: as the wristbands are Bluetooth-enabled, they also allow for contact tracing, with all personal data collected for that purpose centrally stored on Rombit’s servers. As employers’ surveillance of workers has become increasingly widespread, records of worker-to-worker interactions could be abused for many purposes, like union busting. They could also be used for other purposes, like surveilling workers to reduce “unplanned downtime.”
While wearing tracking bracelets at the workplace might not (yet) be mandatory in most places, it is more than questionable whether workers—with their livelihoods at stake—can exercise real choice when their employer tells them to strap it on. Under the GDPR, consent can’t be freely given if there is a clear imbalance between the data subject and the data controller. In other words, consent can’t be a valid legal ground to process the data when the employee has no real choice, feels compelled to consent, or will endure negative consequences if they do not consent. Wearable Device Proximity Tracking EFF is wary of phone-based Bluetooth proximity tracking apps. Now such automated tracking might be migrating from phone apps to wearable devices. Reuters reported that the Singaporean government is switching its centralized contact tracing technology focus away from its existing TraceTogether smartphone app (which uses Bluetooth to detect and log close proximity of other smartphones). Instead, that nation will deploy a new centralized TraceTogether Token standalone wearable device, which the government plans to eventually distribute to all 5.7 million Singapore residents. While the TraceTogether Token uses a broadly similar technology to the TraceTogether app, it will not rely on participants to own or carry a smartphone. Like the app, the new token will trace proximity between users (not location). According to MobiHealth News, only users who test positive for COVID-19 will be told to hand their wearable to the Ministry of Health in order to upload data to a centralized server about who they have been in contact with. EFF objects to such centralized approaches to automated contact tracing, whether by means of a phone app or a wearable device. Further details about how the Singaporean device will work are scarce. Press reports did not initially confirm if the wearable tokens will interoperate with the mobile TraceTogether app.
If they do, which seems likely, the government will continue to collect a great deal of sensitive data about interpersonal associations, and regularly upload that information to a centralized government server. The centralized TraceTogether mobile app collects data that links device IDs to real contact information like phone numbers, which means the government can use it to determine which individuals have come into contact with one another. This makes the TraceTogether app incompatible with decentralized exposure notification systems like Apple and Google’s API, where those who have been exposed to an infected person get only a notification, but their personally identifying data never leaves the infected person’s device. There is no centralized server where people upload the data. EFF opposes the centralization feature of the Singaporean mobile app, and will likewise oppose this same feature if it is part of the new wearable token system. Since the token will be a single-purpose device, users may not have the same amount of control over how it works. App users can always turn off Bluetooth on their phone, but they may not be able to stop a wristband from broadcasting or collecting data. Finally, a weakness of app-based exposure notification systems is that many people do not own a smartphone, especially in developing nations, small cities, or rural areas. Allowing users to decide whether to use a wearable token or a mobile app (or to use neither) might improve participation rates. But these systems remain an unproven technology that might do little to contain the virus, especially in rural areas, and should at most be a supplement to primary public health measures like widespread testing and manual contact tracing. And everyone should have the right not to wear a tracking token, and to take it off whenever they wish.
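To make the contrast with the centralized model concrete, the decentralized approach can be sketched in a few lines. This is a deliberate simplification (real systems such as the Apple/Google API derive rotating identifiers from daily keys and add timing and signal-strength checks); the class and method names here are invented for illustration:

```python
import secrets

def new_rolling_id():
    # Random identifier broadcast over Bluetooth and rotated frequently,
    # so observers can't link broadcasts back to one person over time
    return secrets.token_hex(8)

class Phone:
    def __init__(self):
        self.my_ids = []        # IDs this phone has broadcast
        self.heard_ids = set()  # IDs heard nearby; never leaves the device

    def broadcast(self):
        rid = new_rolling_id()
        self.my_ids.append(rid)
        return rid

    def hear(self, rid):
        self.heard_ids.add(rid)

    def check_exposure(self, published_ids):
        # Matching happens locally: only infected users publish their own
        # broadcast IDs; nobody uploads whom they have been near
        return bool(self.heard_ids & set(published_ids))

alice, bob, carol = Phone(), Phone(), Phone()
# Alice and Bob are near each other; Carol is elsewhere
bob.hear(alice.broadcast())
alice.hear(bob.broadcast())
carol.broadcast()

# Alice tests positive and publishes only her own broadcast IDs
published = alice.my_ids
exposed_bob = bob.check_exposure(published)      # Bob matches locally
exposed_carol = carol.check_exposure(published)  # Carol does not
```

The key design point is that the contact graph never exists on any server: the only published data is the infected user's own random identifiers, and each phone decides for itself whether it has seen them.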
Mandatory Apps and Wearables to Monitor Patients Under Quarantine Orders Some countries have started to make tracking wristbands or apps a mandatory element of their efforts to enforce quarantine orders of persons who are or might be infected with COVID-19. EFF opposes such coercive surveillance based solely on infection. In Bahrain, persons in medical isolation are compelled to download the government-mandated contact tracing app “BeAware,” turn on Bluetooth, keep their Internet on, and set their quarantine location. They are also compelled to wear GPS-enabled bracelets that track their whereabouts and connect to the app. iPhone users are obliged to set the app’s “allow access” setting to “always allow.” If this system shows the bracelet is 15 meters away from the phone, it sends a notification to the government’s monitoring station. In addition, the government can request selfies at any time from the patient, clearly depicting both the isolating person’s face and bracelet in the same image. Attempts to remove or tamper with the electronic bracelet can result in steep fines and imprisonment for not less than three months. Similarly, Kuwait requires individuals returning home from abroad to wear tracking bracelets. Linked with the country’s official contact tracing app, Shlonik, the bracelets notify health officials when individuals subject to isolation orders appear to break quarantine. Kuwait’s app was developed by Zain, a Kuwaiti telecommunications giant. In 2016, Zain worked with Kuwait’s Ministry of Awqaf & Islamic Affairs to deploy wristbands and SIM cards to monitor the locations of 8,000 Kuwaiti Hajj pilgrims during the annual pilgrimage to Mecca. Like in Bahrain, use of the new bracelet is enforced through selfie requests, and violators risk being transferred to a governmental quarantine facility, as well as other legal actions.
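Bahrain has not published how its bracelet-to-phone check works, but a 15-meter rule like the one described above could in principle be approximated from Bluetooth signal strength using a standard log-distance path-loss model. A sketch with illustrative constants (the transmit power and path-loss exponent are assumptions, not values from any deployed system):

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    # Log-distance path-loss model: rough distance from Bluetooth RSSI.
    # tx_power_dbm is the expected RSSI at 1 meter (an assumed calibration).
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def should_alert(rssi_dbm, limit_m=15.0):
    # Flag when the estimated bracelet-to-phone distance exceeds the limit
    return estimate_distance_m(rssi_dbm) > limit_m

near = should_alert(-60)   # strong signal, roughly a meter away
far = should_alert(-90)    # weak signal, well beyond the limit
```

RSSI-based distance estimates are notoriously noisy (walls, bodies, and antenna orientation all shift the reading), which is one more reason enforcement systems built on them risk false alarms against the people forced to wear them.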
As we have previously said, forcing people to download and use an app significantly undermines their ability to control their phone and the data they share, undermining people’s right to informational self-determination. Governments should not force people to hand over control of their phones and data. Also, mandating the use of an app risks introducing significant security vulnerabilities and further harming people’s privacy and data security. Further, a punitive approach to containment can break people’s trust and thereby undermine public health. For example, people may avoid testing if they fear the consequences of a positive test result. Some governments are turning to electronic ankle shackles, including Australia and two states in the United States. These devices are commonly used to monitor individuals considered to be dangerous and/or a flight risk both pre-trial and during parole or probation. They have been repurposed for quarantine enforcement. In Western Australia, under the state’s COVID-19 response act, the police acquired 200 GPS-enabled ankle bracelets. Individuals who fail to comply with quarantine orders can be equipped with one of the bracelets. Failing to comply with orders to wear the shackles, or attempting to tamper with them, can lead to up to 12 months in jail and fines of more than AU$10,000 (approximately US$6,981). Courts in Kentucky and West Virginia have mandated electronic ankle shackles for individuals who refused to submit to quarantine procedures after testing positive for COVID-19. Like in Australia, the shackles use GPS technology to locate individuals. GPS ankle shackles raise a series of concerns. They are a grave intrusion into persons’ privacy and personal freedom. Often, they are uncomfortable, restrict a person’s range of motion, and must be paid for by the person forced to wear them.
This surveillance to enforce quarantine is not justified merely because a person tested positive or is deemed to have an elevated infection risk. Low-Tech Bracelets for Quarantine Enforcement Hong Kong uses yet another category of bracelets to enforce quarantine orders. Individuals undergoing 14-day home quarantine procedures, such as arrivals from overseas, are given bracelets with a unique QR code. Users register their bracelet with Hong Kong’s official COVID-19 tracing app. The app prompts the owner of the phone to walk the perimeter of their apartment, assembling a unique “signature” made up of the various wifi, Bluetooth, and other signals detectable in the home. If they move the phone outside of that “geofenced” perimeter, they trigger a warning sound that can only be stopped by scanning the QR codes of every household member’s wristband. Bracelet-wearers are also expected to scan the codes regularly with a phone. Punishments for not complying can be harsh and may lead to up to six months in jail as well as fines. Some technologically more advanced bracelets have been deployed on a smaller scale in Hong Kong. Similar QR code bracelets are reported to be used in Malaysia. The most-used form of the bracelet seems to be little more than a piece of paper with a QR code. These low-tech wristbands are an interesting case, since the QR code itself is an easily copyable image and does not incorporate any electronics at all. This might seem comparatively benign when viewed against the backdrop of more technologically intrusive alternatives. But even a low-tech, non-electronic bracelet with a unique code can play a significant role in making new kinds of surveillance feel familiar and normalized. Conclusion All of these surveillance technologies, like many other COVID-19 mitigations, are being rolled out rapidly amidst the crisis. While proponents may feel that they are taking an urgently needed step, governments must begin by showing the efficacy of each technology.
They also must address the kinds of digital rights concerns raised by EFF on related topics such as proximity apps and patients’ right to privacy against quarantine enforcement. Intrusive monitoring tools adopted now may further normalize the surveillance of individuals by governments and private entities alike. History shows that governments rarely “waste a good crisis,” and tend to hold on to the new powers they seized to address the emergency situation. These tools can also introduce a variety of serious privacy and security risks to individuals who may be forced to wear COVID-19 surveillance tech. Beyond the immediate risks, it is crucial to also consider the long-term effects of tracking bracelets, including their cultural effects. It should not feel normal to be tracked everywhere or to have to prove your location.

  • Streaming Is Laying Bare How Big ISPs, Big Tech, and Big Media Work Together Against Users
    by Katharine Trendacosta on June 11, 2020 at 10:29 pm

    HBO Max is incredible. Not because it is good, but because of how many problems with the media landscape it epitomizes. If you ever had trouble seeing where monopoly, net neutrality, and technology intertwine, well then thanks, I guess, to AT&T for its achievement in HBO Max. No one knows what it’s supposed to do, but everyone can see what’s wrong with it. For the record, HBO Max is a streaming service from AT&T, which owns Warner Bros. and, of course, HBO. HBO Go, by contrast, is the app for people who subscribe to HBO through a cable or satellite provider. And HBO Now is a digital-only subscription version of HBO. HBO Max is, somehow, not HBO. It’s a new streaming service, like Disney+, offering both the back catalogs of HBO and Warner Bros. and new exclusives. The name, which emphasizes HBO and doesn’t alert people that this is a service where they can watch Friends, has been a marketing problem. But the marketing problem, while hilarious, is not where the biggest concerns lie. The real problem is with AT&T offering HBO Max for free to customers with certain plans, not counting it against data caps for its mobile customers, and launching without support for certain TV devices. Let’s go through what’s happening here piece by torturous piece. First: HBO Max is free if you are a subscriber to certain AT&T plans—high-speed home Internet, unlimited wireless plans, and premier DirecTV plans, to name a few. But Americans pay more for worse Internet than their peers in Europe and South Korea. With high-speed home Internet, most Americans have two or fewer choices. The most meaningful choice an AT&T home Internet subscriber in the U.S. makes is between expensive low-speed service or very expensive “high-speed” service. This lack of choice means that there is no reason for AT&T or any of the other large ISPs to have a better quality product or better customer service. They know we will pay because in 2020, nearly all of us need Internet access at home.
Any Internet service will sell just fine, and it’s more lucrative, in the short term, for ISPs to offer slow, expensive Internet than fast, good Internet. Given these high prices, HBO Max isn’t “free.” AT&T is already making money hand over fist on you, and now it gets to report AT&T premium customers as subscribers to its new streaming service to its investors, inflating growth. Second: AT&T isn’t counting HBO Max against the data caps on its mobile plans. Data caps are artificial: they exist so that there can be more expensive plans, not to manage capacity. Not counting the data used by an app against a data cap is a practice known as “zero-rating.” When an ISP zero-rates its own content and applications, or that of its favored partners, that violates the principle of net neutrality. Net neutrality is the principle that all data online is treated equally by Internet providers, so that they can’t manipulate what you see online by blocking it, slowing it down, or prioritizing the data of privileged apps and services. In the case of AT&T and HBO Max, AT&T has a “sponsored data” program that allows companies to pay it to zero-rate their data. But when HBO Max does that, AT&T is just paying itself through a meaningless accounting convention that costs it nothing (unlike competitors who give it money for equivalent zero-rating treatment). AT&T does this all the time. So if Disney+ or Netflix—or, more importantly, a smaller company trying to compete with the big guys—wants their content to be on a level playing field, they will have to pay a fee that HBO Max does not. This does not mean HBO Max is a better deal on an AT&T phone. You are paying too much for data already and, again, this trick helps drive AT&T’s subscriber numbers while not costing the company anything. It’s manipulative, too. It funnels AT&T customers who want entertainment but have artificially low data caps into AT&T’s own content.
And according to Pew Research Center, those who rely on smartphones for Internet access are more likely to be young, Black, Hispanic, low-income, and rural. Finally: HBO Max was launched without support on certain TV devices. Managing all these streaming services and subscriptions is a pain, and a lot of people do it with devices like Roku or Amazon Fire TV. Sometimes these are separate devices, and sometimes your so-called “smart” TV just came with one built-in. And guess what? If you have one, you weren’t watching HBO Max when it was launched. AT&T hadn’t made deals with those companies, so HBO Max won’t play on those devices. Remember how cable and satellite companies fight with TV networks over fees, sometimes leading to programming blackouts? Well, the same thing is now happening between streaming services like HBO Max and the makers of hardware and software for viewing them. So even if you have a “free” HBO Max subscription, you might not get to watch it on your Roku TV. It wasn’t supposed to be this way. Cable and satellite TV services have almost always required subscribers to rent special hardware—that ugly, power-guzzling set-top box that you pay monthly rent for. In 2016, TV hardware and software makers asked the Federal Communications Commission to “Unlock the Box” by passing rules that would require cable and satellite services to make their channels available through whatever hardware and software the customer chose, using a set of industry standards for connecting those devices. TV studios and networks fought vehemently against that proposal. They argued that new rules were not necessary, because services delivered “over the top” through the Internet, like Netflix, Amazon Prime, and now HBO Max, would automatically run on all the consumer’s devices. The outcome was easy to predict: Unlock the Box rules never came to be, and “over the top” apps like HBO Max don’t run on all devices—only the ones whose makers made deals with AT&T. 
In the cord-cutting era, Roku and Amazon Fire TV have 70% of the market share for these kinds of devices. Users are stuck in the middle of a fight between giants just to watch content they supposedly get for “free” or have already paid for. We need more choices for our ISPs, so they can’t keep charging us more for bad service. We need more choices so they can’t leverage their captive audiences for their new video services. We need net neutrality so these giant companies can’t create fiefdoms where they manipulate how we spend our time online. And we need our technology to be freed from corporate deals so we get what we paid for.

  • EFF Asks Virginia Supreme Court to Rein in Indiscriminate Collection and Storage of License Plate Information
    by Naomi Gilens on June 11, 2020 at 6:23 pm

    Like law enforcement agencies across the country, the police in Fairfax, Virginia, use automated license plate readers (ALPRs) to indiscriminately scan and record every passing car. The ALPRs don’t simply check for speeding, or outstanding tickets—instead, they store detailed information about the time, date, and location of each scan in a database for a year, even when they have absolutely no connection to law enforcement investigations. The wholesale collection and storage of the public’s movements invades individuals’ privacy and free expression rights and violates a Virginia data privacy law. As EFF explained in a friend-of-the-court brief to the Virginia Supreme Court, ALPRs collect an enormous amount of data on innocent drivers. 99.5% of cars scanned are not associated with any crime. But ALPR cameras nonetheless capture images of every license plate that comes into view, up to 3,600 plates per minute. This allows law enforcement agencies to compile enormous databases of license plate scans. And because law enforcement agencies often share the license plate information they collect with other local, state, regional, and federal agencies—and even private companies—law enforcement agencies may be able to access billions of plate scans from all over the country. The data collected can reveal highly sensitive personal information. ALPRs record the precise time, date, and place that scans occurred, and can pinpoint where an individual’s car was at a given time in the past with even more precision than cell phone data or GPS trackers. This information opens the door to a universe of inferences about people’s private lives, including political, professional, religious, medical, and sexual associations. Collecting this kind of information can chill individuals from engaging in constitutionally protected activity. In Muslim communities in the U.S.
that were subject to surveillance, for example, people have been less likely to attend mosques, express religious observance in public, or engage in political activism. And, collecting location information in a centralized database opens the door to abuse. Police can abuse ALPR information to stalk individuals or improperly keep track of vehicles at sensitive locations, such as political rallies or doctors’ offices. Sadly, the fear of abuse is far from theoretical. Among other examples we provided to the court, police have used license plate information to stalk women; to extort the owners of vehicles parked at a gay bar; and to track cars attending gun shows. This isn’t the first time that the Virginia Supreme Court has considered the lawfulness of the ALPR system. Back in 2018, the court recognized that this ALPR collection system constitutes “sweeping randomized surveillance and collection of personal information,” and that the license plate scans allowed police to infer drivers’ past locations, as well as “personal characteristics” about the drivers. The court held that the indiscriminate collection and storage of license plate information would violate the Virginia law that regulates the government’s collection of data on private citizens if the ALPR system provides a means for police to link license plate information to the vehicles’ owners. The court then remanded the case for the lower court to decide that question. The lower court correctly concluded that the ALPR system allows police to link license plates with individuals with just “a few clicks on the screen, all from the driver’s seat of a police cruiser.” Accordingly, the lower court held that the ALPR surveillance system does violate Virginia law. Indeed, not only do ALPR systems allow police to link license plates to specific individuals, but doing so is the core purpose of collecting and storing ALPR data. 
Keeping a database of individuals’ historical movements, after all, allows police to conduct expansive surveillance without needing to know in advance whether they want to follow a particular person, or when. But the Supreme Court has made clear that the Constitution does not permit police to keep track of individuals’ location histories through their cell phone records. They should not be able to do so through license plate data, either. We hope that the Virginia Supreme Court will again recognize the immense impact ALPR tracking has on individual privacy, and make clear once and for all that indiscriminate license plate collection violates state law.

  • Medical Device Repair Again Threatened With Copyright Claims
    by Kit Walsh on June 11, 2020 at 5:10 pm

    Medical providers face countless challenges in responding to the COVID pandemic, and copyright shouldn’t have to be one of them. Hundreds of volunteers came together to create the Medical Device Repair Database posted to the repair information website iFixit, providing medical practitioners and technicians an easy-to-use, annotated, and indexed resource to help them keep devices in good repair. The database includes documentation for mission-critical devices relevant to the COVID pandemic and has been widely praised as a tool for caregivers and those supporting them. Despite this, Steris Corporation contacted iFixit to demand that their products’ documentation be taken down on copyright grounds. As the name suggests, Steris makes sterilization-related devices used to prevent contamination and the spread of disease. Unlike disease, though, the spread of repair information enhances public health and Steris should leave it alone. Fortunately, the law is on iFixit’s side. As we explained in our letter back to Steris, iFixit is protected by the safe harbor of the Digital Millennium Copyright Act when it hosts user-provided content, and the Medical Device Repair Database is making fair use of the repair materials hosted there. Medical care and the maintenance of medical devices are too important to let overreaching copyright claims get in the way. We at EFF are proud to be able to support iFixit and we hope that the device manufacturers will let the repair community continue to do its vital work instead of wasting everyone’s time with unfounded legal threats.

  • IBM, Amazon Agree to Step Back From Face Recognition. Where Is Microsoft?
    by Matthew Guariglia on June 11, 2020 at 1:08 am

    Update: A day after this post was published, Microsoft announced it won’t sell facial recognition technology to police until a national law exists. This is a good step, but Microsoft must permanently end its sale of this dangerous technology to police departments. Activism is working, both on the streets, as people protest to end racist and violent policing, and among civil liberties organizations that have been fighting the government’s use of harmful face surveillance technology. This week two major vendors of face surveillance technology announced that in light of recent protests against police brutality and racial injustice, they would be phasing out or pausing their sale of this technology to police. The fact that these two companies, IBM and Amazon, have admitted the harm that this technology causes should be a red flag to lawmakers. The belief that police and other government use of this technology can be responsibly regulated is wrong. Congress, states, and cities should take this momentary reprieve, during which police will not be able to acquire face surveillance technology from two major companies, as an opportunity to ban government use of the technology once and for all. In a letter to Congress, IBM CEO Arvind Krishna announced that in the name of racial justice the company would end research, development, and sale of any face recognition technology: IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies. This is a big pivot.
In March 2019, IBM was criticized by photographers after it released a new dataset of diverse images, scraped from the social media platform Flickr, in hopes of training face recognition programs to be less flawed when recognizing people of color. Now the company recognizes that better training data is not an effective solution to the many problems of this menacing technology. Amazon in turn announced a 1-year moratorium on police use of its face surveillance technology, Rekognition. This company also cited recent protests as the impetus for re-examining the harm this technology can do to already over-policed communities. Unfortunately, Amazon still clings to the discredited notion that police can safely deploy face surveillance technology if only there are enough rules. “We’ve advocated,” the company posted, “that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” Amazon’s Rekognition program has been particularly flawed and harmful. In 2018, the ACLU ran faces of sitting U.S. members of Congress through the program. Twenty-eight members of Congress were incorrectly identified as people who had been arrested for committing crimes. That same year EFF joined with the ACLU and a coalition of civil liberties organizations to demand that Amazon stop powering government surveillance infrastructure with its flawed and invasive Rekognition program. While we welcome Amazon’s half-step, we urge the company to finish the job. Like IBM, Amazon must permanently end its sale of this dangerous technology to police departments. Microsoft is another large vendor of police-used surveillance tech. It must now follow suit and end government use of its facial recognition program.
Microsoft has expressed concerns over the harms that police use of face recognition can cause. In 2019, Microsoft stated that it had denied one California law enforcement agency use of its face recognition technology on body-worn cameras and car cameras, due to human rights concerns. The logical next step is clear: Microsoft should end the program once and for all.  There should be a nationwide ban on government use of face surveillance. Even if the technology were highly regulated, its use by the government would continue to exacerbate a policing crisis in this nation that disproportionately harms Black Americans, immigrants, the unhoused, and other vulnerable populations. We agree that the government should act, and are glad that Amazon is giving it a year to do so, but the outcome must be an end to government use of this technology.  The movement to ban face recognition is gaining momentum. The historic demonstrations of the past two weeks show that the public will not sit idly by while tech companies enable and profit off of a system of surveillance and policing that hurts so many.  Face recognition isn’t the only problematic tool tech companies offer to police. Though Amazon has pressed pause on offering Rekognition to police, Amazon-owned Ring, the “smart doorbell” and camera company, still partners with over 1300 police departments. These partnerships allow police to make batch requests for footage via email to every resident with a camera within an area of interest to police—potentially giving police a one-step process for requesting footage of protests to identify protesters. These partnerships intensify suspicion, help police racially profile people, and enable and perpetuate police harassment of Black Americans. Join us in calling on Amazon to continue its thoughtful action in light of nationwide activism, admit the dangers of Ring-police partnerships, and immediately halt them. 
TAKE ACTION TELL AMAZON RING: END POLICE PARTNERSHIPS

Correction: A previous version of this post asserted that Microsoft was one of the largest vendors of face surveillance to police departments.

  • Will Zoom Bring Encryption to the People Who Need It Most?
    by Max Hunter on June 10, 2020 at 9:23 pm

    This morning, EFF and Mozilla called on Zoom to make its upcoming end-to-end encryption feature available to all 300 million of its users. We published an open letter urging Zoom CEO Eric Yuan not to exclude Zoom’s free customers from the upcoming end-to-end encryption feature.  We applaud Zoom for building strong end-to-end encryption into its service. But by limiting this security enhancement to paid accounts, Zoom is denying privacy protections to the participants who may need them most.  Zoom CEO Eric Yuan defended the decision to withhold strong encryption, saying, “Free users — for sure we don’t want to give [them] that, because we also want to work together with the FBI, with local law enforcement.” But many activists rely on Zoom as an organizing tool, including the Black-led movement against police violence. Given the long history of law enforcement targeting Black organizers for unconstitutional surveillance, this access raises serious concerns.  For decades, the DOJ and FBI have argued that their inability to access encrypted communications poses a serious threat to national security. But the idea that compromising on encryption will give special access only to U.S. officials is a fallacy. Any mechanism that law enforcement uses to access Zoom users’ data will be vulnerable to exploitation by oppressive regimes and other bad actors. We recognize that premium features are a key part of Zoom’s business model, but we strongly encourage the company not to compromise the privacy and security of its users. The ability to communicate privately is an essential feature of a free society. As more of our communication shifts to video calls, that feature shouldn’t be reserved for those who can afford it.

June 8, 2020
Eric Yuan
Zoom Video Communications, Inc.
55 Almaden Boulevard, 6th Floor
San Jose, CA 95113

Dear Mr. Yuan,

While we were pleased to see Zoom’s plans for end-to-end encryption, we are extremely surprised and concerned by the news that Zoom plans to offer this protection only to paying customers. We understand that Zoom is rightfully concerned about curbing child sexual abuse material (CSAM), but restricting end-to-end encryption to paid accounts is not the right solution. As your own growth numbers demonstrate, Zoom is one of the most popular video-call platforms available. Recently, Mozilla conducted a U.S.-based survey that reiterated Zoom’s popularity among consumers. In this context, Zoom’s decisions about access to privacy and security features have enormous impact. We strongly urge you to reconsider this decision given the following considerations:

Tools like Zoom can be critical to help protesters organize and communicate their message widely. Activists should be able to plan and conduct protest-related activities without fear that these meetings, and the information they include, may be subject to interception. Unfortunately, recent actions from law enforcement – and a long history of discriminatory policing – have legitimized such fears, making end-to-end encryption all the more critical.

Best-in-class security should not be something that only the wealthy or businesses can afford. Zoom’s plan not to provide end-to-end encryption to free users will leave exactly those populations that would benefit most from these technologies unprotected. Around the world, end-to-end encryption is already an important tool for journalists and activists who are living under repressive regimes and fighting censorship. We recognize that Zoom’s business model includes offering premium features for paid accounts, but end-to-end encryption is simply too important to be one of those premium features.

While we recognize that Zoom is concerned about the potential misuse of its platform, offering end-to-end encryption to paid accounts only is not the solution. 
Such an approach ultimately punishes a larger number of users – from families using the tool to communicate, to activists using the tool to organize – who would benefit from the security of an end-to-end encrypted system. Your rationale for this decision could undermine encryption more broadly at a time when the U.S. Attorney General has publicly battled with companies that refuse to weaken their products’ encryption in order to provide the government a “back door,” and while Congress is considering legislation, the EARN IT Act, that jeopardizes the future of encryption. In Mozilla’s letter to you in April, we highlighted our conviction that all users should have access to the strongest privacy and security features available. The value of privacy and security is even more critical today, especially for political organizers and protesters who may be the target of government surveillance. Thank you for your openness to our previous recommendations – we especially appreciate that you have already made important changes, such as prioritizing user consent to be unmuted. Our hope is that you consider this feedback and immediately adjust course.

Sincerely,

Ashley Boyd
Vice President, Advocacy and Engagement
Mozilla Foundation

Gennie Gebhart
Associate Director of Research
Electronic Frontier Foundation

  • Amazon Ring Must End Its Dangerous Partnerships With Police
    by Jason Kelley on June 10, 2020 at 9:12 pm

    Across the United States, people are taking to the street to protest racist police violence, including the tragic police killings of George Floyd and Breonna Taylor. This is a historic moment of reckoning for law enforcement. Technology companies, too, must rethink how the tools they design and sell to police departments minimize accountability and exacerbate injustice. Even worse, some companies profit directly from exploiting irrational fears of crime that all too often feed the flames of police brutality. So we’re calling on Amazon Ring, one of the worst offenders, to immediately end the partnerships it holds with over 1300 law enforcement agencies. SIGN PETITION TELL AMAZON RING: END POLICE PARTNERSHIPS  One by one, companies that profit off fears of crime have released statements voicing solidarity with the communities that are disproportionately impacted by police violence. Amazon, which owns Ring, announced that they “stand in solidarity with the Black community—[their] employees, customers, and partners—in the fight against systemic racism and injustice.” And yet, Amazon and other companies offer a high-speed digital mechanism by which people can make snap judgments about who does, and who does not, belong in their neighborhood, and summon police to confront them. This mechanism also facilitates police access to video and audio footage from massive numbers of doorbell cameras aimed at the public way across the country—a feature that could conceivably be used to identify participants in a protest passing through a neighborhood. Amazon built this surveillance infrastructure through tight-knit partnerships with police departments, including officers hawking Ring’s cameras to residents, and Ring telling officers how to better pressure residents to share their videos. 
Despite Amazon’s statement that “the inequitable and brutal treatment of Black people in our country must stop,” Ring plays an active role in enabling and perpetuating police harassment of Black Americans. Ring’s surveillance doorbells and its accompanying Neighbors app have inflamed many residents’ worst instincts and urged them to spy on pedestrians, neighbors, and workers. We must tell Amazon Ring to end its police partnerships today.

Ring Threatens Privacy and Communities

We’ve written extensively about why Ring is a “Perfect Storm of Privacy Threats,” and we’ve laid out five specific problems with Ring-police partnerships. We also revealed a number of previously undisclosed trackers sending information from the Ring app to third parties, and critiqued the lackluster changes made in response to security flaws.  To start, Ring sends notifications to a person’s phone every time the doorbell rings or motion near the door is detected. With every notification, Ring turns the pizza delivery person or census-taker innocently standing at the door into a potential criminal. And with the click of a button, Ring allows a user to post video taken from that camera directly to their community, facilitating the reporting of so-called “suspicious” behavior. This encourages racial profiling—take, for example, an African-American real estate agent who was stopped by police because neighbors thought it was “suspicious” for him to ring a doorbell.

Ring Could Be Used to Identify Protesters

To make matters worse, Ring’s continued growth of police partnerships during the current protests makes an arrangement already at risk of enabling racial profiling even more troubling and dangerous. Ring now has relationships with over 1300 police departments around the United States. 
These partnerships allow police to have a general idea of the location of every Ring camera in town, and to make batch requests for footage via email to every resident with a camera within an area of interest to police—potentially giving police a one-step process for requesting footage of protests to identify protesters. In some towns, the local government has even offered tiered discounts on the cameras based on how much of the public area on a street the Ring will regularly capture. The more of the public space it captures, the larger the discount.  If a Ring camera captures demonstrations, the owner is at risk of making protesters identifiable to police and vulnerable to retribution. Even if the camera owner refuses to voluntarily share footage of a protest with police, law enforcement can go straight to Amazon with a warrant and thereby circumvent the camera’s owner.

Ring Undermines Public Trust in Police

The rapid proliferation of these partnerships between police departments and the Ring surveillance system—without oversight, transparency, or restrictions—poses a grave threat to the privacy and safety of all people in the community. “Fear sells,” Ring posted on its company blog in 2016. Fear also gets people hurt, by inflaming tensions and creating suspicion where none rationally exists.  Consider that Amazon also encourages police to tell residents to install the Ring app and purchase cameras for their homes, in an arrangement that makes salespeople out of what should be impartial and trusted protectors of our civic society. Per Motherboard, for every town resident that downloads Ring’s Neighbors app, the local police department gets credits toward buying cameras it can distribute to residents. This troubling relationship is worse than uncouth: it’s unsafe and diminishes public trust. Some of the “features” Ring has been considering adding would considerably increase the danger it poses. 
Integrated face recognition software would enable the worst type of privacy invasion of individuals, and potentially force every person approaching a Ring doorbell to have their face scanned and cross-checked against a database of other faces without their consent. License plate scanning could match people’s faces to their cars. Alerting users to local 911 calls as part of the “crime news” alerts on its app, Neighbors, would instill even more fear, and probably sell additional Ring services.  Just today Amazon announced a one-year moratorium on police use of its dangerous “Rekognition” facial recognition tool. This follows an announcement from IBM that it will no longer develop or research face recognition technology, in part because of its use in mass surveillance, policing, and racial profiling. We’re glad Amazon has admitted that the unregulated use of face recognition can do harm to vulnerable communities. Now it’s time for it to admit the dangers of Ring-police partnerships, and stand behind its statement on police brutality.  SIGN PETITION TELL AMAZON RING: END POLICE PARTNERSHIPS

  • The Winds Are Shifting on NYPD Transparency
    by Nathan Sheard on June 10, 2020 at 4:24 pm

    There is a range of problems with the NYPD, and a lack of transparency is one of them. In the last week, we have seen too many examples of what happens without it. NYPD officers have been documented attacking protesters with pepper spray, batons, and an SUV. NYPD detectives have also been working with federal agents to question protest participants about their political beliefs. For three years, New York’s privacy community has been calling on the City to adopt the POST Act, an ordinance that would provide transparency around the NYPD’s use of privacy-invasive surveillance technology. New Yorkers were reminded of the urgent need for this transparency when the NYPD’s Deputy Chief of Counterterrorism and Intelligence suggested that the department’s inability to anticipate large convergences was a failure to effectively “monitor gangs’ social media.” Even in the opaque world of NYPD surveillance, what is known about the NYPD’s “gang” designation troubles researchers and civil liberties advocates. This concern is only exacerbated by the department’s history of surveilling activist groups and, in recent years, Black Lives Matter activists in particular. Now is the time for bold new thinking about how to end the overreliance of police agencies in the United States on highly invasive spy tech. This is a moment of big ideas. This weekend, for example, Minneapolis City Council members called to disband their local police force, and a group of New York City mayoral staffers called for action and policy reform. For years, New York’s Mayor, Bill de Blasio, has joined the NYPD in opposing the POST Act. However, in recent days, Mayor de Blasio has nodded toward improving NYPD transparency. In his June 7 statement, de Blasio said he would support the State legislature’s efforts to repeal section 50-a, a statute used by police departments to shield disciplinary records. 
In the words of Angel Díaz, Liberty and National Security Counsel at the Brennan Center for Justice, “The POST Act’s transparency and accountability requirements are essential to prevent an era of digital stop-and-frisk.” In May, the Brennan Center, together with EFF and a coalition of over 40 civil society groups, called on City Council Speaker Corey Johnson to schedule a vote on the POST Act. Late last week, the Surveillance Technology Oversight Project (an Electronic Frontier Alliance member) announced that the Speaker would be doing just that. EFF commends Speaker Johnson, Public Advocate Jumaane Williams, and each of the POST Act’s 32 sponsors for taking this important step toward surveillance transparency. We hope that Mayor de Blasio’s recent support for transparency of NYPD disciplinary records will be followed by a reassessment of his position on the POST Act.

  • Digital Security Advice for Journalists Covering the Protests Against Police Violence
    by Naomi Gilens on June 9, 2020 at 9:27 pm

    This guide is an overview of digital security considerations specific to journalists covering protests. For EFF’s comprehensive guide to digital security, including advice for activists and protesters, visit EFF’s Surveillance Self-Defense site. Legal advice in this post is specific to the United States. As the international protests against police killings enter their third week, the public has been exposed to shocking videos of law enforcement wielding violence against not only demonstrators, but also the journalists who are tasked with documenting this historic moment. EFF recently issued Surveillance Self-Defense tips for protesters who may find their digital rights under attack, whether through mass surveillance of crowds or through the seizure of their devices. However, these tips don’t always reflect the reality of how journalists may need to do their jobs and the unique threats journalists face. In this blog post, we attempt to address the digital security of news gatherers, after speaking with reporters, photographers, and live streamers who are on the ground, risking everything to document these protests.

The Journalists’ Threat Model

When we talk about security planning or “threat modeling,” we mean assessing risk through a series of questions. What do you have that you need to protect? From what or whom do you need to protect it? What is the likelihood you will need to protect it? What are the consequences if you fail to protect it, and what are the trade-offs you’re willing to make in order to protect it? With the threat model of a protester, we often pay special attention to the need to protect the anonymity and location of those who could face retaliation for exercising their rights to march and demonstrate, or who may have their rights violated as police investigate the actions of others. This means that we often recommend protesters leave their devices at home, use a temporary device, or keep their devices in airplane mode. 
A journalist, however, is generally more open about where they are and when, whether through the credits on the photographs they publish or the bylines on the articles they write. And because many need to get their stories out rapidly or even in real time, going device-free or keeping devices in airplane mode may not be an acceptable option. The journalist’s protest threat model is complex. First, they have to worry about the police. Law enforcement could seize their devices, which in turn could expose their sources and research in addition to their personal information, and could separate them from their work product for months. Journalists could also find that police may follow their digital footprint to investigate sources (as the Feds did when Sean Penn interviewed Joaquín “El Chapo” Guzmán, despite Penn taking some security precautions). Journalists have also told us about experiences with thieves who use protests as cover, such as having laptops and other equipment stolen from their cars while they are in the field. Finally, journalists (especially photographers) must also remain aware that they may be confronted by protesters themselves, who may be trying to protect their images as part of their own threat models. (Again, it’s important to consider the likelihood of the threat: the Freedom of the Press Foundation’s research has found that the overwhelming majority of attacks on journalists have been by police, not protesters.) Each of these threats may require taking different steps to secure your data. And the steps needed will also depend on the form of your news gathering and the tools that you rely on in the field, whether that is taking photographs, conducting interviews, or live streaming video. To protect yourself and your data, you will need to think carefully about the kind of journalism you’re doing (say, advocacy journalism vs. traditional daily news reporting), the situation you’re joining, and the particular risks that you may face. 
There is only one piece of advice that we believe holds for all journalists: think ahead and be deliberate. Consider the threats and make a decision about the risks and trade-offs. In doing so, here are some steps to consider.

Minimize your digital content at risk. Be prepared for the possibility that if you are arrested, police may confiscate your devices, and may keep them long after you are released to try to break into them. Minimize the amount of sensitive personal and professional information you carry in order to minimize the risk of exposure. If practicable, you may consider leaving personal devices at home and instead carrying a work or burner phone with minimal personal information on it. If that is not possible, consider minimizing the amount of sensitive information available by logging out of email, social media apps, and other apps containing data that you would not want the police or others to access.

Encrypt devices with a long passcode where possible. Police may try to break into your phone, and a long passcode is significantly more difficult to crack. Keep them and others out of your devices by protecting them with strong passcodes of 8-12 random characters that are easy for you to remember. Deactivate touch unlock and face ID on your devices. In the U.S., the law currently provides stronger protection against police forcing you to enter a passcode than forcing you to biometrically unlock your device. Using a long passcode may be less convenient, but iOS and Android both allow you to take photos and video without unlocking your phone. See the “Take photos and videos without unlocking your device” section of our Attending a Protest SSD guide.

Use end-to-end encrypted messaging. By using an end-to-end encrypted messenger, such as Signal (available for iOS and Android), you make it far more difficult for law enforcement to obtain and read your communications, be it between you and your sources, your editorial team, or your personal contacts. 
You will want to make sure everyone on your newsgathering team has the same app installed and has each other’s contact information in advance of the protest. You may also find it useful to have several different encrypted messaging systems installed, since protesters and other sources may be using other apps. Many of these apps, including Signal, provide an option to have messages disappear anywhere from ten seconds to a week after they are first read, which will protect your communications if the police or others breach your phone.

Press passes. Many journalists won’t enter a volatile situation like a protest without having visible credentials provided by their news organization, a journalism association, or the local government. A press pass certainly can be useful in establishing your identity as a journalist. However, it’s important to recognize a large trade-off: to obtain a government- or police-issued press pass, you may need to provide personal information or submit to a background check.

Hide your notifications. Consider turning off notifications or, at minimum, restricting messaging apps from displaying the content of messages and message sender information. If your phone is seized or lost, you won’t have to worry about someone easily reading your private communications.

Back up your data before the protest. If your device is lost, stolen, or confiscated by police, you will be glad to have a backup of your information stored in a safe place.

Back up your data during the protest. If you are taking photos, video, or notes on a phone or other digital device during the protest, consider trying to back up your work in real time. This could include emailing important photos to yourself or setting up automated cloud storage while in the field. If your phone or camera is lost, stolen, or seized, you won’t lose your own coverage of what took place. 
But prepare for the possibility that during the protest, the cell phone network may be oversaturated, unavailable, or slow.

Assume digital cameras are not encrypted. Few digital cameras provide the ability to encrypt. It is safe to assume that photos and videos taken with a digital camera will be stored unencrypted, unless explicitly stated otherwise.

Use multiple memory cards. Some cameras can store your photos simultaneously on two cards. Taking advantage of this capability may be obvious to many photographers, who know that a memory card can fail just when you need it most. However, there is also a digital security element: by cycling through memory cards regularly, you ensure that you will not lose all of your photographs if your camera itself is seized or damaged.

If you are detained by police, refuse to consent to a search of your devices. If police ask to see your phone, you have the right to refuse your consent and to refuse to give them your passcode. Note, however, that police may still seize your phone to try to break into and search it later.

For more resources on this issue, visit EFF’s Surveillance Self-Defense Guide, the Committee to Protect Journalists, and the Freedom of the Press Foundation. Are you a journalist covering the protests? We would appreciate feedback, and would like to hear about any other practices you’ve developed for protecting your digital privacy. Additionally, if you are in need of legal assistance in relation to your work reporting on the protests, we may be able to find you some help. Email us at [email protected]
