High-tech surveillance amplifies police bias and overreach
by Andrew Guthrie Ferguson, Professor of Law, American University on June 12, 2020 at 12:15 pm
Police forces across the country now have access to surveillance technologies that were recently available only to national intelligence services. The digitization of bias and abuse of power followed.
Are thermal cameras a magic bullet for COVID-19 fever detection? There’s not enough evidence to know
by Scott Adams, Postdoctoral Biomedical Engineering Researcher, Deakin University on May 27, 2020 at 6:41 am
Temperature-scanning systems are not always accurate at detecting fever, and raise a host of privacy concerns.
Police and governments may increasingly adopt surveillance technologies in response to coronavirus fears
by Joe Masoodi, PhD student, Surveillance Studies, Queen’s University, Ontario on March 23, 2020 at 9:30 pm
Police forces have recently come under criticism for their use of facial recognition technologies. But pandemic response plans may increasingly incorporate surveillance.
Australian police are using the Clearview AI facial recognition system with no accountability
by Jake Goldenfein, Lecturer, Swinburne University of Technology on March 4, 2020 at 1:20 am
There are few guarantees that the facial recognition system is secure or even that it is accurate.
Airlines take no chances with our safety. And neither should artificial intelligence
by Monique Mann, Senior lecturer, Deakin University on March 1, 2020 at 7:03 pm
You’d think flying in a plane would be more dangerous than driving a car. In reality, it’s much safer, partly because the aviation industry is heavily regulated. Airlines must stick to strict standards…
How cameras in public spaces might change how we think
by Janina Steinmetz, Senior Lecturer in Marketing, Cass Business School, City, University of London on February 27, 2020 at 1:48 pm
How you feel about eating chips and wearing your pyjamas out – experiments show how differently you react when you’re being observed.
Facial recognition is spreading faster than you realise
by Garfield Benjamin, Postdoctoral Researcher, School of Media Arts and Technology, Solent University on February 26, 2020 at 2:55 pm
The more we use facial recognition, the more we see its limits and its risks.
Facial recognition: research reveals new abilities of ‘super-recognisers’
by David James Robertson, Lecturer, University of Strathclyde on January 10, 2020 at 4:37 pm
"Super-recognisers" who can identify faces across a range of ethnicities could help increase fraud detection rates at passport control and reduce false convictions in cases that rely on CCTV evidence.
AI can now read emotions – should it?
by Christoffer Heckman, Assistant Professor of Computer Science, University of Colorado Boulder on January 8, 2020 at 12:18 pm
A report calls for banning the use of emotion recognition technology. An AI and computer vision researcher explains the potential and why there’s growing concern.
Why the government’s proposed facial recognition database is causing such alarm
by Sarah Moulds, Lecturer of Law, University of South Australia on October 24, 2019 at 10:24 pm
Human rights groups say the bill is an attempt to introduce mass surveillance to Australia and an egregious breach of individual privacy.
Facial recognition: ten reasons you should be worried about the technology
by Birgit Schippers, Visiting Research Fellow, Senator George J Mitchell Institute for Global Peace, Security and Justice, Queen’s University Belfast on August 21, 2019 at 1:47 pm
Surveillance software that identifies people from CCTV is eroding human rights and democracy.
Stolen fingerprints could spell the end of biometric security – here’s how to save it
by Chaminda Hewage, Reader in Data Security, Cardiff Metropolitan University on August 20, 2019 at 12:06 pm
You can’t change your fingerprint if it’s stolen like you’d change your password.
Bring on the technology bans!
by Kentaro Toyama, W. K. Kellogg Professor of Community Information, University of Michigan on August 19, 2019 at 11:11 am
Legal bans and moratoriums on emerging technologies need not be permanent or absolute, but the more powerful a technology is, the more care it requires to operate safely.
Detecting deepfakes by looking closely reveals a way to protect against them
by Siwei Lyu, Professor of Computer Science; Director, Computer Vision and Machine Learning Lab, University at Albany, State University of New York on June 25, 2019 at 4:07 pm
Research has found ways to detect deepfakes through flaws that can’t be fixed easily by the fakers.
Setting precedents for privacy: the UK legal challenges bringing surveillance into the open
by Garfield Benjamin, Postdoctoral Researcher, School of Media Arts and Technology, Solent University on June 12, 2019 at 11:18 am
Campaigners in the UK are pushing to protect privacy and make the security services more accountable.
Tech companies collect our data every day, but even the biggest datasets can’t solve social issues
by Doug Specht, Senior Lecturer in Media and Communications, University of Westminster on June 6, 2019 at 1:13 pm
Social biases in digital tech create racist face recognition software and sexist hiring tools, but more data collection isn’t the answer.
As governments adopt artificial intelligence, there’s little oversight and lots of danger
by James Hendler, Tetherless World Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute on April 18, 2019 at 10:43 am
AI can help make government more efficient – but at what cost? Citizens’ lives could be better or worse, based on how the technology is used.
Humans and machines can improve accuracy when they work together
by Davide Valeriani, Postdoctoral Research Fellow in Multimodal Neuroimaging and Machine Learning, Harvard University on March 11, 2019 at 11:11 am
People – individually and in groups – were not as good at facial recognition as an algorithm. But five people plus the algorithm, working together, were even better.
Artificial intelligence must know when to ask for human help
by Sarah Scheffler, Ph.D. Student in Computer Science, Boston University on March 7, 2019 at 11:38 am
When algorithms are at work, there should be a human safety net to prevent harming people. Artificial intelligence systems can be taught to ask for help.
Fingerprint and face scanners aren’t as secure as we think they are
by Wencheng Yang, Postdoctoral Researcher, Security Research Institute, Edith Cowan University on March 6, 2019 at 4:00 am
Current techniques to protect biometric details, such as face recognition or fingerprints, from hacking have been effective, but advances in AI are rendering these protections obsolete.
Super-recognisers accurately pick out a face in a crowd – but can this skill be taught?
by Alice Towler, Post-doctoral Research Fellow, UNSW on February 20, 2019 at 6:46 pm
Even the world’s best available training – used to train police, border control agents and other security personnel – does not compensate for natural talent in face recognition.
Face recognition technology in classrooms is here – and that’s ok
by Brian Lovell, Research Director of the Security and Surveillance Research group; Professor, The University of Queensland on February 14, 2019 at 12:44 am
New technologies like facial recognition are coming – whether we like it or not. We can’t turn back the tide, but we can manage new technology to do the least harm and most good.
Police use of facial recognition technology must be governed by stronger legislation
by Joe Purshouse, Lecturer in Criminal Law, University of East Anglia on February 8, 2019 at 12:02 pm
New research on facial recognition technology trials by the police calls for tighter regulation to protect human rights.
Why the #10yearchallenge is more than a simple social media meme
by Amanda du Preez, Professor in Visual Culture Studies, University of Pretoria on February 6, 2019 at 2:26 pm
For those who still consider memes like the #10yearchallenge harmless and innocent information sharing, perhaps it’s time to reconsider.
Australians accept government surveillance, for now
by Anna Bunn, Senior Lecturer, Curtin Law School, Curtin University on February 4, 2019 at 7:20 pm
The government can access your phone metadata, driver’s licence photo and much more. And new research shows Australians are OK with it. But that might change.