The Danger of AI

The Danger of AI examines an alternative means of accurately polling political opinion through an artificial intelligence program named Polly.
After a century of traditional polling to predict public opinion, a shakeup is afoot in the prediction game. Margin of Error: AI, Polling and Elections examines how a startup called Advanced Symbolics (ASI) uses artificial intelligence (AI) and public social-media data to forecast voter behaviour.
But the promise of new technology also raises questions about its accuracy and the threat it poses to citizens’ privacy and to democracy itself. Every one of us volunteers a huge amount of private data to virtually every Internet service we use, without reading or understanding the terms of service. This data can now be harvested by AI to accurately predict, among many other things, how we will vote.
Even without users surrendering personal information, Polly, the new AI algorithm developed by ASI, combs social media to build profiles of different demographics and determine their preferences. This method has already led to Polly’s success in predicting both the 2016 Trump victory and Brexit.
With the 2019 Canadian federal election campaign as a real-time backdrop, Margin of Error puts Polly to the test, revealing how an AI can give not just a detailed picture of the public’s voting intentions, but also a view of how specific events can alter them.
But will knowing our hopes and concerns give politicians the intel they need to respond to our needs and lead to a “utopian” society, as ASI’s CEO Erin Kelly claims? Or can this data be misused to mislead us, whether by our own governments or by those of our adversaries? And should politicians even be responding to our desires as expressed through social media?
AI systems can pose a variety of risks if used improperly or for malicious purposes. These risks include job loss and potential mass unemployment from automation, privacy violations and misuse of personal data, deepfakes, algorithmic bias caused by bad data, socioeconomic inequality, autonomous weapons programmed to kill, social manipulation and social grading, misalignment between human goals and AI goals, safety and security concerns, and unforeseen knock-on effects.
It is important to understand these risks and take steps to mitigate them to ensure that AI is used responsibly.