In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.
In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial purposes.
Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint.
Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information.
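To illustrate how low the technical bar for this has become, here is a minimal sketch, not drawn from Schneier's piece, of batch-summarizing conversation transcripts with an off-the-shelf open-source model; the model name and sample transcripts are illustrative placeholders.

```python
# Illustrative sketch only: batch-summarizing transcripts with a stock
# Hugging Face summarization pipeline. The model choice and the sample
# transcripts are placeholders, not anything cited in the editorial.
from transformers import pipeline

# facebook/bart-large-cnn is a commonly used, publicly available summarizer.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcripts = [
    "Alice: Are we still meeting the source at the cafe on 5th? "
    "Bob: Yes, noon tomorrow. Bring the documents on a USB drive, not email.",
    # ...thousands more transcripts could be fed through the same loop
]

for text in transcripts:
    result = summarizer(text, max_length=40, min_length=5, do_sample=False)
    print(result[0]["summary_text"])
```

The point of the sketch is not the specific model but the workflow: once conversations exist as text, condensing and indexing them at scale is a short script rather than a roomful of analysts.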
This capability, he argues, will not only make spying more accessible but also more comprehensive.
We've recently seen a push from companies like Google and Microsoft to feed what users create through AI models for assistance and analysis.
Microsoft is also building AI copilots into Windows, which require remote cloud processing to work.
That means private user data goes to a remote server where it is analyzed outside of user control.
Despite assurances of privacy from these companies, it's not hard to imagine a future where AI agents probing our sensitive files in the name of assistance start phoning home to help customize the advertising experience.
Eventually, government and law enforcement pressure in some regions could compromise user privacy on a massive scale.
Journalists and human rights workers could become initial targets of this new form of automated surveillance.
Along the way, AI tools can be replicated at scale and are improving continuously, so today's deficiencies in the technology may soon be overcome.
What's especially pernicious about AI-powered spying is that deep-learning systems introduce the ability to analyze the intent and context of interactions through techniques like sentiment analysis.
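As a rough illustration of the kind of sentiment analysis Schneier refers to, the sketch below runs a stock classifier over a couple of invented placeholder messages; it is not his example and assumes nothing beyond a generic off-the-shelf model.

```python
# Illustrative sketch: scoring the sentiment of messages with a stock
# Hugging Face sentiment-analysis pipeline. The messages are invented
# placeholders used only to show the shape of the output.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

messages = [
    "The new policy is great, I fully support the committee's decision.",
    "I'm worried about speaking up at the meeting; last time it backfired.",
]

for msg, result in zip(messages, classifier(messages)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
```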
This signifies a shift from traditional digital surveillance, which observes actions, to interpreting thoughts and discussions, potentially impacting everything from personal privacy to corporate and governmental strategies for information gathering and social control.
In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy.
President Biden's Blueprint for an AI Bill of Rights mentions AI-powered surveillance as a concern.
The European Union's draft AI Act may also address this issue obliquely, though apparently not directly.