With many nations expected to hold elections during the next two years, the use of misinformation and disinformation, powered by artificial intelligence, will be the most severe global risk, according to the World Economic Forum's (WEF) latest Global Risks Report.
During this period, misinformation and disinformation will emerge as the leading global risk, followed by extreme weather events and societal polarization.
Cyber insecurity and interstate armed conflict round out the report's top five global risks.
Misinformation and disinformation ranks as the top risk in India, the sixth-highest risk in the US, and the eighth in the European Union.
WEF notes that the disruptive capabilities of manipulated information are rapidly accelerating, fueled by open access to increasingly sophisticated technologies and deteriorating trust in information and institutions.
Over the next couple of years, a wide set of actors will capitalize on the explosion of synthetic content, amplifying societal divisions, ideological violence, and political repression, WEF said.
With almost three billion citizens heading to the polls, including in India, Indonesia, the US, and the UK, the widespread use of misinformation and disinformation, as well as the tools to disseminate it, could undermine the legitimacy of incoming governments.
New classes of crime will also proliferate, such as non-consensual deepfake pornography and stock market manipulation, WEF added.
To combat the risks of AI-generated information, some countries have already begun deploying new and evolving regulations that target both hosts and creators of online information and illegal content.
Nascent regulation of generative AI is also likely to complement such efforts, it added, pointing to requirements in China to watermark AI-generated content as an example.
Such rules might help identify false information, including unintentional misinformation through AI-hallucinated content.
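Watermarking schemes for AI-generated content vary widely, and the report does not describe any particular mechanism. As a toy illustration only (not any regulator's actual scheme), a generator could append an invisible marker, such as zero-width Unicode characters, to its text output so downstream tools can flag it as machine-generated:

```python
# Illustrative sketch: tagging generated text with an invisible marker.
# The marker sequence here is arbitrary and hypothetical; production
# AI-watermarking schemes (e.g. statistical token biasing) are far more robust.

ZW_TAG = "\u200b\u200c\u200b"  # zero-width space / non-joiner / space

def tag_as_generated(text: str) -> str:
    """Append the invisible marker to model-generated text."""
    return text + ZW_TAG

def looks_generated(text: str) -> bool:
    """Check whether the invisible marker is present."""
    return text.endswith(ZW_TAG)

sample = tag_as_generated("This statement was produced by a model.")
print(looks_generated(sample))                    # True
print(looks_generated("Plain human-written text"))  # False
```

A marker like this is trivially strippable, which is one reason reliable detection of AI-generated content remains difficult in practice.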
The report notes that recent technological advances have enhanced the volume, reach, and efficacy of falsified information, with flows that are more difficult to track, attribute, and control.
Disinformation will be increasingly personalized to its recipients and targeted to specific groups, such as minority communities, and disseminated through more opaque messaging platforms, such as WhatsApp or WeChat.
WEF also notes that it is increasingly difficult to discern between AI-generated and human-generated content, even for detection mechanisms and tech-savvy individuals.
One such effort is underway in Singapore: led by the country's Ministry of Communications and Information, the initiative is slated to run through 2028.
Scheduled for launch during the first half of 2024, the facility will focus on building and customizing tools to detect harmful content, such as deepfakes and non-factual claims.
The center will seek to identify societal vulnerabilities and develop possible interventions, such as flagging or correcting misinformation, that could reduce online users' susceptibility to content deemed harmful.
The facility will also test digital trust technologies, such as watermarking and content authentication.
Global populations should hope these development efforts yield effective detection tools, because misinformation left unaddressed may lead to two very different circumstances.
Published on www.zdnet.com: Thu, 11 Jan 2024 18:13:03 +0000