Over the course of September, analysts at the IWF examined one dark web CSAM forum, which it does not name, that generally focuses on "softcore imagery" and imagery of girls. Within a newer AI section of the forum, a total of 20,254 AI-generated photos were posted last month, researchers found. A team of 12 analysts at the organization spent 87.5 hours assessing 11,108 of these images.

In total, the IWF judged 2,978 images to be criminal. Most of these (2,562) were realistic enough to be treated the same way as non-AI CSAM. Half of the images were classed as Category C, meaning they are indecent, with 564 showing the most severe types of abuse. The images likely depicted children aged between 7 and 13 years old, and 99.6 percent of them showed female children, the IWF says.

"The scale at which such images can be created is worrisome," says Nishant Vishwamitra, an assistant professor at the University of Texas at San Antonio who is working on the detection of deepfakes and AI CSAM images online. The IWF's report notes that the organization is starting to see some creators of abusive content advertise image-creation services, including making "bespoke" images and offering monthly subscriptions. This may increase as the images continue to become more realistic. "Some of it is getting so good that it's tricky for an analyst to discern whether or not it is in fact AI-generated," says Lloyd Richardson of the Canadian Centre for Child Protection.

The realism also presents potential problems for investigators who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization's new report, say the quality has improved quickly, although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. "I am also concerned that future images may be of such good quality that we won't even notice," says one unnamed analyst quoted in the report.

"I doubt anyone would suspect these aren't actual photographs of an actual girl," reads one comment posted to a forum by an offender and included in the IWF report. Another says: "It's been a few months since I've checked boy AI. My God it's gotten really good!"

In many countries, the creation and sharing of AI CSAM can fall under existing child protection laws. "The possession of this material, as well as the spreading, viewing, and creation, is illegal as well," says Arda Gerkens, the president of the Authority for Online Terrorist and Child Pornographic Material, the Dutch regulator. Prosecutors in the US have called for Congress to strengthen laws relating to AI CSAM.

More broadly, researchers have called for a multipronged approach to dealing with CSAM that is shared online. Tech companies and researchers are also exploring various techniques and measures to stop AI-generated CSAM from being created in the first place, and to stop it from bleeding out of dark web forums onto the open internet. Gerkens says it is possible for tech companies creating AI models to build in safeguards, and that "all tech developers need to be aware of the possibility their tools will be abused."