How AI is being abused to create child sexual abuse imagery

In 2023, the Internet Watch Foundation (IWF) began investigating its first reports of child sexual abuse material (CSAM) generated by artificial intelligence (AI).

Initial investigations uncovered a world of text-to-image technology. In short, you type a description of what you want to see into an online generator and the software produces the image.

The technology is fast and accurate – images usually match the text description very well. Many images can be generated at once – you are really only limited by the speed of your computer. You can then pick out your favourites, edit them, and direct the technology to output exactly what you want.

In total, 20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period. Of these, 11,108 images were selected for assessment by IWF analysts. These were the images that were judged most likely to be criminal.

#cyber