
Deepfake AI child abuse images spark urgent guidance for schools

The disturbing rise in the use of AI to create nude and sexual imagery of children has sparked the publication of urgent guidance for professionals, including school staff.
Urgent need: in 2024, the Internet Watch Foundation processed 245 reports containing more than 7,500 AI-generated images of child sexual abuse – a 380% increase on 2023

The guidance, produced by the National Crime Agency and the Internet Watch Foundation, gives step-by-step advice on how professionals should respond if children are targeted.

It comes after authorities have seen a rise in the misuse of AI to replicate imagery of real, recognisable children.

Polling from the NCA has revealed that many professionals are not even aware that AI-generated child sexual abuse imagery is illegal.

The guidance makes it clear that AI child sexual abuse imagery “should be treated with the same level of care, urgency and safeguarding response as any other incident involving child sexual abuse material” and aims to dispel any misconception that AI imagery causes less harm than real photographs or videos.
