Best Practice

AI, deepfakes and safeguarding: Ten ways to keep children safe

As feared, we are now seeing the horrifying ways that AI is being used to create images and deepfake videos of children and child abuse. Elizabeth Rose looks at the implications for safeguarding work in schools
Deepfake fears: AI is being used to create images and deepfake videos of child sexual abuse and yet there is little guidance for school safeguarding teams - Adobe Stock

The use of artificial intelligence is a rapidly expanding and developing field, and the potential risks that AI poses to children are an area of increasing concern.

In October 2023, the Internet Watch Foundation (IWF) published a report on the use of AI to generate child sexual abuse material and the wide-ranging harms associated with this. The report was reviewed and updated in mid-2024, tracking rapid advances in the technology and in its level of use.

The reports demonstrate the horrifying ways that AI is being used to create images – and now deepfake videos – of child sexual abuse, as well as recommending ways that government and tech companies can respond to this issue.
