Deepfakes and synthetic media present a serious safeguarding and wellbeing threat to young people, and schools are on the frontline. Margaret Mulholland considers the scale of the issue, what schools can do, and how policy needs to change
Emerging threat: 43% of individuals aged 16 and older have seen at least one deepfake online in the last six months, rising to 50% of children aged 8 to 15 - Adobe Stock

We all know that synthetic products are made from chemicals or artificial substances rather than natural ones. Synthetic media, likewise, is artificially generated information, and transformative advances in artificial intelligence and production software have made it commonplace.

This, combined with significant changes in information consumption habits, particularly among young people, leaves schools with a dilemma.

Are our young people and their teachers equipped to determine what is real and what is artificial?

All of us working in schools need to be familiar with synthetic media. It encompasses any type of content – be it video, image, text, or voice – that has been partially or fully generated using AI or machine learning.
