
We all know that synthetic products are made from chemicals or artificial substances rather than from natural ones. Synthetic media is artificial information, and transformative advances in artificial intelligence and production software have made it commonplace.
This, combined with a significant shift in information consumption habits, particularly among young people, leaves schools with a dilemma.
Are our young people and their teachers equipped to determine what is real and what is artificial?
All of us working in schools need to be familiar with synthetic media. It encompasses any type of content – be it video, image, text, or voice – that has been partially or fully generated using AI or machine learning.
As schools continue to integrate new technologies into the learning environment, the rise of synthetic media presents both opportunities and significant challenges.
On the one hand, it offers creative possibilities for problem-solving, increased efficiency, and enhanced accessibility. On the other, synthetic media poses serious risks that school leaders must be prepared to address, particularly in safeguarding students against online harms.
A particularly troubling subset of synthetic media is deepfakes. These are manipulated or entirely fabricated pieces of content that convincingly imitate real visuals or sounds, ranging from swapping faces in videos to synthesising voices.
These are increasingly difficult to distinguish from genuine content and can be used maliciously to deceive, violate privacy, and create damaging material. We need to learn how to recognise them.
Manny Botwe, ASCL president, raised the issue in his address to our annual conference in Liverpool earlier this year. He said: “Today’s young people face challenges that are vastly different from those of previous generations. Their world is shaped by smartphones, social media, memes, and influencers – forces that shape their identities, interactions, and even their mental wellbeing.”
SecEd has also been addressing the safeguarding implications, with a recent article from safeguarding expert Elizabeth Rose offering disturbing statistics and safeguarding advice regarding deepfakes (Rose, 2025).
SecEd has also recently reported on guidance from the UK Safer Internet Centre about how schools can safely manage the use of staff and student images on websites and social media.
The wellbeing issue can get very personal very quickly. In a recent Teacher Tapp survey conducted by ASCL, nearly three quarters of secondary school teachers (73%) reported that students had been bullied by peers on social media. The Netflix drama Adolescence has brought the harmful effects of bullying and misinformation into sharp relief for many.
Online harms: The dark side of technological progress
The rapid evolution of social media and other digital platforms has transformed how we consume (and produce) information. While these advancements can foster knowledge-sharing, they also enable the spread of misinformation and amplify harmful content at an unprecedented pace, as the 2024 Southport riots demonstrated. For school leaders, understanding these risks is crucial to ensuring the safety of students.
As a recent report from Ofcom (2024) shows, synthetic content is here to stay. Key findings included:
- 43% of individuals aged 16 and older have seen at least one deepfake online in the last six months, rising to 50% of children aged 8 to 15.
- 14% of adults who have seen synthetic content report encountering synthetic sexual content.
- The most common deepfakes encountered by children aged 8 to 15 were categorised as “funny or satirical” (58%) and deepfake scam advertisements (32%).
- Only 9% of adults feel confident in their ability to identify deepfakes; children aged 8 to 15 report more than twice that level of confidence, though still only 20%.
The potential for synthetic media to harm school communities is real. Incidents in schools around the world, from Spain to New Jersey to Rio de Janeiro, have highlighted the dangers posed by deepfake technology.
Students have created and shared explicit or defamatory deepfakes, leading to harassment, bullying, and reputational damage. Safeguarding students and staff will involve proactive measures to prevent online harm and reactive measures to mitigate the negative impact of synthetic media.
The Online Safety Act
In response to these growing threats, the UK government passed the Online Safety Act, which became law in October 2023, mandating that technology companies take responsibility for protecting children from online harm. Key provisions include:
- Enforcing age limits: Ensuring that age restrictions are properly implemented and enforced.
- Assessing risks: Requiring platforms to evaluate and mitigate risks to children.
- Shielding from harmful content: Protecting children from exposure to harmful content, including synthetic media.
- Tackling illegal material: Mandating the removal of illegal content, with particular attention to deepfake pornography and other forms of synthetic sexual content.
The Act also empowers Ofcom to enforce these regulations and raise awareness about online safety, providing a crucial layer of protection for young people.
Key duties under the Act came into force in March this year, but we do not yet know when, or indeed whether, they will have the desired effect.
The landscape is changing rapidly. The way we consume information has changed, and new findings are shaping our expectations of how education can respond.
One recent report (Burtonshaw et al, 2025) found that disinformation is having a significant impact on the school community. Its polling suggests that 11 to 18-year-olds tend to get their news from less formal channels: their two most popular sources were “word of mouth from family or friends” (41%) and “social media” (38%). Only 26% of students got their news from a television broadcaster.
By comparison, school staff (51%) and parents (65%) were much more likely to get their news from a national television news broadcast (e.g. BBC, ITV, Sky).
Perhaps one of the most worrying findings is how disinformation can affect behaviours. Information accessed on social media appears to be influencing young people’s world-views: 36% of young people said they have changed their beliefs about a mainstream news story based on information found on social media.
The report emphasises that the evidence shows people exist in information silos. Unsurprisingly, some young people (e.g. those with SEND) can be more vulnerable than others.
Educational strategies to address synthetic media
The first point to realise is that adulthood doesn’t equate to expertise. The second point is that simply calling for a statutory ban on phones in schools sidesteps the issue.
Most schools already have sensible mobile phone policies in place, and we need to focus on educating everyone in a responsible way, even if phones are not used on the premises.
School leaders must be supported with time and training to take proactive steps to combat the risks associated with synthetic media. This includes educating students and staff about the nature of synthetic media and the dangers it can pose. The good news is that schools are already doing much of what is required. Here are six things schools can do, and in many cases already are doing:
- Promote digital literacy: Teach students how to critically evaluate the content they encounter online and recognise deepfakes.
- Encourage reporting: Make it easy for students to report harmful content, and ensure they know how to access support services.
- Implement filters and detection tools: Use available technologies to filter out harmful synthetic content and identify deepfakes before they cause harm. (For further advice on this issue from SecEd, see the recent article Filtering and monitoring in schools: 18 questions to help evaluate your provision).
- Train teachers: Ensure teachers have the knowledge to teach students about e-safety.
- Advise on social media: Provide advice on using social media responsibly.
- Engage parents: Support and include parents and carers by sharing helpful advice and resources.
Schools already cover a good variety of topics relating to online safety, including cyber-bullying, nude-sharing, mental wellbeing, data security, screen-time and harmful content.
Internet Matters has reported that schools take many approaches to teaching about online safety, such as timetabled lessons, form time and via ad-hoc sessions such as assemblies and themed days.
In a 2023 survey, the not-for-profit industry body found that a majority of parents felt that they had a good knowledge of their school’s approach to online safety, and most rated the school’s approach as fairly good or very good.
Encouragingly, three quarters (75%) of parents had experienced at least one form of outreach from their child’s school. But the survey also reported that competing pressures and priorities faced by schools mean that online safety does not always receive sufficient focus, despite school leaders recognising its importance.
How can schools support affected individuals?
There is plenty of support available for schools to access. Organisations like South West Grid for Learning (SWGfL) offer valuable resources for schools dealing with the fallout from synthetic media.
The Revenge Porn Helpline also provides support to individuals who believe they’ve been targeted by synthetic sexual content. Schools should also direct students to Report Harmful Content for guidance on how to deal with harmful AI-generated media.
The PSHE Association has excellent resources on AI (created in partnership with the Alan Turing Institute). Its NewsWise lessons on fake news and on exploring how images in the news can be misleading are particularly useful, as is its Belonging and Community pack and accompanying teacher guidance (see further information).
The wider policy continuum
As Manny Botwe says, the world is changing. The curriculum needs to change to acknowledge and address this, but so too do our education policies.
We need to build in protection. The “belonging” this government speaks so eloquently about cannot be achieved if children are not safe. To emphasise this point, a recent report from the National Foundation for Educational Research (Bocock et al, 2025) finds that teenagers in England have “significantly weaker” socio-emotional skills than their peers across 30 countries.
Specifically, this means skills of cooperation, curiosity, empathy, persistence and stress resistance, which is quite worrying. Does the evidence tell us that, out of 30 countries, our young people at this critical age are potentially more vulnerable than their peers, and less likely to be inquisitive when faced with disinformation?
Other countries see this as a policy choice in education. Take Finland, for example, where critical analysis of media stories and online information is on the curriculum from primary school age.
These are troubling times, when the media is becoming increasingly politicised and the impact of disinformation so clear. Our schools have a responsibility to our young people and our government has a responsibility to our schools.
We need to strengthen current policy and use the annual Keeping Children Safe in Education statutory guidance to emphasise the important role contextual safeguarding can play in relation to disinformation and digital harm.
Teachers require clarity about what can and can’t be challenged. A good start would be to review the political impartiality guidance (DfE, 2022 – see also SecEd’s article on the political impartiality guidance in the classroom).
It needs both depoliticising and updating so that teachers can address, with real confidence, some of the disinformation now prevalent in the world of adolescents.
Final thoughts
As synthetic media continues to evolve, school leaders must stay informed and vigilant. By understanding the risks and taking proactive measures, schools can protect students and foster a safer digital environment.
The challenges are complex and real, but with the right strategies, schools can help ensure that technological advancements are harnessed for positive purposes rather than becoming tools for harm.
- Margaret Mulholland is SEND and inclusion specialist at the Association of School and College Leaders. Read her previous articles via www.sec-ed.co.uk/authors/margaret-mulholland
Resources
- Report Harmful Content: https://reportharmfulcontent.com/
- South West Grid for Learning: https://swgfl.org.uk/
- Revenge Porn Helpline: https://revengepornhelpline.org.uk/
- PSHE Association: NewsWise Lesson: https://pshe-association.org.uk/resource/newswise-news-literacy-project
- PSHE Association: Belonging and Community Pack: https://pshe-association.org.uk/resource/belonging-and-community
Further information & references
- Bocock et al: International comparisons: Implications for England of research on high-performing education systems, NFER, 2025: www.nfer.ac.uk/publications/investigating-cross-country-differences-in-young-people-s-skill-development-and-identifying-factors-associated-with-high-performance
- Burtonshaw et al: Commission into countering online conspiracies in schools, Public First, Pears Foundation, Star, February 2025: https://counteringconspiracies.publicfirst.co.uk/
- DfE: Guidance: Political impartiality in schools, 2022: www.gov.uk/government/publications/political-impartiality-in-schools
- Ofcom: A deep dive into deepfakes that demean, defraud and disinform, 2024: www.ofcom.org.uk/online-safety/illegal-and-harmful-content/deepfakes-demean-defraud-disinform
- Rose: AI, deepfakes and safeguarding: Ten ways to keep children safe, SecEd, 2025: www.sec-ed.co.uk/content/best-practice/ai-deepfakes-and-safeguarding-ten-ways-to-keep-children-safe