
Over the last year, forward-thinking schools have been grappling with the many challenges and provocations raised by the now global presence of artificial intelligence (AI).
Pathfinding trusts, such as Woodland Academy Trust in London and Kent, and the Robin Hood Multi-Academy Trust in Birmingham, have been trailblazing with AI-focused INSET days for all staff – raising awareness, providing training, and creating capacity to play and explore ideas and tools alongside colleagues.
But this is not about a technology enthusiast encouraging staff to utilise more digital tools. It is instead an act of leadership which recognises that AI now permeates society beyond our schools, and as civic leaders we have a moral and ethical duty to our children and our communities to raise awareness about the implications.
As an academic researcher working with schools, I spend a huge amount of time in classrooms, listening to children, teachers, leaders and families, and observing what each person’s lived experiences really look like.
These research insights form the heart of powerful discussions with senior leaders about how to close the gap between vision and reality.
Over the last year, in my work with both primary and secondary-age children, a number of new themes have emerged – probably as a direct result of the presence of generative AI. Here are just two examples.
1. Children want our support with AI but don’t know how to ask for it
When offered the opportunity to share anonymously their experiences with generative AI, a majority of children in key stage 2 and above reveal that they have now had some form of exposure.
This varies from a limited awareness or a brief play with a popular app, through to regular and sophisticated use of text, image, audio or video-generating tools.
However, because of rules around age restrictions, permitted use, and narratives about cheating, a significant majority of these young people are opting not to share these experiences with parents or teachers, meaning that the scale and nature of their use is often misunderstood.
Some media reports about this issue focus on “dishonesty” (e.g. children opting to break rules) or manipulation (e.g. children creating fake images, text or video).
Of course, in some cases this is accurate, and AI has created new opportunities for poor decision-making and behaviours. However, in the majority of conversations I have had with young people, the theme that emerges is that they do not understand why they are not being allowed to use AI to “help” them – especially when they see adults using AI for similar purposes.
For example, students talked about teachers using AI to generate lesson resources. Young people are aware that AI is being used by government agencies, health organisations, and businesses, with the media providing a regular supply of stories about innovations.
The consequence is that many young people perceive a hypocrisy – adults can use AI to help them overcome common issues (writer’s block, simplifying or summarising texts or resources), but children are prevented from doing so in school.
While there are often very legitimate reasons for these disparities (not least around education, data security, privacy, safeguarding, and issues with bias and misinformation), young people simply do not understand (nor agree with) some of the decisions being made about the gatekeeping parameters.
As one student told me: “I use Gemini when I get stuck on something like maths homework. I’m not using it to cheat, I’m using it to find how to work the problem out.”
What is clear is that young people feel an enormous frustration that they are often being proactively prevented from using tools that could help them to learn.
But a bigger frustration is the sense of hypocrisy. One student said: “I don’t get it. I heard Mr (X) talk about using AI to write our reports and how it saved loads of time and then he added specific stuff that the AI didn’t know about. So why can’t we use it to do the basic stuff and then we can spend our time on the stuff we really have to think about.”
These young people regularly championed the case for equitable AI awareness-raising, training, support and guidance not just for adults, but for their generation too.
Perhaps we might ask ourselves, are the issues simply about age, or are they more nuanced – relating to an ability to understand the parameters and consequences of where and how these tools should be used?
Furthermore, what role should schools play in increasing awareness and understanding of both children and adults across our school communities?
2. Who owns children’s identity?
Another of the many themes raised by young people has been about the ownership we allow them to take over their own lives.
For example, one remarkably thoughtful group of children spoke about their school asking their parents for permission to use their photographs on the school website each year, and their parents making that decision on their behalf.
The conversation then turned to how some forms of AI trawl online images and use this data to feed into tools and apps. That conversation soon turned to CCTV and facial recognition. These tools, already in widespread use, depend upon photographs of real people.
The issue for these young people was that their photograph was being made public without their individual personal consent (although perhaps some of the parents in question did discuss the decision with their children).
Even so, many students have no agency over how their image is used. The decision-makers (parents and schools) and the person experiencing the implications of those decisions (the child) were seen as disconnected, and the process as unjust.
To many in older generations, a photograph is perhaps no big deal – but in the world of AI a photograph is data and can be used for many different purposes. It may be too early to know how much of an issue these potential concerns will be over the course of these children’s lives. Much depends upon wider ethical and governance decisions made by technology companies and legislators.
However, as school leaders what we can do is raise awareness among staff (who make decisions about when photos are taken and what might be seen or inferred from them), and among children and families.
Perhaps most importantly, we can ensure sufficient awareness and training so that children, parents and staff feel equipped to make meaningful and informed decisions – understanding the many implications.
What now?
So where does this leave us? We cannot back away from generative AI – it now permeates consumer worlds, homes and workplaces. But what we can do is raise awareness about when and how it can be appropriately used, when it should not be relied upon, and how and where to seek support so that the benefits are realised safely, ethically and sustainably. Here are some takeaways:
- Prioritise a conversation with your leadership team about how to increase your own understanding about the broader issues surrounding AI (i.e. it is not just about “using AI tools in school”, it is about living with AI).
- Reach out to organisations such as AI-in-Education and pathfinding trust leaders who have a wealth of expertise and practical support to offer.
- Ensure that at least one of your senior leadership team connects with a relevant professional network in order to keep up-to-date with emerging themes.
- Dr Fiona Aubrey-Smith is the founder of One Life Learning, an associate lecturer at the Open University and sits on the board of a number of multi-academy and charitable trusts. Follow her on X @FionaAS. Find her previous articles and podcast/webinar appearances via www.sec-ed.co.uk/authors/dr-fiona-aubrey-smith
Further information & resources
- AI-in-Education: www.ai-in-education.co.uk
- SecEd: Artificial intelligence in schools: Opportunities and risks: A best practice conference, January 15 and 16, 2025: www.sec-ed.co.uk/events/artificial-intelligence-in-schools-opportunities-and-risks-online