e-Safety: tackling the key threats effectively

Written by: Kat Howard and Kate Brady
Internet safety is a crucial safeguarding issue for schools. Experts Kat Howard and Kate Brady highlight problem areas – including social media, sexting, radicalisation and grooming – and discuss the elements of whole-school best practice

Internet safety has been a key part of many schools’ agendas for almost a decade, but with the introduction of Ofsted’s latest safeguarding measures and the recent launch of the government’s Prevent Duty on schools as part of the Counter Terrorism and Security Act 2015, it is now vital that every school has the knowledge, systems and protocols in place to safeguard their students.

Recent social, cultural and behavioural shifts, coupled with the explosion of content, the widespread availability of inappropriate material, growing concerns over online radicalisation and the rise in popularity of apps like Instagram, Snapchat and even Tinder, have meant that schools need to stay ahead of the curve and ensure their staff have the proper training to keep students safe.

And while, in recent years, RM Education’s annual survey has indicated that some schools were still viewing e-safety measures as a cost or an item to be ticked off a list, rather than a pedagogical responsibility, education specialists and internet safety advisors have identified a growing urgency among primary and secondary schools to invest in training, update their policies and embed internet safety into every aspect of their curriculum.

Something has happened in the last 18 months which has meant online safety has moved much higher up the agenda for schools.

In that time, we have seen a growing number of schools suddenly becoming concerned that their policies are out-of-date, and realising they may need to invest in specialised training so they can take a far more proactive approach to online safety within their school. I think, in general, this reflects a more positive trend towards schools empowering themselves, and their students, to understand and minimise the risks.

Taking a whole-school approach

However, it is true to say that not all schools are taking such a thorough approach. Research conducted earlier this year by David Brown HMI for Ofsted’s Child Internet Safety Summit found that five per cent of UK schools still did not have an online safety policy – and in schools that did, both students and, to a lesser extent, governors were not always aware of it.

In fact, more than 25 per cent of secondary students reported that they couldn’t remember having been taught about online safety over the previous 12 months.

There’s a tremendous amount of pressure on teachers but internet safety is everybody’s responsibility, including parents, governors and students themselves.

Involving the whole school community and taking a collaborative approach is fundamental. Students across all ages have a key role to play in sharing information about the sites and apps their age group is using; teachers must be appropriately trained in how particular sites and apps work; and the school’s safeguarding governor should ensure teachers, parents and students are kept up-to-date about the school’s e-safety policy and protocols.

Teachers and senior leaders need to embed online safety throughout the whole curriculum, so that in an English lesson, for example, when students go online to research a topic, the teacher can take the opportunity to talk about how to search safely.

A number of schools have successfully created an online safety committee that includes representatives from every part of the school community and gives them the opportunity to talk through issues and the consequences of online behaviours.

Every school should have some sort of forum for that communication to take place, as well as regularly speaking to parents and offering parent workshops to help them understand and think through the implications of their child’s online activity.

Clear escalation routes

The increase in freely available, inappropriate images, as well as inappropriate behaviour among students in certain age ranges, has given rise to more serious breaches of online safety. This highlights the necessity for schools to have a clear escalation route outlined within their policy.

Many schools we have been into in the last 18 months have had an issue with either social networks or sexting – these are by far the most common issues we face. We see sexting and the distribution of sexual images becoming a problem in some schools that run bring your own device (BYOD) schemes and where students are mobile – and for schools that do not have a robust safety policy in place, there can be fairly serious implications for staff too.

For example, in one school, a teacher was contacted by the police because they had tried to take a copy of an inappropriate image to show to the head – so it can also lead to sticky situations for staff in terms of potential prosecution.

If images like these are discovered on a student’s device, it is the school’s responsibility to confiscate the device, place it in a secure area and escalate the issue as laid out in their internet safety policy to either the school’s safeguarding lead, or the head.

David Wright, director at the UK Safer Internet Centre, advises schools in this situation to consider:

  • whether it is an isolated incident between two pupils;
  • the nature of the image;
  • whether there is a broad age difference between the individuals;
  • whether there appears to have been coercion from a third party;
  • whether those involved have done anything similar before;
  • whether the child is vulnerable;
  • whether the image has been widely broadcast;
  • and, finally, whether there is any concern for the individuals involved.

Considering each of these points carefully will help heads and safeguarding leads to determine the relevant course of action.

Online grooming

In secondary schools, a number of social networking sites and apps are becoming increasingly problematic. Beyond the more obvious sites like Facebook, there have been numerous instances of students being targeted or approached on apps like Instagram and Snapchat because they have not updated their privacy settings, as well as on the video chat app ooVoo and – most alarmingly – the adult dating app Tinder, which is being used by students as young as 11.

Meanwhile, there are growing issues in primary schools resulting from children as young as five being given console games with a certificate age of 15 to play; Grand Theft Auto and Call of Duty both carry extremely graphic depictions of people dying and being killed, which can obviously have a detrimental impact on a child’s behaviour. But a particularly worrying trend is that adults are increasingly using online versions of these games to groom children.

Stranger danger exists in the virtual world and can continue into the home, so in addition to making students and parents aware of the threats, they should be encouraged to report these issues straight away. It is about having open communication within schools and a clear protocol in place, so students know exactly who to go to and that they will not get into trouble.

Filtering and keyword monitoring

Filtering content for different age groups is also important. Internet safety policies will often set different rules for different year groups – so that, for example, years 7 and 8 are allowed on Facebook during lunchtime but not for the rest of the day. Parents, too, must understand the importance of age-appropriate content.
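To illustrate the mechanism only, a year-group and time-window rule like the one above could be sketched as follows. The site name, year groups and times here are hypothetical examples for illustration, not a real filtering product’s configuration:

```python
from datetime import time

# Illustrative access rules: which year groups may reach a site,
# and during which time window. Hypothetical values only.
ACCESS_RULES = {
    "facebook.com": {"years": {7, 8}, "start": time(12, 0), "end": time(13, 0)},
}

def is_allowed(site: str, year_group: int, now: time) -> bool:
    """Return True if this year group may access the site at this time."""
    rule = ACCESS_RULES.get(site)
    if rule is None:
        return True  # in this sketch, sites with no rule are unrestricted
    return year_group in rule["years"] and rule["start"] <= now <= rule["end"]

print(is_allowed("facebook.com", 7, time(12, 30)))  # True (lunchtime)
print(is_allowed("facebook.com", 7, time(9, 0)))    # False (lesson time)
```

In practice a school’s filtering provider would manage such rules centrally; the point is simply that a policy statement like “years 7 and 8, lunchtime only” translates directly into enforceable rules.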

I visit a lot of schools and speak to parents who tell me they have actually set up their nine-year-old child’s Facebook account. But Facebook doesn’t permit users under the age of 13 to have an account, so if a parent has lied about their child’s date of birth, the targeted media and advertising used on Facebook may not be at all appropriate for that child’s age.

There are various filtering and monitoring tools available that can be added to a school’s network to filter age-appropriate content, and to track and monitor keywords or topics – particularly those which may highlight a major cause for concern, such as students looking for information on suicide, self-harming or content which could be considered radical or extreme.
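At its simplest, the keyword-tracking side of such tools works by checking student activity (searches, messages, documents) against categorised watchlists and flagging matches for the safeguarding lead. The categories and keywords below are illustrative placeholders, not any real product’s lists:

```python
# Minimal sketch of keyword monitoring: check text against categorised
# watchlists and return any categories of concern. Keyword lists are
# hypothetical examples only.
WATCHLISTS = {
    "self_harm": ["suicide", "self-harm", "hurting myself"],
    "extremism": ["martyrdom", "join the cause"],
}

def flag_concerns(text: str) -> list[str]:
    """Return the watchlist categories whose keywords appear in the text."""
    lowered = text.lower()
    return [
        category
        for category, keywords in WATCHLISTS.items()
        if any(keyword in lowered for keyword in keywords)
    ]

# A flagged result would be escalated to the school's safeguarding lead.
print(flag_concerns("information on suicide"))   # ['self_harm']
print(flag_concerns("maths homework help"))      # []
```

Real monitoring products are far more sophisticated – handling misspellings, slang and context – but the underlying idea is this kind of categorised matching, with flagged items routed to a named member of staff.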

Under the new Prevent Duty, every school must have an extremism policy for both staff and students, and keyword tracking can be integral to identifying and quickly escalating these issues.

Some schools do not think radicalisation exists in their school, simply because it is not discussed. But it can exist in any type of school in any geographical area. Creating a culture of openness, and a safe environment for students to express their views and opinions without being marginalised is essential in tackling this issue – as is having regular and frank discussions with the wider school community.

Empowering, not prohibiting

With the sheer volume of sites and apps to monitor within the school environment, some institutions feel their students will be better protected if they remove all access to any site or app that isn’t related to learning. But this is a mistake.

If a school simply blocks everything, we tend to find that because students are not learning within a controlled environment about what is and isn’t acceptable online, they do not understand the risks or the consequences when they go home and start using their iPhone or iPad.

Students will always find a way to see content, so rather than prohibiting these sites, we need to educate them on what is appropriate and what is not, so that they are empowered to make informed decisions for themselves. They also need to be aware that if they do get into a situation, there is someone within the school they can approach for help.

I would encourage both students and teachers to Google themselves every four to six weeks to learn more about how they are represented in the digital world – not only to understand the implications for their own safety and privacy, but to see how their digital footprint can have a negative impact on their future careers and other opportunities.

However, despite the seemingly endless list of negative issues that schools must navigate as a result of social media and the wider internet, there can be tremendous opportunities too – one school-leaver became known for creating art on his iPad and sharing his work on Twitter, which led to a dream job offer from the producers of The Simpsons and exhibitions in galleries nationwide.

We know the internet can be an amazingly positive place and can create opportunities which can change whole lives. But there are associated risks, and there has to be a balance. It is not about scaring people away from using the internet; it is about empowering them to understand those risks and to be able to reduce them.

  • Kat Howard is senior educational consultant and online safety lead and Kate Brady is e-safety product manager, both at RM Education.

A helpful overview but I am puzzled why there would be internet safety policies in which years 7 and 8 are allowed on Facebook but for the rest of the day they are not.

As the article says ' Facebook doesn’t permit users under the age of 13 to have an account, so if a parent has lied about their child’s date of birth, the targeted media and advertising used in Facebook may not be at all appropriate for that child’s age.' In almost every school in the UK pupils in Year 7 reach the age of 12 and in Year 8 turn 13. What about schools lying about pupils' date of birth or colluding with parents who may have done so?
