Best Practice

Facing our fears: Will you ban ChatGPT?

For those of you thinking we must ban ChatGPT, remember that this is currently the worst version of this technology we will ever have. We must face our fears and tackle the risks, but how? Dan Fitzpatrick explains
Image: Adobe Stock

“AI will achieve human levels of intelligence by 2029. I have set the date 2045 for the 'singularity' which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

This is according to Ray Kurzweil, Google’s director of engineering.

I would have dismissed Ray Kurzweil’s comments as fanciful, until I tried ChatGPT for the first time in early December.

Reacting quickly to the release of ChatGPT, many schools are blocking the technology. These include Seattle Public Schools, Los Angeles Unified School District, New York City Public Schools and more around the world. There are genuine concerns about this new AI tool facilitating cheating and hindering the academic development of students. So, will you ban it?

First, what is ChatGPT? Well, you can read my first SecEd article responding to the ChatGPT phenomenon. But for now, just imagine the internet suddenly had the ability to have a conversation with you and provide you with any information you wanted, in any style, and any format that you wanted. Got it? Great. We’re about 10% of the way to understanding its capability.

ChatGPT is a website that allows users to put questions to an AI machine and get answers in a fraction of a second. The impressive thing is that it simulates human communication very well and can draw upon more than 300 billion words worth of information. Not only that, but it can search this information, synthesise it and give you an answer with a high probability of being correct.

Although the applications of this new technology are still very much in their infancy, already the disruption being caused is making some schools run scared.

The main concerns are students using ChatGPT to plagiarise in their work and cheat on assignments and homework. This then risks students not engaging at a deep level with learning and thus failing to develop their knowledge and skills.

I touched upon some of these concerns in my last piece, but let’s dive in a bit further...

 

Plagiarism and skills development

Most teachers and educational leaders are naturally worried about the implications of their students being able to generate good quality work with ChatGPT and pass it off as their own.

There are natural follow-on concerns that ChatGPT will therefore inhibit knowledge retention or the development of skills such as critical thinking. Ultimately, we are worried that students using AI won’t think about their work.

Some are already mitigating this risk by ensuring written work is done in class, without technology. Daisy Christodoulou, director of education at No More Marking, wrote recently that: “It is perfectly acceptable to ban students from using AI for written assessments and to make greater use of in-person hand-written exams.”

University professors, meanwhile, are exploring ways to adapt how essays or dissertations are assessed to ensure students actually know their work – they are using task design to make it very difficult for students to use AI.

So, a key question for secondary schools is: how can we, as teachers, tweak the way we set homework or design out-of-lesson learning so that students know they cannot use ChatGPT?

This might involve different kinds of homework tasks or ensuring we integrate the results of homework into subsequent in-class discussions (when our questioning will easily expose any cheaters).

And it’s worth remembering that most students will do the right thing, so do not underestimate the power of setting clear expectations up front so they understand when they can and cannot use AI – and the potential consequences if they break the rules.

 

Plagiarism checkers

Some still think we can put this horse back in the stable – hoping technology can help through the development of plagiarism checkers to identify work generated by AI. Indeed, OpenAI has published a new “classifier” tool to help us identify texts written using AI.

In a blog unveiling its AI classifier, the company explains: “We’ve trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers.

“While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example … using AI tools for academic dishonesty.”

It warns that this tool should not be used as a “primary decision-maker” but can complement other methods of determining authenticity (note also that it is unreliable on texts below 1,000 characters).

This kind of tool may be useful in the short-term – for those of you who have 120 homework assignments on your desk and suspect AI has been used by one or two students.

However, in the longer-term, I refer you back to Ray Kurzweil’s words at the top of this article. Remember: ChatGPT is currently the worst version of this technology we will ever have. Unreleased versions are already much better and they will continue to get more “intelligent”.

Those of us who had a “wow” moment when we first used ChatGPT are going to be having regular “wow” moments in the coming months and years.

 

Education, not censorship

So, as you can see, sticking our head in the sand is not an option. Instead, we must ask ourselves some wider questions. Namely, is a system that requires students to do what technology can now do in a fraction of a second worthwhile?

As well as the classifier already mentioned, OpenAI has pledged to work with educators to investigate the “limitations and considerations” of using ChatGPT and has launched a resource for educators (see further information).

It recognises the threat of plagiarism for schools, but also states: “Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

“Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.”

This technology will be part and parcel of our students’ personal lives and our work lives. It’s not going away. It would be more beneficial to incorporate this technology into our teaching and assessment methods.

If we are worried that ChatGPT is going to hinder student development, then I think we need to take off our blinkers, survey the landscape, and realise that their development is already hindered – at least in terms of their employability.

Incorporated into education, tools like ChatGPT can help students to become curious about learning, to question knowledge, to evaluate the output, and so on.

Here are my early thoughts on a framework to develop students’ learning, understanding and skills with ChatGPT. What might this look like in a secondary school classroom?

I start with curiosity. Some educational authors recoil at the idea of engagement or “hooks” in the lesson. I never understand this. We want students to be interested, to be curious, to be motivated – this is the first step of learning. Curiosity leads to questions.

With varying degrees of effectiveness, schools seem fixated on memorisation – driven, no doubt, by our system of terminal examination – but the skill of asking questions is rarely explored.

But our students will thrive in this new AI era by asking good questions, whether with AI or not. Dialogue and critical thinking are vital, and students need knowledge to do both. I am not arguing that AI will replace this. However, knowing how to use their knowledge to critically analyse the results of AI will become a major, essential life-skill.

 

Changing workplace skills

In our current system, many courses are designed to teach knowledge so that students can then utilise that knowledge in the world. So what happens if the knowledge we teach them is known by an AI machine that costs a company a fraction of the price of an employee?

Think this is the distant future? Think again. But before you despair, there are real opportunities for education. The challenge for our education system is to discover how we can add value to the lives of our students in this new era. If the purpose of education is to help people to become successful, then things have got to change.

OpenAI’s educator resource states: “AI will likely have a significant impact on the world, affecting many aspects of students' lives and futures. For example, the types of job opportunities students look toward may change, and students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.

“If not managed well, these changes could present new challenges for students as they face an uncertain future. Educators will need to help students grapple with these questions.”

 

Final thoughts

Concerns around AI are valid, but the opportunities it brings will make our students’ lives better and in turn have a positive impact on our world. So ban it if you must, but be prepared for your students to be left behind.

  • Dan Fitzpatrick is an award-winning digital learning strategist, leading digital innovation at Education Partnership North East. He is a former secondary school senior leader, founder of the ChatGPT-focused website https://thirdbox.org/, and he explores the future of education as director at www.edufuturists.com.

 

 

Further information & resources