AI and the future of humanity: Three scenarios

AI will soon match or exceed human intelligence across all cognitive domains. But while some believe that AI will never match human traits like creativity and abstract reasoning, Dr Dylan Wiliam, Dr John Hattie and Dr Arran Hamilton are not so confident...

Educators are abuzz about AI. New tools like ChatGPT promise personalised learning and teacher assistance. But amid the enthusiasm, we must not lose sight of the bigger picture. AI’s implications extend far beyond classrooms, to the very future of humanity.

It is understandable that many focus on AI’s near-term potential for education. We, too, are excited by prospects like automated feedback and teacher workload reduction.

But rapid advances could soon propel AI beyond just a classroom aid. By the end of the decade, machines may match or exceed human intelligence across all cognitive domains.

This could fundamentally reshape society. Yet many cling to the convenient fiction that AI can never emulate innate human traits like creativity, empathy, and abstract reasoning. We should not be so confident. AI algorithms can already replicate and even surpass many of our highest mental faculties.

Some insist consciousness makes us unique. But consciousness remains deeply puzzling, even to neuroscientists. Leading philosophers suggest it may just be our brains’ interface for processing data. AI systems may lack subjective experience, making them akin to philosophers’ “zombies”, but they can still excel at the associated capabilities and will increasingly be able to pass themselves off as human.

Likewise, some contend emotions and empathy are distinctly human. However, modern psychology sees emotions as biological signals that inform decision-making. AI programs can already simulate empathy by predicting our behaviours and responses. The results may currently feel robotic but can be highly effective at influencing us. And we should expect significant improvements.

As for creativity, AI can now generate art, music, films, and literature. By analysing patterns in vast datasets, algorithms can synthesise fresh outputs and combinations. AI even aided the development of Covid-19 vaccines through creative molecular design. Creativity appears more algorithmic than we presumed.

In short, the skills we consider integral to our humanity can, arguably, be reduced to computational processes. And AI is proving increasingly adept at modelling and improving on those processes. We should not underestimate the pace of progress.

Consider AI timelines. Experts predict human-level artificial general intelligence (AGI) could arrive between 2025 and 2050. Systems like GPT-4 already display sparks of AGI, passing the Turing Test. Some researchers contend that AGI is just two years away.

Once achieved, AGI could quickly advance to superintelligence, as algorithms recursively self-improve. AI may eventually think millions of times faster than humans. Its cognitive abilities could surpass all humanity’s combined brainpower.

This has profound implications. As AI matches, and then exceeds, human aptitudes, we may lose incentives to learn and grow. Without education’s purpose, we risk becoming deskilled and unable to understand AI’s rapidly evolving intellect.

Past innovations like calculators freed us to tackle higher-order skills. But AI threatens to usurp all human cognitive domains. Will we have anywhere left to climb? Or will we become reliant on machine intelligence, like infants dependent on care-givers?

Some believe new jobs will arise for humans. But corporations may opt for cheaper, faster AI over flawed, limited humans. Without work, many could lose purpose and fall into idleness and vice.

This paints a bleak picture. But the future is not fixed. With judicious governance, AI can uplift humanity. That is why we propose urgent regulations to constrain AI development, providing time to debate how best to harness its benefits.

One option is banning advanced AI. But relinquishing potential gains from AI’s ground-breaking applications, like disease cures and clean energy, seems unwise. More balanced oversight makes more sense.

In our new paper, The future of AI in education: 13 things we can do to minimize the damage (Hamilton et al., 2023), we offer a number of recommendations, including:

  • Government licensing/regulation of AI firms.
  • Restricting access, especially for students/children, subject to risk-based assessments.
  • “Guardrails” enabling parents and educators to audit how and where children are using AI in their learning.
  • Requiring AI to disclose its nature.
  • Mandating algorithmic fairness and transparency.
  • Proportionate penalties for violations to foster accountability.

These modest constraints grant time for informed decisions, without forfeiting AI’s advantages. They provide safeguards while allowing controlled progress and public input. 

Because if we fail to act, we risk sleepwalking into an automated dystopia. Without regulation, rapid AI advancements could result in one of three futures:

  1. “Fake work”, where humans are forced into pointless jobs alongside superior AI.
  2. “Transhumanism”, where people enhance their brains with implants to remain competitive.
  3. “Universal basic income”, where humans are decoupled from economic production as machines take over.

Each scenario has merits and dangers. But we deserve a say in choosing among them. That requires urgent action before runaway AI progress limits our options and erodes human agency.

Educators are right to consider AI’s classroom impacts. But we must also engage with the bigger picture. Powerful technology demands prudent regulation.

If guided wisely, AI can unlock humanity’s full potential. But we must stay awake to pitfalls, and make decisions before our window of opportunity closes. The future remains unwritten, if we dare to pen it.

Dr Dylan Wiliam is Emeritus Professor of Educational Assessment at UCL Institute of Education; Dr John Hattie is Emeritus Laureate Professor of Education at the University of Melbourne; and Dr Arran Hamilton is Group Director – Education, at Cognition Learning Group. This article was written with editorial support from Claude AI.
Further information & resources

Hamilton, Wiliam & Hattie: The future of AI in education: 13 things we can do to minimize the damage, 2023: https://doi.org/10.35542/osf.io/372vr