Technology // October 2023 Intelligent Education
Artificial intelligence in education: Wild West or universal opportunity? Two key figures in Australian education share their insights on how AI could both profoundly disrupt education and change it for the better.

Dr Catherine McClellan (left) is Deputy CEO, Research and Assessment, at the Australian Council for Educational Research. She received her PhD in research and evaluation methodology at the University of Florida and is a psychometrician by training.

Dr Lucinda McKnight (right) is an Australian Research Council Fellow at Deakin University’s Centre for Research in Educational Impact. She is working on a government-funded project that looks at how teachers are reconceptualising writing for the digital age.

What does AI look like in education today?

CM: Like most new tools that become popular suddenly, it’s a bit of a wild west. The tool is still new enough that it hasn’t got widespread consistent use, so you’re going to see patches where it’s used very strongly, is well regulated and of high value, and other places where students are using it to write their essays and teachers may or may not be aware of that. I expect it will be a very influential tool but at the moment it’s a little chaotic.

LM: It’s definitely in evidence already but it’s uneven. ChatGPT was immediately banned in some educational jurisdictions, while independent schools have been able to forge ahead with using it in their classrooms. Some schools are already using it as a participant in their classrooms, throwing questions to it and using it for brainstorming, planning and drafting.

What will classrooms look like in five years’ time?

CM: We don’t know. AI is an extremely flexible tool, but chatbot use is probably not ideal. People will take this giant corpus, the GPT part of ChatGPT, and they’ll build specialised apps that meet students where they are, so they get exactly the right next step to push them forward. That’s the perfect scenario and if AI can help us get there then more power to it, but I’m not sure that will happen.

LM: These services are going to get exponentially better. It’s a bit hard to predict, but AI will be built into all word-processing software and all learning management systems. It will gradually become integrated into what we do in all sorts of ways, shapes and forms and any piece of writing will have potentially been generated in some way by AI.

Were you preparing for disruption by AI long before ChatGPT and Bard (Google’s conversational AI) came along?

CM: We had been working with a company for a couple of years to generate draft versions of a lot of the assessment items we create without eliminating the people – we think of it as a drafting tool. Given what we do, we handle a lot of data, and we collaborate with companies that use language models.

LM: I applied for this grant some years ago and won it in 2021. I had already been researching and writing about AI, trying to say to people ‘This is coming,’ but no one was really interested in it – apart from the Australian Research Council – until ChatGPT hit last November.



What opportunities can this technology bring to classrooms?

CM: We don’t talk enough about the likelihood that we’ll be able to use AI to lighten teacher burden. Teacher workloads have escalated so dramatically, I’d love to see some focus on AI helping teachers give up some routine, time-costly jobs. It sounds silly, but taking attendance burns a lot of time and effort – surely there’s some way AI can handle that for us, letting teachers focus more on instruction and supporting students. There’s always a need for another human being to interact and connect with students, and giving teachers more time to do that is for the good.

LM: There are ways that it could be incredibly efficient in terms of administration in education and in handling very routine kinds of communication in institutions. It can be great to brainstorm with, you can toss ideas around with it and get feedback on things. There’s also the whole area of personalising learning – but how that’s managed is going to be very important in terms of ethics and social justice.



What are some of those ethical considerations?

LM: We need to address this fundamental problem: these AI services are not ethical. They’re based on what’s been called ‘the biggest heist in history’ – the huge theft of copyright materials online. We need to think about environmental impacts, the way this technology is trained, the people, environments and societies that are harmed in its training. Then there’s the bias, the way AI services reflect the biases of the materials they are trained on – and bias in terms of what is not represented. Ethics needs to frame everything we’re doing with generative AI instead of being an add-on.

CM: Data and privacy come to mind immediately, particularly the collection of identifiable data. We don’t understand very clearly what the implications of all this collection are. And there’s permission, ownership and intellectual property – if something is proprietary, who owns it if you use it somehow in an AI chatbot? 



How is AI affecting equity in education?

CM: Covid exposed a lot of the constraints around digital poverty. AI will sit on mobile devices, which may make it a little less discriminatory in terms of resources, but internet access is going to be a big stopper in this. You need solid internet worldwide and that’s not something we have.

LM: Different versions of ChatGPT provide different levels of empowerment. Some students will have ChatGPT Plus at home and others will still have ChatGPT banned in their classrooms. Already you’re seeing these widening gaps in equity and access to IT.



What are the main challenges around its implementation in schools and universities?

CM: Formation of policy and the boundaries we set. The idea of banning ChatGPT is probably silly – just like banning the internet or banning calculators, banning things has never worked in the history of ever. How do we use this ethically and appropriately to support learning, rather than to invalidate it? Where do you set a boundary around use to say this is still student generated, this is still evidence of learning? And that’s hard.

LM: Access is the obvious one, when you have it banned in some schools and systems, and other schools forging ahead with it when it’s not ethical. And, just keeping up with the extraordinary pace of change, which is unlike anything I have ever seen in my 35-year career in education – even with the arrival of the internet and word processing. AI is changing all the time; it’s learning all the time.



By its very nature, policy will fall behind the pace of change...

CM: The big challenge is going to be how we keep up. We’ll form a policy and by the time we’ve agreed, the tech has changed.

LM: It’s very difficult. Schools have got to try to prepare students for further education and the workplace, but we know that changes within educational systems take a very long time. It’s going to be a struggle to create an agile and responsive policy for this. Schools are already flying by the seat of their pants.



Are you concerned about ChatGPT being used for cheating?

CM: In the immediate term, yes. People are going to cheat. Are they going to be able to use it long term to avoid doing any work? No. People will learn how to manage the tool and they learn best when they are in a relationship, when they can trust and fail and make mistakes. A machine isn’t quite there yet.

LM: We really don’t know how this is going to play out. We also have to ask, if ChatGPT can do the assessment task, is the assessment task really worthwhile and fit for purpose? ChatGPT writes in clichés all the time because it is drawing on material that has already been thought. Education, now more than ever, needs to be helping students to think what hasn’t been thought; it has to help students be really creative.



Does AI mean unlearning everything we know about learning?

CM: Individualised instruction is education’s ideal state. This moves us towards that, so in that way it is a good thing, but it is going to continue to be a challenge. Learning is going to change, teaching is going to change, but education serves purposes that I’m not sure we’re ready to give up. There is a societal imperative, for example, for some level of childcare while adults are at work. It wouldn’t be just taking education apart; it would be taking society apart and rethinking everything we do.

LM: I would love to think it was going to mean a different approach, an approach that is much more grounded in human beings’ lives and interests than mass school education has been thus far. But we have to be careful when we think about personalisation of learning, because personalisation invariably means categorisation and that means normative constraints on people. Already I’ve heard of people monitoring students’ biometrics to make sure they are still on task, which is a really dystopian view of the future.



Finally, what are your hopes and fears for the use of AI in education? 

CM: If we can form good approaches for policies and give teachers solid foundational training in how to use these tools in their classrooms, a lot of things can change for the better. If we don’t invest in careful thinking about how to use this stuff, it could be a change for the worse. I’d like to see us go in the direction of making it a change for the better.

LM: Two-and-a-half years ago I was in the excitement phase that so many people are in now. But seeing what people are planning to do with it without fully engaging in the critical and ethical dimensions of it has brought me to a more cautious and hesitant space. White male billionaires are running this show and are making extraordinary profits that are hyper-charging inequalities in society. What are the guardrails that governments are going to put in place to make the benefits of these things more available to all of us? Who is going to profit from generative AI revolutionising the three R’s? It keeps me awake at night.
