By Samuel Saunders

It is perhaps understandable that a number of schools, colleges and universities across the world have moved quickly to place blanket bans on the use of generative artificial intelligence (AI) technology.

The widely publicised advent of OpenAI’s ChatGPT application, which generates coherent and relevant textual content with only the briefest prompt from the user, has left educators reeling in panic about its potential for plagiarism and the damage it might do to academic integrity standards.


A significant number of institutions, from Western Australia to the United States and, most recently, India and France, have moved to outlaw ChatGPT completely.

The point about plagiarism is particularly interesting. Naturally, the conversation so far has revolved around whether the software’s ability to emulate any number of different human voices means that content seemingly written by students – but which is, in fact, completely artificial – might slip through the detection net that surrounds assessments.


This has sparked fear that the ‘essay’, that stalwart mainstay of universities’ assessment approaches, might be obliterated as a legitimate form of student evaluation, and in some cases has prompted calls to return to face-to-face examinations as the only supposedly ‘valid’ form of assessment.

But the point runs deeper than that. ChatGPT, like other apps built on the same technology, draws its information from a vast dataset that already exists and generates a response based on predictions of relevance to the prompt the user inputs.

In essence, it takes the prompt that the user gives it, scours the dataset that it has been fed, and provides a generated textual response that it thinks will be relevant to the question that has been posed.
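To make the ‘prediction’ idea concrete, the sketch below is a deliberately toy, hypothetical illustration – a simple bigram model in Python, nowhere near the scale or sophistication of ChatGPT – showing how text can be generated purely by predicting a plausible next word from patterns in an existing body of text.

```python
# Toy illustration only: a bigram model that generates text by repeatedly
# predicting a plausible next word from an existing corpus. ChatGPT rests on
# the same broad principle (prediction over existing data), just at vastly
# greater scale and with far more sophisticated statistics.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat the cat chased the mouse "
    "the mouse ran under the mat"
).split()

# Record which words follow which in the corpus.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(prompt_word: str, length: int = 8) -> str:
    """Generate text by repeatedly choosing a word seen to follow the last one."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:  # nothing in the corpus follows this word
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat chased the mouse ran under the mat"
```

The point of the sketch is simply that nothing in the output is ‘new’: every word is drawn from, and ordered by, data the model has already seen.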


Quite aside from the potential biases and prejudices that might have been built in when that dataset was fed into the software, and the algorithmic bias that emerges in its responses simply because of that dataset (both of which are separate issues), every response ChatGPT generates is drawn from existing information, even if it is rephrased or repackaged. Technically, then, it is all plagiarism.

As others have already pointed out, we should expect to be having some very complex conversations about copyright, originality and authenticity very soon.

The cat is out of the bag


But simply banning the technology is not going to work – and we can pick any number of tired axioms to explain why. The cat is out of the bag. We’ve opened Pandora’s box. What’s done is done.

Already, tech giants are moving to incorporate the technology into their existing offering.

Alphabet is planning to incorporate its new AI chat service ‘Bard’ into the existing Google search engine (though at the time of writing it has had a bit of a shaky start), while Microsoft has very publicly invested tens of billions of dollars in OpenAI and has already announced that the technology will eventually be incorporated into Microsoft 365 applications.

It is going to be – in fact it already is – everywhere, and it is likely to, well, perhaps not change the face of the internet as we know it (stopping short of such grandiose claims always seems sensible), but at the very least give its foundations a bit of a shake.

So what can universities do?

In short, they can learn from the past. Conversations like this – about a new piece of technology or a new resource that was supposedly set to severely damage the integrity of the university and its approach to teaching and research – have happened before.

We all thought Wikipedia sounded the death knell for universities, and before that it was the internet itself that was going to close us down. Heck, even the scientific calculator had its vocal detractors.

But none of those spelled doom for higher education. Instead, we learned to live with the technology, worked to understand how it operated, and did what universities do best: we probed, critiqued, analysed and explored to find out just what its limitations were – and then talked about this with our students.

Today, any number of conversations about how we all use, but should not directly cite, Wikipedia are happening in university classrooms, alongside interesting conversations about community-generated content and the nature of truth.

Universities and individual academics can, and should, do exactly this with generative AI technology. They should explore it, find out its limitations, consider its potential uses in the context relevant to their disciplines or teaching, and discuss all of this with students (who are likely to be using it already).

Consider the limitations

There are a lot of limitations to consider: ChatGPT, for example, has access only to a limited dataset and cannot reference anything beyond 2021. It is also not especially reliable – it generates information that it thinks is relevant to the user’s query, without much (or indeed any) regard for whether that information is correct.

It also falls down on common sense in quite a lot of places, because it does not have a concept of what ‘common sense’ is. A recent Twitter post by Professor Chris Headleand highlighted a particular problem with a ChatGPT-generated crossover story between Thomas the Tank Engine and Paw Patrol, which saw the eponymous Thomas traversing the ocean to save a stranded family of blue whales.

And that is before we even get into a conversation about ChatGPT’s problems with citing secondary sources in established academic referencing systems. It can generate references that fit the format of those systems, but it has no regard for whether the sources it cites actually exist – much of the time it has simply made them up.

All of these are valid criticisms of the technology that can be raised and discussed directly with students without banning it.

Helpful aspects

None of them remove the fact that generative AI technology can actually be exceptionally useful for academics and students alike in both assessment and normal pedagogic contexts – it is just a matter of thinking it through.

A conversation with ChatGPT, for example, might be able to help with the act of reading and understanding vast quantities of information, as it is very effective at providing overviews, summaries or simpler reconceptualisations of difficult, complex or lengthy concepts or histories.

Asking ChatGPT to summarise, for example, Freud’s concepts of the id, ego and superego is not really any different to performing a Google search on the same question. Similarly, generative AI tools can be extremely helpful for sounding out ideas by talking them through with the bot, or for helping to structure a framework for a submission, to name just two possibilities.

Is there really any difference between this act and students asking their peer what they are going to do for an assignment?
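As an illustration, a request like the Freud summary above could also be issued programmatically. The snippet below is a minimal sketch that assumes the OpenAI Python client and an API key set in the environment; the model name is chosen purely for illustration, and the exact interface varies between library versions.

```python
# Minimal sketch (assumes the openai Python package and an API key in the
# OPENAI_API_KEY environment variable); interface details vary by version.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[
        {
            "role": "user",
            "content": (
                "Summarise Freud's concepts of the id, ego and superego "
                "in three short sentences suitable for a first-year student."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Whether the prompt is typed into a chat window or sent through code like this, the underlying act is the same: a question goes in, a generated summary comes back.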

Academic users might also find it exceptionally helpful for generating scenarios for authentic, case study-based assessments – particularly if the students themselves use ChatGPT to generate the individual scenario to which their assessment will then respond.

This goes some way towards guaranteeing the assessment’s authenticity, and also guards against copying or other breaches of academic integrity, because the assessment itself is bespoke and student-generated.

Some concluding thoughts

It is possible to muse endlessly on the potential for generative AI technologies to shape the way universities teach and assess students in the future, and there is not the space to outline all of the possible approaches here.

The one fundamental truth, however, is that banning these technologies is neither the solution nor even a possibility.

In fact, banning generative AI services simply broadcasts a very strong message that universities assume students will use them to cheat, which does students a severe disservice. If students feel the need to cheat, there is likely to be a concrete reason for it that goes beyond the technology.

Dr Sam Saunders is an educational developer in the Centre for Innovation in Education at the University of Liverpool in the United Kingdom. He is particularly interested in assessment and feedback, curriculum design, and research-informed or research-led teaching. E-mail: [email protected].

Credit: University World News

