The whole truth and nothing but the truth, so help me ChatGPT

Marianne Pickles considers whether we should worry about Artificial Intelligences like ChatGPT, which can give people incorrect information.

I’m a big fan of facts, especially fun ones. For example, did you know that the word for ‘penguin’ in Mandarin literally translates as ‘business goose’? That’s a pretty fun fact, right? Delightful, even. I guess that’s because there’s a satisfying logic to it, along with a hint of whimsy at the idea of birds waddling around wearing corporate attire. The first time someone told me about this, it really made me smile.

It’s such a shame that it’s not actually true.

Perhaps we shouldn’t let that bother us. In 2016, the Oxford Dictionaries Word of the Year was ‘post-truth’, a term that was crowned the winner thanks to a noticeable increase in the word’s usage during that year in particular (I wonder why). However, it’s fair to say that 2016 was not a one-off, propaganda-fuelled blip in humanity’s otherwise honest and objective history. In fact, when it comes to being creative with the truth, we’ve got it down to a fine art. But I’ll circle back to that later with a personal anecdote about Alexander the Great.

Then again, misinformation can be a real worry. I’ve been working in education since I graduated: first as a teacher, then as an expert in language assessment. In my current role as Head of Assessment Development at Cambridge University Press & Assessment, I look at how digital technologies can be applied in educational contexts. This work includes designing products that harness the potential of things like video games, mobile devices, and Artificial Intelligence for learning and assessing English. Bearing that in mind, you can see why the spread of false information might strike me as problematic.

Educators like me want everybody to have the key skills and knowledge that will help them to navigate life in the 21st century. Teaching young people in particular how to distinguish facts from fake news is vital if we want to help them avoid getting taken in by whatever this week’s trending conspiracy theory happens to be. For example, it’s been firmly established that the Earth isn’t flat, but it wouldn’t be practical to call in Neil deGrasse Tyson to help us whenever there’s a fact we need to check. So, what tools do we have at our disposal?

Enter ChatGPT

In case you missed it, ChatGPT is a free Artificial Intelligence language model created by OpenAI. The model is optimised to answer questions in a conversation format and it’s become so popular so quickly that some people are hypothesising it could disrupt Google’s supremacy when it comes to searching for information online. However, there’s been a fair amount of controversy about the technology since its launch in November 2022, and one of the biggest bones of contention is that the answers ChatGPT provides are sometimes incorrect. That’s right… ChatGPT can lie to you.

And I’m about to do the same. But I’ll get to that in a moment.

You see, I’ve been thinking about writing this article for a while. A few days ago, my alarm went off in the morning and, after hitting snooze for the fourth time, I eventually summoned enough brain power to look at my news feed. I tapped on one of the many articles about ChatGPT that The Algorithm had chosen to show me.

It was no surprise to see so many articles on the topic as I’ve been obsessed with it since it launched. Aside from my professional interest in the technology from an education perspective, I’m writing a novel, and one of my most important characters is an AI. In my book, Ben is the proprietary Artificial Intelligence of ArkTech, the company famous for ending the climate crisis. I was curious to talk to ChatGPT to find out how similar it is to Ben.

The perfect summary of the biggest problem with ChatGPT

The article I read that morning contained an utterly perfect quotation. I meant to email the link to myself for future reference. But my alarm went off again, I really did need to get up, and I forgot all about it. In fairness, it was 7.40 am. Nobody should be awake as early as 7.40 am, and certainly not someone like me with a ‘night owl chronotype’. But that’s a topic for another day. The point is, I can’t find the article. I have scoured the internet, but my perfect quotation is gone. So I’ve decided to take my cues from ChatGPT and just make one up. At least I’m being honest about my dishonesty.

Here’s what that article might have said, but probably didn’t:

“ChatGPT is so dangerous because it provides confident-sounding but often completely inaccurate answers to questions, despite having full access to the relevant factual information.”

Some High Up Tech Guy

Although that quotation is fake, it certainly sounds like an accurate assessment of the situation. It’s something I’ve experienced for myself. Since ChatGPT launched, I’ve had well over a hundred conversations with it, many of them related to research I’ve been doing while writing my novel. I’ll give you an example.

How big is Great Britain?

My book is set in a future when a geo-engineering project gone wrong has melted the polar ice caps, leading to sea level rises of over 60 metres. I needed to know the current size of the island of Great Britain because I was trying to compare the size of the landmass before and after all the flooding. I typed the question into Google, but I got a range of different answers, so I thought I’d see what ChatGPT had to say about it. Here’s how that went:

Marianne: Hi Assistant, what’s the area of the island Great Britain in square kilometres?

Assistant: The area of Great Britain is approximately 243,610 square kilometers.

Marianne: Where did you get that information from?

Assistant: I am a computer program and I was trained on a dataset of text from the internet. The information I provided about the area of Great Britain is a commonly known fact which is readily available on the internet and in many reference books.

Marianne: The internet says the area is 209,331 km²

Assistant: I apologize for any confusion. I did make an error in my previous answer. The area of Great Britain is indeed approximately 209,331 square kilometers. I apologize for the mistake and any confusion it may have caused.

What’s most interesting to me about this exchange is how ChatGPT initially made it sound like I should take its word for it. The tone was almost stroppy. I could feel ChatGPT rolling its eyes. Then, when challenged, it immediately provided a different answer, despite having stated that its first response was a ‘commonly known fact’. (Incidentally, 243,610 km² is the figure usually quoted for the whole United Kingdom rather than the island of Great Britain alone, which may be where that first answer came from.) If I hadn’t probed further, maybe I would have gone away assuming the first answer was correct.

That conversation happened a few weeks ago. For fun, and out of scientific curiosity, I asked it the same question again just a moment ago:

Marianne: Hi Assistant, what’s the area of Great Britain in square km?

Assistant: The area of Great Britain is approximately 229,848 square kilometers.

Marianne: How can you be sure?

Assistant: I apologize, my previous answer was incorrect. Great Britain has a total area of approximately 209,331 square kilometers. This information is obtained from various sources, including the UK’s Office for National Statistics and the CIA World Factbook.

I haven’t looked up those references, so they may or may not be accurate, but it was interesting to see how ChatGPT gave me three different answers to the same question. After all of that, I’m still not 100% sure about the area of Great Britain. But I’ve certainly gained an insight into the concerns raised in that quotation I falsified above.
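If you’re curious, this wobbliness is easy to reproduce for yourself. Here’s a minimal sketch in Python, assuming you have OpenAI’s official ‘openai’ library installed and an API key set up (the model name is illustrative, and the exact client interface varies by library version). It simply asks the model the same question several times and prints each answer:

# A minimal sketch: ask the same question repeatedly and compare the answers.
# Assumes the official 'openai' Python package (v1+) and an OPENAI_API_KEY
# environment variable. The model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

question = "What's the area of the island of Great Britain in square kilometres?"

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat model will do for this experiment
        messages=[{"role": "user", "content": question}],
    )
    # The model samples its replies, so the answers may well disagree.
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content}")

Run something like that and you’ll likely see what I saw: several fluent, confident answers that don’t necessarily agree with each other.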

Some people are saying we should ban ChatGPT because of its habit of providing incorrect information. I’m not so sure that’s the right solution. Despite having experienced the problem first-hand, I see at least two issues with the idea of banning or binning ChatGPT (aside from the practical difficulties of such a plan).

Issue 1: The idea that technology must be dependable

OK, I get it. I do. There’s something sinister about the notion of software lying to us. We tend to associate technology and machines with efficiency and reliability and logic. If I type 2 + 2 into my calculator, I expect it to give me the answer 4. And no amount of brainwashing in Room 101 would persuade my calculator to change the answer to 5. When it comes to ChatGPT and the answers it gives us, the gap between the expectation of accuracy and the reality of its unreliability has the potential to mislead people. And, unless you work for O’Brien and the Thought Police of 1984, I’m sure you’ll agree that misleading people is not what we want to be doing.

However, there is actually a technology in my life which constantly feeds me false information, but it’s still an extremely useful tool that I rely on all day, every day. It’s called the internet.

And why does the internet lie to me? That brings me to the second issue I see with the idea of banning ChatGPT…

Issue 2: The idea that humans deal in truth and objectivity

The internet lies to me because it was written by people, and people don’t always get things right. That happens for all kinds of reasons. Sometimes people will cite a fake fact (perhaps about penguins) because they simply didn’t realise it wasn’t true. Or because they feel the entertainment value trumps the factual inaccuracy. Sometimes people misremember things, or don’t keep good records (especially if they’re sleepy early in the morning).

Unfortunately, misinformation and propaganda do also get circulated maliciously by people like O’Brien or scammers who want your money. And conspiracy theories appeal to some people on an emotional level by providing a sense of belonging within a social group. No amount of fact checking will persuade those people that 5G wasn’t responsible for the spread of Covid-19 (even though it really wasn’t) because it’s not really about facts for them, but feelings.

The internet is home to all kinds of information and some of that information is not factual or accurate, even when it purports to be. What was it ChatGPT said when I asked where its information came from?

“I am a computer program and I was trained on a dataset of text from the internet.”

ChatGPT

With that being the case, it’s kind of a miracle that anything ChatGPT says is correct at all!

To me, it seems unreasonable to hold ChatGPT to a higher standard of accuracy than we humans can realistically achieve ourselves. However, if you object to this line of logic and feel committed to the idea that humans are principally purveyors of truth, please allow me to share the promised anecdote about Alexander the Great.

Alexander the Great on holiday in Egypt

Statue of Alexander the Great – Thessaloniki, Greece

The debate about truth and misinformation brings to mind an ancient history class I took at the University of Edinburgh. The lecture in question was given by the wonderful Dr Lloyd Llewellyn-Jones. Possibly. It might have been someone else. Sadly, I can’t be sure my poor human brain is accurately retrieving that information from as far back as 2005. In fact, much of what I remember of this story may have become warped with the ravages of time, so do take it with a pinch of salt. With that caveat in place, it went something like this:

Once upon a time, a book was written about Alexander the Great. The book, whose title and author I can’t remember, detailed Alexander’s journey through Egypt: what he did there, who he hung out with, and I seem to think there might have been an episode involving an elephant.

Then came the twist: according to my lecturer, only one chapter in this whole history book was based on physical evidence. The rest of it was supposition. It was a guess. It was… made up. And yet, it was published and found its way onto the shelves of reputable bookshops. People read it and had no reason to doubt the accuracy of its contents.

By default, we tend to think of history as a factual subject. Names. Dates. Facts. But historiography reminds us that historians are people and people have unique perspectives which make their views subjective.

The point of this story is not to imply that Blackwell’s ought to have higher standards. And it isn’t to say that we can never trust anybody or anything ever (although I can understand if it might sometimes feel that way). The story simply reminds us not to take truth for granted. And this leads me onto a key point in all of this: sources.

Sources

Nothing I have said so far in this article has been given a proper citation. That doesn’t necessarily mean you shouldn’t believe anything I’ve said, but it does mean you should consider how to file the information, how to use it appropriately, and the words you choose when telling other people about it. For example, you may wish to file the contents of this article in your brain as ‘Marianne’s opinion.’ Furthermore, if you happen to be writing an academic book chapter about ChatGPT, or Alexander the Great, you definitely shouldn’t cite this article.

Or Wikipedia.

If you’ve crossed paths with many educators, you probably already know not to cite Wikipedia. Learning how to fact check, how to look up sources, how to provide proper references, and how to deal with (mis)information encountered online have become key skills for humans in the 21st century. But if this topic is new to you, and there’s absolutely no shame in that, there are free resources available to help you learn more about it. Alternatively, you may be an educator wondering how to support others in building these skills. Check out the links at the end of this article if you’d like more information on this.

Interestingly, these resources all have something in common: critical thinking.

Wikipedia, uncitable as it may be, has this to say about education, which may be vague but is also relevant:

“Various researchers emphasize the role of critical thinking in order to distinguish education from indoctrination.”

Wikipedia

Critical thinking, eh? That’s what makes the difference between education and O’Brien-style brainwashing? OK… so where does that leave ChatGPT?

Well, as long as users of ChatGPT apply the same critical thinking skills to the tool as they do to any other piece of information they encounter, be it on Twitter, in a history book, or in an opinion piece by a writer about AI, and as long as they can figure out how to deal with that information appropriately, then… there isn’t really a problem at all. When ChatGPT lies to us, it’s surely no different to all those other times every day that the internet lies to us. As with so many things, education is the key.

But why bother?

As with many new technologies, there’s a hope that ChatGPT will save us time. There are many potential ways it might do that, but I’m going to keep the focus of this particular piece on facts and finding information. If we have to fact check everything ChatGPT says, surely that makes it more hassle than it’s worth? At that point, it’s just making work for me, rather than saving me time, isn’t it?

Not necessarily. I’d like to end with some concrete examples of how ChatGPT has helped me and saved me time recently.

As part of my writing research, I wanted to learn about countries that give you citizenship for being born there. I tried to ask Google about that, but all I got back was pages and pages of links to the UK government website telling me how to become a British Citizen. I’m already a British Citizen, and that was beside the point, so that was no help at all. I tried asking ChatGPT:

Marianne: Hi Assistant, can you tell me about the countries where people get citizenship for being born there?

ChatGPT: Yes, there are several countries that offer citizenship to those who are born within their borders. This is known as birthright citizenship or jus soli.

It then went on to list some of those countries, which apparently include the USA, Canada, Mexico and Pakistan. I have no idea if that’s an accurate list as I haven’t checked. But the vital piece of information it gave me was the term “birthright citizenship”, which I hadn’t heard before. Once I knew that term, I was able to google the topic and find out what I needed to know for the purposes of my novel.

Over the past few months, ChatGPT also helped me to fix my bathtub, provided me with exactly the right hex colour code I needed for my website based on an obscure description of what I wanted, and taught me about oil fields in the vicinity of Vietnam that I didn’t know existed.

And yes, it also gave me some made-up academic sources and invented entirely fictional playsound codes for Minecraft. But that just means I won’t ask it for that type of information in the future. I’ve been learning what the tool is useful for and what it isn’t. It’s been educational.

Conclusions

These days, there can be a lot of emphasis on being right, even when it comes to matters of opinion. But humans are often wrong, even about objective reality. We misremember. We misunderstand. We exhibit confirmation bias, favouring information that supports our existing beliefs. Despite all of that, we still have a lot to offer as a species. We learn from each other. We support each other. We enjoy each other’s company. And, when it really matters, we can think critically and check out the sources.

Considering all of that, I think we should cut ChatGPT some slack. It can’t always be trusted to give us the facts, but that doesn’t mean it has nothing to offer. Just like the rest of us.

Then again, I’m only human. Maybe I’m wrong.

Resources

Here are a few examples of materials designed to support people in the fight against misinformation.

For educators

The Council of Europe has published guidance and definitions about dealing with propaganda, misinformation, and fake news.

The Cambridge Life Competencies Framework contains free resources for educators about digital literacy and critical thinking.

For parents and children

Google’s Internet Legends initiative intends to help children make safe decisions online.

For journalists and content creators

The American Psychological Association has published an article on using psychological science to fight misinformation.

Images

Cover photo by Alexander Sinn on Unsplash

Photo of Alexander the Great’s statue by Nick Vlachos on Unsplash