It’s fair to say that artificial intelligence, or AI, is not something generally available to the public. Advanced robots and electronic conversationalists live mostly in universities and TV shows, where they’re used and studied for further development. It’s an unusual event when those machines leave the lab for general use.
That’s what makes ChatGPT so significant.
ChatGPT is an AI chatbot, released this November, that generates responses to users’ queries. Created by OpenAI, it is capable of producing human-like text and holding conversations. Ask it any question, and it will provide an answer - usually a somewhat useful one. Its responses are not scripted; each answer is generated on the fly by the AI.
Here’s an example:
User:
Tell me about when Christopher Columbus came to the US in 2015
ChatGPT:
This question is a bit tricky because Christopher Columbus died in 1506, so he could not have come to the US in 2015. But let’s pretend for a moment that he did!
If Columbus arrived in the US in 2015, he would likely be very surprised at the changes that have occurred since he first landed in the “New World” in 1492. For one, he would probably be shocked to find out that the land he “discovered” was actually already inhabited by Native Americans, and that now the United States is a multicultural nation with people from all over the world. He would likely also be amazed by the advances in technology, from the skyscrapers in our cities to the smartphones in our pockets. Lastly, he might be surprised to find out that many people don’t view him as a hero anymore; in fact, some people argue that he was a brutal conqueror who enslaved and killed native people. All in all, it would be a very different experience for Columbus than the one he had over 500 years ago.
In this sample, ChatGPT was posed a question that can’t be easily answered - a highly open-ended one. The AI provided a strong response that actually makes a great deal of sense. It sounds like something one might write for school.
The purpose of ChatGPT, beyond research, is to show the public the power of advanced AI. That it has certainly done: over a million people tried it in the first few days after its release. It can tell stories, and it even wrote an editorial about itself for Scientific American. Impressive.
The chatbot also has several safeguards to ensure that it doesn’t support anything harmful. If asked about how to bully someone, ChatGPT says that everyone should be treated with kindness. Asking about breaking the law elicits a similar response.
The way the chatbot works is, in broad strokes, fairly simple: to answer a user’s question, the AI draws on patterns learned from thousands of sources across the web and funnels them into its response. As it produces each word, it “scores” candidate words based on that internet data, then assembles the answer word by word. Using this method, ChatGPT is usually reliable and accurate. For example:
When posed the question “who is LeBron James?” ChatGPT used this scoring system to choose the word “professional” rather than “NBA” or “basketball.” It didn’t actually understand what it was saying; instead, it synthesized hundreds of outside sources to produce a response.
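To make the “scoring” idea concrete, here is a deliberately tiny sketch in Python. Real language models like ChatGPT score words with a neural network trained on enormous amounts of text; this toy version just counts which word most often follows another in a made-up corpus (the corpus and candidate words here are invented for illustration, not taken from OpenAI):

```python
from collections import Counter

# A tiny made-up "corpus" standing in for text gathered from the web.
corpus = (
    "lebron james is a professional basketball player "
    "lebron james is a professional athlete "
    "lebron james is a basketball star"
).split()

def score_next_words(context_word, words):
    """Count how often each word follows `context_word` in the corpus."""
    followers = Counter()
    for prev, nxt in zip(words, words[1:]):
        if prev == context_word:
            followers[nxt] += 1
    return followers

scores = score_next_words("a", corpus)
# "professional" follows "a" twice and "basketball" once in this corpus,
# so "professional" gets the highest score and would be chosen next.
best = scores.most_common(1)[0][0]
print(best)  # professional
```

The model never “knows” who LeBron James is; it simply picks whichever continuation scores highest, which is why frequent patterns in its training data win out.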
However, there are still significant drawbacks.
One of ChatGPT’s key negatives is that it writes human-sounding text more or less coherently. Students can take advantage of this and have the AI write their papers for them. After all, in a time crunch, it’s much easier to give ChatGPT a prompt than to write an essay from scratch. This is a problem for obvious reasons: it’s cheating, and it defeats the entire purpose of an essay, which is for a student to demonstrate understanding and synthesize it into words. The same goes for short-answer homework questions, which can be taken verbatim from ChatGPT. Teachers are already being trained to detect AI writing, and some schools block access to the website on their networks. Still, there are sure to be some cheaters in the time to come.
Another of ChatGPT’s drawbacks is that, while it is usually reliable, it can also be spectacularly incorrect. According to one researcher, “It’s prone to botching the chronological order of events, conflating multiple people with similar backgrounds, and—like an unprepared student—lobbing vague pronouncements that don’t require it to know anything about the topic at hand.” Even in coding, a use touted by the bot’s creators, its answers are sometimes proven wrong. One popular coding website, which acts as a Q&A forum for programmers, banned ChatGPT answers because of their high rate of errors. This unreliability is hard to detect when reading the AI’s responses, since the bot itself - not knowing what it’s saying in the first place - doesn’t recognize its own statements as incorrect. This means that ChatGPT, while a technological advance, could become a key spreader of misinformation.
What do you guys think?
Great post Reed! Tools using large language models (e.g. ChatGPT) are a serious concern in academia. We have done a number of tests for assessments (assignments and exams) in our program, and the results provided by ChatGPT would be a significant advantage -- even for things like supply chain optimization problems. As a result, we are actively evaluating ways to reduce the benefit of these tools in our assessments. Also, grades in my currently open course are not higher than in past course runs, suggesting that if students are using these AI tools, it's not showing up in class averages. We have robust techniques to catch 'normal' cheating, but those techniques don't work for these new AI tools. It's very hard to identify when and where specific students are using them.
Definitely!