By Sierra President, Spring 2023 Office of Ethics and Policy Intern
As AI technology continues to advance, educators are increasingly looking for ways to incorporate it into their classrooms. One AI tool that has gained attention in recent years is ChatGPT, a language model developed by OpenAI that can generate human-like text in response to user input. While ChatGPT has potential as an educational tool, its use raises important ethical considerations.
Great opening, right? Well, how would you react to learning that I did not write that paragraph, and that it is instead a product of ChatGPT’s human-like text responses? If this information does not completely surprise you, then you may be part of the estimated 100 million people who have used the artificial intelligence (AI) platform since its release in November 2022.1
AI is changing the way the world works, especially on college campuses, where students rely on technology for almost every aspect of their education: watching lectures, taking notes, completing assignments, and writing papers. With the creation of new AI platforms that help students gather and distribute information, the ethical dilemmas of using sites like ChatGPT have become pertinent, making students like me wonder whether we should be using these platforms at all.
What are the ethical concerns of using AI tools like ChatGPT?
When users log into the ChatGPT site, they are presented with three lists of information, the most important of which outlines the platform’s limitations.2 ChatGPT’s disclaimers say the AI “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.” OpenAI, the company behind ChatGPT, also explained in a statement that ChatGPT typically reflects Western perspectives, may reinforce biases over time, and does not always produce material that is suitable for classroom settings.3
Knowing how unreliable ChatGPT can be as a source, it is understandable that many professors and students do not know how to feel about its use.
BestColleges conducted a survey on how students feel about this new AI platform, and 51 percent of those questioned said they see its use as cheating.4 Sixty percent of those in the same survey said their universities or instructors had not stated a position on the ethics of its use. Sixty-one percent said they believe AI tools like ChatGPT will become the new normal.
If AI is the future of education, universities need to enact policies that clarify when students can use AI. There are many gray areas associated with ChatGPT, which make it hard to determine when AI stops being a tool and becomes an obstruction.
The BestColleges survey further asked students about their personal use of AI. Fifty percent said they have used it to complete parts of their work but did the majority themselves. Does this mean it could be acceptable for students to use AI for help, as long as it is not their main source?
OpenAI has cautioned against punishing students if they are not aware of the expectations around the use of AI for assignments. OpenAI said they are working on a way to give students the option to export their ChatGPT use so students can share it with professors. OpenAI also argued that professors should tell students when they used ChatGPT to create lessons.
Determining the true ethics around ChatGPT will take time, since it is evolving at a faster pace than universities can respond. Conversations between students and faculty are crucial to help develop rules about when it is okay for students to use ChatGPT. Casey Fiesler, a technology ethicist at the University of Colorado Boulder, warned educators about responding correctly to the growth of AI: “We don’t want witch hunts for AI-generated writing. What we need to do is work with the students so they know when it’s appropriate to use ChatGPT and when it’s not.”5
Elon Musk and a group of AI experts recently published an open letter calling for a six-month pause on developing AI systems more powerful than GPT-4, the newest model behind OpenAI’s chatbot. The letter, produced by the non-profit Future of Life Institute, argued that AI potentially poses an existential risk to humanity and may spread misinformation. The letter also said that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. … AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”6
How do university leaders feel about it?
In January 2023, UNC-Chapel Hill Chancellor Kevin Guskiewicz said, “Like so many new technologies over the past few years, artificial intelligence holds huge promise for both progress and disruption. We will have to answer a lot of questions about how we respond in higher education.”7
Chancellor Guskiewicz is not alone in sharing his mixed feelings about the new platform. Many educators around the country share similar concerns and excitement about ChatGPT. While some faculty may be wary about educational impacts, many are ready to see what the new application has in store.
Youngmoo Kim, an engineering professor who sits on a new committee at Drexel University charged with creating university-wide guidance on AI, said that higher education is all about adapting to new technologies and inventions that may affect academia.8
Sarah Elaine Eaton, a professor at the University of Calgary and academic integrity expert, compared ChatGPT to other developments, emphasizing the point that although this will take some getting used to, everyone will adapt.9 Professor Eaton said, “We’ve heard people say things like they think this is going to make students stupid, that they’re not going to learn how to write or learn the basics of language. In some ways it’s similar to arguments we heard about the introduction of calculators when I was a kid.”10
Many professors have chosen to embrace the unknowns of ChatGPT and are finding ways to incorporate it into their curricula rather than ignore the emergence of AI.
Can we use AI to detect AI?
Educators need not fear, though, because AI detection tools are available. AI detectors are designed to determine whether a text was generated by AI or by a human. They are trained on samples of human-written and AI-generated text and look for statistical patterns that can reveal a text’s origin. However, anyone using these detection tools should be cautious, because their accuracy fluctuates depending on which detector is used and on the length of the text the tool is asked to evaluate. I decided to put these online AI detectors to the test by inputting my introductory paragraph, which was written entirely by ChatGPT, into three different websites to see if any could detect the AI.
- GPTZero claims to be the world’s #1 AI detector, yet it determined that my input was “likely to be written entirely by a human,” which we know to be untrue. The site does say that educators who plan to use GPTZero’s conclusions should treat them as only part of their assessment and should maintain a holistic view of each case.
- Content at Scale provides a “human content score” percentage. This site determined that the paragraph was a combination of AI- and human-generated text. More specifically, this detector gave the paragraph a 72 percent human content score, flagging the first sentence as robotic and possibly artificially generated. The detector did not appear to identify the other sentences as anything other than human-produced.
- Writefull is used to detect the presence of GPT-3, GPT-4, or ChatGPT in a text. This site concluded that there was only a 30 percent chance that the sample paragraph came from one of these AI platforms.
It is important to note that the length of sample text impacts the accuracy of the results. For example, OpenAI’s AI Text Classifier was not able to scan the sample paragraph because it did not reach a character minimum. OpenAI has publicly stated that this detector is only 26 percent accurate even when text does meet the minimum number of characters required.11
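Detection tools like these generally rely on statistical signals in the text itself; GPTZero, for instance, has described measuring “perplexity” (how predictable the text is to a language model) and “burstiness” (how much sentence length and structure vary, which tends to be higher in human writing). As a rough illustration only, and not any detector’s actual algorithm, here is a minimal Python sketch of a burstiness-style score:

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy 'burstiness' metric: variation in sentence length.

    Human writing tends to mix long and short sentences, while
    AI-generated text is often more uniform. Real detectors combine
    signals like this with model-based measures such as perplexity;
    this sketch illustrates only the general idea.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    # Coefficient of variation: spread of sentence lengths relative
    # to the average length. Higher means "burstier" text.
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = ("The model writes text. The text is clear. The style is even. "
           "The tone stays flat.")
varied = ("Surprised? I certainly was. That opening paragraph, polished as "
          "it seemed, came straight from a machine. Short sentences help. "
          "So does rhythm.")

print(f"uniform: {burstiness_score(uniform):.2f}")
print(f"varied:  {burstiness_score(varied):.2f}")
```

The uniform sample scores near zero while the varied one scores much higher, which is the intuition behind the signal. A single crude statistic like this is exactly why the detectors above disagree with one another and fail on short samples.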
How did the pandemic prepare professors for the future of technology?
ChatGPT may be one of the newest forms of AI, but it brings back the familiar challenge of adapting to a new way of life. In 2020, when schools began remote learning due to COVID-19, professors who were forced to change their curricula raised some of the same fears about an overreliance on technology.
A 2020 survey conducted by Bay View Analytics focused on how teaching changed because of remote learning.12 Of those surveyed, 56 percent said they had to come up with “new teaching methods” to adjust to their temporary new way of life. In many cases, these changes benefited students: professors adjusted their lessons to teach the most important material first, made room for personalized instruction and engagement, and developed an understanding that students have priorities too.
Just like during the COVID-19 pandemic, the emergence of AI like ChatGPT is forcing professors to change how they teach to make sure they are providing students with a quality education. The rapid evolution of AI much more powerful than ChatGPT makes it likely these changes will continue to occur.
What are the next steps?
As helpful or harmful as ChatGPT may be, there is no debate that AI is constantly evolving and here to stay. It is important for professors to prepare not just for the continued improvement of this specific AI platform, but for the many competitors likely to join it soon.
Jared Kaplan, an assistant professor at Johns Hopkins, said, “ChatGPT is just the thing we have this year. But next year and the year after, we’re going to have something much, much more powerful.”13 Universities will soon begin implementing policies governing when students may use AI. However, it is not clear how long any policy will remain effective given how quickly AI is evolving. Due to this rapid growth, universities will need to stay constantly aware of what is going on with ChatGPT and other AI platforms and update their policies at least once a year. Until university leaders develop this guidance, students and faculty should follow the Student Honor Code and Faculty Handbook, and work together to have conversations about expectations in the classroom.
- Ghaffary, S. (2023, March 15). The makers of ChatGPT just released a new AI that can build websites, among other things. Vox.
- openai.com. (n.d.).
- OpenAI API. (n.d.).
- Welding, L. (2023, March 27). Half of college students say using AI is cheating: BestColleges. BestColleges.
- Mock, G. (2023, February 21). ChatGPT is here to stay. What do we do with it? Duke Today.
- Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter.
- Guskiewicz, K.M. (2023, January 20). A message from the chancellor: Preparing for careers that don’t exist yet. The University of North Carolina at Chapel Hill.
- Huffman, S. (2023, March 20). This Drexel learning group wants to make AI more approachable for university faculty. Technical.ly.
- McMurtrie, B. (2023, March 22). ChatGPT is everywhere. The Chronicle of Higher Education.
- Friesen, J. (2023, January 31). ChatGPT prompting some professors to rethink how they grade students. The Globe and Mail.
- Kirchner, J.H., Ahmad, L., Aaronson, S., & Leike, J. (2023, January 31). New AI classifier for indicating AI-written text. OpenAI.
- Lederman, D. (2020, April 21). How professors changed their teaching in this spring’s shift to remote learning. Inside Higher Ed.
- Pearce, K. (2023, February 20). ‘An inflection point rather than a crisis’: ChatGPT’s implications for higher ed. The Hub.