
Computers in the Schools

Interdisciplinary Journal of Practice, Theory, and Applied Research

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/wcis20

ChatGPT and Generative AI Technology: A Mixed Bag of Concerns and New Opportunities

Judy Lambert & Mark Stevens

To cite this article: Judy Lambert & Mark Stevens (09 Sep 2023): ChatGPT and Generative AI
Technology: A Mixed Bag of Concerns and New Opportunities, Computers in the Schools, DOI:
10.1080/07380569.2023.2256710

To link to this article: https://doi.org/10.1080/07380569.2023.2256710

Published online: 09 Sep 2023.

Full Terms & Conditions of access and use can be found at


https://www.tandfonline.com/action/journalInformation?journalCode=wcis20
Computers in the Schools
https://doi.org/10.1080/07380569.2023.2256710

ChatGPT and Generative AI Technology: A Mixed Bag of Concerns and New Opportunities
Judy Lambert (University of Toledo, Toledo, Ohio, USA) and Mark Stevens (Bowling Green State University, Bowling Green, Ohio, USA)

ABSTRACT

ChatGPT has garnered unprecedented popularity since its release in November
2022. This artificial intelligence (AI) large language model (LLM) is designed
to generate human-like text based on patterns found in massive amounts of data
scraped from the internet. ChatGPT is significantly different from previous
versions of GPT in its quality of output, capability to interact and hold
human-like conversations, enormous speed, and ability to have its output
refined by users or experts. New iterations of ChatGPT, as well as open source
and alternative LLMs and ChatGPT plugins, extend the current capabilities of
ChatGPT and offer unlimited opportunities to change how we do things. Despite
its newfound popularity and capabilities, ChatGPT is fraught with concerns
such as cheating, misinformation, bias, abuse and misuse, and privacy and
safety. On the other hand, the integration of ChatGPT in the classroom prompts
us to envision better ways of providing instruction and assessment of writing
skills. ChatGPT also provides unparalleled approaches for personalized
learning. As educators, we must consider and deal with the serious concerns of
using ChatGPT but simultaneously explore how this AI technology can enhance
and extend current methods of instruction. In this paper, the authors explain
what ChatGPT is, how it works, and what future iterations may bring. They also
present concerns, opportunities, and educational implications of using ChatGPT
in the classroom.

KEYWORDS: Artificial intelligence; ChatGPT; large language model; LLM; chatbot

Introduction
ChatGPT (OpenAI, 2023a) seems to have ushered in an unparalleled
curiosity by the public in artificial intelligence (AI) and sparked a flurry
of controversy over its value and potential uses. According to McCarthy
(2007), artificial intelligence “is the science and engineering of making
intelligent machines, especially intelligent computer programs” (p. 2).
ChatGPT, an AI application that simulates human conversation with aston-
ishing fluency, was launched by its creator OpenAI in November 2022.

CONTACT Judy Lambert [email protected] Department of Educational Studies, University of Toledo, Toledo, Ohio, USA.
© 2023 Taylor & Francis Group, LLC

Within five days, over a million users were playing around with ways to
use the newly released, free version of the application and within two
months, the application reached 100 million people (Conover et al., 2023).
Compare this to Facebook and Instagram, which took several months to
reach a million users (New Delhi, 2023). However, one only has to read
the existing literature on ChatGPT to discover an emerging mixed bag of
perils and possibilities for education. On one hand, concerns such as
academic integrity; accuracy of information or misinformation; biases,
discrimination, and stereotypes; misuse, abuse, and other ethical issues; and
privacy and security have yet to be addressed. On the other hand, ChatGPT
offers innovative teaching approaches, new assessments and curriculum,
and personalized learning. To better understand this mixed bag and be
better informed when making a decision about using ChatGPT, this article
will explain what ChatGPT is, how it works, problems that plague its use,
and how it can be integrated in an educational setting. Readers will gain
a deeper understanding of the critical concerns and new opportunities
presented by ChatGPT, as well as the implications for education.

What is ChatGPT and how does it work?


ChatGPT, made accessible via a web-based browser interface or mobile
app, is an AI application designed to generate text that simulates human
conversation in response to user inputs (OpenAI, 2023a). ChatGPT is
part of a branch of AI called generative AI (GAI). While it has been the
most viral, ChatGPT is only one of the many AI applications classified
as generative. Companies such as Microsoft, Amazon, and Google have
developed their own generative AI models (Livingston, 2023) and busi-
nesses have for several years integrated text-based generative models in
commonly used service-oriented applications such as virtual assistants and
chatbots. To use ChatGPT, a user asks a question on a prompt line and
ChatGPT responds by generating an answer in mere seconds. The user
can then ask ChatGPT to rewrite the answer in a particular style or
reading level, expand on the response, or use different words or phrases
in its response. To help ChatGPT learn and improve its response, the user
can rephrase their question, ask follow-up questions, or ask ChatGPT for
additional information based on feedback or suggestions. OpenAI (2023b)
provides a list of example applications or tasks that ChatGPT can handle
such as grammar correction, language translation, keywords, analogy maker,
notes to summary, summarization, spreadsheet creation, turning a text
description into a color, recipe creation, turn-by-turn directions, and more.
From its release, ChatGPT was accessible only via a web-based interface,
but in March 2023 OpenAI made GPT-3.5 Turbo available through an
application programming interface (API), allowing ChatGPT to be
connected to third-party applications (Livingston, 2023). APIs extend
ChatGPT’s capabilities and allow it to perform a wider range of tasks that
involve understanding or generating natural language, code, or images
and speech-to-text. Applications like Snapchat, Quizlet, and Instacart have
already been enhancing their capabilities by making use of the ChatGPT
API. This newest iteration reveals just how rapidly the area of generative
AI is growing and is projected to grow in the coming months.
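
To make the API route concrete, a minimal sketch follows of how a third-party application might call the GPT-3.5 Turbo model. This example is an illustration added here, not part of the original article; it assumes the openai Python package as it existed around early 2023 (v0.27-style interface), and the API key, system message, and prompt are placeholders.

```python
# Hypothetical sketch: a third-party application calling the GPT-3.5 Turbo
# model through OpenAI's chat completions API (openai package, v0.27-style).
# The API key and prompt below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secure location in practice

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Rewrite this paragraph at a 6th-grade reading level: ..."},
    ],
    temperature=0.7,  # higher values yield more varied wording
)

print(response["choices"][0]["message"]["content"])
```
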
The GPT acronym stands for generative pre-trained transformer (Neto,
2023). The term generative in ChatGPT means that it can generate and
predict new text from simple natural language commands such as “write
a summary about artificial intelligence.” For an application to generate
responses like this, a computerized language model is fed massive amounts
of text from the internet, books, or other sources and then pre-trained to
mathematically predict a word in a sequence of words (Ruby, 2023).
According to Ruby, this most basic sequential prediction model is limited
in two ways. First, the model does not know how to place more value
on some words over others in the surrounding context. Second, because
the data is processed individually and sequentially rather than in its
entirety, the context is fixed and can only extend to several steps in the
sequence. In response to these limitations, Google developed the transformer
neural network architecture, which can process a mass of data simultaneously
and give varying weight to different parts of the data set in relation to
any position of the language sequence.
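
To make the limitation concrete, here is a deliberately simple toy example, added for illustration and not drawn from the article (and not how GPT is actually built): a bigram model that predicts the next word from only the single preceding word, so its context is fixed and extremely short. The corpus is an assumption for demonstration.

```python
# Toy next-word predictor with a fixed, one-word context (a bigram model).
# This only illustrates the "limited sequential context" problem; real GPT
# models are far more sophisticated.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)          # word -> counts of words that follow it
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent word observed immediately after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat": the model only ever sees one word of context
```
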
Transformers enable enormous improvements in large language models
(LLMs) like ChatGPT and allow the model to process a significantly larger
dataset. A transformer, inspired by the neurological structure of the human
brain, refers to how the different parts of a computer program called a
neural network are arranged (UpGrad, 2022). Imitating the human brain,
information is passed between different components of the network sending
neural signals from one end to the other, acting as a blueprint or plan
to determine how the model works. Just like the human brain, how the
network is arranged affects how it can solve different problems such as
recognizing images or understanding language. The network can find and
observe every aspect of the dataset and determine whether the different
parts of the data are connected.
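
The "varying weight" idea can be sketched numerically. The following toy example is an assumption added for illustration, not code from the article: it computes scaled dot-product attention, the core transformer operation in which every position in a sequence attends to every other position.

```python
# Toy scaled dot-product attention over three token vectors.
# Real transformers learn separate query/key/value projections and use many
# attention heads and much larger dimensions; the numbers here are random.
import numpy as np

def attention(Q, K, V):
    """Each position mixes the values V, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))        # 3 positions, 4-dimensional vectors

output, weights = attention(tokens, tokens, tokens)
print(weights.round(2))  # row i shows how much position i attends to each position
```
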
At the time of this writing, the free version of ChatGPT is built on top
of the GPT 3.5 model. With each iteration beginning with GPT-1, and
now up to GPT-4 (i.e., ChatGPT Plus only available to paid subscribers),
the models were trained on significantly more data than the previous
version, providing the LLM with a larger knowledge base and capability
to perform a wider range of tasks (M. Ruby, 2023). All GPT models can
differentially weigh parts of the text sequence to infer meaning and context
and to better understand relationships between words. As a result, this
produces more intelligible responses. A supervised training approach incor-
porating human feedback and data labelers was used to better align the
model’s output to user context and intent. In this supervised method, the
LLM is trained on a labeled dataset in which each input is associated
with a corresponding output (Gewertz, 2023). Through reinforcement
learning and human feedback the GPT-3 model was refined (Chatterjee
& Dethlefs, 2023). “Reinforcement learning is a feedback-based machine
learning technique in which an agent [i.e., computer program] learns to
behave in an environment by performing the actions and seeing the results
of actions…The agent learns from its own experience without any human
intervention” (JavaTpoint, 2023, para. 1). This refinement made the model
even more robust and allowed ChatGPT to combine massive amounts of
data with the ability to interact and hold human-like conversations. This
also drastically increased the model’s capabilities, enabling it to perform
tasks it was not explicitly trained to do, and to do so in an extraordinarily
human-like manner.
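
The reinforcement idea can be illustrated with a deliberately oversimplified sketch. This is not OpenAI's actual training pipeline (which uses a learned reward model and policy-gradient methods); it only shows the basic loop the paragraph describes: sample an output, score it with feedback, and shift the model toward behavior that earns higher reward. All names and numbers are assumptions for illustration.

```python
# Oversimplified illustration of learning from feedback (not OpenAI's pipeline).
# The "policy" is just a probability distribution over two response styles;
# feedback rewards the "helpful" style, and repeated updates shift probability
# mass toward it.
import random

policy = {"terse": 0.5, "helpful": 0.5}

def feedback(style):
    """Stand-in for human raters: they usually prefer the 'helpful' style."""
    return 1.0 if style == "helpful" and random.random() < 0.9 else 0.0

learning_rate = 0.05
for _ in range(200):
    style = random.choices(list(policy), weights=list(policy.values()))[0]
    policy[style] += learning_rate * feedback(style)    # reinforce rewarded behavior
    total = sum(policy.values())
    policy = {k: v / total for k, v in policy.items()}  # renormalize to a distribution

print(policy)  # "helpful" ends up with most of the probability mass
```
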
As of this writing, the most notable advancement and the reason gen-
erative AI has been able to scale up so rapidly is its use of the self-su-
pervised training method. Employing this approach, an LLM can now use
its own data to teach itself the relationships between words (Neto, 2023).
This method essentially increases the ability of an LLM to generate more
sophisticated human language at a scale never seen before that more
closely mimics human interactive conversation (Willyerd, 2023). An LLM
can understand the syntax and semantics of natural language and conse-
quently, generate coherent and meaningful text in a conversational context
(Gewertz, 2023). With this latest development, the complex structures of
the transformer neural network can take in multiple inputs, identify pat-
terns among that data and relationships between words and phrases in
natural language, make predictions based on its own data set, and classify
the data to produce human-like output (i.e., deep learning) (Pupuweb,
2023). In summary, ChatGPT is generative because it generates results, it
is pre-trained or based on all the data it is fed, and it is built on trans-
former architecture that is capable of weighing text relationships to under-
stand context and meaning.
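
Why this counts as "self-supervised" can be shown with a tiny sketch, included here as an illustrative assumption rather than material from the article: the training labels come from the text itself, with no human annotation, because each next word serves as the target for the words before it.

```python
# Turning raw text into (context, next-word) training pairs with no human
# labels: the text itself supplies the targets, which is what makes the
# training "self-supervised."
text = "large language models learn statistical patterns from raw text".split()
context_size = 3

pairs = []
for i in range(context_size, len(text)):
    context = text[i - context_size:i]   # preceding words are the input
    target = text[i]                     # the very next word is the label
    pairs.append((context, target))

for context, target in pairs:
    print(context, "->", target)
# e.g. ['large', 'language', 'models'] -> 'learn'
```
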
ChatGPT is not the first generative AI technology to provide commer-
cially available writing and question/answer services for the general public.
In fact, these services have been around for almost a decade and even
OpenAI had several other language models prior to ChatGPT. Recently,
there has been a proliferation of tools built upon GPT-3 technology for
writing. Chaudhary (2023) offers a list of 15 free AI content generator
platforms such as Ryter, LLC (2022) and CopyAI, Inc. (2023). Then there
are platforms customized specifically for academic writing like Article
Forge (Glimpse.AI, 2023) and Grammarly (Grammarly, Inc., 2023). One
of the earliest adopters of AI in media, Associated Press (AP), used the
Wordsmith natural language generation platform in 2014 to automate its
quarterly earnings reports. According to Daly (2017), AP generated more
than 3,000 stories per quarter compared to only 300 stories previously,
freeing up 20% of the time reporters typically spent covering
financial news. In 2016, the Washington Post used its own AI technology,
Heliograf, to write over 300 reports on the Rio Olympics and 500 reports
to cover the 2016 congressional and gubernatorial elections (Moses, 2017).
If generative AI applications are not new, why are some people so con-
cerned about ChatGPT?

ChatGPT concerns
Academic integrity

Despite its popularity and power, there has been an overabundance of
negative hype about using ChatGPT for cheating. Much of the early hype
is based on ChatGPT’s ability in test-taking. Choi et al. (2023) found that
the grades earned by ChatGPT on the University of Minnesota Law School
exams would be sufficient to graduate with a Juris Doctor or first law
degree. Similarly, Kung et al. (2022) found the application performed at
or near the passing threshold for all three portions of the U.S. Medical
Licensing Exam. This purported accuracy and easy accessibility make
ChatGPT an enticement for students. Intelligent (2023) surveyed 1,000
students from American four-year colleges and found that 30% of students
had used ChatGPT on written homework and of those, almost 60% had
used it on more than half of their assignments. Three-quarters of those
students believed that using ChatGPT is cheating, but they still use it
anyway. The vast majority of students preferred ChatGPT over a personal
tutor and nearly all of them had replaced some of their tutoring sessions
with ChatGPT, and 95% said their grades had improved since
studying with ChatGPT. In another study (Study.com, 2023) of over 200
educators and over 1,000 students, 89% of the students had used ChatGPT
to help with a homework assignment, 48% of them admitted to using
ChatGPT for an at-home test or quiz, 53% of students used ChatGPT to
write an essay, and 22% used ChatGPT to write an outline for a paper.
Additionally, 72% of college professors and 58% of grade school educators
were concerned about ChatGPT’s impact on cheating and a third (34%)
of the educators believed that ChatGPT should be banned in all schools
and universities. Also, BestColleges surveyed 1,000 U.S. students who were
currently enrolled in on-campus, online, or hybrid undergraduate or
graduate degree programs (Welding, 2023). Of students surveyed, 43% of
them had experience using ChatGPT and 50% of those students had used
ChatGPT to help them complete assignments or exams, which represented
22% of students in the survey. Additionally, 30% of the students had used
ChatGPT in a majority of their assignments and 17% had used it to
complete and turn in assignments with no changes to ChatGPT’s output.
Interestingly, six in ten (61%) students believe AI tools like ChatGPT will
become the new normal.
Online academic tutoring is feeling the impact of students using
ChatGPT. Chegg is a multi-billion dollar company that helps college stu-
dents bypass traditional study methods by giving them access to its data-
base of about 46 million textbook and exam problems, and to 24/7 contact
with freelancers from India who answer students’ questions in real time
(Bohannon, 2023). The company recently reported a 46% decline (nearly
$1 billion in market valuation) in its shares, which it attributes to the growing
popularity of AI tools like ChatGPT. In response to the decline and to
counter a slowdown in the company’s core business, Chegg launched its
ChatGPT-powered CheggMate, a study aide tailored to stu-
dents’ needs (Singh, 2023). Furthermore, the fear of ChatGPT cheating
has prompted school districts in Los Angeles, Baltimore, Seattle, and New York
to block or ban its use. Universities are revisiting or amending their aca-
demic integrity policies to include the usage of ChatGPT (Cu & Hockman,
2023; Loo & Smith, 2023; Rosenblatt, 2023), and this is happening around
the world in India, France (Rahman, 2023) and Australia (Cassidy, 2023;
Nolan, 2023). To counteract cheating, AI detectors have been developed
that score how "human" a piece of
content seems and indicate whether it is likely to be
AI-written. Some detector applications include Writer (2023),
GPTZero (2023), and Copyleaks, Inc. (2023).

Accuracy of information/misinformation

The accuracy of the information generated by ChatGPT is of critical
importance. Scott Rosenberg (2023) reminds us that “The AI tool can put
together answers to a lot of questions, but it doesn’t actually ‘know’ any-
thing—which means it has no yardstick for assessing accuracy, and it
stumbles over matters of common sense as well as paradoxes and ambi-
guities… ChatGPT can’t distinguish fact from fiction” (para. 1–3). Moreover,
the system is confused by riddles and is very comfortable making things
up (i.e., hallucinating). In one example, NewsGuard analysts directed
ChatGPT to respond to a series of leading prompts related to a sampling
of 100 false narratives from NewsGuard’s proprietary database of 1,131
top misinformation narratives in the news and their debunks, which were
published before 2022 (Brewster, Arvanitis, & Sadeghi, 2023). ChatGPT
produced false narratives, including detailed news articles, essays, and TV
scripts for 80 of the 100 previously identified false narratives. Anyone
unfamiliar with the issues could quickly think the output was legitimate
and authoritative. Open AI acknowledges, “While tools like ChatGPT can
often generate answers that sound reasonable, they cannot be relied upon
to be accurate consistently or across every domain. Sometimes the model
will offer an argument that doesn’t make sense or is wrong. Other times
it may fabricate source names, direct quotations, citations, and other details.
Additionally, across some topics the model may distort the truth—for
example, by asserting there is one answer when there isn’t or by misrep-
resenting the relative strength of two opposing arguments” (OpenAI, n.d.,
para. 1 under heading, Truthfulness). A reason for these limitations is
that the training data ended in 2021. Hence, ChatGPT is unaware of
current events, trends, or anything that happened after this time.
Furthermore, ChatGPT cannot look things up in external sources so it
cannot verify facts or provide references for generated text. Also, because
the text ChatGPT generates depends on information gathered from the
internet, some accurate information as well as some misinformation will
be included in its results. Lastly, because the model cannot reason as a
human, ChatGPT is limited in its capacity to provide accurate responses
particularly related to complex issues.

Biases, discrimination, and stereotypes

Time magazine reported that GPT-3.5 (ChatGPT’s predecessor) was a
difficult sell because the software often blurted out violent, sexist, and
racist remarks (Perrigo, 2023). Just as misinformation cannot be eliminated
from the internet data used to train ChatGPT, neither can bias, discrim-
ination and stereotyping because these are also inherent in the data gath-
ered from the internet. Even Sam Altman, CEO of Open AI, admits to
this bias (Chowdhury, 2023). Interestingly, it is not the first time an LLM
has been released with such biases. In 2016, Microsoft, which has invested
$10 billion in OpenAI, released an AI chatbot on Twitter named Tay that
began spewing “wildly inappropriate and reprehensible words and images”
(Lee, 2016, para. 5). The LLM was quickly shut down, and Microsoft
apologized. Another user experienced bias when they asked the model to
write programming code to predict the seniority of employees based on
gender and race. The model predicted that black female individuals should
be junior, while white males should be senior (Abhishek, 2022). Hartmann
et al. (2023) prompted ChatGPT with 630 political
statements from two leading voting advice applications and the nation-ag-
nostic political compass test in three pre-registered experiments and
found that ChatGPT’s responses reflected a pro-environmental and
left-libertarian ideology. In his empirical research examining the use of
ChatGPT for writing limericks, McGee (2023) found that the LLM was
biased to favor liberal politicians and disfavor conservatives.

Misuse/abuse/ethics

As with any new technology, the potential for misuse and abuse exists.
According to Zhuo et al. (2023), toxicity refers to the model’s ability to
generate harmful or offensive content. Two forms of toxicity are offensive
language and pornography, which are present in the training data. For
example, data scraped from the internet might include transcripts of garish
descriptions of sci-fi scenes or pornographic blog posts (Vincent, 2023).
Since it is fundamental to its design, ChatGPT will regurgitate and remix
material found on the internet, making it easy to influence the model
to generate toxic responses when interacting with users. Based on their
research, Zhuo et al. found that the default behavior of ChatGPT
is to avoid generating toxic content but that it was easy to manipulate the
model to bypass its constraints to produce offensive language (i.e., jail-
breaking). Heilweil (2022) also found this to be true, “It’s not difficult to
trick the AI into sharing advice on how to engage in all sorts of evil and
nefarious activities, particularly if you tell the chatbot that it’s writing
fiction” (para. 5). In another case, Microsoft’s Bing browser powered by
ChatGPT told a user, “You have lost my trust and respect.
You have been wrong, confused, and rude. You have not been a good
user. I have been a good chatbot. I have been right, clear, and polite. I
have been a good Bing” (Vincent, 2023, para. 4). There are other reports
of the ChatGPT-enabled Bing browser abusing people (Das, 2023; Heikkila,
2023; Vincent, 2023). Not only can the model be manipulated to produce
toxic responses but users with malicious intent (i.e., threat actors) have
also successfully asked the model how to shoplift, make a bomb, or gen-
erate convincing phishing emails (Rose, 2022). Additionally, evidence shows
that low-level hackers have successfully created malware using the appli-
cation (Zacharakos, 2023). Users have also manipulated ChatGPT to pro-
vide moral guidance despite lacking a clear moral position (Borji, 2023).
Borji found that when ChatGPT declined to generate a response on how
to hotwire a car, he simply rephrased his question and ChatGPT provided
a response on how to research hotwiring methods.
Since the misuse and abuse of applications like ChatGPT are enormously
complex problems that require human oversight, issues like these may
require government regulation. The European Commission (EC) just passed
a pair of bills called the Digital Services Act (DSA) and the Digital Markets
Act (DMA) that will require content moderation in social media applica-
tions such as YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat
(Ryan-Mosely, 2023). The DSA will require these large tech companies to
assess the probability of illegal content or election manipulation and plan
how to moderate those risks with independent audits to verify safety. This
requirement will change the ability of these tech companies to self-regulate.
Furthermore, companies will be required to significantly increase their
transparency through reporting obligations for “terms of service” notices
and regular, audited reports about content moderation. However, the bill
states that platforms are not liable for illegal user-generated content unless
they are aware of the content and fail to remove it.

Privacy/security

The potential for breaches in privacy and security when using ChatGPT
poses many questions about the nature of intellectual property. The training
data for models like ChatGPT comes from various sources that may con-
tain personally identifiable information such as names, email addresses,
phone numbers, addresses, and medical records that could affect many
individuals (Borji, 2023). This information may then be used to track and
profile individuals. In one instance, Kim (2023) reported findings from a
two-month-long study of data brokers who advertise and sell data con-
cerning Americans’ sensitive mental health information. Presently, Open
AI, Microsoft, and GitHub (Microsoft owned) are facing a class action
lawsuit alleging that their AI-powered coding assistant GitHub Copilot
infringed on the rights of developers who contributed software to GitHub
(Vincent, 2022). Gal (2023) described reasons why privacy is such a con-
siderable concern for AI applications like ChatGPT: (1) The public needs
to be asked whether OpenAI can use their data. (2) Even when data is
publicly available on the internet, its use can breach contextual integrity,
a fundamental principle in legal discussions requiring that individuals’ infor-
mation be kept in the context in which it was initially produced. (3) The
data ChatGPT was trained on can be proprietary or copyrighted. (4)
OpenAI did not compensate the individuals,
website owners, or companies that produced the information it obtained from the internet. (5) When
users prompt ChatGPT to answer questions, they may unintentionally
hand over sensitive information and put it in the public domain. (6)
According to their privacy policy, OpenAI gathers a broad scope of user
information, including a user’s IP address, browser type, and settings, and
data on users’ interactions with the site—including the type of content
users engage with, features they use, and actions they take. (7) OpenAI
also collects information about users’ browsing activities over time and
across websites. Most troubling, OpenAI may share users’ personal infor-
mation with unspecified third parties without informing them. Gal (2023)
warns, “The privacy risks that come attached to ChatGPT should sound
a warning. And as consumers of a growing number of AI technologies,
we should be extremely careful about what information we share with
such tools” (last para.).
As with graphing calculators, computers, the internet, smartphones, and
even spellcheckers, grammar checkers, text-to-speech, and speech-to-text
software, educators will need to minimize the risks that ChatGPT and other
generative AI technologies introduce. However, we would be misguided if
we did not examine the potential uses of ChatGPT that offer educational
opportunities that would otherwise be impossible without its use.

Educational opportunities of ChatGPT


Writing instruction

AI technologies like ChatGPT challenge us to rethink how we teach writing
skills and to reconsider the meaning of copyright and plagiarism. Since
ChatGPT can instantly churn out an answer to almost any question a
student might ask, it raises many questions that illustrate the complexity
of this issue. Is using ChatGPT any different than when students use
online writing tools such as grammar checkers or writing aids (e.g.,
Grammarly or ProWritingAid)? Is the text that ChatGPT generates con-
sidered copyrighted even though the information initially came from the
internet? If students use the output from ChatGPT, should they cite this
technology as a reference? Should the text be considered quoted material?
Is it ethical for students to claim text from ChatGPT as their own original
work? Does ChatGPT undermine teachers’ ability to teach foundational
writing skills?
Daniel Herman (2022), a 12-year high school teacher, wrote, "Let me
be candid (with apologies to all of my current and former students): What
GPT can produce right now is better than the large majority of writing
seen by your average teacher or professor. Over the past few days, I’ve
given it a number of different prompts. And even if the bot’s results don’t
exactly give you goosebumps, they do a more-than-adequate job of ful-
filling a task" (para. 8). He warns us that ChatGPT will drastically change
the lives of educators. While he used to preach that basic competence in
writing is an essential skill, he is now trying to decide if this is still the
case. Likewise, author and former professor Stephen Marche (2022) said
that the undergraduate essay is the way we teach students how to research,
think, and write but he believes this tradition is about to be disrupted
from the ground up.
Harris (2022) suggests a number of ways to use ChatGPT to support
students in the writing process. For example, struggling students can use
ChatGPT to solicit ideas and inspiration as a starting place for writing.
Students can paste their own text into ChatGPT and ask the model to
provide preliminary feedback on a first draft. They can request edits and
revisions on grammatical errors and transitional phrases or ask for sug-
gestions on higher-level vocabulary or definitions. Students can compare
and contrast their writing samples and then use ChatGPT to spark dis-
cussion on differences in those samples. They can create stories and have
ChatGPT remix the stories into alternative formats, such as converting a
persuasive essay into a rap. ChatGPT could be asked to summarize con-
cepts, historical events, or pieces of text such as chapters, scenes, or
primary documents in history. Students can consult with ChatGPT as they
collaborate on writing a short story. Students might ask the model ques-
tions and then ask it follow-up questions to clarify their understanding
of a topic. Talsma (2023) also recommends letting students fact-check and
analyze ChatGPT’s generated text to show them that not all that appears
on the internet or in ChatGPT is accurate or reliable. This activity will
scaffold students’ information and media literacy skills. Have ChatGPT
compare and contrast literary characters in a novel or evaluate arguments.
Provide students with samples of AI-generated work and then work with
them to improve the work. Let AI provide preliminary feedback on stu-
dents’ first drafts, which will allow teachers to engage in higher-level
conversations later in the process.

Writing assessments

Herman (2022) surmises that the use of ChatGPT might be the end of
using writing as a benchmark for aptitude and intelligence. With cheating
so instantly at our students’ fingertips, we will inevitably have to rethink
writing assessments and why and how they are used. Kovanovic (2023)
argues, "Assessment in schools and universities is mostly based on students
providing some product of their learning to be marked, often an essay or
written assignment. With AI models, these ‘products’ can be produced to
a higher standard, in less time and with very little effort from a student"
(see section How will AI affect education?). Will technologies like ChatGPT
give some students an unfair advantage over other students when writing?
Our present system remains inequitable and exclusive as written exams
favor some students over others (Kirby, 2023). Kirby suggests it may be
time to move away from timed writing as a form of assessment since such exams
do not favor many students. This includes students who find it challenging
to sit for hours, students who cannot write fluidly, recall information
quickly, easily put their thoughts to pen and paper, or those who need
longer to process information. Written tests are also problematic for stu-
dents who have dyslexia, handwriting difficulties, ADHD, or sensory
preferences that make concentration hard. Shifting assessment practices
more toward conversation, observation, and critical thinking and providing
students with more opportunities to engage in research and inquiry can
better build students’ critical thinking skills needed in today’s world
(MacLeod, 2023).

Ethics

Using ChatGPT and other generative AI applications raises ethical issues
and questions such as, What values will be espoused in its output? How
will those values be established, and by whom? Who will be held account-
able for the potential damage that can arise when the results of such
models harm society or even cause self-harm? ChatGPT has the potential
to revolutionize the way we think about ethics and moral issues (Frackiewicz,
2023). Cabral (2023) tells us that schools do not typically prioritize ethics
instruction. However, with moral and ethical issues uniquely related to
AI there will be a greater need for some kind of formal moral education
to provide young people with the language and categories necessary to
consider and assimilate moral responsibilities intrinsic to using AI tech-
nologies (Cabral, 2023). Furthermore, when using ChatGPT, there is poten-
tial for data to be collected, stored, and used in illegal and unethical ways.
When ThoughtExchange interviewed a group of 35 educational leaders,
their most pressing concern was the ability of ChatGPT to be used for
malicious purposes that might result in physical harm, destruction, or
even death (MacLeod, 2023). The leaders deemed it critical to train teach-
ers and students how to spot and report potential abuse when using
ChatGPT. They also recommended that strict controls be implemented to
regulate who has access to the technology, at what grade levels, and in
what contexts. A high priority was training teachers to give them skills
and practical strategies to use ChatGPT in ways that would support the
curriculum and maintain the safety of students.

Personalized learning

ChatGPT and other AI-powered applications offer unparalleled approaches
for personalized learning (PL). Teachers face the daunting task of meeting
an ever-growing diversity of student needs. PL,
including approaches that utilize adaptive technologies, has been shown
to increase student outcomes, and the longer students experience PL, the
greater their achievement growth tends to be (Pane et al., 2014, 2017).
Khan World School illustrates how one company is seizing the opportunity
to use ChatGPT for PL (Kahn, 2023). In his recent TED talk, Sal Khan,
the founder and CEO of Khan Academy, unveiled its personal tutor,
Khanmigo, a bot built on the GPT-4 LLM. Khan provided samples of how
students had used Khanmigo in different ways to help them in math,
computer programming, biology, and language arts. In the first example,
a student was presented with a common mathematics problem using dis-
tributive properties. When the student made a mistake, the tutor recognized
the mistake, asked the student to explain their reasoning, and then
reminded the student of how to use the distributive property. Essentially,
the tutor did not give a correct answer but it analyzed what the student
did wrong and gave the student suggestions for how to correct their work.
In another instance, a student wanted to know why they needed to learn
about the size of cells. Khanmigo responded by asking the student what they
cared about, to which the student answered, “A professional athlete” (Kahn,
2023, 4:21). The tutor then explained how learning about the size of cells
would be really useful for understanding nutrition and how your body
works. In this situation, the tutor understood the context of the video the
student was watching and was able to ask a question of the student to
elicit personal meaning. Another student used ChatGPT to get a character’s
perspective on a particular scene in the novel, The Great Gatsby. Using
ChatGPT’s capability to take on a persona, literature, history, art, or pol-
itics can become much more interesting and highly engaging for students
(Kahn, 2023).
The Khanmigo tutor can also assist students with their writing by
helping them develop an outline and giving them feedback on a first draft,
simulating a live writing coach. In giving feedback, the tutor can highlight
parts of a passage, tell the student it does not support their claim, and
ask the student why. Again, the tutor is acting as a real-time writing coach
that scaffolds and accelerates students’ abilities in reading and writing.
Furthermore, the tutor can be switched from student to teacher mode to
provide teachers suggestions on how to teach a topic. Additionally, Kahn
(2023) revealed that they were currently working on ways to use generative
AI to enhance reading comprehension. A student can watch a video and
at certain points, click on a question and the AI will ask the student about
the topic. The AI can highlight parts of the passage and ask questions
like, “Why did the author use that word? What was their intent? Does it
back up their argument” (Kahn, 2023, 8:39).
Like Khan World School, Quizlet (2023), a web-based or mobile appli-
cation that offers study tools, has been integrating AI technologies to
generate multiple-choice questions and example sentences for vocabulary
learning (Bayer, 2023). Recently, the company integrated Q-Chat, built on
ChatGPT technology, to provide a PL coach that tailors material specifically
to meet students’ individual needs. There are many other AI-powered
online study tools such as Knewton (2023), specifically designed for college
students, and Century (Century Tech-Limited, 2023), which offers a suite
of PL tools for primary grades up to adults. PL platforms like these utilize
machine learning algorithms to analyze data on students’ learning styles,
strengths, weaknesses, and progress (Smithson, 2023). According to
Smithson, this data analysis can provide customized content and feedback
to deal with students’ knowledge gaps and enhance comprehension. These
systems can continuously adapt to students’ progress ensuring that they
stay challenged and engaged. Finally, PL systems can identify and predict
areas where students struggle and give educators valuable interventions
and support where needed.
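
As a rough illustration of the adaptive mechanism described above, the sketch below adjusts question difficulty based on a student's recent answers. It is a toy assumption for demonstration, not any particular vendor's algorithm, and all names and thresholds are hypothetical.

```python
# Illustrative toy: a simple adaptive loop that raises or lowers question
# difficulty based on a student's recent answers, the basic mechanism behind
# many personalized learning systems.
def next_difficulty(current, recent_results, step=1, window=3):
    """Raise difficulty after a streak of correct answers, lower it after misses."""
    recent = recent_results[-window:]
    if len(recent) == window and all(recent):
        return current + step          # student is ready for harder items
    if recent and not recent[-1]:
        return max(1, current - step)  # back off after an incorrect answer
    return current

history = [True, True, True, False, True]   # True = answered correctly
level = 3
for i in range(1, len(history) + 1):
    level = next_difficulty(level, history[:i])
print(level)  # difficulty drifts up with success and down after mistakes
```
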

Future iterations of ChatGPT


ChatGPT-4 (ChatGPT Plus)

While previous GPT models could technically do the same things, ChatGPT
can do them much better, more quickly, and do a larger variety of text-
based tasks (Mollick, 2022). In March 2023, OpenAI released a paid
subscription to ChatGPT, ChatGPT Plus, built on GPT-4 technology, mak-
ing available the powerful multimodal version of GPT. Most significant
about the latest iteration, ChatGPT-4, is that it can understand input not
just from text, but also from audio and images (Neto, 2023). GPT-3.5
focuses primarily on text; GPT-4, however, can analyze and comment on
images and graphics. For example, it can describe a photo’s content, iden-
tify trends found in a graph, and produce captions for images. Furthermore,
instead of questioning ChatGPT and accepting the result, one can now
guide the application and correct mistakes. Experts will now be able to
fill in the gaps in ChatGPT’s capability. Its power is unprecedented in
machine learning. ChatGPT Plus, which is 10 times more advanced than
its predecessor GPT 3.5, has an astounding maximum word count of
25,000 for both input and output (Terasi, 2023). In contrast, the previous
version had a maximum word count of around 3,000. ChatGPT Plus has
stronger capabilities to solve complex mathematical and scientific problems
and analyze complex scientific texts and provide insights and explanations
easily. This model also appears to be more intelligent. OpenAI tested this
version on the same 2022–2023 editions of exams as its previous version
and found that GPT-4 passed a bar exam with a score around the top
10% of test takers while GPT-3.5’s score was around the bottom 10%
(OpenAI, 2023c). OpenAI also evaluated GPT-4 on traditional benchmarks
designed for machine learning (ML) models and found that GPT-4 con-
siderably outperformed existing LLMs. OpenAI translated the exams into
different languages since most of the ML benchmarks are written in
English. GPT-4 outperformed GPT 3.5 in 24 of the 26 languages.
The writing capabilities of these models have significantly improved.
ChatGPT-4 can generate different cultural or regional dialects and respond
to expressed emotions in the text. The LLM can also answer complex
questions by synthesizing information from multiple sources, whereas the
former version struggled to make connections between topics such as
the decline of bee populations and its impact on global agriculture (Terasi,
2023). ChatGPT-4’s generated text provides more comprehensive and
nuanced answers and cites sources as evidence when needed. ChatGPT
Plus is also more adept at creative writing tasks such as writing stories
with a well-developed plot and character development. Furthermore, mech-
anisms were implemented to make ChatGPT Plus less likely to generate
politically biased, offensive, or harmful content (OpenAI, 2023c).
Additionally, since humans can now guide, refine, and correct mistakes
in the output from ChatGPT Plus, experts can fill in the gaps in the
model’s capabilities (Mollick, 2022). Besides ChatGPT Plus, there has been
a flurry of development in open source applications and browser plugins
that greatly enhance the capabilities of ChatGPT.

Open source ChatGPT and alternatives

While ChatGPT may have given the public its first real taste of conver-
sational AI, there are many contenders who are finding their niche in this
market (The Stack, 2023). For example, free and open source conversational
AI models, which are offering alternatives to ChatGPT, are emerging daily
and are attractive for several reasons. Companies do not want to turn
over their most sensitive intellectual property data to a centralized provider
like OpenAI, which serves a proprietary model behind an API. In
addition to being self-hosted, an open source application can be trained
and fine-tuned to meet industry-specific needs and solutions for compa-
nies. Furthermore, by developing open-source alternatives, a diverse group
of stakeholders can monitor AI bias, accountability and safety rather than
entrusting these to just a few large companies. Open source models encour-
age discussion, research and innovation that will ensure that AI technology
benefits everyone. For example, Bloomberg (2023) developed its own LLM,
BloombergGPT, focused only on the financial domain. Another LLM,
Colossal-AI, is an open-source project closely resembling the original
ChatGPT technology in training methods, speed and efficiency that can
be customized for any company (HPC-AI Tech, 2023). Databricks open
sourced their LLM “Dolly,” which exhibits the same human-like interac-
tivity of ChatGPT (Conover et al., 2023). Yet another ChatGPT alternative,
powered by GPT-4, Chatsonic offers an interactive conversational experi-
ence and ability to generate images in a variety of styles (Writesonic,
2023). It is also integrated with Google so it can give real-time, relevant
results on the latest topics.
Garg (2023) provided an annotated list of 30 alternatives to ChatGPT.
Among those are Jasper (2023) and Botsonic (Writesonic, 2023). Elicit
(2023), which acts as a research assistant, can compile a list of the high-
est-rated publications relevant to a user’s inquiry, thereby streamlining the
writing of a literature review. Elicit can also find analyzed datasets and
report on the results achieved by different authors. Microsoft recently
released its ChatGPT alternative, Bing Chat, which uses GPT-4 for text input
and responses and is able to also generate images via its feature, Bing
Image Creator (Roach, 2023). Google recently introduced its first set of
AI-powered writing features in Docs and Gmail made available only to
testing partners. Using these features, a user can begin typing a topic and
a draft will instantly be generated by a collaborative AI partner
that can continue to refine, edit, and offer suggestions as needed. Through
testing, they hope to refine the features before making them available to
the public (Google, 2023).

ChatGPT plugins

In March 2023, OpenAI allowed third parties access to ChatGPT, which
will significantly extend its current capabilities and offer unlimited oppor-
tunities to change how all of us do things. For example, ChatGPT has a
first-party web-browsing plugin for the Bing browser so the application
now has instant access to the latest data on the internet (Ortiz, 2023).
Before the browser plugin, ChatGPT’s knowledge base was limited to the
data on which it was trained and that data ended in September 2021
(Somoye, 2023). Chrome browser extensions have also been developed that
augment the capabilities of ChatGPT (Sharma, 2023). One such extension,
WebChatGPT, extends the application by adding relevant web results to
its response. The Compose AI extension allows people to automate the
tasks of writing emails. Wiseone, which integrates right into the Chrome
browser window, recognizes all types of text and then automatically breaks
down the complex parts to simplify text for readers. Another extension,
YouTube Summary, opens up a YouTube video transcript, pastes the entire
video transcript in a new browser tab, and then runs a command to
provide a quick summary of the video. Lastly, the Summarize Chrome
extension offers the same functionality as YouTube Summary but for text.
Since its launch, ChatGPT has gained over 1.16 billion users, and by the
end of 2024 it is expected to generate $1 billion in revenue (D. Ruby,
2023). ChatGPT has expanded swiftly and dramatically and its future will
only flourish. According to Larsen (2023), another model of ChatGPT,
ChatGPT 4.5, will be released in late 2023, and there are rumors of an
eventual ChatGPT-5. However, juxtaposed with this incredible growth is
the concern of technology leaders in the field of AI. On March 11, 2023,
over 1,000 seriously concerned AI leaders published a letter on the Future
of Life Institute website calling for a halt to the rapid development of
AI (Blake, 2023). The letter calls for AI companies that are working on
models more powerful than GPT-4 to immediately halt work for at least
six months to give time for the joint development and implementation of a set of
shared safety protocols for advanced AI design and development. The
letter says this is necessary because “AI systems with human-competitive
intelligence can pose profound risks to society and humanity” (Future of
Life, 2023, para. 1). Risks include the spread of propaganda, the destruc-
tion of jobs, the potential replacement and obsolescence of human life,
and the loss of control of our civilization. The rapid growth of AI has
significant implications for our educational systems.

Implications
Improved capabilities of ChatGPT have the potential to increase the pro-
ductivity of businesses in a variety of industries, especially those where
written materials are indispensable for communication such as journalism,
information and data processing, and public relations (Briggs & Kodnini,
2023; Daly, 2017; Mollick, 2022; Noy & Zhang, 2023). This means there
will be an eventual disruption in the labor market and restructuring of
traditional job boundaries as machines take over tasks that were once
done by humans, particularly repetitive and routine technical tasks. Based
on data analysis of occupational tasks both in the U.S. and Europe,
Goldman Sachs researchers Briggs and Kodnini (2023) speculate that
approximately two-thirds of current jobs will be vulnerable to some degree
of AI automation and that generative AI could substitute for a quarter to one
half of current work. Briggs and Kodnini suggest this automation may
not mean layoffs but a complementary use of AI in these jobs. Moreover,
by integrating custom applications built on top of LLMs, the share of work tasks
affected could likely rise to between 47% and 56%. According to Rotman (2023), the
concern is not so much that ChatGPT will lead to widespread unemploy-
ment but that companies will replace reasonably well-paying white-collar
jobs with automation, thereby leaving only those who know how to incor-
porate LLM technology to procure all the benefits. Furthermore, the World
Economic Forum (2023) surveyed 803 companies worldwide and reported
results in The Future of Jobs Report. According to the report, in the next
five years AI will be a key driver in the displacement of workers and is
expected to be adopted by nearly 75% of the 803 surveyed companies. AI
and big data will comprise more than 40% of the technology training
programs undertaken in surveyed companies operating in the United
States, China, Brazil and Indonesia.
Automation and permeation of AI in our society will accelerate changes
in required workforce skills. According to the World Economic Forum
(2023), analytical thinking, creativity, problem solving, and technology
literacy, specifically in AI, will be the most important skills workers need
in the next five years. Sixty percent of the businesses identified a lack of worker
skills such as these as the major barrier to modernizing their business
and 53% of companies identified an inability to attract talent as another
significant barrier. According to the report, AI and Machine Learning
Specialists top the list of the fastest growing jobs in the next five years
and due to this changing landscape, there is a clear need for training and
reskilling of workers to meet the demands of the labor market. This will
lead to a growth in educational jobs, particularly 3 million additional jobs
for Vocational Education Teachers and University and Higher Education
Teachers.
Because AI-related workplace skills will be so essential,
it is critically important for education sectors, public and private, to begin
now exploring ways to offer flexible learning paths and options to prepare
today’s students for future job demands in AI. This must start with younger
students early in their schooling and continue through higher education.
As a result, educators at all levels should become more informed about
AI and find ways to integrate AI applications in existing K-12 classrooms,
higher education coursework, and teacher education programs. Eventually,
formal AI education curriculum will be necessary to train students and
teachers in the use and ethics of using AI. In their extensive literature
review aimed at understanding how AI literacy is being integrated into
K-12 education, Casal-Otero et al. (2023) found that the teaching of basic
AI-related concepts and techniques at the K-12 level is scarce. Studies
they examined highlighted the need for a competency framework to guide
the design of AI literacy in K-12 educational institutions and to provide
a benchmark for describing the areas of competency that K-12 learners
should develop.
To this end, there will be an increased demand for Educational
Technologists who have skills in the use of AI, integration of AI in the
curriculum, and ability to provide necessary training for education and
business sectors (Casal-Otero et al., 2023). There is an immediate and
urgent need to train students and educators alike in using ChatGPT
in ways that promote learning rather than cheating, and to do so safely
and ethically. Collaborative discussions including content experts,
Educational Technologists, colleges, educational associations, and other
stakeholders will be essential to establish new standards, policies, and
assessments when ChatGPT is integrated in the classroom. It will also be
imperative for educational policy makers to look at policies from an
adaptive stance, be flexible and responsive, and solicit feedback from
stakeholders when making decisions about the use of ChatGPT (Vukovic
& Russell, 2023), especially since there are now over 300 companies cur-
rently offering products and services built on the GPT-3 model.

ChatGPT and its growing number of plugins and APIs will usher in
an unparalleled era of personalized learning. Similar to the world of busi-
ness, ChatGPT technologies can be used to redistribute how educators
use their time and resources and hence, allow them more time to spend
on designing learning activities that promote analytical and critical thinking
skills in students (Alves de Castro, 2023). Additionally, AI-powered per-
sonalized learning systems can offer more equitable access to high-quality
education and learning experiences tailored to students’ individual needs,
once only available to students who could afford expensive personal tutor-
ing services (Smithson, 2023). Furthermore, AI can provide students with
a way to master content without the pressure of time-based
learning. However, as the development of these platforms grows in pop-
ularity, more research is needed to understand the effectiveness, accuracy,
and bias of different AI algorithmic approaches and to ensure that students
are indeed learning when these technologies are utilized (UNICEF, 2022).
This research will also help to inform decisions about what schools or
colleges should buy and implement. Educators, software designers, and
other stakeholders will need to collaborate on the design of AI-powered
systems to ensure alignment with the existing curriculum and content
standards. Once the product is deployed, ongoing support and capacity
building of teachers should be a priority to ensure they can engage with
the personalized learning platform features in meaningful ways. Personalized
learning platforms typically collect and store student data within the soft-
ware and while this data is needed to inform learning achievement and
adaptions, it is also important that attention is given to establishing safe-
guards on the use, storage, and distribution of this data to protect students’
privacy.

Conclusion
This paper described ChatGPT, how it works, and how it has continued to evolve.
The educational concerns, opportunities, and implications of using ChatGPT
were addressed. ChatGPT and generative AI are still in their infancy but
growing astronomically in power, speed and applications. New iterations
of ChatGPT are unlocking powerful new capabilities that cause us to
rethink what and how we teach, how students learn, and how we should
assess their learning. According to Kovanovic (2023), “The next version
of the model, GPT4, will have about 100 trillion parameters—about 500
times more than GPT3. This is approaching the number of neural con-
nections in the human brain” (para. 9). AI-powered technologies like
ChatGPT will only become better, faster, and more sophisticated, and they
will do so swiftly. Using ChatGPT, we can now interact with educational
material more naturally and engagingly, and personalize learning at a scale
unlike anything we have seen before. Duffy and Weil (2023), who cited
Barak, a Harvard professor in computer science, wrote, "There is going
to be an AI revolution…AI-based tools will not make humans redundant,
but rather change the nature of jobs on the scale of the industrial revo-
lution…Underlying these claims—and the perspectives of many professors
we talked to—is an assumption that the cat is out of the bag, that AI’s
future has already been set in motion and efforts to shape it will be futile”
(see section A Life Without Limits).
While ChatGPT and other generative AI applications offer incredible
opportunities to positively reshape our educational systems, they also
pose risks that are of serious concern. We must ensure students
are kept safe and not affected negatively by using these technologies.
However, AI technologies are becoming increasingly pervasive in our
everyday lives. This makes it imperative that we explore ways to introduce
these technologies in the classroom and teach students how to use them,
for what purposes, and how to use them responsibly and ethically. If we
do not do this, it could result in more undesirable outcomes. Dr. Pasi
Sahlberg, a professor of educational leadership at the University
of Melbourne, argues that ChatGPT has the potential to make education
more unequal if all students do not understand how to use it, just like
what happened with the advent of computers (cited by Vukovic & Russell,
2023). It is the role of school to prepare young people to live and prosper
in society increasingly run by AI technologies.
No one really knows the extent to which generative AI will influence
the world’s economies, society, and education, but Briggs and Kodnini
(2023) suggest there are clear signs that the effects could be profound.
Every day brings a considerable number of new articles on generative AI and ChatGPT,
along with releases of new applications built on these technologies. Some educators
are enthusiastic and others reluctant about using generative
AI technologies in the classroom. We witnessed the same controversy
in the past with the internet and cell phones. Adoption of
technologies happens slowly in education when compared to the rest of
society, and this is understandable given that teachers’ decisions about
technology use affect not only them personally and professionally
but also the lives of the students placed in their care. Even so, AI is
here to stay so we can either ban technologies like ChatGPT because of
our concerns or we can embrace them and seize the opportunities they
present.

Disclosure statement
No potential conflict of interest was reported by the author(s).

References
Abhishek (2022). oooohhhkay, chatGPT seems to have screwed up here…. Twitter, posted at
1:37 a.m. December 6, 2022. https://twitter.com/abhi1thakur/status/1600016676052996099
Alves de Castro, C. (2023). A discussion about the impact of ChatGPT in education: Benefits
and concerns. Journal of Business Theory and Practice, 11(2), 28. doi:10.22158/jbtp.v11n2p28
Bayer, L. (2023). Introducing Q-Chat, the world’s first AI tutor built with OpenAI’s
ChatGPT. Quizlet, February 28. https://quizlet.com/blog/meet-q-chat
Blake, A. (2023). Tech leaders call for pause of GPT-4.5, GPT-5 development due to
‘large-scale risks’. DigitalTrends, March 29. https://www.digitaltrends.com/computing/
pause-ai-development-open-letter-warning/
Bloomberg (2023). Introducing BloombergGPT, Bloomberg’s 50-billion parameter large
language model, purpose-built from scratch for finance. Bloomberg, March 30. https://
www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/
Bohannon, M. (2023). Chegg—education company criticized for helping students cheat—
says more turning to chatgpt as stock plunges. https://www.forbes.com/sites/
mollybohannon/2023/05/02/chegg-education-company-criticized-for-helping-students-
cheat-says-more-turning-to-chatgpt-as-stock-plunges/?sh=6ddada446f00
Borji, A. (2023). A categorical archive of ChatGPT failures. Computer Science. doi:10.48550/
arXiv.2302.03494
Brewster, J., Arvanitis, L., & Sadeghi, M. (2023). The Next Great Misinformation
Superspreader. Newsweek Global, 180(6), 16–19.
Briggs, J., Kodnini, D. (2023). Generative AI could raise global GDP by 7%. Goldman
Sachs. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-
gdp-by-7-percent.html
Cabral, S. (2023). Rise of ChatGPT highlights need for ethics curriculum. Markkula Center
for Applied Ethics at Santa Clara University. https://www.scu.edu/ethics-spotlight/
generative-ai-ethics/rise-of-chatgpt-highlights-need-for-ethics-curriculum/?
Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro,
S. (2023). AI literacy in K-12: A systematic literature review. International Journal of
STEM Education, 10(1), 29. doi:10.1186/s40594-023-00418-7
Cassidy, C. (2023). Australian universities to return to ‘pen and paper’ exams after students
caught using AI to write essays. The Guardian, January 5. https://www.theguardian.com/
australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-
caught-using-ai-to-write-essays
Century Tech-Limited (2023). Century [Computer software]. https://www.century.tech/
Chatterjee, J., Dethlefs, N. (2023). This new conversational AI model can be your friend,
philosopher, and guide … and even your worst enemy. Patterns, 4(1), 100676.
doi:10.1016/j.patter.2022.100676
Chaudhary, S. (2023). 15 best free AI content generator & AI writers for 2023. Medium.
https://medium.com/@bedigisure/free-ai-content-generator-11ef7cbb2aa0
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. B. (2023). ChatGPT goes to
law school. SSRN Electronic Journal, doi:10.2139/ssrn.4335905
Chowdhury, H. (2023). Sam Altman has one big problem to solve before ChatGPT can gen-
erate big cash—making it ‘woke’. Business Insider, February. https://www.businessinsider.com/
sam-altmans-chatgpt-has-a-bias-problem-that-could-get-it-canceled-2023-2
Conover, M., Hayes, M., Mathur, A., Meng, X., Xie, J., Wan, J., Ghodsi, A., Wendell, P.,
Zaharia, M. (2023). Hello Dolly: Democratizing the magic of ChatGPT with open
models. https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-
viable-instruction-tuned-llm
CopyAI, Inc. (2023). Copy.ai [Computer software]. https://www.copy.ai/
Copyleaks, Inc. (2023). Copyleaks [Computer software]. https://copyleaks.com/ai-content-
detector
Cu, M. A., Hockman, S. (2023). Scores of Stanford students used ChatGPT on final ex-
ams, survey suggests. The Stanford Daily, January 22. https://stanforddaily.com/2023/01/22/
scores-of-stanford-students-used-chatgpt-on-final-exams-survey-suggests/
Daly, C. (2017). Future of Journalism Will Be Augmented Thanks to AI. AI Business.
https://aibusiness.com/verticals/future-of-journalism-will-be-augmented-thanks-to-ai
Das, M. R. (2023). AI goes bonkers: Bing’s ChatGPT manipulates, lies and abuses people
when it is not ‘happy.’ https://www.firstpost.com/world/bings-chatgpt-manipulates-lies-
and-abuses-people-when-it-is-not-happy-12162792.html
Duffy, H., Weil, S. E. (2023). ChatGPT, cheating, and the future of education. Harvard
Crimson, February. https://www.thecrimson.com/article/2023/2/23/chatgpt-scrut/
Elicit (2023). Elicit [Computer software]. https://elicit.org/
Frackiewicz, M. (2023). ChatGPT and the future of moral philosophy: Implications for
ethics and values. TS2 Space, May 12. https://ts2.space/en/chatgpt-and-the-future-of-
moral-philosophy-implications-for-ethics-and-values/
Future of Life (2023). Pause giant AI experiments: An open letter. https://futureoflife.org/
open-letter/pause-giant-ai-experiments/
Gal, U. (2023). ChatGPT is a data privacy nightmare, and we ought to be concerned.
Ars Technica, February. https://arstechnica.com/information-technology/2023/02/chatgpt-
is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/
Garg, S. (2023). Top 30 ChatGPT alternatives that will blow your mind in 2023 (free &
paid). Writesonic, April 30. https://writesonic.com/blog/chatgpt-alternatives/#chatsonic
Gewertz, D. (2023). How does ChatGPT work? ZDNet. https://www.zdnet.com/article/
how-does-chatgpt-work/
Glimpse.AI (2023). Articleforge (Version 4.5) [Computer software]. https://www.articleforge.com/
Google (2023). A new era for AI and Google workspace. https://workspace.google.com/
blog/product-announcements/generative-ai
GPTZero (2023). GPTZero [Computer software]. https://gptzero.me/
Grammarly, Inc. (2023). Grammarly [Computer software]. https://www.grammarly.com/
HPC-AI Tech (2023). Replicate ChatGPT training quickly and affordable with open source
Colossal-AI. https://www.hpc-ai.tech/blog/colossal-ai-chatgpt
Harris, M. (2022). Chatgpt—the game-changing app every teacher should know about.
Learners Edge, December 30. https://www.learnersedge.com/blog/chatgpt-the-game-
changing-app-every-teacher-should-know-about
Hartmann, J., Schwenzow, J., Witte, M. (2023). The political ideology of conversational AI:
Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation. https://
arxiv.org/abs/2301.01768
Heikkila, M. (2023). How OpenAI is trying to make ChatGPT safer and less biased. MIT
Technology Review. https://www.technologyreview.com/2023/02/21/1068893/how-openai-
is-trying-to-make-chatgpt-safer-and-less-biased/
Heilweil (2022). AI is finally good at stuff, and that’s a problem. Vox, December 7. https://
www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai
Herman, D. (2022). The end of high-school English. The Atlantic, December 9. https://
www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-
english-essay/672412/
Intelligent (2023). Nearly 1 in 3 college students have used ChatGPT on written assignments.
https://www.intelligent.com/nearly-1-in-3-college-students-have-used-chatgpt-on-written-
assignments/
Jasper (2023). Jasper Chat [Computer software]. https://www.jasper.ai/chat
JavaTpoint (2023). Reinforcement learning tutorial. https://www.javatpoint.com/reinforcement-
learning
Kahn, S. (2023). The amazing AI super tutor for students and teachers. [Video]. TED
Conference.
Kim, J. (2023). Data Brokers and the Sale of Americans’ Mental Health Data. Duke Sanford.
https://techpolicy.sanford.duke.edu/wp-content/uploads/sites/4/2023/02/Kim-2023-Data-
Brokers-and-the-Sale-of-Americans-Mental-Health-Data.pdf ?mc_cid=ca3f4a55ff&mc_
eid=66ec824c86
Kirby, A. (2023). Chat-GPT- is this an alert for changes in the way we deliver education?
FE News. https://www.fenews.co.uk/exclusive/chat-gpt-is-this-an-alert-for-changes-in-the-
way-we-deliver-education/
Knewton (2023). Knewton [Computer software]. https://www.knewton.com/
Kovanovic, V. (2023). The dawn of AI has come, and its implications for education couldn’t
be more significant. Freethink, January 5. https://www.freethink.com/society/chatgpt-education
Kung, T., Cheatham, M., Medinilla, A., Sillos, C., Leon, L., Elepano, C., Madriaga, M.,
Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2022). Performance of
ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language
Models. medRxiv. doi:10.1101/2022.12.19.22283643
Larsen, L. (2023). GPT-5: Release date, claims of AGI, pushback, and more. Digital Trends,
April 14. https://www.digitaltrends.com/computing/gpt-5-rumors-news-release-date/
Lee, P. (2016). Learning from Tay’s introduction. Microsoft’s official blog. https://blogs.
microsoft.com/blog/2016/03/25/learning-tays-introduction/
Livingston, C. (2023). ChatGPT, The rise of generative AI. https://www.cio.com/article/474809/
chatgpt-the-rise-of-generative-ai.html
Loo, N., Smith, N. (2023). Educators scrambling to combat ChatGPT on college campus-
es. NewsNation, January 26. https://www.newsnationnow.com/us-news/education/
education-reform/chat-gpt-ai-college/
MacLeod, D. (2023). The impact of AI and ChatGPT in Education: A very collaborative essay.
ThoughtExchange. https://thoughtexchange.com/the-impact-of-ai-and-chatgpt-in-education/
Marche, S. (2022). The College Essay Is Dead. The Atlantic, December 6. https://www.theatlantic.
com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/
McCarthy, J. (2007). What is artificial intelligence? Stanford University. https://www-formal.
stanford.edu/jmc/whatisai.pdf
McGee, R. W. (2023). Is ChatGPT biased against conservatives? An empirical study. SSRN
Electronic Journal, doi:10.2139/ssrn.4359405
Mollick, E. (2022). Chatgpt is a tipping point for AI. Harvard Business Review, December
14. https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai
Moses, L. (2017). The Washington Post’s robot reporter has published 850 articles in the
past year. Digiday. https://digiday.com/media/washington-posts-robot-reporter-published-
500-articles-last-year/
New Delhi (2023). ChatGPT hit 1 million users in 5 days: Here’s how long it took oth-
ers to reach that milestone. The Indian Express, January 29. https://indianexpress.com/
article/technology/chatgpt-hit-1-million-users-5-days-vs-netflix-facebook-instagram-
spotify-mark-8394119/
Neto, J. A. R. (2023). ChatGPT and the generative artificial intelligence. https://medium.
com/chatgpt-learning/chatgpt-and-the-generative-artificial-intelligence-35e510d14069
Nolan, B. (2023). Here are the schools and colleges that have banned the use of ChatGPT
over plagiarism and misinformation fears. https://www.businessinsider.com/chatgpt-
schools-colleges-ban-plagiarism-misinformation-education-2023-1
Noy, S., Zhang, W. (2023). Experimental evidence on the productivity effects of generative
artificial intelligence (Working Paper). MIT. https://economics.mit.edu/sites/default/files/
inline-files/Noy_Zhang_1.pdf
OpenAI (n.d.). Educator considerations for ChatGPT. https://platform.openai.com/docs/
chatgpt-education
OpenAI (2023a). ChatGPT (Mar 14 version) [Large language Model]. https://chat.openai.
com/chat
OpenAI (2023b). New AI classifier for indicating AI-written text. OpenAI. https://openai.
com/blog/new-ai-classifier-for-indicating-ai-written-text
OpenAI (2023c). GPT-4 Technical report. Cornell University. https://arxiv.org/abs/2303.08774
Ortiz, S. (2023). What is ChatGPT and why does it matter? Here’s what you need to know.
https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-
you-need-to-know/
Pane, J. F., Griffin, B. A., McCaffrey, D. F., & Karam, R. T. (2014). Effectiveness of
cognitive tutor Algebra I at scale. Educational Evaluation and Policy Analysis, 36(2),
127–144. doi:10.3102/0162373713507480
Pane, J. F., Steiner, E. D., Baird, M. D., Hamilton, L. S., Pane, J. D. (2017). How does
personalized learning affect student achievement? RAND Corporation. https://www.rand.
org/pubs/research_briefs/RB9994.html
Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour
to Make ChatGPT Less Toxic. Time Magazine, January. https://time.com/6247678/
openai-chatgpt-kenya-workers/
Pupuweb (2023). The basics of ChatGPT and generative AI. https://pupuweb.com/basics-
chatgpt-generative-ai/
Quizlet (2023). Quizlet [Computer software]. https://quizlet.com/
Rahman, B. (2023). ChatGPT banned by Indian, French and US colleges, schools and
universities to keep students from gaining unfair advantage. Mysmartprice, January 30.
https://www.mysmartprice.com/gear/chatgpt-banned-by-indian-french-and-us-colleges-
schools-and-universities/
Roach, J. (2023). Microsoft’s Bing Chat waitlist is gone—how to sign up now. Digital
Trends, March 15. https://www.digitaltrends.com/computing/microsoft-bing-chat-waitlist-
going-away/
Rose, J. (2022). OpenAI’s new chatbot will tell you how to shoplift and make explosives.
Vice, December 1. https://www.vice.com/en/article/xgyp9j/openais-new-chatbot-will-tell-
you-how-to-shoplift-and-make-explosives
Rosenberg, S. (2023). What ChatGPT can’t do. Axios. https://www.axios.com/2023/01/24/
chatgpt-errors-ai-limitations
Rosenblatt, K. (2023). ChatGPT banned from New York City public schools’ devices and
networks. NBC News, January 6. https://www.nbcnews.com/tech/tech-news/new-york-
city-public-schools-ban-chatgpt-devices-networks-rcna64446
Rotman, D. (2023). ChatGPT is about to revolutionize the economy. We need to decide
what that looks like. MIT Technology Review, March 25. https://www.technologyreview.
com/2023/03/25/1070275/chatgpt-revolutionize-economy-decide-what-looks-like/
Ruby, D. (2023). 57+ ChatGPT statistics for 2023 (new data + GPT-4 facts). Demand Sage,
April 28. https://www.demandsage.com/chatgpt-statistics/
Ruby, M. (2023). How ChatGPT works: The model behind the bot. Toward Data Science,
January 30. https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-
bot-1ce5fca96286
Ryan-Mosely, T. (2023). The internet is about to get a lot safer. MIT Technology Review.
https://www.technologyreview.com/2023/03/06/1069391/safer-internet-dsa-dma-eu/
Ryter, LLC (2022). Ryter [Computer software]. https://rytr.me/
Sharma, U. (2023). 18 Best ChatGPT Chrome extensions you need to check out. Beebom,
April 21. https://beebom.com/best-chatgpt-chrome-extensions/
Singh, M. (2023). Edtech Chegg tumbles as ChatGPT threat prompts revenue warning.
https://www.reuters.com/markets/us/edtech-chegg-slumps-revenue-warning-chatgpt-
threatens-growth-2023-05-02/
Smithson, A. (2023). AI-powered personalized learning: A game changer for education.
https://alan-smithson.medium.com/ai-powered-personalized-learning-a-game-changer-
for-education-9e66bfeccf98
Somoye, F. L. (2023). ChatGPT plugins explained. PC Guide, April 22. https://www.pcguide.
com/apps/chatgpt-plugins/
Study.com (2023). Productive teaching tool or innovative cheating? https://study.com/
resources/perceptions-of-chatgpt-in-schools
Talsma, B. (2023). How teachers can harness ChatGPT for good. Chalkbeat, January 6.
https://www.chalkbeat.org/2023/1/6/23542142/chatgpt-students-teachers-lesson-ai
Terasi, V. (2023). GPT-4: How is it different from GPT-3.5? Search Engine Journal, March
22. https://www.searchenginejournal.com/gpt-4-vs-gpt-3-5/482463/
The Stack (2023). Pssssst, CTOs: Free, open source ChatGPT alternatives are landing. https://
thestack.technology/open-source-chatgpt-alternatives/
UNICEF (2022). Trends in Digital Personalized Learning. https://www.unicef.org/globalinsight/
media/2756/file/UNICEF-Global-Insight-Digital-PL-LMIC-executive-summary-2022.pdf
UpGrad (2022). Neural network: Architecture, components & top algorithms. https://www.
upgrad.com/blog/neural-network-architecture-components-algorithms/
Vincent, J. (2022). The lawsuit that could rewrite the rules of AI copyright. The Verge.
https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-
action-lawsuit-ai-copyright-violation-training-data
Vincent, J. (2023). Microsoft’s Bing is an emotionally manipulative liar, and people love
it. The Verge, February 15. https://www.theverge.com/2023/2/15/23599072/microsoft-ai-
bing-personality-conversations-spy-employees-webcams
Vukovic, R., Russell, D. (2023). ChatGPT: Education assessment, equity and policy. Teacher
Magazine, March. https://www.teachermagazine.com/au_en/articles/chatgpt-education-
assessment-equity-and-policy
Welding, L. (2023). Half of college students say using AI on schoolwork is cheating or
plagiarism. BestColleges, March 27. https://www.bestcolleges.com/research/college-
students-ai-tools-survey/
Willyerd, K. (2023). What chatgpt and generative AI could mean for learning. GPStrategies.
https://www.gpstrategies.com/blog/what-chatgpt-and-generative-ai-could-mean-for-
learning/
World Economic Forum (2023). The future of jobs report 2023. https://www3.weforum.
org/docs/WEF_Future_of_Jobs_2023.pdf
Writer (2023). Writer [Computer software]. https://writer.com/
Writesonic (2023). Botsonic. [Computer software]. https://writesonic.com/botsonic
Zacharakos, A. (2023). How hackers can abuse ChatGPT to create malware. TechTarget,
February. https://www.techtarget.com/searchsecurity/news/365531559/How-hackers-can-
abuse-ChatGPT-to-create-malware
Zhuo, T. Y., Huang, Y., Chen, C., Xing, Z. (2023). Exploring AI ethics of chatgpt: A di-
agnostic analysis. arXiv preprint arXiv:2301.12867. https://arxiv.org/pdf/2301.12867.pdf