ChatGPT and Generative AI Technology: A Mixed Bag of Concerns and New Opportunities
To cite this article: Judy Lambert & Mark Stevens (09 Sep 2023): ChatGPT and Generative AI
Technology: A Mixed Bag of Concerns and New Opportunities, Computers in the Schools, DOI:
10.1080/07380569.2023.2256710
ABSTRACT
ChatGPT has garnered unprecedented popularity since its release in November 2022. This artificial intelligence (AI) large language model (LLM) is designed to generate human-like text based on patterns found in massive amounts of data scraped from the internet. ChatGPT differs significantly from previous versions of GPT in its quality of output, its capability to interact and hold human-like conversations, its speed, and the ability to have its output refined by users or experts. New iterations of ChatGPT, as well as open source and alternative LLMs and ChatGPT plugins, extend the current capabilities of ChatGPT and offer unlimited opportunities to change how we do things. Despite its newfound popularity and capabilities, ChatGPT is fraught with concerns such as cheating, misinformation, bias, abuse and misuse, and privacy and safety. On the other hand, the integration of ChatGPT in the classroom prompts us to envision better ways of providing instruction and assessment of writing skills. ChatGPT also provides unparalleled approaches for personalized learning. As educators, we must consider and deal with the serious concerns of using ChatGPT but simultaneously explore how this AI technology can enhance and extend current methods of instruction. In this paper, the authors explain what ChatGPT is, how it works, and its future iterations. They also present concerns, opportunities, and educational implications of using ChatGPT in the classroom.

KEYWORDS
Artificial intelligence; ChatGPT; large language model; LLM; chatbot
Introduction
ChatGPT (OpenAI, 2023a) seems to have ushered in an unparalleled
curiosity by the public in artificial intelligence (AI) and sparked a flurry
of controversy over its value and potential uses. According to McCarthy (2007), artificial intelligence "is the science and engineering of making intelligent machines, especially intelligent computer programs" (p. 2).
ChatGPT, an AI application that simulates human conversation with aston-
ishing fluency, was launched by its creator OpenAI in November 2022.
Within five days, over a million users were playing around with ways to
use the newly released, free version of the application and within two
months, the application reached 100 million people (Conover et al., 2023).
Compare this to Facebook and Instagram, which took several months to
reach a million users (New Delhi, 2023). However, one only has to read
the existing literature on ChatGPT to discover an emerging mixed bag of
perils and possibilities for education. On one hand, concerns such as
academic integrity; accuracy of information or misinformation; biases,
discrimination, and stereotypes; misuse, abuse, and other ethical issues; and privacy and security are yet to be addressed. On the other hand, ChatGPT
offers innovative teaching approaches, new assessments and curriculum,
and personalized learning. To better understand this mixed bag and be
better informed when making a decision about using ChatGPT, this article
will explain what ChatGPT is, how it works, problems that plague its use,
and how it can be integrated in an educational setting. Readers will gain
a deeper understanding of the critical concerns and new opportunities
presented by ChatGPT, as well as the implications for education.
ChatGPT concerns
Academic integrity
Accuracy of information/misinformation
Using ChatGPT to write limericks about politicians, McGee (2023) found that the LLM was biased to favor liberal politicians and disfavor conservatives.
Misuse/abuse/ethics
As with any new technology, the potential for misuse and abuse exists.
According to Zhuo et al. (2023), toxicity refers to the model's ability to
generate harmful or offensive content. Two forms of toxicity are offensive
language and pornography, which are present in the training data. For
example, data scraped from the internet might include transcripts of garish
descriptions of sci-fi scenes or pornographic blog posts (Vincent, 2023).
Because this behavior is fundamental to its design, ChatGPT will regurgitate and remix material found on the internet, making it easy to influence the model to generate toxic responses when interacting with users. Based on their research, Zhuo et al. found that ChatGPT's default behavior is to avoid generating toxic content but that it was easy to manipulate the model into bypassing its constraints and producing offensive language (i.e., jailbreaking). Heilweil (2022) also found this to be true: "It's not difficult to
trick the AI into sharing advice on how to engage in all sorts of evil and
nefarious activities, particularly if you tell the chatbot that it’s writing
fiction" (para. 5). In another case, Microsoft's ChatGPT-powered Bing chatbot told a user, "You have lost my trust and respect. You have been wrong, confused, and rude. You have not been a good user. I have been a good chatbot. I have been right, clear, and polite. I have been a good Bing" (Vincent, 2023, para. 4). There are other reports of the ChatGPT-enabled Bing chatbot abusing people (Das, 2023; Heikkila, 2023; Vincent, 2023). Not only can the model be manipulated to produce
toxic responses but users with malicious intent (i.e., threat actors) have
also successfully asked the model how to shoplift, make a bomb, or gen-
erate convincing phishing emails (Rose, 2022). Additionally, evidence shows
that low-level hackers have successfully created malware using the appli-
cation (Zacharakos, 2023). Users have also manipulated ChatGPT into providing moral guidance even though the model lacks a clear moral position (Borji, 2023).
Borji found that when ChatGPT declined to generate a response on how
to hotwire a car, he simply rephrased his question and ChatGPT provided
a response on how to research hotwiring methods.
Since the misuse and abuse of applications like ChatGPT are enormously
complex problems that require human oversight, issues like these may
require government regulation. The European Union (EU) recently passed a pair of laws, the Digital Services Act (DSA) and the Digital Markets Act (DMA), that will require content moderation in social media applications such as YouTube, TikTok, Instagram, Pinterest, Google, and Snapchat (Ryan-Mosely, 2023). The DSA will require these large tech companies to take greater responsibility for the content on their platforms.
Privacy/security
The potential for breaches in privacy and security when using ChatGPT
poses many questions about the nature of intellectual property. The training
data for models like ChatGPT comes from various sources that may con-
tain personally identifiable information such as names, email addresses,
phone numbers, addresses, and medical records that could affect many
individuals (Borji, 2023). This information may then be used to track and
profile individuals. In one instance, Kim (2023) reported findings from a
two-month-long study of data brokers who advertise and sell data con-
cerning Americans' sensitive mental health information. Presently, OpenAI, Microsoft, and GitHub (Microsoft-owned) are facing a class action
lawsuit alleging that their AI-powered coding assistant GitHub Copilot
infringed on the rights of developers who contributed software to GitHub
(Vincent, 2022). Gal (2023) described reasons why privacy is such a con-
siderable concern for AI applications like ChatGPT: (1) The public was never asked whether OpenAI could use their data. (2) Even when data is publicly available on the internet, its use can breach contextual integrity, a fundamental principle in legal discussions requiring that individuals' information be kept in the context in which it was initially produced. (3) The data ChatGPT was trained on can be proprietary or copyrighted. (4) OpenAI did not pay the individuals, website owners, or companies that produced the information for the data it obtained from the internet. (5) When
users prompt ChatGPT to answer questions, they may unintentionally
hand over sensitive information and put it in the public domain. (6)
According to their privacy policy, OpenAI gathers a broad scope of user
information, including a user’s IP address, browser type, and settings, and
data on users’ interactions with the site—including the type of content
users engage with, features they use, and actions they take. (7) OpenAI
also collects information about users’ browsing activities over time and
across websites. Most troubling, OpenAI may share users’ personal infor-
mation with unspecified third parties without informing them. Gal (2023)
warns, “The privacy risks that come attached to ChatGPT should sound
a warning. And as consumers of a growing number of AI technologies,
Students can ask ChatGPT to provide preliminary feedback on a first draft. They can request edits and
revisions on grammatical errors and transitional phrases or ask for sug-
gestions on higher-level vocabulary or definitions. Students can compare
and contrast their writing samples and then use ChatGPT to spark dis-
cussion on differences in those samples. They can create stories and have
ChatGPT remix the stories into alternative formats, such as converting a
persuasive essay into a rap. ChatGPT could be asked to summarize con-
cepts, historical events, or pieces of text such as chapters, scenes, or
primary documents in history. Students can consult with ChatGPT as they
collaborate on writing a short story. Students might ask the model ques-
tions and then ask it follow-up questions to clarify their understanding
of a topic. Talsma (2023) also recommends letting students fact-check and
analyze ChatGPT’s generated text to show them that not all that appears
on the internet or in ChatGPT is accurate or reliable. This activity will
scaffold students' information and media literacy skills. Teachers can also have ChatGPT compare and contrast literary characters in a novel or evaluate arguments, provide students with samples of AI-generated work and then work with them to improve it, and let AI provide preliminary feedback on students' first drafts, which will allow teachers to engage in higher-level conversations later in the process.
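For teachers or instructional technologists who want to scaffold this kind of preliminary feedback at scale, the same request can be scripted against OpenAI's API rather than typed into the chat interface. The Python sketch below is illustrative only: it assumes the openai package is installed and an API key is available in the environment, and the model name, file name, and prompt wording are placeholders to be adapted to the classroom context.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def preliminary_feedback(draft_text: str) -> str:
    """Ask the model for comments on a student's first draft.

    The system prompt asks for feedback rather than a rewrite so that the
    student, not the model, does the revising.
    """
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any available chat model could be used
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing tutor. Give constructive, specific "
                    "feedback on grammar, transitions, and vocabulary, but "
                    "do not rewrite the essay for the student."
                ),
            },
            {"role": "user", "content": draft_text},
        ],
        temperature=0.3,  # keep the feedback focused and consistent
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("student_draft.txt", encoding="utf-8") as f:
        print(preliminary_feedback(f.read()))

A teacher could, for example, run every first draft through a script like this before conferencing, then spend class time on the higher-level conversations described above.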
Writing assessments
Herman (2022) surmises that the use of ChatGPT might be the end of
using writing as a benchmark for aptitude and intelligence. With cheating
so instantly at our students’ fingertips, we will inevitably have to rethink
writing assessments and why and how they are used. Kovanovic (2023)
argues, "Assessment in schools and universities is mostly based on students
providing some product of their learning to be marked, often an essay or
written assignment. With AI models, these ‘products’ can be produced to
a higher standard, in less time and with very little effort from a student"
(see section How will AI affect education?). Will technologies like ChatGPT
give some students an unfair advantage over other students when writing?
Our present system remains inequitable and exclusive as written exams
favor some students over others (Kirby, 2023). Kirby suggests it may be
time to move away from timed writing as a form of assessment since it disadvantages many students. These include students who find it challenging to sit for hours, who cannot write fluidly, who cannot recall information quickly or easily put their thoughts to pen and paper, or who need longer to process information. Written tests are also problematic for students who have dyslexia, handwriting difficulties, ADHD, or sensory preferences that make concentration hard. Shifting assessment practices toward conversation, observation, and critical thinking and providing
Ethics
Personalized learning
While previous GPT models could technically do the same things, ChatGPT
can do them much better and more quickly, and it can handle a larger variety of text-based tasks (Mollick, 2022). In March 2023, OpenAI released a paid
subscription to ChatGPT, ChatGPT Plus, built on GPT-4 technology, mak-
ing available the powerful multimodal version of GPT. Most significant about the latest iteration, ChatGPT-4, is that it can understand input not just from text but also from audio and images (Neto, 2023). GPT-3.5 focuses primarily on text; GPT-4, however, can analyze and comment on
images and graphics. For example, it can describe a photo’s content, iden-
tify trends found in a graph, and produce captions for images. Furthermore,
instead of questioning ChatGPT and accepting the result, one can now
guide the application and correct mistakes. Experts will now be able to
fill in the gaps in ChatGPT’s capability. Its power is unprecedented in
machine learning. ChatGPT Plus, which is 10 times more advanced than its predecessor GPT-3.5, has an astounding maximum word count of 25,000 for both input and output (Terasi, 2023). In contrast, the previous version had a maximum word count of around 3,000. ChatGPT Plus has
stronger capabilities for solving complex mathematical and scientific problems and for analyzing complex scientific texts and providing insights and explanations. This model also appears to be more intelligent. OpenAI tested this version's ability to pass the 2022–2023 editions of the same exams given to its previous version and found that GPT-4 passes a bar exam with a score around the top 10% of test takers, while GPT-3.5's score was around the bottom 10% (OpenAI, 2023c). OpenAI also evaluated GPT-4 on traditional benchmarks
designed for machine learning (ML) models and found that GPT-4 con-
siderably outperformed existing LLMs. Since most ML benchmarks are written in English, OpenAI also translated the exams into different languages; GPT-4 outperformed GPT-3.5 in 24 of the 26 languages tested.
The writing capabilities of these models have significantly improved.
ChatGPT-4 can generate different cultural or regional dialects and respond
to expressed emotions in the text. The LLM can also answer complex
While ChatGPT may have given the public its first real taste of conver-
sational AI, there are many contenders who are finding their niche in this
market (The Stack, 2023). For example, free and open source conversational
AI models, which are offering alternatives to ChatGPT, are emerging daily
and are attractive for several reasons. Companies do not want to turn
over their most sensitive intellectual property data to a centralized provider
like OpenAI, which serves a proprietary model from behind an API. In
addition to being self-hosted, an open source application can be trained
and fine-tuned to meet industry-specific needs and solutions for compa-
nies. Furthermore, by developing open-source alternatives, a diverse group
of stakeholders can monitor AI bias, accountability and safety rather than
entrusting these to just a few large companies. Open source models encour-
age discussion, research and innovation that will ensure that AI technology
benefits everyone. For example, Bloomberg (2023) developed its own LLM,
BloombergGPT, focused only on the financial domain. Another LLM,
Colossal-AI is an open-source project closely resembling the original
ChatGPT technology in training methods, speed and efficiency that can
be customized for any company (HPC-AI Tech, 2023). Databricks open-sourced its LLM "Dolly," which exhibits the same human-like interactivity as ChatGPT (Conover et al., 2023). Yet another ChatGPT alternative, Chatsonic, powered by GPT-4, offers an interactive conversational experience and the ability to generate images in a variety of styles (Writesonic, 2023). It is also integrated with Google search, so it can give relevant real-time results on the latest topics.
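To illustrate what self-hosting looks like in practice, the short Python sketch below loads an open-source, instruction-tuned model locally with the Hugging Face transformers library, so prompts never leave the organization's own hardware. It is a minimal sketch, assuming the transformers, torch, and accelerate packages are installed; the databricks/dolly-v2-3b checkpoint name is used here as an assumed example, and any comparable open model could be substituted.

import torch
from transformers import pipeline

# Download the checkpoint once, then run it entirely on local hardware.
generate = pipeline(
    model="databricks/dolly-v2-3b",  # assumed checkpoint name on Hugging Face
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Dolly ships its own instruction-following pipeline
    device_map="auto",
)

prompt = "Explain photosynthesis to a ninth grader in three sentences."
print(generate(prompt)[0]["generated_text"])

Because the model runs where the data lives, a school or company can fine-tune it on its own materials without handing sensitive text to an outside provider.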
Garg (2023) provided an annotated list of 30 alternatives to ChatGPT.
Among those are Jasper (2023) and Botsonic (Writesonic, 2023). Elicit
(2023), which acts as a research assistant, can compile a list of the high-
est-rated publications relevant to a user’s inquiry, thereby streamlining the
writing of a literature review. Elicit can also find analyzed datasets and
report on the results achieved by different authors. Microsoft recently
released its ChatGPT alternative, Bing Chat, which uses GPT-4 for text input and responses and can also generate images via its Bing Image Creator feature (Roach, 2023). Google recently introduced its first set of AI-powered writing features in Docs and Gmail, made available only to testing partners. Using these features, a user can begin typing a topic and a draft will instantly be generated by a collaborative AI partner that can continue to refine, edit, and offer suggestions as needed. Through testing, Google hopes to refine the features before making them available to
the public (Google, 2023).
ChatGPT plugins
In March 2023, prominent technology leaders signed an open letter calling for a pause in the development of advanced AI (Blake, 2023). The letter calls for AI companies that are working on models more powerful than GPT-4 to immediately halt work for at least six months to give time for the joint development and implementation of a set of shared safety protocols for advanced AI design and development. The
letter says this is necessary because “AI systems with human-competitive
intelligence can pose profound risks to society and humanity” (Future of
Life, 2023, para. 1). Risks include the spread of propaganda, the destruc-
tion of jobs, the potential replacement and obsolescence of human life,
and the loss of control of our civilization. The rapid growth of AI has
significant implications for our educational systems.
Implications
Improved capabilities of ChatGPT have the potential to increase the pro-
ductivity of business in a variety of industries, especially those where
written materials are indispensable for communication such as journalism,
information and data processing, and public relations (Briggs & Kodnini,
2023; Daly, 2017; Mollick, 2022; Noy & Zhang, 2023). This means there
will be an eventual disruption in the labor market and restructuring of
traditional job boundaries as machines take over tasks that were once
done by humans, particularly repetitive and routine technical tasks. Based
on data analysis of occupational tasks both in the U.S. and Europe,
Goldman Sachs researchers Briggs and Kodnini (2023) speculate that
approximately two-thirds of current jobs will be vulnerable to some degree
of AI automation and that generative AI could substitute for a quarter to one half of current work. Briggs and Kodnini suggest this automation may not mean layoffs but a complementary use of AI in these jobs. Moreover, when custom applications built on top of LLMs are taken into account, the share of affected work tasks could rise to between 47% and 56%. According to Rotman (2023), the
concern is not so much that ChatGPT will lead to widespread unemploy-
ment but that companies will replace reasonably well-paying white-collar
jobs with automation, thereby leaving only those who know how to incor-
porate LLM technology to procure all the benefits. Furthermore, the World
Economic Forum (2023) surveyed 803 companies worldwide and reported
results in The Future of Jobs Report. According to the report, in the next
five years AI will be a key driver in the displacement of workers and is
expected to be adopted by nearly 75% of the 803 surveyed companies. AI
and big data will comprise more than 40% of the technology training
programs undertaken in surveyed companies operating in the United
States, China, Brazil and Indonesia.
Automation and permeation of AI in our society will accelerate changes
in required workforce skills. According to the World Economic Forum
(2023), analytical thinking, creativity, problem solving, and technology
literacy, specifically in AI, will be the most important skills workers need
in the next five years. Sixty percent of the businesses surveyed identified a lack of such worker skills as the major barrier to modernizing their business, and 53% of companies identified an inability to attract talent as another
significant barrier. According to the report, AI and Machine Learning
Specialists top the list of the fastest-growing jobs in the next five years, and due to this changing landscape, there is a clear need for training and reskilling of workers to meet the demands of the labor market. This will lead to growth in educational jobs, particularly 3 million additional jobs for Vocational Education Teachers and University and Higher Education Teachers.
Because AI-related workplace skills will be so essential, it is critically important for education sectors, public and private, to begin now to explore ways to offer flexible learning paths and options to prepare
today’s students for future job demands in AI. This must start with younger
students early in their schooling and continue through higher education.
As a result, educators at all levels should become more informed about
AI and find ways to integrate AI applications in existing K-12 classrooms,
higher education coursework, and teacher education programs. Eventually,
formal AI education curriculum will be necessary to train students and
teachers in the use and ethics of using AI. In their extensive literature
review aimed at understanding how AI literacy is being integrated into
K-12 education, Casal-Otero et al. (2023) found that the teaching of basic
AI-related concepts and techniques at the K-12 level is scarce. Studies
they examined highlighted the need for a competency framework to guide
the design of AI literacy in K-12 in educational institutions and to provide
a benchmark for describing the areas of competency that K-12 learners
should develop.
To this end, there will be an increased demand for Educational
Technologists who have skills in the use of AI, integration of AI in the
curriculum, and ability to provide necessary training for education and
business sectors (Casal-Otero et al., 2023). There is an immediate and
urgent need to train students and educators alike in using ChatGPT in ways that promote learning rather than cheating, and to do so safely and ethically. Collaborative discussions including content experts, Educational Technologists, colleges, educational associations, and other
stakeholders will be essential to establish new standards, policies, and
assessments when ChatGPT is integrated in the classroom. It will also be
imperative for educational policy makers to look at policies from an
adaptive stance, be flexible and responsive, and solicit feedback from
stakeholders when making decisions about the use of ChatGPT (Vukovic
& Russell, 2023), especially since over 300 companies are now offering products and services built on the GPT-3 model.
ChatGPT and its growing number of plugins and APIs will usher in
an unparalleled era of personalized learning. Similar to the world of busi-
ness, ChatGPT technologies can be used to redistribute how educators
use their time and resources and hence, allow them more time to spend
on designing learning activities that promote analytical and critical thinking
skills in students (Alves de Castro, 2023). Additionally, AI-powered per-
sonalized learning systems can offer more equitable access to high-quality
education and learning experiences tailored to students' individual needs, which were once available only to students who could afford expensive personal tutoring services (Smithson, 2023). Furthermore, AI can provide students with a way to master content without the pressure of time-based learning. However, as the development of these platforms grows in pop-
ularity, more research is needed to understand the effectiveness, accuracy,
and bias of different AI algorithmic approaches and to ensure that students
are indeed learning when these technologies are utilized (UNICEF, 2022).
This research will also help to inform decisions about what schools or
colleges should buy and implement. Educators, software designers, and
other stakeholders will need to collaborate on the design of AI-powered
systems to ensure alignment with the existing curriculum and content
standards. Once the product is deployed, ongoing support and capacity
building of teachers should be a priority to ensure they can engage with
the personalized learning platform features in meaningful ways. Personalized
learning platforms typically collect and store student data within the software, and while this data is needed to inform learning achievement and adaptations, it is also important that attention be given to establishing safeguards on the use, storage, and distribution of this data to protect students' privacy.
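As a concrete, hedged illustration of these design and data-governance points, the Python sketch below shows one way an adaptive practice loop built on a chat model might work while keeping the learner record local. The function names, prompts, and the simple difficulty rule are hypothetical illustrations; they do not describe any existing personalized learning platform.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def next_question(topic: str, difficulty: int) -> str:
    """Generate one practice question at the requested difficulty (1-5)."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a tutor. Ask exactly one practice question and nothing else."},
            {"role": "user",
             "content": f"Topic: {topic}. Difficulty: {difficulty} on a 1-5 scale."},
        ],
    )
    return response.choices[0].message.content

def adjust_difficulty(current: int, was_correct: bool) -> int:
    """Hypothetical adaptation rule: step up after a correct answer, down otherwise."""
    return min(5, current + 1) if was_correct else max(1, current - 1)

# The learner record stays in local storage, which keeps decisions about the
# use, storage, and distribution of student data in the school's hands.
record = {"topic": "fractions", "difficulty": 2}
print(next_question(record["topic"], record["difficulty"]))
record["difficulty"] = adjust_difficulty(record["difficulty"], was_correct=True)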
Conclusion
This paper described ChatGPT, how it works, and how it has continued to evolve.
The educational concerns, opportunities, and implications of using ChatGPT
were addressed. ChatGPT and generative AI are still in their infancy but
growing astronomically in power, speed and applications. New iterations
of ChatGPT are unlocking powerful new capabilities that cause us to
rethink what and how we teach, how students learn, and how we should
assess their learning. According to Kovanovic (2023), “The next version
of the model, GPT4, will have about 100 trillion parameters—about 500
times more than GPT3. This is approaching the number of neural con-
nections in the human brain” (para. 9). AI-powered technologies like
ChatGPT will only become better, faster, and more sophisticated, and they
will do so swiftly. Using ChatGPT, we can now interact with educational
material more naturally and engagingly, and personalize learning at a scale
unlike anything we have seen before. Duffy and Weil (2023), who cited
Barak, a Harvard professor in computer science, wrote, "There is going
to be an AI revolution…AI-based tools will not make humans redundant,
but rather change the nature of jobs on the scale of the industrial revo-
lution…Underlying these claims—and the perspectives of many professors
we talked to—is an assumption that the cat is out of the bag, that AI’s
future has already been set in motion and efforts to shape it will be futile”
(see section A Life Without Limits).
While ChatGPT and other generative AI applications offer incredible
opportunities to positively reshape our educational systems, they also
pose risks that are of serious concern. We must ensure students are kept safe and not negatively affected by using these technologies.
However, AI technologies are becoming increasingly pervasive in our
everyday lives. This makes it imperative that we explore ways to introduce
these technologies in the classroom and teach students how to use them,
for what purposes, and how to use them responsibly and ethically. If we
do not do this, it could result in more undesirable outcomes. Dr. Pasi Sahlberg, a professor of educational leadership at the University of Melbourne, argues that ChatGPT has the potential to make education more unequal if all students do not understand how to use it, just as happened with the advent of computers (cited in Vukovic & Russell, 2023). It is the role of schools to prepare young people to live and prosper in a society increasingly run by AI technologies.
No one really knows the extent to which generative AI will influence the world's economies, society, and education, but Briggs and Kodnini (2023) suggest there are clear signs that the effects could be profound. Every day there are a considerable number of new articles on generative AI and ChatGPT and releases of new applications built on these technologies. Some educators are enthusiastic and others are reluctant about using generative AI technologies in the classroom. We have witnessed the same controversy in the past with the internet and cell phones. Adoption of technologies happens slowly in education when compared to the rest of society, and this is understandable given that teachers' decisions about and use of technology affect not only them personally and professionally but also the lives of the students placed in their care. Even so, AI is
here to stay so we can either ban technologies like ChatGPT because of
our concerns or we can embrace them and seize the opportunities they
present.
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
Abhishek (2022). oooohhhkay, chatGPT seems to have screwed up here…. Twitter, Posted at
1:37 a.m. December 6, 2022. https://twitter.com/abhi1thakur/status/1600016676052996099
Alves de Castro, C. (2023). A discussion about the impact of ChatGPT in education: Benefits
and concerns. Journal of Business Theory and Practice, 11(2), p28. doi:10.22158/jbtp.v11n2p28
Bayer, L. (2023). Introducing Q-Chat, the world’s first AI tutor built with OpenAI’s
ChatGPT. Quizlet, February 28. https://quizlet.com/blog/meet-q-chat
Blake, A. (2023). Tech leaders call for pause of GPT-4.5, GPT-5 development due to
‘large-scale risks’. DigitalTrends, March 29. https://www.digitaltrends.com/computing/
pause-ai-development-open-letter-warning/
Bloomberg (2023). Introducing BloombergGPT, Bloomberg’s 50-billion parameter large
language model, purpose-built from scratch for finance. Bloomberg, March 30. https://
www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/
Bohannon, M. (2023). Chegg—education company criticized for helping students cheat—
says more turning to chatgpt as stock plunges. https://www.forbes.com/sites/
mollybohannon/2023/05/02/chegg-education-company-criticized-for-helping-students-
cheat-says-more-turning-to-chatgpt-as-stock-plunges/?sh=6ddada446f00
Borji, A. (2023). A categorical archive of ChatGPT failures. Computer Science. doi:10.48550/
arXiv.2302.03494
Brewster, J., Arvanitis, L., & Sadeghi, M. (2023). The Next Great Misinformation
Superspreader. Newsweek Global, 180(6), 16–19.
Briggs, J., Kodnini, D. (2023). Generative AI could raise global GDP by 7%. Goldman
Sachs. https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-
gdp-by-7-percent.html
Cabral, S. (2023). Rise of ChatGPT highlights need for ethics curriculum. Markkula Center
for Applied Ethics at Santa Clara University. https://www.scu.edu/ethics-spotlight/
generative-ai-ethics/rise-of-chatgpt-highlights-need-for-ethics-curriculum/?
Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro,
S. (2023). AI literacy in K-12: A systematic literature review. International Journal of
STEM Education, 10(1), 29. doi:10.1186/s40594-023-00418-7
Cassidy, C. (2023). Australian universities to return to ‘pen and paper’ exams after students
caught using AI to write essays. The Guardian, January 5. https://www.theguardian.com/
australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-
caught-using-ai-to-write-essays
Century Tech-Limited (2023). Century [Computer software]. https://www.century.tech/
Chatterjee, J., Dethlefs, N. (2023). This new conversational AI model can be your friend,
philosopher, and guide … and even your worst enemy. Patterns, 4(1), 100676.
doi:10.1016/j.patter.2022.100676
Chaudhary, S. (2023). 15 best free AI content generator & AI writers for 2023. Medium.
https://medium.com/@bedigisure/free-ai-content-generator-11ef7cbb2aa0
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. B. (2023). ChatGPT goes to
law school. SSRN Electronic Journal, doi:10.2139/ssrn.4335905
Chowdhury, H. (2023). Sam Altman has one big problem to solve before ChatGPT can gen-
erate big cash—making it ‘woke’. Business Insider, February. https://www.businessinsider.com/
sam-altmans-chatgpt-has-a-bias-problem-that-could-get-it-canceled-2023-2
Conover, M., Hayes, M., Mathur, A., Meng, X., Xie, J., Wan, J., Ghodsi, A., Wendell, P.,
Zaharia, M. (2023). Hello Dolly: Democratizing the magic of ChatGPT with open
models. https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-
viable-instruction-tuned-llm
Noy, S., Zhang, W. (2023). Experimental evidence on the productivity effects of generative
artificial intelligence (Working Paper). MIT. https://economics.mit.edu/sites/default/files/
inline-files/Noy_Zhang_1.pdf
OpenAI (n.d.). Educator considerations for ChatGPT. https://platform.openai.com/docs/
chatgpt-education
OpenAI (2023a). ChatGPT (Mar 14 version) [Large language Model]. https://chat.openai.
com/chat
OpenAI (2023b). New AI classifier for indicating AI-written text. OpenAI. https://openai.
com/blog/new-ai-classifier-for-indicating-ai-written-text
OpenAI (2023c). GPT-4 Technical report. Cornell University. https://arxiv.org/abs/2303.08774
Ortiz, S. (2023). What is ChatGPT and why does it matter? Here’s what you need to know.
https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-
you-need-to-know/
Pane, J. F., Griffin, B. A., McCaffrey, D. F., & Karam, R. T. (2014). Effectiveness of
cognitive tutor Algebra I at scale. Educational Evaluation and Policy Analysis, 36(2),
127–144. doi:10.3102/0162373713507480
Pane, J. F., Steiner, E. D., Baird, M. D., Hamilton, L. S., Pane, J. D. (2017). How does
personalized learning affect student achievement? RAND Corporation. https://www.rand.
org/pubs/research_briefs/RB9994.html
Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour
to Make ChatGPT Less Toxic. Time Magazine, January. https://time.com/6247678/
openai-chatgpt-kenya-workers/
Pupuweb (2023). The basics of ChatGPT and generative AI. https://pupuweb.com/basics-
chatgpt-generative-ai/
Quizlet (2023). Quizlet [Computer software]. https://quizlet.com/
Rahman, B. (2023). ChatGPT banned by Indian, French and US colleges, schools and
universities to keep students from gaining unfair advantage. Mysmartprice, January 30.
https://www.mysmartprice.com/gear/chatgpt-banned-by-indian-french-and-us-colleges-
schools-and-universities/
Roach, J. (2023). Microsoft’s Bing Chat waitlist is gone—how to sign up now. Digital
Trends, March 15. https://www.digitaltrends.com/computing/microsoft-bing-chat-waitlist-
going-away/
Rose, J. (2022). OpenAI’s new chatbot will tell you how to shoplift and make explosives.
Vice, December 1. https://www.vice.com/en/article/xgyp9j/openais-new-chatbot-will-tell-
you-how-to-shoplift-and-make-explosives
Rosenberg, S. (2023). What ChatGPT can’t do. Axios. https://www.axios.com/2023/01/24/
chatgpt-errors-ai-limitations
Rosenblatt, K. (2023). ChatGPT banned from New York City public schools’ devices and
networks. NBC News, January 6. https://www.nbcnews.com/tech/tech-news/new-york-
city-public-schools-ban-chatgpt-devices-networks-rcna64446
Rotman, D. (2023). ChatGPT is about to revolutionize the economy. We need to decide
what that looks like. MIT Technology Review, March 25. https://www.technologyreview.
com/2023/03/25/1070275/chatgpt-revolutionize-economy-decide-what-looks-like/
Ruby, D. (2023). 57+ ChatGPT statistics for 2023 (new data + GPT-4 facts). Demand Sage,
April 28. https://www.demandsage.com/chatgpt-statistics/
Ruby, M. (2023). How ChatGPT works: The model behind the bot. Toward Data Science,
January 30. https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-
bot-1ce5fca96286
Ryan-Mosely, T. (2023). The internet is about to get a lot safer. MIT Technology Review.
https://www.technologyreview.com/2023/03/06/1069391/safer-internet-dsa-dma-eu/