AI Policy Education ChatGPT Policy in HE
https://doi.org/10.1186/s41239-023-00408-3
International Journal of Educational Technology in Higher Education
*Correspondence: [email protected]
1 University of Hong Kong, Hong Kong, HKSAR, China

Abstract
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data were collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
Introduction
In recent months, there has been growing concern in academic settings about the use of text generative artificial intelligence (AI), such as ChatGPT, Bing and, most recently, Copilot integrated within the Microsoft Office suite. One of the main concerns is that students may use generative AI tools to cheat or plagiarise in their written assignments and exams. In fact, a recent survey of university students found that nearly one in three
students had used a form of AI, such as essay-generating software, to complete their coursework (Intelligent.com, 2023). About one-third of the US college students surveyed (sample size 1000) had used an AI chatbot such as ChatGPT to complete written homework assignments, with 60% using the programme on more than half of their assignments. Generative AI tools of the ChatGPT type are capable of imitating human writing, and some students use them to cheat. The study found that 75% of students believe that using the programme for cheating is wrong but do it anyway, and nearly 30% believe their professors are unaware of their use of the tool. The study also noted that some professors are considering whether to include ChatGPT in their lessons or to join calls to ban it, with 46% of students saying their professors or institutions have banned the tool for homework. This has led to calls for stricter regulations and penalties for academic misconduct involving AI.
Another concern is that the use of generative AI may lead to a decline in students’ writing and critical thinking skills (Civil, 2023; Warschauer et al., 2023), as they become more reliant on automated tools to complete their work. Some academics argue that this could negatively affect the quality of education and ultimately harm students’ learning outcomes (Chan & Lee, 2023; Korn & Kelly, 2023; Oliver, 2023; Zhai, 2022).
These concerns have led some universities to ban the use of generative AI in their academic programmes. Eight of the 24 universities in the prestigious UK Russell Group, including Oxford and Cambridge, have declared the use of AI bots for assignments to be academic misconduct. Meanwhile, many other universities around the world are rushing to review their plagiarism policies, citing concerns about academic integrity (Wood, 2023; Yau & Chan, 2023). Some Australian universities have had to revert their exam and assessment procedures to pen-and-paper formats (Cassidy, 2023; Cavendish, 2023).
However, there are also those who argue that generative AI has the potential to revolutionize education and enhance the learning experience for students. For example, some
experts suggest that generative AI could be used to provide personalized feedback and
support to students, helping them to identify areas of weakness and improve their skills
in an adaptive manner (Kasneci et al., 2023; Sinhaliz et al., 2023).
Tools such as ChatGPT are built on large language models that can perform a wide range of language tasks and generate human-like text. The development of generative AI models like GPT-3.5 and GPT-4 has the potential to revolutionize many fields, including natural language processing, creative writing, and content generation.
Hong Kong provides a distinctive context for studying the role of generative AI in education. Its blend of Eastern and Western educational philosophies and practices offers fertile ground for examining the impacts and opportunities of AI integration in varied educational contexts. Furthermore, as Hong Kong is actively striving to enhance its digital learning capabilities and infrastructure, studying AI policy there could provide valuable insights into the challenges and best practices of implementing AI in education, thereby potentially informing AI education strategies not only in Hong Kong but also in other global contexts.
The study employed a comprehensive approach to data collection, gathering rich
quantitative and open-ended survey data from a diverse range of stakeholders in the
education community to ensure that it reflects the needs and values of all those involved.
The combination of these data sources allowed for a holistic understanding of the topic
under investigation, providing a nuanced and multifaceted view of the issues at hand. By
doing so, we can help to ensure that the use of generative AI in education is both beneficial and ethical.
2021b, 2023). The AI strategy of the European Union, as Renda’s (2020) analysis pointed out, also focused on ethics and highlighted a human-centric approach to AI. In order to protect EU citizens from the danger of abusive use of advanced technologies, the EU proposed its own pillars (legal compliance, ethical alignment and sociotechnical robustness) to ensure the trustworthiness of AI and established a dedicated AI expert group to work on specific policy recommendations and guidelines.
The heavy focus that these national and regional policies place on ethics demonstrates how little they can do for the implementation of AI technologies. On the one hand, the difficulty of laying down a universal definition of ethical principles becomes a hindrance for certain countries in formulating policies on the use of AI (Dexe & Franke, 2020). On the other hand, as AI can weave into the fabric of everyday human activities, the resulting wide coverage of policy areas, ranging from governance to education and even to the environment, makes it challenging for governments to establish specific policies on AI usage (UNESCO, 2021b). Thus, as the Singaporean AI governance framework highlighted, model frameworks and ethical guidelines are in themselves directional and for reference only, and AI practitioners need to apply them flexibly, according to the relevance of particular situations (IMDA & PDPC, 2020).
Moving forward, the ongoing efforts of national and international organizations to ensure the positive implementation of AI technologies will continue to prioritize discussions and the formulation of legal and ethical principles (AI regulation, 2023; UNESCO, 2023). However, until these principles are validated by real-time implementation of AI technologies, they will remain primarily predictive and prescriptive in nature (Chatterjee, 2020). Over time, it may become necessary for countries to establish institutional support systems to effectively manage AI practices in accordance with validated legal and ethical guidelines (Renda, 2020).
Table 1 Fundamental ethical principles for AI
1. Accountability: Ensure AI actors are held responsible for the AI systems’ functioning and adherence to ethical principles
2. Accuracy: Recognize and communicate sources of error and uncertainty in algorithms and data to inform
mitigation procedures
3. Auditability: Allow third parties to examine and review algorithm behavior through transparent information
disclosure
4. Explainability: Ensure that algorithmic decisions and underlying data can be explained in layman’s terms
5. Fairness: Prevent discriminatory impacts, include monitoring mechanisms, and consult diverse perspectives
during system development
6. Human Centricity and Well-being: Prioritize the well-being and needs of humans in AI development and
implementation
7. Human rights alignment: Ensure technologies do not violate internationally recognized human rights
8. Inclusivity: Make AI accessible to everyone
9. Progressiveness: Favour projects with significantly greater value than their alternatives
10. Responsibility, accountability, and transparency: Build trust through responsibility, accountability, and fairness,
provide avenues for redress, and maintain records of design processes
11. Robustness and Security: Ensure AI systems are safe, secure, and resistant to tampering or data compromise
12. Sustainability: Favour implementations that provide long-lasting, beneficial insights and can predict future
behavior
The list above compiles fundamental ethical principles for AI extracted from multiple policies (IMDA & PDPC, 2020) (Table 1).
Education researchers have been encouraged to engage further with policymakers through their work on ethics in the use of AI in education (Sam & Olbrich, 2023; Schiff, 2022). In view of this gap, this research intends to propose a policy framework for integrating AI in higher education, taking into consideration aspects of teaching and learning as well as ethical and practical concerns.
In this research, we will employ the guidelines put forth by UNESCO (2021a) as the starting point for crafting a more accurate AI policy for university teaching and learning. The rationale for employing UNESCO recommendations as the basis is multifaceted. First, UNESCO is an esteemed international organisation with significant expertise in education; its recommendations are supported by thorough research and knowledge from experts worldwide. These recommendations are designed to be relevant and flexible for a variety of educational systems and cultural settings, making them suitable for diverse institutions. UNESCO’s guidelines also take a comprehensive approach to incorporating AI in education, addressing important ethical, social, economic, and technological aspects essential for creating effective policies. Using an existing framework like UNESCO’s recommendations saves time and resources and provides a well-organized starting point for examining specific AI policy issues in university teaching and learning. Finally, anchoring the study in UNESCO’s recommendations enhances the credibility of the research.
The UNESCO framework for AI in education is centred around a humanistic
approach, which aims to safeguard human rights and provide individuals with the
necessary skills and values for sustainable development, as well as effective human–
machine collaboration in life, learning, and work. The framework prioritizes human
control over AI and ensures that it is utilized to improve the capabilities of both
teachers and students. Moreover, the framework calls for ethical, transparent, non-
discriminatory, and auditable design of AI applications. From UNESCO’s AI and
Education: Guidance for Policy-Makers document, the following recommendations
are provided:
4. Pilot testing, monitoring and evaluation, and building an evidence base: This recommendation highlights the importance of testing and evaluating the use of AI in education through pilot projects to build an evidence base for its effectiveness. For example, policymakers could fund pilot projects that test the use of AI tools in specific educational contexts or with specific learner populations.
5. Fostering local AI innovations for education: This recommendation suggests that policymakers should encourage the development of local innovations in AI for education to ensure that it meets the specific needs of their communities. For example, policymakers could provide funding or support to local startups or research institutions working on developing new AI tools or applications specifically designed for their region’s educational needs.
Using UNESCO’s recommendations as a basis, this study aims to examine higher education stakeholders’ perceptions of text generative AI technology. Based on their ideas, recommendations, and concerns, an AI education policy framework will be developed to promote ethical and effective integration of AI technologies in higher education.
Methodology
In this study, a survey design was utilized to gather data from students, teachers, and staff in Hong Kong to develop an AI education policy framework for university teaching and learning. The survey was administered through an online questionnaire, featuring a mix of closed-ended and open-ended questions. The questionnaire was designed based on a review of current literature on AI use in higher education. Topics covered in the survey were major issues concerning the use of AI in higher education, which included the use of generative AI technologies like ChatGPT, the integration of AI technologies in higher education, potential risks associated with AI technologies, and AI’s impact on teaching and learning.
Data were collected via an online survey from a diverse group of stakeholders in the education community, ensuring that the results reflect the needs and values of all participants. A convenience sampling method was employed for selecting the respondents, based on their availability and willingness to participate in the study. Participants were recruited through an online platform and provided with an informed consent form prior to completing the survey.
The survey was completed by 457 undergraduate and postgraduate students, as well as 180 teachers and staff members across various disciplines in Hong Kong. Descriptive analysis was used to analyse the survey data, while a thematic analysis approach was applied to examine the responses from the open-ended questions in the survey.
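The thematic analysis step can be illustrated with a minimal sketch: once open-ended responses have been assigned theme codes, the codes are tallied to show which themes recur most often across respondents. The theme labels and responses below are invented for illustration and are not taken from the study’s data.

```python
# A minimal sketch (not the study's actual code or data): open-ended
# responses have been manually assigned theme codes, which are then
# tallied to show how often each theme recurs across respondents.
from collections import Counter

# Invented example codes; the study's real themes appear in its tables.
coded_responses = [
    {"id": 1, "themes": ["academic_integrity", "training"]},
    {"id": 2, "themes": ["data_privacy"]},
    {"id": 3, "themes": ["academic_integrity", "assessment_redesign"]},
    {"id": 4, "themes": ["training", "academic_integrity"]},
]

# Count every theme occurrence across all coded responses.
theme_counts = Counter(
    theme for response in coded_responses for theme in response["themes"]
)
print(theme_counts.most_common())
```

A tally like this only summarizes frequencies; the interpretive work of defining and applying the codes remains a manual, qualitative step.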
Descriptive analysis was employed to analyse the survey data collected from students
and teachers in Hong Kong, in order to gain a better understanding of the usage and
perception of generative AI technologies like ChatGPT in higher education. Descriptive
analysis is an appropriate statistical method for summarizing and describing the main
characteristics of the sample and the data collected. It is particularly useful for analysing
survey data and can provide an overview of the distribution, central tendency, and variability of the responses.
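As an illustration, the descriptive statistics reported for each Likert item in Table 2 (N, mean, median, SD) can be computed in a few lines of standard Python. The responses below are invented, and the study does not specify its analysis software, so this is only a sketch of the kind of computation involved; the use of the sample standard deviation is an assumption.

```python
# A minimal sketch (invented responses, not the study's data) of the
# descriptive statistics reported per Likert item: N, mean, median, SD.
from statistics import mean, median, stdev

def describe(responses):
    """Summarize one 1-5 Likert item; None marks a skipped question."""
    valid = [r for r in responses if r is not None]  # drop non-responses
    return {
        "N": len(valid),
        "mean": round(mean(valid), 2),
        "median": median(valid),
        "SD": round(stdev(valid), 2),  # sample SD; an assumption here
    }

item_responses = [4, 5, 3, 4, 2, 5, 4, None, 3, 4]
print(describe(item_responses))
```

Dropping skipped questions item by item also explains why the per-item Ns in Table 2 vary slightly (e.g. 452 to 457 for students).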
Results
Findings from the quantitative data
The survey was conducted among 457 students and 180 teachers and staff from different disciplines in Hong Kong universities. The goal was to explore the kinds of requirements, guidelines and strategies necessary for developing AI policies geared towards university teaching and learning. The findings reveal valuable insights into the perception of generative AI technologies like ChatGPT among students and teachers (refer to Table 2).
Regarding the usage of generative AI technologies, both students (mean = 2.28, SD = 1.18) and teachers (mean = 2.02, SD = 1.1) reported relatively low experience, suggesting that there is significant room for growth in adoption. Both groups demonstrated a belief in the positive impact of integrating AI technologies into higher education (students: mean = 4, SD = 0.891; teachers: mean = 3.87, SD = 1.32). This optimism was also reflected in the strong agreement that institutions should have plans in place associated with AI technologies (students: mean = 4.5, SD = 0.854; teachers: mean = 4.54, SD = 0.874).

Both students and teachers were open to integrating AI technologies into their future teaching and learning practices (students: mean = 3.93, SD = 1.09; teachers: mean = 3.92, SD = 1.31). However, there were concerns among both groups about other students using AI technologies to get ahead in their assignments (students: mean = 3.67, SD = 1.22; teachers: mean = 3.93, SD = 1.12). Interestingly, both students and teachers did not strongly agree that AI technologies would replace teachers in the future (students: mean = 2.14, SD = 1.12; teachers: mean = 2.26, SD = 1.34).
Table 2 Student and teacher responses per item (each cell: N, Mean, Median, SD)

I have used generative AI technologies like ChatGPT | Students: 457, 2.28, 2, 1.18 | Teachers: 180, 2.02, 2, 1.1
The integration of generative AI technologies like ChatGPT in higher education will have a positive impact on teaching and learning in the long run | Students: 457, 4, 4, 0.891 | Teachers: 180, 3.87, 4, 1.32
Higher education institutions should have a plan in place for managing the potential risks associated with using generative AI technologies like ChatGPT in teaching and learning | Students: 457, 4.5, 5, 0.854 | Teachers: 180, 4.54, 5, 0.874
I envision integrating generative AI technologies like ChatGPT into my teaching and learning practices in the future | Students: 455, 3.93, 4, 1.09 | Teachers: 180, 3.92, 4, 1.31
I am concerned that other students may use generative AI technologies like ChatGPT to get ahead in their assignments. / I am concerned that there may be an unfair advantage for some students as they may use generative AI technologies like ChatGPT to get ahead in their assignments | Students: 456, 3.67, 4, 1.22 | Teachers: 180, 3.93, 4, 1.12
AI technologies like ChatGPT will replace teachers in the future | Students: 457, 2.14, 2, 1.12 | Teachers: 180, 2.26, 2, 1.34
Students must learn how to use generative AI technologies well for their career | Students: 457, 4.07, 4, 0.998 | Teachers: 180, 4.1, 4, 1.08
Teachers can already accurately identify a student’s usage of generative AI technologies to partially complete an assignment | Students: 457, 3.02, 3, 1.56 | Teachers: 180, 2.72, 2, 1.62
Generative AI technologies such as ChatGPT can provide guidance for coursework as effectively as human teachers | Students: 455, 3.19, 3, 1.25 | Teachers: 180, 2.93, 3, 1.4
Using generative AI technologies such as ChatGPT to complete assignments undermines the value of a university education | Students: 455, 3.29, 3, 1.25 | Teachers: 180, 3.56, 4, 1.31
I can ask questions to generative AI technologies such as ChatGPT that I would otherwise not voice out to my teacher. / Students can ask questions to generative AI technologies such as ChatGPT that they would otherwise not voice out to their teacher | Students: 454, 3.51, 4, 1.2 | Teachers: 180, 3.97, 4, 1.06
Generative AI technologies such as ChatGPT will not judge me, so I feel comfortable with it. / Students will not feel judged by generative AI technologies such as ChatGPT, so they feel comfortable with it | Students: 452, 3.66, 4, 1.15 | Teachers: 180, 4, 4, 1.17
Generative AI technologies such as ChatGPT will limit my opportunities to interact with others and socialize while completing coursework. / Generative AI technologies such as ChatGPT will limit students’ opportunities to interact with others and socialize while completing coursework | Students: 454, 3.24, 3, 1.32 | Teachers: 180, 3.69, 4, 1.3
Generative AI technologies such as ChatGPT will hinder my development of generic or transferable skills such as teamwork, problem-solving, and leadership skills. / Generative AI technologies such as ChatGPT will hinder students’ development of generic or transferable skills such as teamwork, problem-solving, and leadership skills | Students: 454, 3.3, 3, 1.33 | Teachers: 180, 3.74, 4, 1.41
If a fully online programme with the assistance of a personalized AI tutor was available, I would be willing to pursue my degree through this option. / If a fully online programme with the assistance of a personalized AI tutor was available, students should be open to pursuing their degree through this option | Students: 454, 2.92, 3, 1.46 | Teachers: 180, 3.21, 3, 1.52
I can become over-reliant on generative AI technologies. / Students can become over-reliant on generative AI technologies | Students: 454, 3.11, 3, 1.35 | Teachers: 180, 4.24, 4, 0.955
I believe generative AI technologies such as ChatGPT can improve my digital competence. / I believe generative AI technologies such as ChatGPT can improve students’ digital competence | Students: 454, 3.8, 4, 1.06 | Teachers: 180, 3.83, 4, 1.12
I believe generative AI technologies such as ChatGPT can improve my overall academic performance. / I believe generative AI technologies such as ChatGPT can improve students’ overall academic performance | Students: 455, 3.67, 4, 1.18 | Teachers: 180, 3.63, 4, 1.36
I believe generative AI technologies such as ChatGPT can help me save time. / I believe generative AI technologies such as ChatGPT can help students save time | Students: 453, 4.23, 4, 0.848 | Teachers: 180, 4.06, 4, 1.01
I think generative AI technologies such as ChatGPT can help me become a better writer. / I think generative AI technologies such as ChatGPT can help students become a better writer | Students: 455, 3.46, 4, 1.27 | Teachers: 180, 3.31, 3, 1.45
I believe AI technologies such as ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself. / I believe AI technologies such as ChatGPT can provide students with unique insights and perspectives that they may not have thought of themselves | Students: 455, 3.84, 4, 1.13 | Teachers: 180, 3.77, 4, 1.26
I think AI technologies such as ChatGPT can provide me with personalized and immediate feedback and suggestions for my assignments. / AI technologies such as ChatGPT can provide students with personalized and immediate feedback and suggestions for their assignments | Students: 455, 3.75, 4, 1.14 | Teachers: 180, 3.86, 4, 1.34
I think AI technologies such as ChatGPT is a great tool as it is available 24/7. / I think AI technologies such as ChatGPT is a great tool for students as it is available 24/7 | Students: 455, 4.16, 4, 0.893 | Teachers: 180, 3.81, 4, 1.17
I think AI technologies such as ChatGPT is a great tool for student support services due to anonymity | Students: 455, 3.91, 4, 1.12 | Teachers: 180, 3.77, 4, 1.29
Respondents recognized benefits such as saving time, providing personalized feedback, being available 24/7, and offering anonymity in student support services. However, there were concerns about over-reliance on AI technologies, limited social interaction, and the potential hindrance to the development of generic skills.
These findings highlight the need for a comprehensive AI policy in higher education that addresses the potential risks and opportunities associated with generative AI technologies. Based on these findings, some implications and suggestions for university teaching and learning AI policy include:
1. Training: Providing training for both students and teachers on effectively using and integrating generative AI technologies into teaching and learning practices.
2. Ethical Use and Risk Management: Developing policies and guidelines for ethical use and risk management associated with generative AI technologies.
3. Incorporating AI without replacing humans: Incorporating AI technologies as supplementary tools to assist teachers and students, rather than as replacements for human interaction.
4. Continuously Enhancing Holistic Competencies: Encouraging the use of AI technologies to enhance specific skills, such as digital competence and time management, while ensuring that students continue to develop vital transferable skills.
5. Fostering a transparent AI environment: Fostering a transparent environment where students and teachers can openly discuss the benefits and concerns associated with using AI technologies in higher education.
6. Data Privacy and Security: Ensuring data privacy and security while using AI technologies.
(1) Understanding, identifying and preventing academic misconduct and ethical dilemmas
To address academic misconduct, universities must develop clear guidelines and strategies for detecting and preventing the misuse of generative AI. Teachers emphasize the importance of creating university-wide policies on how to test students suspected of using AI to complete tasks in which AI use is prohibited or misused. As one student stated, “A clear set of rules about what happens if AI is used and resources on informing
[Table: Key areas identified from the qualitative data and corresponding recommendations]
1. Understanding, identifying and preventing academic misconduct and ethical dilemmas | Develop guidelines and strategies for detecting and preventing the misuse of generative AI; identify ethical dilemmas; familiarize students with ethical issues
2. Addressing governance of AI: data privacy, transparency, accountability and security | Be transparent about decisions concerning AI use; ensure data privacy and security; address ethical issues such as bias and stereotypes
3. Monitoring and evaluating AI implementation | Conduct longitudinal experiments to examine the effects of AI use; collect feedback from teachers and students to make informed decisions
4. Ensuring equity in access to AI technologies | Provide resources and support to all students and staff; ensure all students have access and training to AI tools
5. Attributing AI technologies | Promote academic integrity in AI use; develop guidelines on how to attribute generative AI’s contribution to student work
6. Providing training and support for teachers, staff and students in AI literacy | Enhance staff confidence and competence through adequate training; teach students how to use and critique the use of AI technologies; provide education on ethics; knowledge of the affordances, use, and limitations; and capability to evaluate AI outputs
7. Rethinking assessments and examinations | Design assessments that integrate AI technologies to enhance learning outcomes; develop assessment strategies that focus on students’ critical thinking and analysis
8. Encouraging a balanced approach to AI adoption | Recognize the potential benefits and limitations of generative AI technologies; avoid over-reliance on AI technologies; use AI technologies as complementary tools
9. Preparing students for the AI-driven workplace | Teach students how to use AI responsibly; develop curricula that equip students with AI skills and knowledge; familiarize students with AI tools they will encounter in university studies and the future workplace
10. Developing student holistic competencies/generic skills | Enhance students’ critical thinking to help them use AI technologies effectively; provide opportunities for developing competencies that are impeded by AI use, such as teamwork and leadership
students about the rule set are needed.” They also suggested, “Clearly stipulate in which
areas generative AI technologies are allowed and which are not. What are the procedures
to handle suspended cases? What are the consequences?” Another student mentioned
that “the level of restriction should be clarified.” Both teachers and students have also
suggested the use of assessments that minimize opportunities for AI misuse, such as oral
examinations or controlled settings where internet access is limited, to help maintain
academic integrity. Both teachers and students have also questioned “what is the definition of cheating?” in this AI era.
Teachers highlight the importance of identifying ethical dilemmas and recommend
familiarizing students with ethical issues, such as the boundaries between plagiarism
and inspiration and appropriate situations for seeking help from AI. Establishing clear
policies around AI use, including ethical guidelines and legal responsibilities, will help
students and staff navigate these complex issues. One teacher noted, “The education on
academic and research ethics should be strengthened.” Explicitly stipulating the areas
where AI is allowed and the procedures for handling suspected cases of misuse will help
maintain a transparent and equitable learning environment.
(2) Addressing governance of AI: data privacy, transparency, accountability and security
Universities must take responsibility for decisions made regarding the use of generative AI in teaching and learning, which includes being transparent about data
collection and usage, and being receptive to feedback and criticism. By disclosing
information about the implementation of generative AI, including the algorithms
employed, their functions, and any potential biases or limitations, universities can
foster trust and confidence among students and staff in AI technology usage. Teachers emphasize the importance of addressing ethical concerns, privacy, security, and
other related issues when using generative AI technologies. Teachers commented
“In general, its impact is inevitable. It may negatively affect social consciousness and
responsibility. Depending on climate change management and its consequences, it may
contribute to the demise of a significant portion of humanity. It may also protect and
advance the interests of those who benefit from chaos.”
Privacy and Security: AI technologies rely on vast amounts of data, which raises concerns about privacy and security if the data is not adequately protected. “Institutions should ensure that the data used by generative AI technologies is kept private and secure. This includes ensuring that any data used in training or testing the technology is de-identified, and that appropriate security measures are in place to prevent unauthorized access or use of data.”
Transparency and Accountability: Universities should be transparent about the
use of generative AI in teaching and learning, which includes disclosing information
about the algorithms and their functions, as well as any potential biases or limitations
of the AI tools. “It is essential to recognize ethical dilemmas and consider privacy,
security, and related issues when employing generative AI technologies.”
The complexity of AI technologies can make it difficult to hold organizations and
individuals accountable for their decisions and actions. Institutions should address
ethical issues, such as potential discrimination, bias, and stereotypes, while ensuring
data privacy and security.
(6) Providing training and support for teachers, staff and students in AI literacy
AI technologies should serve as complementary tools to support learning rather than as substitutes for traditional teaching methods. Students should be encouraged to use AI as an aid to their learning process and not depend on it solely for academic success.
Preparing students for an AI-driven workplace involves teaching them how to use AI responsibly, ethically, and effectively. Universities should develop curricula that reflect the increasing prominence of AI in various industries, ensuring that students are equipped with the skills and knowledge to navigate the evolving workplace landscape. This includes teaching students how to integrate AI into their workflows, evaluate the effectiveness of AI tools, and understand their role in professional settings. As one teacher notes, “Teaching students how to use it properly and understanding its limitations and strengths would be useful.”
Integrating AI technologies into teaching and learning involves familiarizing students with AI tools they will likely encounter during their university studies and in the workplace, as mentioned by a student who said, “Teach students how to best use AI tools and make AI tools a common part of education, just like PowerPoint and Excel.” Teachers suggest guiding students to recognize ethical issues and helping them self-appropriate AI in study and work settings.
As the workplace increasingly adopts AI technologies, universities should prepare
students for this shift. One student stated, “Plans should be implemented to assist
students in making better and more constructive use of AI in learning, career plan-
ning, and personal development.”
Discussion
Triangulating quantitative and qualitative data
The quantitative findings support the key areas found in the qualitative data for AI
integration in education. The quantitative data reveals that both students and teach-
ers share concerns about the potential misuse of AI technologies, such as ChatGPT,
in assignments (students: mean 3.67, teachers: mean 3.93). This emphasizes the need
for guidelines and strategies to prevent academic misconduct. Furthermore, there is
significant agreement among students and teachers on the necessity for higher educa-
tion institutions to implement a plan for managing the potential risks associated with
using generative AI technologies (students: mean 4.5, teachers: mean 4.54), highlighting the importance of addressing data privacy, transparency, accountability, and security. The overall positive perception of AI technology integration within education implies that proper policies should be in place to ensure responsible AI incorporation in higher education.
The concern that some students might use generative AI technologies to gain an
advantage in their assignments (students: mean 3.67, teachers: mean 3.93) under-
scores the importance of ensuring equal access to AI technologies for all students.
Moreover, the consensus that students must become proficient in using generative AI
technologies for their careers (students: mean 4.07, teachers: mean 4.1) highlights the
need for AI literacy and training for all stakeholders in the educational process, pre-
paring students for the AI-driven workplace.
Interestingly, teachers and students are unsure if teachers can accurately identify a
student’s use of generative AI technologies to partially complete an assignment (stu-
dents: mean 3.02, teachers: mean 2.72), yet they also believe that AI technologies can
provide unique insights and perspectives and personalized feedback. This suggests
that rethinking assessment methods may be necessary.
Data indicating that neither students nor teachers believe AI technologies will
replace teachers in the future (students: mean 2.14, teachers: mean 2.26) supports the
need for a balanced approach to AI adoption, utilizing AI technologies as comple-
mentary tools rather than substitutes for traditional teaching methods. Finally, con-
cerns that generative AI technologies could hinder students’ development of generic
or transferable skills, such as teamwork, problem-solving, and leadership (students:
mean 3.3, teachers: mean 3.74), emphasize the importance of focusing on students’
holistic competencies and generic skills in preparation for the AI-driven workplace.
[Figure: The AI Ecological Education Policy Framework, mapping stakeholders (students; teachers; staff; management; external agents) to the Pedagogical Dimension [Teachers], the Governance Dimension [Senior Management], and the Operational Dimension [Teaching and Learning and IT staff]]
For the Pedagogical dimension, the framework emphasizes the need to adapt teaching
methods and assessment strategies in response to AI’s growing capabilities, preparing
students for an increasingly AI-driven workplace. By focusing on pedagogy, the frame-
work ensures that AI technologies are harnessed to enhance learning outcomes and
develop critical thinking, creativity, and other essential skills, rather than undermining
academic integrity. Teachers are the initiators of the Pedagogical Dimension, as they are
the ones who design and implement lesson plans, activities, and assessments that utilize
AI technologies. They will need to have the expertise to determine how AI can best sup-
port and enhance students’ learning experiences. At the same time, they must possess
the necessary awareness and knowledge to educate students on the potential risks asso-
ciated with the use of generative AI in learning, particularly in assessments and assign-
ments, such as plagiarism and contract cheating. Teachers need to foster ethical use of
AI, for example through proper attribution to acknowledge the contributions of AI tech-
nologies in student work, and develop assessment tasks that require critical and analyti-
cal thinking to avoid AI-assisted plagiarism. By assigning teachers the responsibility for
this dimension, we ensure that AI tools are used in a way that is pedagogically sound and
enhances the learning outcomes of students.
The Governance dimension highlights the importance of addressing issues related to
academic misconduct, data privacy, transparency, and accountability. The framework
ensures that stakeholders understand and address the ethical challenges associated with
AI technologies, fostering responsible use and helping to maintain trust within the uni-
versity community. This focus on governance encourages universities to develop clear
policies and guidelines, ensuring that students and staff can navigate the complex ethical
landscape surrounding AI.
Senior management is the initiator of the Governance Dimension of the AI Ecological Education Policy Framework. As they hold decision-making authority, they are tasked with developing and enforcing policies, guidelines, and procedures that address the ethical concerns
Chan Int J Educ Technol High Educ (2023) 20:38 Page 22 of 25
surrounding AI use in education. These include academic integrity, data privacy, trans-
parency, accountability, and security. Senior management’s role is to ensure that AI is
used responsibly and ethically, fostering a learning environment that is fair, equitable,
and inclusive.
The Operational dimension of the framework underscores the need for ongoing moni-
toring, evaluation, and support to ensure the effective and equitable implementation of
AI technologies. By considering operational aspects, the framework encourages univer-
sities to provide training, resources, and support to all stakeholders, promoting equal
access to AI technologies and fostering an inclusive learning environment. Furthermore,
the operational dimension emphasizes the importance of continuous improvement and
adaptation, enabling universities to refine their AI integration strategies in response to
new insights and changing needs.
Teaching and Learning and IT staff are tasked with overseeing the Operational Dimension. They play a crucial role in managing and maintaining the AI technologies used in
the educational setting. Their tasks include providing training and support for both stu-
dents and staff, ensuring the proper functioning of AI tools, and addressing any techni-
cal issues that may arise. They can ensure that AI technologies are seamlessly integrated
into the educational environment, minimizing disruptions and maximizing their poten-
tial benefits.
It is crucial to recognize that the responsibility of each dimension in the ecological
framework should not be viewed in isolation. Collaboration and communication among
all stakeholders (universities, teachers, students, staff and external agents such as accred-
itation, quality assurance bodies) are essential to ensure the successful implementation
of any policy. Each group should actively participate in the development and execution
of AI-related initiatives and work together to achieve the desired outcomes in university
teaching and learning.
Conclusions
This study aims to establish an AI education policy for university teaching and learn-
ing, addressing concerns related to the use of text-generating AI in academic environ-
ments, such as cheating and plagiarism. The findings of this study yield ten key areas in
AI education policy planning, from which an AI Ecological Education Policy Framework
is constructed to fulfil the objective of the study. However, this study has some limita-
tions, including a relatively small sample size that may not be representative of all edu-
cational institutions. Additionally, the research only focused on text-based generative AI
technology and did not explore other types or variations. Lastly, the study relied on self-
reported data from participants, which may be subject to bias or inaccuracies.
This study proposes an AI Ecological Education Policy Framework to address the
diverse implications of AI integration in university settings. The framework consists of
three dimensions—Pedagogical, Governance, and Operational—each led by a respon-
sible party. This structure allows for a more comprehensive understanding of AI inte-
gration implications in teaching and learning settings and ensures stakeholders are
aware of their responsibilities. By adopting this framework, educational institutions can
align actions with their policy, ensuring responsible and ethical AI usage while maxi-
mizing potential benefits. However, more research is necessary to fully comprehend the
potential advantages and risks associated with AI in academic settings. Merely advocat-
ing for AI implementation in education is insufficient; stakeholders need to carefully
evaluate which AI technologies to employ, determine the best methods for their use, and
understand their true capabilities.
Acknowledgements
The author wishes to thank the students and teachers who participated in the survey.
Author contributions
The study was conducted solely by the author.
Declarations
Competing interests
The author declares no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.