APF 2023 Fall


The AIR Professional File
Fall 2023 Volume
Supporting quality data and decisions for higher education.

SPECIAL ISSUE EDITED BY HENRY ZHENG AND KAREN WEBBER, FEATURING FOUR ARTICLES ON ARTIFICIAL INTELLIGENCE AND ADVANCED ANALYTICS
Copyright © 2023, Association for Institutional Research
PREFACE

Artificial Intelligence and Advanced Analytics in Higher Education: Implications for Institutional Research and Institutional Effectiveness Practitioners

New technologies in our post-pandemic world have prompted substantial changes in every facet of higher education. The emergence of Big Data is one of several key facilitating conditions that accelerated the adoption of artificial intelligence (AI) and machine learning (ML) in key application areas. According to Gartner (2023), Big Data are the high-volume, high-velocity, and/or high-variety information assets that demand cost-effective and innovative forms of information processing that enable enhanced insight and decision-making, and process automation. Considerations for when, how, and why we use Big Data and forms of AI data-informed analytics are critical in institutional research (IR) and institutional effectiveness (IE).

Recently, Chat Generative Pre-trained Transformer (ChatGPT) and generative AI tools, including those listed by Dilmengali (2023), have grabbed our attention for their novelty and ability to provide answers to questions in a conversational style. Although they have risks (Reagan, 2023), and refinements are being introduced constantly (as is inherent in a continuous learning model), we find the hands-on user experience of these AI chatbots simultaneously interesting and worrisome. ChatGPT bots and image-building tools such as DALL-E from OpenAI seem to be the latest in AI applications that have generated media hysteria. Other AI-supported systems have been used in higher education, however, including the Georgia Institute of Technology's use of AI Jill Watson (Goel & Polepeddi, 2019) for student tutoring and the U.S. Department of Education's use of a chatbot for federal financial aid (Aidan) (Federal Student Aid, n.d.). The soaring interest in ChatGPT and other AI tools signals that the AI/ML revolution is accelerating (McKendrick, 2021). According to Bill Gates (2023), there have been two technology revolutions in his lifetime: the first was the introduction of a graphical user interface as the forerunner of every modern operating system; and now there is a second revolution: "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other" (Gates, 2023).

In this special volume of the biannual Association for Institutional Research's (AIR) Professional File, we briefly describe some of the key factors that helped drive the development of AI and ML in higher education; we also include a focus on the implications and opportunities for IR and IE professionals. Although this topic continues to evolve, we think it is important to forge ahead with some discussion, while acknowledging that some aspects of these new tools will change—and will change rapidly. Nevertheless, as critical colleagues on our campuses and in policy agencies, we need to be engaged with others on this topic right away. We believe it is essential that IR/IE colleagues (who either already have or who want a seat at the table) contribute actively to discussions about AI in higher education. Being involved in these discussions with senior administrative officials and academic instructional staff members can help cement the perception that IR/IE professionals are knowledgeable, broadly skilled, and able to situate issues within the context of a specific campus environment (yes, IR/IE professionals are indeed multitalented). We could wait 6 to 12 months or more and see how the AI tools evolve, but we believe it is more valuable for IR/IE leaders to get engaged as soon as possible, considering the issues and implications, while being mindful of the likelihood that there will be changes to the tools, techniques, data governance, and other institutional policies.

According to Digital Science's Dimensions Database (dimensions.digital-science.com, accessed May 23, 2023), the number of publications in higher education related to AI in general, as well as publications specific to large language models (LLMs), predictive analytics, and ChatGPT, has climbed a steep trajectory in the past few years. As shown in Figure 1, publications about general AI and predictive analytics have been growing steadily since 2017, but publications about LLMs and generative AI models such as ChatGPT have increased exponentially only within the past year.

Figure 1. Scholarly Publications in Key Artificial Intelligence–Related Areas in Higher Education


If the speed with which ChatGPT grabbed people's attention is stunning, the subsequent rush to leverage its growth is equally dazzling. Companies and organizations rushed to create plugins for ChatGPT. (A ChatGPT plugin is a software add-on that integrates other applications into the ChatGPT AI chatbot. Plugins allow a third-party software or content generator to tap into ChatGPT's capabilities for search optimization and conversational interaction.) As of June 17, 2023, less than 7 months since the official launch of ChatGPT, nearly 500 plugins had been published and connected to ChatGPT 4.0. For example, the plugin ScholarAI allows users to use ChatGPT's interface to answer questions on scholarly articles and research papers. The plugin SummarizeAnything helps users summarize books, articles, and website content. More plugins and similar products are likely to follow.

AI and other advanced analytics in higher education can serve to benefit students in a number of ways. Informed by the work of Zeide (2019) and Holmes and Tuomi (2022), we group the current AI and advanced analytic techniques available in higher education into four categories:

1| Institutional use, including marketing and student recruitment, estimating class size, optimizing course catalog descriptions, allocating resources, network security, and facial recognition

2| Student support, including academic monitoring, course scheduling, suggesting majors and career pathways, allocating financial aid, identifying students at risk, and supporting mental health

3| Instruction, including personalized learning, creating library guides, using generative language models (e.g., ChatGPT, DALL-E), and making grading more efficient

4| Scholarly research, including synthesizing literature, drafting grant proposals, and creating new knowledge in many disciplines (both within individual disciplines as well as cross-disciplinary collaborations)

During the early years when AI was introduced to higher education, both in the United States and in other countries, we saw some promising applications of AI and ML. Early adopters sought to enhance student success through tools such as online chat assistants, homework tutoring chatbots, or course learning systems that sought to gather student learning data from multiple sources. Some of the early tools were not user friendly, lacked comprehensive data, and/or did not have faculty buy-in, and so did not remain viable. However, these early tools sharpened our thinking, and the ensuing refinements moved members of the higher education community forward on how digital technologies can contribute positively to the higher education mission.

Over the past few years, Georgia State University (GSU) has become well known for its success in gathering and using voluminous data points every day that are related to student characteristics (e.g., financial aid need) to predict and track student academic progress. Its extensive use of data-enabled digital systems, in combination with human advisors, has produced a significant impact on student success and graduation. The GSU system was quite successful, and GSU now hosts the National Institute of Student Success (NISS), a national effort aimed at helping institution officials identify potential challenges related to student access and find ways to maximize impact and ensure success for all students.
A number of institutions are incorporating AI into teaching and learning as well as into campus operations. For example, team members at Rensselaer Polytechnic Institute have incorporated an AI-powered assistant into a language-immersive classroom that helps students learn to speak Mandarin (Su, 2018). According to Gardner (2018), leaders at Elon University are using an AI-based course planning and advising system developed by a tech company, Stellic, to plan courses, consider cocurricular activities, and keep students on the path to graduation. Also according to Gardner, leaders at the University of Iowa are using AI to monitor campus buildings for energy efficiency and for facilities problems. These and other examples of AI-based systems can promote student and institution success, but they also require staff to have robust technical skills and relevant ways of thinking about data.

An important concern about the use of Big Data or comprehensive predictive analytic models is the high potential for the unintended inclusion of bias, either through training data that do not fully represent the population under study or through analyses that fail to contextualize the results to a broader population. The unique changes that occurred during or as a result of the Covid-19 pandemic, as well as continued emphases on the need for diversified campuses, left many institution officials unable to reliably use historical data for predicting the future.

Along with applications in teaching and learning and overall student success, AI is growing its applications in research as well. We have an explosive list of AI applications in business and industry such as health care, banking, and retail customer service. AI is gaining strength in university endeavors such as Emory University's AI.Humanity Initiative and the Graz Center for Machine Learning. Both of these initiatives are focused on interdisciplinary efforts to consider ways in which AI can improve aspects of society. We believe that collaborative, interdisciplinary efforts like these will make dramatic improvements in our higher education systems and overall quality of life.

An ongoing concern about data analytics will be ensuring ample representation of the population under study and/or that the analyses are contextualized to the broader population. The unique changes that occurred during or as a result of the Covid-19 pandemic, as well as continued emphases on the need for diversified campuses, left many institution officials unable to use historical data to reliably predict the future. Vigilance with continued improvements in data security and unbiased models will continue as we progress in the use of AI in higher education, and IR practitioners must be an integral part of these discussions.

Foreseeing the significant changes and implications from AI-assisted education technology implementation in all aspects of education, the U.S. Department of Education issued a guidance document (U.S. Department of Education, 2023) acknowledging that AI poses both risks and opportunities in teaching, learning, research, and assessment. The report recommends several key considerations as key stakeholders continue to explore the use of AI in educational and other academic endeavors:

• Emphasize humans-in-the-loop: Keep a humanistic view of teaching front and center.

• Align AI models to a shared vision for education: Humans, not machines, should determine educational goals and measure the degree to which models fit and are useful.

• Design AI using modern learning principles: Connect AI algorithms with principles of collaborative and social learning, and respect the student not just for their cognition but also for the whole human skillset.

• Prioritize strengthening trust: Incorporate safety, usability, and efficacy in creating a trusting environment for the use of AI.

• Inform and involve educators: Show the respect and value we hold for educators by informing and involving them in every step of the process of designing, developing, testing, improving, adopting, and managing AI-enabled edtech.

• Develop education-specific guidelines and guardrails: The issues are not only data privacy and security, but also new issues such as bias, transparency, and accountability.

Clearly, the growth of AI tools in the world around us will also impact current strategies and actions in higher education. Allowing only a short time to adjust, higher education officials must continue to consider AI's impact on student and institutional success. This special volume of the Professional File includes four thoughtful articles related to specific facets of AI and/or advanced analytics in higher education today. In this volume we seek (a) to bring attention to and provide an effective introduction to AI/ML developments in higher education; (b) to introduce IR/IE professionals to some of the latest developments in AI/ML, especially in generative AI, natural language processing, and predictive analytics; and (c) to discuss policy, ethics, privacy, and IR/IE workforce implications of these new developments. Each article covers a specific facet or application of AI in higher education. Time and space do not allow us to cover all of the equally important topics, but we offer these topics as a starting point for future discussions.

In the first article, Kelli Bird describes promises as well as the cautions that must be considered in the use of predictive analytics to identify at-risk students. With her eyes wide open to the potential challenges of algorithmic bias and the need for a personal touch, Bird offers examples of success in student support that have occurred through carefully considered predictive modeling. Bird makes an excellent point that, as more-advanced analytics tools become available, the main challenge will not be whether the algorithms (i.e., from machines) are able to identify at-risk students better and more efficiently than humans. Instead, most of the challenges will surround the question of how humans will use the output that machines provide. This aligns with the U.S. Department of Education's key observation that humans, not machines, should determine educational goals and measure the degree to which models fit and are useful.

In the second article, Emily Oakes, Yih Tsao, and Victor Borden urge readers to consider how predictive analytics at large scale, as well as applications of AI, can be used to center the student voice in developing higher education access and policy development related to learning analytics and AI-embedded student supports. Like Bird, these authors remind readers to be mindful of the potential biases that can be inadvertently built into analytic models, and they urge researchers to ground data in a social justice framework. This cannot be a one-and-done approach; instead, it must include a general framework that is used for all analytics tasks, as well as the policies governing the collection, management, and implementation of data-based systems. Oakes, Tsao, and Borden's article aligns well with some of the keen observations made by Cathy O'Neil in her bestselling book, Weapons of Math Destruction, such as suggesting that, lacking a humanistic perspective, machine algorithms would rely on historical data and learning models that cause harm to those less favored by historical data and machine logics.

We know that academic advising is critical to student success; however, resource-constrained higher education institutions might not have the capacity to offer comprehensive student support that can yield success. Aspects of AI, including LLMs, enable large-scale collection of data and automated data systems to assist; the authors of the third article describe an enterprise-level academic system called AutoScholar. Professor Rawatlal developed the system, and colleague Rubby Dhunpath led the implementation of a multifaceted advising system that provides information to students as well as to their instructors, department leaders, and other administrative managers who seek to examine student success across a college or total institution. Authors Rawatlal and Dhunpath describe the AutoScholar system and acknowledge the importance of being able to provide advising information to students, regardless of institutional resources. They acknowledge the high benefits of a data-informed application that augments automated information with human judgement.

In the fourth and final article in this volume, Michael Urmeneta starts with a review of recent discussions on the potential impact of AI in higher education, the increasing proliferation of AI tools, and the need for ethics and accountability. Urmeneta reflects on transitions that helped carve out the path toward AI and advanced data analytics in higher education as well as on the need for ethics and accountability, and offers a cogent discussion of many important implications for IR and IE professionals. Although our landscape for ML and other forms of AI continues to evolve, Urmeneta reminds us that the future is here, and it is important that we understand the technologies, how we will use them, and how we will ensure that the data are used responsibly and with transparency. As those who are deeply embedded in the collection, storage, analysis, and reporting of data, IR and IE professionals must firmly understand the data and how they are being used within a particular context and without black box designs. IR professionals can ensure ethical deployment, privacy and confidentiality of data, and guard against bias. We like Urmeneta's comment, "Being a passive spectator is neither optional nor tenable." With AI and advanced data analytics, we encourage IR/IE professionals to seize the day!

Although the first paper on AI was published more than 50 years ago and AI has been embedded in business and industry practices for a few decades, applications of AI are quite new in the higher education arena. We realize that we offer this volume to Professional File readers closer to the beginning of the journey into AI and advanced analytics in the higher education context. The months ahead will see a growth in publications on this topic in higher education, but we are confident that the articles herein can help Professional File readers contemplate their role and ways to stay actively involved.

In its policy guidance document, the U.S. Department of Education (2023, p. 4) acknowledged, "AI is advancing exponentially, with powerful new AI features for generating images and text becoming available to the public and leading to changes in how people create text and images. The advances in AI are not only happening in research labs but also are making news in mainstream media and in educational-specific publications." With the rapid speed of AI-related developments, the U.S. Department of Education considered its policy guidance document not as a definitive document but rather as a starting point for discussion. Likewise, we believe that this volume of Professional File offers beginning conversations from the authors.

We hope you enjoy the articles in this volume. We believe that AI and advanced analytics will continue to grow in our world of higher education, and, as they grow, we hope you will contribute to the positive impact of AI for IR and IE practitioner success.

Henry Zheng
Karen Webber

Editors

Henry Zheng, Special Guest Editor, Carnegie Mellon University
Karen Webber, Special Guest Editor, The University of Georgia
Iryna Muse, Editor, University of Alabama System
Inger Bergom, Assistant Editor, Harvard University
Leah Ewing Ross, Managing Editor, Association for Institutional Research

ISSN 2155-7535
REFERENCES

Dilmengali, C. (2023). Top 35 generative AI tools by category (text, image, . . . ). https://research.aimultiple.com/generative-ai-tools/

Federal Student Aid. (n.d.). Meet Aidan. https://studentaid.gov/h/aidan

Gardner, L. (2018, April 8). How A.I. is infiltrating every corner of the campus. Chronicle of Higher Education. https://www.chronicle.com/article/how-a-i-is-infiltrating-every-corner-of-the-campus/

Gartner. (2023). Information technology Gartner glossary. https://www.gartner.com/en/information-technology/glossary/big-data

Gates, B. (2023, March 21). The age of AI has begun. Bill Gates blog. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

Goel, A., & Polepeddi, L. (2019). Jill Watson: A virtual teaching assistant for online education. In C. Dede, J. Richards, & B. Saxeberg (Eds.), Learning engineering for online education. Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781351186193-7/jill-watson-ashok-goel-lalith-polepeddi

Holmes, W., & Tuomi, I. (2022). The state of the art and practice of AI in education. European Journal of Education, 57(4), 542–570. https://onlinelibrary.wiley.com/doi/10.1111/ejed.12533

McKendrick, J. (2021). AI adoption skyrocketed over the last 18 months. Harvard Business Review. https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months

Reagan, M. (2023, February 27). The darker side of ChatGPT. IIA blog. https://iianalytics.com/community/blog/darker-side-of-chatgpt

Su, H. (2018, August 30). Mandarin language learners get a boost from AI. RPI blog. https://everydaymatters.rpi.edu/mandarin-language-learners-get-a-boost-from-ai/

U.S. Department of Education. (2023). Artificial intelligence and future of teaching and learning: Insights and recommendations. Office of Educational Technology. https://tech.ed.gov/ai-future-of-teaching-and-learning/

Zeide, E. (2019). Artificial intelligence in higher education: Applications, promise and perils, and ethical questions. EDUCAUSE Review, August 26, 2019. https://er.educause.edu/articles/2019/8/artificial-intelligence-in-higher-education-applications-promise-and-perils-and-ethical-questions
Table of Contents

Predictive Analytics in Higher Education: The Promises and Challenges of Using Machine Learning to Improve Student Success
Article 161. Author: Kelli Bird

Centering Student Voice in Developing Learning Analytics and Artificial Intelligence–Embedded Supports
Article 162. Authors: Emily Oakes, Yih Tsao, and Victor Borden

Advising at Scale: Automated Guidance of the Role Players Influencing Student Success
Article 163. Authors: Randhir Rawatlal and Rubby Dhunpath

Reflections on the Artificial Intelligence Transformation: Responsible Use and the Role of Institutional Research and Institutional Effectiveness Professionals
Article 164. Author: Mike Urmeneta

About Special Issues of the AIR Professional File

About AIR
Predictive Analytics in Higher Education: The Promises and Challenges of Using Machine Learning to Improve Student Success

Kelli Bird

About the Author


Kelli Bird is a Research Assistant Professor at the University of Virginia’s School of Education and
Human Development.

Acknowledgments
I am grateful to Ben Castleman, whose continued collaboration and mentorship has been instrumental in my
exploration of this topic. I am also grateful for the long-standing partnerships with the Virginia Community
College System and Bloomberg Philanthropies that have enabled us to do this work.

Abstract
Colleges are increasingly turning to predictive analytics to identify “at-risk” students in order to target additional
supports. While recent research demonstrates that the types of prediction models in use are reasonably
accurate at identifying students who will eventually succeed or not, there are several other considerations for
the successful and sustained implementation of these strategies. In this article, I discuss the potential challenges
to using risk modeling in higher education and suggest next steps for research and practice.

The AIR Professional File, Fall 2023 https://doi.org/10.34315/apf1612023

Article 161 Copyright © 2023, Association for Institutional Research



To administrators who have been searching for
INTRODUCTION solutions to improve student success, risk modeling
With persistently low retention and graduation rates and resource targeting are tempting solutions.
at many colleges and universities, higher education Because colleges often lack the analytic capacity
administrators are increasingly looking for innovative to implement these methods, private industry has
ways to improve student success outcomes. As a stepped in with solutions, and those solutions are
result, predictive analytics are increasingly pervasive now a $500 million industry. Roughly a third of
in higher education (Ekowo & Palmer, 2016). The colleges and universities have bought predictive
most common and arguably the most impactful analytics products, with each institution spending
application of predictive analytics is to use a approximately $300,000 per year (Barshay &
prediction model to identify students who are at Aslanian, 2019). Despite this investment, however,
risk of doing poorly in a course or of leaving college there is no rigorous evidence to show that these
without completing, and to intervene with these methods (either proprietary or in-house applications
students early before they are too far off track. For 1
developed by colleges themselves) are successful
instance, more than half of colleges and universities at improving student outcomes.2 What’s more,
report using “statistical modeling to predict the there are concerns that racially biased algorithms
likelihood of an incoming student persisting to or poorly executed messaging could exacerbate,
degree completion” (Ruffalo Noel Levitz, 2021, p. 22). instead of mitigating, existing inequities (Acosta,
Once the at-risk students have been identified by 2020; Angwin et al., 2016; Burke, 2020; Engler, 2021).
the prediction model, then faculty or staff proactively In this article, I will discuss the promises of predictive
reach out to these students with offers of additional analytics in higher education, the challenges of
supports, such as academic advising or tutoring. predictive analytics (human vs. machine), obstacles
While these types of resources are typically available to effective implementation, and recommendations
to students upon request (though perhaps at limited for next steps for research and practice.
capacity), many students do not take advantage
of them. Since colleges do not typically have
the resources to provide all students with these
extended supports—at the median community
PROMISES OF
college, academic advisors are responsible for
PREDICTIVE ANALYTICS
2,000 students (Carlstrom & Miller, 2013)—the goal IN HIGHER EDUCATION
of predictive analytics is for colleges to efficiently While the current research is lacking in rigorous
target the resources to students who need the evaluations of the impact of risk modeling and
resources to succeed. I will refer to this application resource targeting on student success, an increasing
of predictive analytics as “risk modeling and resource body of literature demonstrates that algorithms
targeting” throughout this article. can achieve relatively high levels of accuracy at

1. Colleges also use predictive analytics for enrollment management purposes, such as identifying high-target students for recruitment or offering generous
financial aid packages. These enrollment management practices are designed to bolster the quality of a colleges’ incoming class. In this article, I choose to focus
on predictive analytic applications designed to support at-risk students.
2. Still, there are several anecdotes to suggest that current applications risk modeling and resource targeting are leading to improved student outcomes. Most
notably is Georgia State University (GSU), which reports an 8-percentage-point increase in its graduation rate since implementing EAB’s predictive analytics
products. This implementation accompanied several other changes at the university, however (Swaak, 2022).

Fall 2023 Volume 12


predicting student success. For a recent cohort the output the machines provide. A quote from
of high school seniors, my colleagues and I Pedro Domingos highlights this tension: “It’s not man
compared the accuracy of a relatively simple logistic versus machine; it’s man with machine versus man
regression model with the students’ professional without. Data and intuition are like horse and rider,
college advisors at predicting the students’ college and you don’t try to outrun a horse; you ride it.”
enrollment outcomes (Akmanchi et al., 2023). We For humans to harness the machine effectively, it is
found that the logistic model is at least as accurate important to remember two important distinctions.
as the advisors for students who interacted with First, much like a horse and rider, the human and
the advisors up to eight times. This is true even machine have different objectives when it comes
though advisors likely had much more pertinent to predicting which students are at risk. Humans
information about the students’ college search, (administrators, policymakers, researchers, etc.) have
such as the names of colleges where they had been complex objectives of increasing student success,
admitted. In a separate line of work, my colleagues improving equity, and ensuring the longevity of the
and I found that incorporating behavioral trace data colleges and universities. The machine’s objective is
from online learning management systems can significantly improve the prediction accuracy for new students—which is the population with the lowest retention rates and thus those for whom predictions could be most important (Bird et al., 2022). In recent University of Oregon applications, a more advanced machine learning (ML) algorithm (XGBoost) is roughly three times better at identifying at-risk students than relying on students' high school GPAs alone (Greenstein et al., 2023).

CHALLENGES OF PREDICTIVE ANALYTICS: HUMAN VS. MACHINE

There are many challenges to successfully deploying risk modeling and resource targeting in higher education. However, as the research I briefly discuss above demonstrates, the main challenge will not be whether the algorithms (i.e., machine) are able to identify at-risk students better and more efficiently than humans. Instead, most of the challenges surround the question of how humans will use what

much simpler: to make the best predictions possible using the information provided. Second, the human and machine have different responsibilities. The humans have the responsibility to rely on context when building the prediction models, since there are many subjective decisions to be made regarding sample construction, outcome specification, and predictors to include. Humans must also investigate potential biases within models, which I will discuss below. Once the predictions have been made and at-risk students have been flagged, the machine's job is done, but the human's job is not: people must decide how to communicate to at-risk students and which additional supports to provide. This is no simple undertaking, and requires significant engagement with colleges' faculty and advising staff. Allison Calhoun-Brown at GSU highlights the importance of the human work: "The innovation is not the technology. The innovation is the change that accompanies the technology" (Calhoun-Brown quoted in Swaak, 2022). In other words, if we want to improve student success outcomes, it is not a question of if we use predictive analytics, but instead how we use it.

Fall 2023 Volume 13
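The comparison above, a model's risk score versus high school GPA alone, comes down to a simple evaluation question: of the students who ultimately were not retained, how many did each ranking flag? A minimal sketch of that evaluation follows; all data, names, and numbers are invented for illustration and are not drawn from the Oregon or Bird et al. analyses.

```python
# Sketch: comparing two ways of flagging at-risk students.
# Illustrative toy data only; not any institution's actual model.

def recall_of_flagged(scores, outcomes, flag_count):
    """Rank students by risk score (descending), flag the top
    `flag_count`, and return the share of true non-completers
    (outcome == 0) who were flagged."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    flagged = set(ranked[:flag_count])
    at_risk = [i for i, y in enumerate(outcomes) if y == 0]
    if not at_risk:
        return 0.0
    caught = sum(1 for i in at_risk if i in flagged)
    return caught / len(at_risk)

# Toy cohort: 1 = retained, 0 = not retained.
outcomes   = [1, 0, 1, 0, 1, 1, 0, 1]
# A model's predicted risk (higher = more at risk) ...
model_risk = [0.1, 0.9, 0.2, 0.8, 0.3, 0.2, 0.7, 0.1]
# ... versus a "low GPA = at risk" ranking (risk = 4.0 - GPA).
gpa        = [3.8, 3.5, 2.1, 2.0, 3.0, 3.9, 3.6, 2.8]
gpa_risk   = [4.0 - g for g in gpa]

print(recall_of_flagged(model_risk, outcomes, 3))  # 1.0
print(recall_of_flagged(gpa_risk, outcomes, 3))
```

With these invented numbers the model catches all three eventual non-completers in its top three flags while the GPA ranking catches only one, the same style of contrast (not the same data) as the threefold improvement reported above.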


OBSTACLES TO EFFECTIVE IMPLEMENTATION

One of the biggest obstacles that colleges face in implementing predictive analytics is effectively communicating to students (Acosta, 2020). You could imagine someone drafting this message: "Kelli, an algorithm flagged you as someone likely to fail English 101. Work hard to improve your grade." This message raises several concerns. A recipient might be concerned about their data privacy: How is the college using their personal data to determine their likelihood of failing? This type of messaging could also reinforce stereotype threats of not being "good enough" or "college material," and being labeled as likely to fail could become a self-fulfilling prophecy. Perhaps this message would be more appropriate: "Hi Kelli, this is Professor Smith. I noticed you've been interacting less frequently than some of your classmates. Let's set up a time to talk about how you're doing in the class." This message puts more of a human touch on the outreach, does not lead with the idea of failure, and provides a concrete next step on which the student can act. My colleagues and I are currently working with social psychologists to design effective messaging for an upcoming pilot program, which I describe below. Simply getting the communication right is not sufficient, however. Several recent low-touch nudge interventions with behaviorally informed messaging failed to improve student outcomes (e.g., Bird, Castleman, Denning, et al., 2021), so it is also imperative for students to be connected to the right supports to meet their needs.

Another barrier to successfully implementing risk modeling and resource targeting is achieving buy-in from faculty and staff. Among colleges and universities using statistical modeling to predict graduation, fewer than one-third of administrators thought it was a very effective strategy at improving student success (Ruffalo Noel Levitz, 2021). One of the reasons that faculty may distrust predictive analytics is their black box nature. Many prediction models in use are from third-party for-profit vendors; their proprietary nature means that institutions have little understanding of what goes on under the hood. A recent GAO report specifically calls out these higher education models as needing more scrutiny from both their consumers and from regulators (Bauman, 2022).

Humans also may find it difficult to incorporate risk modeling due to the impersonal nature of the machine. Prediction models inherently rely on information from a large historical sample and generate predictions to optimize the accuracy for the group as a whole, as opposed to considering potential nuance in a particular individual's circumstance. In a recent pilot where my colleagues and I collaborated with a community college to improve transfer outcomes for their students, we incorporated an algorithm that generated personalized course recommendations that accounted for the probability that the student would succeed in the course. Despite significant collaboration on how the algorithm would select the courses to recommend, the advisors still changed roughly one out of three courses the algorithm had identified before communicating the recommendations to students.

Finally, many are concerned about the potential negative impacts of algorithmic bias to exacerbate, instead of mitigate, existing inequities. These concerns are not unfounded: several studies have found the existence of algorithmic bias in higher education prediction models (e.g., Baker & Hawn, 2021; Yu et al., 2020).3 When my colleagues and I investigated algorithmic bias in two models predicting course completion and degree completion among community college students, we find evidence of meaningful bias (Bird et al., 2023). Specifically, we find that the calibration bias present in the models would lead to roughly 20% fewer at-risk Black students receiving additional supports, compared with a simulated unbiased model.4 Our exploration suggests that this bias is driven not by the inclusion of race or socioeconomic information as model predictors, but instead by success being inherently more difficult to predict for Black students. This result may reflect structural racism in K–12 education systems where many Black students have access to fewer advantages. Specifically, model predictors based on past performance reflect those unequal circumstances and would not be as powerful in predicting a disadvantaged student's full potential. The algorithmic bias is particularly prevalent among new students for whom there is very little baseline information, suggesting that additional pre-matriculation data collection could mitigate bias in this case. We also find that the amount of algorithmic bias—and the strategies for mitigating the bias—can vary substantially across models; it is therefore imperative to address bias on a case-by-case basis.5

RECOMMENDATIONS FOR NEXT STEPS FOR RESEARCH AND PRACTICE

First and foremost, we need rigorous evaluations of different strategies that incorporate predictive analytics. My colleagues and I are planning a pilot program that we will evaluate through a randomized control trial, with three experimental conditions: (1) control (i.e., business as usual); (2) early-term predictions, in which community college instructors will be informed which of their students a prediction model flagged as being at risk, with the instructors receiving training in how best to communicate with those students; and (3) early-term predictions plus additional embedded course supports. We include the third condition recognizing that community college instructors likely face meaningful constraints in the additional supports they can provide students on their own. While randomized control trials are the gold standard of research, they are not the only rigorous method. For institutions interested in evaluating their predictive analytic applications, there are many researchers, including me, who would be happy to collaborate on designing a quasi-experimental study.

Another important topic for future research is to better understand which point(s) in the distribution of predicted risk would be most effective and efficient for intensive resource targeting. While students are typically lumped into categories based on their risk (e.g., two categories: at risk or on track; three categories: green, yellow, or red), the raw model output is a continuous predicted risk score ranging from zero to one. An immediate thought may be to target the students at highest risk, meaning those least likely to succeed. However, it might be quite difficult to get these students to engage with additional supports, and they may not have a high likelihood of success even when they are targeted. So perhaps students at a more moderate

3. Algorithmic bias has been found in other predictive analytic applications outside higher education, including criminal justice and health care (Angwin et al.,
2016; Obermeyer et al., 2019).
4. Calibration bias occurs when, conditional on predicted risk score, subgroups have different actual success rates. In our application, this means that, at a
particular point in the distribution of predicted risk scores, Black students have a higher success rate than White students.
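The calibration bias defined in footnote 4 can be checked directly from model output: bin students by predicted risk score, then compare observed success rates across subgroups within each bin. A minimal sketch with invented records follows; the bin width, group labels, and rates are illustrative only, not values from Bird et al. (2023).

```python
# Sketch of a calibration check: within each predicted-risk bin,
# do subgroups have the same actual success rate?
# Illustrative data only.
from collections import defaultdict

def success_rate_by_group(records, bin_width=0.25):
    """records: (risk_score, group, succeeded) tuples.
    Returns {(bin_start, group): observed success rate}."""
    tally = defaultdict(lambda: [0, 0])  # (bin, group) -> [successes, n]
    for score, group, succeeded in records:
        b = min(int(score / bin_width), int(1 / bin_width) - 1) * bin_width
        cell = tally[(round(b, 2), group)]
        cell[0] += int(succeeded)
        cell[1] += 1
    return {k: s / n for k, (s, n) in tally.items()}

records = [
    (0.30, "A", 1), (0.35, "A", 0), (0.30, "B", 1), (0.35, "B", 1),
    (0.80, "A", 0), (0.85, "A", 0), (0.80, "B", 1), (0.85, "B", 0),
]
rates = success_rate_by_group(records)
# In the 0.75-1.0 bin, group B succeeds more often than group A
# at the same predicted risk: the signature of calibration bias.
print(rates[(0.75, "A")], rates[(0.75, "B")])
```

In the high-risk bin, group B's observed success rate exceeds group A's at the same predicted risk, which is exactly the pattern footnote 4 describes.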
5. Our related work also suggests that small changes in modeling decisions (e.g., choosing logistic regression versus XGBoost as the prediction model) can
significantly change the sorting of students within the risk score distribution, and therefore have the potential to significantly alter which students would receive
additional supports (Bird, Castleman, Mabel, et al., 2021).
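Footnote 5's observation, that modeling choices can reshuffle who sits at the top of the risk distribution, is easy to check in practice: compare which students two models place in their top k. The scores below are invented for illustration.

```python
# Sketch of footnote 5's point: two models with similar overall
# accuracy can still rank students differently, changing who gets
# flagged. Scores below are invented for illustration.

def flagged_top_k(scores, k):
    """IDs of the k highest-risk students under one model's scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

model_a = {"s1": 0.91, "s2": 0.62, "s3": 0.58, "s4": 0.40, "s5": 0.12}
model_b = {"s1": 0.88, "s2": 0.45, "s3": 0.61, "s4": 0.63, "s5": 0.10}

a, b = flagged_top_k(model_a, 2), flagged_top_k(model_b, 2)
print(a, b, a & b)
```

Here the two models agree on only one of their two flagged students, so the choice between them would change who receives outreach.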



risk level, or students just at the margin of success, would be a more appropriate targeting strategy. It is not clear where in the distribution of risk we would expect to see the most bang for the buck in terms of resources moving students from failure to success; thus future research could significantly improve the cost-effectiveness of risk modeling and resource targeting. It is important to note that the answer to this question will almost certainly be context-dependent. For example, at more-selective colleges with higher persistence and graduation rates, the best strategy would likely target those with the highest risk scores; at broad or open access institutions, however, there is a much wider range of students who could benefit from additional resources. Institutional research (IR)/institutional effectiveness (IE) professionals who are involved in institution assessment are positioned well to contribute important context on student success that would not only inform the design of student success supports tied to the risk models, but also estimate the institution's return on investment of these additional resources.

I also believe that ML has the potential to improve how we structure the targeted student supports. Struggling students have a variety of different needs that may be inhibiting their success: lack of academic preparedness, financial constraints, inflexible schedules, unfamiliarity with administrative processes, and so on. So how do we connect students to the right supports that they need? ML methods commonly used in the private sector such as market basket analysis (Aguinis et al., 2013) have a lot of potential to inform this question, although it would require colleges to invest in the collection of student support usage data. IR officials who are involved in campus-wide data governance could help colleagues think about data collection, management, and analytic uses of this and other student data, including the integration of this data collection into existing learning management systems or student success platforms that already track several other student behaviors (e.g., Blackboard).

Finally, it is imperative for us as an education research community to develop standards for ethical considerations relevant to these applications. Researchers and policymakers are increasingly recognizing the need for transparency and student rights with regard to artificial intelligence (AI) in education (e.g., Holmes & Tuomi, 2022; U.S. Department of Education, 2023), though additional considerations should be given to the technical aspects of algorithmic bias. There are many metrics that could be used to determine whether a model is generating fair predictions, and the choice of metric is critical since they can be at odds with each other (Kleinberg et al., 2016). In the paper I describe above (Bird et al., 2023), my colleagues and I chose to focus on calibration bias because we thought the most important type of bias in this application would be at-risk students from underrepresented or minoritized groups who are less likely to receive additional supports, compared to at-risk students from majority groups. However, this metric is less appropriate for an application where at-risk students are counseled out of college majors that are associated with the highest earnings (e.g., Barshay & Aslanian, 2019). We also need to develop standards for what level of algorithmic bias is acceptable since reducing bias often leads to decreases in overall model accuracy, and it may not be feasible to achieve zero bias while still maintaining a high-performing model.

At this time, predictive analytics has shown its promise at efficiently identifying at-risk students; with the possible inclusion of more-detailed data from learning management systems, these predictions will only improve (Bird et al., 2022). Still, there is much important work to be done to both unlock its full potential and to ensure its safe use. Before risk modeling practices and applications that use predictive analytics become too ingrained in our colleges and universities, it is critical that we use the momentum fueled by the various discussions I mention above to ensure a fruitful future for predictive analytics in higher education.

REFERENCES

Acosta, A. (2020). How you say it matters: Communicating predictive analytics findings to students. Report, New America. https://www.newamerica.org/education-policy/reports/how-you-say-it-matters/

Aguinis, H., Forcum, L. E., & Joo, H. (2013). Using market basket analysis in management research. Journal of Management, 39(7), 1799–1824. https://journals.sagepub.com/doi/10.1177/0149206312466147

Akmanchi, S., Bird, K. A., & Castleman, B. L. (2023). Human versus machine: Do college advisors outperform a machine-learning algorithm in predicting student enrollment? EdWorkingPaper 23-699, Annenberg Institute at Brown University. https://doi.org/10.26300/gadf-ey53

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9

Barshay, J., & Aslanian, S. (Hosts). (2019, August 6). Under a watchful eye: Colleges are using big data to track students in an effort to boost graduation rates, but it comes at a cost [Audio podcast episode]. The Educate Podcast, apmreports.org. https://www.apmreports.org/story/2019/08/06/college-data-tracking-students-graduation

Bauman, D. (2022, June 3). Congress should scrutinize higher ed's use of predictive analytics, watchdog says. Chronicle of Higher Education. https://www.chronicle.com/article/congress-should-scrutinize-higher-eds-use-of-predictive-analytics-watchdog-says?cid2=gen_login_refresh

Bird, K. A., Castleman, B. L., Denning, J. T., Goodman, J., Lamberton, C., & Rosinger, K. O. (2021). Nudging at scale: Experimental evidence from FAFSA completion campaigns. Journal of Economic Behavior & Organization, 183(March), 105–128. https://www.sciencedirect.com/science/article/abs/pii/S0167268120304819?via%3Dihub

Bird, K. A., Castleman, B. L., Mabel, Z., & Song, Y. (2021). Bringing transparency to predictive analytics: A systematic comparison of predictive modeling methods in higher education. AERA Open, 7. https://journals.sagepub.com/doi/10.1177/23328584211037630

Bird, K. A., Castleman, B. L., & Song, Y. (2023). Are algorithms biased in education? Exploring racial bias in predicting community college student success. EdWorkingPaper 23-717, Annenberg Institute at Brown University. https://www.edworkingpapers.com/ai23-717



Bird, K. A., Castleman, B. L., Song, Y., & Yu, R. (2022). Is Big Data better? LMS data and predictive analytic performance in postsecondary education. EdWorkingPaper 22-647, Annenberg Institute at Brown University. https://www.edworkingpapers.com/ai22-647

Burke, L. (2020, December 13). The death and life of an admissions algorithm. Inside Higher Ed. https://www.insidehighered.com/admissions/article/2020/12/14/u-texas-will-stop-using-controversial-algorithm-evaluate-phd

Carlstrom, A. H., & Miller, M. A. (Eds.). (2013). 2011 NACADA national survey of academic advising. Monograph No. 25, Global Community for Academic Advising. https://nacada.ksu.edu/Resources/Clearinghouse/View-Articles/2011-NACADA-National-Survey.aspx

Ekowo, M., & Palmer, I. (2016). The promise and peril of predictive analytics in higher education: A landscape analysis. Policy paper, New America. https://www.newamerica.org/education-policy/policy-papers/promise-and-peril-predictive-analytics-higher-education/

Engler, A. (2021). Enrollment algorithms are contributing to the crises of higher education. Part of series "AI Governance." The Artificial Intelligence and Emerging Technology (AIET) Initiative, Brookings Institution. https://reachhighscholars.org/Articles/Enrollment%20Algorithms%20-%20Brookings%20Institution%209.14.21.pdf

Greenstein, N., Crider-Phillips, G., Matese, C., & Choo, S-W. (2023). Predicting risk earlier: Machine learning to support success and combat inequity. Academic data analytics briefing document. Office of the Provost, University of Oregon. https://provost.uoregon.edu/analytics/student-success

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57, 542–570. https://onlinelibrary.wiley.com/doi/10.1111/ejed.12533

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. Cornell University. https://doi.org/10.48550/arXiv.1609.05807

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342

Ruffalo Noel Levitz. (2021). 2021 effective practices for student success, retention, and completion report. Ruffalo Noel Levitz. https://www.ruffalonl.com/papers-research-higher-education-fundraising/student-retention-practices-report/

Swaak, T. (2022, August 26). How higher ed is trying to improve student performance with data. PBS News Hour. https://www.pbs.org/newshour/education/how-higher-ed-is-trying-to-improve-student-performance-with-data

U.S. Department of Education. (2023). Artificial intelligence and future of teaching and learning: Insights and recommendations. Office of Educational Technology. https://tech.ed.gov/ai-future-of-teaching-and-learning/

Yu, R., Li, Q., Fischer, C., Doroudi, S., & Xu, D. (2020). Towards accurate and fair prediction of college success: Evaluating different sources of student data. In A. N. Rafferty, J. Whitehill, V. Cavalli-Sforza, & C. Romero (Eds.), Proceedings of the 13th International Conference on Educational Data Mining (pp. 292–301). https://files.eric.ed.gov/fulltext/ED608066.pdf



Centering Student Voice in Developing Learning Analytics and Artificial Intelligence–Embedded Supports

Emily Oakes, Yih Tsao, and Victor Borden

About the Authors


Emily Oakes is the Principal Unizin IT Consultant and University Data Steward for Learning Management and
Learning Analytics Data for Indiana University. Yih Tsao is a project research associate with Indiana University’s
Charting the Future initiative. Victor Borden is professor of higher education and project director in the
Indiana University Center for Postsecondary Research.

Abstract
Accelerating advancements in learning analytics and artificial intelligence (AI) offer unprecedented opportunities for improving educational experiences. Without including students' perspectives, however, there is a potential for these advancements to inadvertently marginalize or harm the very individuals these technologies aim to support. This article underscores the risks associated with sidelining student voices in decision-making processes related to their data usage. By grounding data use within a social justice framework, we advocate for a more equitable and holistic approach. Drawing on previous research as well as insights we have gathered from a student panel, we outline effective methods to integrate student voices. We conclude by emphasizing the long-term implications for the institutional research field, arguing for a shift toward more inclusive and student-centric practices in the realm of learning analytics and AI-embedded supports.

The AIR Professional File, Fall 2023 https://doi.org/10.34315/apf1622023

Article 162 Copyright © 2023, Association for Institutional Research



INTRODUCTION: ADVANCEMENTS IN ANALYTICS AND ARTIFICIAL INTELLIGENCE

Institutional research (IR) professionals have become increasingly central to college and university efforts to improve student success through the use of empirical research and reporting. This tradition goes back to the early 20th century when information technologies and statistical methods were relatively cumbersome, through the information technology explosion of the late 20th century when tools like personal computers, spreadsheets, and statistical packages allowed for more-rapid deployment of research results. The 20-plus years since the beginning of the new millennium have seen another explosion of capacity, with institutional data supplemented by diffuse information systems available from national data systems that can be used for benchmarking and tracking students from their early school years, through college, and into the workforce. Officials at many colleges and universities have had great success leveraging such data systems, as countless sessions at the annual Association for Institutional Research (AIR) Forum have demonstrated.

Recent advances in predictive analytics have opened new possibilities in providing direct support to students—to the instructors who teach them, to the advisors who support them, and to many other new types of professionals that have roles in helping students navigate the increasingly complicated choices available to them within a particular institution and across the higher education landscape. Artificial intelligence (AI) now offers a quantum leap in capabilities that students, faculty, and staff can leverage to support student learning and success. However, there is much peril along with the promise of these technologies: instructors cannot easily tell whether the work submitted by students represents solely their own thinking or if it was aided by AI. It has been demonstrated, too, that AI can contribute to widening equity gaps due to bias inherent in algorithms as well as to equity gaps in access to and use of this powerful technology (Ahn, 2022; Alonso et al., 2020).

While some tremendous successes have already been realized, there are incalculable opportunities still to be discovered. Critical to the discovery of those opportunities is ensuring the involvement of the voice of our most important population: students. An oft-cited achievement in the use of institutional data is Georgia State University's (GSU) predictive analytics service. Since partnering with EAB in 2012, GSU has seen its graduation rates increase by more than 35 percentage points; as of 2023 those rates have been consistent across racial and ethnic lines for 7 years. The institution has increased degrees awarded by 84% and more than doubled the number awarded to low-income and minority students. Powering their alerts are 10 years of data that were reviewed to identify 800 factors that correlate with challenges completing their degrees on time (Calhoun-Brown, 2023). Of equal importance, 42 advisors were hired alongside the service's launch, enabling more advisor–student interactions (Kurzweil & Wu, 2015). GSU has profoundly and positively impacted its students' paths to success, as have many other institutions, aided by the use of advanced information and analytic capacities.

But, as noted, GSU's successes involved more than just leveraging new analytic technologies. The institution was already seeing consistent improvements in its graduation rates before the implementation of its advising alert system in 2012



(GSU, 2021). In their 2015 case study, Kurzweil & Wu (2015) noted that GSU's incredible results are not related to a single solution, but rather to the institution's overall approach to problem solving. Staff members at GSU use the institution's data warehouse to find barriers to graduation and resolve those barriers through a cycle of implementing interventions to remove identified barriers; they assess their effectiveness and scale them up if they find them to be effective. As Kurzweil & Wu (2015, p. 18) note, "It is the process, and not merely its outputs, that other institutions should seek to replicate."

GSU's process included opportunities for centering the student voice. In this article we first describe considerations and risks when student voices are not included in deciding how their data will be used. Next, we discuss ways to ground data use in a social justice framework. Finally, we share perspectives and recommendations on how to support students' successes.

Although applications of AI often operate on a more diverse range of data types and use techniques that are different from predictive analytics, the issues considered in this article apply equally, if not more strongly, given that the user of AI's output is even farther removed from the analysis process than is the user of predictive analytics.

Considerations and Risks When Student Voices Are Not Included

Understanding that there are risks when students, especially students from marginalized populations, are not involved in uses of their data is critical to avoiding those risks. Fortunately, many lessons have already been learned regarding a lack of participation in data use generally that institutional researchers can consider in the context of their work as they move forward in deploying AI as part of their information use strategies.

First, concerns have been raised among scholars and practitioners working toward data justice that data reflect social ideas of the default as implicitly defined by those with power in a particular context: White, heterosexual, cisgender, abled, neurotypical, financially comfortable, and so on (Benjamin, 2019; D'Ignazio & Klein, 2020). When data are captured, structured, interpreted, and applied on the assumption of a particular default, those who fall outside of that category are less likely to benefit and more likely to possibly experience harm.

Consider AI researcher, artist, and advocate Joy Buolamwini's now-famous experience discovering bias in facial analysis software (Kantayya, 2020). While interacting with the software, Buolamwini found that the software was unable to identify her darker-skinned face (a label that itself implies a default), despite successfully capturing her lighter-skinned colleagues' faces. The software was similarly able to identify the features of a plain white mask she placed over her own face (Kantayya, 2020). Buolamwini and computer scientist Timnit Gebru had previously found that multiple data sets used to train facial recognition software had included majority lighter-skinned subjects, causing the software to frequently misclassify darker-skinned faces, with the greatest number of errors occurring when the software attempted to analyze the faces of darker-skinned women (Buolamwini & Gebru, 2018). These issues of bias and unfairness occur with generative AI, such as Chat Generative Pre-trained Transformer (ChatGPT), as well, and thus require training users on diverse data, careful monitoring, and other bias mitigation tactics (Kasneci et al., 2023). Mitigation strategies should be defined in use policies informed by impacted populations



(i.e., students of various identities) to surface issues that others outside those populations may not be aware of. As institutions invest in data-powered identity-based outreach, AI video assessment and proctoring, AI-assisted admissions, or staff interview software, and so on, their similar investments in mitigation strategies will only grow in importance.

Early alert systems are a useful tool for demonstrating the practical risks when services do not incorporate student-guided use policy. Early alert systems are frequently implemented in higher education in an effort to increase retention (Parnell et al., 2018). These systems use data about students that are based on some predefined metrics to identify when students are at greater risk of incurring negative academic consequences, and send an alert to instructors or academic support staff so that they may intervene as appropriate (Hanover Research, 2014). Interventions might include offering tutoring, having a student meet with an advisor, assigning a mentor, or referring a student to a relevant social service (Ekowo & Palmer, 2017).

Numerous risks arise when a diversity of student voices has not been considered in the development, deployment, and operation of early alert systems. First, the integration of multiple data sets means that a risk label can be made more broadly visible, which creates opportunities for riskiness to be assumed in contexts unrelated to the one that the risk was measured against in the first place (Benjamin, 2019; Prinsloo & Slade, 2016). This is additionally problematic given that student identities and circumstances frequently change: while data about students often tend to be rigid, the realities of their lives are not (Slade & Prinsloo, 2013). Without an opportunity to dispute or otherwise provide narrative context alongside their data, circumstances perceived as negative and permanently recorded by an institution official can follow students throughout their academic careers.

In their review of relevant literature, Braunack-Mayer et al. (2020) found that students have expressed concern across multiple studies about being labeled "at risk"; these authors note that being categorized in certain ways could bias their instructors such that they exclude the categorized students from future academic opportunities. In this way, the label "risky" becomes a quality inherent to a student, detached from its use as a descriptor applied to those who are being failed by a specific process or system. Nopper (2019, p. 170) refers to the "digital profile assessed to make inferences regarding character in terms of credibility, reliability, industriousness, responsibility, morality, and relationship choices" as "digital character" that is used to paternalistically "help" individuals, often without their knowledge or consent. (See also Braunack-Mayer et al., 2020.) This focus on applying interventions based on a student's digital character situates them as data objects or passive recipients of services rather than as autonomous agents (Kruse & Pongsajapan, 2012; Prinsloo & Slade, 2016; Roberts et al., 2016; Rubel & Jones, 2014). Given that groups of students have also expressed such concerns about threats to their autonomy by these systems themselves, it is critical that they are provided mechanisms for having their voices considered (Roberts et al., 2016). This example is not intended to imply that all early alert systems are problematic—there is evidence that students do consider them beneficial (Atif et al., 2015; Roberts et al., 2016). Rather, the example is used here to illustrate the potential issues that may arise if development of such systems is not aligned with student-informed policies for use.
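The early alert mechanics described above, predefined metrics that trigger a flag routed to instructors or support staff, can be sketched in a few lines. The thresholds and field names here are hypothetical, not drawn from any particular system, and, per the discussion above, any real deployment would need its rules and uses shaped with student input.

```python
# Minimal sketch of an early alert rule: predefined metrics in,
# alert out. Thresholds and field names are hypothetical, not
# drawn from any particular vendor's system.

def early_alerts(students, max_absences=3, min_avg_score=70):
    """Return (student_id, reasons) pairs for students who trip
    any predefined metric, for routing to an instructor/advisor."""
    alerts = []
    for s in students:
        reasons = []
        if s["absences"] > max_absences:
            reasons.append("absences")
        if s["avg_score"] < min_avg_score:
            reasons.append("low average score")
        if reasons:
            alerts.append((s["id"], reasons))
    return alerts

roster = [
    {"id": "a1", "absences": 1, "avg_score": 88},
    {"id": "a2", "absences": 5, "avg_score": 74},
    {"id": "a3", "absences": 2, "avg_score": 61},
]
print(early_alerts(roster))
# [('a2', ['absences']), ('a3', ['low average score'])]
```

The interesting design questions sit outside this function: who sees the alert, how it is worded, and what supports follow.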



Grounding Data Use in a Social Justice Framework

To productively address risks like those described, we suggest that higher education officials align their efforts to grow data capacities and use AI-infused solutions with their diversity, equity, and inclusion priorities. This is not a novel approach to data use: the social impacts of mass data use have received increasing attention for more than a decade. In 2012, Facebook gained significant media attention around its nonconsensual research on and manipulation of users' moods; the use of its data by political consulting firm Cambridge Analytica in 2018 helped raise public consciousness about mass data's capabilities and misuses (Meyer, 2014; Zialcita, 2019). Zuboff (2019) described how surveillance capitalism—the widespread collection and commodification of personal data by corporations—poses significant threats to society, privacy, and autonomy. Relatedly, O'Neil (2016) laid out numerous examples of the harm Big Data algorithms can cause across contexts, including their use in college rankings and teacher evaluations, and Wachter-Boettcher (2017) discussed the lack of diversity and inclusivity in the technology industry, leading to sexist, inaccessible, and otherwise biased systems. Additionally, Noble (2018) detailed the ways that search engines reinforce racism, sexism, and other forms of oppression; Benjamin (2019) broadened Noble's work, discussing additional applications of data that cause harm to vulnerable populations, including in AI systems.

Applications of data and the calculations we apply to data (i.e., algorithms) have been investigated from a variety of perspectives and within numerous contexts. Out of these investigations has developed the concept of data justice—a framework for engaging with the ways datafication and society intersect with an explicit social justice focus. While there are diverse approaches to and definitions of data justice, there are some themes, including the recommendation to meaningfully collaborate with the individuals whose data will be captured and used during the conception, development, and implementation of data-based systems and the policies governing them (Dencik et al., 2019; Dencik & Sanchez-Monedero, 2022). In academia, these individuals are often our students.

In the remainder of this article, we consider the implications of using a social justice framework for advancing the use of generative AI and other Big Data applications within higher education institutions. This framework derives from a focus on minoritized populations, such as Indigenous peoples and other racial/ethnic minorities, who are often underrepresented within postsecondary institutions. We believe, however, that the ideas pertain more generally to students who, although often the largest group of constituents of a college or university, are not consulted about the use of their personal data within such applications.

PERSPECTIVES AND RECOMMENDATIONS

Numerous communities have shared their perspectives on and recommendations for data use as it relates to their unique experiences. While these communities are not monolithic, the concerns they raise reflect themes that might otherwise go unidentified by those who develop and deploy AI and Big Data applications (D'Ignazio & Klein, 2020).

One such group advocating for data justice is the Native Nations Institute (NNI). The NNI defines a Native nation's data as "any facts, knowledge, or



information about the nation and about its citizens, lands, resources, programs, and communities. Information ranging from demographic profiles to educational attainment rates, maps of sacred lands, songs, and social media activities are all data" (Rainie et al., 2017, p. 1). The NNI aims to promote Indigenous data sovereignty using the CARE Principles for Indigenous Data Governance that were developed by the Research Data Alliance's International Indigenous Data Sovereignty Interest Group in 2018 and published in 2020 (Carroll et al., 2020). The CARE Principles and their subcomponents are summarized in Table 1.

Table 1. The CARE Principles for Indigenous Data Governance

Collective Benefit
• For inclusive development and innovation
• For improved governance and citizen engagement
• For equitable outcomes

Authority to Control
• Recognizing rights and interests
• Data for governance
• Governance of data

Responsibility
• For positive relationships
• For expanding capability and capacity
• For Indigenous languages and worldviews

Ethics
• For minimizing harm and maximizing benefits
• For justice
• For future use

Source: Adapted from Carroll et al., 2020, Figure 2.

The Responsibility principle's first subsection, "For positive relationships," identifies that "Indigenous data use is unviable unless linked to relationships built on respect, reciprocity, trust, and mutual understanding, as defined by the Indigenous Peoples to whom those data relate" (Carroll et al., 2022, p. 4). The following subsections, "For Expanding Capability and Capacity" and "For Indigenous Languages and Worldviews," require efforts to increase data literacy and to ground data in the world views and the lived experiences of Indigenous peoples, respectively. Each of these subsections implies some form of collaboration between institution officials using Indigenous students' data and the students themselves: to create mutual understanding, to increase data literacy between both parties, and to enable Indigenous students to (consensually) share their experiences.

When considering the use of early alert systems, it is important to note that the CARE Principles for Indigenous Data Governance specify that ethical data not portray Indigenous peoples in terms of



deficit, and that benefits and harms should be evaluated from the perspective of the Indigenous peoples the data relate to (Carroll et al., 2020). This guidance provides a model for data use policy development that may be applied to other student populations regardless of identity; rather than administrators determining what may harm or benefit communities, administrators can consult with those communities to provide their contextualized view of potential risks and benefits, and to describe assets to highlight with students.

Although designed with specific focus on a highly marginalized population, the principles can be applied more generally to incorporating student voice into the formulation of machine learning (ML), AI, and other Big Data applications and resources. However, these principles also remind us that we need to pay special attention to the voices of marginalized student populations, such as racially minoritized students and other subgroups that are not well represented by the dominant student culture.

Other issues related to data capture have been identified as well. Ruberg & Ruelos (2020) note that it is difficult to accurately represent gender and sexuality using traditional demographic capture-and-reporting techniques. Those authors provide multiple recommendations based on their findings: (1) When capturing gender and sexuality, multiple answer possibilities should be available. (2) Gender and sexuality identities may change, and all reported identities are valid unless the individual states otherwise. (3) Collaboration with relevant communities is critical for understanding and accurately capturing their identities.

Finally, marginalized groups are often centered and surveilled by both punitive and purportedly supportive systems, which promotes feelings of threatening hypervisibility (Benjamin, 2019). Asher et al.'s (2022) survey of student perspectives on library analytics found that students in minority racial/ethnic groups and those of lower socioeconomic status were more concerned than the overall student population about the privacy of their personal data, thus supporting this perspective in the academic context (Asher et al., 2022). Collaborating with students, especially those who experience heightened surveillance, may help to shift support methods such that students experience them in a less threatening manner. To this point, GSU's predictive advising service provides another example: risk factors are shared with students as well as with advisors, promoting transparent conversations; and advisors are thoroughly trained on how to use the service as well as how to have discussions about its outputs with students (Bailey et al., 2019).

Methods for Including Student Voices

There are a variety of potential methods for involving student perspectives when developing access and use policies. West et al. (2020) note that these methods could include research into students' descriptions of their own needs, concerns, and ideas for how learning analytics might benefit them, as well as the creation of users' stories and principles against which data-based tools may be built. Jones et al. (2019, 2020, 2023) demonstrate adaptable methods for gathering student feedback in their studies by collecting student perspectives in three phases across 3 years: first, they conduct interviews with undergraduate students across eight U.S. institutions, then they send a quantitative survey to random samples of students across the same eight institutions, and finally they hold virtual focus groups centered on discussions of data use scenarios



rooted in real-life practice. Data for Black Lives' report, Data Capitalism + Algorithmic Racism (Milner & Traub, 2021), suggests a few methods for supporting collective data practice that can be adjusted for the higher education context, such as Collington's (2019) proposed "system including a digital platform for debating and deciding priorities for use of public data" (Milner & Traub, 2021, p. 26).

An even more robust strategy is provided in A Toolkit for Centering Racial Equity Throughout Data Integration from Actionable Intelligence for Social Policy, which includes guidance for involving community voices at every stage of design, use, and implementation of data-infused practices (Hawn Nelson et al., 2020). While the Toolkit was developed to support those using data for civic purposes, many of its recommendations apply to higher education data uses and align with calls from the learning analytics field to include student voices at all levels of data use, from design development through membership in oversight committees (Braunack-Mayer et al., 2020). Among other recommendations, the Toolkit suggests involving diverse community members in discussions about algorithms and their purposes early in the design stage, inviting people with multiple perspectives to provide potential interpretations of data that will be used.

Using a Student Panel Methodology to Center Student Voice

A method that incorporated both surveys and focus groups was devised as part of a university-wide student success initiative within the authors' institution. Fifteen students were initially recruited from across the institution's seven campuses, and most of the same students attended each panel, which helped to establish an environment of open sharing. For the panel exploring student views on the use of learning analytics and Big Data, the student panelists first reviewed a set of materials related to the use of learning analytics at several different universities, as well as among a community of learning management system users. Students then completed a survey including questions about their awareness of the types of data collected, about their privacy and agency regarding learning data, about issues related to instructors and advisors who have access to and use the data, as well as questions about the benefits and risks with the use of these data. Student responses were split somewhat evenly on the awareness of the types of data that were being collected, but the majority (70%) of the students disagreed with the statement that they were adequately informed about how their data were being used. Interestingly, while more than 80% of the students agreed that there are benefits to making these data available to their instructors, 40% agreed with the reverse statement that such awareness may also negatively impact their motivation and engagement in a course.

The panel discussion focused on four questions for which the students used Google's Jamboard app to record and organize their ideas into themes. The four questions asked were the following:

1. What were your reactions to learning about the kind of learning data that your instructors can access?

2. What were your reactions to advisors' use of Early Alert Systems?

3. How do you feel about your learning data being used to identify that you are struggling in a course?

4. What would you like your instructor to communicate to you about learning data use in your courses?

After completing the analysis, the students were



split into two groups to formulate a plan or list of recommendations regarding safeguards/procedures that should be in place to ensure that inequities or biases are not introduced in the use of learning data in a course. Table 2 shows an organization of the thematic responses to this task from the student panelists.

Table 2. Thematic Responses from the Breakout Room Activity During the Panel

Possible forms of biases in current practices
• Instructor shows favoritism for students struggling less.
• Not all struggling students receive the appropriate outreach.
• There are biases regarding students' socioeconomic status backgrounds.
• Student backgrounds (e.g., they were homeschooled, are first-generation students) lead to different knowledge or resources used to reach out to students with invisible needs.

Theme 1: Transparency/Open Communication
• Student consent should be collected before the data are collected and shared with instructors, advisors, or any other parties.
• The types of data collected or shared should be communicated clearly to both students and instructors.
• Researchers should explain to the students how the data are being used or will be used.
• Students should have access to their own learning data.
• All students should have equal access to resources and support.

Theme 2: Training
• Instructors, advisors, and anyone who may be in close contact with any student data should receive bias and diversity training.
• Instructors and advisors should be trained in how to be sensitive to when and especially how to reach out to struggling students with more care and attention to their words.
• Instructors should be trained in how to initiate contact with students.

Theme 3: Human Oversight
• There should be a separate office that analyzes student data before the data are used by instructors or advisors for reaching out to students, or by students themselves.
• There should be more communications or surveying of students to better understand their perspectives and opinions.
• Teachers and administrators or advisors should be allowed to review their decisions based on their bias trainings.
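Theme 1's first recommendation (consent before data are collected or shared) describes a mechanism as much as a policy. As a loose illustration only, not a description of any system mentioned in this article and with all class, function, and field names hypothetical, a data-sharing check might consult an explicit consent record before releasing learning data to a given audience:

```python
# Illustrative sketch: gate access to learning data on recorded student
# consent, in the spirit of the panel's Theme 1. Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    # Audiences the student has consented to share learning data with,
    # e.g., {"instructor", "advisor"}.
    audiences: set[str] = field(default_factory=set)


def can_share(consents: dict[str, ConsentRecord], student_id: str, audience: str) -> bool:
    """Return True only if this student explicitly consented to this audience."""
    record = consents.get(student_id)
    return record is not None and audience in record.audiences


consents = {"s001": ConsentRecord(audiences={"instructor"})}
print(can_share(consents, "s001", "instructor"))  # True: consent on file
print(can_share(consents, "s001", "advisor"))     # False: no consent recorded
print(can_share(consents, "s999", "advisor"))     # False: unknown student
```

The default is deliberately deny-by-default: absence of a record is treated the same as refusal, which matches the panel's insistence that consent be collected before, not after, data are shared.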



The survey responses and panel discussions make clear that student data use is a sensitive topic that requires more attention to its ethics and to the treatment of individuals. When using AI and Big Data in higher education, we must be more diligent in protecting the humans behind the numbers. Students may feel uncomfortable when they become aware of the data that are being collected about them; that sense of discomfort can escalate when the data are shared outside of the context where they are generated, such as in-class data being shared with an academic advisor. Finally, the panel discussion revealed a concern about how students are treated when the data are used: Will they be treated fairly? Is outreach done with sensitivity and care? And how can marginalization and biases be avoided in terms of access to resources and support?

This student panel methodology serves to center student voice in IR and to inform policies. To accurately represent students' voices, however, it is essential to reflect the diversity of the student body to avoid bias. For example, while this student panel was recruited from various campuses of the same institution, more than half of the student panelists were from the main campus. Even though this accurately reflects the representation of students across the university, it skews the possible viewpoints and practices experienced by the students. Similarly, panelists' classification (year), gender, race/ethnicity, socioeconomic status, and other demographics should also be taken into consideration when recruiting to prevent representation disparity in data that could lead to unjust applications, such as the biased facial analysis software audited by Buolamwini, as mentioned before (Buolamwini & Gebru, 2018).

THE IMPORTANCE OF BRINGING IN STUDENT VOICES AND IMPLICATIONS FOR THE INSTITUTIONAL RESEARCH FIELD

Actionable Intelligence for Social Policy's Toolkit (Hawn Nelson et al., 2020), discussed earlier, recommends questioning how data use can help communities (i.e., students, in our context) to interrogate systems, as opposed to using data only to identify how to treat those communities. To align with effective and ethical practice, we recommend that institutional researchers intentionally and continually frame their work as student-centric as opposed to intervention-centric, and that they direct their actions in response to collaborations with students primarily toward the systems the students interact with instead of the students themselves (Hawn Nelson et al., 2020; Kruse & Pongsajapan, 2012; Slade & Prinsloo, 2013). Actionable Intelligence for Social Policy's Toolkit (Hawn Nelson et al., 2020) includes activities that may be adapted for this purpose. Practical steps for operationalizing racial equity in data use are included, as well as numerous real-world examples of the guidance in practice. Once again, GSU's approach to data use in support of student success provides an example of this practice in action: by asking first whether the institution is the problem (i.e., by interrogating the institution's systems), GSU has been able to find and resolve significant barriers facing students (Kurzweil & Wu, 2015; Zipper, 2022). It is crucial to involve student voices: in addition to data, students can provide context for why something was a barrier as well as advice for how institutions can break down barriers.
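The representativeness concern raised in this section (a panel drawn mostly from one campus) can be checked quantitatively before weighing panel feedback. The sketch below is illustrative only: the campus names and counts are hypothetical, not the institution's actual figures, and it computes a plain chi-square goodness-of-fit statistic by hand:

```python
# Compare a student panel's composition with the student population to flag
# representation disparity. All campus names and numbers are hypothetical.

# Share of enrolled students per campus (hypothetical population profile)
population_share = {"Main": 0.55, "North": 0.20, "City": 0.15, "Online": 0.10}

# Observed panel membership (hypothetical 15-person panel)
panel_counts = {"Main": 11, "North": 2, "City": 1, "Online": 1}

n = sum(panel_counts.values())

# Chi-square goodness-of-fit: sum of (observed - expected)^2 / expected,
# where expected = population share times panel size.
chi_sq = sum(
    (panel_counts[c] - population_share[c] * n) ** 2 / (population_share[c] * n)
    for c in population_share
)

for campus in population_share:
    observed = panel_counts[campus] / n
    expected = population_share[campus]
    print(f"{campus:6s} panel {observed:6.1%} vs. population {expected:6.1%}")
print(f"chi-square = {chi_sq:.2f} (df = {len(population_share) - 1})")
```

With real data, the statistic would be compared against a critical value (or computed via a library routine such as scipy.stats.chisquare), and, as the text argues, the same check should be repeated for classification (year), gender, race/ethnicity, and socioeconomic status.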



It is critical that student voices are actively centered when developing data access and use policies. When we authentically include student voices in the development of data policy, we can uncover novel opportunities that will be situated in the contexts of our most important constituents. We can learn what they value and what their challenges are from their own perspectives instead of from mediated, decontextualized data sets. Including students in data policy and system development increases trust and fosters development of systems and initiatives that support success as students define it. In this article, we have shared one approach used for our context and numerous other approaches that could be adapted, and we invite institutional researchers to consider how they may take advantage of these methods for their contexts as well.

REFERENCES

Ahn, J. (2022). Exploring the negative and gap-widening effects of EdTech on young children's learning achievement: Evidence from a longitudinal dataset of children in American K–3 classrooms. International Journal of Environmental Research and Public Health, 19(9), 5430. https://www.mdpi.com/1660-4601/19/9/5430

Alonso, C., Berg, A., Kothari, S., Papageorgiou, C., & Rehman, S. (2022). Will the AI revolution cause a great divergence? Journal of Monetary Economics, 127, 18–37. https://www.sciencedirect.com/science/article/abs/pii/S0304393222000162

Asher, A. D., Briney, K. A., Jones, K. M., Regalado, M., Perry, M. R., Goben, A., Smale, M. A., & Salo, D. (2022). Questions of trust: A survey of student expectations and perspectives on library learning analytics. Library Quarterly, 92(2), 151–171. https://www.journals.uchicago.edu/doi/abs/10.1086/718605

Atif, A., Richards, D., & Bilgin, A. (2015). Student preferences and attitudes to the use of early alerts. In T. Bandyopadhyay, D. Beyene, & S. Negash (Eds.), 2015 Americas Conference on Information Systems (pp. 1–14). Americas Conference on Information Systems. https://researchers.mq.edu.au/en/publications/student-preferences-and-attitudes-to-the-use-of-early-alerts

Bailey, A., Vaduganathan, N., Henry, T., Leverdiere, R., & Jacobson, M. (2019). Turning more tassels: How colleges and universities are improving student and institutional performance with better advising. Boston Consulting Group and NASPA Student Affairs Administrators in Higher Education. https://web-assets.bcg.com/img-src/BCG-Turning-More-Tassels-Jan-2019_tcm9-212400.pdf

Benjamin, R. (2019). Assessing risk, automating racism: A health care algorithm reflects underlying racial bias in society. Science, 366(6464), 421–422. https://www.science.org/doi/10.1126/science.aaz3873

Braunack-Mayer, A. J., Street, J. M., Tooher, R., Feng, X., & Scharling-Gamba, K. (2020). Student and staff perspectives on the use of Big Data in the tertiary education sector: A scoping review and reflection on the ethical issues. Review of Educational Research, 90(6), 788–823. https://journals.sagepub.com/doi/abs/10.3102/0034654320960213?journalCode=rera



Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research, 81, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html

Calhoun-Brown, A. (2023, January 9). How data and technology can improve advising and equity. Chronicle of Higher Education. https://www.chronicle.com/article/how-data-and-technology-can-improve-advising-and-equity

Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., Holbrook, J., Lovett, R., Materechera, S., Parsons, M., Raseroka, K., Rodriguez-Lonebear, D., Rowe, R., Sara, R., Walker, J. D., Anderson, J., & Hudson, M. (2020). The CARE principles for Indigenous data governance. Data Science Journal, 19(1), 43. https://doi.org/10.5334/dsj-2020-043

Carroll, S. R., Garba, I., Plevel, R., Small-Rodriguez, D., Hiratsuka, V. Y., Hudson, M., & Garrison, N. A. (2022). Using Indigenous standards to implement the CARE principles: Setting expectations through tribal research codes. Frontiers in Genetics, 13, 823309. https://www.frontiersin.org/articles/10.3389/fgene.2022.823309/full

Collington, R. (2019). Digital public assets: Rethinking value and ownership of public sector data in the platform age. Common Wealth. https://www.common-wealth.org/publications/digital-public-assets-rethinking-value-access-and-control-of-public-sector-data-in-the-platform-age

D'Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press. https://mitpress.mit.edu/9780262547185/data-feminism/

Dencik, L., & Sanchez-Monedero, J. (2022). Data justice. Internet Policy Review, 11(1). https://policyreview.info/articles/analysis/data-justice

Dencik, L., Hintz, A., Redden, J., & Treré, E. (2019). Exploring data justice: Conceptions, applications and directions. Information, Communication & Society, 22(7), 873–881. https://www.tandfonline.com/doi/pdf/10.1080/1369118X.2019.1606268

Ekowo, M., & Palmer, I. (2017). Predictive analytics in higher education. New America. https://kresge.org/wp-content/uploads/2020/06/predictive-analytics-guidingprinciples.pdf

Georgia State University (GSU). (2021). Strategic plan goal 1: Student success. Retrieved June 21, 2023, from https://strategic.gsu.edu/accomplishments/goal-1-accomplishments/

Hanover Research. (2014, November). Early alert systems in higher education. https://www.hanoverresearch.com/wp-content/uploads/2017/08/Early-Alert-Systems-in-Higher-Education.pdf

Hawn Nelson, A., Jenkins, D., Zanti, S., Katz, M., Berkowitz, E., Burnett, T., & Culhane, D. (2020). A toolkit for centering racial equity throughout data integration. Actionable Intelligence for Social Policy. https://aisp.upenn.edu/resource-article/a-toolkit-for-centering-racial-equity-throughout-data-integration/



Jones, K. M. L., Asher, A., Goben, A., Perry, M. R., Salo, D., Briney, K. A., & Robertshaw, M. B. (2020). "We're being tracked at all times": Student perspectives of their privacy in relation to learning analytics in higher education. Journal of the Association for Information Science and Technology, 71(9), 1044–1059. https://doi.org/10.1002/asi.24358

Jones, K. M. L., Goben, A., Perry, M. R., Regalado, M., Salo, D., Asher, A. D., Smale, M. A., & Briney, K. A. (2023). Transparency and consent: Student perspectives on educational data analytics scenarios. Portal: Libraries & the Academy, 23(3), 485–515. https://muse.jhu.edu/article/901565

Jones, K. M. L., Perry, M. R., Goben, A., Asher, A., Briney, K. A., Robertshaw, M. B., & Salo, D. (2019). In their own words: Student perspectives on privacy and library participation in learning analytics initiatives [Paper presentation]. ACRL 19th National Conference, "Recasting the Narrative," Cleveland, OH. https://alair.ala.org/handle/11213/17656

Kantayya, S. (Director). (2020). Coded bias [Documentary]. 7th Empire Media. https://www.youtube.com/watch?v=jZl55PsfZJQ (trailer).

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., . . . Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103(April). https://www.sciencedirect.com/science/article/pii/S1041608023000195?via%3Dihub

Kruse, A., & Pongsajapan, R. (2012). Student-centered learning analytics. CNDLS Thought Papers, 1(9), 98–112. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=692c8661e4c13efe3559cce96694689ce72f307e

Kurzweil, M., & Wu, D. (2015, April 23). Building a pathway to student success at Georgia State University. Ithaka S+R. https://sr.ithaka.org/publications/building-a-pathway-to-student-success-at-georgia-state-university/

Meyer, M. N. (2014, June 30). Everything you need to know about Facebook's controversial emotion experiment. Wired. https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/

Milner, Y., & Traub, A. (2021). Data capitalism + algorithmic racism. Data for Black Lives. https://datacapitalism.d4bl.org/documents/Demos_Data_Capitalism_Final.pdf

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://psycnet.apa.org/record/2018-08016-000

Nopper, T. K. (2019). Digital character in "The scored society." In R. Benjamin (Ed.), Captivating technology: Race, carceral technoscience, and liberatory imagination in everyday life (pp. 170–187). Duke University Press. https://www.degruyter.com/document/doi/10.1515/9781478004493-010/html?lang=en

O'Neil, C. (2016). Weapons of math destruction: How Big Data increases inequality and threatens democracy. Crown. https://archive.org/details/weapons-of-math-destruction_202209



Parnell, A., Jones, D., Wesaw, A., & Brooks, D. C. (2018). Institutions' use of data and analytics for student success. EDUCAUSE Center for Analysis and Research. https://eric.ed.gov/?id=ED588396

Prinsloo, P., & Slade, S. (2016). Student vulnerability, agency, and learning analytics: An exploration. Journal of Learning Analytics, 3(1), 159–182. https://learning-analytics.info/index.php/JLA/article/view/4447

Rainie, S. C., Rodriguez-Lonebear, D., & Martinez, A. (2017). Policy brief (ver. 2): Data governance for native nation rebuilding. Native Nations Institute. https://nnigovernance.arizona.edu/policy-brief-data-governance-native-nation-rebuilding

Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: The Fitbit version of the learning world. Frontiers in Psychology, 7. https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01959/full

Rubel, A., & Jones, K. M. (2016). Student privacy in learning analytics: An information ethics perspective. The Information Society, 32(2), 143–159. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2533704

Ruberg, B., & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720933286

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366

Wachter-Boettcher, S. (2017). Technically wrong: Sexist apps, biased algorithms, and other threats of toxic tech. W. W. Norton & Company. https://dl.acm.org/doi/10.5555/3307044

West, D., Luzeckyj, A., Toohey, D., Vanderlelie, J., & Searle, B. (2020). Do academics and university administrators really know better? The ethics of positioning student perspectives in learning analytics. Australasian Journal of Educational Technology, 36(2), 60–70. https://doi.org/10.14742/ajet.4653

Zialcita, P. (2019, October 30). Facebook pays $643,000 fine for role in Cambridge Analytica scandal. NPR News. https://www.npr.org/2019/10/30/774749376/facebook-pays-643-000-fine-for-role-in-cambridge-analytica-scandal

Zipper, T. (Host). (2022, September). Are we the problem? with Dr. Tim Renick (Season 2, Episode 3) [Audio podcast episode]. An Educated Guest. Wiley University Services. https://universityservices.wiley.com/podcast-tim-renick-are-we-the-problem/

Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29. https://journals.sagepub.com/doi/full/10.1177/1095796018819461



Advising at Scale: Automated
Guidance of the Role Players
Influencing Student Success

Randhir Rawatlal and Rubby Dhunpath

About the Authors


Randhir Rawatlal is associate professor of chemical engineering at the University of KwaZulu-Natal. Rubby
Dhunpath is associate professor and director of the University Teaching and Learning Office at the University
of KwaZulu-Natal.

Abstract
Although student advising is known to improve student success, its application is often inadequate in
institutions that are resource constrained. Given recent advances in large language models (LLMs) such as
Chat Generative Pre-trained Transformer (ChatGPT), automated approaches such as the AutoScholar Advisor
system afford viable alternatives to conventional modes of advising at scale. This article focuses on the
AutoScholar Advisor system, a system that continuously analyzes data using modern methods from the fields
of artificial intelligence (AI), data science, and statistics. The system connects to institutional records, evaluates
a student’s progression, and generates advice accordingly. In addition to serving large numbers of students,
the term “advising at scale” refers to the various role players: the executives (whole-institution level), academic
program managers (faculty and discipline levels), student advisors (faculty level), lecturers (class level), and,
of course, the students (student level). The form of advising may also evolve to include gamification elements
such as points, badges, and leaderboards to promote student activity levels. Case studies for the integration
with academic study content in the form of learning pathways are presented. We therefore conclude with the
proposition that the optimal approach to advising is a hybrid between human intervention and automation,
where the automation augments human judgment.

The AIR Professional File, Fall 2023 https://doi.org/10.34315/apf1632023

Article 163 Copyright © 2023, Association for Institutional Research



INTRODUCTION
Traditional academic advising in a one-advisor-to-one-student approach is resource intensive and difficult to sustain, prompting institution officials to develop alternative student advising models (Thiry & Laursen, 2011). This approach uses analytics to sort students by their likelihood to drop out or stop out, which allows advisors to prioritize their time in favor of students that face rising risk. Networks of advisors, faculty, and other student support leaders form teams that can effectively address the complex needs of students. This approach allows for efficient use of resources and a focus on individualized academic advising (EAB, 2023).

Although institution officials continue to offer ongoing support programs such as orientations, tutoring, and learning centers (Bornschlegl et al., 2020), such resources typically require students to actively seek out those programs (Fong, 2021). Many universities struggle to develop and maintain effective advising services that promote student satisfaction and increase student retention (Anderson et al., 2014). In response, there has been an ascendancy of automated advising approaches to mediate the challenges of diminishing resources and the perceived lack of value in the conventional approaches for advising (Atalla et al., 2023; Rawatlal, 2022).

In the South African context, academic advising provides structured support by an institutional advisor to a student. The resources necessary to provide such a facility, however, may limit the number of students who can receive such advice. Kuhn (2008, chap. 1) describes various models of academic advising. The nature of the advising could be to inform, suggest, counsel, discipline, coach, mentor, and even teach. The practice helps students align their various goals through a continuous developmental process to promote their own success. The act of academic advising lies on an advising–teaching and an advising–counseling spectrum.

Evidence of the positive role of student advisors in student success has been mounting, warranting institutions to formalize and professionalize academic advising. In South Africa, advising is being professionalized through the coordinated efforts of ELETSA, which translates to the word “advising” in Sesotho, one of South Africa’s 11 official languages. ELETSA is a South African nonprofit organization that seeks to provide leadership in cultivating collaborations across institutions in the area of academic advising. The association holds allied membership status through the Global Community for Academic Advising (NACADA), which is based in Kansas.



ADVISING AT SCALE
Although wide-scale student advising is thought to significantly improve graduation rates, traditional forms of advising are relatively resource intensive. While automation and web-based systems are obvious candidates to scale the advising, such systems must offer a high enough level of customization to be effective in the context of a diverse student body. In particular, the operation of such systems should acknowledge a constantly iterative development approach as the needs change in response to the effectiveness or lack thereof of the approaches of the previous iteration.

Advising as a High-Impact Practice
As the practice of academic advising intensifies across institutions, it is being portrayed as a social justice imperative for higher education, and potentially as a high-impact practice (Keup & Young, 2021). However, advising large numbers of students requires substantial investment that challenges under-resourced institutions (Assiri et al., 2020). One-on-one advising approaches alone are therefore neither feasible nor effective, and motivate the inception of automated systems that might minimize incorrect advice and the load on academic advisors (Assiri et al., 2020).

Evidence is now also emerging on the nonacademic or quasi-academic benefits of advising (Haley, 2016). Using expectancy violations theory as a lens, Anderson et al. (2014) argue that student satisfaction with advising is linked to alignment between student expectations of the advising process and perceived advisor behaviors. In some instances, student queries are merely information seeking, such as when they ask for schedules and timetables, financial aid sources, and other pragmatic needs. This is evidenced in the application of chatbots to automate this brokering and to ensure more-effective use of a human advisor’s time.

Automated Advising
Recent developments in AI have resulted in the emergence of chatbots in higher education to automate specific student queries for information brokering, thus freeing human advisors to focus on more-complex tasks (Meotti, 2023). AdmitHub, an AI developer, has partnered with more than 100 universities to improve student access and retention by using chatbots (Page & Gehlbach, 2017). Bots of this type use natural language processing to support student success (Chen et al., 2023).

At Georgia State University (GSU), a chatbot helps students with preenrollment processes such as navigating financial aid (Nurshatayeva et al., 2021); GSU’s chatbot has led to significant increases in retention and graduation rates. The chatbot’s effectiveness continues after enrollment: research indicates that students who used GSU’s chatbot were 3% more likely to re-enroll, while having higher rates of FAFSA filing and registration (Nurshatayeva et al., 2021).

Automated systems can aggregate and process large amounts of data more efficiently than humans. This can lead to more-informed advice, since the system can consider various factors and possibilities (Shift, 2022). In recognition of the various roles that support student success, the AutoScholar Advisor system uses AI to generate advice to the various levels of seniority in the higher education institution to fully support and integrate the various interventions that can lead to increased student success.



Evolutions in Student Advising
When generating automated advice to students, we acknowledge that human motivation is a complex field that requires high variance in responses. The factors that prompt action may differ from context to context or from person to person. When generating advice through the AutoScholar Advisor system, it was therefore necessary to evaluate the advice rendered in a variety of contexts to serve as large a group as possible.

A screenshot from an early instance of student advising is shown in Figure 1.

Note: Student numbers and names have been hashed.



In this case, the system calculates the assessment statistics in a class and determines which students are significantly underperforming by computing the number of standard deviations between the mean and each student’s result. Based on this value, an apparently personalized message is generated to the student; each message includes some specific data about that student’s current and potential future performance. In other words, from a single advising script, which is itself prompted by the student’s performance metrics, the system can generate a message to each student that appears to be customized to that student’s profile. A default advising script is included that may be further customized by a lecturer or student advisor. The system advises both high-performing students and average-performing students to continue improving and suggests engagement with learning resources available elsewhere in AutoScholar. In the case of underperforming or at-risk students, it further suggests engaging with student support. This form of advising is already a partial evolution; in the first versions, it was possible for the system to alert students of their being at risk of failure. In the version shown in Figure 1, the advice is heavily moderated only to suggest engagement with available services.

The advising concept may be further generalized to include gamification elements. As shown in Figure 2, the advising may take the form of virtual awards and badges that can be attached to a student’s profile. Although these virtual awards require no resources from the institution, they are a powerful means of driving student activity, since students value these awards to a high degree in their applications for scholarships and employment.
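The flagging step described above reduces to a z-score computation over class marks. The sketch below is an illustrative reconstruction, not the AutoScholar implementation; the class marks and the one-standard-deviation cutoff are assumptions made for the example:

```python
from statistics import mean, stdev

def flag_underperformers(marks, threshold=-1.0):
    """Return students whose mark falls more than `threshold` standard
    deviations below the class mean (z-score flagging)."""
    scores = list(marks.values())
    mu, sigma = mean(scores), stdev(scores)
    return {student: round((mark - mu) / sigma, 2)
            for student, mark in marks.items()
            if (mark - mu) / sigma < threshold}

# Hypothetical class: one student sits well below the mean.
class_marks = {"s001": 72, "s002": 65, "s003": 38, "s004": 70, "s005": 68}
print(flag_underperformers(class_marks))  # {'s003': -1.76}
```

The flagged z-scores would then be fed, together with the student’s own data, into the advising script that generates each apparently personalized message.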

Figure 2. Advising in the Form of Virtual Awards



While the application of points, badges, and leaderboards can drive students’ level of engagement, one has to apply these methods judiciously to avoid degradation of the educational experience to one of jumping through a series of hoops and thereby limiting the experience of a cohesive curriculum. To develop the sense of an integrated whole, the third evolution of student advising involves providing a large goal to students based on an assumption of a graduation and an assumption that each student is striving not merely to pass, but also to accomplish academic excellence. Figure 3 illustrates this evolution.

Figure 3. Advising for Success: The Simulated Case Study



In this case, the system advises students (see top right) of their current status, which, based on their current records, indicates that they are on track to graduate with a lower second class of degree. (For institutions that do not implement such a classification, this can be substituted with mark ranges such as credited weight average in excess of 70%.) The system also alerts students that they can graduate with an upper second class of degree instead by improving their performance by only a few fractions of a percentage point. This provides a student with an overall objective based on an assumption of a final graduation rather than simply the avoidance of failure.

Furthermore, below that top-right box the system shows students which classes they are currently registered for, together with their performance in the various assessments. It notes to students what their minimum performance level should be in the remaining assessments in that class to accomplish the overall goal with respect to the final degree. This evolution of advising can encourage the student to constantly strive higher and achieve a greater level of academic accomplishment.

MULTIDIMENSIONAL ADVISING: ROLE PLAYERS IN HIGHER EDUCATION
To achieve significant improvements in the progression and hence the graduation rates, it is necessary for the various role players to receive accurate advice. At the student scale, advice on coursework registration as well as day-to-day study habits are a direct influence. Advice to lecturers with respect to students at risk and course management practices can significantly improve the student (and lecturer) experience. At the counselor scale, the ability to benchmark a student against the student population is key. At the executive scale, the allocation of resources to support teaching and learning to specific programs should correlate with the performance levels in the programs.

Role Players in Higher Education: Another Dimension in Advising at Scale
Although advising at-risk students is emphasized at most institutions, it is also necessary to advise the other role players that influence student success. Lecturers require advice on their course/module management, student advisors require insights into student performance to render advice effectively, faculty management require insights into which academic programs require more teaching resources, and executives require insight into the faculties that would benefit from additional financial resources. Some case studies in advising these role players are shown in Figures 4 through 8.

In Figure 4 it is possible to understand which students require advising as well as to identify the various activities that can be undertaken to better organize the learning content and to generally support better student engagement with the course content.
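The “minimum performance level in the remaining assessments” advice described above is, at its core, weighted-average arithmetic. The following is a minimal sketch under stated assumptions (the module weights, marks, and the 70% target boundary are hypothetical, and the function is illustrative rather than AutoScholar’s code):

```python
def required_average(completed, remaining_weight, target):
    """Minimum average mark (%) needed across the remaining assessment
    weight for the final weighted mark to reach `target`.

    `completed` lists (weight_fraction, mark_percent) pairs; completed
    and remaining weights are assumed to sum to 1."""
    earned = sum(w * m for w, m in completed)
    needed = (target - earned) / remaining_weight
    return max(0.0, min(100.0, needed))

# Hypothetical module: two tests done (20% each), exam worth 60%,
# student aiming for a 70% overall boundary.
print(required_average([(0.2, 62), (0.2, 58)], 0.6, target=70))
```

A clamped return of 100.0 signals that the target is out of reach on the remaining weight, which is exactly the point at which the advising script would soften the goal rather than report an impossible number.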



Figure 4. Screenshot from ClassView Connect Component of the AutoScholar Advisor System

Note: Advice rendered to lecturers to manage classes better.



From the perspective of an academic program manager, such as a head of department or program convenor, it is also useful to identify which coursework in a program should be prioritized to resolve low pass rates. Figure 5 illustrates analysis of an academic program, showing low pass rates, the influence of a high confluence of prerequisite requirements, and the impact on senior courses, which students then take later than intended.

Figure 5. Program Analyst Component of AutoScholar: Identification of Coursework Issues



To fully advise faculty staff on student progression, it is also necessary to evaluate the transfer of students from one semester to the next, and to maintain awareness of the various combinations of courses involved in the various routes to graduation. Figure 6 illustrates that a program manager can determine at which point in the curriculum the largest number of students exit or recycle.

Figure 6. Population Balance Illustrating Student Progression through an Academic Program

At the whole-institution scale, executives maintain a bird’s-eye view of the entry and graduation statistics. In particular, given an entering cohort in a particular year, it is necessary to monitor what fraction of students complete in minimum time and what fraction exit without graduating (Figure 7).



Figure 7. Executive Insight Component of AutoScholar Advisor System to Monitor
Institutional Progression

To take action by alerting relevant staff or allocating resources, the next step would be to determine, among all academic programs at the institution, which programs exhibit the lowest pass rates and lowest performance indices. Figure 8 illustrates that such programs can easily be identified.
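The identification step described here is, in essence, a ranking of programs by pass rate. A hedged sketch, with hypothetical program names and rates standing in for institutional data:

```python
def programs_needing_support(pass_rates, n=3):
    """Return the `n` academic programs with the lowest pass rates,
    lowest first, as candidates for additional support."""
    return sorted(pass_rates.items(), key=lambda kv: kv[1])[:n]

rates = {"Chemistry": 81.0, "Chemical Engineering": 64.5,
         "Mathematics": 58.2, "Physics": 72.3}
print(programs_needing_support(rates, n=2))
# [('Mathematics', 58.2), ('Chemical Engineering', 64.5)]
```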



Figure 8. Executive Insight: Identifying Academic Programs in Need of Support

It is therefore possible for all role players in higher education to receive sufficient insight and hence to apply suitable interventions or allocate resources to ameliorate the limitations identified. Such data-oriented advising may be directly applied in most cases. At the student level, however, it could be more necessary to moderate the advice rendered by interpreting the results and suggesting interventions based on the student temperament and degree of reception to critical feedback.



Hybrid Advising
In academic advising, human advisors often do not have all the requisite information at hand, with inherent limitations in what they can do with such information. For example, an advisor cannot make decisions for an advisee, but can provide various alternatives for the student to consider. Similarly, an advisor cannot increase the native ability of the advisee, but can encourage maximum use of that ability. Advisors also cannot reduce an underperforming student’s academic load, but can recommend appropriate interventions. Confidential matters also present challenges, since advisors must balance the need for information exchange with the need to respect student confidentiality. Furthermore, when complex problems arise related to financial aid, mental or physical health, or personal or social counseling, advisors often have to refer students to other professionals.

Given the inherent diversity in student attributes, moderating advice to students is essential as students navigate the complexities of their institutions. Personalized connections can help bridge the gap between expectations and experiences, especially for international students and those who require support to prevent departure before graduation (Moore, 2022).

This points to the value of hybrid advising, which combines in-person and online elements, and can help to mitigate some of these challenges by using chatbots to handle routine transactions while leveraging human interactions to address specific and unique situations. The Covid-19 pandemic accelerated the blending of in-person and online learning in many schools, a shift that, despite its challenges, can potentially enhance the academic experience in the long run. This hybrid model can help break down barriers to access, allowing universities to reach a broader and more diverse population of students. It can also better meet the changing workforce’s needs and provide working adults with lifelong learning and career opportunities (Selingo & Clark, 2021).

Implications for Institutional Research
The approaches outlined here emphasize awareness of a need for intervention at a specific point of application. It is also possible to apply this approach to evaluate the effectiveness of any specific interventions that might be applied. Such an approach is typical of Improvement Science frameworks (Perry et al., 2020), where the continuation of an intervention must be evaluated according to the observed improvement or lack thereof. In fact, it is a well-established practice in Improvement Science to reevaluate not only the suitability of any applied interventions, but also the metrics used in the evaluation itself.

It is also worth noting that, although metrics are cited for the performance of students at the whole-institution scale, the system also generates the same statistics at the college, faculty, and academic program levels. This is significant since the context of the student and nature of studies undertaken will influence the performance metrics. It then becomes possible to customize the applied interventions rather than assuming that a blanket strategy applies to all disciplines.

The ease of access to data analysis may also afford new insights to the student support staff. Student advisors often complain that their role devolves to simple information brokering rather than affording students insight to performance improvement. This is at least in part due to nonacademic advising



emphasizing the student impression of the severity of the challenges faced. If an advisor is also able to correlate this with actual changes in performance as reflected by data showing a student’s progression, it might be possible to review perceived challenges more objectively and hence to raise the value of the advice rendered.

It is still necessary to actively challenge the interpretation of data, however. Various forms of bias easily enter even careful analysis, to say nothing of the tendency to adopt an auto-generated message as the gospel truth. There are as many as six main categories of bias (confirmation, selection, historical, survivorship, availability, and outlier). Without suitable training, it is all too easy for a viewer to take action that yields unexpected results.

On the other hand, it is known that the students most in need of support are often the least likely to ask for it. Automation may play a role in provoking at least a conversation if not an active engagement between a student and a human advisor that might not otherwise have occurred. There are rich possibilities for the hybridization of automated and human advising.

There are other implications of the AutoScholar Advisor system for institutional research (IR) and institutional effectiveness (IE) professionals. It is possible that IR or IE officials can engage in collaborative work with academic advising staff on how data are collected, managed, and prepared for the feedback loops. In addition, the IR analysts might want to design a study to examine student success based on use of the AutoScholar Advisor system (e.g., perhaps a pre–post type of research design). This could yield great benefits for students and provide return-on-investment rationale for use of the system. Other research studies may also be considered, such as the evaluation of different models of advising on student experience and satisfaction: automated, human, and hybrid advising.

IR and IE officials might also want to ensure that other colleagues are considering potential bias that can occur in data (majority vs. minority students, or other known facets of differences). Indeed, we believe that this system can help underserved student populations and that IR officials can help articulate those benefits to campus colleagues.

CONCLUSION
In this article we have attempted to demonstrate that, while academic advising has consistently been rated a top predictor of students’ success and satisfaction during their undergraduate careers (Anderson et al., 2014), the traditional human-centered academic advising is a resource-intensive process that is difficult to sustain, prompting institutions to develop alternative advising models. Based on our experiences of advising development at a South African university, we contend that automated systems that use AI techniques (such as the AutoScholar Advisor system) can “minimize incorrect advice, minimize the load on academic advisors, solve the issue of the limited number of advisors, and free up more of their time” (Assiri et al., 2020, p. 1).

However, automated systems alone can have unintended consequences, such as engendering demotivation among students. We therefore conclude with the proposition that the optimal approach to advising is a hybrid between human intervention and automation, where the automation augments human judgment. In this modality, the automated advice function provides the initial



prompt to alert students of their at-risk status or of their potential to attain higher grades. These students are then ushered to appropriately qualified advisors who provide the human touch to ameliorate the limitations of automated systems.

REFERENCES
Anderson, W., Motto, J. S., & Bourdeaux, R. (2014). Getting what they want: Aligning student expectations of advising with perceived advisor behaviors. Mid-Western Educational Researcher, 26(1), 27–51. https://www.mwera.org/MWER/volumes/v26/issue1/v26n1-Anderson-Motto-Bourdeaux-feature-articles.pdf

Assiri, A., Al-Ghamdi, A. A., & Brdesee, H. (2020). From traditional to intelligent academic advising: A systematic literature review of e-academic advising. International Journal of Advanced Computer Science and Applications, 11(4), 507–517. https://thesai.org/Publications/ViewPaper?Volume=11&Issue=4&Code=IJACSA&SerialNo=67

Atalla, S., Daradkeh, M., Gawanmeh, A., Khalil, H., Mansoor, W., Miniaoui, S., & Himeur, Y. (2023). An intelligent recommendation system for automating academic advising based on curriculum analysis and performance modeling. Mathematics, 11(5), 1098. https://www.mdpi.com/2227-7390/11/5/1098

Bennett, A. (2008). Process tracing: A Bayesian perspective. In J. M. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology. Oxford University Press.

Bornschlegl, M., Meldrum, K., & Caltabiano, N. J. (2020). Variables related to academic help-seeking behaviour in higher education: Findings from a multidisciplinary perspective. Review of Education, 8(2), 486–522. https://bera-journals.onlinelibrary.wiley.com/doi/abs/10.1002/rev3.3196

Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://link.springer.com/article/10.1007/s10796-022-10291-4

EAB. (2023, July 5). Develop a student-centered academic advising model. https://eab.com/research/academic-affairs/roadmaps/develop-a-student-centered-academic-advising-model/

Fong, C. J., Gonzales, C., Hill-Troglin Cox, C., & Shinn, H. B. (2021). Academic help-seeking and achievement of postsecondary students: A meta-analytic investigation. Journal of Educational Psychology, 115(1), 1–21. https://psycnet.apa.org/record/2022-17344-001

Haley, L. (2016, February 16). The role of emotional intelligence in quality academic advising. Global Community for Academic Advising. https://nacada.ksu.edu/Resources/Academic-Advising-Today/View-Articles/The-Role-of-Emotional-Intelligence-in-Quality-Academic-Advising.aspx

Keup, J. R., & Young, D. G. (2021). Being HIP: Advising as an emerging high-impact practice. New Directions for Higher Education, 2021(195–196), 91–99. https://onlinelibrary.wiley.com/doi/full/10.1002/he.20411



Kuhn, T. (2008). Historical foundations of academic advising. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive campus process (pp. 3–16). Jossey-Bass. https://psycnet.apa.org/record/2008-17135-001

Meotti, M., & Magliozzi, D. (2023, March 9). Using artificial intelligence to navigate the new challenges of college and career. Harvard Advanced Leadership Initiative. https://www.sir.advancedleadership.harvard.edu/articles/using-artificial-intelligence-to-navigate-the-new-challenges-of-college-and-career

Moore, W. (2022). Personalized advising that is purposely inconsistent: The constants of great advisors and the variability that demands adaptability. In D. Westfall-Rudd, C. Vengrin, & J. Elliott-Engel (Eds.), Teaching in the university: Learning from graduate students and early-career faculty (chap. 12). Virginia Tech College of Agriculture and Life Sciences. https://pressbooks.lib.vt.edu/universityteaching/chapter/personalized-advising-that-is-purposely-inconsistent-the-constants-of-great-advisors-and-the-variability-that-demands-adaptability/

Nurshatayeva, A., Page, L. C., White, C. C., & Gehlbach, H. (2021). Are artificially intelligent conversational chatbots uniformly effective in reducing summer melt? Evidence from a randomized controlled trial. Research in Higher Education, 62, 392–402. https://eric.ed.gov/?id=EJ1294362

Page, L. C., & Gehlbach, H. (2017). How an artificially intelligent virtual assistant helps students navigate the road to college. AERA Open, 3(4). https://journals.sagepub.com/doi/full/10.1177/2332858417749220

Perry, J. A., Zambo, D., & Crow, R. (2020). The improvement science dissertation in practice: A guide for faculty, committee members, and their students. Myers Education Press.

Rawatlal, R. (2022). The role of automated student advising when incentivising student success. ICERI2022 Proceedings, IATED. https://www.researchgate.net/publication/365754097_the_role_of_automated_student_advising_when_incentivising_student_success

Selingo, J., & Clark, C. (2021, October 8). Imagining the hybrid college campus. Harvard Business Review. https://hbr.org/2021/10/imagining-the-hybrid-college-campus

Shift. (2022, November 29). Shift principles in action: Hybrid advising co-op convening. https://shift-results.com/resources/blog/shift-principles-in-action-hybrid-advising-co-op-convening/

Thiry, H., & Laursen, S. L. (2011). The role of student-advisor interactions in apprenticing undergraduate researchers into a scientific community of practice. Journal of Science Education and Technology, 20, 771–784. https://link.springer.com/article/10.1007/s10956-010-9271-2



Reflections on the Artificial Intelligence
Transformation: Responsible Use and
the Role of Institutional Research
and Institutional Effectiveness
Professionals

Mike Urmeneta

About the Author


Mike Urmeneta is with the Association for Institutional Research (AIR)’s Data Literacy Institute. Previously he
was director of analytics at the New York Institute of Technology.

Abstract
This article explores the potential impact of artificial intelligence (AI) and machine learning (ML) on higher
education. It provides an overview of current generative AI capabilities and argues for ethical frameworks to address
issues such as bias. The article advocates for a multidisciplinary governance approach involving institutional
stakeholders by examining past academic technology adoption. It highlights the strategic role institutional
research (IR) and institutional effectiveness (IE) professionals can play in navigating AI complexities. This article
provides specific suggestions for IR/IE professionals to embrace the role of AI ethicist: continuously developing
AI literacy, ensuring ethical deployment, upholding privacy and confidentiality, mitigating bias, enforcing
accountability, championing explainable AI, incorporating student perspectives, and developing institutional
AI policies. The article concludes by asserting that IR/IE’s research expertise, ethical commitment, and belief in
human judgment equip the field to adapt to and lead in the AI era. By taking an active role, IR/IE can shape the
technology’s impact to benefit higher education.

The AIR Professional File, Fall 2023 https://doi.org/10.34315/apf1642023

Article 164 Copyright © 2023, Association for Institutional Research



INTRODUCTION

As discussed in this volume's preface and evidenced by the other articles in this volume, artificial intelligence (AI) and machine learning (ML) are far from new concepts (Stahl, 2021). Until recently, however, discussions around these tools were predominantly confined to specialists, researchers, and enthusiasts. This changed in November of 2022, when Chat Generative Pre-trained Transformer (ChatGPT) provided unprecedented access to this technology, ushering in a new wave of widespread interest. Seemingly overnight, generative AI had catapulted to the forefront of public awareness. AI and ML started to permeate every field and industry, spanning technology, business, health care, law, and education. Reactions ranged from excitement and enthusiasm to criticism and concern. While generative AI has the potential to increase efficiency, encourage exploration, and spark creativity, it also has the potential to disseminate misinformation, compromise privacy, and amplify biases (Megahed et al., 2023; Shahriar & Hayawi, 2023). Certainly, as these technologies continue to evolve, they also continue to introduce opportunities and challenges.

This article reflects on the potential impact of AI in higher education, from the increasing proliferation of AI tools, to the need for ethics and accountability, to the pivotal role of institutional research (IR) and institutional effectiveness (IE) offices. It begins by exploring generative AI's evolution and capabilities. It then advocates for a robust ethical framework and accountability measures to mitigate AI biases. It next examines disruptive technology in academia through a historical lens, and then discusses the need to leverage IR and IE expertise. It concludes by embracing the role of the AI ethicist, and challenges IR/IE professionals not only to navigate the complexities of AI but also to harness its potential to shape a sustainable and inclusive future.

EVOLUTION AND CAPABILITIES OF GENERATIVE ARTIFICIAL INTELLIGENCE

Today's generative AI tools have an array of capabilities, including the ability to summarize and condense complex information, generate art and imagery, and streamline writing and research (Megahed et al., 2023). Using natural language prompts, large language models (LLMs) like ChatGPT, Google Bard, Microsoft Bing Chat, Jasper.ai, Perplexity, HuggingChat, Language Model for Dialogue Applications (LaMDA), and Large Language Model Meta AI (LLaMA) can draft sophisticated written content. Based solely on descriptive text, these models can create reports, marketing materials, cover letters, and program code. Furthermore, they can summarize dense material and provide sentiment analysis of uploaded content. Generative art tools such as Midjourney, Stable Diffusion, Leonardo AI, and Adobe Firefly can convert descriptive text into studio-quality art and imagery. Finally, AI-enhanced tools like Elicit and Consensus can accelerate the process of identifying and reviewing research studies and articles, complete with citations (Lund et al., 2023).

The landscape continues to evolve. Third-party plugins can enhance ChatGPT's capabilities by providing access to external resources and services (OpenAI, 2023). Multimodal large language models (MLLMs) like Microsoft's Kosmos-2 can accommodate a broader range of input types than just text, including images, audio, and video (Peng et al., 2023). Autonomous AI agents, such as Auto-GPT and Tree of Thoughts, can be assigned an objective and can be programmed to run on an iterative loop until that objective has been met (Nakajima, 2023;
Tindle, 2023; Yao et al., 2023). In these models, intermediate steps are generated, tested, and updated without human guidance.

Research indicates that GPT-4 performs "strikingly close to human-level" on tasks across a diverse range of disciplines such as law, medicine, psychology, mathematics, and programming (Bubeck et al., 2023, p. 1). In 2022, GPT-3 was nearly able to pass the U.S. Medical Licensing Examination (Jenkins & Lin, 2023; Kung et al., 2023), and in 2023 GPT-4 successfully passed the Uniform Bar Examination (Katz et al., 2023). This evolution highlights the rapid advancements in AI, marking an era of possibility for this transformative technology.

THE NEED FOR ETHICS AND ACCOUNTABILITY IN MITIGATING ARTIFICIAL INTELLIGENCE BIASES

The future is not a vague, distant concept. In discussions about technology and society, a quote by science fiction author William Gibson (2003) is frequently cited: "The future is already here—it's just not evenly distributed." His phrase implies a disparity where advanced technologies are available to some groups but not to others. It highlights the need to democratize technology and make its benefits more universally accessible.

Coded Bias is a 2020 documentary film directed by Shalini Kantayya that delves into the biases embedded within AI technology. The film centers on MIT media researcher Joy Buolamwini, who discovered that facial recognition systems failed to recognize her own face. This discovery led Buolamwini to investigate further how AI technology can disproportionately affect minorities (Kantayya, 2020). The film goes on to criticize how the lack of legal structures around AI results in human rights violations. It reveals how specific algorithms and AI technologies discriminate based on race and gender, affecting vital areas of life such as housing, job opportunities, health care, credit, education, and legal issues.

Following her discoveries, Buolamwini and her colleagues testified about AI before the U.S. Congress. Buolamwini then established the Algorithmic Justice League (AJL), a digital advocacy group whose goal is to address these biases and create a fair and accountable AI ecosystem by increasing awareness, equipping advocates, and uncovering AI abuses and biases (AJL, n.d.). AJL members advocate for accountability through third-party audits of AI algorithms (Koshiyama et al., 2021; Raji et al., 2023).

Fortunately, progress has been made since Coded Bias was released (Kantayya, 2020). By August 2022, AI resolutions had been introduced in at least 17 states (National Conference of State Legislatures, 2022). In October of the same year, the White House (2022) published the "Blueprint for an AI Bill of Rights" to address potential harms. Meanwhile, the European Parliament has taken the lead in legislation safeguarding individuals from possible AI-related hazards. In June 2023, the European Parliament voted to approve the Artificial Intelligence Act (European Parliament and Council of the European Union, 2021), the most far-reaching piece of AI legislation to date. The act addresses concerns about surveillance, algorithmic discrimination, and misinformation; it also introduces regulations and requirements for AI developers, and could be likened to the European Union's General Data Protection Regulation (2018).
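The generate-test-update loop that autonomous agents run can be illustrated with a deliberately simple sketch. Everything here is a hypothetical stand-in: a real agent would use an LLM to propose steps toward an open-ended objective, whereas this toy "agent" merely mutates a string toward a fixed target. It shows only the loop structure itself: propose an intermediate step, test it, keep it if it helps, and repeat until the objective is met.

```python
import random

# Toy illustration of an autonomous agent's iterative loop.
# The objective, proposal step, and scoring function are
# illustrative stand-ins, not any tool's actual implementation.

OBJECTIVE = "graduation rate report"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate: str) -> int:
    """Count positions that already match the objective."""
    return sum(a == b for a, b in zip(candidate, OBJECTIVE))

def propose(candidate: str) -> str:
    """Generate a new candidate by mutating one position."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def run_agent(seed: int = 0) -> tuple[str, int]:
    random.seed(seed)
    state = "".join(random.choice(ALPHABET) for _ in OBJECTIVE)
    steps = 0
    while state != OBJECTIVE:                 # loop until objective met
        candidate = propose(state)            # generate intermediate step
        if score(candidate) >= score(state):  # test the step
            state = candidate                 # update without human guidance
        steps += 1
    return state, steps

if __name__ == "__main__":
    final, steps = run_agent()
    print(f"objective reached after {steps} iterations: {final!r}")
```

The loop terminates because a mutation that breaks a matching character is always rejected, so progress toward the objective is never lost; a real agent has no such guarantee, which is one reason these systems warrant oversight.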



The future is indeed upon us, but it is not uniformly accessible, as evidenced by the bias in technologies like AI. The work of Joy Buolamwini and the AJL has brought this bias to the forefront. These bias-related issues underline the importance of democratizing technology by enforcing privacy, fairness, and transparency in AI tools (Cath, 2018; Mhlanga, 2023). As the capabilities of AI models increase, the urgency for human oversight becomes ever more crucial (Prud'homme et al., 2023). While AI can accomplish remarkable feats, it is fundamentally important to acknowledge that human guidance and ethical considerations are pivotal to guaranteeing responsible and beneficial outcomes.

A HISTORY OF DISRUPTIVE TECHNOLOGY IN ACADEMIA: A MULTIDISCIPLINARY APPROACH TO GOVERNANCE

AI is not academia's first encounter with disruptive technology. One only needs to look to the recent past to see similar concerns and debates around the use of the Internet, analytics, mobile technology, data science, and cloud computing. Addressing the impact of these technologies required a multidisciplinary approach involving higher education professionals from across the academy. The same approach can be used for generative AI.

Gasser and Almeida (2017) addressed how governance mechanisms, accountability, and transparency can be jointly examined with broad stakeholders when dealing with technological black boxes. Mirroring a model used for the General Data Protection Regulation, the authors proposed a three-layered framework for regulating AI systems, covering their technical, ethical, and legal aspects. These layers offer a broad but practical approach to implementing governance structures for AI, an approach that can vary among industries and organizations.

Officials in higher education institutions can use a similar multipronged approach. Colleagues in multiple divisions can work both independently and in concert to tackle AI issues. University information technology offices can address AI from a technical perspective by managing how physical and software systems interact with AI algorithms. This layer can focus on transparency, audits, algorithmic accountability, and fairness in data usage. Likewise, the general counsel, compliance, and human resources offices can address AI from a regulatory and policy perspective. This layer can incorporate technical and ethical insights into legal and regulatory frameworks (Viljanen & Parviainen, 2022). Finally, IR and IE officers can approach AI from an ethical perspective through oversight, evaluation, policy development, and data governance.

Given the speed of advancements, even full-time AI researchers report feeling anxious and overwhelmed (Togelius & Yannakakis, 2023). The difficulty for educational professionals is further exacerbated by the traditionally glacial pace of educational transformation. However, established principles and frameworks can be a consistent foundation for navigating the evolving technological landscape (Taeihagh, 2021).
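The layered division of labor described above can be made concrete as a simple review checklist that pairs each layer with campus owners and questions. The office names and review questions below are illustrative assumptions for one hypothetical campus, not a prescribed standard.

```python
# Illustrative sketch of a three-layer AI governance checklist,
# pairing each layer of a technical/ethical/legal model with
# campus offices and review questions. All names and questions
# are hypothetical examples.

GOVERNANCE_LAYERS = {
    "technical": {
        "owners": ["information technology"],
        "questions": [
            "Are AI integrations logged and auditable?",
            "Is input and training data handled fairly and securely?",
        ],
    },
    "ethical": {
        "owners": ["institutional research", "institutional effectiveness"],
        "questions": [
            "Has the tool been reviewed for bias and transparency?",
            "Is there human oversight of AI-informed decisions?",
        ],
    },
    "legal": {
        "owners": ["general counsel", "compliance", "human resources"],
        "questions": [
            "Does use comply with privacy law and campus policy?",
            "Who is accountable if the system causes harm?",
        ],
    },
}

def review_agenda(layers: dict) -> list[str]:
    """Flatten the checklist into an agenda for a joint review meeting."""
    agenda = []
    for layer, detail in layers.items():
        owners = ", ".join(detail["owners"])
        for question in detail["questions"]:
            agenda.append(f"[{layer} | {owners}] {question}")
    return agenda

if __name__ == "__main__":
    for item in review_agenda(GOVERNANCE_LAYERS):
        print(item)
```

Keeping the layers in one shared structure reinforces the point that the divisions work independently on their own questions but in concert on the same tool.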



LEADING THE CHARGE: LEVERAGING INSTITUTIONAL RESEARCH AND INSTITUTIONAL EFFECTIVENESS EXPERTISE

IR/IE offices are tasked with collecting, analyzing, and using data to support decision-making, planning, policymaking, and institutional improvement. Moreover, it is a fundamental aspect of the IR/IE professional's role to establish robust engagement, encourage collaboration, and ensure open communication with stakeholders across their respective institutions. As custodians and advisors of data-informed decision-making, IR/IE professionals provide crucial context and nuance to their organizations. As such, IR/IE professionals are frequently entrusted to lead and advise on projects related to data literacy, data governance, and institutional assessment. Leadership in implementing AI strategies is not such a far reach. The skillset, relationships, and experience required to excel in their current roles can help IR/IE professionals navigate this era of technological change. The ability to interpret data and communicate insights effectively is essential to AI development and implementation.

The remainder of this article outlines how IR/IE professionals can take an active role in leveraging AI for their institutions. Some suggestions may seem aspirational, given that many IR/IE offices work under high demands and with scarce resources. However, strategies that are applied incrementally can still lead to impactful changes despite resource limitations. AI can benefit small IR/IE offices by enhancing workflow to create more capacity. The time saved by leveraging AI individually can then be redirected toward leveraging AI organizationally.

EMBRACING THE ROLE OF ARTIFICIAL INTELLIGENCE ETHICIST: GUIDELINES FOR INSTITUTIONAL RESEARCHERS AND INSTITUTIONAL EFFECTIVENESS PROFESSIONALS

One crucial role that IR/IE professionals can play is that of AI ethicist. Niederman and Baker (2023) argued that the ethical issues associated with AI are not unique, and that current frameworks have the capacity to tackle them. In their study, Jobin et al. (2019) conducted an extensive analysis of 84 AI ethics reports drawn from a diverse range of private corporations, research institutions, and governmental bodies. Through a thematic analysis, they discovered an agreement across these reports, centering on five key ethical considerations for AI: transparency, fairness, safety, accountability, and privacy. To guide their actions, IR/IE professionals can look to the Association for Institutional Research (AIR) Statement of Ethical Principles (AIR, 2019) as their North Star. The statement equips IR/IE professionals with a flexible and familiar framework to effectively handle the concerns and complexities associated with AI. It comprehensively addresses a multitude of concerns that have been raised by those expressing apprehension about AI. Like the ethical considerations above, the AIR statement emphasizes privacy, accuracy, contextual relevance, fairness, transparency, and accessibility. These principles can serve as a compass to guide practitioners in their work with AI, as the AIR statement has successfully done with the tools and technologies that preceded it. Following are a few suggestions on how IR/IE professionals can apply these ethical principles.



Continuous Learning and Development

A good guide must understand the terrain. The first step in leveraging AI involves taking time to understand and experiment with it. As with any new skill, proficiency will develop through practice and application. Fortunately, gaining AI expertise is no longer a steep hill to climb.

Many LLMs, such as ChatGPT, Google Bard, Claude, and Microsoft Bing Chat, are free and accessible. Although some models are proprietary, information about the technology and its foundational principles is documented and available. The main differences among models lie in the specific data sets on which they are trained, and these can vary significantly. Traditional ML models often rely on supervised learning, where the model is trained on labeled data sets that are known. LLMs, on the other hand, use self-supervised learning on vast amounts of unlabeled text to train models to predict the next likely word in a phrase or sentence. Given the sheer enormity and complexity of these models, LLMs are effectively black boxes designed to generate human-like responses. Knowing this, IR/IE practitioners should focus on applying LLMs to areas where their strengths can be used most effectively.

From a practical standpoint, there is no shortage of documentation, videos, forums, and communities offering tips, techniques, and examples. The act of designing, testing, and refining AI instructions is called "prompt engineering." The process is similar to developing effective research questions: it requires an understanding of context and a willingness to continue refining. Arming oneself with technical and practical information will go a long way toward reducing anxiety and increasing competence. Once competence is attained, education of the community and leveraging of AI can occur.

A black box model is not a substitute for the skills, expertise, transparency, and nuanced judgment an experienced IR/IE professional can provide. Thus, an IR/IE professional's responsibility must extend beyond just describing these models to stakeholders. It is crucial to educate users about their underlying methodology and limitations. Practitioners can offer clarity and insight to campus community members, and can equip them with knowledge of these models' capabilities and limitations. This understanding can empower stakeholders to make informed decisions about the use of AI.

Ethical Deployment

The significance of ethics in AI usage, even when using publicly available tools, cannot be overstated. Upholding ethical principles is essential at all stages of AI adoption, from selecting the right tool, to understanding data needs, to deployment of AI in daily operations. Collaboration across institutional teams is crucial to maintaining these ethical standards. IR/IE professionals can foster interdepartmental cooperation, thus ensuring that AI tools are used responsibly and ethically, in line with the best interests of campus stakeholders. Soliciting campus feedback can broaden and diversify perspectives on AI tool use. Facilitating open dialogues on AI ethics can stimulate ethical mindfulness. Finally, establishing training sessions on AI ethics can strengthen awareness and responsible usage.

Privacy and Confidentiality

When using generative AI tools, IR/IE professionals can establish privacy and confidentiality by first understanding existing tools and their privacy policies. IR/IE professionals can then adapt a range
of established research protocols to further protect user data and limit exposure. These protocols include practices like data minimization, where only the necessary data are input into the AI tool. This technique reduces the risk of privacy breaches. Another approach is to anonymize any personal data before they are input into the tool. A third protocol is to obtain informed consent when sensitive data are used, even when personal identifiers are removed. Furthermore, educating staff on privacy and responsible AI use is essential. Finally, one should not hesitate to consult with legal counsel to ascertain that all necessary precautions are being taken.

Bias and Fairness

IR/IE professionals know that bias can be introduced at multiple stages of the research process and must be managed (Roulston & Shelton, 2015). Likewise, bias can be inserted at multiple points in AI models and must be mitigated. Bias can hide in the training data, in the algorithms, and in the subjective choices of their creators. In her TED Talk, Cathy O'Neil (2017) challenged the common perception that algorithms are objective, asserting that algorithms are influenced by the biases of their designers. The same protocols used to mitigate bias in research can also be applied to AI use.

IR/IE offices can adopt several measures to minimize bias and enhance fairness when using publicly available generative AI tools. One of the first steps is to carefully review and select the tools to be used. It is essential to choose tools with a reputation for fairness and transparency. The selection process can include reading reviews and studying case studies to make an informed choice. Once the right tools have been chosen, it must be understood that the process can still be contaminated with biased input data. Practitioners must ensure that the data fed into these models fully represent the populations and scenarios to be considered. Additionally, practitioners must use professional judgment when interpreting and presenting results. Involving key stakeholders at each stage can help ensure that diverse perspectives are considered.

Accountability and Responsibility

Working collaboratively with campus colleagues, IR/IE professionals can help drive the discussion on AI accountability. These dialogues should not be theoretical but rather should be grounded in specific use cases. They must identify who will take responsibility when an AI system inflicts harm or commits a significant error (Dignum, 2018). For example, someone must be willing to take responsibility if an AI tool makes an incorrect prediction that impacts a student negatively. Comfort in taking responsibility will require proficiency with the AI tools used, the establishment of clear guidelines for usage, and clear communication with other stakeholders. IR/IE professionals can facilitate all of these steps.

Furthermore, a review mechanism and an appeal process should be established to evaluate decisions informed by AI. Finally, one strategy to ensure accountability is to include third-party audits. External evaluators bring an objective perspective and use distinct methodologies and frameworks for assessment. These auditors serve as a safeguard, adding another layer of scrutiny to AI usage and decision-making processes.
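The data-minimization and anonymization protocols described under Privacy and Confidentiality can be sketched in a few lines of code. The field names, the allow-list, and the salted-hash pseudonym scheme below are all illustrative assumptions; a production pipeline would need a vetted de-identification method and key management.

```python
import hashlib

# Sketch of two privacy protocols applied before sending records to
# an external generative AI tool: data minimization (drop fields the
# task does not need) and pseudonymization (replace the direct
# identifier with a salted one-way hash). Field names are hypothetical.

ALLOWED_FIELDS = {"student_id", "term", "gpa", "credits_attempted"}

def minimize(record: dict) -> dict:
    """Keep only the fields the AI task actually requires."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymize(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Replace the direct identifier with a salted one-way hash."""
    safe = dict(record)
    if "student_id" in safe:
        digest = hashlib.sha256((salt + str(safe["student_id"])).encode())
        safe["student_id"] = digest.hexdigest()[:12]
    return safe

def prepare_for_ai(record: dict) -> dict:
    """Minimize first, then pseudonymize what remains."""
    return pseudonymize(minimize(record))

if __name__ == "__main__":
    raw = {
        "student_id": "900123456",
        "name": "Jane Doe",          # dropped: not needed for the task
        "email": "jd@example.edu",   # dropped: not needed for the task
        "term": "Fall 2023",
        "gpa": 3.4,
        "credits_attempted": 15,
    }
    print(prepare_for_ai(raw))
```

Note that a salted hash of a short, structured identifier can still be brute-forced, which is exactly why the article recommends consulting legal counsel and privacy policies rather than relying on any single technique.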

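One concrete check the Bias and Fairness guidance above can translate into is a group-level comparison of favorable-outcome rates in AI-assisted decisions. The 0.8 ("four-fifths") threshold borrowed from employment-selection practice is used here only as an illustrative rule of thumb, and the sample data are invented.

```python
# Minimal sketch of a group-level fairness check: compare each
# group's favorable-outcome rate to the highest group's rate and
# flag large gaps for human review. The 0.8 threshold and the
# sample data are illustrative assumptions, not policy.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Favorable-outcome rate (1 = favorable) per group."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact(outcomes: dict[str, list[int]],
                     threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose rate falls below threshold * best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

if __name__ == "__main__":
    # Invented example: AI-generated "likely to persist" predictions.
    sample = {
        "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% favorable
        "group_b": [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],  # 40% favorable
    }
    for group, flagged in disparate_impact(sample).items():
        print(f"{group}: {'review for bias' if flagged else 'within threshold'}")
```

A flag from a check like this is a prompt for the professional judgment and stakeholder involvement the article calls for, not an automated verdict.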


Transparent and Explainable Artificial Intelligence

Transparency is necessary for developing trust among student, administrative, and faculty stakeholders. Furthermore, transparency is a fundamental principle that underpins robust and credible research. Extending this principle to AI is relevant and necessary. IR/IE professionals can champion the need for transparent and explainable AI. It is difficult to achieve transparency when dealing with something that is continually evolving. Examining the issue from a legal perspective, Miriam Buiten (2019) acknowledged this difficulty and proposed a practical solution: instead of creating new regulations for a rapidly changing field, Buiten recommended applying existing regulations from a more familiar but related area. Likewise, IR/IE practitioners can follow a similar strategy by applying the established principles of good research design to AI use. One does not need to be an AI expert to ensure transparency. IR/IE professionals can uphold the principle of transparency by assisting with tool selection, researching methodology, maintaining open communication with the community, and modeling ethical and responsible use.

Student Involvement and Communication

As discussed by Emily Oakes, Yih Tsao, and Victor Borden in their article in this volume, it is critical to incorporate the student voice in the work of student success. Student voice refers to individual students' and student groups' values, beliefs, perspectives, and cultural backgrounds. Higher education professionals must listen to, learn from, and respond to the collective student voice. Unfortunately, a recent meta-analysis of media articles on AI's impact on higher education found little mention of the student voice (Sullivan et al., 2023). Instead, the dominant discussion focused on institutional concerns about academic integrity. This oversight must be corrected. Together with their peers in student affairs, IR/IE practitioners with qualitative research backgrounds can help lead the discussion. Involving and communicating with students about AI tools that affect them is crucial. It is important to seek methods to educate students about the AI tools involved in their education, emphasizing their rights, benefits, and potential risks.

Develop Institutional Policies for Artificial Intelligence

Finally, having articulated policies and procedures can help guide the campus community toward responsible AI use. I agree with Webber and Zheng (2020) that change is best facilitated through campus-wide strategies. This guiding strategy should include rules for data collection and usage, principles establishing AI transparency, directives for setting data use parameters, processes for initiating the ethical review of AI tools, and mechanisms for ensuring accountability across one's campus or organization. Such policies would not only uphold institutional integrity but also enhance the effectiveness and value of AI in supporting data-informed decisions and optimizing institutional outcomes.



CONCLUSION

AI will be increasingly impossible to ignore. Microsoft, Google, Adobe, and other architects of the digital ecosystem have already begun to embed AI into their existing applications (Microsoft, 2023). Being a passive spectator is neither optional nor tenable. Fortunately, the frameworks and skillsets that have enabled IR/IE professionals to thrive in their current roles can empower them to transition from mere observers to key influencers during this technological revolution.

It is essential to remember that the tools now considered indispensable to IR/IE professionals were once enigmatic and unfamiliar. The same strategies used to master data visualization, business intelligence, statistical analysis, and data science can be used to leverage AI. Armed with research expertise, ethical commitment, data-informed decision-making knowledge, and a profound belief in human insight, IR/IE professionals stand ready to both adapt and lead. By harnessing this unique combination of skills and perspectives, IR/IE professionals can confidently step into the future and remain valued leaders in the higher education community.

REFERENCES

Algorithmic Justice League (AJL). (n.d.). Unmasking AI harms and biases. https://www.ajl.org/

Association for Institutional Research (AIR). (2019). AIR statement of ethical principles. https://www.airweb.org/ir-data-professional-overview/statement-of-ethical-principles

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv:2303.12712 [cs.CL]. https://doi.org/10.48550/arXiv.2303.12712

Buiten, M. C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41–59. https://doi.org/10.1017/err.2019.8

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080

Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1–3. https://doi.org/10.1007/s10676-018-9450-z



European Parliament and Council of the European Union. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

Gasser, U., & Almeida, V. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/mic.2017.4180835

General Data Protection Regulation. (2018). European Union. https://gdpr-info.eu/

Gibson, W. (2003). The future is already here—it's just not evenly distributed. The Economist, 4(2), 152–152. https://doi.org/10.1080/23748834.2020.1807704

Jenkins, R., & Lin, P. (2023). AI-assisted authorship: How to assign credit in synthetic scholarship. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4342909

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kantayya, S. (Director). (2020). Coded bias [Documentary]. 7th Empire Media.

Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2023, March 15). GPT-4 passes the bar exam. SSRN Scholarly Paper No. 4389233. Available at SSRN: https://doi.org/10.2139/ssrn.4389233

Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., Ahamat, G., Leutner, F., Goebel, R., Knight, A., Adams, J., Hitrova, C., Barnett, J., Nachev, P., Barber, D., Chamorro-Premuzic, T., Klemmer, K., Gregorovic, M., . . . Lomas, E. (2021). Towards algorithm auditing: A survey on managing legal, ethical and technological risks of AI, ML and associated algorithms. Available at SSRN: https://doi.org/10.2139/ssrn.3778998

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., Leon, L. D., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023, February 9). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198

Lund, B., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: AI-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750

Megahed, F. M., Chen, Y.-J., Ferris, J. A., Knoth, S., & Jones-Farmer, L. A. (2023). How generative AI models such as ChatGPT can be (mis)used in SPC practice, education, and research? An exploratory study. arXiv. https://doi.org/10.48550/arXiv.2302.10916

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. Available at SSRN: https://doi.org/10.2139/ssrn.4354422



Microsoft. (2023, May 23). Full keynote | Satya Nadella at Microsoft Build 2023 [Video]. https://www.youtube.com/watch?v=FaV0tIaWWEg

Nakajima, Y. (2023). BabyAGI [Python]. https://github.com/yoheinakajima/babyagi

National Conference of State Legislatures. (2022, August 26). Legislation related to artificial intelligence. Updated January 31, 2023. https://www.ncsl.org/technology-and-communication/legislation-related-to-artificial-intelligence

Niederman, F., & Baker, E. W. (2023). Ethics and AI issues: Old container with new wine? Information Systems Frontiers, 25(1), 9–28. https://doi.org/10.1007/s10796-022-10305-1

O'Neil, C. (2017, August 12). The era of blind faith in Big Data must end [TED Talk]. https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end

OpenAI. (2023, March 23). ChatGPT plugins. OpenAI Blog. https://openai.com/blog/chatgpt-plugins

Peng, Z., Wang, W., Dong, L., Hao, Y., Huang, S., Ma, S., & Wei, F. (2023). Kosmos-2: Grounding multimodal large language models to the world. arXiv:2306.14824. http://arxiv.org/abs/2306.14824

Prud'homme, B., Régis, C., Farnadi, G., Dreier, V., Rubel, S., & d'Oultremont, C. (Eds.). (2023). Missing links in AI governance. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000384787

Raji, I. D., Costanza-Chock, S., & Buolamwini, J. (2023). Change from the outside: Toward credible third-party audits of AI systems. In B. Prud'homme, C. Régis, G. Farnadi, V. Dreier, S. Rubel, & C. d'Oultremont (Eds.), Missing links in AI governance. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000384787

Roulston, K., & Shelton, A. (2015). Reconceptualizing bias in teaching qualitative research methods. Qualitative Inquiry, 21(4), 332–342. https://doi.org/10.1177/1077800414563803

Shahriar, S., & Hayawi, K. (2023). Let's have a chat! A conversation with ChatGPT: Technology, applications, and limitations. arXiv. https://doi.org/10.48550/arXiv.2302.13817

Stahl, B. C. (2021). Perspectives on artificial intelligence. In B. C. Stahl (Ed.), Artificial intelligence for a better future: An ecosystem perspective on the ethics of AI and emerging digital technologies (pp. 7–17). Springer International. https://link.springer.com/chapter/10.1007/978-3-030-69978-9_2

Sullivan, M., Kelley, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.17



Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://doi.org/10.1080/14494035.2021.1928377

Tindle, N. (2023). Auto-GPT [GitHub repository]. Retrieved July 15, 2023, from https://github.com/Significant-Gravitas/Auto-GPT

Togelius, J., & Yannakakis, G. N. (2023). Choose your weapon: Survival strategies for depressed AI academics. arXiv:2304.06035 [cs.OH]. https://doi.org/10.48550/arXiv.2304.06035

Viljanen, M., & Parviainen, H. (2022). AI applications and regulation: Mapping the regulatory strata. Frontiers in Computer Science, 3, 141. https://doi.org/10.3389/fcomp.2021.779957

Webber, K. L., & Zheng, H. Y. (2020). Data-informed decision-making and the pursuit of analytics maturity in higher education. In K. L. Webber & H. Y. Zheng (Eds.), Big Data on campus: Data analytics and decision-making in higher education. Johns Hopkins University Press. https://www.researchgate.net/publication/334098849_Data-Informed_Decision_Making_and_the_Pursuit_of_Analytics_Maturity_in_Higher_Education

White House. (2022, October). Blueprint for an AI Bill of Rights. Office of Science and Technology Policy. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv:2305.10601 [cs.CL]. https://doi.org/10.48550/arXiv.2305.10601



ABOUT SPECIAL ISSUES OF THE AIR PROFESSIONAL FILE

Special Issues of the AIR Professional File are guest-edited and feature themed journal-length articles grounded in relevant literature that synthesize current issues, present new processes or models, or share practical applications related to institutional research.

For more information about AIR Publications, including the AIR Professional File and instructions for authors, visit www.airweb.org/publications.

ISSN 2155-7535

ABOUT AIR

The Association for Institutional Research (AIR) empowers higher education professionals to leverage data, analytics, information, and evidence to make decisions and take actions that benefit students and institutions and improve higher education. For over 50 years, AIR has been committed to building and sustaining strong data-informed decision cultures within higher education institutions.

Association for Institutional Research
Christine M. Keller, Executive Director and CEO
www.airweb.org

