APF 2023 Fall
Professional File
Fall 2023 Volume
Supporting quality data and
decisions for higher education.
Student support, including academic monitoring, course scheduling, suggesting majors and career pathways, allocating financial aid, identifying students at risk, and supporting …

• Prioritize strengthening trust: Incorporate safety, usability, and efficacy in creating a trusting environment for the use of AI.

• Inform and involve educators: Show the respect and value we hold for educators by informing and involving them in every step of the process of designing, developing, testing, improving, adopting, and managing AI-enabled edtech.

• Develop education-specific guidelines and guardrails: The issues are not only data privacy and security, but also new issues such as bias, transparency, and accountability.

… on student success and graduation. The GSU system was quite successful, and GSU now hosts the National Institute of Student Success (NISS), a national effort aimed at helping institution officials …

… personal touch, Bird offers examples of success in student support that have occurred through carefully considered predictive modeling. Bird makes an excellent point that, as more-advanced analytics tools become available, the main challenge will not be whether the algorithms (i.e., from machines) are able to identify at-risk students better and more efficiently than humans. Instead, most of the challenges will surround the question of how humans will use the output that machines provide. This aligns with the U.S. Department of Education’s key observation that humans, not machines, should determine educational goals and measure the degree to which models fit and are useful.
Inger Bergom
Assistant Editor
Harvard University
ISSN 2155-7535
REFERENCES

Dilmengali, C. (2023). Top 35 generative AI tools by category (text, image, . . . ). https://research.aimultiple.com/generative-ai-tools/

Federal Student Aid. (n.d.). Meet Aidan. https://studentaid.gov/h/aidan

Gardner, L. (2018, April 8). How A.I. is infiltrating every corner of the campus. Chronicle of Higher Education. https://www.chronicle.com/article/how-a-i-is-infiltrating-every-corner-of-the-campus/

Gartner. (2023). Information technology Gartner glossary. https://www.gartner.com/en/information-technology/glossary/big-data

Gates, B. (2023, March 21). The age of AI has begun. Bill Gates blog. https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

Goel, A., & Polepeddi, L. (2019). Jill Watson: A virtual teaching assistant for online education. In C. Dede, J. Richards, & B. Saxeberg (Eds.), Learning engineering for online education. Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781351186193-7/jill-watson-ashok-goel-lalith-polepeddi

Holmes, W., & Tuomi, I. (2022). The state of the art and practice of AI in education. European Journal of Education, 57(4), 542–570. https://onlinelibrary.wiley.com/doi/10.1111/ejed.12533

McKendrick, J. (2021). AI adoption skyrocketed over the last 18 months. Harvard Business Review. https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months

Reagan, M. (2023, February 27). The darker side of ChatGPT. IIA blog. https://iianalytics.com/community/blog/darker-side-of-chatgpt

Su, H. (2018, August 30). Mandarin language learners get a boost from AI. RPI blog. https://everydaymatters.rpi.edu/mandarin-language-learners-get-a-boost-from-ai/

U.S. Department of Education. (2023). Artificial intelligence and future of teaching and learning: Insights and recommendations. Office of Educational Technology. https://tech.ed.gov/ai-future-of-teaching-and-learning/

Zeide, E. (2019, August 26). Artificial intelligence in higher education: Applications, promise and perils, and ethical questions. EDUCAUSE Review. https://er.educause.edu/articles/2019/8/artificial-intelligence-in-higher-education-applications-promise-and-perils-and-ethical-questions
Table of Contents
61 About AIR
Predictive Analytics in Higher
Education: The Promises and
Challenges of Using Machine Learning
to Improve Student Success
Kelli Bird
Acknowledgments
I am grateful to Ben Castleman, whose continued collaboration and mentorship have been instrumental in my
exploration of this topic. I am also grateful for the long-standing partnerships with the Virginia Community
College System and Bloomberg Philanthropies that have enabled us to do this work.
Abstract
Colleges are increasingly turning to predictive analytics to identify “at-risk” students in order to target additional
supports. While recent research demonstrates that the types of prediction models in use are reasonably
accurate at identifying students who will eventually succeed or not, there are several other considerations for
the successful and sustained implementation of these strategies. In this article, I discuss the potential challenges
to using risk modeling in higher education and suggest next steps for research and practice.
1. Colleges also use predictive analytics for enrollment management purposes, such as identifying high-target students for recruitment or offering generous financial aid packages. These enrollment management practices are designed to bolster the quality of a college’s incoming class. In this article, I choose to focus on predictive analytic applications designed to support at-risk students.
2. Still, there are several anecdotes to suggest that current applications of risk modeling and resource targeting are leading to improved student outcomes. Most notable is Georgia State University (GSU), which reports an 8-percentage-point increase in its graduation rate since implementing EAB’s predictive analytics products. This implementation accompanied several other changes at the university, however (Swaak, 2022).
3. Algorithmic bias has been found in other predictive analytic applications outside higher education, including criminal justice and health care (Angwin et al.,
2016; Obermeyer et al., 2019).
4. Calibration bias occurs when, conditional on predicted risk score, subgroups have different actual success rates. In our application, this means that, at a
particular point in the distribution of predicted risk scores, Black students have a higher success rate than White students.
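The definition of calibration bias in this footnote can be made concrete with a small, self-contained sketch. The subgroup labels, bin cutoffs, and records below are illustrative inventions, not the article’s data; the point is only to show what “different actual success rates conditional on predicted risk score” looks like in computation.

```python
from collections import defaultdict

def calibration_by_group(records, n_bins=4):
    """Mean actual success rate per (risk-score bin, subgroup).

    records: iterable of (risk_score in [0, 1], subgroup label, succeeded bool).
    Calibration bias appears when, within the same risk-score bin,
    subgroups show different observed success rates.
    """
    sums = defaultdict(lambda: [0, 0])  # (bin, group) -> [successes, count]
    for score, group, succeeded in records:
        b = min(int(score * n_bins), n_bins - 1)  # clamp score == 1.0 into top bin
        cell = sums[(b, group)]
        cell[0] += int(succeeded)
        cell[1] += 1
    return {key: s / n for key, (s, n) in sums.items()}

# Illustrative data: identical predicted risk, different actual outcomes.
data = [(0.30, "A", True), (0.30, "A", True), (0.30, "A", False),
        (0.30, "B", True), (0.30, "B", False), (0.30, "B", False)]
rates = calibration_by_group(data)
# Same bin, different observed success rates -> calibration bias.
print(rates[(1, "A")], rates[(1, "B")])
```

Here both groups receive the same predicted risk score, yet group A succeeds two-thirds of the time and group B one-third of the time, which is exactly the pattern the footnote describes.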
5. Our related work also suggests that small changes in modeling decisions (e.g., choosing logistic regression versus XGBoost as the prediction model) can
significantly change the sorting of students within the risk score distribution, and therefore have the potential to significantly alter which students would receive
additional supports (Bird, Castleman, Mabel, et al., 2021).
Abstract
Accelerating advancements in learning analytics and artificial intelligence (AI) offer unprecedented
opportunities for improving educational experiences. Without including students’ perspectives, however, there is
a potential for these advancements to inadvertently marginalize or harm the very individuals these technologies
aim to support. This article underscores the risks associated with sidelining student voices in decision-making
processes related to their data usage. By grounding data use within a social justice framework, we advocate
for a more equitable and holistic approach. Drawing on previous research as well as insights we have gathered
from a student panel, we outline effective methods to integrate student voices. We conclude by emphasizing the
long-term implications for the institutional research field, arguing for a shift toward more inclusive and student-
centric practices in the realm of learning analytics and AI-embedded supports.
Institutional research (IR) professionals have become increasingly central to college and university efforts to improve student success through the use of …

… bias inherent in algorithms as well as to equity gaps in access to and use of this powerful technology (Ahn, 2022; Alonso et al., 2020).
Principle: Collective Benefit
Components:
• For inclusive development and innovation
• For improved governance and citizen engagement
• For equitable outcomes
The Responsibility principle’s first subsection, “For positive relationships,” identifies that “Indigenous data use is unviable unless linked to relationships built on respect, reciprocity, trust, and mutual understanding, as defined by the Indigenous Peoples to whom those data relate” (Carroll et al., 2022, p. 4). The following subsections, “For Expanding Capability and Capacity” and “For Indigenous Languages and Worldviews,” require efforts to increase data literacy and to ground data in the world views and the lived experiences of Indigenous peoples, respectively. Each of these subsections implies some form of collaboration between institution officials using Indigenous students’ data and the students themselves: to create mutual understanding, to increase data literacy between both parties, and to enable Indigenous students to (consensually) share their experiences.

When considering the use of early alert systems, it is important to note that the CARE Principles for Indigenous Data Governance specify that ethical data not portray Indigenous peoples in terms of …
… most of the same students attended each panel, sharing. For the panel exploring student views …

… communicate to you about learning data use in … After completing the analysis, the students were …
Table 2. Thematic Responses from the Breakout Room Activity During the Panel

Themes | Examples/Explanations

Possible forms of biases in current practices
• Instructor shows favoritism for students struggling less.
• Not all struggling students receive the appropriate outreach.

Theme 1: Transparency/Open Communication
• Student consent should be collected before the data are collected and shared with instructors, advisors, or any other parties.
• The types of data collected or shared should be communicated clearly to both students and instructors.
• Researchers should explain to the students how the data are being used or will be used.
Kantayya, S. (Director). (2020). Coded bias [Documentary]. 7th Empire Media. https://www.youtube.com/watch?v=jZl55PsfZJQ (trailer).

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., . . . Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103(April). https://www.sciencedirect.com/science/article/pii/S1041608023000195?via%3Dihub

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://psycnet.apa.org/record/2018-08016-000

Nopper, T. K. (2019). Digital character in “The scored society.” In R. Benjamin (Ed.), Captivating technology: Race, carceral technoscience, and liberatory imagination in everyday life (pp. 170–187). Duke University Press. https://www.degruyter.com/document/doi/10.1515/9781478004493-010/html?lang=en

O’Neil, C. (2016). Weapons of math destruction: How Big Data increases inequality and threatens democracy. Crown. https://archive.org/details/weapons-of-math-destruction_202209

Prinsloo, P., & Slade, S. (2016). Student vulnerability, agency, and learning analytics: An exploration. Journal of Learning Analytics, 3(1), 159–182. https://learning-analytics.info/index.php/JLA/article/view/4447

Rainie, S. C., Rodriguez-Lonebear, D., & Martinez, A. (2017). Policy brief (ver. 2): Data governance for native nation rebuilding. Native Nations Institute. https://nnigovernance.arizona.edu/policy-brief-data-governance-native-nation-rebuilding

Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: The Fitbit version of the learning world. Frontiers in Psychology, 7. https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01959/full

Rubel, A., & Jones, K. M. (2016). Student privacy in learning analytics: An information ethics perspective. The Information Society, 32(2), 143–159. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2533704

Ruberg, B., & Ruelos, S. (2020). Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720933286

Wachter-Boettcher, S. (2017). Technically wrong: Sexist apps, biased algorithms, and other threats of toxic tech. W. W. Norton & Company. https://dl.acm.org/doi/10.5555/3307044

West, D., Luzeckyj, A., Toohey, D., Vanderlelie, J., & Searle, B. (2020). Do academics and university administrators really know better? The ethics of positioning student perspectives in learning analytics. Australasian Journal of Educational Technology, 36(2), 60–70. https://doi.org/10.14742/ajet.4653

Zialcita, P. (2019, October 30). Facebook pays $643,000 fine for role in Cambridge Analytica scandal. NPR News. https://www.npr.org/2019/10/30/774749376/facebook-pays-643-000-fine-for-role-in-cambridge-analytica-scandal

Zipper, T. (Host). (2022, September). Are we the problem? with Dr. Tim Renick (Season 2, Episode 3) [Audio podcast episode]. In An Educated Guest. Wiley University Services. https://universityservices.wiley.com/podcast-tim-renick-are-we-the-problem/

Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29. https://journals.sagepub.com/doi/full/10.1177/1095796018819461
Abstract
Although student advising is known to improve student success, its application is often inadequate in institutions that are resource-constrained. Given recent advances in large language models (LLMs) such as Chat Generative Pre-trained Transformer (ChatGPT), automated approaches such as the AutoScholar Advisor system afford viable alternatives to conventional modes of advising at scale. This article focuses on the
AutoScholar Advisor system, a system that continuously analyzes data using modern methods from the fields
of artificial intelligence (AI), data science, and statistics. The system connects to institutional records, evaluates
a student’s progression, and generates advice accordingly. The term “advising at scale” refers not only to serving large numbers of students but also to the various role players: the executives (whole-institution level), academic
program managers (faculty and discipline levels), student advisors (faculty level), lecturers (class level), and,
of course, the students (student level). The form of advising may also evolve to include gamification elements
such as points, badges, and leaderboards to promote student activity levels. Case studies for the integration
with academic study content in the form of learning pathways are presented. We therefore conclude with the
proposition that the optimal approach to advising is a hybrid between human intervention and automation,
where the automation augments human judgment.
MULTIDIMENSIONAL
ADVISING: ROLE PLAYERS
IN HIGHER EDUCATION
To achieve significant improvements in progression and hence in graduation rates, it is necessary for the various role players to receive accurate advice. At the student scale, advice on coursework registration as well as on day-to-day study habits is a direct influence. Advice to lecturers with respect to students at risk and course management practices can significantly improve the student (and lecturer) experience. At the counselor scale, …
At the whole-institution scale, executives maintain a bird’s-eye view of the entry and graduation statistics. In particular, given an entering cohort in a particular year, it is necessary to monitor what fraction of students complete in minimum time and what fraction exit without graduating (Figure 7).
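The cohort monitoring described here reduces to two simple fractions over an entering cohort. The sketch below is illustrative only: the field names (`graduated`, `terms_enrolled`) and the four synthetic records are assumptions for demonstration, not the article’s actual data model.

```python
def cohort_fractions(cohort, min_time_terms):
    """Fractions of an entering cohort that (a) complete in minimum time
    and (b) exit without graduating.

    cohort: list of dicts with keys 'graduated' (bool) and
    'terms_enrolled' (int); both key names are illustrative.
    """
    n = len(cohort)
    on_time = sum(1 for s in cohort
                  if s["graduated"] and s["terms_enrolled"] <= min_time_terms)
    exited = sum(1 for s in cohort if not s["graduated"])
    return on_time / n, exited / n

cohort = [
    {"graduated": True,  "terms_enrolled": 8},   # completed in minimum time
    {"graduated": True,  "terms_enrolled": 10},  # completed, but took longer
    {"graduated": False, "terms_enrolled": 3},   # exited without graduating
    {"graduated": False, "terms_enrolled": 5},   # exited without graduating
]
on_time_frac, exit_frac = cohort_fractions(cohort, min_time_terms=8)
print(on_time_frac, exit_frac)  # 0.25 0.5
```

Tracking these two fractions per entering cohort and per year gives executives exactly the bird’s-eye view described above.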
To take action by alerting relevant staff or allocating resources, the next step would be to determine, among all academic programs at the institution, which programs exhibit the lowest pass rates and lowest performance indices. Figure 8 illustrates that such programs can easily be identified.
It is therefore possible for all role players in higher education to receive sufficient insight and hence to apply suitable interventions or allocate resources to ameliorate the limitations identified. Such data-oriented advising may be directly applied in most cases. At the student level, however, it could be more necessary to moderate the advice rendered by interpreting the results and suggesting interventions based on the student’s temperament and degree of receptiveness to critical feedback.
Given the inherent diversity in student attributes, … suitability of any applied interventions, but also the influence on the performance metrics. It then becomes possible to customize the applied interventions rather than assuming that a blanket strategy applies …

This points to the value of hybrid advising, which combines in-person and online elements, and can help to mitigate some of these challenges …
REFERENCES

Anderson, W., Motto, J. S., & Bourdeaux, R. (2014). Getting what they want: Aligning student expectations of advising with perceived advisor behaviors. Mid-Western Educational Researcher, 26(1), 27–51. https://www.mwera.org/MWER/volumes/v26/issue1/v26n1-Anderson-Motto-Bourdeaux-feature-articles.pdf

Assiri, A., Al-Ghamdi, A. A., & Brdesee, H. (2020). From traditional to intelligent academic advising: A systematic literature review of e-academic advising. International Journal of Advanced Computer Science and Applications, 11(4), 507–517. https://thesai.org/Publications/ViewPaper?Volume=11&Issue=4&Code=IJACSA&SerialNo=67

Atalla, S., Daradkeh, M., Gawanmeh, A., Khalil, H., Mansoor, W., Miniaoui, S., & Himeur, Y. (2023). An intelligent recommendation system for automating academic advising based on curriculum analysis and performance modeling. Mathematics, 11(5), 1098. https://www.mdpi.com/2227-7390/11/5/1098

Bennett, A. (2008). Process tracing: A Bayesian perspective. In J. M. Box-Steffensmeier, H. E. Brady, & D. Collier (Eds.), The Oxford handbook of political methodology. Oxford University Press.

Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://link.springer.com/article/10.1007/s10796-022-10291-4

EAB. (2023, July 5). Develop a student-centered academic advising model. https://eab.com/research/academic-affairs/roadmaps/develop-a-student-centered-academic-advising-model/

Fong, C. J., Gonzales, C., Hill-Troglin Cox, C., & Shinn, H. B. (2021). Academic help-seeking and achievement of postsecondary students: A meta-analytic investigation. Journal of Educational Psychology, 115(1), 1–21. https://psycnet.apa.org/record/2022-17344-001

Haley, L. (2016, February 16). The role of emotional intelligence in quality academic advising. Global Community for Academic Advising. https://nacada.ksu.edu/Resources/Academic-Advising-Today/View-Articles/The-Role-of-Emotional-Intelligence-in-Quality-Academic-Advising.aspx

Keup, J. R., & Young, D. G. (2021). Being HIP: Advising as an emerging high-impact practice. New Directions for Higher Education, 2021(195–196), 91–99. https://onlinelibrary.wiley.com/doi/full/10.1002/he.20411
Mike Urmeneta
Abstract
This article explores the potential impact of artificial intelligence (AI) and machine learning (ML) on higher
education. It provides an overview of current generative AI capabilities and argues for ethical frameworks to address issues such as bias. By examining past academic technology adoption, the article advocates for a multidisciplinary governance approach involving institutional stakeholders. It highlights the strategic role institutional
research (IR) and institutional effectiveness (IE) professionals can play in navigating AI complexities. This article
provides specific suggestions for IR/IE professionals to embrace the role of AI ethicist: continuously developing
AI literacy, ensuring ethical deployment, upholding privacy and confidentiality, mitigating bias, enforcing
accountability, championing explainable AI, incorporating student perspectives, and developing institutional
AI policies. The article concludes by asserting that IR/IE’s research expertise, ethical commitment, and belief in
human judgment equip the field to adapt to and lead in the AI era. By taking an active role, IR/IE can shape the
technology’s impact to benefit higher education.
THE NEED FOR ETHICS AND ACCOUNTABILITY IN MITIGATING ARTIFICIAL INTELLIGENCE BIASES

The future is not a vague, distant concept. In discussions about technology and society, a quote by science fiction author William Gibson (2003) is frequently cited: “The future is already here—it’s just not evenly distributed.” His phrase implies a disparity where advanced technologies are available to some groups but not to others. It highlights the need to democratize technology and make its benefits more …

… party audits of AI algorithms (Koshiyama et al., 2021; Raji et al., 2023).

Fortunately, progress has been made since the documentary Coded Bias was released (Kantayya, 2020). In August 2022, AI resolutions were introduced in at least 17 states (National Conference of State Legislatures, 2022). In October of the same year, the White House (2022) published the “Blueprint for an AI Bill of Rights” to address potential harms. Meanwhile, the European Parliament has taken the lead in legislation safeguarding individuals from possible AI-related hazards. In June 2023, the …
Many LLMs, such as ChatGPT, Google Bard, Claude, and Microsoft Bing Chat, are free and accessible. Despite some models being proprietary, the information about the technology and its … is trained on data sets that are known. LLMs, on the other hand, use unsupervised learning techniques on vast amounts of data in order to train models to predict the next likely word in a phrase or sentence. Given the sheer enormity and complexity of these models, LLMs are effectively black boxes designed to generate human-like responses. Knowing this, IR/IE practitioners should focus on applying LLMs to areas where their strengths can be used most effectively.

From a practical standpoint, there is no shortage of documentation, videos, forums, and communities to obtain tips, techniques, and examples. The act of designing, testing, and refining AI instructions is called “prompt engineering.” The process is similar to developing effective research questions. It requires an understanding of context and a willingness to continue refining. Arming oneself with technical and practical information will go a long way toward reducing anxiety and increasing competence. Once competence is attained, education of the community and leveraging of AI can occur.

… community members, and can equip them with knowledge of these models’ capabilities and limitations. This understanding can empower stakeholders to make informed decisions about the … using publicly available tools, cannot be overstated. Upholding ethical principles is essential at all stages of AI adoption, from selecting the right tool, to understanding data needs, to deployment of AI in daily operations. Collaboration across institutional teams is crucial to maintaining these ethical standards. IR/IE professionals can foster interdepartmental cooperation, thus ensuring that AI tools are used responsibly and ethically, in line with the best interests of campus stakeholders. Soliciting campus feedback can broaden and diversify perspectives on AI tool use. Facilitating open dialogues on AI ethics can stimulate ethical mindfulness. Finally, establishing training sessions on AI ethics can strengthen awareness and responsible usage.

Privacy and Confidentiality

When using generative AI tools, IR/IE professionals can establish privacy and confidentiality by first understanding existing tools and their privacy policies. IR/IE professionals can then adapt a range …
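The next-word objective mentioned in this passage can be made concrete with a toy bigram counter. This is a drastic simplification offered only for intuition: an actual LLM learns a neural model over vast corpora, not simple word counts, and the three-sentence corpus below is an invented example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which word follows it and how often."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "students succeed when advisors intervene early",
    "students succeed when supports are targeted",
    "advisors intervene when risk scores rise",
]
model = train_bigrams(corpus)
print(predict_next(model, "students"))  # "succeed" follows "students" in both sentences
```

The “black box” point in the text follows from scale: with counts over three sentences the prediction is fully inspectable, whereas a model with billions of learned parameters is not.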
Special Issues of the AIR Professional File are guest-edited and feature themed journal-length articles grounded in relevant literature that synthesize current issues, present new processes or models, or share practical applications related to institutional research.

For more information about AIR Publications, including the AIR Professional File and instructions for …

… data, analytics, information, and evidence to make decisions and take actions that benefit students and institutions and improve higher education. For over 50 years, AIR has been committed to building and sustaining strong data-informed decision cultures within higher education institutions.

Association for Institutional Research