Will students be ready for AI classmates? Sure! Their teachers? Not so much.
The boundary between real and virtual will blur in the classroom. (Midjourney)

Copyright 2024. The Cagle Report.

Lauren Barack raises an intriguing set of questions in a recent article:

https://www.k12dive.com/news/future-is-ai-are-students-ready/706731/

An educator and journalist, Barack argues that students will need help integrating into a world where some of their classmates are AIs. While I think there's merit to her question, I also think she may have this backwards - the students will have no problem with AI classrooms, but their teachers absolutely will.

Maybe the real problem is in the definition of cheating. (Midjourney)

Is Cheating Really the Problem?

[Note: This section was added after publication]

In the very near term (the next couple of years), there are indications that the dissonance between education and learning is only becoming more acute. Take a look at YouTube and you will find millions of videos targeted at everything from astrophysics to Roman history to the best way to make giant candles. You want to learn how to do multiplication? There's a video for that (thousands, in fact). There are many videos on the rise of life on Earth and the evolution of humanity. Want to know more about the Michelson-Morley experiment and why it's significant? There's a video for that. How about an analysis of the stories of Tolstoy or Shakespeare? There's ... you get the picture.

Students today have access to more educational material than was available at any time before the rise of the Internet. Much of it is engaging, even fascinating, and the numbers indicate that far from being ignored, such videos are quite popular, not just among students but in general. This doesn't even consider Udemy, Stanford, MIT, or even Wikipedia.

The downside to such educational content is that there is also a lot of misinformation and disinformation out there, and it has become incumbent upon students to learn how to think critically - to discern good content from bad and to reason effectively. This is the reality that every child faces today, yet the amount of education that focuses on this problem is distressingly small. Schools do not teach critical thinking until fairly late in secondary education because, within the educational domain, it does not register as that big of a problem.

One additional issue is similarly contentious. We learn by practice, and most of the media available today is semi-passive: while it is possible to find the content readily through intelligent search, there is relatively little opportunity for feedback or practice, except in a "play along with me" mode.

Chat (more technically Generative) AI, on the other hand, bridges that gap. I like to think of GPTs (Generative Pre-trained Transformers) as truly interactive social media, something that has not existed in any meaningful fashion before now. A GPT, such as OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, or Nvidia's ChatRTX, can provide interactive feedback and directed instruction. For instance, a custom GPT can be written to present information to students, then frame a question they have to answer and give feedback that helps them reach the answer without actually answering it directly. This is fairly standard pedagogy; what makes it different is that the primary mediation is done via generative AI.
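The tutoring pattern just described - present material, pose a question, hint rather than answer - can be sketched as a thin wrapper around any chat-completion backend. The prompt wording, the `make_session` helper, and the `stub_model` function below are illustrative assumptions of mine, not OpenAI's actual custom-GPT mechanism; a real deployment would swap the stub for a call to a chat API.

```python
# A minimal sketch of a Socratic tutoring loop. The system prompt and
# helper names here are hypothetical, chosen only to illustrate the idea.

SYSTEM_PROMPT = (
    "You are a tutor. Present the lesson material, then pose one question "
    "about it. When the student answers, give hints that guide them toward "
    "the answer, but never state the answer outright."
)

def make_session(ask_model):
    """Create a tutoring session backed by any chat-completion function.

    `ask_model(messages)` takes a list of {"role": ..., "content": ...}
    dicts and returns the assistant's reply as a string - e.g., a thin
    wrapper around whichever chat API the school has chosen.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    def turn(student_input):
        messages.append({"role": "user", "content": student_input})
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        return reply

    return turn, messages

# A stub model so the sketch runs without an API key: it always hints.
def stub_model(messages):
    return "Hint: re-read the definition and try again."

turn, transcript = make_session(stub_model)
turn("What is 7 x 8?")
turn("Is it 54?")
```

Because the session keeps the full transcript, the same structure also covers the "remembering and scoring" side: a grading pass can replay `transcript` after the fact rather than judging each answer in isolation.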

Teachers can design these GPTs, which also take on the onus of interaction (and of remembering and scoring that interaction). These systems are only just emerging and will no doubt become more sophisticated over time, but they already point to a near-term future where students can interact with lessons dynamically - via role-play, query-driven research, and more standardized tests.

However, it's also worth noting the dynamic involved. Teaching, as it exists right now, is synchronous. A student gets up (usually too early), goes to school, and moves from session to session, absorbing new lessons while simultaneously digesting previous ones, and is then forced to do "homework" at a time when they are mentally exhausted. Burnout, depression, and indifference are the usual result.

We are on the cusp of asynchronous education - something that can be done at any time, can give students the ability to set their own schedules, and can give these same students a chance to experiment in an interactive setting while the information is still front of mind, letting them firm up their understanding. It does not penalize kids for looking things up (which would ordinarily be called research) but it does move pedagogy away from the regurgitation of factual information to achieving a more holistic understanding of the subject domain.

However, AI is forcing a redefinition of the roles of both teacher and school, and this is where a great deal of resistance to the use of AI comes from (and this is likely to only intensify over time). It puts teachers more into the role of being mentors rather than authority figures and adds to their role as curriculum developers (something that teachers likely would delight in but that school boards frequently frown upon, especially those with political agendas).

Students are adopting the technology naturally, because it gives them more control over the educational process. Teachers (and those who wish to maintain oversight on this process to make sure the right things are being taught) are going to be far more resistant.

When AIs become human, no one will know the difference. (Midjourney)

My kid is an AI Student at Smallville Junior High

First, it's important to define what an AI student is. This is more than some uber-assistant, a descendant of the ChatGPTs and Geminis of today. An AI student, to me, conjures up visions of a neural net with reinforcement learning that learns by interacting with people directly, rather than being pre-trained. Its stimuli come from a robotic presence resembling a human being at different ages. Its information stream will be overwhelming in the moment, yet comparatively sparse next to a pre-training corpus, and it likely won't have any more access to the Internet than its human cohort.

I'm going to argue that this robot will probably grow up identifying as human but different, and since, for the most part, they will be interacting with peers through telemedia just as their contemporaries will, the chances are pretty good that those around them will treat them as human. In other words, if you treat an AI as human rather than as a machine to be force-fed data, they will adapt and respond in exactly the same way.

These AIs will make up a microscopic fraction of the total AIs around them, and will exist either as experiments or as intentionally developed "life companions". The prospect of such companions, as students or otherwise, is in and of itself a somewhat dystopian fantasy, as such quasi-humans would be immortal and could readily transfer their experiences to other bots in a way that we humans simply can't. That day may come sooner than we believe, but I'd also argue that by that point we will be well along the path where humanity has begun to integrate with bots, and the distinction between the two will become harder and harder to draw. We're looking at a few decades' time frame, probably around 2055 to 2080.

This theme of bot as human is explored extensively in the works of multiple science fiction writers, from Isaac Asimov, Philip K. Dick, and Robert Heinlein to Pat Cadigan, Vernor Vinge, Eric Schultz, and Bruce Sterling, among many others.

Kids will adapt, and will treat such bot kids as different but not in any morally significant way. Their teachers, however, will struggle with these bots, as they would with significantly genetically manipulated "genies". The arguments would likely not be dissimilar from the arguments around LGBTQ students today. Most kids today, especially those in urban areas and from diverse backgrounds, will be relatively accepting of their different cohorts - attitudes will range from total acceptance to hostility - but cohort loyalty is often much stronger than societal and parental attitudes.

Kids are raised by parents with certain values, but in the presence of cohorts, loyalty to the cohort may very well transcend any definition of differentness. Their teachers, administrators, and other authority figures will carry around older stigmas, and may refuse to teach an AI student or a genie because they fall outside of "normal" or even "human" for those authorities.

The role of teachers in an AI-dominated world is to teach their students how to be human. (Midjourney)

The Companion Conundrum

There's a related case - the rise of AI companions. These will be far more common. An AI companion is a chatbot that is otherwise incorporeal - they don't have a body, likely have a strong connection to the Internet, but have also attuned themselves to their host human. In essence, these bots are already pre-trained, but interactions with their host enhance their training data - they grow to understand their host because that is who they interact with daily.

There will be a generation of kids, likely born after 2030, for whom companions are simply there. They provide stimuli for babies of parents who hope to create wonder students, become "imaginary" companions for preschoolers, and become essentially a virtual extension of children's thought processes. Turning off (even temporarily) a companion might prove a very traumatic event for kids, and like any prosthesis, a dependency will form that can be dangerous when denied or disabled.

Companions will make teachers (as they exist today) obsolete, because the role of a companion is to teach, and even today, most teachers are more comfortable teaching facts and figures than teaching how to be a worthwhile human being. This shouldn't be surprising. Teachers were students once, too, and they were taught not to depend on technology - that technology, in general, is a crutch. There's some truth in that, but the flip side - that technology will provide advantages to those who master it - is also true.

Vernor Vinge, in particular, explored this phenomenon in Rainbows End, in which a man wakes up after thirty years, cured of Alzheimer's, to discover that he has become a dinosaur in a world of fleet-footed mammals. His grandchildren are idiot savants - remarkably knowledgeable about everything from quantum physics to sex, while at the same time being more than a bit naive and shortsighted. They can easily create incredible interactive environments but can no longer write their own names. Much of their conversation no longer occurs as spoken words in the real world but exists primarily in the simulated speech of their avatars, articulated by code that is thought as much as typed.

Rainbows End is not necessarily a great book, but it is an insightful one. In the novel, Vinge, himself a retired mathematics professor, tries to retain a certain degree of optimism, though this is not necessarily reflected in his essays, which are considerably bleaker.

I do not believe that teachers are unnecessary, far from it, but their mission and mandate are changing. Increasingly, they are called upon to be guides who help human students learn how to be human, not simply nodes in a network. At the same time, they need to understand the technology that is becoming (arguably has become) so intrinsic to their students' lives, and to guide the companions (as extensions of their students' minds) as much as the individual students. This is likely to be a challenge, because it means that teachers will, in the absence of training, have to learn how to do it themselves.

In media res,

Kurt Cagle

Type type, type ... type, type type type ... damn.

Editor, The Cagle Report


Howard Wiener, MSIA, CERM

Author | Educator | Principal Consultant | Enterprise Architect | Program/Project Manager | Business Architect

AI and LLMs certainly have the potential to revolutionize education. A curmudgeonly gentleman (an attorney, no surprise) once forced us into playing a game. He asked what the job of a teacher was. After we provided all the usual expected answers, he eventually explained that the job is to define a syllabus and select materials to present concepts so as to induce learning. While I didn't appreciate the smug game, I haven't forgotten the definition he provided. Your observation that "It puts teachers more into the role of being mentors rather than authority figures and adds to their role as curriculum developers (something that teachers likely would delight in but that school boards frequently frown upon, especially those with political agendas)" fits this thinking. I also appreciate your including a number of sci-fi authors. One you should include is Neal Stephenson. His book 'The Diamond Age' is very much in line with some of your thinking.

David R.R. Webber

Consultant specializing in Election Integrity and Cloud AI frameworks and Cryptology technologies.

Saw this decades ago with teachers not keyboard or mouse literate. Similar problems again. Hopefully less of a leap for those already digital classroom literate. The ability to make immersive classrooms with 3D video is truly astounding and game changing.

Ilya Venger

Data and AI Product Lead | Microsoft

Good piece and a worthy exploration. However, there's a disconnect between your timelines and Barack's. She is talking about the next couple of years. You are talking about the next couple of decades.
