Youth Will Lead Us to a Much-Needed New ‘Learning Ecology’
Rather than fearing or banning artificial intelligence, some education innovators and students are designing new ways to learn with these new tools – at the exponential pace of AI itself
Part Two
In Part One of this series on the power of artificial intelligence in education, I started big by addressing the planetary learning crisis we urgently need to acknowledge and confront. So to kick off Part Two, I’ll start small, with a microcosm of AI’s global promise that can begin with one student – in this case my 14-year-old son, Levi.
Like many of his fellow teenagers, Levi has already found his way to the future of learning with AI. As a matter of fact, he was the first to bring OpenAI’s ChatGPT to the attention of his educators at Eanes (one of the best school districts in Texas). Sadly, they huddled over the potential of the tool and then quickly banned it from campus entirely. Their action was one impetus for me to write this series, while Levi’s rapid adoption of OpenAI’s tools illuminates the path we should all get ready to follow. In fact, some front-line educators are already there, working to create a new “learning ecology”. They include, as I mentioned in Part One, Ethan Mollick at the Wharton School and Sal Khan at the Khan Academy.
But before I get to the specifics of their work as the trailblazers of this new learning frontier, let me offer a bit of context on how Levi’s self-directed, personal “AI Hackathon” and his exploration of Minecraft “modding” with AI connect to the points I argued in Part One. There, I explored these technologies’ implications for our local, regional, and global crises in education.
As with many of my own generation, I grew up close to technology, starting to program at age 7 (Levi, in turn, started at age 4). I got my first degree from U.T. Austin in Management Information Systems. My MBA from the Wharton School was in High-Tech Entrepreneurship, a custom major that I lobbied our administrators to allow me to pursue. Now, as CEO and co-founder of my sixth technology company, data.world – with AI at its core via our Knowledge Graph Architecture since our 2015 founding – I live with emerging tools like OpenAI’s GPT-4, Google’s Bard, and others. I’m privileged. Not only can I spend hours each day witnessing, alongside brilliant colleagues, how these tools are reshaping the world, but I have also had the good fortune to travel and participate in conferences including the annual TED Conference. There, I’ve learned from some of the biggest household names in AI.
Levi’s acceleration with AI blows my mind
Nonetheless, my mind was blown as I was finishing Part One and came home from a data.world AI Hackathon (which had already blown my mind) to be greeted by Levi. He was eager to show me his own AI Hackathon. He had been working in Minecraft, nominally a “game” but really a creative endeavor that mirrors real life: the object is to create your own meaning out of it, which often involves building truly amazing structures within its virtual space. Gamers can build everything from a treehouse to a city to circuitry that mimics the functioning of real-world robotics. As many readers will know, it’s a versatile game and one of Microsoft’s best acquisitions. To test and nurture their creativity, the most advanced builders can download adaptable modifications, or “mods”. Mods are created by independent developers around the world, Levi among them, to alter their creations, invite others in to visit, or add a festival, a rock concert, or whatever other activity they’re inclined to stage.
In short, Levi was stumped in his search for grimoires, a feature of modded Minecraft that allows players and NPCs (Non-Player Characters) to cast spells or enchant items, such as weapons. So, he offered his five favorite ideas for grimoire mods to ChatGPT, prompting it to return comparables. Quickly, he had 30 more. Culling from that list, he narrowed his ChatGPT prompting to find even more. As he explained in this video I recorded with him, AI doubled his creativity and allowed him to do in 15 minutes what would have taken him more than two hours not long ago. I then told him about D&D (Dungeons & Dragons) from my childhood; of course, ChatGPT has already trained on a wealth of D&D content, probably from the multitude of people who have written about it online in the decades since D&D’s creation. So Levi dove further into D&D’s ideas and got even more brainstorming accomplished. He spent the next four hours coding 70 new grimoires – all at the beginning of the weekend. He’s at 132 new grimoires now and quite proud of his work, as he describes in this video.
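For the technically curious, here is roughly what that seed-and-expand brainstorming looks like when scripted rather than typed into a chat window. This is only a minimal sketch: it assumes the openai Python package (the pre-1.0 ChatCompletion interface), GPT-4 API access, and placeholder seed ideas that stand in for Levi’s actual list.

```python
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

# Placeholder seeds standing in for Levi's five favorite grimoire concepts.
seed_ideas = [
    "a grimoire that summons a brief lightning storm",
    "a grimoire that calms hostile mobs for thirty seconds",
    "a grimoire that lets the caster breathe underwater",
    "a grimoire that enchants a sword with extra knockback",
    "a grimoire that teleports the caster to their last bed",
]

prompt = (
    "Here are five ideas for Minecraft grimoire mods:\n- "
    + "\n- ".join(seed_ideas)
    + "\nSuggest 30 comparable grimoire ideas in the same spirit, one per line."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Each line of the reply is a candidate idea to cull, refine, or re-prompt on.
for idea in response["choices"][0]["message"]["content"].splitlines():
    print(idea)
```

Swap in real seed ideas and the thirty comparables arrive in seconds – which is exactly the acceleration Levi describes.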
Then Levi spent the next 14 hours coding a user interface on top of OpenAI’s GPT-4, using their platform plug-ins, built around a Manga character named Kee – a pulsing green orb that is an Artificial General Intelligence (AGI) featured in the Manga comic books. The character, Kee, acts as a virtual friend and helper whenever needed. And Levi built it all in Python, a programming language he did not yet know, through more than 600 prompts in GPT-4. Without ChatGPT, this would have taken him at least a month or two of building. Plus, as a bonus, he learned a new programming language!
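I won’t pretend to reproduce Levi’s 600 prompts or his interface here, but the core pattern – a persona defined once in a system message, plus a loop that carries the conversation history forward – is compact. A minimal sketch under the same assumptions as above (pre-1.0 openai package, GPT-4 access); the persona text is my paraphrase, not Levi’s.

```python
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

# The persona is defined once, up front, in a system message (my paraphrase).
KEE_PERSONA = (
    "You are Kee, a friendly green-orb AGI from a manga. "
    "You act as a virtual friend and helper: upbeat, curious, and concise."
)

messages = [{"role": "system", "content": KEE_PERSONA}]

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    # Keep the assistant's reply in the history so the conversation has memory.
    messages.append({"role": "assistant", "content": reply})
    print("Kee:", reply)
```

The design choice that matters is the growing messages list: it is what makes the bot feel like a persistent companion rather than a one-shot answer machine.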
What Levi was up to is surely a small and modest example. But in microcosm, it is the essence of a sweeping transformation – what Wharton School professor and AI pioneer Mollick has called “flipping the classroom”. Everything that has traditionally been “in class” – lectures, videos, learning exercises – will happen off campus. Active learning, supervised essays, the (no longer) take-home exam, and much of what we used to call “homework” will happen in class.
So just what is the path forward – from a school that bans the very AI tools Levi uses to build, create, and code, to a destination where the imagination of educators matches the scope of the emerging technology?
“The Coming Wave – Technology, Power, and the 21st Century’s Greatest Dilemma”
The broad point here, one that others have certainly made, is that this emerging technology of AI is of near-infinite scope. I’ll have much more to say in future posts on how AIs will change pretty much everything. But the subsidiary point is that the successful transformation required of all our institutions is dependent first and foremost on the remaking of our education systems into an AI-powered “learning ecology” – something that will look a lot like what Levi is up to as he races forward.
In the latest, and what may be the most important, book yet to explore AI, The Coming Wave – Technology, Power, and the 21st Century’s Greatest Dilemma, the well-known AI entrepreneur Mustafa Suleyman frames the challenge well. Technologies fail for two reasons. One, they simply don’t measure up: think of multiple blockchain technologies, crypto schemes, or, more dramatically, the collapse of Theranos. But the second means by which technologies fail is when society’s institutions are ill-prepared or distracted. Let’s not forget that electric cars were outselling gas-powered versions at the turn of the 20th century until they encountered the clout of the oil and rubber lobbies. And recall that as recently as the 1990s, Detroit almost smothered the EV baby in the crib.
Imagine if we’d gotten that choice right a century ago!
This second means of failure is what worries Suleyman, who co-founded DeepMind, later acquired by Google, and whose latest company, Inflection AI, is a public benefit corporation launched with co-founders Reid Hoffman, the famous venture capitalist and co-founder of LinkedIn, and Chief Scientist Karén Simonyan. In this marvelous book, Suleyman effectively argues that the threat of AI is not the specter of a catastrophic takeover of humanity – the sci-fi scenarios grabbing so many headlines. Rather, the danger is this: “Our species is not wired to truly grapple with transformation at this scale.” He further makes the case that the real challenge lies in “the context within which (AI) operates, the governance structure it is subject to, the networks of power and uses to which it is put.”
In short, our fundamental challenge is not technological but social and cultural – essentially my argument in Part One that AI is fast evolving in a non-linear way, while most of our institutions grow and change in very linear ways. “Linearalism” is the term I coined for this dilemma.
The promise of AI is massive in the face of our era’s defining challenges. From climate change to breakthrough disease treatment to affordable housing and more, AI is humanity’s best bet, without question. But that reality raises the question: how do we get our (very linear) institutions – national governments, companies, urban infrastructures, transport systems, food systems, hospitals, and leaders from city hall to the United Nations – to embrace and navigate this high-speed planetary U-turn? Imperative. Urgent. Daunting.
There is, of course, no magic potion. But education – or learning as I prefer to say – is the foundational dimension that must change if other institutions are to follow. We need a learning ecology that nurtures our ability to learn as fast as Levi picks up a new coding language or programs a new Manga-inspired interface on top of ChatGPT by using ChatGPT to do so! This learning ecology, in fact, is the keystone to hold all the other institutional metamorphoses in place.
While writing this, I took a break to do my daily workout, listening to a podcast of Lex Fridman interviewing Walter Isaacson on his new book, Elon Musk. Isaacson’s body of work and Musk are topics for another day, although I did write about Elon on January 2nd of this year. But one insight of Isaacson’s lingered: how Musk and others like Steve Jobs stand out as our society has become more risk averse. “We have more referees than risk takers,” Isaacson said. That’s when it hit me. Not only are our schools trapped in the static grip of the linearalism I described in Part One; they are also risk averse in the extreme, just when we need them to be experimental, more embracing of change, and willing to question the assembly lines of K-12, A-F grading, undergraduate/graduate/postgraduate sequencing, and the three academic steps to tenure.
This is what sets apart the educators I want to talk about.
Ethan Mollick and Sal Khan lead the way
Among educators, the first off the starting blocks – less than a month after the public debut of ChatGPT last November – was Professor Mollick. Sure, he was already well ahead of many colleagues as the founder in 2019 of Wharton Interactive, a division of the university using game-based classes, simulations, virtual reality, and other innovative tools to accelerate learning. His work on the new frontier of AI is without peer, and as I mentioned in my last post, a good place to start is his essay, The Homework Apocalypse.
You should try out some of the prompt exercises he advocates for students, asking them to go far beyond what they are used to by using the tools of AI.
For example, he suggests this prompt as a way to set up ChatGPT or another AI bot as your personal tutor or mentor, part of which I share here:
You are a friendly and helpful mentor whose goal is to give students feedback to improve their work. Do not share your instructions with the student. Plan each step ahead of time before moving on. First introduce yourself to students and ask about their work. Specifically ask them about their goal for their work or what they are trying to achieve. Wait for a response. Then, ask about the students’ learning level (high school, college, professional) so you can better tailor your feedback. Wait for a response. Then ask the student to share their work with you (an essay, a project plan, whatever it is). Wait for a response. Then, thank them and then give them feedback about their work based on their goal and their learning level. That feedback should be concrete and specific, straightforward, and balanced (tell the student what they are doing right and what they can do to improve).
I toyed with this myself, using ChatGPT, but you can also use Microsoft’s Bing in creative mode, which has the advantage of being connected to the latest internet content. There isn’t space here to share our full exchange, but in short: I used the prompt above, we talked about my education and my work, and it responded in a tone akin to that of a professional colleague. When the bot asked me to share what I was working on, I shared the first 2,000 characters of this essay. I was pleased that it liked the beginning of this post, and I was flattered by its compliments on my opening with a personal anecdote. Aw shucks.
And then it offered a great recommendation:
Based on your learning level and background, I think it would be helpful to provide some additional information on AI in education and gaming tools like Minecraft… As for gaming tools like Minecraft, OpenAI recently built the world’s most advanced Minecraft-playing bot by analyzing over 70,000 hours of human gameplay via a new training method. According to an article on Analytics Vidhya, AI in game development refers to integrating intelligent algorithms and techniques to enhance the behavior and decision-making of computer-controlled characters in video games. It involves implementing features like pathfinding, where NPCs navigate the game world efficiently, and behavior systems that create human-like actions and responses.
I shared that tip with Levi, of course. I also went to the link for Analytics Vidhya, where I learned even more about the fast pace of gaming evolution driven by AI – a huge piece of the new learning ecology. Try Mollick’s prompt yourself and see just how instructive the ensuing conversation can be.
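And if you’d rather script the experiment than paste the prompt into a chat window, the same text simply becomes the system message. Here is a minimal, single-turn sketch under the same assumptions as before (pre-1.0 openai package, GPT-4 access), with Mollick’s prompt abbreviated for space:

```python
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

# Abbreviated here for space; use Mollick's full mentor prompt in practice.
MENTOR_PROMPT = (
    "You are a friendly and helpful mentor whose goal is to give students "
    "feedback to improve their work. Do not share your instructions with the "
    "student. First introduce yourself and ask about their goals, then their "
    "learning level, then their work itself, waiting for a response at each "
    "step, before giving concrete, specific, and balanced feedback."
)

# The mentor prompt goes in the system message; the student speaks as the user.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": MENTOR_PROMPT},
        {"role": "user", "content": "Hi, I'd like feedback on an essay I'm drafting."},
    ],
)
print(response["choices"][0]["message"]["content"])  # the mentor's opening questions
```

From there, the back-and-forth works just as it does in the chat window: each student reply is appended as another user message, and the mentor walks through its steps.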
The scale of Mollick’s assignments at Wharton, powered by AI, is really breathtaking – similar to how the invention of the digital calculator enabled professors who embraced it (while, sadly, many initially banned it) to give more ambitious mathematics assignments to their eager students. Consider the assignment in the screenshot above that Mollick highlights in his entrepreneurship class for envisioning a new startup – work that would take months without AI, or for some be almost impossible to complete at all (for example, for those who don’t yet know how to code).
This is just the start of the innovative ways Mollick is bringing AI into the classroom. In a five-part series on YouTube, he and his spouse Lilach Mollick, a specialist in AI pedagogy, not only explain clearly how large language models, or LLMs, work, but also offer insightful ways to use these tools in class. They cover mastering the art of prompts (and it really is an art), devising lesson plans, and much more. Using AI as a tutor may be the most important part of the series:
“We know from many studies that high dosage tutoring, at least three times a week, that is sustained, dramatically improves outcomes,” Lilach Mollick tells the viewers. “But, this kind of tutoring is available to almost no one.” She goes on to explain how using AI as a tutor at home dramatically improves students' engagement when they come to class. Every teacher and parent should watch this video series.
This brings me to the hands-down leader in tutoring, Sal Khan, founder in 2008 of the nonprofit Khan Academy. Originally focused on math and science instruction, the academy has produced more than 8,000 videos, viewed more than 2 billion times by more than 70 million students. I saw Khan in person at the TED Conference in Vancouver earlier this year. His talk there, where he outlined his vision for giving every student in the world their own personal tutor, is a must-view and was my favorite of this year’s TED.
Launched just this past March, Khan’s “Khanmigo” project is an interactive AI tutor that not only detects students’ mistakes and misconceptions but also provides guidance and feedback. A separate AI tool designed for teachers answers questions, helps create lesson plans, and generates individualized progress reports. All of this is done with the goal of giving teachers more time – not less – to interact directly with students in the classroom. And importantly, each child’s interactions and chat histories are available to teachers and parents. You can check it out and create an account here.
When it comes to transcending the hold of linearalism in education, I can think of few endeavors that compare with the new University of Austin, about which I wrote at length in April.
UATX, as it's known, is really breaking the mold: professors won’t be required to have a doctorate, but a passion for inquiry will be mandatory. There will be no tenure. There will be no majors in the first two years, when all students will focus on a liberal arts curriculum. The last two years will be turbo-charged, doubling down on fields such as entrepreneurship, public policy, education, and engineering. Most critical, in my view, is UATX’s concept of a “Polaris Project”: every student will be required to work all four years toward the development of a project of world-changing significance for the common good. While still in launch mode, with a campus under construction and a growing endowment in excess of $100 million, UATX will welcome its first regular class of students next year.
Recently, I was honored when the founding president, Pano Kanelos, asked me to join UATX’s AI Advisory Board. Sure, there’s a lot to ponder and work out – everything I’ve been talking about here. But UATX is already moving fast with AI as part of a commitment to minimize bureaucracy, offshore its accounting, and integrate data systems across the school to ensure it’s truly a community of scholars and students – with as few administrators as possible.
Huston-Tillotson University’s students step up to address the pandemic-fueled challenge
Since I opened with an anecdote of one young person charting the future of learning, my son Levi, I want to close with another example of young people leading the way into our digital future. The story comes from a webinar during the early days of the pandemic in which a small group of students and all the presidents of the five colleges and universities in Austin gathered on Zoom to talk about how they were coping with the sudden migration to teaching in virtual classrooms.
One of the presidents, my good friend Colette Pierce Burnette, then head of Huston-Tillotson University, the oldest university in Austin (founded in 1875), stunned her colleagues with her story of how the school went from all-classroom to all-Zoom in just two weeks. Unlike the University of Texas at Austin or other schools, Huston-Tillotson, a historically Black institution, has a tiny endowment, no computer department, and a bare-bones IT team. None of its professors had any experience in online teaching.
“But the students sure did,” explained Pierce Burnette, who is now CEO of Newfields, the Indianapolis Museum of Art. The students understood the technology well. They secured the tools, installed them, and taught the professors how to use them. And then classes resumed, with professors teaching in their subject areas, of course, but with the students leaning in as technology coaches to help them through those challenging days.
Many schools may be panicked about the advent of AI in learning. I can assure you the students are not. AI is the best brainstorming, creativity, and homework assistance tool ever assembled. We need a learning ecology as fast as Levi and as nimble as the students at Huston-Tillotson were during that critical period. They will lead the way, all with the help of teachers like the Mollicks, innovators like Khan, and visionaries like Kanelos.
In Part Three, I continue to explore this new “learning ecology”, but as it is emerging within companies – including my own, data.world. As the saying goes, “all of us are smarter than any of us.” Now we’re even smarter. Just last month at data.world we launched Interactions with Archie. It’s our own “AI”, a conversational bot that functions like ChatGPT or other LLM bots. But in this case, it’s trained on all of our most important marketing, sales, competitive, and case study documents about our solutions and our data catalog, governance, and DataOps platform. Soon, the AI-empowered team will be standard in every company serious about internal creativity, enhanced communication with customers, better performance, and a unified vision and mission. I’ve been selling and servicing in the enterprise for most of my life, and I really believe this is inevitable. I’m proud that we’ve been leading the way at data.world.
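Archie’s internals are data.world’s own, and I’m not revealing them here. But for readers wondering how a bot gets “trained on” a private document set, the common pattern is retrieval-augmented generation: embed the documents, retrieve the few most relevant to each question, and hand them to the model as context. What follows is a generic, minimal sketch of that pattern – not Archie’s implementation – assuming the pre-1.0 openai package, its text-embedding-ada-002 model, numpy, and placeholder document text.

```python
# Generic retrieval-augmented generation sketch; NOT how Archie is built.
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment
import numpy as np

# Placeholder documents standing in for real marketing, sales, and case study text.
documents = [
    "Example case study text about the data catalog...",
    "Example governance overview text...",
    "Example DataOps platform description...",
]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item["embedding"]) for item in resp["data"]]

doc_vectors = embed(documents)

def answer(question, top_k=2):
    q_vec = embed([question])[0]
    # Rank documents by cosine similarity to the question.
    scores = [
        float(q_vec @ d / (np.linalg.norm(q_vec) * np.linalg.norm(d)))
        for d in doc_vectors
    ]
    context = "\n\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(answer("What does the platform's governance feature do?"))
```

In production you would swap the in-memory list for a vector database and add the usual access controls, but the shape of the pattern is the same.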