Productivity and Collaboration

The human future of AI and how collaboration makes us smarter

November 14, 2024
The Google Workspace Team


Dr. Vivienne Ming is a theoretical neuroscientist, entrepreneur, author, and mother of two. Her AI inventions have launched a dozen companies and nonprofits with a focus on human potential: education, workforce, and health. We sat down with Ming, a longtime evangelist of collective intelligence and the power of collaboration in tools like Google Docs, to get her take on the AI landscape and “distributed innovation.”


Q: You once called yourself an AI snob in an interview. What did you mean by that?

My snobbery comes from an insistence that I understand mathematically what my models are actually doing. For most people, most of the time, throwing some built-in AI functionality at ad copy or at image generation on a slide deck is genuinely fine. For the work I do at The Human Trust, however, I need to know both how and why my models work. When my son was diagnosed with diabetes, I built what was perhaps the first machine learning system for diabetes to track his blood sugar levels — I needed to know why it made the predictions it made. I developed facial recognition AI to reunite orphaned refugees with their extended families, which also demanded a deeper understanding of the models, as well as of the psychology of face perception. I’m a snob because I want to know the limits and possibilities of AI, whether I built it myself or not. People’s education, jobs, and even lives depend on our work, so it’s worth being a snob.

Q: What are you most excited about when it comes to AI (generative or otherwise)? What makes that future human?

There are two capabilities that make me most excited about modern AI. The first is its ability to challenge us to be better. I've been working in AI and education for two decades, and one of the most consistent findings is that if AI tutors simply give students the answer, the students never learn anything. In our work, we look at how sophisticated models combining LLMs, reinforcement learning, and deep embeddings can support learners through what we call “productive friction,” actively challenging the student rather than giving them the answer.

The second is the ability of LLMs to integrate the kind of incredibly complex, interdependent factors that make up messy human reality. For example, at The Human Trust we are developing an AI agent that supports suicide hotlines. The agent, which again combines many different underlying models, offers the suicide prevention worker tactics and helpful advice targeted to the specific caller’s experiences. There is no average person — no average student, no average caller in crisis. Modern AI is finally capable of respecting this profound truth.


A feature from Dr. Vivienne Ming’s weekly newsletter on socos.org

Q: Can AI make us better collaborators and therefore help us build smarter, more innovative teams?

During the pandemic, when all of our professional lives were passing through cameras and collaborative documents, I analyzed interactions within teams across some of the biggest companies in the world. I found that the smartest teams were small, diverse, and flat. Importantly, diversity and flat hierarchies required each other: diverse teams created more new ideas, but only when each team member contributed equally. With these insights, I built the Matchmaker AI, a system that analyzes communications within a network and then dynamically creates new connections and blocks existing ones to maximize collective intelligence. We found that the Matchmaker’s new social network can double innovation productivity across the network.

Q: Some years ago, Google completed Project Aristotle to understand the traits that make for a successful team. It came down to dependability, structure and clarity, meaning, impact, and psychological safety. How does your work on collective intelligence align with those findings?

We found a similar set of factors in building the Matchmaker. In particular, there was a crucial tension between trust and diversity that needed to be balanced within a group. Too little diversity and new ideas cease. (In fact, highly homogeneous teams were sometimes “dumber” than their individual members.) But too little trust (low psychological safety, low conscientiousness) and the individuals on those diverse teams stopped contributing.

Q: You’re a longtime user of Google Workspace and other Google products (music to our ears!). Tools like Google Docs reflect our belief in a “collaboration-first” approach to developing work. We share work with people early and often. How do you use commenting in Docs and asynchronous collaboration when you’re working on a project?

Collaborative documents are the shared intelligence of a collaborating team. They represent not just the individual contributions of each member, but the gain in intelligence from the hard work of that collaboration. Chats and message chains — ephemeral and serial — lack that collective intelligence superpower. Within a specific document, I personally use commenting for asynchronous discussions of the “whys” behind document changes, while keeping the document itself free of the clutter of notes that aren’t helpful. This eliminates the confusion and misunderstandings that can plague distributed collaboration. Everything in the document is content; everything in the comments is action.
