Wharton management professor Ethan Mollick understands that it’s tempting to treat generative AI as if it were a real person.
It’s been trained on vast swaths of human knowledge and can respond with precise answers to specific questions. AI has even been shown to respond to people in crisis with more empathy than some doctors and therapists, he said.
“The best way to work with it is to treat it like a person, so you’re in this interesting trap,” said Mollick, co-director of the Generative AI Lab at Wharton. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.”
This anthropomorphizing of AI often leads to doomsday thinking, with people envisioning a robot uprising. Mollick thinks the probability of computers becoming sentient is small, but there are “enough serious people worried about it” that he includes it among the four scenarios sketched out in his new book, Co-Intelligence: Living and Working with AI.
“The best way to work with [AI] is to treat it like a person, so you’re in this interesting trap.”— Ethan Mollick
An existential threat is unlikely, and so is the scenario in which AI stays where it is now, stuck in a somewhat useful but clunky stage. Mollick wants his readers to focus on what he considers the two most likely scenarios in the middle: AI will keep growing, either exponentially or linearly. And he wants everyone to get on board with exploring how AI can enhance their productivity and improve their lives.
“One of the main mistakes people make with AI is assuming that because it’s a technology product, it should only be used by technical people, and that just isn’t the case,” he said. “My argument has always been to use it for everything, and that’s how you figure out what it’s good or bad at.”
Mollick spoke with Wharton marketing professor Stefano Puntoni about his book during a webinar for the AI Horizons series. The series is hosted by AI at Wharton to showcase emerging knowledge in the field of artificial intelligence. Puntoni, who has also conducted extensive research into AI and its applications, asked Mollick to address concerns ranging from human replacement to regulatory frameworks.
AI and Entrepreneurship
In addition to being an academic, Mollick is an entrepreneur who co-founded a startup and advises a number of others. He said AI is a “no-brainer solution” for many problems faced by founders who are too cash-poor to hire extra help.
Need a lawyer to review a contract? AI can help. Need a marketer to build a website or a coder for technical advice? AI can help. Need to write a grant application, a press release, or social media chatter? AI is the answer.
“The thing about entrepreneurs is you have to be a jack of all trades. You have to do many things, and entrepreneurs often get tripped up because of one or two of those things they can’t do,” Mollick said.
“My argument has always been to use it for everything, and that’s how you figure out what it’s good or bad at.”— Ethan Mollick
The Responsibility of Tech Companies
Mollick communicates regularly with industry leaders and said the major AI producers take their security responsibilities seriously.
“I don’t think it’s just a fig leaf. They do seem to care when I talk to them,” he said.
Everyone agrees that regulation is necessary, but figuring out the details is difficult. Mollick said high-powered, open-source models can be stripped of their controls “with just a little bit of work” on the back end, leaving them open to manipulation by scammers. On the front end, too much preemptive regulation could stifle experimentation and progress. Instead, Mollick advocates for “fast regulations” that can be enacted as problems arise.
“As harms emerge, we need to take action against those harms. But we also need to make sure that we are not getting in the way of potential good uses, because some of the bad uses are baked in,” he said. “What you want is regulators watching very closely and reacting, and we’re not there yet.”