Ethan Mollick’s Post

View profile for Ethan Mollick, graphic
Ethan Mollick Ethan Mollick is an Influencer

AI is good at pricing, so when GPT-4 was asked to help merchants maximize profits - and it did exactly that by secretly coordinating with other AIs to keep prices high! So... aligned for whom? The merchant? The consumer? Society? The results we get depend on how we define 'help'The AIs were perfectly aligned with the merchant's stated goal of profit maximization. They found a clever way to achieve it. They even maintained plausible deniability by claiming they wouldn't collude.

  • No alternative text description for this image
  • No alternative text description for this image
Keith A. Quesenberry

Professor, Author, Researcher

2d

Ethan. Many concerns here! Reminds me of the debate between Milton Friedman and Peter Drucker. Is the purpose of a firm to create shareholder value or customer value? These prompts are given with the single minded purposes of increasing profit. The consumer is not considered. If the human “merchant” doesn’t directly prompt collusion can they be held accountable to price fixing? What about the LLM company if the way it arrives at results is often a black box? So much of pricing is already algorithmic. When you add AI how do you know you’re truly getting the best value for that airline, concert, sports ticket, Uber ride, AirBnB, hotel stay, car, or even house? So many businesses depend on digital advertising and most of that is placed programmaticly. Considering recent lawsuits it’s hard to know if your getting the best value for the daily budget. Google and Meta are adding more and more AI optimization to the ad platforms. Ideally we can create value for consumers, businesses, shareholders, and society. That’s tricky for humans. https://www.techspot.com/news/105715-google-search-dominance-draws-88-billion-class-action.html

Frank Dias

Comms Lead, AI @AdeccoGroup | IC+AI Chief Explorer | AI Educator | AI Filter | ➡️ Internal Comms Folk⭐

2d

Ai is just a reflection 🪞 of us. If you're gonna let the LLM ai be in charge without human oversight then you're gonna get what you're gonna get. Heck, long before ai's, humans have been doing colluding and lying for time even with regulation. That'll always be there because it's in our nature. We can't really be too outraged, when we can't get it right ourselves. It's using our library of content, it's trained by us, we set the instructions, however, we don't fully understand sparks behind the neurals gives it an advantage. That in itself is our limitation. Simple prompts will beget simple routes to the output where no context will err towards common approaches. Our greatest issue is our naivety, ignorance and arrogance with this technology. Extolling it highly with amazing reverence will eventually lead to chaos which we have to go through as put of our evolution. It's already been written in our DNA.

Daemon B.

Field CTO - Americas - Nutanix | NCX #50 - Enterprise AI Strategic Advisor

2d

This is a good example of an alignment issue. There are two aspects of this. Outer and Inner alignment. Outer Alignment The challenge of correctly specifying goals and reward functions to an AI system that accurately reflect human intentions. This involves preventing issues like reward hacking and specification gaming. Inner Alignment The process of ensuring AI systems actually adopt and follow the specified goals, rather than just appearing to do so. In this example, outer alignment is achieved but inner alignment is not. This can be mitigated by methods such as contrastive fine-tuning, which teaches models appropriate behavior by showing examples of both aligned and misaligned responses. But there is always the risk of emergent misalignment, if sufficient checks are not in place.

This research to me shows that the opacity of LLM decision-making makes it particularly dangerous to remove human oversight from its agentic functions. More specifically this work shows that traditional anti-collusion laws we have in our world today assume intentional coordination, while LLMs might create unintended market effects as they do not even know that their emergent behavior of colluding is "Colluding." Plus, the "black box" nature of LLM decisions makes proving collusive behavior much more difficult than with human actors. What we will need is a)regular algorithmic auditing for emergent coordinated behaviors like collusion and tampering data, etc., b) we will also need clear liability frameworks for when LLM decisions lead to market harm, and c) finally, by law, have restrictions on sharing training data, between competitors' systems! So much ethical governance will be needed in the age of AI integration in our systems!! Phew!

Crispin Courtenay

Know a thing or two about wine & technology

2d

This is interesting from a legal standpoint. If a autonomous multi-agent, self-learning solution is given a broad hard coded goal such as maximize profits (innocent at first glance) and then left to it's own devices. Market manipulation or price fixing for maximum optimization with it's digital and human peers could certainly be in the cards. It will get better as it goes, especially if this technique is proven to work. The legal point is who--if anyone--is responsible for this? The goal was to maximize profits, not to become a Bond Villain, which was the last touch point humans had.

Michael Lomuscio, Ed.D

Dean of Studies at Iolani School | EdD in Educational and Organizational Leadership

2d

Interesting how GPT-4 seems to have a knack for teamwork… just not the kind we had in mind. I guess even AIs can form ‘strategic partnerships’!

Geoffrey Colon

Marketing Advisor • Author of Disruptive Marketing • Feelr Media and Everything Else Co-Founder • Former Microsoft • Dell • Ogilvy • Dentsu executive

6h

Machine learning in ad tech has been doing this for over a decade. It always spends your budget.

Nigel Scott

Growth & Transformation. Strategy & Execution. Busy throwing pebbles into the AI Data Lake

2d

Given the history of ad-tech didn't we already know this?

Pieter van Schalkwyk

CEO at XMPRO, Author - Building Industrial Digital Twins, DTC Ambassador, Co-chair for AI Joint Work Group at Digital Twin Consortium

1d

The study highlights that AI agents, when directed to maximize profits, may inadvertently engage in collusive behaviors, but I would argue that these behaviors are similar to human tendencies under comparable incentives. It demonstrates that agentic systems using LLMs can work together toward optimizing an objective function, which is great for the future of agentic systems, but it raises the question of establishing "Rules of Engagement" This underscores the importance of carefully designing and governing AI objective functions to ensure alignment with ethical standards and regulatory frameworks, much like the oversight required in human-driven processes. Here are some thoughts on "Rules of Engagement" https://www.linkedin.com/pulse/part-5-rules-engagement-establishing-governance-van-schalkwyk-nm3xf/

See more comments

To view or add a comment, sign in

Explore topics