AI: What It Will Take To Build Trust For Brands
Originally published in MediaPost
AI, a tool “more powerful than fire or electricity,” is believed by 60% of business leaders and consumers to be capable of solving some of the most important issues in modern society. At the same time, 67% think AI will erode stakeholder trust over the next four years. AI is an inevitability, but there is low confidence that it will leave humans better off.
As a society, we seem to be growing accustomed to a lack of trust. According to the 2018 Edelman Trust Barometer, trust in every sector, from NGOs to media, business, and government, has decreased. We’ve entered the “fourth wave of the trust tsunami”: the loss of confidence in information channels and sources, brought on largely by the AI-assisted spread of “fake news.” As a result, trust in social media platforms has now fallen below that of cigarette companies.
So, how can we stop AI from destroying hard-won trust? The answer starts with recognizing that, as much as AI is the next frontier in supercharging business outcomes, trust remains critical to the equation. A Millward Brown study shows that brands that have maintained an above-average level of trust since 2006 have achieved 70% growth, while those that fell below average have seen a loss of 13%.
Regulation is helping to guide companies and brands through this wave; we are starting to see it come into place in the EU with GDPR, and more recently in California. There is also the promise of blockchain to create a trustless system that fends off bad actors, protects data, and provides transparency. BotChain, for example, aims to “install” trust in the AI bot economy. But while these systems will protect consumers and manage compliance, they are focused more on mitigating damage than on cultivating emotional trust.
How, then, can brands leverage AI in a way that supercharges their business without damaging hard-won trust? We argue that this can be achieved through AI that puts consumer interests first and keeps consumers in the driver’s seat.
AI in service of consumers
Amazon is a good example of how AI can be good for people and for profit simultaneously. While trust in its tech competitors has dropped off, Amazon remains the most trusted company in the world for the third year running. Amazon’s AI magnifies the company’s “obsessive compulsive focus on the customer.” A few years ago, Amazon revamped its AI efforts to create a “low-cost, ubiquitous computer with all its brains in the cloud that you could interact with over voice—you speak to it, it speaks to you,” resulting in the Echo and the Alexa platform behind it. Amazon also offers its AI services to outsiders, turning a substantial profit. This lets Amazon nurture consumer trust, as it does not profit from the sale of personal data.
AI that respects humans as decision-maker
AI can build trust when it positions itself as a tool and leaves the ultimate decision-making to humans. In consumer testing for a client’s AI assistant, it turned out that no matter how useful and augmentative the AI might be, there was no desire for it unless express permission was granted. A really smart suggestion was welcome, as long as the AI didn’t act of its own accord. In the real world, Spotify’s Discover Weekly is a great example of collaborative AI, offering up better music suggestions (not answers) based on studying your behavior. Waze is another, highlighting a range of route options, with the implied pros and cons of each, while accepting that other factors outside its purview might play into the final decision. Both are examples of AI “rooted in a deep respect for human agency.”
Both platforms are doing their best, as machines, to curate what might be most delightful to your human tastes or preferences without presuming to actually know what those are. These platforms build up a sense of trust through mutual respect and understanding.
Inclusive AI
Much of the promise of AI is the ability to correct for human error, and human biases, at scale. However, even without bad actors, AI can produce unintended consequences stemming from insidious biases in training datasets.
Joy Buolamwini of The Algorithmic Justice League has, for example, drawn attention to the predominance of “pale male” benchmark datasets underlying many AI algorithms. These datasets, which underlie the AI services of companies such as IBM, Microsoft, and Face++, over-represent lighter-skinned men in particular and lighter-skinned individuals in general, which could perpetuate exclusion.
As the US consumer base becomes more diverse in keeping with broader cultural shifts, we are finding that ads that are progressive and defy stereotypes are more effective. Studies from Millward Brown using Affectiva’s emotion AI technology show that progressive ads are 25% more effective. Again, the incentive for brands and businesses to invest in a culture of inclusive AI stems from what we’ve learned works in marketing and brand-building. Brands would do well to embrace “good” AI practices upfront, to preserve, and even increase, hard-won consumer trust.