I recently talked with George Abbott from The Insurer about #AI risks and #insurance, which resulted in the article linked below. Insuring (Gen)AI errors, whether via innovative or traditional insurance products, carries the potential for aggregation risk. This is particularly true for (Gen)AI use cases that are built on foundation models. Foundation models introduce common elements across GenAI and AI use cases, which makes their error rates highly positively correlated, depending on the similarity between use cases. Any event that induces a data drift with error spikes on the foundation model (even a foundation model update) could lead to insured losses across many use cases and insureds. Insurers need to consider and manage this risk.

On the positive side, the error frequency or hallucination rate of black-box AI and GenAI models can be reliably quantified for many use cases, as my team at Munich Re demonstrated in a research paper published together with Agni Orfanoudaki from the University of Oxford. This insight provides a strong quantitative basis for underwriting and pricing AI error risks and other risks related to AI errors. It in turn empowers insurers to support their corporate clients on their AI adoption journey and helps bring reliable AI systems and AI agents to the market.
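To make the aggregation point concrete, here is a minimal, purely illustrative sketch (not Munich Re's actual model; all parameter values such as `base_rate`, `p_drift` and `spike` are assumptions): it compares a portfolio where insureds' AI error rates are independent against one where a single foundation-model drift event raises the error rate for every insured at once, and it closes with the basic sampling idea behind quantifying a black-box model's error frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

n_periods = 2_000        # simulated coverage periods
n_insureds = 500         # AI use cases insured, all built on the same foundation model
n_calls = 10_000         # AI calls per insured per period
base_rate = 0.002        # assumed baseline hallucination / error rate per call
loss_per_error = 500.0   # assumed average insured loss per harmful error
p_drift = 0.05           # assumed probability of a model-level error spike per period
spike = 10.0             # assumed error-rate multiplier during the spike

# Scenario A: errors arise independently per insured (no common cause).
errors_a = rng.binomial(n_calls, base_rate, size=(n_periods, n_insureds))
portfolio_a = errors_a.sum(axis=1) * loss_per_error

# Scenario B: shared foundation model; with probability p_drift the whole book
# experiences an error spike in the same period (e.g. data drift or a model update).
drift = rng.random(n_periods) < p_drift
rates = np.where(drift, base_rate * spike, base_rate)
errors_b = rng.binomial(n_calls, rates[:, None], size=(n_periods, n_insureds))
portfolio_b = errors_b.sum(axis=1) * loss_per_error

for name, losses in [("independent", portfolio_a), ("shared foundation model", portfolio_b)]:
    print(f"{name:>25}: mean loss {losses.mean():,.0f}  99th percentile {np.quantile(losses, 0.99):,.0f}")

# Quantifying a black-box model's error frequency rests on the same sampling logic:
# evaluate n sampled outputs, count k errors, report a binomial confidence interval.
k, n = 37, 5_000                                        # hypothetical evaluation results
p_hat = k / n
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)    # normal-approximation 95% CI
print(f"Estimated error rate: {p_hat:.4%} +/- {half_width:.4%}")
```

With these assumed numbers, the mean portfolio loss is similar in both scenarios, but the tail (99th percentile) is far heavier when a foundation-model event can hit every insured simultaneously: that tail is the aggregation risk an insurer has to model and budget for.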
#AIrisk is present and has the inherent potential to aggregate; this risk needs to be modeled and budgeted for, said Michael Berger, Head of #InsureAI at Munich Re, in a call with The Insurer. "When we look at every company that is essentially experimenting and potentially adopting AI, this could lead to a significant demand for #AI insurance." Munich Re is already offering innovative coverage for AI solutions for #aiprovider and #aiadopters alike. #aiinsurance #ArtificialIntelligence