Jan Beránek’s Post


Founder and CEO, FifthRow | Consulting as Software | Corporate Venture Building & Investing | Impact Investor | Product Mentor at Google

This week, an F500 customer compared our agents, which are anchored in vetted sources, to an internal ChatGPT instance, asking both about GLP-1 drug developers. Their tool hallucinated 6 of 10 sources. Seeing links to supporting information, combined with our human biases, creates a false sense of trust in the accuracy of search results. When a GenAI-based system gives you sources, click on them, because:

1. Hallucinated sources – we see more and more sources hallucinated as general GPT-based search engines multiply.
2. Hallucinations IN the source content – more and more content online is AI-generated…poorly.

People know about the glue-on-pizza recommendations, but what’s more likely is a system trying to support its own hallucination with more hallucination in the form of sources. When a content provider uses GenAI to generate content and does not check it, the error proliferates across the internet. A quick sanity check on cited links, like the sketch below, catches a surprising share of these.
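A minimal sketch of that sanity check, assuming you have already extracted (URL, quoted claim) pairs from the model's answer. The URLs, claims, and the naive text-containment test are illustrative placeholders, not FifthRow's actual verification pipeline.

```python
# Illustrative source-verification pass: does each cited URL resolve,
# and does the claimed snippet actually appear on the page?
import requests

def verify_sources(citations):
    """Return a report for each (url, claim) pair from a GenAI answer."""
    report = []
    for url, claim in citations:
        try:
            resp = requests.get(url, timeout=10)
            resolves = resp.status_code == 200
            # Naive containment check; a real pipeline would normalise text
            # and compare against the rendered page, not raw HTML.
            supported = resolves and claim.lower() in resp.text.lower()
        except requests.RequestException:
            resolves, supported = False, False
        report.append({"url": url, "resolves": resolves, "claim_found": supported})
    return report

if __name__ == "__main__":
    # Placeholder citation; swap in the links the model actually returned.
    citations = [
        ("https://example.com/glp1-pipeline", "Phase 3 readout expected in 2025"),
    ]
    for row in verify_sources(citations):
        print(row)
```

Even a check this crude separates the two failure modes above: a dead link flags a hallucinated source, while a live link that never mentions the claim flags hallucination inside (or about) the source content.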
