Ravit Dotan, PhD’s Post

Merging AI innovation and responsibility via consulting, research, training, and keynotes

How useful are AI summaries of meetings? Survey results are in. I think they show that the hype is unjustified so far.

➤ Background
👉 Last week I got another misleading AI-generated summary of a meeting. It's pretty routine now.
👉 So I ran a survey (here on LinkedIn) to see how common this experience is.
👉 The results show that it's pretty common.

➤ Results, out of 209 respondents:
👉 Only 18% usually find these summaries useful
👉 24% usually don't find them useful
👉 54% find them useful only sometimes

➤ Reflections
👉 I see this as another indication that LLMs have a long way to go.
👉 It's important to make this point because the hype is too high - people have so much faith in LLMs that they throw caution to the wind. The result is bad outcomes all around.
👉 A small example: it would be kind of terrible if I were to act on the action items the AI summaries assign to me :)

➤ What do other people think? Comments welcome!

#ai #aiethics #responsibleai

[Chart: survey results]
Hernan Chiosso, CSPO, SPHR 💡

I use AI to help organizations conquer culture, people, product, process, and tech challenges. Fractional CHRO, HR Innovation Consultant, HRTech Product Manager, Remote work expert. productizehr.substack.com

5mo

I think it depends greatly on the tool, so it would be interesting to know what tool people are using. I have been using Otter.ai and Sana.ai in parallel for a while to compare their performance, and I have found Sana.ai to be quite accurate in its summaries, and therefore quite useful.

I also think the results would depend on how the meeting is configured: does everybody have good headphones/microphones and a quiet enough space to take the meeting? Do people take turns talking, or do they overlap all the time? Are they sharing screens and referencing documents, jargon, or other internal information that may not be available to the AI meeting assistant?

Additionally, with some tweaks to how you structure the meeting, you can get more value and accuracy - for example, getting used to using certain language when you call out action items, so that the AI assistant can recognize them more easily. It would probably also be valuable to have these assistants take a more active role, instead of just note-taking, e.g.: "Hey, AI, can you please summarize the action items so far?" "Hey, AI, here are 3 main questions that we would like to find answers to in this meeting. Can you please remind us during the meeting?"
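
A minimal sketch of that "certain language" idea, assuming speakers adopt an explicit phrase such as "action item: <owner> - <task>" (the phrase, the transcript format, and the function name are illustrative assumptions, not taken from Otter.ai or Sana.ai). With such a convention, even a simple pass over the transcript can recover tasks reliably instead of leaving the assistant to infer them:

    import re

    # Assumed convention (illustrative): speakers say "action item: <owner> - <task>"
    ACTION_RE = re.compile(r"action item:\s*(?P<owner>[^-]+)-\s*(?P<task>.+)", re.IGNORECASE)

    def extract_action_items(transcript_lines):
        """Collect explicitly called-out action items from transcript lines."""
        items = []
        for line in transcript_lines:
            match = ACTION_RE.search(line)
            if match:
                items.append((match.group("owner").strip(), match.group("task").strip()))
        return items

    transcript = [
        "Ana: Let's review the survey numbers next week.",
        "Ben: Action item: Ben - send the updated summary to the team by Friday.",
    ]
    print(extract_action_items(transcript))
    # -> [('Ben', 'send the updated summary to the team by Friday.')]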

Daniel Brumund

Advisor AI and Digital Public Goods @GIZ FAIR Forward | Interested in #tech #justice #regeneration

5mo

I've noticed a trend where people sign up for meetings (especially via Zoom) and then don't participate themselves. Instead, some AI tool "participates" on their behalf. I must say that, personally, I find this both deeply annoying and incredibly disrespectful. I don't want to end up sitting in meetings full of chatbots. If you cannot make a meeting, ask a colleague, or ask the organisers if they can share notes. Engage with people. Connect. Please let us not give in to a bot-full future of online meetings... I also see serious privacy and data security concerns. I'm seriously thinking about initiating internal regulations against this - to the point where I would ban people from joining meetings if they continuously sign up with chatbots. #RantOver 😂

Samantha Ziegel

AI + Healthcare + Working Moms Advocate

5mo

Do you know which tools people are reporting the inaccuracies or low-value responses for? I would be curious to learn which tools are doing it well vs. not, and what is on the backend of each tool. It's easy to blame LLMs, though sometimes the way the LLM is being prompted, or a lack of grounding, could be part of the cause... among other things :)
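
On the grounding point, a minimal sketch of what a grounded summarization prompt can look like (the wording and function name are illustrative assumptions, not the internals of any particular tool). The idea is simply that the transcript itself is in the prompt and the instructions forbid claims it does not support:

    def build_grounded_prompt(transcript: str) -> str:
        """Build a summarization prompt constrained to the supplied transcript."""
        return (
            "Summarize the meeting transcript below.\n"
            "Rules:\n"
            "- Only report decisions and action items that appear in the transcript.\n"
            "- Attribute each action item to the speaker who accepted it.\n"
            "- If something is ambiguous, say 'unclear' rather than guessing.\n\n"
            f"Transcript:\n{transcript}"
        )

    # The resulting string would be sent to whichever LLM the meeting tool uses.
    print(build_grounded_prompt("Ana: I'll send the report on Monday."))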

As long as someone who attended the meeting checks the action points, the summaries are very useful for those who did not attend.

Jonathan Sands

Principal Data Scientist at MMR

5mo

Would be good to see which summarisers were used by the 209 respondents, as there is wide variation in performance depending on the model/pipeline. Almost everyone where I work finds the MS Teams meeting summary both highly accurate and extremely useful due to its granular nature, with the ability to expand the high-level overview into more detailed points, and accurate referencing of the video segments corresponding to given parts of the transcript. On the other hand, a similar summarisation service applied to Outlook performs poorly. So I would say it's important to report the results per service to get a fair view of the landscape.

Daniel Onren Latorre

Product Exec & Advisor | Digital Placemaking & Wise Cities Trailblazer | Digital Turnaround Specialist: When Teams & Tech Need a Fresh Start/Reboot

5mo

LLMs are great at what they were designed for: translation. They are less good at all other uses, especially summarization in consequential contexts. So much time has been wasted by not questioning the wrong tool choices made by CS engineers.

Augustino Pham

Playing in the #infinitegame | Responsible Digital Future Advocate | Strategic Sales & Consulting

5mo

Truly agree with you on these "assistants" being overhyped. I've tried a couple, and the output becomes even more lackluster the longer the meeting runs.

Conner Brew

Data-driven Ops Leader @ WillowTree

5mo

This is a problem compounded by widespread misalignment about what meetings are for and what meeting notes are for. A poorly focused and poorly facilitated meeting is not going to be fixed by AI tools identifying action items and takeaways. As in any other domain, lots of folks are trying to use AI as a band-aid fix-all for deeper problems!

Annie Datesh

Chief Innovation Officer at Wilson Sonsini | Fast Company's Most Innovative Companies | Technologist, Attorney | 10x-ing practice efficiency and client experience

5mo

Hello trough of disillusionment, my old friend
