One of the reasons I am bullish on cognitive/behavioral scientists in the age of Gen AI is that they think deeply about human-AI interaction. Some people want complete automation. Others want a hybrid: use AI for the annoying, repetitive tasks, and leverage human expertise for the parts that matter most.

This idea was a core part of my PhD research on human-AI collaboration in scientific theory development, where the goal was to build better theories in less time. The AI excelled at finding models that predicted the data well; humans were skilled at dissecting those uninterpretable models and *explaining* them.

How can AI and humans collaborate in this new age of Gen AI? We are a little obsessed with this question at Roundtable, if you can't tell...
An illuminating (and really quite funny) panel last week at the Spring Summit on New Professional Pathways for AI researchers: https://lnkd.in/espwcyHG

Chair: Felix Sosa (Harvard University)
Panel: Mayank Agrawal (Roundtable), Skyler Wang (AI at Meta, McGill University), David Landy (Netflix), Ishita Dasgupta (Google DeepMind), Robert Glushko (University of California, Berkeley)

https://lnkd.in/ewE7mgPA Thinking About Thinking, Inc
Student at New York University · 7mo
Strong opinion! How do you think about the possible bias that human interpretation introduces into AI models?