Luiza Jarovsky’s Post

Luiza Jarovsky

Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. 🏛️Join our AI governance training (1,000+ participants) & my weekly newsletter (38,000+ subscribers)

🚨 [AI RESEARCH] The paper "AI, Algorithms, and Awful Humans" by Daniel Solove & Hideyuki Matsumi is an ESSENTIAL read to understand the debate on 🚧 AI risk management & human oversight 🚧. Important quotes:

"Some might be tempted to equate thinking to rationality, with emotions clouding lucid thought, but there are many dimensions to human decision-making beyond rationality, and these nonrational elements are often underappreciated. Professor Martha Nussbaum aptly argues that "emotions are suffused with intelligence and discernment" and involve "an awareness of value or importance." Emotions are "part and parcel of the system of ethical reasoning." Yet algorithms do not experience emotions. Algorithms might be able to mimic what people might say or do, but they do not understand emotions or feel emotions, and it is questionable how well algorithms will be able to incorporate emotion into their output." (p. 1930)

"The hope that humans and machines can decide better together is not just vague and unsubstantiated; in fact, strong evidence demonstrates that there are significant problems with combining humans and machines in making decisions. Humans can perform poorly when using algorithmic output because of certain biases and flaws in human decision-making. Far from serving to augment or correct human decision-making, algorithms can exacerbate existing weaknesses in human thinking, making the decisions worse rather than better. As Professors Rebecca Crootof, Margot Kaminski, and Nicholson Price observe, a "hybrid system" consisting of humans and machines could "all too easily foster the worst of both worlds, where human slowness roadblocks algorithmic speed, human bias undermines algorithmic consistency, or algorithmic speed and inflexibility impair humans' ability to make informed, contextual decisions." (p. 1935)

"Good qualities in decision-making include a commitment to the scientific method, humility, feedback loops, fairness, morality, lack of bias, empathy, due process, listening to all stakeholders, diversity, practicality, accuracy as to facts, critical reflection, philosophical depth, open-mindedness, awareness of context, and much more. Some decisions might call for more accuracy, but others less so. For landing a plane, we want high accuracy, but for decisions about school admissions, credit scoring, or criminal sentencing, other values are also quite important. There is no one-size-fits-all approach to regulating AI, as the decisions it will be employed to help make are quite different and demand different considerations." (p. 1939)

➡️ Read the paper below.

🏛️ STAY UP TO DATE. AI governance is moving fast: join 36,400+ people who subscribe to my weekly newsletter on AI policy, compliance & regulation (link below).

#AI #AIGovernance #HumanOversight #AIRegulation #AIRiskManagement

Luiza Jarovsky

Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. 🏛️Join our AI governance training (1,000+ participants) & my weekly newsletter (38,000+ subscribers)

1mo

➡️ Read the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4603992 🏛️ STAY UP TO DATE. AI governance is moving fast: join 36,400+ people who subscribe to my weekly newsletter on AI policy, compliance & regulation: www.luizasnewsletter.com

Joel Kowalewski

AI/Machine Learning Scientist, Writer, Instructor, PhD Computational Neuroscience

1mo

The article fails to recognize that there is really no human-machine dynamic. AI is trained on human-generated data, so the "machine" distinction evokes sci-fi imagery here. If we find AI decisions objectionable, for instance, it must, in truth, be a rejection of our institutions or the world that we've built. If we invent a human-machine distinction, however, it interestingly denies our culpability: "It's the AI that's broken or at fault due to low-quality data." The main issue is that we evolve technologies but not our thinking. Stanley Milgram -- in his famous and ethically questionable research -- anticipated the current human-machine invention in the 1960s. We would, in other words, prefer to "diffuse responsibility" to an arbitrary authority figure. What do I mean exactly? Well, if a "machine" exists to argue about -- this scary word, an inhuman thing, with fictional imagery attached to it -- our situation is so far from reality that the "machine" must be psychologically necessary. Oddly, then, once the debate has begun it's already lost. We will inevitably place the burden on this "machine." That's why we created it.

Conan Callen

Tools for building and managing digital worlds

1mo

The human experience is much messier than the life of a machine :)

WILLIAM SLATER

CISO, vCISO, M.S. in Cybersecurity, MBA, PMP, CISSP, CISA, SSCP, U.S. Air Force Veteran

1mo

#Yuge! Thank you, #LuizaJarovsky! Great #AI reading! Beware of the #AwfulHumans!

Adrian A.

Founder and Chairman

1mo

Insightful paper to read on the matter of algorithms and awful humans. Looking forward to reading more regarding this matter in the near future😁✌🏻

Pete Dietert

Software, Systems, Simulations and Society (My Opinions Merely Mine)

1mo

Absolutely key issues.

Abhishek Lal

Chief Digital Officer | PhD Scholar in Artificial Intelligence

1mo

Thank you Luiza Jarovsky for sharing this insightful paper.

Christos (LL.M.) Makris

Techno-Privacy🤖Lawyer🔹Law & Tech @Tilburg🔹AI Governance (Should) Meet Data Protection🔹🤫Privacy_Matters🔹Advocate for Societal Innovation

1mo

Thanks Luiza for providing us with over the 🔝 material 👏🏻👏🏻👏🏻


