Nick Carroll’s Post

Humanity's approach to pushing forward AGI development continues to sorta puzzle me. As I see it, there are a few possible outcomes (in the short and long terms), depending on how successful the technology and its applications turn out to be:

- We create really sophisticated but ultimately equally unhelpful chatbots, which permeate society to the increased frustration of literally everyone.
- We automate away manual labor jobs, increasing corporate profits at the expense of the lower class, and shift even more of the tax burden onto the middle class to compensate.
- We get productivity-multiplying AGI that is accessible mainly to the wealthy and magnifies the wealth disparities in society.
- We get full AGI, which quickly evolves to displace humanity as the dominant life form, in whatever form that takes.

Now, I get how people with wealth can think there are some "good" outcomes there, but I still kinda fail to see if or how normal people think this might be "good" for society (if they do at all). Academic points, of course (since we as a society can't really stop it), but interesting in a "course of history" sense.

