AI Reading for Wednesday April 17! AI safety, AI politics, AI economics. https://lnkd.in/enCinev3

MLCommons releases an AI safety benchmark. - IEEE Spectrum
Marc Andreessen is red-pilled and big mad about Google and safety; his whole feed is something. - Twitter
Will AI demand really overwhelm our electrical grids? - FT
Or do we just need to make the right investments? - Heatmap News
GenAI risks and challenges for the economy. - EY
Economists don't actually know what AI will do, and some economists who think AI is going to be just great still say it will eliminate jobs and require policy responses. - TIME
AI-driven airline-style pricing is everywhere: surge pricing, junk fees, and advertising low headline prices while gouging you on the necessary add-ons. - The Atlantic
AI is seen as a sustaining technology for large companies like big banks, making it hard for upstarts to keep up. - Fortune
How GitHub Copilot is changing software development. - Bloomberg
"Please read this CVE and write code for an exploit." - The Reg
AI tools cut parts of Intel's chip design process from weeks to minutes. An example of singularity dynamics: the better the tech gets, the faster it improves itself. - Tom's Hardware
Killer robots, kamikaze drones, and AIs change war significantly from a game theory perspective and are worth discussing from an arms-control treaty standpoint. - Bulletin of the Atomic Scientists
If you worked on-site in NY and moved to remote work in a low-cost-of-living state, NY is using AI to hunt you down and hit you up for state income tax. - Business Insider
Mistral in talks to raise at a $5b valuation. - The Information
Logitech adds an AI key and prompt-building tool. - The Reg
Gimmickry or something people can actually use? There are template-based prompt generators, and AIs to help build prompts. At the risk of stating the obvious, a terrific use of AI is to ask it to automate a frequent task with Windows AutoHotkey, the Mac's built-in Automator app, or a script using zsh, Python, Power Automate, PowerShell, or Zapier-like services (see the sketch after this list). - Geeky Gadgets
Or, get your own AI-powered exoskeleton to elude killer robots. - New Atlas
Gardening with AI: software can recognize plants and make recommendations. Robot help is surely coming, and Merlin is Shazam for birds. - TechRadar
Menteebot emerges from stealth with a new humanoid robot. - Engadget
A robot roundup. - Yahoo News
Adobe adds gen-AI tools to its Premiere video editing software, including object deletion and addition, generative extend, and eventual support for 3rd-party models like Pika, Runway, and Sora. - YouTube
As an alternative, use DaVinci Resolve if you don't want to pay Adobe subscriptions. - Creative Bloq
Google might give you access to Gemini in the Chrome address bar. - TechRadar
Researcher says he will beat hallucinations by training and calibrating uncertainty-aware LLMs. - Fortune
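As a concrete illustration of the automation item above: this is a minimal sketch of the kind of script you might ask an AI to write for a frequent chore. The specific task (filing downloads into dated folders) and the folder paths are hypothetical examples, not something taken from any of the linked articles.

```python
# Hypothetical example of an AI-generated "automate a frequent task" script:
# file everything in ~/Downloads into YYYY-MM subfolders by modification date.
# The task and folder names are illustrative assumptions.
from datetime import datetime
from pathlib import Path
import shutil

DOWNLOADS = Path.home() / "Downloads"   # assumed source folder
SORTED = DOWNLOADS / "sorted"           # assumed destination root

def sort_downloads() -> None:
    for item in DOWNLOADS.iterdir():
        if not item.is_file():
            continue  # skip folders, including the destination itself
        stamp = datetime.fromtimestamp(item.stat().st_mtime)
        dest = SORTED / stamp.strftime("%Y-%m")
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(item), str(dest / item.name))

if __name__ == "__main__":
    sort_downloads()
```

The same pattern could just as easily be expressed in AutoHotkey, Automator, or PowerShell; the point is that an LLM can write this kind of boilerplate for you.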
Druce Vertes’ Post
More Relevant Posts
-
Has anyone ever changed their lack of ability? Answered by AI:

Certainly! Here are a few real-world examples where people overcame a lack of ability:

Elon Musk and Rocket Science: Initial Lack of Ability: Musk, originally a software and internet entrepreneur, had no formal training in aerospace engineering. How He Changed: When starting SpaceX, Musk read countless aerospace engineering books, consulted experts, and learned from failed experiments. Result: SpaceX successfully launched and landed reusable rockets, revolutionizing the space industry.

Michael Jordan and High School Basketball: Initial Lack of Ability: Michael Jordan didn't make his high school varsity basketball team on his first try. How He Changed: He worked relentlessly on his skills, training for hours every day to improve his game. Result: Jordan became one of the greatest basketball players of all time, winning six NBA championships.

J.K. Rowling and Writing Success: Initial Lack of Ability: Rowling faced numerous rejections from publishers who believed her book about a young wizard wouldn't sell. How She Changed: She refined her writing, continued revising her manuscript, and persevered through the rejections. Result: The Harry Potter series became one of the best-selling book series in history, establishing Rowling as a renowned author.

Albert Einstein and Early Education: Initial Lack of Ability: As a child, Einstein struggled in school, especially with language and social skills, which led many to believe he had learning difficulties. How He Changed: Einstein eventually found his passion for mathematics and physics, diving deep into subjects that fascinated him. Result: He went on to revolutionize physics with the theory of relativity, becoming one of the most famous scientists in history.

Oprah Winfrey and Public Speaking: Initial Lack of Ability: Oprah had a difficult childhood and faced significant setbacks in her early career as a news anchor. How She Changed: By focusing on building her confidence and connecting authentically with her audience, she honed her speaking and interviewing skills. Result: Oprah became a globally recognized media personality and talk show host, inspiring millions through her work.

These examples show how people turned their initial lack of ability in certain areas into strengths through perseverance, practice, and a willingness to learn.
-
While no one knows exactly how artificial intelligence will reshape society, OpenAI plans to analyze the technology’s potential economic impact. On Oct. 22, OpenAI named Dr. Aaron “Ronnie” Chatterji as its first chief economist. In this role, he will oversee research on a range of AI-related topics, including how AI innovations may impact the global economy and how the workforce can utilize AI tools both now and in the future, according to OpenAI’s press release. “Dr. Chatterji’s deep understanding of the dynamics that shape our economy and society will guide OpenAI’s mission to create AI tools that fuel growth, solve complex challenges, and foster long-term prosperity,” Larry Summers, OpenAI board member and former U.S. Treasury Secretary, said in the press release. While this move marks a first for OpenAI, it won’t be Chatterji’s first high-profile role. Here are three things you may not know about OpenAI’s new chief economist.
Meet OpenAI’s first chief economist—3 things you may not know about him
cnbc.com
-
The battle between Yann and Elon on X is turning into a measuring contest! Yann claims to have published 80 tech research publications in 2 years?!!! I know several professors, directors, and managers who insist on adding their names to research conducted by their teams, even if they contributed nothing to the project. Research is rigorous work; you cannot feasibly contribute to hundreds of projects in a short period. It's disheartening how the "publish or perish" culture has propelled the social climbers in the research community. They fear becoming obsolete and irrelevant without current research in their names, because that's all they have to cling to. The peer review process is increasingly becoming a "scratch my back, and I'll scratch yours" affair. (Not always, but it's significant.) The "steal the credit" game is both audacious and blatant. For instance, I know a professor who claims to have created Google Translate, having barely worked on some aspects of statistical machine translation (if any of it directly) and symbolic grounding! The boldness of some to claim credit as they ascend is astonishing. Moreover, nobody dares to point out that the emperor has no clothes due to potential repercussions. I want to highlight that there are many incredible, dedicated researchers out there who work on a few projects over years, tirelessly ensuring accuracy and reproducibility. And yet, their hard work and credit are often usurped by those in power. If I see any researcher credited with more than two substantial studies per year, I tend to seriously question their contribution, suspecting they are merely riding on their team's coattails. Solid research demands time, consistency, iteration, and innovation! #ArtificialIntelligence #Research #AI
-
For this week's nerd processor, I looked at LinkedIn posts from all 50 states to see what people are talking about when they talk about AI. For each state, I looked at the AI topic that people there discuss much more than the national average would suggest. I expected to find regional patterns, but the extent of regional clustering I found stunned me. Look at all those regional clusters of colors! Who's doubling down on privacy, bias, and ethics? New England and the West Coast. Who's focused on AI and education? The mid-Atlantic and the Great Lakes. Who's most worried about AI and politics? Much of the South. (If you're a cultural geography nerd like me, you may notice echoes of Colin Woodard's American Nations framework in this map!) Full article in the comments.
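For readers curious how an "over-represented topic per state" calculation like this can be set up, here is a rough sketch. The DataFrame columns (state, topic, posts) and the lift metric are assumptions for illustration, not the author's actual data or methodology.

```python
# Rough sketch of a topic over-representation ("lift") calculation per state,
# assuming a DataFrame with one row per (state, topic) and a 'posts' count.
# Column names and the lift metric are illustrative assumptions.
import pandas as pd

def top_overrepresented_topic(df: pd.DataFrame) -> pd.DataFrame:
    # Share of each topic nationally, and total post volume per state.
    national_share = df.groupby("topic")["posts"].sum() / df["posts"].sum()
    state_totals = df.groupby("state")["posts"].sum()

    df = df.copy()
    df["state_share"] = df["posts"] / df["state"].map(state_totals)
    df["lift"] = df["state_share"] / df["topic"].map(national_share)

    # For each state, keep the topic whose local share most exceeds its national share.
    idx = df.groupby("state")["lift"].idxmax()
    return df.loc[idx, ["state", "topic", "lift"]].sort_values("lift", ascending=False)

# Example usage with toy numbers:
# toy = pd.DataFrame({"state": ["MA", "MA", "TX", "TX"],
#                     "topic": ["ethics", "politics", "ethics", "politics"],
#                     "posts": [40, 10, 15, 60]})
# print(top_overrepresented_topic(toy))
```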
-
Have you ever been in a meeting where everybody uses so many references to past companies and technologies that you feel you need a Wikipedia deep dive to keep up? I'm a millennial (even a zillennial), so I haven't been around long enough to have that many references at my fingertips (or funky t-shirts); they have always been stories from the past told by my colleagues, which I love to listen to, but with the big spoiler of already knowing who won. But as a millennial I have to use that to my advantage. Good thing I like learning from failures maybe even more than from successes (as a bonus, it helps avoid survivorship bias). Another good thing is that I'm not doing it as a history lesson: I start from very practical needs we have right now, and whoever uses past references had better use them in the right context, because I will beat the analogy to death to understand what we can learn from it. So here is me trying to uncover the usefulness of the Hadoop-Spark analogy for #AI #inference needs with Roman Shaposhnik https://lnkd.in/eu7Qyj_H
How Spark changed the big data and why it is relatable to AI industry innovation nowadays
https://www.youtube.com/
-
Why should we acknowledge past legacy? Why spend time exploring what's already been done? How did Spark change the history of big data, and why did Hadoop not catch up even though it came earlier? How is all this relevant to the AI industry changes happening as we speak? Our 2nd "Llama Shaving" podcast featuring both AIFoundry.org founders - enjoy!
Co-founder of Ainekko and AIFoundry.org, building an Open Source AI future, ex-VC and OSS policy maker, mission-driven entrepreneur
-
" If you find a mountain and you start climbing it and you spend two thirds of your life climbing it and you see that the peak is up there but you already have spent two thirds of you life and you see the peak is up there and you realize to reach at the top you have to go all the way back and start climbing all over again from another path 🌄 . No body wants to do that, people don't want to start over . And it's the nature of later in life that you won't want to start over, you just don't have the time to go back and look for a new path . ⏳ But that may be the best thing to do . And that's when you look back at the greatest artists and creators and they have the greatest ability to start over again . Elon have been called an idiot to start and overdoing something brand new that he supposedly is not qualified for or when Madonna and Paul Simon came out with a new album their existing fans usually hate it because they have adopted a completely new style they have learned somewhere else . So you have to be willing to be a fool and kind of have a beginners mind and go back to the beginning and start over and if you are not doing that you are just getting older . " This example from Naval Ravikant is my favorite on #beginners mindset I have professionally in past few years treaded on several paths only to start all over again . Like how I see Rajan Singh 's story through IIT , Wharton , McKinsey , IPS and start ups. Now imagine at the age of 72 ! 🏆 US representative Don Beter of Virginia enrolled at GMU to pursue a master's degree in C.S. with a concentration in Machine Learning . Yes at 72 ! Rep Beyer is the author of AI Foundation Model Transparency act and the lead cosponsor of CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & Artificial Intelligence Environmental Impacts act. This episode on AI in U.S Congress was a favorite podcast I heard this weekend. https://lnkd.in/gVTczpbe #beginnersmindset #AI #USCongress #computerscience #navalravikant
Practical AI: Machine Learning, Data Science: AI in the U.S. Congress on Apple Podcasts
podcasts.apple.com
-
I'm fed up with endless hype about artificial intelligence from "experts" pitching books, services, or a view of the future shaped by science fiction movies. Most people really don't understand what's real, what's valuable, or where it's going, so I decided to ask an expert some questions that I think everyone needs to understand about AI. I'm interviewing Kyle Polich, a working data scientist, AI expert, and host of the Data Skeptic podcast, at the 2024 SkeptiCal Conference online on July 20. Kyle is one of the best skeptics that I know, so I value his unique perspective. As many of you know, I did graduate work specializing in AI and spent years building silicon, writing algorithms, and working with customers. We've had many informal conversations on this topic before, but this one is for those of you who want to be able to discuss artificial intelligence rationally, ask the right questions, and understand how it affects you and society. I can't wait for "The Skeptical State of AI", and I think you'll find it valuable. Tickets and more information at https://skepticalcon.com
-
What does the recent wave of #GenerativeAI feel like to someone who has seen AI evolve over more than 3 decades? Hear it from Dr. Anand Rao, PhD, MBA in this episode of Leading with Data and get to know the fascinating work he is doing!
Join us for an insightful journey in the latest episode of Leading with Data! We're thrilled to have Dr. Anand Rao, PhD, MBA, Professor of Applied Data Science and AI at Carnegie Mellon University and former Global AI Lead at PwC, as our special guest alongside Kunal Jain. Tune in as Anand explores: https://lnkd.in/g9KMG8My
➡️ The evolving phases of AI throughout the years
➡️ Nature's Calling and Swadharma – discovering what brings joy naturally
➡️ Transitioning back to academia
➡️ His groundbreaking venture, Golden Sparrow, focused on AI Funding
➡️ Must-read book recommendations and invaluable career guidance for students
Don't miss out on this enriching conversation! Watch now and gain invaluable insights. #analyticsvidhya #leadingwithdata #generativeai
-
In 2015, Mark Zuckerberg mused on the idea of a fundamental mathematical law that governs human social relationships and ultimately codifies us. He even speculated that there may be a unifying theory of humans that can be expressed in an equation, and was willing to bet on it. But what if machines start to mathematize our most cherished values—our group and individual values?

Silicon Valley has a penchant for seeking out things that can be quantified through mathematics and machines. Mathematics excels at anything that it was instrumental in discovering, but it is not as robust when applied to things it cannot explain. The algorithms used in Silicon Valley restrict the scope of human social relationships to the kinds of values that are amenable to mathematical explanation. However, if we limit the scope of human social relationships to only what is possible to quantify, we risk creating a thin representation of what we truly care about.

The problem is that in complex human social systems, anything that is easy to quantify tends to be a very superficial representation of our values. The more we try to mathematize human values, the more we focus on what can be easily expressed through mathematical formulas, and the more we neglect everything else.

Values are hard to formulate. They exist at the level of individuals. As individuals, our values emerge and change across the span of our lives. Group values arise out of cultural processes which take time for us to cohere around, embed in institutions, and scale across society. They are constantly in flux—diversity of values arises out of the ambiguity they inherently contain. Each person, each group interprets things differently, which makes it hard to control how society develops. The vagueness of goals enables different values to stay coherent, for many different paths to be trodden, and for a whole population of solutions to compete.

Conversely, if a value is made of math, it is unambiguous and precise. The advantage is that it can more easily be scaled up and propagated. This might provide certainty, reduce ambiguity, and make the world less complex. But this would be at the expense of alternative or immeasurable values that underpin human diversity.

#ai #artificialintelligence #generativeai #airesearch #complexity #chatgpt #complexchange #changemanagement #futureofwork #artificiality #mindforourminds
Values & Generative AI
artificiality.world
Explorer and Innovator at Moschetti LLC
AI hunting down remote work? “I don’t like the sound of that….”