Do you know about the power and pitfalls of learning without any feedback? Neuroscientist Franziska Broker is studying how both humans and machines learn without supervision, much as a child does on its own, and has uncovered a puzzle: unsupervised learning can either help or hinder progress, depending on the conditions. 💡 Read the full story here: https://lnkd.in/em2zWswS
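For intuition, here is a toy sketch of the help-or-hinder effect (my own illustration, not from the article): in a simple self-training loop, unsupervised updates pull a decision threshold toward the gap between stimulus clusters. That helps when the true category boundary lies in the gap, and hurts when it cuts through a cluster.

```python
# Toy self-training loop (illustrative, not from the article): the learner
# labels each point with its current threshold, then refits the threshold
# to the midpoint of the two pseudo-class means.
import numpy as np

rng = np.random.default_rng(0)
# Two stimulus clusters centered at -1 and +1.
x = np.concatenate([rng.normal(-1, 0.5, 1000), rng.normal(+1, 0.5, 1000)])

def self_train(theta, steps=50):
    """Repeatedly pseudo-label with theta and refit theta to those labels."""
    for _ in range(steps):
        lo, hi = x[x < theta], x[x >= theta]
        if len(lo) and len(hi):
            theta = (lo.mean() + hi.mean()) / 2   # drifts toward the density gap
    return theta

def acc(theta, true_boundary):
    return np.mean((x >= theta) == (x >= true_boundary))

# Boundary in the gap (at 0): self-training corrects a poor start.
# Boundary inside a cluster (at -0.5): self-training degrades a perfect start.
for start, true_b in [(0.4, 0.0), (-0.5, -0.5)]:
    print(f"start {start:+.1f}: accuracy {acc(start, true_b):.2f} "
          f"-> {acc(self_train(start), true_b):.2f}")
```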
-
🌟 Can conditional autoencoders extrapolate? One of the most interesting problems in machine learning is extrapolating from past knowledge. Here we explore a simple question: can conditional autoencoders improve the analysis of data with a known factor of variability, and extrapolate along that parameter? We explore these concepts on ferroelectric domain switching as a convenient toy model (and a way to learn about the polarization switching mechanisms in these materials), but the approach is clearly universal. One more paper from the first COVID year, in collaboration with Yongtao Liu, Bryan Huey, and Maxim Ziatdinov: https://lnkd.in/d2NmY-aD
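A minimal PyTorch sketch of the general idea (architecture, sizes, and the toy data are my illustrative assumptions, not the paper's code): the decoder sees the latent code concatenated with the known parameter, so after training one can hold the code fixed and sweep the parameter past the training range.

```python
# Minimal conditional autoencoder sketch (illustrative assumptions, not the
# paper's code). The decoder receives the latent code z concatenated with a
# known parameter c, so we can later hold z fixed and sweep c beyond the
# training range to probe extrapolation.
import torch
import torch.nn as nn

class ConditionalAE(nn.Module):
    def __init__(self, dim_x=64, dim_z=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_x, 32), nn.ReLU(), nn.Linear(32, dim_z))
        self.dec = nn.Sequential(nn.Linear(dim_z + 1, 32), nn.ReLU(), nn.Linear(32, dim_x))

    def forward(self, x, c):
        z = self.enc(x)                          # parameter-independent content
        return self.dec(torch.cat([z, c], -1))   # reconstruction conditioned on c

model = ConditionalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: responses whose amplitude scales with a known factor c in [0, 1].
base = torch.randn(256, 64)
c_train = torch.rand(256, 1)
x_train = c_train * base

for _ in range(500):
    opt.zero_grad()
    nn.functional.mse_loss(model(x_train, c_train), x_train).backward()
    opt.step()

# Extrapolation probe: decode the same latents at c = 1.5, outside the
# training range, and check whether the output scales as expected.
with torch.no_grad():
    z = model.enc(x_train)
    x_extra = model.dec(torch.cat([z, torch.full((256, 1), 1.5)], -1))
```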
-
This is the best summary of our technological progression I have come across. “Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.”
Pondering Sam Altman’s Age Of Intelligence Essay — Forbes
apple.news
-
Do you know the mouse that shaped many of today's learning algorithms? Read this 👉 https://lnkd.in/evYXM9qE if you want to know how old reinforcement learning really is, why it was inspired by cats, and how we can use it to understand human behavior.
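As a taste of the principle involved, here is a tiny sketch (mine, not from the article) of Thorndike's law of effect in code: actions followed by reward become more likely, which is the seed of modern reinforcement learning.

```python
# Law-of-effect toy (my sketch, not from the article): actions followed by
# reward become more likely, learned from a reward prediction error.
import random

q = [0.0, 0.0]          # estimated value of two actions (two maze arms)
p_reward = [0.2, 0.8]   # unknown to the learner: arm 1 pays off more often

for trial in range(1000):
    # Mostly repeat the currently most valuable action, sometimes explore.
    a = random.randrange(2) if random.random() < 0.1 else int(q[1] > q[0])
    r = 1.0 if random.random() < p_reward[a] else 0.0
    q[a] += 0.1 * (r - q[a])   # prediction-error update (Rescorla-Wagner style)

print(q)  # q[1] ends near 0.8: the rewarded action comes to dominate
```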
-
I'm truly fascinated by the deep connections between artificial intelligence, machine learning, and mathematics. Whether it's solving systems of linear equations, delving into eigenvalues and eigenvectors, exploring Hessian matrices, understanding functions from their domains and ranges to bijectivity, or navigating the vast world of probability, there is an endless journey of discovery. Every day offers something thrilling. What are your thoughts on this fascinating intersection? Are you as excited as I am about these endless learning opportunities?
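To make one of those connections concrete, a bite-sized example (my own): the eigenvalues of a Hessian classify a critical point of a function, which is exactly what optimizers in machine learning care about.

```python
# Example: Hessian eigenvalues classify a critical point. For
# f(x, y) = x^2 - y^2 the Hessian at the origin has mixed-sign
# eigenvalues, so the origin is a saddle, not a minimum.
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])          # Hessian of f(x, y) = x^2 - y^2
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)                   # [-2.  2.] -> saddle point
```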
-
I just discovered a great paper on my GraphML learning journey: https://lnkd.in/dSPiKTzC. This paper introduces a novel diffusion process for learning on graphs, treats GNNs as a PDE discretization (unlocking well-known PDE techniques for graphs), and overcomes issues like over-smoothing and over-squashing.
GRAND: Graph Neural Diffusion
arxiv.org
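As I read it, the core idea can be sketched in a few lines (a toy simplification with a fixed random-walk matrix, not the authors' code, which uses learned attention): feature propagation follows the graph diffusion equation dX/dt = (A - I)X, and each GNN layer is one explicit Euler step.

```python
# Toy rendition of the paper's core idea (my simplification: a fixed
# random-walk matrix instead of the paper's learned attention).
# Feature propagation follows the graph diffusion equation
# dX/dt = (A - I) X, integrated with explicit Euler steps.
import numpy as np

# A 4-node path graph with self-loops, row-normalized.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
A = adj / adj.sum(axis=1, keepdims=True)

X = np.eye(4)     # one-hot node features
tau = 0.5         # Euler step size; step count plays the role of depth
for _ in range(10):
    X = X + tau * (A @ X - X)   # one Euler step of dX/dt = (A - I) X

print(X.round(3))  # features diffuse toward a smooth steady state
```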
-
For anyone curious about catching up with recent work on LLM explainability, here is a fantastic read! #llm #explainableai
LLMs are huge piles of neurons that somehow give useful outputs, but we’re still not sure how they work under the hood: 𝗮𝘁 𝘄𝗵𝗮𝘁 𝗽𝗼𝗶𝗻𝘁 𝗱𝗼 𝗿𝗲𝗮𝗹 𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝘀 𝗲𝗺𝗲𝗿𝗴𝗲 𝗳𝗿𝗼𝗺 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹 𝗺𝗲𝘀𝘀? 🤔 Anthropic's team has made fascinating discoveries in the process of understanding LLMs that I want to share today. Since their work has great figures, my post degenerated into a full article, so I published it as a Community Article on Hugging Face! 𝙍𝙚𝙖𝙙 𝙩𝙝𝙚 𝙖𝙧𝙩𝙞𝙘𝙡𝙚 𝙝𝙚𝙧𝙚👇👇
Extracting Concepts from LLMs: Anthropic and OpenAI’s recent discoveries 📖
huggingface.co
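For the curious, the central technique in this line of work, a sparse autoencoder over model activations, can be sketched roughly as follows (sizes, names, and the random stand-in activations are my assumptions, not the labs' code):

```python
# Sketch of a sparse autoencoder over model activations (illustrative
# assumptions throughout). An overcomplete feature dictionary plus an L1
# penalty keeps only a few features active per input, which is what makes
# the learned features easier to interpret.
import torch
import torch.nn as nn

d_act, d_feat = 512, 4096   # expand activations into many candidate features

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(d_act, d_feat)
        self.dec = nn.Linear(d_feat, d_act)

    def forward(self, a):
        f = torch.relu(self.enc(a))   # sparse, non-negative feature activations
        return self.dec(f), f

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(1024, d_act)      # stand-in for real residual-stream activations

for _ in range(100):
    opt.zero_grad()
    recon, f = sae(acts)
    # Reconstruction vs. sparsity: the trade-off behind "monosemantic" features.
    loss = nn.functional.mse_loss(recon, acts) + 1e-3 * f.abs().mean()
    loss.backward()
    opt.step()
```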
-
Quantum Techniques in Machine Learning (QTML) is an annual international conference focusing on the interdisciplinary field of quantum technology and machine learning.
-
My favorite part of machine learning is its crossover with cognitive science. The paper "Designing Ecosystems of Intelligence from First Principles" demonstrates that well, bringing together experts from the philosophical and engineering worlds to break down what it means to create a network of processes that mimic human thought and collaboration: https://lnkd.in/gMwWyqUf
arxiv.org