Dive into the dominant (and often connected) ways in which nonlinearity is implemented in modern ML. Read Manuel Brenner's latest article now! #MachineLearning #LLM
Towards Data Science’s Post
More Relevant Posts
-
The field of ML has evolved to balance these two forces: leveraging the simplicity of linear models where possible, while incorporating nonlinearity to handle the complexity of the real world. Read more from Manuel Brenner. #LLM #MachineLearning
A Guide To Linearity and Nonlinearity in Machine Learning
towardsdatascience.com
-
Today we have a great debate: on the one hand, companies and some researchers claim that models possess reasoning capability; on the other, critics define LLMs as stochastic parrots. Read more in Salvatore Raieli's latest article now. #LLM #MachineLearning
The Savant Syndrome: Is Pattern Recognition Equivalent to Intelligence?
towardsdatascience.com
-
Hello Community 👋 Today, while reading through several articles, I came across a mathematical problem that is simple to state yet deeply complex, and it remains unsolved: #TheCollatzConjectureProblem. It states that any positive integer will eventually reach 1 (and then get stuck in the 4 → 2 → 1 loop) when the following rules are applied repeatedly:
(A) If the number is even, divide it by 2.
(B) If the number is odd, multiply it by 3 and add 1.
Repeat these steps until the number reaches 1.
The conjecture is named after German mathematician Lothar Collatz, who proposed it in 1937. Despite its apparent simplicity, it has remained unsolved for more than 80 years. #CollatzConjecture #Mathematics #ProblemSolving #NumberTheory #Logic
Have a look at the following example for a better understanding:
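The two rules above can be sketched in a few lines of Python (a minimal illustration of my own, not part of any proof):

```python
def collatz_steps(n: int) -> int:
    """Count how many applications of the Collatz rules take n to 1."""
    steps = 0
    while n != 1:
        if n % 2 == 0:        # rule (A): even -> halve it
            n = n // 2
        else:                 # rule (B): odd -> 3n + 1
            n = 3 * n + 1
        steps += 1
    return steps

# 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1 (8 steps)
print(collatz_steps(6))
# 27 is a famous example: a small start with a surprisingly long trajectory (111 steps)
print(collatz_steps(27))
```

Note that the loop is not guaranteed to terminate for all n; that is exactly what the conjecture asserts and what nobody has proved.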
-
As ML continues to advance, understanding nonlinear embeddings is crucial. Manuel Brenner dives into how these transformations simplify tasks, including examples like support vector machines (SVMs) and emerging Kolmogorov-Arnold Networks (KANs). #ML #LLM
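As a toy sketch of the embedding idea (my own illustration, not from the article): the XOR pattern is not linearly separable in the plane, but a simple nonlinear feature map makes it separable by a single linear rule, which is the trick kernel SVMs exploit implicitly.

```python
# XOR labels are not linearly separable in (x1, x2): no single line
# splits the two classes below.
points = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
labels = [1, -1, -1, 1]  # +1 when the signs agree, -1 when they differ

# Nonlinear embedding phi(x1, x2) = (x1, x2, x1 * x2) adds a product feature.
embedded = [(x1, x2, x1 * x2) for (x1, x2) in points]

# In the embedded space, the purely linear rule sign(z3) separates the classes.
predictions = [1 if z3 > 0 else -1 for (_, _, z3) in embedded]
print(predictions == labels)
```

A kernel SVM never computes the embedding explicitly; the kernel function evaluates inner products in the embedded space directly.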
A Guide To Linearity and Nonlinearity in Machine Learning
towardsdatascience.com
-
Unveiling Patterns in Prime Numbers by Position: An Innovative Computational and Mathematical Approach
-
ENTROPY in INFORMATION THEORY
Claude Shannon's 1948 paper, "A Mathematical Theory of Communication," introduced the information-theoretic concept of entropy, H(X). It is the expectation of the information content [-log p(x)] of a probability distribution p(x), taken with respect to that same distribution p(x). If you instead take the expectation with respect to some other distribution q(x), you get the cross-entropy between p(x) and q(x). This is the famous "cross-entropy loss function," which plays a central role in supervised machine learning. The cross-entropy loss is minimized when the algorithm's predicted distribution p(x) exactly matches the label distribution q(x) for every x in X. #Artificialintelligence #Mathematics
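These definitions translate directly into code. A minimal sketch using log base 2, so the units are bits (conventions differ on which argument carries the expectation; here the first argument does):

```python
import math

def entropy(p):
    """H(p) = E_p[-log2 p(x)]: expected information content under p itself."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def cross_entropy(p, q):
    """H(p, q) = E_p[-log2 q(x)]: q's information content averaged under p."""
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

fair = [0.5, 0.5]
biased = [0.9, 0.1]
print(entropy(fair))                 # 1 bit: a fair coin is maximally uncertain
print(cross_entropy(fair, biased))   # larger: coding fair data with the wrong model costs extra bits
print(cross_entropy(fair, fair))     # equals entropy(fair): the two distributions match
```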
-
I was working on this last night!! 🧐 I'm testing how Shannon entropy can help reduce spam LLM input calls to AI agents. People jailbreaking AI agents enter random text like "adfadafxafsdf," and with the kind of traffic we're seeing at Theoriq, these calls rack up costs fast. Here's what I'm thinking:
1️⃣ Just tell the agent to filter what it thinks is spam 🛠️ -- fast and simple, but risks blocking legitimate user inputs as false positives
2️⃣ Use a regex to flag random-looking entries, e.g. pattern = re.compile(r'^[a-zA-Z]{5,}$') then return bool(pattern.match(text)) 🔨 -- lightweight and effective, but might miss more complex spam patterns
3️⃣ Use an entropy score with an experimental threshold 💡 -- smarter detection, but what's the right threshold?
4️⃣ Model-based detection 🧠 -- powerful and advanced; leaving this to the Theoriq Research Team
5️⃣ Gate user and input access more aggressively 🔒🔑 -- will frustrate real users, many of whom are new to AI and agents
Wdyt?
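Option 3️⃣ can be prototyped in a few lines. The threshold below is purely hypothetical and would need tuning on real traffic; character-level entropy is only a weak signal, since short gibberish and short legitimate messages can score in overlapping ranges:

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of the character distribution."""
    if not text:
        return 0.0
    n = len(text)
    counts = Counter(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_random(text: str, threshold: float = 3.5) -> bool:
    # threshold is a hypothetical starting point, not a tuned value
    return char_entropy(text) > threshold

print(char_entropy("aaaaaaa"))        # 0.0: one repeated symbol carries no surprise
print(char_entropy("adfadafxafsdf"))  # ~2.1: home-row mashing reuses few characters
```

Note the catch this exposes: keyboard mashing concentrated on a few keys scores *low*, while uniformly random strings score high, so a single threshold cuts in both directions. That is exactly the "what's the right threshold?" problem.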
-
A classic. Entropy is a special case of cross-entropy, which bridges probability theory and information theory.
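The "special case" can be checked numerically: the gap H(p, q) - H(p) is the KL divergence, which is non-negative and vanishes exactly when q = p (a quick sketch of my own, with the expectation taken under the first argument):

```python
import math

def cross_entropy(p, q):
    # E_p[-log2 q(x)]
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

h_p = cross_entropy(p, p)          # cross-entropy of p with itself is just H(p)
gap = cross_entropy(p, q) - h_p    # KL(p || q) >= 0 by Gibbs' inequality
print(h_p, gap)
```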
-
The debate on the value of LLMs often overlooks an important question: what, in reality, are they? This month's Frame explores this question through the lens of the work of philosopher Graham Harman, arguing that scientists and humanists (who are often at odds with one another) both get LLMs wrong, but for different reasons. By rejecting both analyses as reductive, we can see what an LLM truly is and, in doing so, open up a more productive, meaningful relationship with the technology. https://lnkd.in/ezVxQnKg #artificialintelligence #LLMs
This month's Frame: what is an LLM anyway?
stripepartners.substack.com
-
Harmonic announces Series A funding to accelerate development of mathematical superintelligence
https://www.businesswire.com/news/home/20240923548068/en/Harmonic-Announces-Series-A-Funding-Round-To-Accelerate-Development-of-Mathematical-Superintelligence