Adam Liska
London, England, United Kingdom
6K followers
500+ connections
About
Co-founder of Glyphic, builder, and an aspiring beekeeper 🐝
Other similar profiles
- Miljan Martic (London)
- Si-Qi LIU (London)
- Francisco Girbal Eiras (Oxford)
- Alexandre L. (London)
- Xuechen Liu (Greater Tokyo Area)
- Héctor Martínez Alonso, Siri at Apple (Cambridge)
- Siddharth Sharma (London)
- Chris Lucas (United Kingdom)
- Josh Levy-Kramer (London)
- Jan Freyberg (London)
- Mateusz Malinowski (United Kingdom)
- Dominik Grewe (London)
- Divyansh Saxena (London)
- Jackie Kay (United Kingdom)
- Rishabh Shukla (London)
- Audrūnas Gruslys, Machine learning researcher (London Area, United Kingdom)
- Evangello Flouty (Greater London)
- Maureen Huang, Software Engineer at Bloomberg LP (Hong Kong)
- Siddharth Sigtia (London)
- Amir Alansary (London)
Explore more posts
Arjun Bansal
Evaluation of LLM applications is a big buzzword right now... but when do you really need new tools? Our latest blog post from Wenzhe Xue cuts through the noise to show with examples how built-in libraries such as pytest are often enough to get started. As complexity grows along the following dimensions, we're here to help! Metric-based ➡ Human review; off-the-shelf eval or LLM-as-a-judge ➡ Custom eval models trained on your data; offline ➡ Online, real-time. https://lnkd.in/gGb3Sjah
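The post's claim that plain pytest is enough to start evaluating LLM outputs can be sketched as below. The `generate` function is a hypothetical stub standing in for a real model call (so the example runs offline), and the substring metric is the simplest possible check; pytest itself needs no import for bare `assert`-style tests.

```python
# Metric-based LLM eval using nothing but plain asserts, pytest-style.
# `generate` is a hypothetical stand-in for an LLM call, not a real API.

def generate(prompt: str) -> str:
    # Canned outputs so the example runs offline.
    canned = {
        "Capital of France?": "The capital of France is Paris.",
        "2 + 2?": "2 + 2 equals 4.",
    }
    return canned[prompt]

def exact_substring(answer: str, expected: str) -> bool:
    """Simplest metric: does the expected fact appear in the answer?"""
    return expected.lower() in answer.lower()

def test_capital_of_france():
    assert exact_substring(generate("Capital of France?"), "Paris")

def test_arithmetic():
    assert exact_substring(generate("2 + 2?"), "4")

if __name__ == "__main__":
    test_capital_of_france()
    test_arithmetic()
    print("all evals passed")
```

Running `pytest` on this file would discover and run both `test_` functions; only once checks like these outgrow exact matching does a dedicated eval tool earn its keep.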
Dave Duggal
#RAG DOES NOT REDUCE HALLUCINATIONS IN #LLMS: Think I've called this out a few times, but the validation is nice - https://lnkd.in/eAi6zxW2. The article makes two well established points: 1) "Hallucination is a problem of reasoning and not relevance! Any amount of relevant text fed through RAG to a machine will retain the original perplexity and entropy in the system to hallucinate independent of the text"; and 2) "The initial challenge arises from the retrieval phase’s reliance on semantic similarity, which is quantified by inner product spaces or cosine similarities. This metric, while capturing semantic closeness, does not differentiate between factually accurate and inaccurate information." This should be clear to everyone, but the industry still shills flimsy RAG scaffolding anyway (shovels to gold miners).
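The post's second quoted point, that cosine similarity captures closeness but not truth, can be shown with a toy example. The vectors below are made-up stand-ins for embeddings, deliberately chosen so a fluent but false passage scores higher than a correct one; real embeddings exhibit the same failure mode because the metric is purely geometric.

```python
# Cosine similarity measures semantic closeness, not factual accuracy.
# All three vectors are hypothetical "embeddings" chosen for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query     = [1.0, 0.9, 0.1]  # e.g. "When was the Eiffel Tower built?"
true_doc  = [0.6, 0.4, 0.7]  # correct passage, phrased differently
false_doc = [1.0, 0.8, 0.2]  # on-topic, fluent, factually wrong passage

# A similarity-based retriever would rank the false passage first.
assert cosine(query, false_doc) > cosine(query, true_doc)
```

Nothing in the ranking function ever consults a source of truth, which is the gap the quoted article is pointing at.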
Renchu (Richard) Song
One year ago, we started Epsilla (YC S23) with a big dream. Today, as we celebrate our first birthday, I want to share the story that brought us here. Many people ask why I left my job as a senior director at TigerGraph, took a huge pay cut, and founded my own company. Ever since I was a kid in elementary school, I had watched the story of Bill Gates building the Microsoft empire and dreamed of one day building my own company. This dream grew stronger when I came to the US in 2014 to pursue my master's degree at Cornell and applied to Y Combinator with my roommate, Hao Ma, with a simple anonymous social app idea. Even though it didn't work out then, it planted the seed in my heart. During my time at Cornell, I met four other entrepreneurial students, Landice Gao, Xiaoyan Wu, Jiaqi Su, and Kejia Tao, and started a company focused on indoor navigation. I wished it could have gone further, but as an international student, I needed a job that could sponsor my working visa to stay in the US. So I joined TigerGraph in 2015, a pre-Series A startup at the time. From day one, I tried my best to grow as fast as I could. At TigerGraph, I learned a lot from industry leaders like Amol Ghoting, Li Chen, Mingxi Wu, Ruoming Jin, Yu Xu, Like Gao, Tim Zhang, Songting Chen, Shuo Yang, and many other talented colleagues. Most importantly, I met my future co-founders, Jing Qin and Eric Y., and our future advisor and angel investor, Jay (JieBing) Yu, PhD. When ChatGPT launched in November 2022, it blew my mind. I thought such advanced technology would appear in 2045 (according to "The Singularity Is Near"), but it was already here. However, I quickly realized that large language models (LLMs) didn't have memory and their knowledge was fixed. Our friend Weimo Liu mentioned vector databases to us, and I saw the huge potential of combining AI with data. This led to the birth of Epsilla, aiming to build the data and knowledge foundation for LLMs.
Becoming a CEO from a technical background has been tough, especially for an introvert like me. I am so proud of my growth so far. In the past year, I’ve talked to more people than I have in my entire life. I’m grateful for all the interesting people I’ve met around the world, which wouldn’t have happened without starting this journey. Looking ahead, I’m very dedicated to the success of Epsilla. Our journey is just beginning, and I’m excited to see what the future holds for us. Thank you to everyone who has been part of this journey—our customers, partners, team, investors, and supporters. Your belief in our vision keeps us going every day. Here’s to one year of Epsilla, and many more to come! 🚀✨
Sal Matteis
Re-reading this from 2017 (a good 7.5 years ago), well before ChatGPT. What has changed, and what is now clear? From the original post: ''🧬Biological evolution has taken us to the point we enjoy today over millions of years. 💡Technological evolution is different; it moves in an exponential manner. 🛠️Technological evolution has made it possible to educate more people, increase life spans and significantly reduce the number of people under the poverty line. 🦾Technological evolution is now moving us to a new era. The Era of the AI. Man and intelligent machine will coexist and merge. More than 50% of current jobs will be lost. Are we prepared for this? I doubt that we are at this point, but we've got to try. What are the key questions we should be asking ourselves and our world leaders and government representatives?'' #ai #tech #genai #superintelligence #humanity #ethics
Priyanka Nath
💡 Super Weights in LLMs: How Pruning Them Destroys an LLM's Ability to Generate Text. TL;DR: "Super weights" are a small subset of outlier parameters that are crucial to LLM performance and can have an outsized impact on a model's behaviour. Pruning as few as a single super weight can 'destroy an LLM's ability to generate text – increasing perplexity by 3 orders of magnitude and reducing zero-shot accuracy to guessing'. 📜 https://lnkd.in/g_ps-CRf 💕 Subscribe to receive more such articles in your inbox - vevesta.substack.com #LLM #Machinelearning #research #latestResearch #datascience
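The pruning operation itself is trivial to sketch: find the single largest-magnitude entry (a candidate "super weight") and zero it out. The matrix below is made up, and this toy does not demonstrate the perplexity blow-up; it only shows what "pruning a single weight" means mechanically.

```python
# Toy sketch of single-weight pruning. W is a hypothetical 2x3 weight
# matrix; real super-weight analysis runs on the layers of an actual LLM.
W = [[0.01, -0.02, 8.5],   # 8.5 is the outlier entry in this toy matrix
     [0.03,  0.00, -0.01]]

# Find the coordinates of the largest-magnitude weight...
i, j = max(((r, c) for r in range(len(W)) for c in range(len(W[r]))),
           key=lambda rc: abs(W[rc[0]][rc[1]]))

# ...and prune it by setting it to zero.
W[i][j] = 0.0
print((i, j))  # (0, 2)
```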
Jacob Solawetz
The problem with most NLP benchmarks is that they presume the landscape of possible routines remains fixed - encouraging researchers to swim up the same rivers. When you invent completely new routines, you need new benchmarks. For Arcee, this is proprietary knowledge transfer, which we had to swim up the security rivers of customer VPCs to uncover. As a nice rule of thumb, the domain-transfer gain compared to public benchmarks like PubMedQA or LegalBench (which are exposed to general pretraining) is often 10-20X in effectiveness. And for us, merging often unlocks capabilities that were never possible at all, like the internalization of employee titles in parametric memory.
Yusuf E.
If you're considering a career as an AI Engineer (as in https://lnkd.in/eBvGmfvB), this introductory course on LlamaIndex is an excellent starting point. You may be familiar with LangChain, and while LlamaIndex is somewhat less well-known, I believe it to be a superior and simpler alternative. Keep in mind, though, that both platforms have their respective strengths and weaknesses. Unfortunately, this course doesn't dive into the intricacies of how LlamaIndex talks with LLMs through clever prompt engineering. (You can find the prompts by browsing their code, though.) However, it will enable you to create effective LLM tools using just a few lines of code. The agentic approach it shows is particularly useful. https://bit.ly/4bvvciI
Ajit Banerjee
Big news today: XetHub has joined forces with Hugging Face (check it out here: https://lnkd.in/e-jxSeCf). Our missions are a perfect match—creating the GitHub of AI at a time when model and dataset sizes are skyrocketing. We're gearing up to handle petabytes to exabytes of data daily, and it's happening fast! Our experience at AAPL showed us the unnecessary hurdles in AI/ML data management and collaboration. So, about three years ago, we launched XetHub with Jeff Bezos's beer-tasting analogy (https://lnkd.in/gdPnTnPs) in mind: building tech that lets AI/ML developers innovate on their special sauce without sweating the infrastructure. First and foremost, I want to give a huge shoutout to the entire XetHub team for their hard work, dedication, and innovation! Massive thanks to Matt McIlwain and Ted Kummert at Madrona for backing us every step of the way. A personal shoutout to my friends and colleagues who’ve supported me with angel investments over my startups: Arish Ali, Brad Buda, Gordon Chaffee, Richard Dalzell, Cuong Do, Vikas Gupta, Reza Hussein, Boris Jabes, Tom Killalea, Shishpal Rawat, Mikhail Seregine, Anton Vaynshtok, Peter Vosshall, John Z. and Jon Ingalls! With over two decades in infrastructure, I’ve witnessed multiple paradigm shifts firsthand. My time at Inktomi during the dot-com boom (1998-2003) was exhilarating, and the current deep learning revolution brings back that same excitement. I couldn’t be more thrilled to join Hugging Face and unleash new features for the world’s largest ML community! 🚀
Pulkit Agrawal
Following the excellent example from Rows.com, here's another from another hot company 🚀 folk (from renowned French studio Hexa, née eFounders, with a $9M seed from Accel, Scott Belsky, Arash Ferdowsi, Nir Eyal, Nicolas Dessaigne and more) and their great UI... What I love about this one = smart empty states. A lot of products now use "cards" as a pattern when a user first lands in the product, to encourage them to take key actions. But folk does a couple of really smart things that I haven't always seen: 1️⃣ Sets priority. Horizontally stacked cards often don't have this, and offer equally weighted CTAs for all the cards... but users want to make fewer decisions ⬇️ 👉 Folk makes the first CTA filled and contrasting with the others, so it's even easier to get started. 2️⃣ Generates data for me. Often products require new users to do a bunch of work or commit to importing data before there is any reward or value. This creates friction and means more people miss the aha moment 🙈 👉 Folk offers "sample contacts" to use instead of my data, so I can get to see the value of a CRM without committing. This reduces waiting time and prevents me bouncing. Thanks Simo for the submission -- a great example to other #founders: submit your user onboarding or in-app flows if you want some recognition 🔦 Even if you think it's not the best, share an in-app flow to Chameleon's User Onboarding Olympics to earn cash and be in the pot for our 3 x $300 gold medals 🥇 We're all learning together 😇 Click here to upload a screenshot now: chameleon.io/go/olympics 🔵🟡⚫️🟢🔴 Or please repost ♻️ or comment 💬 to help your network find this great opportunity! I regularly share examples of good UI/UX and best practices for user onboarding and PLG, so follow me 🔔 (Pulkit) for more of that in your feed 👋
Amr Awadallah
In our latest blog by Product Manager Nick Ma and Vivek Sourabh, we compared Boomerang with the latest embeddings from OpenAI and Cohere. OpenAI released its latest models, text-embedding-3-small and text-embedding-3-large, in January 2024. Cohere released its latest models, embed v3 light and embed v3, in November 2023. Performance Highlights: 🔹English: Matches OpenAI's text-embedding-3-small and Cohere's embed v3 light, just slightly behind their larger models. 🔹Multilingual: Surpasses OpenAI and Cohere, delivering superior performance across languages. Why Choose Boomerang? 🔹Compact & Powerful: Ideal for production with exceptional efficiency. 🔹Proven Metrics: Excels in Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). Discover how Boomerang can boost your projects. Sign up for a free account and connect with us on our forums or Discord for feedback and support. https://gag.gl/cpB7re?
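Mean Reciprocal Rank, one of the two metrics the post cites, is straightforward to compute: for each query, take the reciprocal of the rank of the first relevant result and average over queries. A minimal sketch (the document IDs are made up):

```python
# Mean Reciprocal Rank (MRR) over a batch of queries.
def mrr(ranked_results, relevant):
    """ranked_results: one ranked list of doc ids per query;
    relevant: one set of relevant doc ids per query.
    Queries with no relevant hit contribute 0."""
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        for rank, doc_id in enumerate(results, start=1):
            if doc_id in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Query 1 hits at rank 1, query 2 at rank 2 -> (1 + 0.5) / 2
print(mrr([["a", "b"], ["c", "d"]], [{"a"}, {"d"}]))  # 0.75
```

MAP is computed similarly but averages precision over every relevant hit rather than just the first, which is why the two metrics are usually reported together.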
Jeremy Smith
AI Politician: How We Went Viral with Steve Endacott. June was a busy time for Neural Voice; we were on news channels around the world talking about Neural Voice and how this technology can change the politics of the future. Take a look at some of my thoughts on The IDL Podcast - Innovators, Disruptors, Leaders #ai #aipolitician #neuralvoice
Deniz Kavi
Using experimental data to improve the performance of generative AI models can be a powerful way to increase protein fitness over rounds of design. This is why we now support fine-tuning our AI models using assay data such as, but not limited to, thermostability, enzyme activity, and antibody binding affinity. Get in touch to learn more about how our software recommends point mutations for property optimization based on experimental results!
Christian Force
I'm loving the introduction of Structured Outputs (https://lnkd.in/eFaK22Uy) into OpenAI's API. We've been doing a lot of work with hybrid pipelines: some LLM steps, and some human steps -- so a lot of our handoffs happen in JSON, not in plain text. We had to build our own schema validation layer early in development, so this addition saves a bunch of work and maintenance headache. At the same time, we haven't updated any of our pipelines to use this feature -- not yet. Why not? In the past 6 months, the best-performing LLM for our use case has changed monthly, with the mantle passing back and forth between OpenAI, Anthropic, Mistral, and Llama as they release their latest models. I haven't seen any signs of this slowing, not yet. OpenAI has been a leader at adding development-friendly features to their API -- and this is a very smart play for them. They've got the budget to layer quality-of-life improvements around their core LLM offering and draw more developers to their platform. But we're still early enough in the cycle that *any* application built on top of them will want to be able to toggle frequently, to stay on the model that best supports its use case. While the breakneck pace of LLM releases continues, think carefully about your flexibility <> maintainability tradeoff!
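The hand-rolled validation layer the post describes can be as small as a key-and-type check on the model's JSON reply. A sketch with hypothetical field names (a real pipeline would carry its own schemas, and provider-side Structured Outputs would make the check redundant):

```python
# Minimal provider-agnostic validation of an LLM's JSON handoff.
import json

# Hypothetical schema for a summarization step: required keys and types.
SCHEMA = {"summary": str, "sentiment": str, "score": float}

def validate(raw: str, schema: dict) -> dict:
    """Parse a JSON reply and enforce required keys and value types."""
    data = json.loads(raw)
    for key, typ in schema.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], typ):
            raise TypeError(f"{key} should be {typ.__name__}")
    return data

reply = '{"summary": "Customer is happy.", "sentiment": "positive", "score": 0.9}'
assert validate(reply, SCHEMA)["sentiment"] == "positive"
```

Because it sits outside any one provider's API, a layer like this keeps the pipeline portable while models are swapped monthly, which is exactly the flexibility the post argues for.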
Christopher Foster-McBride
The Open Medical LLM Leaderboard on Hugging Face dropped last Friday (thanks Geoff Kwitko for the heads up), and it aims to track, rank and evaluate the performance of large language models (LLMs) on medical question answering tasks (see link below). It evaluates LLMs across a diverse array of medical datasets, including MedQA (USMLE), PubMedQA, MedMCQA, and subsets of MMLU related to medicine and biology. The leaderboard offers a comprehensive assessment of each model's medical knowledge and question answering capabilities. The datasets cover various aspects of medicine such as general medical knowledge, clinical knowledge, anatomy, genetics, and more. They contain multiple-choice and open-ended questions that require medical reasoning and understanding. More details on the datasets can be found in the "LLM Benchmarks Details" section in the link. P.S. Good to know that the GPT-4 model is leading the way (it is what I use in the Medical Coding and Documentation agent). Digital Human Assistants #llmhealthcare
Dhawal Chheda
Big news from Anthropic! When it rains, it pours - and Anthropic just unleashed 3 exciting updates that are game-changers for AI enthusiasts: 🔥 New Sonnet 3.5 (33.4% -> 49.0% on SWE-bench!) 🔥 New Haiku 3.5 (outperforming 4o-mini!) 🔥 💻 Brand-new 'computer use' functionality to supercharge productivity! These updates are about to shake things up in the AI space. The progress on SWE-bench alone is mind-blowing. This is Anthropic making waves yet again. Who's ready to dive in? Check out the details in the video! 👇 #Anthropic #AI #MachineLearning #Innovation #TechUpdates #AIDevelopment #ProductivityBoost #ArtificialIntelligence #DeepLearning #AIFuture #TechTrends
Shannon Williams
As a non-developer, I'm really excited about how AI can help anyone talk to APIs and systems to do programmatic things. UIs are massively inefficient, and the potential to work with any system or data just like you would work with another person is really exciting. To do that, however, we need more models that excel at running functions and tool calling. Especially among open source models, this capability has been sorely lacking. My colleagues Sanjay Nadhavajhala and Yingbei Tong have been working to extend popular open source models with this type of function calling, while at the same time not diminishing the models' native intelligence. This week they released Rubra 0.1, a set of new models that are enhanced with function calling. I'm really excited to see this area develop quickly, as it will dramatically expand access to this type of tool-calling AI. You can read their blog here, or visit http://rubra.ai to learn more.
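The tool-calling loop the post describes has two halves: the model emits a structured call (a function name plus JSON arguments), and the application dispatches it to a registered function. A sketch of the application side; the model output here is hard-coded, where a real system would receive it from Rubra or another function-calling model:

```python
# Dispatching a model's structured tool call to a registered function.
import json

TOOLS = {}

def tool(fn):
    """Register a function so the model is allowed to call it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stub; a real tool would call a weather API here.
    return f"Sunny in {city}"

# Pretend the model returned this structured call:
model_output = '{"name": "get_weather", "arguments": {"city": "London"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # Sunny in London
```

Keeping the registry explicit means the model can only invoke functions the application has opted in, which matters once non-developers are wiring these flows together.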
Marcelo Andrade Perino
Tons of great points in this pod - one in particular sticks out to me: Beyond just improving employee productivity (improving the input), **AI enables software that delivers the work product output (delivering the outcome).** So, software companies can now look like a services business in terms of what they deliver for a buyer. #AI #b2b #SaaS
Simrat Singh
Synthflow AI recently raised a chunky seed round of $7.4M. Their pitch is that their no-code platform makes it easier than ever for SMEs (which usually have low tech capabilities) to deploy voice agents for a multitude of use cases. And they do have a pretty good flow builder. At Hooman Labs, we too are striving to make voice agents as simple as possible to deploy. But we believe that while self serve is important to enable users to play around with voice agents, it is more critical to build a suite of plug and play agents (including necessary integrations) which don't need a lot of tinkering by the user. To do this, a voice agent provider needs to go deep into a vertical and understand the workflows of their user, end to end. We're doing this for e-commerce and building agents which solve calling needs of brands across the life cycle of the customer, from pre-sales to post. Some of the use cases we're working on are consultative selling, delivery updates, abandoned cart conversion, return requests and warranty claims. If you run an e-commerce brand and are looking to automate your calling function, would love to talk to you and understand how we can help. Tarun Rathore #voiceai #conversationalcommerce #ecommerce
Others named Adam Liska
- Adam Liška 🔵⚪️🔵, Vice President of Manufacturing | Industrial Transformation | Transformation of Manufacturing Companies (Prague, Czechia)
- Adam Liska, Portfolio Director at Anthem (Littleton, CO)
- Adam Liska, HR Generalist (Pittsburgh, PA)
- Adam Liska, Associate Professor (Lincoln, NE)
20 others named Adam Liska are on LinkedIn