Sean Knight
San Francisco, California, United States
5K followers
500+ connections
Other similar profiles
- Chris Bechtel · Los Angeles, CA
- Steve Gumm · Aurora, IL
- Haydon Young · New York City Metropolitan Area
- Jerem Febvre · Berkeley, CA
- Alessandra Sales · San Mateo, CA
- Laura Borghesi · New York, NY
- Niall Weintraub · Los Angeles Metropolitan Area
- Guillaume "𝑮" Cabane · France
- Jason Katz · New York City Metropolitan Area
- Kiki Burton · Washington, DC
- Em Wingrove · Fort Lauderdale, FL
- Drew Diskin, M.S. · Greater Philadelphia
- Jonathan Powell · Helping Entrepreneurs Create Magic In Marketing, Sales, & Operations | DM Me For A Virtual Coffee ☕️ · Richland, GA
- Jessica McGlory · New York, NY
- Sheri Miller · Visioneer l Advertising, Marketing and Digital Branding Guru. · Pismo Beach, CA
- Ryan Huser · Orlando, FL
- Amanda Goetz · New York, NY
- john H. · New York, NY
- Hamlet Azarian · Glendale, CA
- Misha Tsidulko · United States
Explore more posts
Anyscale
🚀 Looking to streamline fine-tuning for models like Llama 3, Mistral, and Mixtral? Our latest blog shows how Anyscale simplifies LLM fine-tuning with Ray. 🔑 Key takeaways: • Scalable fine-tuning with LLM-Forge • Model deployment with RayLLM. Read more: https://lnkd.in/gQwD4A2U
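For readers who want a feel for the Ray side of this, here is a minimal, hypothetical skeleton of a Ray Train fine-tuning job (generic Ray 2.x usage, not Anyscale's LLM-Forge or RayLLM code; the worker count, hyperparameters, and loop body are placeholders):

```python
# Hypothetical skeleton only: Ray Train fans a fine-tuning loop out across workers.
# LLM-Forge / RayLLM specifics from the post are not shown here.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Each worker would build its model and data shard and run fine-tuning
    # steps here; Ray Train sets up the distributed process group.
    pass

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-5},                              # placeholder hyperparameters
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),   # placeholder cluster size
)
result = trainer.fit()
```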
CognitexAi
🔍 Transforming Intent Classification: From TF-IDF to SBERT 🚀 In the world of NLP, intent classification is a key player in understanding and predicting user intents. As models evolve, moving from traditional TF-IDF vectors to SBERT-based embeddings has opened new doors to more accurate and context-aware predictions. Let’s dive into how TF-IDF and SBERT differ, and why SBERT is making a difference for NLP applications in intent classification! 👉 #NLP #IntentClassification #TFIDF #SBERT #MachineLearning #AI
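As a rough illustration of the shift the post describes, here is a minimal sketch (toy data; the SBERT checkpoint name is a common example, not one the post specifies) contrasting TF-IDF features with SBERT embeddings for intent classification:

```python
# Toy comparison of TF-IDF features vs. SBERT embeddings for intent classification.
# The checkpoint "all-MiniLM-L6-v2" is an assumed example, not from the post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

texts = ["reset my password", "what is my account balance", "talk to an agent"]
labels = ["account", "billing", "support"]

# TF-IDF: sparse lexical features with no notion of meaning or context.
tfidf = TfidfVectorizer().fit(texts)
clf_tfidf = LogisticRegression().fit(tfidf.transform(texts), labels)

# SBERT: dense sentence embeddings that capture semantic similarity.
sbert = SentenceTransformer("all-MiniLM-L6-v2")
clf_sbert = LogisticRegression().fit(sbert.encode(texts), labels)

query = "I forgot my login credentials"
print(clf_tfidf.predict(tfidf.transform([query])))  # may miss: no words shared with training texts
print(clf_sbert.predict(sbert.encode([query])))     # semantically close to "reset my password"
```

The lexical TF-IDF model can only match on shared words, while the embedding-based classifier can place "I forgot my login credentials" near "reset my password" despite having no tokens in common.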
Together AI
New blog post: Fine-tuning LLMs for Multi-turn Conversations. In this deep dive we cover: 🎛️ how to fine-tune with conversation data, 🎭 how to implement loss masking to improve model performance, and 🐍 a full code example fine-tuning Llama 3.1 8B on the CoQA dataset. Read here: https://lnkd.in/gFxbpia6
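The loss-masking idea can be sketched roughly as follows (a generic illustration under the usual PyTorch convention, not Together AI's code): tokens from non-assistant turns get the label -100, which cross-entropy ignores, so only assistant replies contribute to the loss.

```python
# Generic illustration of loss masking for multi-turn fine-tuning (not Together AI's code):
# user-turn tokens are labeled -100, the index PyTorch's CrossEntropyLoss ignores,
# so gradients come only from assistant replies.
import torch

IGNORE_INDEX = -100

def build_labels(turns, tokenizer):
    """turns: list of (role, text) pairs; returns (input_ids, labels) tensors."""
    input_ids, labels = [], []
    for role, text in turns:
        ids = tokenizer.encode(text, add_special_tokens=False)
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                         # learn to produce the reply
        else:
            labels.extend([IGNORE_INDEX] * len(ids))   # do not learn to echo the user
    return torch.tensor(input_ids), torch.tensor(labels)

# Usage with any Hugging Face tokenizer (assumed):
# ids, labels = build_labels([("user", "Hi!"), ("assistant", "Hello!")], tokenizer)
```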
Justin Gerrard
For the past decade, AI researcher Christopher Olah has been obsessed with artificial neural networks. One question in particular has been at the center of his work, first at Google Brain, then OpenAI, and today at AI startup Anthropic, where he is a cofounder: “What's going on inside of them?” he says. “We have these systems, we don't know what's going on. It seems crazy.” Olah believes we're on the path to answering that question. He leads an Anthropic team that has peeked inside that black box: essentially, they are trying to reverse engineer large language models to understand why they come up with specific outputs, and, according to a paper released today, they have made significant progress. The company's researchers have identified combinations of artificial neurons that signify features as disparate as burritos, semicolons in programming code, and, very much to the larger goal of the research, deadly biological weapons. Work like this has potentially huge implications for AI safety: if you can figure out where danger lurks inside an LLM, you are presumably better equipped to stop it. By suppressing those features, Anthropic says, the model can produce safer computer programs and reduce bias. For instance, the team found several features that represented dangerous practices, like unsafe computer code, scam emails, and instructions for making dangerous products. #anthropic #ai #genai #neuralnetworks #tech https://lnkd.in/gKVjhP9T
MIT Data-to-AI Lab
Can you use #llms for every machine learning task? Members of the MIT Data-to-AI Lab, Kalyan Veeramachaneni and Sarah Alnegheimish, tried #llms on the well-known task of unsupervised time series anomaly detection in their paper. In an article in VentureBeat they explore the affordances that #llms provide and how to make the most of them. Article: https://lnkd.in/emvEDDkg Research paper: https://lnkd.in/ezNzHkBj
Spring Labs
In this blog post, we dive deeper into how different large language models (LLMs) tackle complex tasks like customer conversation segmentation and classification. We benchmark models including Claude Haiku, Claude Sonnet, Meta's Llama 3-70B Instruct, and Mistral AI's Mixtral 8X7B, highlighting the importance of choosing the right model and fine-tuning prompts. At Spring Labs, we combine foundational models with tailored NLP techniques to ensure high accuracy, reliability, and scalability in our conversational intelligence engine. Discover more insights in our latest blog post below! https://lnkd.in/gYrE3x9a #Fintech #ArtificialIntelligence #Banking #LLMs
Databricks Mosaic Research
New blog post! We explore one method for customizing LLMs, Continued Pre-Training (CPT), and provide guidance on executing this process effectively: https://lnkd.in/gVD2fAHP Continued Pre-Training is a cost-effective alternative to pre-training large language models (LLMs) from scratch. While LLMs are increasingly adept at solving general tasks, they can often fall short on specific domains that are dissimilar to the data they were trained on. In such cases, how do you effectively and efficiently adapt an open-source LLM to your needs? With CPT, Mansheej Paul, Brett Larsen, Connor Jennings, and Cody Blakeney demonstrate how to enhance a small LLM's factual knowledge performance to match that of a much larger LLM by augmenting the small model's general knowledge with specialized information. Check out the post for details!
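A bare-bones sketch of what CPT looks like in practice, assuming Hugging Face Transformers; the base model name, corpus path, and hyperparameters are placeholders rather than anything from the Databricks post:

```python
# Rough CPT sketch with Hugging Face Transformers: keep training the base model
# with the standard next-token objective on in-domain text. The model name,
# corpus file, and hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.1-8B"  # assumed example base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding in the collator
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", per_device_train_batch_size=1,
                           learning_rate=1e-5, num_train_epochs=1, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```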
SystemsDigest
The latest update for #Confluent includes "Atomic Tessellator: Revolutionizing Computational Chemistry with #DataStreaming" and "Handling the Producer Request: Kafka Producer and Consumer Internals, Part 2". #DataPipelines #Cloud https://lnkd.in/dtgUKR_3
Domenic Ravita
🚀 "Came for the speed, stayed for the syntax."💡 That's what Marco Gorelli had to say about using Polars. 🐻❄️ This educational session from our recent Plotly Community call, presented by Mike Purtell, provides a data scientist's perspective on use of Polars with Plotly. https://lnkd.in/esyKAJv3 #datascience #python #pandas #polars #plotly
Infinite 8 Industries, Inc.
#New #Microservice - Infinite 8 introduces a new microservice for highly sensitive anomaly detection across industries. The accompanying chart shows results on the Numenta Anomaly Benchmark (NAB) dataset. NAB datasets are specifically curated to test and benchmark anomaly detection algorithms in real-world scenarios, and are used across industries to simulate conditions such as sudden changes in sensor readings, network traffic anomalies, or shifts in financial data. #InfiNET #v4 | https://lnkd.in/guPC-yZr
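For context on the kind of task NAB benchmarks, here is a generic rolling z-score baseline (purely illustrative; it is not Infinite 8's method and says nothing about their results):

```python
# Illustrative rolling z-score baseline for the kind of series NAB contains;
# not Infinite 8's algorithm. Window and threshold are arbitrary placeholders.
import numpy as np

def rolling_zscore_anomalies(series, window=50, threshold=4.0):
    """Return indices whose value deviates strongly from the trailing window."""
    series = np.asarray(series, dtype=float)
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        std = hist.std()
        if std == 0:
            continue
        if abs(series[i] - hist.mean()) / std > threshold:
            flagged.append(i)
    return flagged

# Example: flat noise with one injected spike at index 500.
signal = np.concatenate([np.random.normal(0, 1, 500), [15.0], np.random.normal(0, 1, 100)])
print(rolling_zscore_anomalies(signal))  # expected to include 500
```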
Aimpoint Digital
Discover how Retrieval-Augmented Generation (RAG) is revolutionizing the way businesses leverage Large Language Models (LLMs)! We are excited to share our latest insights, written by Elizabeth Khan and Shruti Misra, PhD, on how RAG enhances LLM performance by combining up-to-date, domain-specific data with advanced text generation. The result is more accurate, contextually relevant outputs grounded in current data. Learn how we implemented a state-of-the-art RAG solution using Snowflake Cortex and Dataiku, transforming our client's interaction with their internal data. 👉 Read more here: https://lnkd.in/e6B-u6q5
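At its core the RAG pattern is: embed documents, retrieve the most relevant ones for a query, and prepend them to the prompt. A bare-bones sketch follows (a generic illustration with a placeholder embedding model and toy documents, not the Snowflake Cortex / Dataiku implementation described in the post):

```python
# Bare-bones RAG sketch: embed, retrieve by cosine similarity, build the prompt.
# Documents, the embedding checkpoint, and the question are placeholders; this is
# not the Snowflake Cortex / Dataiku pipeline from the post.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Q3 revenue grew 12% year over year.",
    "The onboarding policy was updated in June.",
    "Support tickets are triaged within 4 hours.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=2):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                      # cosine similarity (vectors are unit-norm)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How quickly are support tickets handled?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # `prompt` would then go to whichever LLM the application uses
```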
Rev
Get a behind-the-scenes view into Rev's 12-year journey developing groundbreaking ASR technology from our VP of AI, Miguel Jetté! Discover how our focus on data accumulation, AI research, and human-in-the-loop reinforcement learning has shaped our industry-leading ASR model: https://bit.ly/3WLpuVT
Neha Bajwa
It's really exciting to see that Neo4j placed as a "Strong Performer" in The Forrester Wave™: Vector Databases, Q3 2024. Vector databases are a critical component of GenAI applications, but vector capability alone is not sufficient. GenAI applications need an implementation of GraphRAG to complement vector search. Industry analysts are finally acknowledging what we at Neo4j have known all along: GenAI needs GraphRAG to reduce hallucinations and improve accuracy and explainability beyond what vectors alone can deliver. https://lnkd.in/gtm8jYCn #graphrag #genai #KG #generativeAI #neo4j
Data Surge
Check out the latest blog from our fellow Data Scientist and Data Surgeon Drew Lehe, sharing his thoughts on Graph Data Science (GDS). Graph Data Science is a cutting-edge field helping solve many complex data problems. In this blog, Drew demonstrates how GDS improved the results of an entity resolution problem for an existing Data Surge client. Click the link below to learn more. At Data Surge, innovation is a key piece of what we deliver. That innovation includes AI/ML, MLOps, and GenAI, which we apply to support Network Analytics and Entity Resolution as well as Data Governance and Responsible AI. To learn more about Data Surge's services or book a demo of our solution accelerators, visit our website at www.datasurge.com. #AI #ML #GraphDataScience #GenAI #Data #Modernize #Democratize #Transform
Rendered.ai
In the latest ⚡ episode of RADICL's DIB Innovators podcast, Rendered.ai's CEO shares his extensive experience with how #syntheticdata is shaping #AI development across industries, especially in the #defense and #space sectors. Tune in to this episode to learn more about #syntheticdata's role in #computervision #innovation and why many #datascientists struggle to achieve performant models within the timeframes needed. 🎧 Listen now on Apple Podcasts (https://buff.ly/414b71c) or Spotify (https://buff.ly/4i0uZYV).
VeloDB
Session recording 📹: ZHI LIU, Product Manager at RisingWave, discussed real-time data enrichment and analytics using RisingWave and Apache Doris: "Stream processing and OLAP are not in a one-replaces-the-other relationship. Each processing paradigm tries to solve a different use case. When people first came up with stream processing, they asked: in the real world, data is generated continuously, so why don't we process data continuously so that we can acquire insights continuously? The idea is that you are facing an infinite stream of data with a fixed query, so whenever new data comes into the system, the query reacts to it." "The idea of OLAP is more like: you have a finite set of data, and business analysts try to explore what's contained in that data set, so they come up with different queries. Based on the insights, they modify the queries and try to get more insights." https://lnkd.in/ghRPu4-X With this, we've wrapped up all sessions from the Apache Doris Meetup @Singapore! If you're still eager to connect with the Apache Doris community, don't miss the upcoming online webinar. Colin Wang will provide an in-depth introduction to 🌟 Real-Time Analytics with Apache Doris: Everything You Need to Know 🌟. Stay tuned and join us! https://lnkd.in/g4jeZvHt
Gazet International
Snowflake, the AI Data Cloud company, announced it will host the Llama 3.1 collection of multilingual open source large language models (LLMs) in Snowflake Cortex AI. This includes Meta's Llama 3.1 405B, with Snowflake developing an inference system stack for real-time, high-throughput inference. Optimized by Snowflake's AI Research Team, Llama 3.1 405B supports a 128K context window, offering real-time inference with 3x lower latency and 1.4x higher throughput. Additionally, it allows fine-tuning on a single GPU node, reducing costs and complexity for developers within Cortex AI. To learn more: https://lnkd.in/dbV2Jb8S #Snowflake #AIDataCloud #Llama3.1 #OpenSourceAI #CortexAI #AIApplications #AITechnology #MachineLearning #TechInnovation #GIawards #GazetInternational #GI
Others named Sean Knight in United States
- Sean Knight · Omaha, NE
- Sean Vannoy · Experienced Transportation Professional | Husband | Real estate | Knight | Investor | Fun uncle | Consultant | Community Builder | Project manager | Eagle Scout | · Greater Philadelphia
- Sean Knight · SWE @ Jump Trading · Chicago, IL
- Sean Knight · Senior Software Engineer · Indialantic, FL
144 others named Sean Knight in United States are on LinkedIn