We’re growing our team at Weaviate! 🚀 Join us in shaping the future of AI. Open roles include:
• Technical Trainer: https://lnkd.in/g93Tnkim
• Social Media & Content Intern: https://lnkd.in/eGFfwDjc
• Site Reliability Engineer: https://lnkd.in/gCRM9drN
• Research Engineer: https://lnkd.in/gvJ3YcX7
• Software Engineer, Database: https://lnkd.in/gs-ATX_6
• Senior Full Stack Engineer: https://lnkd.in/g93Tnkim
• QA Engineer: https://lnkd.in/gzcK8JPG
• Technical Writer: https://lnkd.in/gd7ym8Qd
Explore all positions and apply here: https://lnkd.in/dzibHnwZ
Weaviate
Technology, Information and Internet
Amsterdam, North Holland · 23,597 followers
The AI-native database for a new generation of software.
About us
Weaviate is a cloud-native, real-time vector database that allows you to bring your machine-learning models to scale. There are extensions for specific use cases, such as semantic search, plugins to integrate Weaviate in any application of your choice, and a console to visualize your data.
- Website: https://weaviate.io
- Industry: Technology, Information and Internet
- Company size: 51-200 employees
- Headquarters: Amsterdam, North Holland
- Type: Privately held
- Founded: 2019
Locations
- Primary: Amsterdam, North Holland, NL
Updates
Weaviate reposted this:
Introducing Weaviate Embeddings – Say goodbye to the headaches of creating and managing vector embeddings. We’re excited to announce the preview of Weaviate Embeddings, a new embedding service in Weaviate Cloud that simplifies creating vector embeddings for AI developers—eliminating the need to connect to external providers. With Weaviate Embeddings, you’ll enjoy:
• Leading OSS and proprietary models: Access cutting-edge open source and proprietary models. At preview, we offer Snowflake Arctic-Embed (open source), with commercial models coming early next year.
• Co-located models and data: Reduce latency and improve performance by hosting both your data and models in Weaviate Cloud.
• Cost-efficient and GPU-powered: Leverage GPU-powered inference with pay-as-you-go pricing based on tokens consumed—pay only for what you use. Customize dimensions as needed.
• Secure, enterprise-ready deployment: Benefit from SOC 2 certification, role-based access controls, and strict data isolation for enterprise-grade security.
📰 Read more in our blog post: https://lnkd.in/g6WeCGGY
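Conceptually, co-locating the embedding model with the database turns "create vectors, then store them" into a single insert step. Here is a toy sketch of that flow in pure Python — the `embed` function and `Collection` class are stand-ins for the hosted model and database, not the Weaviate client API:

```python
import hashlib

def embed(text, dimensions=4):
    # Stand-in for a hosted embedding model (e.g. an Arctic-Embed variant).
    # Deterministically maps text to a small vector; a real service would
    # run GPU inference and bill per token consumed.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dimensions]]

class Collection:
    """Objects are stored together with their vector representation."""
    def __init__(self, dimensions=4):
        self.dimensions = dimensions
        self.objects = []
        self.tokens_consumed = 0  # the basis for pay-as-you-go pricing

    def insert(self, text):
        # Vectorization happens at insert time -- no external provider call.
        self.objects.append({"text": text, "vector": embed(text, self.dimensions)})
        self.tokens_consumed += len(text.split())  # crude whitespace token count

col = Collection(dimensions=4)
col.insert("vector databases store embeddings")
col.insert("weaviate embeddings runs in weaviate cloud")
print(len(col.objects), col.tokens_consumed)  # -> 2 10
```

The real service replaces `embed` with GPU inference inside Weaviate Cloud, which is where the latency win over calling an external provider comes from.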
Docling parses PDFs faster than you ever thought possible. And now, you can use it with Weaviate.
What is Docling? Docling is an open source Python library by IBM that uses advanced AI models to parse documents FAST.
Why should you use Docling? Docling is incredibly accessible, and it’s FAST. Here’s what sets it apart from other AI-powered document parsers:
• Doesn’t require massive memory or compute
• Runs on commodity hardware
• Offers hierarchical chunking & metadata extraction
• Can be run in a Jupyter notebook
Want to try it out? Check out this recipe notebook by Mary Newhauser that demonstrates how to run RAG over PDFs with Docling and Weaviate.
🔗 Link: https://lnkd.in/gWBFyvhS
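Hierarchical chunking — one of the features called out above — means each chunk carries the heading path it lives under, so retrieval can use that context as metadata. A minimal pure-Python illustration of the idea (not Docling's actual API; the markdown string stands in for a parsed document):

```python
def hierarchical_chunks(markdown_text):
    """Split markdown into chunks, attaching the heading path as metadata."""
    path = []    # current heading hierarchy, e.g. ["Report", "Methods"]
    chunks = []
    for line in markdown_text.splitlines():
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            title = line.lstrip("#").strip()
            path = path[: level - 1] + [title]  # truncate to this heading's depth
        elif line.strip():
            chunks.append({"text": line.strip(), "headings": list(path)})
    return chunks

doc = """# Report
## Methods
We parsed the PDFs with an AI model.
## Results
Parsing was fast on commodity hardware.
"""
for c in hierarchical_chunks(doc):
    print(c["headings"], "->", c["text"])
```

Chunks stored this way can be filtered or re-ranked by section ("only search under Results"), which flat fixed-size chunking cannot do.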
We’re all in on vector search 🎲 Come say hi at booth 1506 at #AWSreInvent this week! 👋 Book a 1:1 meeting: https://lnkd.in/eFj3xG_v
Traditional search relies heavily on matching exact keywords. But vector search is all about understanding meaning and context. Instead of just matching (key)words, vector search converts your data into mathematical representations (vectors) that capture the semantic meaning of your content.
Here's how it works in Weaviate:
1️⃣ Data Storage & Vectorization
Each piece of data is stored as both the original object AND its vector representation. You have two options:
- Let Weaviate create the vectors for you using built-in vectorizers
- Bring your own pre-computed vectors (perfect for custom embeddings)
2️⃣ Search Capabilities
Weaviate offers several powerful search approaches:
- Pure vector (semantic) search: Find similar content based on meaning
- Keyword search: Traditional word matching
- Hybrid search: Combines both approaches for better results
3️⃣ Under the Hood
To make searches lightning fast (even with billions of vectors), Weaviate uses:
- Approximate Nearest Neighbor (ANN) index for rapid vector searches
- Inverted index for quick filtering and boolean operations
4️⃣ Real-World Applications
People are using Weaviate for:
- Internal document search systems that understand concepts, not just keywords
- Retrieval-Augmented Generation applications, like chatbots
- Recommendation systems
- Enhanced site search for e-commerce or forums
📄 Read more in our blog post: https://lnkd.in/gxRXmmJK
🔍 Or learn how to implement vector search at our workshop tomorrow: https://lnkd.in/ggE58wBj
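The three search modes above can be illustrated with a toy in-memory index. This is pure Python: the hand-made 3-dimensional vectors stand in for real embeddings, and the score fusion is a deliberately simplified version of what a database like Weaviate does internally with ANN and inverted indexes:

```python
import math

# Toy corpus: each object keeps its original text AND a vector representation.
docs = [
    {"text": "cats are small pets", "vec": [0.9, 0.1, 0.0]},
    {"text": "dogs are loyal pets", "vec": [0.8, 0.2, 0.1]},
    {"text": "stock markets fell today", "vec": [0.0, 0.1, 0.95]},
]

def cosine(a, b):
    # Similarity of two vectors, independent of their length.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def vector_search(query_vec):
    # Pure semantic search: rank by vector similarity alone.
    return sorted(docs, key=lambda d: cosine(d["vec"], query_vec), reverse=True)

def keyword_search(query):
    # Traditional matching: rank by number of shared words.
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d["text"].split())), reverse=True)

def hybrid_search(query, query_vec, alpha=0.5):
    # Blend both signals; alpha weights semantic vs. keyword score.
    terms = set(query.lower().split())
    def score(d):
        kw = len(terms & set(d["text"].split())) / max(len(terms), 1)
        return alpha * cosine(d["vec"], query_vec) + (1 - alpha) * kw
    return sorted(docs, key=score, reverse=True)

# A query vector "semantically" close to the pet documents wins on meaning,
# even though the query text shares no words with the result.
print(vector_search([0.85, 0.15, 0.05])[0]["text"])
```

Real engines replace the exhaustive `sorted` scan with an ANN index so ranking stays fast at billions of vectors, but the scoring logic is the same idea.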
We're very excited to have an amazing new member join the team today! 🔥 Damien Gasparina - Enterprise Solution Engineer ⌨️ Welcome to Weaviate! 👋 #weaviate #techjobs #teamweaviate
🤔 A language learning app that actually adapts to how you learn? Meet 𝗪𝗲𝗮𝗹𝗶𝗻𝗴𝗼, our innovative demo app that's changing how we think about personalized language learning. See it live at AWS re:Invent 2024!
What makes it special:
• 🔍 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝘀𝗲𝗮𝗿𝗰𝗵 in Wealingo allows users to find questions based on the meaning behind their queries, rather than being tied to exact keywords. This flexibility helps users quickly locate lessons that are most relevant to their current needs.
• ⭐ Wealingo uses 𝗪𝗲𝗮𝘃𝗶𝗮𝘁𝗲 and 𝗟𝗟𝗠𝘀 to dynamically generate a personalized question that fits the user's exact needs and is tailored to their current skill level, helping them learn in context, no matter how niche the request.
The result? A learning experience that's truly personal, context-aware, and evolves with every interaction.
Visit us at AWS re:Invent 2024: https://lnkd.in/eFj3xG_v
Why would you want to use Agentic RAG instead of 'normal' RAG? In contrast to normal RAG, Agentic RAG uses agents to decide what to do, instead of following a pre-defined pipeline.
𝐊𝐞𝐲 𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬
1. Smart Routing: Agents automatically decide which knowledge sources to query and whether to search internal databases or the web.
2. Intelligent Queries: The system writes its own optimized search queries - no more worrying about perfect search terms!
3. Advanced Processing: Goes beyond simple retrieval to actually process and format the information for your specific needs.
𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬
• More accurate responses
• Autonomous task completion
• Better human collaboration
• Smarter information retrieval
While more powerful, Agentic RAG can be slower, as it involves multiple LLM passes to ‘decide’ before you receive a response. This can be mitigated by using smaller language models for the decision agents and a larger model for the response. You could retrieve the information, summarise each item, and send it via email, all with your agentic RAG pipeline.
Find out more: https://lnkd.in/dUqS3cxS
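The routing and query-rewriting steps above can be sketched in a few lines of pure Python. Here `decide_route` and `rewrite_query` stand in for a small, fast decision LLM, and the two search functions are mock knowledge sources — none of this is a real Weaviate or LLM API:

```python
# Mock knowledge sources the agent can choose between.
def search_internal_db(query):
    return f"[internal docs] results for '{query}'"

def search_web(query):
    return f"[web] results for '{query}'"

def decide_route(query):
    # Stand-in for a small routing model: a trivial heuristic
    # instead of an actual LLM call.
    if "latest" in query.lower() or "news" in query.lower():
        return "web"
    return "internal"

def rewrite_query(query):
    # Stand-in for the agent writing its own optimized search query.
    stopwords = {"please", "can", "you", "tell", "me", "about"}
    return " ".join(w for w in query.split() if w.lower() not in stopwords)

def agentic_rag(query):
    route = decide_route(query)        # 1. smart routing
    optimized = rewrite_query(query)   # 2. intelligent query rewriting
    context = (search_web if route == "web" else search_internal_db)(optimized)
    # 3. a larger model would now generate the final answer from the
    #    retrieved context; here we just return the assembled pieces.
    return {"route": route, "query": optimized, "context": context}

result = agentic_rag("please tell me about latest vector database news")
print(result["route"], "->", result["context"])
```

Swapping the heuristics for real model calls preserves the structure: cheap models make the routing/rewriting decisions, and only the final generation step pays for a large model, which is exactly the latency mitigation described above.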