New course! Prevent hallucinations, off-topic responses, and data leaks in your LLM applications in "Safe and Reliable AI via Guardrails," taught by Shreya Rajpal, founder of Guardrails AI. Guardrails are safety mechanisms and validation tools built into AI applications, acting as a protective framework that prevents your application from revealing incorrect, irrelevant, or sensitive information. This course will show you how to build robust guardrails from scratch that mitigate common failure modes of LLM-powered applications. You'll also learn how to access a variety of pre-built guardrails on the GuardrailsAI hub, ready to integrate into your projects. You'll implement these guardrails in the context of a RAG-powered customer service chatbot for a small pizzeria. 🍕 Enhance your application's reliability today! Enroll for free: https://hubs.la/Q02Y3xWj0
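As a rough illustration of the guard pattern described above (a minimal sketch, not the course's code and not the GuardrailsAI library's API): an input or output guard is simply a validation step wrapped around the model call. Here `call_llm` is a hypothetical stand-in for your LLM client, and the regexes are placeholder PII checks.

```python
# Minimal sketch of the guard pattern: validate/redact text before it reaches
# the model and before the response reaches the user. `call_llm` is a
# hypothetical stand-in for an LLM client; the regexes are placeholder checks,
# not the validators shipped on the GuardrailsAI hub.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?\d[\s-]?){7,14}\d\b")

def redact_pii(text: str) -> str:
    """Guard step: mask emails and phone numbers so they cannot leak."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return PHONE_RE.sub("<PHONE>", text)

def guarded_chat(user_message: str, call_llm) -> str:
    """Run both the prompt and the model's reply through the guard."""
    safe_prompt = redact_pii(user_message)   # input guard
    reply = call_llm(safe_prompt)            # any LLM client call goes here
    return redact_pii(reply)                 # output guard

# Example: the email is masked before the model ever sees it.
print(redact_pii("My email is [email protected], can you confirm my order?"))
```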
DeepLearning.AI
Software Development
Palo Alto, California 1,092,802 followers
Making world-class AI education accessible to everyone
About us
DeepLearning.AI is making world-class AI education accessible to people around the globe. DeepLearning.AI was founded by Andrew Ng, a global leader in AI.
- Website
- http://DeepLearning.AI
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Palo Alto, California
- Type
- Privately Held
- Founded
- 2017
- Specialties
- Artificial Intelligence, Deep Learning, and Machine Learning
Products
DeepLearning.AI
Online Course Platforms
Learn the skills to start or advance your AI career | World-class education | Hands-on training | Collaborative community of peers and mentors.
Locations
-
Primary
2445 Faber Pl
Palo Alto, California 94303, US
Updates
-
Meta and Anthropic altered their policies to allow their models to be used by the U.S. government's defense and national security agencies. 1️⃣ Meta will provide its Llama models through multiple providers; some partners like Scale AI have created specialized Llama-based models for tasks like intelligence analysis and military planning. 2️⃣ Anthropic's Claude models will support U.S. defense through a Palantir-hosted platform, assisting with data analysis and decision-making. Learn more in #TheBatch: https://hubs.la/Q02Yh9DV0
-
DeepLearning.AI reposted this
The DeepLearning.AI Llama 3.2 course, taught by Amit Sangani, is now available on Coursera. Check out the course and sign up now! https://bit.ly/4hPourK
-
DeepLearning.AI reposted this
Advance your AI skills with Prediction Guard! In partnership with Intel Labs and DeepLearning.AI, Prediction Guard is offering a FREE multimodal RAG course. Learn to build sophisticated question-answering systems that process videos, images, and text together for seamless interactions. If you're an AI developer, data scientist, or engineer, this is your chance to sharpen your RAG skills with cutting-edge methods like vector databases and large vision-language models (LVLMs). Sign up for free: https://intel.ly/3YOrRYL #IntelLiftoff #AIInnovation #MachineLearning
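For a sense of what the retrieval step in a multimodal RAG pipeline looks like, here is a deliberately toy, self-contained sketch (not code from the course): random vectors stand in for a multimodal embedding model and a vector database, and the retrieved captions would then be passed, along with the question, to an LVLM.

```python
# Toy sketch of the retrieval step in a multimodal RAG system. Random vectors
# stand in for real multimodal embeddings, so the ranking here is meaningless;
# the point is only to show the flow: embed the query, search the index,
# hand the retrieved frames/captions plus the question to an LVLM.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these captions describe video frames already indexed in a vector store.
documents = [
    "Frame: presenter unboxes the parts of a flat-pack desk",
    "Frame: close-up of screws being tightened with an Allen key",
    "Frame: finished desk shown in a home office",
]
doc_vectors = rng.normal(size=(len(documents), 8))   # placeholder embeddings

def embed(text: str) -> np.ndarray:
    """Stand-in for a multimodal embedding model (hypothetical)."""
    return np.random.default_rng(abs(hash(text)) % (2**32)).normal(size=8)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Cosine-similarity search over the indexed frame captions."""
    q = embed(question)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# In a real system, the returned captions (and frames) become LVLM context.
print(retrieve("Which tool do I need to assemble the desk?"))
```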
-
This week, in The Batch, Andrew Ng discusses why large language models are increasingly fine-tuned to fit into agentic workflows. Plus: 📈 Tencent's new model outperforms Llama 3.1 in key benchmarks 🛡️ Meta, Anthropic enter military AI market 🗳️ AI hub guides voters with verified info Read #TheBatch now: https://hubs.la/Q02Y8sxZ0
-
DeepLearning.AI reposted this
New short course: Safe and Reliable AI via Guardrails! Learn to create production-ready, reliable LLM applications with guardrails in this new course, built in collaboration with Guardrails AI and taught by its CEO and co-founder, Shreya Rajpal.

I see many companies worry about the reliability of LLM-based systems -- will they hallucinate a catastrophically bad response? -- which slows down investment in building them and in moving prototypes to deployment. Because LLMs generate probabilistic outputs, they are particularly hard to deploy in highly regulated industries or in safety-critical environments. Fortunately, there are good guardrail tools that provide a significant new layer of control, reliability, and safety. They act as a protective framework that can prevent your application from revealing incorrect, irrelevant, or confidential information, and they are an important part of what it takes to actually get prototypes to deployment.

This course will walk you through common failure modes of LLM-powered applications (like hallucinations or revealing personally identifiable information) and show you how to build guardrails from scratch to mitigate them. You'll also learn how to access a variety of pre-built guardrails on the GuardrailsAI hub that are ready to integrate into your projects. You'll implement these guardrails in the context of a RAG-powered customer service chatbot for a small pizzeria. Specifically, you'll:

- Explore common failure modes like hallucinations, going off-topic, revealing sensitive information, or responses that could harm the pizzeria's reputation
- Learn to mitigate these failure modes with guards that validate inputs, outputs, or both
- Create a guardrail that prevents the chatbot from discussing sensitive topics, such as a confidential project at the pizza shop
- Detect hallucinations by ensuring responses are grounded in trusted documents
- Add a Personally Identifiable Information (PII) guardrail to detect and redact sensitive information in user prompts and in LLM outputs
- Set up a guardrail that limits the chatbot's responses to topics relevant to the pizza shop, keeping interactions on-topic
- Configure a guardrail that prevents your chatbot from mentioning any competitors, using a name-detection pipeline whose conditional logic routes to either an exact match or a threshold check with named entity recognition (see the sketch below)

Guardrails are an important part of the practical building and deployment of LLM-based applications today. This course will show you how to make your applications more reliable and more ready for real-world deployment. Please sign up here: https://lnkd.in/gRU6qhzN
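As a rough sketch of the competitor-name guard mentioned in the last bullet (assumptions: spaCy for NER, difflib for the fuzzy threshold, and made-up competitor names; the course may use a different pipeline or models):

```python
# Sketch of a competitor-name output guard: exact match first, then a fuzzy
# threshold check over named entities. spaCy is used here only as an example
# NER model (requires: python -m spacy download en_core_web_sm); the competitor
# names and threshold are hypothetical.
from difflib import SequenceMatcher

import spacy

nlp = spacy.load("en_core_web_sm")

COMPETITORS = ["Alfredo's Pizza Cafe", "Pizza by Alfredo"]  # hypothetical names
FUZZY_THRESHOLD = 0.8

def mentions_competitor(text: str) -> bool:
    lowered = text.lower()
    # 1) Cheap path: exact (case-insensitive) substring match.
    if any(name.lower() in lowered for name in COMPETITORS):
        return True
    # 2) Fallback: compare detected ORG entities against the list with a similarity threshold.
    for ent in nlp(text).ents:
        if ent.label_ != "ORG":
            continue
        if any(SequenceMatcher(None, ent.text.lower(), name.lower()).ratio() >= FUZZY_THRESHOLD
               for name in COMPETITORS):
            return True
    return False

def competitor_guard(response: str) -> str:
    """Output guard: replace any response that mentions a competitor."""
    if mentions_competitor(response):
        return "Sorry, I can only talk about our own menu and services."
    return response
```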
-
OpenAI researchers introduced MLE-bench, a new benchmark to assess coding agents’ abilities to tackle machine learning tasks. Built on 75 manually selected Kaggle competitions, MLE-bench evaluates how well agents like AIDE, ResearchAgent, and CodeActAgent solve real-world machine learning problems, such as toxicity detection and eruption prediction. Read our summary of the paper in #TheBatch: https://hubs.la/Q02X_SKX0
OpenAI’s MLE-bench Tests AI Coding Agents
deeplearning.ai
-
Twice a week, Data Points brings you the latest AI news, tools, models, and research in brief. In today’s edition, you’ll find news about Hugging Face, Mistral, Nvidia, X, and more! Read the latest issue and subscribe today 👉 https://lnkd.in/di7aXb3b
-
Colleen Fotsch has gone from athletics to data engineering, and now maybe acting? How did she do playing Colleen the software engineer, marketing exec, and data analyst? Gathering stakeholder requirements is a big part of data engineering. In the new Data Engineering Professional Certificate, you get to watch Colleen and Joe Reis 🤓 act out simulated stakeholder conversations so you can see how these discussions actually go. They did the acting so you don't have to; you can focus on learning how to be an effective data engineer. Learn more and enroll today: https://hubs.la/Q02XVqf60