Cloud Masters Episode #112
Observability of LLMs in Google Cloud



Episode notes

About the guests

Sascha Heyer
Sascha Heyer, a Senior Machine Learning Specialist at DoiT, is a Google Developer Expert and Google Cloud Innovator. He has helped over 306 companies grow in the field of Machine Learning. Sascha believes in keeping things simple, a mindset that helps him clarify complex tech concepts. As an author, he showcases this expertise through engaging YouTube presentations and insightful Medium articles (https://medium.com/@saschaheyer), demystifying complex tech topics for a broad audience.
Eduardo Mota
Eduardo is a Senior Machine Learning Specialist who provides architecture advice to hundreds of companies looking to deploy AI and ML solutions to enhance their products or resolve operational issues.

Related content

The cost impact of Large Language Models (LLMs) in production
We cover the ever-growing importance of Large Language Models (LLMs) in applications, how LLM costs can easily compound once in production, and how to break down the costs associated with using LLMs.
No longer a pipe dream — Gen AI and data pipelines
Exploring the impact that Gen AI will have on data pipelines and data engineering overall.
Gaining visibility over your LLMs with LLMStudio
Mentioned in the podcast, LLMStudio is designed to streamline interactions with large language models (LLMs). It focuses on prompt engineering, a critical skill for getting the most out of LLMs.
