🚨 Starting from December we are introducing a new format for the #NextGenLakehouse Channel. We are going to release 4 videos per month:
- Databricks/ISV Interview
- MVP Interview
- Getting Started video
- What's New in Databricks with Lara Rachidi, Maria Zervou, and Quentin Ambard

We are also going to focus on one specific topic every 2 weeks by posting the best articles, blog posts, and interviews.

Topics: Data Sharing, Platform, Governance, ETL, Data Warehousing, GenAI, ML, Apps, Observability, Dashboarding...

Youtube: https://lnkd.in/e3hsNJMp
Newsletter: https://lnkd.in/eMvWZJRR
Spotify: https://lnkd.in/e76UuShD
Youssef Mrini Twitter: https://x.com/YoussefMrini
Quentin Ambard Twitter: https://x.com/qAmbard
#NextGenLakehouse #Databricks
Youssef Mrini’s Post
-
Youssef and team are providing some powerful "training and enablement" content in a fun, unique way. I highly recommend following their channel and leveraging the insights shared.
-
📢 Attention Snowflake developers! There's a new kid in Snowflake Cortex AI town: CLASSIFY_TEXT! 💡 You can use this new LLM-backed function right in your SQL queries to classify free-form text into categories you provide. Test it and see if it works for your scenarios. ➡️ Make sure you read the docs for guidance: https://lnkd.in/dzXsiTGn #Snowflake #SnowflakeCortexAI #LLM #GenAI #SQL
-
Sometimes the best way to learn is by doing! 👩🏫 A few weeks ago, I was lucky enough to meet with one of Snowflake's Principal Software Engineers working on Cortex, Snowflake's ML service. I found the overview and demo so insightful that I set out to summarize what I learned. With a single call to a SQL function, Cortex makes ML look easy, but the best part of this overview is deep diving into the model to understand all the complexity that actually goes into a Cortex function like TOP_INSIGHTS! 🤯 Check it out here: https://lnkd.in/gjjAvrBw #snowflake #datascience
-
Importance of MLOps in Data Science. Join Boktiar Ahmed Bappy for an insightful session on the critical role of MLOps in Data Science. Learn how MLOps bridges the gap between machine learning models and real-world applications, ensuring seamless deployment, scalability, and maintenance. Join us: https://lnkd.in/dr4X54Yz Don't miss this opportunity to gain valuable insights and stay ahead in the world of Data Science! #MLOps #DataScience #Webinar #Upskilling #MachineLearning
-
📖 Did you know that you can run SQL queries directly on your DataFrames without creating a temporary view in Spark?

🔖 Before Spark 3.4, if you wanted to use SQL with your DataFrames, you had to take an extra step:

df = spark.read.csv("/path/to/my/file.csv")
df.createOrReplaceTempView("my_table")
spark.sql("SELECT * FROM my_table").show()

🔖 But since Spark 3.4, you can streamline your code like this:

df = spark.read.csv("/path/to/my/file.csv")
spark.sql("SELECT * FROM my_table", my_table=df).show()

Just pass your DataFrame as a keyword argument to spark.sql(), and you're all set. It's a small change that makes your code cleaner and saves time, especially when working with multiple DataFrames. #data #ai #dataengineering #databricks #spark #sql #bigdata
-
🚀 𝐏𝐘𝐒𝐏𝐀𝐑𝐊 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞 - 𝐃𝐚𝐲 1️⃣ - 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧 1️⃣ 🌟

🌟𝐏𝐘𝐒𝐏𝐀𝐑𝐊 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐞/𝐈𝐧𝐭𝐞𝐫𝐯𝐢𝐞𝐰 𝐏𝐫𝐨𝐛𝐥𝐞𝐦 📊
---------------------------------------------
🎯 𝐏𝐑𝐎𝐁𝐋𝐄𝐌 𝐒𝐓𝐀𝐓𝐄𝐌𝐄𝐍𝐓
---------------------------------------------
Actors and Directors Cooperation: write a PySpark program to generate a report that provides the pairs (actor_id, director_id) where the actor has cooperated with the director at least 3 times.
---------------------------------------------
📝 𝐒𝐜𝐡𝐞𝐦𝐚 𝐀𝐧𝐝 𝐃𝐚𝐭𝐚 :
---------------------------------------------
Difficulty Level: EASY

𝐈𝐧𝐩𝐮𝐭 𝐃𝐚𝐭𝐚 :

schema = StructType([
    StructField("ActorId", IntegerType(), True),
    StructField("DirectorId", IntegerType(), True),
    StructField("timestamp", IntegerType(), True)
])

data = [
    (1, 1, 0), (1, 1, 1), (1, 1, 2),
    (1, 2, 3), (1, 2, 4),
    (2, 1, 5), (2, 1, 6)
]

🚀 Your Task: Implement a PySpark program to find the pairs where actors and directors have collaborated at least 3 times.

🔗 Connect with me for more Data Engineering insights and challenges. Need guidance with Data Engineering? Shoot me a DM! 🚀 #PYSPARKChallenge #DataEngineering #BigData #CodingChallenge #SparkSQL #DataProcessing #LinkedInLearning Let's see those PYSPARK solutions roll in! 🌐✨
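A quick way to sanity-check a solution before writing the Spark version: the same aggregation in plain Python with collections.Counter. (In PySpark the equivalent would be along the lines of df.groupBy("ActorId", "DirectorId").count().filter("count >= 3") — this pure-Python sketch is just for verifying the expected output on the sample data above.)

```python
from collections import Counter

# Sample data from the problem: (ActorId, DirectorId, timestamp)
data = [
    (1, 1, 0), (1, 1, 1), (1, 1, 2),
    (1, 2, 3), (1, 2, 4),
    (2, 1, 5), (2, 1, 6),
]

# Count how many times each (actor, director) pair appears
pair_counts = Counter((actor, director) for actor, director, _ in data)

# Keep only pairs with at least 3 collaborations
result = sorted(pair for pair, n in pair_counts.items() if n >= 3)
print(result)  # [(1, 1)]
```

On this dataset only actor 1 and director 1 have worked together 3 or more times, so the report contains a single pair.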
-
Importance of MLOps in Data Science. Join Boktiar Ahmed Bappy for an insightful session on the critical role of MLOps in Data Science. Learn how MLOps bridges the gap between machine learning models and real-world applications, ensuring seamless deployment, scalability, and maintenance. Join us: https://lnkd.in/gefS7acw Don't miss this opportunity to gain valuable insights and stay ahead in the world of Data Science! #MLOps #DataScience #Webinar #Upskilling #MachineLearning
-
📢 Databricks Delta Live Table in Spark 🚀

How "🔼 Delta Live Table" can be explained step-by-step: 👉🏽

Interviewer: Share your experience with Databricks Delta Live Table.
Interviewee: It's a game-changer for our big data project, ensuring data consistency and accuracy with real-time streaming.

Interviewer: How did it help?
Interviewee: Schema Evolution allowed us to adapt data schemas without disrupting pipelines, ensuring smooth real-time data ingestion.

Interviewer: Any other benefits?
Interviewee: Time Travel provided historical data snapshots for trend analysis and confident data-driven decisions.

Interviewer: Query performance improvements?
Interviewee: Advanced indexing and data-skipping techniques optimized query performance, reduced latency, and boosted efficiency.

Interviewer: Impact on data reliability?
Interviewee: ACID compliance ensured data reliability and consistency, even with concurrent read/write operations.

Interviewer: Ease of integration?
Interviewee: Seamless integration with Spark workflows, minimal code changes, and a simplified API made adoption easy.

Interviewer: Perfect fit for your project?
Interviewee: Absolutely! Delta Live Table provided the reliability, efficiency, and real-time capabilities needed for success.

#onestepanalytics #DatabricksDelta #DataEngineering #InterviewExperience #Databricks #Autoloader #Spark #DataIngestion #DataPipelines #DataAnalytics #InterviewPreparation #DataScience
-
Importance of MLOps in Data Science. Join Boktiar Ahmed Bappy for an insightful session on the critical role of MLOps in Data Science. Learn how MLOps bridges the gap between machine learning models and real-world applications, ensuring seamless deployment, scalability, and maintenance. Join us: https://lnkd.in/gz7BHeXT Don't miss this opportunity to gain valuable insights and stay ahead in the world of Data Science! #MLOps #DataScience #Webinar #Upskilling #MachineLearning
-
🚀 10/90 Days – Journey to Becoming a Data Scientist 🚀 Today marks another step forward in my machine learning journey! I successfully completed Day 10 of my course: I covered lambda functions, learned how writing a function can save time and make code more efficient, and also learned about map() and filter(). Updated notes: https://lnkd.in/dvDJyvq9 Tomorrow's target is to complete the advanced functions assignment. #datascience #learninpublic
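For anyone following along with the same topics, here is a minimal sketch of the three tools mentioned (lambda, map(), filter()) on a made-up list of numbers:

```python
nums = [1, 2, 3, 4, 5, 6]

# lambda: a small anonymous function defined in one expression
square = lambda x: x * x

# map(): apply a function to every element of an iterable
squares = list(map(square, nums))                 # [1, 4, 9, 16, 25, 36]

# filter(): keep only the elements for which the function returns True
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4, 6]

print(squares, evens)
```

Both map() and filter() return lazy iterators in Python 3, which is why the results are wrapped in list() before printing.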
Exciting update! Looking forward to this new format and the deep dives into Databricks and all things data. The bi-weekly topics sound like a fantastic way to stay on top of trends—can't wait!