🙋 Going to PyConDE & PyData Berlin this year? Don't miss this session by Marie-Kristin Wirsching, data scientist at inovex GmbH, on how to build engaging data stories with Streamlit and Snowflake! Explore a proof-of-concept tracing a data story from conception to its implementation as a Streamlit app in Snowflake, using open source datasets from Deutsche Bahn. 📊 👉 https://buff.ly/3vM50kX #Python #AI #DataScience
Streamlit’s Post
More Relevant Posts
-
Turning complex data insights into actionable narratives is a key way to deliver value. But presenting data stories visually, and keeping those stories up to date as the data changes, remains a challenge. Find out how Snowflake and Streamlit make it easier to build engaging data applications in Marie-Kristin's talk. There is no better way to learn what these tools are capable of than to hear it from someone whose passion is driven by AI and NLP, and that is Marie-Kristin. #pycon #berlin #inovex
Next Stop: Insights! How Streamlit and Snowflake Power Up Data Stories Marie-Kristin Wirsching PyConDE & PyDataBerlin 2024 conference
2024.pycon.de
-
How to Create Well-Styled Streamlit Dataframes, Part 1: Using the Pandas Styler: Streamlit and the pandas Styler object are not friends. But, we will change that! Continue reading on Towards Data Science » #MachineLearning #ArtificialIntelligence #DataScience
How to Create Well-Styled Streamlit Dataframes, Part 1: Using the Pandas Styler
towardsdatascience.com
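The article itself is behind the link, but the general idea it teases (handing a pandas Styler object to st.dataframe) can be sketched roughly like this; the dataframe and the styling rules are invented for illustration, and exact rendering depends on your pandas and Streamlit versions:

```python
# Rough sketch: handing a pandas Styler to st.dataframe so Streamlit renders
# the formatting. The data and styling rules are invented for illustration.
import pandas as pd
import streamlit as st

df = pd.DataFrame(
    {
        "product": ["A", "B", "C"],
        "revenue": [1200, 850, 1430],
        "growth": [0.12, -0.03, 0.08],
    }
)

styler = (
    df.style
    .format({"revenue": "{:,.0f}", "growth": "{:+.1%}"})   # number formatting per column
    .highlight_max(subset=["revenue"], color="lightgreen")  # highlight the best revenue
)

st.dataframe(styler, use_container_width=True)
```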
-
🚀 Day 15 of My Data Structures Learning Challenge: Arrays! 🚀 Today, I continued my exploration of arrays by tackling the challenge of concatenation of arrays. This involves combining two or more arrays into a single, larger array. 📚 Key Takeaway: This challenge reinforced my understanding of array manipulation and the importance of efficient algorithms for combining data. By exploring different approaches to concatenate arrays, I gained insights into the trade-offs between time and space complexity. Challenge of the Day: I successfully implemented a solution to concatenate two arrays into a single, larger array using both iterative and recursive methods. This exercise helped me strengthen my problem-solving skills and explore the efficiency of different approaches. 💡 Looking forward to tomorrow's challenge, where I'll continue to explore more advanced array manipulations and delve deeper into other data structures! If you have any tips, suggestions, or resources related to arrays or other data structure topics, I would love to hear from you! Let's continue this learning journey together! 💪 #100DaysChallenge #LearningJourney #DataStructures #Arrays #CodingChallenge #ContinuousLearning #TechGrowth #leetcode
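For reference, concatenating two arrays along the lines the post describes might look something like this. This is a rough sketch, not the author's actual solution:

```python
# Rough sketch (not the author's solution): concatenating two arrays
# without built-in helpers, once iteratively and once recursively.

def concat_iterative(a, b):
    """Append every element of a, then every element of b, into a new list. O(n + m) time and space."""
    result = []
    for x in a:
        result.append(x)
    for x in b:
        result.append(x)
    return result

def concat_recursive(a, b):
    """Peel one element off the front of a until it is empty, then hand over to b."""
    if not a:
        return list(b)
    return [a[0]] + concat_recursive(a[1:], b)  # extra copying makes this O(n^2) with O(n) call depth

print(concat_iterative([1, 2, 3], [4, 5]))  # [1, 2, 3, 4, 5]
print(concat_recursive([1, 2, 3], [4, 5]))  # [1, 2, 3, 4, 5]
```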
-
🚀 Excited to share insights from 'DAY 9' where I explored powerful NumPy function tricks that streamline problem-solving. 💡 From optimizing array operations to leveraging advanced techniques, these tricks are game-changers for efficiency and productivity. In this session, I deep-dived into various NumPy functions that not only simplify complex computations but also enhance code readability and performance. Each function revealed new possibilities and strategies to tackle data challenges with precision. 🔍 Curious about these NumPy function tricks? Join me in discovering how they can elevate your data science toolkit. Let's empower our problem-solving skills together! #DataScience #NumPyTricks #PythonProgramming #ProblemSolving #LinkedInLearning
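The post doesn't name which functions it covered, so purely as an illustration, here are a few of the NumPy idioms such tricks usually involve: vectorized arithmetic, boolean masking, np.where, and argsort.

```python
# Illustrative examples only; the specific functions from the post are not named there.
import numpy as np

data = np.array([3, -1, 7, 0, -5, 12])

squared = data ** 2                      # vectorized arithmetic, no Python loop
positives = data[data > 0]               # boolean masking selects matching elements
clipped = np.where(data < 0, 0, data)    # element-wise conditional replacement
top_two_idx = np.argsort(data)[-2:]      # indices of the two largest values

print(squared, positives, clipped, data[top_two_idx])
```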
-
💡 Everything you need to know about Dataiku 🌐 #Dataiku is a powerful #platform designed to bring data enthusiasts and professionals together. It's where magic happens: raw #data transforms into beautiful, insightful, and impactful stories. What makes Dataiku stand out? It's all about #empowerment and #collaboration: 👨💻 For #datascientists, it's a playground for coding, #machinelearning, and predictive modeling, with support for languages like #Python and R. 📊 For #analysts, it offers intuitive drag-and-drop interfaces to clean, manipulate, and visualize data without a single line of #code. 🤝 For #business professionals, it's a bridge connecting them to the data world, helping make informed decisions without being data experts. Want to get more information? Watch our #video and secure your spot for our next #OpenHouse on April 24.
-
🚀 Day 15 of 100 Days of Data Structures Challenge 🚀 Today, I tackled the "Squares of a Sorted Array" problem, which involved transforming a sorted integer array into another sorted array of squares. Utilizing the two-pointer technique, I efficiently managed to solve this in O(n) time complexity! Key Takeaways: - Understanding the importance of absolute values when dealing with squares. - Leveraging pointers to maintain time efficiency in array operations. - Reinforcing the significance of problem-solving strategies like two-pointer methods. Feeling accomplished and motivated to continue this journey of learning and growth! 💪 #100DaysOfCode #DataStructures #CodingChallenge #ProblemSolving #LearningJourney
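For readers unfamiliar with the problem, a minimal sketch of the two-pointer approach described above could look like this; variable names and the example input are my own:

```python
# Minimal sketch of the two-pointer technique for "Squares of a Sorted Array".

def sorted_squares(nums):
    """Given a sorted array, return the squares in sorted order in O(n)."""
    n = len(nums)
    result = [0] * n
    left, right = 0, n - 1
    # Fill the result from the back: the largest square always comes from one of
    # the two ends, since the largest absolute value sits at an end of a sorted array.
    for i in range(n - 1, -1, -1):
        if abs(nums[left]) > abs(nums[right]):
            result[i] = nums[left] ** 2
            left += 1
        else:
            result[i] = nums[right] ** 2
            right -= 1
    return result

print(sorted_squares([-4, -1, 0, 3, 10]))  # [0, 1, 9, 16, 100]
```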
-
Super excited to share a major milestone for DataPilot: we've surged from 100,000 to 150,000 downloads in just three months! 🎉 We are helping thousands of data and analytics engineers accelerate their work right inside their IDE (Power User Extension), Git, CI/CD (DataPilot Python Package). Whether you are writing dbt models, tests, docs, or need some complex SQL query explanations or translations between different SQL dialects, DataPilot’s AI agents can significantly boost the data development process for you! 🚀 (details in comments) Our deepest thanks to all our users for your active engagement and invaluable feedback 🙏. Your insights empower us to continuously improve and elevate DataPilot to greater heights. #data #analytics #genai
-
🧑💻 Airflow DAGs, or Directed Acyclic Graphs 📉
………………………………………………………………………
DAGs are the fundamental building blocks of Airflow workflows. They define the sequence of tasks your workflow will execute, ensuring everything runs smoothly and dependencies are met.

Think of a DAG as a recipe:
1️⃣ Each task in your workflow is like an ingredient.
2️⃣ The DAG defines the order in which you add these ingredients (tasks) to create the final dish (output).
3️⃣ The acyclic part ensures there are no circular dependencies, preventing your recipe from getting stuck in an infinite loop!

Benefits of using DAGs:
✅ Clarity and organization
✅ Dependency management
✅ Flexibility and reusability

The core components of an Airflow DAG (Directed Acyclic Graph) can be broken down into 3 main parts (a minimal sketch follows below):
❇️ Tasks: These are the individual units of work within your workflow. They represent the specific actions you want Airflow to perform, such as running a Python script, transferring data between systems, or querying a database.
❇️ Dependencies: Dependencies define the order of execution for your tasks. They ensure that tasks only run when their predecessors have completed successfully. You can define dependencies between tasks directly in your DAG code.
❇️ Schedules: Schedules define when your DAG should run. You can set up your DAG to run on a specific schedule (e.g., daily, hourly) or trigger its execution manually. Airflow's scheduler component monitors these schedules and triggers DAG runs accordingly.

Here are some additional components that can be used within a DAG:
🈯️ XComs (XCom stands for "cross-communication"): These are temporary variables used to store and share data between tasks within a DAG run. This allows tasks to pass information to each other, facilitating data exchange throughout the workflow.
🈯️ Variables: Variables are a way to store reusable configurations or values within Airflow. You can access and use these variables within your DAG tasks, promoting code maintainability and simplifying configuration changes.
🈯️ Hooks: Hooks provide connections to external systems or resources. They act as interfaces between your DAG and external services, allowing tasks to interact with databases, cloud storage, or other systems.

Remember, DAGs are designed to be acyclic, meaning there are no circular dependencies between tasks.
Manish Kumar Singh
#ApacheAirflow #DataPipelines #DAGs #UnleashThePowerOfWorkflows #data #sql #dataengineer
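To make the pieces above concrete, here is a minimal, illustrative DAG sketch showing tasks, a dependency, a schedule, and an XCom handoff. The task names, schedule, and payload are invented, not taken from the post:

```python
# Minimal, illustrative Airflow DAG: task names, schedule, and payload are invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Whatever the callable returns is pushed to XCom automatically.
    return {"rows": 42}


def load(**context):
    # Pull the upstream task's XCom to receive the extracted payload.
    payload = context["ti"].xcom_pull(task_ids="extract")
    print(f"Loading {payload['rows']} rows")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the schedule component ("schedule_interval" on older Airflow 2.x)
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # dependency: "load" runs only after "extract" succeeds
```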
-
Exciting. We have the first speakers to announce for the NO SLIDES conference, and I can't wait to watch their talks:

Sarah Levy will talk about "Should we build this in Looker or dbt?" - "This demo proposes a 'reverse flow' approach, automating the integration of business logic crafted by analysts within Looker into dbt"

Glenn Vanderlinden will talk about "The impact of event design on analysis quality" - "Tracking everything you need with a minimal amount of events sounds counter intuitive. But what if I tell you it actually reduces the errors.."

Janos Botyanszki, PhD will talk about "Lifetime Value Modeling with the "Buy Til You Die" framework" - "We will demo the method in python using publicly available data, and the focus will be on the high-level business insights you gain from your data throughout the modeling journey."

Debadyuti Roy Chowdhury will talk about "Magical Data Flows - stream enrich aggregate events in flow" - "We will parse JSON payloads, Apply operations like Filter, Map, Group, Join, Split, Merge etc. We will also aggregate metrics in time windows."

All live demos! All will be awesome. Make sure to sign up for the conference now (even though there are still plenty of days until it starts): https://lnkd.in/dHAbc_Nm

And if you'd like to get featured like below on our website and in a post, make sure to apply as a speaker on the website.