Here's my DuckDB solution to Advent of Code day 10, Hoof It 😋

The solutions for parts 1 and 2 are identical, other than how the score is counted in the final SELECT statement.

Start with the usual exploding of the data into a row per point on the map, and add a helper table for the surrounding points. Next, walk through the map using a recursive CTE: the cross join on the helper table adds a step in the potential route for each direction, then the inner join keeps only the steps that are one height above. Keeping track of the initial X, Y coordinates (the “trailhead”) enables us to group by it in the solution:
• For part 1, just count the distinct final positions for each trailhead, then sum them
• For part 2, just count the total number of routes for each trailhead, then sum them

How did you solve the day 10 problem? #sqlwithbill #sql
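The walk described above can be sketched with a recursive CTE. This is a minimal toy version, not the actual solution: it uses SQLite (via Python) rather than DuckDB, a tiny 4x2 map with heights 0-3 instead of the puzzle's 0-9, and table and column names of my own choosing.

```python
import sqlite3

# Toy 4x2 map, heights 0-3 (the real puzzle uses 0-9).
# Trailhead = height 0; a route steps up by exactly 1 each move.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE map(x INT, y INT, height INT);
INSERT INTO map VALUES
  (0,0,0),(1,0,1),(2,0,2),(3,0,3),
  (0,1,1),(1,1,2),(2,1,3),(3,1,4);
-- Helper table: the four surrounding directions
CREATE TABLE directions(dx INT, dy INT);
INSERT INTO directions VALUES (1,0),(-1,0),(0,1),(0,-1);
""")

part1, part2 = con.execute("""
WITH RECURSIVE walk(tx, ty, x, y, height) AS (
    -- Anchor: every trailhead starts a route at itself
    SELECT x, y, x, y, height FROM map WHERE height = 0
    UNION ALL
    -- Cross join adds a candidate step per direction; the inner
    -- join keeps only steps that are exactly one height above
    SELECT w.tx, w.ty, m.x, m.y, m.height
    FROM walk AS w
    CROSS JOIN directions AS d
    JOIN map AS m
      ON m.x = w.x + d.dx
     AND m.y = w.y + d.dy
     AND m.height = w.height + 1
)
-- Part 1: distinct endpoints; part 2: total routes
-- (with several trailheads you would group by tx, ty, then sum)
SELECT COUNT(DISTINCT x || ',' || y), COUNT(*)
FROM walk
WHERE height = 3
""").fetchone()
print(part1, part2)  # 2 distinct endpoints, 4 routes
```

For the real puzzle, the endpoint filter would be `height = 9` and the map table would come from exploding the input grid into a row per point.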
Bill Wallis’ Post
More Relevant Posts
-
Calling all PS Query/SQL enthusiasts! We're exploring a new HEUG discussion group just for you. Share your thoughts by taking our quick survey – your input will help shape this new space! 🔗 https://bit.ly/3Ass4Y3 #HEUGCommunity #CommunityDiscussion #PSQuery #SQL
-
🚀 Day 17 of the #30DaySQLQueryChallenge: Task Completed! Today's challenge required identifying users who logged in for 5 or more consecutive days, given that a user can log in multiple times during a day. Utilized CTEs, window functions (such as FIRST_VALUE, LAST_VALUE, and RANK), a subquery, and the DATEADD function to achieve the desired outcome. 📊 Challenges curated by Thoufiq Mohammed 🌟 Eager to keep advancing in SQL! #SQL #TechTFQ #ChallengeAccepted 🚀
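The post names SQL Server-style functions (FIRST_VALUE, LAST_VALUE, RANK, DATEADD); as a hedged sketch of the same problem, here is the classic gaps-and-islands approach in SQLite, where subtracting a per-user row number from the login date yields a constant for each consecutive run. The table, columns, and data are my own, not from the original challenge.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE logins(user_id INT, login_date TEXT);
INSERT INTO logins VALUES
  (1,'2024-01-01'),(1,'2024-01-01'),   -- two logins on the same day
  (1,'2024-01-02'),(1,'2024-01-03'),
  (1,'2024-01-04'),(1,'2024-01-05'),
  (2,'2024-01-01'),(2,'2024-01-03');
""")

rows = con.execute("""
WITH days AS (
    -- Collapse multiple logins per day to a single row
    SELECT DISTINCT user_id, login_date FROM logins
),
runs AS (
    -- Date minus row number is constant within a consecutive run
    SELECT user_id,
           DATE(login_date, '-' || ROW_NUMBER() OVER (
               PARTITION BY user_id ORDER BY login_date) || ' days') AS grp
    FROM days
)
SELECT DISTINCT user_id
FROM runs
GROUP BY user_id, grp
HAVING COUNT(*) >= 5
""").fetchall()
print(rows)  # [(1,)] -- only user 1 has a 5-day streak
```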
-
Ever wished you could bring Excel-style pivoting directly into your SQL workflows? Now you can with the new pivot_table extension in DuckDB! With DuckDB’s latest version 1.1, you can now easily create sharable extensions using only SQL macros—no need for complex code. What does this mean for you? - Effortlessly pivot your data, just like you would in Excel, but with the power of SQL! - Wrap C++ libraries (or any other language) directly in DuckDB using community-driven extensions. - Simplify your workflow while boosting your data analysis capabilities. Want to learn more? Check it out: https://lnkd.in/eXJxNJDq #DuckDB #SQLExtensions #DataAnalysis #PivotTables #SQL
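I can't reproduce the pivot_table extension itself here, but the Excel-style pivot it automates can be sketched in plain SQL with conditional aggregation. A minimal SQLite example with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(region TEXT, quarter TEXT, amount INT);
INSERT INTO sales VALUES
  ('North','Q1',100),('North','Q2',150),
  ('South','Q1', 80),('South','Q2',120);
""")

# Pivot quarters into columns, one row per region
rows = con.execute("""
SELECT region,
       SUM(CASE WHEN quarter = 'Q1' THEN amount END) AS q1,
       SUM(CASE WHEN quarter = 'Q2' THEN amount END) AS q2
FROM sales
GROUP BY region
ORDER BY region
""").fetchall()
print(rows)  # [('North', 100, 150), ('South', 80, 120)]
```

The point of the extension is that you don't have to hand-write one CASE expression per column value; the macro generates the equivalent query for you.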
-
🎬 A short video demoing MariaDB's "Temporary Tables". Handy when you need to perform complex #SQL queries and manipulations without permanently altering your #database structure. 👉https://lnkd.in/dqje7RJw
Using Temporary Tables for Intermediate Results and Simplification
https://www.youtube.com/
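A minimal sketch of the idea, shown here in SQLite (MariaDB uses the same CREATE TEMPORARY TABLE syntax): the intermediate result exists only for the current session and never alters the permanent schema. The table names and data are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders(id INT, customer TEXT, total INT);
INSERT INTO orders VALUES (1,'alice',50),(2,'alice',70),(3,'bob',30);

-- Intermediate result: lives only for this session, so the
-- permanent schema is never altered
CREATE TEMPORARY TABLE customer_totals AS
    SELECT customer, SUM(total) AS total
    FROM orders
    GROUP BY customer;
""")

rows = con.execute(
    "SELECT * FROM customer_totals ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 120), ('bob', 30)]
```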
-
Temporary Tables. What are they, when would you use them and how? Alejandro Duarte has you covered! #mariadb #sql #database
-
In SQL there are ranking functions to rank rows. My favorite is ROW_NUMBER(), which assigns a unique number to each row. I use it in conjunction with a CTE and PARTITION BY to quickly remove duplicate rows. The three remaining ranking functions are RANK(), DENSE_RANK(), and NTILE().
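A minimal sketch of the dedup pattern described above, using SQLite and made-up data: number the duplicates within each partition, then keep only the first.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emails(id INT, address TEXT);
INSERT INTO emails VALUES
  (1,'a@example.com'),(2,'a@example.com'),(3,'b@example.com');
""")

rows = con.execute("""
WITH ranked AS (
    -- Number the rows within each group of duplicates
    SELECT id, address,
           ROW_NUMBER() OVER (PARTITION BY address ORDER BY id) AS rn
    FROM emails
)
SELECT id, address FROM ranked WHERE rn = 1 ORDER BY id
""").fetchall()
print(rows)  # [(1, 'a@example.com'), (3, 'b@example.com')]
```

In SQL Server you can DELETE directly from the CTE WHERE rn > 1; SQLite needs a DELETE keyed on rowid instead, so the sketch just selects the survivors.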
-
So I didn't make it this morning. If you are as interested in this topic as I am, you can watch the session on YouTube together with the sessions from the other speakers. Here is the link for Erik's presentation: https://lnkd.in/eMXc6sNY I really don't want to miss this.
I'm proud to announce that I have just completed the Title slide for my presentation 😀 I will be speaking at the #NewStarsOfData conference on April 5, 2024. If you're interested in learning about the internal optimization techniques used by SQL Server Optimizer, particularly with Nested Loops joins, feel free to register. For more information, visit: https://lnkd.in/dn_JNERy
-
DuckDB's duckdb_% metadata functions are a hidden gem 💎 Here's a nice way to work with nested JSON with #duckdb 👇 The duckdb_tables() function exposes a sql column with the definition of each table as a CREATE TABLE statement. Below is an example of grabbing a CREATE TABLE statement, leveraging the automated schema detection when working with nested JSON 🪄
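I don't have DuckDB to hand here, but the same idea (a catalog that exposes each table's definition as CREATE TABLE text) exists in SQLite as sqlite_master, so here is an analogous sketch; in DuckDB you would query duckdb_tables() instead. The table name is invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events(id INTEGER, payload TEXT)")

# The catalog stores each table's definition as CREATE TABLE text
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'events'"
).fetchone()[0]
print(ddl)
```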
-
I’m excited to share that I’ve completed Introduction to SQL on DataCamp! I learned about key SQL keywords like SELECT, WHERE, AS, DISTINCT, and VIEW.
-
By default, all table data in Postgres is physically stored using the “heap” method. So every table is a set of 1 GB files (“segments”), and each file is logically split into 8 kB pages. Actual table rows are put into any page with enough free space. Vacuum and autovacuum are efficient ways to maintain tables without bloat: vacuum goes through all of a table's files looking for row versions that were updated or deleted by already-completed transactions and frees that space on the pages. Read more: https://lnkd.in/eeDVyDMk
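The figures above imply a fixed page count per segment; a quick arithmetic check:

```python
# Worked arithmetic for the storage figures above
SEGMENT_BYTES = 1 * 1024**3      # one 1 GB segment file
PAGE_BYTES = 8 * 1024            # one 8 kB page
pages_per_segment = SEGMENT_BYTES // PAGE_BYTES
print(pages_per_segment)         # each segment holds 131,072 pages
```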
📌 I'll be sharing my DuckDB solutions as I go -- find the full list in the comments of the post below, or check out the #sqlwithbill hashtag to see all my recent posts https://www.linkedin.com/posts/bill-wallis_sqlwithbill-sql-activity-7270430895344336897-0Nuu The full SQL is available here: https://github.com/Bilbottom/advent-of-code/tree/main/advent_of_code/solutions/year_2024/day_10