LeetCode 𝟕𝟖. 𝐒𝐮𝐛𝐬𝐞𝐭𝐬 (Asked by Reddit, Twitter)

𝐐̲𝐮̲𝐞̲𝐬̲𝐭̲𝐢̲𝐨̲𝐧̲
Given an integer array nums of unique elements, return all possible subsets (the power set). The solution set must not contain duplicate subsets. Return the solution in any order.

Example 1:
Input: nums = [1,2,3]
Output: [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]

Example 2:
Input: nums = [0]
Output: [[],[0]]

𝐒̲𝐨̲𝐥̲𝐮̲𝐭̲𝐢̲𝐨̲𝐧̲
1. Initialize the result list and an empty subset list, then start the DFS from index 0.
List<List<Integer>> result = new ArrayList<>();
List<Integer> subset = new ArrayList<>();

2. DFS recursive function:
Base case: when the index reaches the length of nums, we have considered every element, so we add a copy of the current subset to the result.
if (index >= nums.length) {
    result.add(new ArrayList<>(subset));
    return;
}

Recursive case:
Include the current element: add nums[index] to the subset and recurse on the next index.
subset.add(nums[index]);
dfs(index + 1, nums, subset, result);

Exclude the current element: backtrack by removing the last added element, then recurse on the next index without it.
subset.remove(subset.size() - 1);
dfs(index + 1, nums, subset, result);

A complete, runnable sketch of this DFS appears at the end of this post.

𝐓̲𝐢̲𝐦̲𝐞̲ ̲𝐚̲𝐧̲𝐝̲ ̲𝐬̲𝐩̲𝐚̲𝐜̲𝐞̲ ̲𝐜̲𝐨̲𝐦̲𝐩̲𝐥̲𝐞̲𝐱̲𝐢̲𝐭̲𝐲̲
The overall time complexity is O(n⋅2^n):
2^n comes from the number of subsets.
n comes from the time to copy each subset when adding it to the result list.

The overall space complexity is O(n⋅2^n):
Number of subsets: there are 2^n subsets.
Space per subset: each subset can hold up to n elements (about n/2 on average).

-------------------------------------------------------------------
Resources:
NeetCode: https://lnkd.in/gtPrxrCN
Greg Hogg: https://lnkd.in/g5Q4sYpz

#dsa #coding #programming #leetcode
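For reference, here is a minimal, self-contained Java version of the DFS described above. The original post only shows fragments; the class name, the dfs helper, and the main method are my own sketch, not the author's exact code.

```
import java.util.ArrayList;
import java.util.List;

public class Subsets {
    public List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        dfs(0, nums, new ArrayList<>(), result);
        return result;
    }

    // Include/exclude each element starting from `index`.
    private void dfs(int index, int[] nums, List<Integer> subset, List<List<Integer>> result) {
        if (index >= nums.length) {
            result.add(new ArrayList<>(subset)); // copy the current subset
            return;
        }
        subset.add(nums[index]);                 // choose nums[index]
        dfs(index + 1, nums, subset, result);
        subset.remove(subset.size() - 1);        // backtrack: skip nums[index]
        dfs(index + 1, nums, subset, result);
    }

    public static void main(String[] args) {
        System.out.println(new Subsets().subsets(new int[]{1, 2, 3}));
        // [[1, 2, 3], [1, 2], [1, 3], [1], [2, 3], [2], [3], []]
    }
}
```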
-
#remove_element #leetcode_27

Given an integer array nums and an integer val, remove all occurrences of val in nums in-place. The order of the elements may be changed. Then return the number of elements in nums which are not equal to val.

Consider the number of elements in nums which are not equal to val to be k. To get accepted, you need to do the following things:
Change the array nums such that the first k elements of nums contain the elements which are not equal to val. The remaining elements of nums are not important, and neither is the size of nums.
Return k.

Custom Judge:
The judge will test your solution with the following code:

int[] nums = [...]; // Input array
int val = ...; // Value to remove
int[] expectedNums = [...]; // The expected answer with correct length.
                            // It is sorted with no values equaling val.

int k = removeElement(nums, val); // Calls your implementation

assert k == expectedNums.length;
sort(nums, 0, k); // Sort the first k elements of nums
for (int i = 0; i < actualLength; i++) {
    assert nums[i] == expectedNums[i];
}

If all assertions pass, then your solution will be accepted.

Example 1:
Input: nums = [3,2,2,3], val = 3
Output: 2, nums = [2,2,_,_]
Explanation: Your function should return k = 2, with the first two elements of nums being 2. It does not matter what you leave beyond the returned k (hence they are underscores).

Example 2:
Input: nums = [0,1,2,2,3,0,4,2], val = 2
Output: 5, nums = [0,1,4,0,3,_,_,_]
Explanation: Your function should return k = 5, with the first five elements of nums containing 0, 0, 1, 3, and 4. Note that the five elements can be returned in any order. It does not matter what you leave beyond the returned k (hence they are underscores).

Constraints:
0 <= nums.length <= 100
0 <= nums[i] <= 50
0 <= val <= 100
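The post stops at the problem statement. A minimal two-pointer solution in Java could look like the sketch below (my own illustration, not part of the original post): keep a write index k and copy forward every value that is not val. It runs in O(n) time with O(1) extra space.

```
public class RemoveElement {
    // Keep a write pointer k; copy over every value that is not `val`.
    public int removeElement(int[] nums, int val) {
        int k = 0;
        for (int i = 0; i < nums.length; i++) {
            if (nums[i] != val) {
                nums[k] = nums[i];
                k++;
            }
        }
        return k; // the first k slots now hold the elements not equal to val
    }

    public static void main(String[] args) {
        int[] nums = {3, 2, 2, 3};
        int k = new RemoveElement().removeElement(nums, 3);
        System.out.println(k); // 2, and nums starts with [2, 2, ...]
    }
}
```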
-
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice. You can return the answer in any order.

Example 1:
Input: nums = [2,7,11,15], target = 9
Output: [0,1]
Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].

Example 2:
Input: nums = [3,2,4], target = 6
Output: [1,2]

Example 3:
Input: nums = [3,3], target = 6
Output: [0,1]

Constraints:
2 <= nums.length <= 10^4
-10^9 <= nums[i] <= 10^9
-10^9 <= target <= 10^9
Only one valid answer exists.

Follow-up: Can you come up with an algorithm that is less than O(n^2) time complexity?

SOLUTION: It is based on a HashMap and achieves O(n) time complexity. The code is:

import java.util.HashMap;

public class TwoSum {
    public int[] twoSum(int[] nums, int target) {
        HashMap<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (map.containsKey(complement)) {
                return new int[] { map.get(complement), i };
            }
            map.put(nums[i], i);
        }
        return new int[] {};
    }

    public static void main(String[] args) {
        TwoSum ts = new TwoSum();

        int[] result1 = ts.twoSum(new int[] {2, 7, 11, 15}, 9);
        System.out.println("Example 1: " + result1[0] + ", " + result1[1]);

        int[] result2 = ts.twoSum(new int[] {3, 2, 4}, 6);
        System.out.println("Example 2: " + result2[0] + ", " + result2[1]);

        int[] result3 = ts.twoSum(new int[] {3, 3}, 6);
        System.out.println("Example 3: " + result3[0] + ", " + result3[1]);
    }
}

This is a fine solution for ThinkAI Organization.
-
##OCTOBERSQL: RIGHT JOIN##

🌟 Definition: A RIGHT JOIN returns all rows from the right table and the matched rows from the left table. If there's no match, NULL values are shown for the left table.

🌟 Right Table Focus: The join starts by retrieving all rows from the right table, even if there's no match in the left table.

Syntax:
SELECT columns
FROM table1
RIGHT JOIN table2
ON table1.column = table2.column;

🌟 NULL Values: If there are no matching rows from the left table, the corresponding columns from the left table will contain NULL.

🌟 Key Difference from LEFT JOIN: A LEFT JOIN returns all rows from the left table; a RIGHT JOIN returns all rows from the right table.

🌟 Use Case: It's used when you need to retrieve all rows from the right table, even when there's no corresponding data in the left table.

🌟 Matching Condition: Rows are matched based on a condition specified in the ON clause, typically on a key column (e.g., IDs).

🌟 Order of Tables Matters: In a RIGHT JOIN, the position of the tables determines which side's rows will be fully returned. Switching table order affects the result.

🌟 Performance: RIGHT JOINs can perform similarly to LEFT JOINs but might be less commonly used, as LEFT JOINs are more intuitive for many cases.

🌟 Alternative: You can often rewrite a RIGHT JOIN as a LEFT JOIN by switching the order of tables, achieving the same result.

#SQL #DataScience #Database #DataAnalytics #BigData #DataEngineering #SQLServer #DataManagement #CloudComputing #AI #MachineLearning #Tech #Coding #BusinessIntelligence #SoftwareDevelopment
-
Day 37 🔥 Binary Trees and Heaps

1️⃣ Serialization and Deserialization of a Binary Tree
Observation: Use a breadth-first search (BFS) approach to convert a binary tree into a string (serialization) and reconstruct it back into a binary tree (deserialization). A queue is used to process each node level by level.
Approach:
Serialization: Start with an empty queue and add the root to it. Traverse the tree using BFS. For each node, if it is null, append "n " to the result string; otherwise append its value. Add the left and right children of non-null nodes to the queue. Return the result string, which represents the serialized tree.
Deserialization: If the input string is empty, return null. Split the string into values. Create the root node from the first value and add it to a queue. Process each node by assigning its left and right children based on the subsequent values in the list. Continue this until all nodes are processed. Return the root of the reconstructed binary tree.

2️⃣ Flatten Binary Tree to Linked List
Observation: Convert a binary tree into a linked list by performing a preorder traversal. The nodes should be flattened such that all the left pointers become null and only the right pointers remain, forming a linear linked list.
Approach: Use a helper method finder to perform a preorder traversal and store the nodes in a queue. Start from the root, poll nodes from the queue, and modify the tree by making the right child of each node point to the next node in the queue. Set the left child of each node to null. Continue this until all nodes are processed.

3️⃣ Find Median from Data Stream
Observation: Use two heaps (priority queues) to efficiently find the median from a stream of integers. The max-heap stores the smaller half of the numbers, while the min-heap stores the larger half.
Approach:
Adding a Number: If the max-heap is empty or the new number is smaller than the maximum of the max-heap, add it to the max-heap. If adding the number causes the max-heap to exceed half the size of the total elements, move the largest element from the max-heap to the min-heap. Otherwise, add the number to the min-heap. If the min-heap grows too large, move the smallest element to the max-heap.
Finding the Median: If the total number of elements is even, the median is the average of the maximum from the max-heap and the minimum from the min-heap. If odd, the median is the maximum from the max-heap.

#DSA #BinaryTree #Heaps #Algorithms #CodingPractice #InterviewPrep #TechInterviews #LeetCode #GeeksforGeeks #CodeDaily #ProblemSolving #JobReady #CareerGrowth #Placements
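To make the third problem concrete, here is a minimal two-heap sketch in Java. It is my own illustration: the class and method names follow the usual MedianFinder convention, and the balancing rule is simplified from the post's description by always letting the max-heap hold at most one extra element.

```
import java.util.Collections;
import java.util.PriorityQueue;

public class MedianFinder {
    // Max-heap for the smaller half, min-heap for the larger half.
    private final PriorityQueue<Integer> small = new PriorityQueue<>(Collections.reverseOrder());
    private final PriorityQueue<Integer> large = new PriorityQueue<>();

    public void addNum(int num) {
        small.offer(num);
        large.offer(small.poll());          // keep every element of `small` <= every element of `large`
        if (large.size() > small.size()) {  // let `small` hold the extra element when the count is odd
            small.offer(large.poll());
        }
    }

    public double findMedian() {
        if (small.size() > large.size()) {
            return small.peek();                     // odd count
        }
        return (small.peek() + large.peek()) / 2.0;  // even count
    }

    public static void main(String[] args) {
        MedianFinder mf = new MedianFinder();
        mf.addNum(1);
        mf.addNum(2);
        System.out.println(mf.findMedian()); // 1.5
        mf.addNum(3);
        System.out.println(mf.findMedian()); // 2.0
    }
}
```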
-
#5 Have a look at this basic visual representation of the definition, algorithm, time complexity, advantages and disadvantages of the Merge Sort algorithm.

Merge sort is a divide-and-conquer algorithm that divides the input array into two halves, recursively sorts each half, and then merges the sorted halves to produce the final sorted array.

[MergeSort] MERGE_SORT [DATA, N]
This algorithm sorts the elements in the array DATA using the Merge Sort technique.
1. If N <= 1, return [Base case: array is already sorted.]
2. Set MID := N / 2 [Find the midpoint of the array.]
3. Call MERGE_SORT(DATA[0, MID], MID) [Recursively sort the left half.]
4. Call MERGE_SORT(DATA[MID, N], N - MID) [Recursively sort the right half.]
5. Call MERGE(DATA, MID, N) [Merge the sorted halves.]
[End of merge sort algorithm.]

MERGE [DATA, MID, N]
This function merges two sorted subarrays into one sorted array.
1. Create temporary arrays LEFT[0, MID] and RIGHT[0, N - MID] [Allocate memory for the left and right subarrays.]
2. Copy DATA[0, MID] to LEFT[0, MID] and DATA[MID, N] to RIGHT[0, N - MID] [Copy data to temporary arrays.]
3. Set indices I, J, and K to 0 [Initialize indices for merging.]
4. Repeat while I < MID and J < N - MID:
   a) If LEFT[I] <= RIGHT[J], set DATA[K] = LEFT[I] and increment I and K.
   b) Otherwise, set DATA[K] = RIGHT[J] and increment J and K.
5. Copy any remaining elements of LEFT[] and RIGHT[] to DATA[] if there are any.
6. Free memory for LEFT[] and RIGHT[].
7. Exit [End of merge function.]

OR

[MergeSort] MERGE_SORT [DATA, N]
This algorithm sorts the elements in the array DATA using the merge sort technique.
Step 1: If N <= 1, then exit (base case)
Step 2: Let MID = N / 2
Step 3: Create two temporary arrays LEFT and RIGHT of size MID and N - MID, respectively
Step 4: Copy the first MID elements of DATA to LEFT and the remaining elements to RIGHT
Step 5: Call MERGE_SORT on LEFT with size MID
Step 6: Call MERGE_SORT on RIGHT with size N - MID
Step 7: Merge the sorted arrays LEFT and RIGHT back into DATA
Step 8: Exit

This algorithm recursively divides the array into two halves, sorts them separately, and then merges them together to achieve the sorted array.

Time Complexity:
- Best case: O(n log n)
- Worst case: O(n log n)
- Average case: O(n log n)
The time complexity remains consistent regardless of the input data distribution, making merge sort a reliable choice for sorting large datasets.

Advantages:
1. Stable sorting algorithm (preserves the order of equal elements).
2. Guarantees worst-case time complexity of O(n log n).
3. Well-suited for sorting linked lists as well as arrays.
4. Efficient for large datasets due to its divide-and-conquer approach.

Disadvantages:
1. Not as efficient for small datasets or nearly sorted arrays compared to other sorting algorithms like insertion sort.
2. Recursive nature might lead to stack overflow for very large input sizes (though this can be mitigated by using an iterative version of merge sort).
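For readers who prefer real code over pseudocode, here is a compact Java translation of the steps above. It is a sketch of the standard top-down merge sort, not code taken from the original post; for simplicity it copies the halves with Arrays.copyOfRange rather than passing index ranges.

```
import java.util.Arrays;

public class MergeSort {
    public static void mergeSort(int[] data) {
        if (data.length <= 1) return;                              // base case: already sorted
        int mid = data.length / 2;
        int[] left = Arrays.copyOfRange(data, 0, mid);             // first half
        int[] right = Arrays.copyOfRange(data, mid, data.length);  // second half
        mergeSort(left);
        mergeSort(right);
        merge(data, left, right);
    }

    // Merge two sorted arrays `left` and `right` back into `data`.
    private static void merge(int[] data, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            data[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) data[k++] = left[i++];   // copy any leftovers
        while (j < right.length) data[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        mergeSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 5, 6, 9]
    }
}
```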
-
Count pairs with given sum

Given an array of N integers and an integer K, find the number of pairs of elements in the array whose sum is equal to K.

Example 1:
Input: N = 4, K = 6, arr[] = {1, 5, 7, 1}
Output: 2
Explanation: arr[0] + arr[1] = 1 + 5 = 6 and arr[1] + arr[3] = 5 + 1 = 6.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(N)

Constraints:
1 <= N <= 10^5
1 <= K <= 10^8
1 <= Arr[i] <= 10^6

Approach:
This question becomes easy once we keep the elements we have already visited in a map. While iterating through the array, for each arr[i] we want to know whether its "other half" — the value that adds with arr[i] to give K — has already been seen.

So in the condition we check for (K - arr[i]), because that is the number which, added to arr[i], makes the sum equal to K. If it is found, we add the count stored at mp[K - arr[i]] to our answer. In either case we then add arr[i] to the map, so that future elements whose complement is arr[i] can find it there.

Code:

// User function template for C++
class Solution{
  public:
    int getPairsCount(int arr[], int n, int k) {
        // unordered_map gives average O(1) lookups, matching the expected O(N) time
        // (std::map would work too, but costs O(log N) per lookup)
        unordered_map<int, int> mp;
        int count = 0;
        for (int i = 0; i < n; i++) {
            if (mp.find(k - arr[i]) != mp.end()) {
                count += mp[k - arr[i]];   // every earlier occurrence of (k - arr[i]) pairs with arr[i]
            }
            mp[arr[i]]++;
        }
        return count;
    }
};

count += mp[k - arr[i]]; means that by the time we reach arr[i], there can be several earlier elements equal to (k - arr[i]), and each of them forms a valid pair with arr[i].
-
🚨 Data Structures and #Algorithms Cheatsheet 🚨
(Master these Concepts First...)

🔹 𝗔𝗿𝗿𝗮𝘆𝘀
↳ Contiguous memory allocation
↳ Fixed-size vs. dynamic arrays
↳ Common operations: insert, delete, search

🔹 𝗛𝗮𝘀𝗵 𝗧𝗮𝗯𝗹𝗲𝘀
↳ Key-value pair storage
↳ Hash functions and collisions
↳ Applications: caching, lookups

🔹 𝗟𝗶𝗻𝗸𝗲𝗱 𝗟𝗶𝘀𝘁𝘀
↳ Singly vs. Doubly Linked Lists
↳ Node structure and traversal
↳ Memory-efficient dynamic data structure

🔹 𝗦𝘁𝗮𝗰𝗸𝘀 & 𝗤𝘂𝗲𝘂𝗲𝘀
↳ LIFO (Last In, First Out) - Stack
↳ FIFO (First In, First Out) - Queue
↳ Applications: undo functionality, task scheduling

🔹 𝗧𝗿𝗲𝗲𝘀
↳ Binary Trees and Binary Search Trees
↳ Traversals: Inorder, Preorder, Postorder
↳ Applications: hierarchical data, parsers

🔹 𝗚𝗿𝗮𝗽𝗵𝘀
↳ Directed vs. Undirected Graphs
↳ Adjacency Matrix and List representation
↳ Graph algorithms: Dijkstra’s, A*

🔹 𝗥𝗲𝗰𝘂𝗿𝘀𝗶𝗼𝗻
↳ Base case and recursive case
↳ Applications: backtracking, tree traversal
↳ Common pitfalls: stack overflow, infinite loops

🔹 𝗦𝗲𝗮𝗿𝗰𝗵𝗶𝗻𝗴 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀
↳ Linear Search
↳ Binary Search
↳ Applications: finding elements in datasets

🔹 𝗦𝗼𝗿𝘁𝗶𝗻𝗴 𝗔𝗹𝗴𝗼𝗿𝗶𝘁𝗵𝗺𝘀
↳ Quick Sort
↳ Merge Sort
↳ Applications: ordering and ranking

🔹 𝗕𝗙𝗦 𝘃𝘀. 𝗗𝗙𝗦
↳ Breadth-First Search (queue-based)
↳ Depth-First Search (stack-based)
↳ Use cases: shortest path vs. exploring all paths

🔹 𝗗𝘆𝗻𝗮𝗺𝗶𝗰 𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴
↳ Optimal substructure and overlapping subproblems
↳ Tabulation vs. Memoization
↳ Applications: knapsack, longest subsequence

𝙁𝙤𝙧 𝘿𝙚𝙩𝙖𝙞𝙡𝙚𝙙 𝘾𝙝𝙚𝙖𝙩𝙨𝙝𝙚𝙚𝙩, 𝘊𝘩𝘦𝘤𝘬 𝘵𝘩𝘦 𝘗𝘋𝘍 𝘣𝘦𝘭𝘰𝘸.
PDF Credit :- Zero To Mastery (ZTM)

🔥 𝙄𝙛 𝙮𝙤𝙪 𝙛𝙤𝙪𝙣𝙙 𝙩𝙝𝙞𝙨 𝙝𝙚𝙡𝙥𝙛𝙪𝙡, 🔥
Hit the like button & ♻️ Repost to help someone preparing for their interview!

𝙁𝙤𝙧 𝘿𝙚𝙩𝙖𝙞𝙡𝙚𝙙 𝘿𝙎𝘼 𝙍𝙤𝙖𝙙𝙢𝙖𝙥 𝘊𝘩𝘦𝘤𝘬 𝘰𝘶𝘵 𝘵𝘩𝘪𝘴 :- https://lnkd.in/gs3eqgKY
𝙁𝙤𝙧 𝘼𝙧𝙧𝙖𝙮 #interview 𝙌𝙪𝙚𝙨𝙩𝙞𝙤𝙣𝙨 𝘊𝘩𝘦𝘤𝘬 𝘰𝘶𝘵 𝘵𝘩𝘪𝘴 :- https://lnkd.in/gGd6vCAK
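As a tiny illustration of one entry from the cheatsheet (Searching Algorithms → Binary Search), here is a minimal Java sketch of the standard iterative binary search. It is my own example, not taken from the ZTM PDF.

```
public class BinarySearch {
    // Returns the index of `target` in the sorted array `a`, or -1 if absent.
    public static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // written this way to avoid integer overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 7));  // 3
        System.out.println(binarySearch(sorted, 4));  // -1
    }
}
```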
-
Collections in Rust: Part III

HashSet<T>

A set is a data structure that stores a collection of unique elements, with no duplicates allowed. Sets can be implemented using a variety of data structures, including arrays, linked lists, binary search trees, and hash tables.

A HashSet in Rust is like a HashMap<T, ()>, that is, a HashMap with only keys in it. When you add an element to a HashSet, it is inserted into a HashMap with the key being the element, and the value is set to (). If you insert an element that is already present in the HashSet, then the new value will replace the old (this matches the behavior of HashMap if we add an element with a key that already exists). This is handy especially when you don't need more than one item or when you want to know if you already have an item with a certain value.

The HashSet uses hashing to efficiently determine membership. It relies on the Hash and Eq traits of the key to compute the hash and to compare keys for equality. As with the HashMap type, a HashSet requires that the elements implement the Eq and Hash traits. This can frequently be achieved by using #[derive(PartialEq, Eq, Hash)].

You can create a HashSet using the new() method:
let mut months = HashSet::new();

You can also use the with_capacity() method to specify the capacity that the HashSet can hold without reallocating:
let mut months: HashSet<&str> = HashSet::with_capacity(14);

You can add items, as in HashMap, using the insert() method:
months.insert("January");
months.insert("February");
months.insert("March");
months.insert("April");

You can use the contains() method to check if a value is present in a set. The method returns true if the specified element is present in the set, otherwise it returns false:
println!("{:?}", months.contains("January")); // Output: true
println!("{:?}", months.contains("August")); // Output: false

You can iterate over the HashSet using the .iter() method, but there are two things to keep in mind. First, the iteration is done in an arbitrary order. Second, iterating over a set takes O(capacity) time instead of O(len) because it internally visits empty buckets too.

Example:
for x in months.iter() {
    println!("{x}");
}

You can remove a specific element from the set, if it exists, using the remove() method:
months.remove("March");

If you want to remove all items from the HashSet, use the clear() method. It removes all items from the set, leaving it empty:
months.clear();
-
𝐀𝐩𝐚𝐜𝐡𝐞 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 or 𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞? - Let's break it down and find the winner 🏆

🚀 𝐒𝐜𝐡𝐞𝐦𝐚 𝐄𝐯𝐨𝐥𝐮𝐭𝐢𝐨𝐧:
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Supports renaming, reordering, and dropping columns easily, giving more flexibility for schema changes.
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: Supports schema evolution but with more constraints.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 🏆 - due to greater flexibility in managing schema changes without rewriting data.

📁 𝐏𝐚𝐫𝐭𝐢𝐭𝐢𝐨𝐧𝐢𝐧𝐠:
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Has dynamic partitioning, adapting to changing data patterns and optimizing queries.
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: Relies on static partitions, well-suited for Spark but can be less flexible in complex setups.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 🏆🏆 - due to its dynamic and flexible approach to partitioning.

🔄 𝐒𝐭𝐨𝐫𝐚𝐠𝐞 & 𝐌𝐞𝐭𝐚𝐝𝐚𝐭𝐚 𝐌𝐚𝐧𝐚𝐠𝐞𝐦𝐞𝐧𝐭:
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Uses a central catalog system that works across multiple engines (Spark, Flink, Trino), providing a unified metadata layer.
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: Uses _delta_log files, which are great for Spark but don't work as smoothly across engines.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 🏆🏆🏆 - due to broader compatibility with various data engines and storage systems.

💻 𝐌𝐮𝐥𝐭𝐢-𝐄𝐧𝐠𝐢𝐧𝐞 𝐒𝐮𝐩𝐩𝐨𝐫𝐭:
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Designed for cross-engine compatibility with support for Spark, Flink, and Trino.
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: Closely integrated with Spark, making it ideal for Spark-centric environments but limited with other engines.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 🏆🏆🏆🏆 - especially if you're using or planning to use multiple engines.

🔍 𝐓𝐢𝐦𝐞 𝐓𝐫𝐚𝐯𝐞𝐥 & 𝐕𝐞𝐫𝐬𝐢𝐨𝐧𝐢𝐧𝐠:
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Supports time travel and adds branching and tagging, which helps teams needing more flexibility.
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: Also supports time travel, but without branching and tagging.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐈𝐜𝐞𝐛𝐞𝐫𝐠 🏆🏆🏆🏆🏆 - for branching and tagging capabilities that enhance flexibility in complex workflows.

📊 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲:
𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞: With Unity Catalog on Databricks, Delta Lake offers robust governance and access control features.
𝐈𝐜𝐞𝐛𝐞𝐫𝐠: Unity Catalog integration is coming soon, but Delta Lake has the edge for now in Databricks.
𝐖𝐢𝐧𝐧𝐞𝐫: 𝐃𝐞𝐥𝐭𝐚 𝐋𝐚𝐤𝐞 🏆 - if on Databricks with Unity Catalog.

𝐅𝐢𝐧𝐚𝐥 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲𝐬:
If you're looking for cross-engine compatibility and advanced flexibility, Iceberg wins in most categories. For Spark-focused projects and built-in governance in Databricks, Delta Lake is your best bet.

So what's your experience with Iceberg or Delta Lake? Let's share in the comments!

#dataengineering #iceberg #deltalake
-
Today I tackled a LeetCode problem. 🧠
📄🛑 108. Convert Sorted Array to Binary Search Tree
Solution: https://lnkd.in/gZ-6HgJC

👀🖇️ Given a sorted array `nums`, transform it into a height-balanced binary search tree.

*Example:* 💡
Input: `nums = [-10,-3,0,5,9]`
Output:
      0
     / \
   -3   9
   /   /
 -10   5

*Complexity:*
- Time complexity: O(n)
- Space complexity: O(log n) recursion depth for the recursive approach (not counting the copies made by list slicing), O(n) for the iterative approach

*Hints:*
1. Use recursion to divide the array into smaller sub-arrays.
2. Select the middle element as the root node.
3. Ensure the left and right subtrees are height-balanced.

*Brief Solution:*
```
# Definition for a binary tree node.
class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Solution:
    def sortedArrayToBST(self, nums):
        if not nums:
            return None
        mid = len(nums) // 2                                  # middle element becomes the root
        root = TreeNode(nums[mid])
        root.left = self.sortedArrayToBST(nums[:mid])         # left half -> left subtree
        root.right = self.sortedArrayToBST(nums[mid+1:])      # right half -> right subtree
        return root
```

*Real-time Example:* 🕒💻
Suppose you're building a database indexing system, and you need to efficiently store and retrieve data. Converting a sorted array into a balanced binary search tree allows for:
- Fast search (O(log n))
- Efficient insertion and deletion (O(log n))
- Improved data organization and scalability

*Step-by-Step Solution:*
1. Define the TreeNode class.
2. Create a recursive function `sortedArrayToBST` that takes the input array `nums`.
3. Base case: If `nums` is empty, return `None`.
4. Calculate the middle index `mid`.
5. Create a new TreeNode with the middle element's value.
6. Recursively call `sortedArrayToBST` on the left and right sub-arrays.
7. Assign the resulting nodes to the left and right child pointers of the current node.
8. Return the root node.

*Iterative Approach:* 👇🛠️
```
class Solution:
    def sortedArrayToBST(self, nums):
        if not nums:
            return None
        root = TreeNode(0)                      # placeholder value, filled in below
        stack = [(root, 0, len(nums) - 1)]
        while stack:
            node, left, right = stack.pop()
            mid = (left + right) // 2
            node.val = nums[mid]
            if left <= mid - 1:                 # left sub-range still has elements
                node.left = TreeNode(0)
                stack.append((node.left, left, mid - 1))
            if mid + 1 <= right:                # right sub-range still has elements
                node.right = TreeNode(0)
                stack.append((node.right, mid + 1, right))
        return root
```

This iterative approach uses a stack to simulate recursion, eliminating the need for recursive function calls.