Today, I tackled Problem No. 180: M-Coloring Problem in my DSA challenge. 🎨

Problem Overview: The task was to determine if we can color an undirected graph using at most 'm' colors such that no two adjacent vertices share the same color.

Approach: To solve this problem, I implemented a backtracking algorithm that works as follows:
1. Recursive Coloring: Start from the first vertex and try to assign one of the 'm' colors. If successful, recursively attempt to color the next vertex.
2. Validation: Before assigning a color to a vertex, ensure that it doesn't conflict with the colors of adjacent vertices.
3. Backtrack: If assigning a color leads to a dead end, backtrack and try the next color.

Key Takeaways:
1. Backtracking is a powerful technique for solving constraint satisfaction problems like graph coloring.
2. Understanding the adjacency matrix representation of graphs is crucial for implementing effective algorithms.
3. This problem illustrates how simple graph problems can have deeper implications in fields like scheduling and resource allocation.

Personal Reflection: This challenge reinforced my understanding of graph theory and algorithm design. I found it intriguing how coloring problems can relate to real-world scenarios. If anyone has alternative strategies or insights on this topic, I'd love to discuss!

#DataStructures #Algorithms #GraphTheory #Backtracking #DSAChallenge
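To make the approach concrete, here's a minimal Python sketch of the backtracking (an illustrative version with an adjacency-matrix input, not my exact submission):

```python
def graph_coloring(adj, m):
    """Return a valid coloring using at most m colors, or None if impossible.

    adj is an adjacency matrix: adj[u][v] == 1 means u and v are adjacent.
    """
    n = len(adj)
    colors = [0] * n  # 0 means "uncolored"; valid colors are 1..m

    def is_safe(v, c):
        # A color is safe if no already-colored neighbor uses it
        return all(not (adj[v][u] and colors[u] == c) for u in range(n))

    def solve(v):
        if v == n:                 # every vertex has a color
            return True
        for c in range(1, m + 1):  # try each color in turn
            if is_safe(v, c):
                colors[v] = c
                if solve(v + 1):
                    return True
                colors[v] = 0      # dead end: backtrack and try the next color
        return False

    return colors if solve(0) else None


# Example: a triangle needs 3 colors
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(graph_coloring(triangle, 2))  # None
print(graph_coloring(triangle, 3))  # e.g. [1, 2, 3]
```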
Rohit Chauhan’s Post
More Relevant Posts
-
𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐭 𝐈𝐧-𝐏𝐥𝐚𝐜𝐞 𝐀𝐫𝐫𝐚𝐲 𝐓𝐫𝐚𝐧𝐬𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐨𝐧 𝐔𝐬𝐢𝐧𝐠 𝐌𝐨𝐝𝐮𝐥𝐚𝐫 𝐀𝐫𝐢𝐭𝐡𝐦𝐞𝐭𝐢𝐜

Today, I tackled an interesting problem: transforming an array in-place without using extra space. Here's a nifty solution using modular arithmetic!

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐭𝐚𝐭𝐞𝐦𝐞𝐧𝐭: Given an array nums of length n that is a permutation of [0, 1, 2, ..., n-1], build a new array such that the element at index i is nums[nums[i]], all in-place.

𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧: To solve this, I used an encoding and decoding technique based on modular arithmetic, which lets one array element store two pieces of information at once.

𝐄𝐧𝐜𝐨𝐝𝐢𝐧𝐠 𝐅𝐨𝐫𝐦𝐮𝐥𝐚: nums[i] = old_value + (new_value % n) * n
Old value: the current value at nums[i].
New value: the desired value nums[nums[i]] (taken modulo n, in case that slot has already been encoded), stored in the higher base-n "digit" by multiplying it by n.

𝐃𝐞𝐜𝐨𝐝𝐢𝐧𝐠 𝐅𝐨𝐫𝐦𝐮𝐥𝐚: new_value = nums[i] // n
Dividing by n isolates the new value stored in the higher digit. The old_value term (which is less than n) only contributes to the remainder and does not affect the quotient, while the (new_value % n) * n term contributes exactly new_value to the quotient.

𝐄𝐱𝐚𝐦𝐩𝐥𝐞: Given nums = [2, 0, 1]:
Encode: nums[0] = 2 + (1 % 3) * 3 = 2 + 3 = 5
Decode: new_value = 5 // 3 = 1

This technique showcases the power of mathematical insights in designing efficient algorithms. Try it out next time you face an in-place array transformation challenge!

#AlgorithmDesign #CodingTricks #ModularArithmetic #InPlaceTransformation #ProgrammingTips
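To make the two passes concrete, here's a minimal Python sketch (an illustrative version; the function name is my own):

```python
def build_array_in_place(nums):
    """Transform nums so that nums[i] becomes nums[nums[i]], using O(1) extra space."""
    n = len(nums)
    # Pass 1: encode. nums[nums[i]] % n recovers the original value at that slot
    # even if it was already encoded earlier in this loop.
    for i in range(n):
        nums[i] = nums[i] + (nums[nums[i]] % n) * n
    # Pass 2: decode. Integer division by n drops the old value.
    for i in range(n):
        nums[i] //= n
    return nums

print(build_array_in_place([2, 0, 1]))  # [1, 2, 0]
```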
-
The One Graph Concept That Stumped Me . . .

Have you ever encountered a concept that just wouldn't click? For me, it was finding an articulation point in graph theory. I spent an entire day trying to understand it, and even now, it feels like a brand-new challenge every time I revisit it! 😅

Here are my top 10 must-know graph algorithms:
1. Depth-First Search (DFS)
2. Breadth-First Search (BFS)
3. Dijkstra's Algorithm
4. Bellman-Ford Algorithm
5. Floyd-Warshall Algorithm
6. Kruskal's Algorithm
7. Prim's Algorithm
8. Kosaraju's Algorithm
9. Kahn's Algorithm
10. Graph Coloring Algorithm

These algorithms are essential for tasks like finding shortest paths, detecting cycles, and more. Each has its unique charm and complexity.

Which graph concept or algorithm has been your toughest nut to crack? Let's discuss and learn from each other! 👇

#Graphs #Algorithms
-
Day 30 🔥 Binary Trees

1️⃣ Zigzag Level Order Traversal
Observation: This is a variation of the standard level order traversal, but the direction of traversal alternates at each level (left to right, then right to left).
Approach: Use a queue to store nodes level by level. For each level, collect node values into an array, inserting them left-to-right or right-to-left depending on the current direction. After processing all nodes in a level, flip the direction. Return the final list of level values after completing the traversal. (A short sketch of this one is below.)

---

2️⃣ Max Path Sum
Observation: The path in the binary tree can start and end at any node. The goal is to find the maximum-sum path between any two nodes.
Approach: Use recursion to explore the left and right subtrees. At each node, calculate the maximum path sum passing through that node using the best downward contributions from the left and right children, and keep track of the maximum path sum globally. The value returned upward from each node is the node's value plus the larger of the left or right child's contribution.

---

3️⃣ Binary Tree from Preorder and Inorder Traversal
Observation: Preorder traversal gives the root node first, and inorder traversal identifies which values belong to the left and right subtrees.
Approach: Build a hash map from value to index in the inorder traversal to quickly find the root's position. Recursively build the left and right subtrees based on the root node's position in the inorder array. Repeat this process to construct the entire tree.

---

#DSA #Algorithms #ProblemSolving #CodingPractice #InterviewPrep #TechInterviews #Placements #BinaryTree #DataStructures
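To make the first approach concrete, here's a minimal Python sketch of the zigzag traversal (illustrative only; the TreeNode class and names are placeholders):

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def zigzag_level_order(root):
    if not root:
        return []
    result, queue, left_to_right = [], deque([root]), True
    while queue:
        level = deque()
        for _ in range(len(queue)):          # process exactly one level
            node = queue.popleft()
            # Append on the right or left depending on the current direction
            if left_to_right:
                level.append(node.val)
            else:
                level.appendleft(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(list(level))
        left_to_right = not left_to_right    # flip direction after each level
    return result

#        1
#      /   \
#     2     3
#    / \      \
#   4   5      6
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3, None, TreeNode(6)))
print(zigzag_level_order(root))  # [[1], [3, 2], [4, 5, 6]]
```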
-
#Day79 of My #CodingJourney in #C++: Exploring Depth-First Traversal in Graphs! 🚀

📌 Today's Challenge: Depth-First Traversal (DFT) Implementation!
We're delving into graph algorithms today, specifically implementing Depth-First Traversal to visit all vertices of a graph. This involves recursion, adjacency lists, and a keen understanding of graph structures.

Code Breakdown
1. Recursive DFT Function (`dft`):
- Purpose: Traverses the graph depth-first from a given vertex.
- Steps:
  - Marks the current vertex as visited.
  - Visits all adjacent unvisited vertices recursively.
  - Uses an iterator to traverse the adjacency list for efficient neighbor access.
2. Driver Function (`depthFirstTraversal`):
- Purpose: Ensures all connected components are visited.
- Steps:
  - Creates a visited array to track visited vertices.
  - Iterates through all vertices, calling `dft` for each unvisited one.
  - Dynamically allocates the `visited` array and cleans up memory after traversal.

Key Learnings
✅ Graph Representation: Using adjacency lists for efficient edge storage and traversal.
✅ Recursive Traversal: Applying recursion to explore depth-first paths in the graph.
✅ Memory Management: Leveraging dynamic memory for arrays and ensuring proper cleanup.
✅ Graph Components: Understanding how to traverse disconnected components by iterating through all vertices.

Looking Ahead
1. Explore Other Traversal Techniques: Breadth-First Traversal (BFT) for level-order exploration.
2. Advanced Applications: Detecting cycles, finding connected components, and topological sorting.
3. Optimize Graph Algorithms: Implement iterative approaches or use explicit stacks for DFT.
4. Master Variants: Practice on directed, undirected, weighted, and unweighted graphs.

This problem underscores the versatility of recursion and the elegance of graph algorithms. Each line of code contributes to building a foundation for tackling more complex graph problems. Let's keep exploring and advancing! 🌟

#DepthFirstSearch #GraphTraversal #Recursion #GraphAlgorithms #DataStructures #100DaysOfCode #DSA
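My implementation is in C++, but the logic is language-agnostic; here's a rough Python sketch of the same two functions (illustrative only, not the actual Day 79 code):

```python
def dft(vertex, adj, visited):
    """Recursive depth-first traversal starting from one vertex."""
    visited[vertex] = True
    print(vertex, end=" ")
    for neighbor in adj[vertex]:      # walk the adjacency list of this vertex
        if not visited[neighbor]:
            dft(neighbor, adj, visited)

def depth_first_traversal(adj):
    """Driver: calls dft for every unvisited vertex so all components are covered."""
    visited = [False] * len(adj)
    for v in range(len(adj)):
        if not visited[v]:
            dft(v, adj, visited)

# Example: two disconnected components {0, 1, 2} and {3, 4}
adj = {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]}
depth_first_traversal(adj)  # prints: 0 1 2 3 4
```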
-
This week I had a chance to read through (what I assume to be the publicly available pre-print of) Modern Likelihood-Frequentist Inference by Donald A. Pierce and Ruggero Bellio: https://lnkd.in/gwDx5fdz They give a historical and mathematical exposition of higher-order likelihood inference. I cannot claim to understand the subject matter, but I am excited to learn more about the topic and to improve my math stats so that I can absorb the material more deeply. I'm personally interested in learning more about this topic because I hear it is a good option to explore for small-sample-size studies.

On the topic of small sample sizes, I often read conversations about the benefits and dangers of using the bootstrap with small samples: https://lnkd.in/gejQRT8w In Modern Likelihood-Frequentist Inference there was a nice paragraph comparing higher-order likelihood to the parametric bootstrap (p. 11):
-
""" HOW TO Animate Linear Algebra Vectors In MatPlotLib """ In my TOWARD coding transformers from scratch series of posts, this is part of the understanding Linear Algebra part better through animating linear algebraic operations. I would LOVE to see some posts from you guys using this to animate some linear algebra processes. I will post as many HOW-TOs before I engage in the same. It's totally OK to leverage from my code (I would appreciate any mentions), and I hope it helps even if just a little. Until Next Time, Thom #data #analytics #datascience #artificialintelligence #linearalgebra
-
🚀 Introducing a Learnable Bias Term, Δ𝑏, in LoRA! 🚀
⚡️ Part 2 of the series "Breaking Down LoRA" ⚡️

After diving deeper into LoRA (Low-Rank Adaptation) and implementing it from scratch in PyTorch (code: https://lnkd.in/dub47uBj), I experimented with a new idea that could potentially enhance its performance even further. Let's take a quick look! 🤔

For a vanilla linear layer, the output is: 𝑦 = 𝑥⋅𝑊ᵀ + 𝑏 (Reference: https://lnkd.in/d5gff8zP).
For a LoRA-adapted linear layer, the output becomes: 𝑦 = 𝑥⋅(𝑊 + (α/𝑟)⋅Δ𝑊)ᵀ + 𝑏

This works well, but I noticed something: during fine-tuning with LoRA, the bias term (𝑏) stays fixed. This means the transformation still pivots around the original pre-trained model's bias, which may limit flexibility.

🛠 My Solution? I introduced a learnable bias term, Δ𝑏, into the LoRA linear layer. It doesn't make sense to scale this term with ΔW's scaling factor, so we add another hyper-parameter, β, making the output:
𝑦 = 𝑥⋅(𝑊 + (α/𝑟)⋅Δ𝑊)ᵀ + (𝑏 + β⋅Δ𝑏)

Since the dimension of Δ𝑏 matches 𝑏, I could have simply unfrozen the original bias. However, to stay consistent with LoRA's core philosophy (preserving the original model and keeping adapters easy to attach/detach), I added it as a separate term. Here's the code: https://lnkd.in/dub47uBj

🏆 The Results?
• A modest improvement in accuracy on average compared to conventional LoRA in my experiments (pretrained on flattened MNIST, fine-tuned on flattened FashionMNIST).
• This comes at the cost of a ~5% increase in trainable parameters.

🧑💻 Check out the notebooks:
1️⃣ Without Δ𝑏: https://lnkd.in/dt5Bf5VM
2️⃣ With Δ𝑏: https://lnkd.in/dg4dq422

Is the accuracy gain worth the extra parameters? 🤔 Feel free to try it yourself and let me know what you think!

📚 Catch up on previous parts:
• Part 1: 🚀 Implementing LoRA (Low-Rank Adaptation) from Scratch in PyTorch! 🚀 https://lnkd.in/dEvGTZwi

#MachineLearning #PyTorch #LoRA #FineTuning #AI #DeepLearning #LLMs #ML #Experimentation
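For readers who don't want to open the notebooks right away, here's a rough PyTorch sketch of what such a layer could look like (illustrative only; the module name, initialization choices, and toy dimensions are my own and may differ from the linked code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinearWithDeltaB(nn.Module):
    """LoRA-adapted linear layer with an extra learnable bias delta (sketch)."""
    def __init__(self, in_features, out_features, r=4, alpha=8.0, beta=1.0):
        super().__init__()
        # Frozen "pre-trained" parameters (random stand-ins here)
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.bias = nn.Parameter(torch.zeros(out_features), requires_grad=False)
        # Trainable low-rank factors: delta_W = B @ A, initialized so delta_W = 0
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        # Trainable bias delta with its own scaling hyper-parameter beta
        self.delta_b = nn.Parameter(torch.zeros(out_features))
        self.scaling = alpha / r
        self.beta = beta

    def forward(self, x):
        # y = x @ (W + (alpha/r) * delta_W)^T + (b + beta * delta_b)
        w = self.weight + self.scaling * (self.lora_B @ self.lora_A)
        b = self.bias + self.beta * self.delta_b
        return F.linear(x, w, b)

layer = LoRALinearWithDeltaB(784, 10)
print(layer(torch.randn(2, 784)).shape)  # torch.Size([2, 10])
```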
-
In Vortex Math, there's a repeating numerical pattern that is based on doubling and digit reduction. The key idea is that numbers follow a certain cycle, and when we keep doubling numbers, they eventually form a repeating sequence. Here's how it works:

Step-by-Step Explanation:
1. Start with 1 and double it: 1 × 2 = 2
2. Double 2: 2 × 2 = 4
3. Double 4: 4 × 2 = 8
4. Double 8: 8 × 2 = 16 → Reduce digits (1 + 6 = 7)
5. Double 7: 7 × 2 = 14 → Reduce digits (1 + 4 = 5)
6. Double 5: 5 × 2 = 10 → Reduce digits (1 + 0 = 1)

As you can see, after doubling 5, the cycle returns to 1. So the pattern repeats itself: 1, 2, 4, 8, 7, 5

The Cycle: This cycle of 1 → 2 → 4 → 8 → 7 → 5 keeps repeating infinitely.

Visual Representation: In the image attached, the numbers 1, 2, 4, 8, 7, 5 are arranged in a circle, connected by lines that show the progression through this doubling pattern. The lines indicate how each number connects to the next in the sequence.

The Importance of 3, 6, and 9: The numbers 3, 6, and 9 do not appear in the cycle. In Vortex Math, these numbers are often considered special or significant. Some interpretations suggest that 3, 6, and 9 represent a higher level of understanding or energy in this system, as they remain outside of the main doubling pattern.

Conclusion: Vortex Math highlights a fascinating recurring pattern that emerges when doubling numbers and using digit reduction. This pattern follows a specific sequence, and it's believed to hold deeper meaning or structure in the universe, with the numbers 3, 6, and 9 often seen as key elements in this system.
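Setting the interpretive claims aside, the arithmetic itself is easy to verify with a few lines of Python (an illustrative sketch):

```python
def digital_root(n):
    # Repeatedly sum the digits until a single digit remains
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

x, sequence = 1, []
for _ in range(12):
    sequence.append(x)
    x = digital_root(2 * x)  # double, then reduce the digits

print(sequence)  # [1, 2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5]
```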
-
3D Clustering with Graph Theory: The Complete Guide
towardsdatascience.com
-
🔍 Understanding the Two-Pointer Technique

The Two-Pointer Technique is a powerful tool for solving array and string problems efficiently. It's particularly useful when dealing with sorted arrays or when you need to find pairs that satisfy certain conditions.

Example: Finding two numbers in a sorted array that sum up to a target value.
Consider the array [1, 2, 3, 4, 5, 6] and a target sum of 7.

Initialize Two Pointers:
left at the start (index 0, value 1)
right at the end (index 5, value 6)

Check the Sum:
array[left] + array[right] = 1 + 6 = 7 (matches the target)
Since the sum equals the target, we've found our pair: (1, 6).

Why it works:
If the current sum is too small, moving left rightwards increases it.
If the current sum is too large, moving right leftwards decreases it.
This ensures we find a matching pair efficiently without checking all possible pairs.

The Two-Pointer Technique optimizes solutions for problems involving sorted arrays, pairs with specific conditions, and more, reducing time complexity significantly.

#DataScience #Algorithms #ProblemSolving #TwoPointerTechnique #CodingTips
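Here's a minimal Python sketch of the full loop, including the pointer movement (illustrative; the function name is my own):

```python
def two_sum_sorted(arr, target):
    """Return a pair from a sorted array summing to target, or None."""
    left, right = 0, len(arr) - 1
    while left < right:
        s = arr[left] + arr[right]
        if s == target:
            return arr[left], arr[right]
        elif s < target:
            left += 1   # sum too small: move the left pointer rightwards
        else:
            right -= 1  # sum too large: move the right pointer leftwards
    return None

print(two_sum_sorted([1, 2, 3, 4, 5, 6], 7))  # (1, 6)
```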