Stack-based Algorithms: Stacks in DFS and Backtracking #algorithms #apalgorithms #stack #dfs #backtracking #dfsalgo #algorithmmastery #algorithmupdates #algorithmicthinking #algotrading https://lnkd.in/gYnkWrDF
Anshul Pal’s Post
-
Quick Sort: Fast Sorting in DSA

Quick Sort is a highly efficient divide-and-conquer sorting algorithm that works by choosing a "pivot" element and partitioning the array around it. Elements smaller than the pivot go to the left, and larger ones go to the right. This process is repeated recursively, leading to an average time complexity of O(n log n).

Key Points:
- Partitioning: splits data into sub-arrays around a pivot.
- Efficiency: great for large datasets, and often faster than merge sort in practice because it sorts in place.
- Applications: common in sorting databases, search engines, and any scenario needing efficient data organization.

#QuickSort #Algorithms #Sorting #DataStructures #Efficiency #Coding
Code: https://lnkd.in/darTTuKW
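The partition-and-recurse scheme described above can be sketched as follows. This is a minimal illustration (Lomuto-style partition with the last element as pivot); the function names are my own, not from the linked code.

```python
# Quick sort sketch: partition around a pivot, then recurse on both halves.

def partition(arr, lo, hi):
    """Place arr[hi] (the pivot) at its final sorted position:
    smaller elements end up to its left, larger ones to its right."""
    pivot = arr[hi]
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    return i + 1

def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort; average O(n log n), worst case O(n^2)."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)   # sort elements left of the pivot
        quick_sort(arr, p + 1, hi)   # sort elements right of the pivot

data = [9, 3, 7, 1, 8, 2]
quick_sort(data)
print(data)  # [1, 2, 3, 7, 8, 9]
```

Note the in-place swaps: no auxiliary arrays are allocated, which is the main practical advantage over merge sort mentioned above.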
-
Get 50% OFF (code: LLM50) on our LLM Interview Prep Course - https://lnkd.in/grTzEtpH

🚀 MemoRAG: A Revolutionary RAG Framework for Enhanced Evidence Retrieval!

MemoRAG is transforming the way we approach retrieval-augmented generation (RAG) by incorporating a super-long memory model for global understanding across massive datasets. 🌐 Unlike traditional RAG frameworks, MemoRAG doesn't just focus on explicit queries. Instead, it taps into its global memory to recall query-specific clues, creating responses that are not only more accurate but also contextually rich.

🔥 Key Features 🔥
1. Global Memory: processes up to 1 million tokens in a single context, offering unmatched depth and breadth across data.
2. Optimizable & Flexible: fine-tune and adapt MemoRAG to new tasks with only a few hours of training.
3. Contextual Clues: bridges raw inputs to answers using clues derived from memory, unlocking insights from even the most complex queries.
4. Efficient Caching: up to 30x faster context pre-filling through advanced caching, chunking, indexing, and encoding.
5. Context Reuse: long contexts can be encoded once and reused, improving efficiency for tasks with repetitive data needs.

🆕 Lite Mode of MemoRAG 🆕
You can now experience MemoRAG's powerful pipeline with just a few lines of code! Ideal for GPUs with 16 GiB or 24 GiB of memory, the Lite Mode simplifies getting started while maintaining exceptional performance.

✨ Basic Usage & Model Compatibility
MemoRAG works seamlessly with Hugging Face models, using the MemoRAG.memorize() method to build global memory across long input contexts.

🧠 Long LLMs as Memory Models 🧠
MemoRAG also supports long-context LLMs like Meta-Llama-3.1-8B-Instruct and Llama3.1-8B-Chinese-Chat, optimizing memory through MInference. Check out the provided notebooks and scripts for detailed usage and unlock the power of MemoRAG!

💡 #MemoRAG #RAGFramework #LLMs #GlobalMemory #AI #NLP #MInference #GenerativeAI #MasteringLLM
-
Sparse data refers to datasets in which many feature values are zero. Handling it efficiently is paramount in many fields, especially machine learning, but also in scientific settings such as X-ray diffraction. Blosc2 supports sparse data in the sense that it can encode runs of zeros very efficiently at several levels of the format (blocks, chunks, and frames). This is why it can (in combination with the Shuffle filter and the Zstd codec) compress significantly better than bitshuffle+(LZ4|Zstd), and *much* better than canonical sparse representations (COO, CSR, CSC, or BSR) for sparse data coming from X-ray diffraction. See our results on slides 25-30 of our report: https://lnkd.in/dj63wys7
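The underlying effect (long runs of zeros are extremely compressible) is easy to demonstrate. The sketch below uses stdlib zlib as a stand-in, since it illustrates the principle without the Blosc2 dependency; it is not the Blosc2 API, and Blosc2's block/chunk/frame-level zero-run encoding goes well beyond what a generic codec does.

```python
# Illustration of why mostly-zero data compresses so well, using zlib
# as a stand-in codec (assumption: this is NOT Blosc2, just the principle).
import random
import zlib

n = 100_000
random.seed(42)
dense = bytes(random.getrandbits(8) for _ in range(n))  # high-entropy data

sparse = bytearray(n)              # all zeros...
for i in range(0, n, 1000):
    sparse[i] = 255                # ...with an occasional nonzero value

dense_ratio = len(dense) / len(zlib.compress(dense, 9))
sparse_ratio = len(sparse) / len(zlib.compress(bytes(sparse), 9))
print(f"dense  compression ratio: {dense_ratio:.1f}x")   # close to 1x
print(f"sparse compression ratio: {sparse_ratio:.1f}x")  # orders of magnitude
```

Random bytes barely compress at all, while the mostly-zero buffer shrinks by orders of magnitude; specialized handling of those zero runs is what lets Blosc2 beat canonical sparse formats in the report linked above.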
Blosc2 and Efficient Sparse Data Handling
blosc.org
-
Data Structures and Algorithms: Queue from Stacks! Today's challenge was to implement a queue by manipulating two stacks. Essentially, you push and pop elements from one stack to the other and back, reversing their order so that the queue's First In, First Out (FIFO) behavior emerges from the stacks' LIFO behavior. It is mostly a theoretical exercise for building intuition about these data structures, though the same two-stack idea does appear in practice, for example in purely functional queue implementations. #softwareengineer #justkeepcoding #algorithms
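The pour-one-stack-into-the-other trick described above can be sketched like this (a minimal version using Python lists as stacks; names are illustrative). Each element is moved between stacks at most once, so enqueue and dequeue are amortized O(1).

```python
# Queue built from two stacks: "_in" receives pushes, "_out" serves pops.
# Pouring _in into _out reverses element order, turning LIFO into FIFO.

class QueueFromStacks:
    def __init__(self):
        self._in = []    # newest element on top
        self._out = []   # oldest element on top

    def enqueue(self, x):
        self._in.append(x)

    def _shift(self):
        # Only refill _out when it is empty, so order is preserved.
        if not self._out:
            while self._in:
                self._out.append(self._in.pop())

    def dequeue(self):
        self._shift()
        return self._out.pop()   # raises IndexError on an empty queue

    def peek(self):
        self._shift()
        return self._out[-1]

q = QueueFromStacks()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.peek())  # 1 2 3
```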
-
TIL #duckdb uses Adaptive Radix Trees as one of its two key indexing methods...

At their core, radix trees optimize for space by compressing paths where nodes have only a single child. Instead of wasting memory on unnecessary branches, they collapse them, reducing storage overhead compared to a traditional trie. Some databases (like DuckDB) use radix trees instead of B-trees or skip lists to optimize fast in-memory lookups. Unlike copy-on-write tries, radix trees typically perform in-place updates to save space.

Here's why radix trees are great for point queries:

Efficient Memory Usage: unlike a standard trie, radix trees compress nodes. Instead of storing a node per character, unambiguous paths are compacted, which reduces memory usage.

Fast Prefix Lookups: the structure is very fast for prefix-based lookups. If you're implementing something like an autocomplete feature, radix trees let you quickly find all entries that start with a given prefix.

Minimized Overhead for Sparse Data: since radix trees only store key prefixes where necessary, they handle sparse key sets well, with far fewer nodes than a plain trie.

Here's a simple breakdown of how DML operations behave with radix trees:

Insertion: works like a trie, but nodes are merged wherever possible. This keeps the tree shallow and fast to search.

Reads: efficient because unnecessary intermediate nodes are skipped, making prefix matching quick.

One thing to keep in mind: while radix trees are great for memory efficiency, they can add implementation complexity in edge cases, especially when handling node splits during insertions or deletions.

#dataengineering #databases #algorithms
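The path compression and edge splitting described above can be sketched in a toy radix tree. This is illustrative only: DuckDB's ART additionally uses adaptive node sizes (Node4/16/48/256) and binary-comparable keys, none of which appear here.

```python
# Minimal radix (compressed prefix) tree: insert + exact search.
# Each edge carries a whole string label, not a single character.

class Node:
    def __init__(self):
        self.edges = {}       # first char of label -> (label, child)
        self.terminal = False # True if a key ends at this node

def _common(a, b):
    """Length of the common prefix of a and b."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

def insert(root, key):
    node = root
    while True:
        if not key:
            node.terminal = True
            return
        edge = node.edges.get(key[0])
        if edge is None:
            leaf = Node()
            leaf.terminal = True
            node.edges[key[0]] = (key, leaf)  # collapse whole suffix into one edge
            return
        label, child = edge
        k = _common(label, key)
        if k == len(label):                   # edge fully matched: descend
            node, key = child, key[k:]
        else:                                 # split the edge where keys diverge
            mid = Node()
            mid.edges[label[k]] = (label[k:], child)
            node.edges[key[0]] = (label[:k], mid)
            node, key = mid, key[k:]

def search(root, key):
    node = root
    while key:
        edge = node.edges.get(key[0])
        if edge is None:
            return False
        label, child = edge
        if not key.startswith(label):
            return False
        node, key = child, key[len(label):]
    return node.terminal

root = Node()
for w in ("roman", "romane", "romanus", "rub"):
    insert(root, w)
print(search(root, "romane"), search(root, "roma"))  # True False
```

The split branch is exactly the edge-case complexity the post warns about: inserting "rub" next to "roman" forces the "roman" edge to split at "r".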
-
🌟 Day 109: Advancing in the DSA with Development Journey 🌟 Hello everyone, welcome to Day 109 of my Development Journey in advancing Data Structures and Algorithms. Today, I solved the "largest odd number in a string" problem. Will see you on day 110! #DataStructures #optimal #day109 #Algorithms #Arrays #ProblemSolving #SoftwareEngineering #ContinuousLearning #LinkedIn
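The post doesn't show its solution, but one common approach to this problem (given a string of digits, return the largest-valued odd number that is a substring) is a single right-to-left scan: a longer prefix is always a larger number, and only the last digit decides parity.

```python
# Largest odd number in a digit string: the answer is the prefix ending
# at the rightmost odd digit (or "" if no odd digit exists).

def largest_odd_number(num: str) -> str:
    for i in range(len(num) - 1, -1, -1):
        if int(num[i]) % 2 == 1:
            return num[: i + 1]
    return ""

print(largest_odd_number("52"))     # "5"
print(largest_odd_number("4206"))   # ""  (no odd digit)
print(largest_odd_number("35427"))  # "35427"
```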
-
https://lnkd.in/g7YcJVuF A key factor of this book and its associated implementations is that all #algorithms (unless otherwise stated) were designed by us, using the theory of the algorithm in question as a guideline. #datastructures #computerscience #datascience
Data Structures and Algorithms: Annotated Reference with Examples
freecomputerbooks.com
-
🌟 Day 36: Advancing in the DSA with Development Journey 🌟 Hello everyone, welcome to Day 36 of my Development Journey in advancing Data Structures and Algorithms. Today, I solved a problem on arrays: finding the largest and smallest number in an array. Will see you on day 37! #DataStructures #Algorithms #Arrays #ProblemSolving #DevelopmentJourney #SoftwareEngineering #ContinuousLearning #LinkedInLearning
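The post doesn't include its code, but the standard approach is a single O(n) pass that tracks both extremes at once (function name here is illustrative):

```python
# Find the smallest and largest element in one traversal.

def min_max(arr):
    if not arr:
        raise ValueError("empty array")
    smallest = largest = arr[0]
    for x in arr[1:]:
        if x < smallest:
            smallest = x
        elif x > largest:
            largest = x
    return smallest, largest

print(min_max([3, 7, 1, 9, 4]))  # (1, 9)
```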
-
🌟 Day 85: Advancing in the DSA with Development Journey 🌟 Hello everyone, welcome to Day 85 of my Development Journey in advancing Data Structures and Algorithms. Today, I solved the problem of finding how many times a sorted array has been rotated. Will see you on day 86! #DataStructures #optimal #day85 #Algorithms #Arrays #ProblemSolving #DevelopmentJourney #SoftwareEngineering #ContinuousLearning #LinkedInLearning
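The post doesn't show its solution, but a common O(log n) approach rests on one observation: the rotation count of a sorted array equals the index of its minimum element, which binary search can locate (this sketch assumes distinct elements).

```python
# Rotation count of a rotated sorted array = index of the minimum element.

def count_rotations(arr):
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] > arr[hi]:   # minimum lies strictly to the right of mid
            lo = mid + 1
        else:                    # minimum is at mid or to its left
            hi = mid
    return lo

print(count_rotations([15, 18, 2, 3, 6, 12]))  # 2  (rotated twice)
print(count_rotations([1, 2, 3, 4]))           # 0  (not rotated)
```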
-
🚨 Open Intent Classification library just got updated to version 0.0.4 🚨 I intend the library to be the go-to library for intent classification tasks by providing several easy-to-use implementations.

Changelog:
1. Added an OpenAI-based classifier

Next:
1. Add a DSPy example
2. Add additional open-source LLMs

https://lnkd.in/dhpdhx5n
open-intent-classifier
pypi.org