***** Polymorphic methods in Scala *****

Concept: Methods in Scala can be parameterized by type as well as by value. The syntax is similar to that of generic classes: type parameters are enclosed in square brackets, while value parameters are enclosed in parentheses.

Code:

def listOfDuplicates[A](x: A, length: Int): List[A] = {
  if (length < 1) Nil
  else x :: listOfDuplicates(x, length - 1)
}

println(listOfDuplicates[Int](3, 5)) // List(3, 3, 3, 3, 3)
println(listOfDuplicates("Ha", 3))   // List(Ha, Ha, Ha)

Explanation: The method listOfDuplicates takes a type parameter A and value parameters x and length, where x is of type A. If length < 1, we return the empty list; otherwise we prepend x to the list of duplicates returned by the recursive call. (The :: operator prepends the element on its left to the list on its right.)

In the first example call, we supply the type parameter explicitly by writing [Int], so the first argument must be an Int and the return type is List[Int]. The second call shows that you don't always need to provide the type parameter explicitly: the compiler can often infer it from context or from the types of the value arguments. Here, "Ha" is a String, so the compiler infers that A is String.

Reference: Scala Docs. Watch it: https://lnkd.in/g-q3MMfV

#polymorphic #methods #scala #programming #language #coding #learn #together
-
👉 Day 56/100 of solving LeetCode || DSA problems in Scala.

Problem Name ➡ 543. Diameter of Binary Tree
https://lnkd.in/gqvjyzwq

Problem Description -
Given the root of a binary tree, return the length of the diameter of the tree. The diameter of a binary tree is the length of the longest path between any two nodes in the tree. This path may or may not pass through the root. The length of a path between two nodes is represented by the number of edges between them.

Approach for the Solution -
1. Initialize a variable diameter to zero and define the function diameterOfBinaryTree that resets diameter and calls longestPath on the root.
2. In longestPath, return zero if the node is null.
3. Recursively compute the longest paths for the left and right children.
4. Update the diameter with the maximum of its current value and the sum of the left and right longest paths, then return the height of the current node as one plus the maximum of the two paths.

Code in Scala -

object Solution {
  var diameter = 0

  def diameterOfBinaryTree(root: TreeNode): Int = {
    diameter = 0
    longestPath(root)
    diameter
  }

  def longestPath(root: TreeNode): Int = {
    if (root == null) return 0
    val leftLongestPath = longestPath(root.left)
    val rightLongestPath = longestPath(root.right)
    diameter = Math.max(diameter, leftLongestPath + rightLongestPath)
    1 + Math.max(leftLongestPath, rightLongestPath)
  }
}

✌ Keep Learning.

#scala #tree #datastructures #leetcode
-
Why settle for just one programming language when you can solve a problem in six different ones? New post: https://lnkd.in/gQcvSKih in which I tackle a puzzle with a similar approach in R, Dyalog APL, Julia, Haskell, Python, and Rust, with a bonus solution in J. If you can spot improvements, have a different approach, or would like to add a solution in another language, let me know! #R #Rstats #APL #Julia #Haskell #Python #Rust
-
👉 Day 83/100 of solving LeetCode || DSA problems in Scala.

Problem Name ➡ 2696. Minimum String Length After Removing Substrings
https://lnkd.in/g6xw_r8j

Problem Description -
You are given a string s consisting only of uppercase English letters. You can apply some operations to this string where, in one operation, you can remove any occurrence of one of the substrings "AB" or "CD" from s. Return the minimum possible length of the resulting string that you can obtain.

Approach for the Solution -
1. Initialize an empty stack.
2. Iterate over each character in the string.
3. If the stack is empty, or if the top element and the current character do not form a pair ('A' with 'B' or 'C' with 'D'), push the current character onto the stack; otherwise, pop the top element from the stack.
4. Return the size of the stack, which represents the minimum length of the string after removing valid pairs.

Code in Scala -

object Solution {
  def minLength(s: String): Int = {
    val stack = scala.collection.mutable.Stack[Char]()
    for (char <- s) {
      if (stack.isEmpty) stack.push(char)
      else if (stack.top == 'A' && char == 'B') stack.pop()
      else if (stack.top == 'C' && char == 'D') stack.pop()
      else stack.push(char)
    }
    stack.size
  }
}

✌ Keep Learning.

#scala #tree #datastructures #leetcode
-
New blog post: Primer on row-major▤ and column-major▥ ordering of matrices and how it affects performance. 🚀 We walk you through implementations of both orderings in Mojo🔥, illustrating how performance changes between implementations and why to choose one ordering over another. We also compare the performance of Mojo implementations of matrix operations vs. NumPy! (guess which is faster 😇) https://lnkd.in/gAAucSCS #mojo #machinelearning #python #artificialintelligence #ai #engineering #ml #matrix #programming #llms #startup #news
Modular: Row-major vs. column-major matrices: a performance analysis in Mojo and NumPy
modular.com
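The layout difference the post describes can be seen directly in NumPy, which supports both orderings. This is an illustrative sketch of my own, not code from the linked article:

```python
import numpy as np

# The same 2-D data stored two ways: order="C" is row-major
# (each row contiguous in memory), order="F" is column-major
# (each column contiguous in memory).
a_c = np.zeros((4, 3), order="C")
a_f = np.zeros((4, 3), order="F")

# Strides (bytes to step along each axis) reveal the layout: with
# float64 (8 bytes), row-major steps 8 bytes between neighbors in a
# row, column-major steps 8 bytes between neighbors in a column.
print(a_c.strides)  # (24, 8) -> row-major
print(a_f.strides)  # (8, 32) -> column-major
```

Traversing the data in the order that matches its memory layout keeps accesses within cache lines, which is where the performance gap measured in the article comes from.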
-
Take a modern language (a Python superset) running on modern compiler technology (LLVM MLIR) and get a speed boost of over 25x over row-major (C-based) memory and over 4.7x over column-major (Fortran-based) memory in #numpy. Modular's #Mojo is looking pretty 🔥🔥🔥!!! I can't wait to get some more time to work on implementing geospatial functions in this! Look out #GeoAI, #Mojo🔥 can save you time (and money 💰💰💰) developing and running algorithms! To me, that's something worth running #GEOINT on! Hey Modular, any chance we can get this on Red Hat flavored Linux builds?!?!? 🤷♂️
-
🚀 When it comes to training large language models or processing vast amounts of data, computational efficiency is key! 💡 Check out the different levels of processing efficiency to learn how to turbocharge your projects:

1️⃣ Nested For Loops: While intuitive, nested loops quickly become computationally inefficient on large data sets, since every iteration pays Python interpreter overhead.

2️⃣ Vectorization: Keeping your data in NumPy arrays can significantly reduce that overhead and speed up calculations, because the heavy lifting runs in optimized, compiled loops instead of the interpreter.

3️⃣ Numba: Libraries like Numba take Python functions and compile them to machine-level code, rivaling the speed of compiled languages like C. Numba also provides tools to access GPUs for large-scale parallel processing and even more speed.

💻 Ready to unlock true processing speed? Take a look at my full article for a comparison of these techniques as well as example code of how to implement them! https://lnkd.in/epVr59zS

#quantitativefinance #python #numpy
Achieving High Performance in Python
http://rileydunnaway.wordpress.com
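To make the first two levels concrete, here is a minimal sketch (my own illustration, not code from the article) comparing a plain Python loop with its vectorized NumPy equivalent. A Numba version would simply decorate the loop function with numba's @njit:

```python
import numpy as np

def sum_of_squares_loop(xs):
    # Level 1: explicit Python loop - interpreter overhead per element.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_vec(xs):
    # Level 2: vectorized - multiply and sum run in NumPy's C loops.
    a = np.asarray(xs, dtype=np.float64)
    return float(np.dot(a, a))

data = list(range(1_000))
assert sum_of_squares_loop(data) == sum_of_squares_vec(data)
print(sum_of_squares_vec(data))  # 332833500.0
```

On small inputs the two are indistinguishable; the vectorized version pulls ahead as the array grows, because its per-element cost is a compiled loop rather than bytecode dispatch.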
-
Choosing between lists and arrays in Python can be crucial depending on your use case. Here's a quick comparison to help you decide which one to use:

Dynamic Size:
Lists: Can grow or shrink dynamically.
Arrays: NumPy arrays have a fixed size at creation; resizing allocates a new array.

Data Types:
Lists: Can store elements of different data types.
Arrays: Store elements of a single data type.

Built-in Methods:
Lists: Extensive set of methods (append(), remove(), sort(), etc.).
Arrays: Limited methods, focused on numerical operations.

Memory Usage:
Lists: More flexible but less memory-efficient.
Arrays: More memory-efficient for large numerical data.

Syntax:
Lists: Defined using square brackets ([]).
Arrays: Defined using the array module from the standard library or the numpy package.

Performance:
Lists: Slower for numerical operations.
Arrays: Faster for numerical operations thanks to optimized C implementations.

Example:
Lists: lst = [1, 'apple', 3.14]
Arrays: arr = array('i', [1, 2, 3]) or np.array([1, 2, 3])

Library:
Lists: Built-in.
Arrays: Require the array module or third-party packages like numpy.

Use Cases:
Lists: General-purpose, suitable for collections of items that can include mixed data types and require dynamic resizing.
Arrays: Best for numerical data processing where performance and memory efficiency are critical, especially with libraries like numpy.

Understanding these differences can help you write more efficient and effective Python code. Happy coding!

#Python #Coding #Programming #Lists #Arrays #DataScience #PythonProgramming #TechTips #SoftwareDevelopment #MachineLearning #AI #BigData #PythonTips #CodeNewbie
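A short sketch illustrating a few of the differences above (the variable names are my own):

```python
import sys
from array import array

import numpy as np

lst = [1, "apple", 3.14]        # list: mixed types, resizable
arr = array("i", [1, 2, 3])     # array module: one typecode ('i' = int)
np_arr = np.array([1, 2, 3])    # numpy: homogeneous, vectorized ops

# A typed array stores raw machine values, while a list stores
# pointers to full Python objects, so the typed array is more compact.
nums = list(range(1000))
typed = array("i", nums)
print(sys.getsizeof(typed) < sys.getsizeof(nums))  # True on CPython

# The type constraint is enforced: appending a non-int raises TypeError.
try:
    arr.append("apple")
except TypeError:
    print("arrays reject mixed types")
```

The memory gap widens with element count, which is why the array module and numpy are preferred for large numerical workloads.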
-
The past two days have been incredibly insightful and packed with learning. We delved deep into the foundations of compiler design, focusing on:

Lexer and Lexical Analysis: The first crucial step in designing a compiler. A lexer sees a program as a continuous sequence of characters without any structure. Lexical analysis breaks this sequence into the smallest meaningful units, called lexemes, which are classified into tokens; the exact token set depends on the language (C++, Python, Java, etc.). This step also removes whitespace and comments from the input.

Regular Expressions, DFA, and NDFA: We learned how regular expressions are used in lexical analysis, and how deterministic (DFA) and non-deterministic finite automata (NDFA) play a pivotal role in tokenizing the input program.

Tokens and Lexemes: Lexemes are grouped into common token classes that play similar roles during syntax analysis. For example, in the expression x = 10;, the lexemes x, =, 10, and ; each become tokens.

Parser and Semantic Analysis: We touched upon the basics of parsing and semantic analysis, learning how lexemes are grouped to form larger structures. This grouping is crucial for syntax analysis.

Abstract Syntax Tree (AST): We learned how parsers generate an AST, a tree representation of the abstract syntactic structure of source code. Each node of the tree denotes a construct occurring in the source code.

Lex and Yacc Files: These tools generate lexical analyzers and parsers, respectively. We had hands-on experience writing, understanding, and executing code using various commands.

Learning to see a program as just a string of characters and then breaking it down into meaningful components has been a fascinating experience. The detailed lab experiments allowed us to see the code closely, understand it, write it, and execute it using different commands, making the learning process highly practical and engaging.
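The lexeme-to-token grouping described above can be sketched with a toy lexer. This is a from-scratch illustration using Python's re module, not the Lex code from the lab:

```python
import re

# A toy lexical analyzer for statements like "x = 10;": each token
# category (NUMBER, IDENT, ASSIGN, SEMI) is a regular expression, and
# whitespace is skipped - mirroring the lexeme -> token grouping and
# whitespace removal described above.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("ASSIGN", r"="),
    ("SEMI",   r";"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code):
    tokens = []
    for m in MASTER.finditer(code):
        if m.lastgroup != "SKIP":   # drop whitespace, as lexers do
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 10;"))
# [('IDENT', 'x'), ('ASSIGN', '='), ('NUMBER', '10'), ('SEMI', ';')]
```

A real lexer generated by Lex compiles such patterns into a single DFA, which is what makes tokenization run in one pass over the input.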
I'm excited for the upcoming sessions, where we will dive deeper into parsing, semantic analysis, code generation techniques, and further enhance our understanding of compiler design. 🌐 Thank you for following my journey! Stay tuned for more updates. 🚀 #ACMIndia #SummerSchool #NVIDIA #AI #MachineLearning #Compilers #PCCOE
-
#AI #AIThisWeek 𝗦𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝘁𝗼 𝗡𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿 : https://lnkd.in/guxfrUSM

𝗣𝗬𝗧𝗛𝗢𝗡 𝗧𝗜𝗣 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗠𝘂𝗹𝘁𝗶𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗣𝗼𝗼𝗹

Python's multiprocessing library enables parallel task execution by creating a separate process for each task, effectively utilizing multiple CPU cores. This approach bypasses the Global Interpreter Lock (GIL), allowing tasks to run concurrently. The library includes Process and Pool classes for managing processes, and it supports inter-process communication through pipes and queues. By dividing work into independently executable units, multiprocessing can significantly reduce execution times for parallelizable tasks.

from multiprocessing import Pool
import time

def process_request(request):
    time.sleep(1)
    return f"Processed {request}"

requests = ['req1', 'req2', 'req3', 'req4', 'req5']

# Sequential processing
start = time.time()
results_seq = [process_request(req) for req in requests]
print(f"Sequential: {time.time() - start:.2f} seconds")
# Output: Sequential: 5.00 seconds

Running the same work in parallel speeds up execution. Here is how you can do it using multiprocessing.Pool (on platforms that spawn worker processes, such as Windows and macOS, this must run under an if __name__ == "__main__": guard):

# Concurrent processing with multiprocessing Pool
start = time.time()
with Pool(5) as p:
    results_pool = p.map(process_request, requests)
print(f"Pool: {time.time() - start:.2f} seconds")
# Output: Pool: 1.11 seconds
-
Time series #forecasting (again) with #machinelearning in #python "The novelty of Stumpy is its matrix profile computation. The matrix profile enables the quick identification of motifs (recurring patterns), anomalies (outliers), and shapelets (discriminative subsequences) within time series data." https://lnkd.in/e8jkP9R4
Stumpy: A Powerful and Scalable Python Library for Modern Time Series Analysis
https://www.marktechpost.com
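To give a feel for what a matrix profile is, here is a deliberately naive pure-NumPy sketch of the idea (Stumpy's optimized routines compute this at scale; the toy series and exclusion-zone choice below are my own):

```python
import numpy as np

def naive_matrix_profile(ts, m):
    # Brute-force sketch of the matrix-profile idea: for each length-m
    # window, record its z-normalized Euclidean distance to the nearest
    # OTHER window (skipping overlapping, trivially similar neighbors).
    # Stumpy's routines compute the same quantity far more efficiently.
    n = len(ts) - m + 1
    windows = np.array([ts[i:i + m] for i in range(n)], dtype=float)
    z = (windows - windows.mean(axis=1, keepdims=True)) / windows.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    excl = m // 2                                  # exclusion zone
    for i in range(n):
        d = np.linalg.norm(z - z[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf  # skip trivial matches
        profile[i] = d.min()
    return profile

# A repeated sine pattern with one spike (anomaly) at index 64:
ts = np.concatenate([np.sin(np.linspace(0, 4 * np.pi, 64)), [5.0],
                     np.sin(np.linspace(0, 4 * np.pi, 64))])
mp = naive_matrix_profile(ts, 8)
# Windows covering the spike have no close match anywhere else, so
# they carry the largest profile values - this is anomaly detection.
print(int(mp.argmax()))
```

Repeated motifs show up as near-zero profile values (each occurrence has a close match elsewhere), while anomalies stand out as peaks, which is exactly the property the quoted passage describes.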