"In this series of blog posts, you will learn how to collect high-level information about a program’s interaction with memory. This process is usually called memory profiling. Memory profiling helps you understand how an application uses memory over time and helps you build the right mental model of a program’s behavior. Here are some questions it can answer: * What is a program’s total memory consumption and how it changes over time? * Where and when does a program make heap allocations? * What are the code places with the largest amount of allocated memory? * How much memory a program accesses every second?" https://lnkd.in/dgFcs9U7
Geraldo Netto’s Post
More Relevant Posts
-
Memory usage and efficiency with Rust https://lnkd.in/e62QT23D
Rust Internals: Memory Usage and Efficiency
trey-rosius.github.io
-
Time for #KirShmWeekly! I know, it has been less than a week, but it's Sunday night -- the perfect time. To say the least, the progress so far is great. It would be impossible (and unreasonable) to tell you about everything, but I'll try my best to give a short overview.

Big news -- fetching the data for CPU, GPU, RAM, MB, and PSU is almost fully finished! Objects of each class are created properly (even though some use temporary solutions), with their properties ready to use. This means the compatibility check is VERY close (next sprint).

There has already been some code refactoring, which showed me how I can further improve my code. Enums turned out to be very helpful: a nice level of type safety in a dynamically typed language. Writing tests has also proven to be life-saving, since it lets you catch some very peculiar mistakes (at some point I got really silly, but thankfully had enough tests to cover it).

More technical details are available on my repository's discussion page (https://lnkd.in/dEVMZT-9). You can check out the source code and the documentation on the corresponding pages (although the wiki is a WIP). Have a great week!
Daily reports · KirillSchmidt AQA_A-level_by-Kirill-Shmidt · Discussion #3
github.com
-
🚀 Day 52 of the #CrackYourPlacement Challenge ✅

Today’s focus was on arrays and caching techniques, which gave me a deeper understanding of window problems and cache implementations. Here's what I worked on:

1. Find the Maximum of Minimums for Every Window Size: this problem asks for the maximum of the minimums over every window size in a given array. It was a challenging problem that pushed me to optimize sliding-window techniques. https://lnkd.in/ddpN5jvT

2. LRU Cache Implementation: implementing a Least Recently Used (LRU) cache is a fantastic way to understand data structures like hash maps and doubly linked lists; it’s widely used in system design and performance optimization (a sketch in Go follows below). https://lnkd.in/dvHPmEkG

Total questions solved so far: 123
GitHub Repo: https://lnkd.in/dxGsbKmD

I’m excited to keep pushing forward and tackling these challenging problems. Let's crack those placements together! 💪

#CrackYourPlacement #CodingChallenge #Day52 #KeepLearning #ThanksArshGoyal #ProblemSolving #PlacementPreparation
Find maximum of minimum for every window size in a given array - GeeksforGeeks
geeksforgeeks.org
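As a sketch of the hash-map-plus-doubly-linked-list design the post describes, here is a minimal LRU cache in Go. The post's own solution isn't shown, so the type and method names below are illustrative, not the author's:

```go
package lru

import "container/list"

// entry is the payload stored in each list node.
type entry struct {
	key, value int
}

// Cache is a fixed-capacity LRU cache: a map gives O(1) lookup,
// and a doubly linked list keeps keys ordered from most to least
// recently used.
type Cache struct {
	cap   int
	items map[int]*list.Element
	order *list.List
}

func New(capacity int) *Cache {
	return &Cache{
		cap:   capacity,
		items: make(map[int]*list.Element),
		order: list.New(),
	}
}

// Get returns the value and marks the key as most recently used.
func (c *Cache) Get(key int) (int, bool) {
	el, ok := c.items[key]
	if !ok {
		return 0, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).value, true
}

// Put inserts or updates a key, evicting the least recently used
// entry when the cache is full.
func (c *Cache) Put(key, value int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() == c.cap {
		oldest := c.order.Back() // least recently used node
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}
```

Both Get and Put are O(1): the map finds the node, and the list maintains recency order. For example, with `c := lru.New(2)`, the sequence `c.Put(1, 1); c.Put(2, 2); c.Get(1); c.Put(3, 3)` evicts key 2, since key 1 was touched more recently.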
-
A while loop is used in this program.
See my C++ program in action
programiz.com
-
A few months ago I found Memray: a modern memory profiling tool for Python. Memray's docs are actually a pretty good primer on Python memory management, and it can also profile native extensions (Pandas, NumPy, etc.). #Python #MemoryManagement #Memray #MemoryProfiling #PythonTools #Programming #Bloomberg #SoftwareDevelopment #TechTools #Coding #PythonDevelopers #ProfilingTools #TechInnovation #OpenSource https://lnkd.in/dXhz99uX
Memory overview
bloomberg.github.io
-
[GO WEEKLY] MASTERING GO PERFORMANCE - eBPF AND PGO OPTIMIZATION TECHNIQUES

While most software operates in user space, profiling tools like pprof can add noticeable overhead. eBPF and PGO offer valuable alternatives for Go performance optimization by tapping into lower-level system insights. In the previous Go Weekly edition, Phat Nguyen, our backend engineer, gave an applied introduction to eBPF and PGO with Go.

- eBPF (Extended Berkeley Packet Filter): originally built for filtering network packets, now used for tracing syscalls, functions, network packets, etc., enhancing system performance, monitoring, and security.
- eBPF is a powerful tool for deep kernel insights with many use cases, including systems programming, observability, and security. For profiling, consider tools like Parca, Pyroscope, or PGO (from Go 1.20).
- PGO (Profile-Guided Optimization), introduced in Go 1.20, feeds profile data collected from a running build back into the compiler to optimize the next build, improving performance by 2-14% from the second build onward (a minimal workflow sketch follows below).
- Grab experimented with PGO on services using TalariaDB, orchestrated services, and a monorepo service.
- Results: TalariaDB service: 10% CPU, 30% memory, and 38% volume usage reduction. Orchestrated service: ~5% reduction, not substantial for the effort.

eBPF and PGO are still relatively new in the Go ecosystem but hold immense potential for performance optimization. Continued experimentation will likely uncover even more innovative applications and best practices.

Read our Go Weekly at: https://lnkd.in/gnbJrH8w

#dwarves #software #Golang

— Dwarves Notes (https://memo.d.foundation/) combine our team’s collective know-hows, R&Ds, and operation approaches. Connect and learn alongside other tech fellows:
- Discord: discord.gg/dwarvesv
- Github: https://lnkd.in/gZZ2eZMu
- Website: https://d.foundation
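For context, a minimal PGO workflow might look like the sketch below. The port, file names, and profile duration are illustrative assumptions, not from the post; only the pprof endpoint and the default.pgo convention come from the standard Go toolchain:

```go
// main.go: expose a pprof endpoint so a CPU profile can be
// collected from live traffic.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// Collect a 30-second CPU profile while the service handles load:
	//   curl -o cpu.pprof "http://localhost:6060/debug/pprof/profile?seconds=30"
	// Then rebuild with that profile:
	//   mv cpu.pprof default.pgo   # picked up by -pgo=auto, the default since Go 1.21
	//   go build ./...
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

The second build is the PGO-optimized one, which is why the gains the post cites apply "from the second build onward".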
-
Understanding the Performance Impact: Pass by Value vs. Pass by Pointer in Go

When I first learned about functions in computer science, I always wondered: what difference does it make whether you pass arguments by value or by pointer? It turns out the difference can be massive, especially when you dive into the assembly level!

Take a look at the stats from my latest benchmarks in Go:
• fooPBV (pass by value) takes 1300 ns/op
• fooPBP (pass by pointer) takes 2.07 ns/op

That is a 99.84% reduction for the pointer-based approach -- without Go compiler optimizations! After applying optimizations like function inlining, both methods perform almost identically. The key difference lies in how memory is handled:
• Pass by value: a new copy of the variable is created in memory, which involves time-consuming data transfers for large values.
• Pass by pointer: only the memory address is copied, which has a fixed size (4 or 8 bytes, depending on whether you're on a 32-bit or 64-bit machine).

So, the next time you're working with large data structures or performance-sensitive code, consider the power of passing by pointer! A benchmark sketch follows below.

You can take a look at the assembly code here: https://lnkd.in/gcTAyjxj

Inspired by Arpit Bhayani's Go lang internals video to implement this.

#GoLang #CodingTips #SoftwareEngineering #TechInsights #ProgrammingTips
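The post's benchmark code isn't shown, so the following is a hedged reconstruction: the fooPBV/fooPBP names come from the post, while the struct size and the //go:noinline directives (to suppress the inlining the post mentions) are my own assumptions. Save it in a _test.go file and run `go test -bench=.`:

```go
package bench

import "testing"

// big is a deliberately large struct so the value copy is measurable.
type big struct {
	data [4096]byte
}

//go:noinline
func fooPBV(b big) byte { return b.data[0] }

//go:noinline
func fooPBP(b *big) byte { return b.data[0] }

func BenchmarkPBV(b *testing.B) {
	v := big{}
	for i := 0; i < b.N; i++ {
		_ = fooPBV(v) // copies all 4 KiB on every call
	}
}

func BenchmarkPBP(b *testing.B) {
	v := big{}
	for i := 0; i < b.N; i++ {
		_ = fooPBP(&v) // copies only an 8-byte pointer (on 64-bit)
	}
}
```

With the //go:noinline directives removed, the compiler is free to inline both calls, and the gap largely disappears, which matches the post's observation about optimizations.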
-
What *is* great Ruby memory management? In this 3-part series, we cover how Ruby memory works, common issues and their causes, and their solutions -- and how Scout's monitoring tool is the best way to go from great... to awesome. https://lnkd.in/g-H52aTb
Ruby memory mastery: a Scout roadmap to monitoring like a pro | part 1
https://www.scoutapm.com
-
Hashes do not always have faster access times than arrays. Using a textbook algorithm to solve a performance problem is a naive way to look at code; you should start using proper tools to understand memory management. The first step of blind optimisation is to reduce the number of copies of data being made. The second step is to use tools like valgrind, strace, etc. to un-blind yourself. Even if an algorithm helps, one should always benchmark it -- see the sketch below.
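To illustrate the "always benchmark" point, here is a minimal Go micro-benchmark of my own construction (not from the post) comparing a hash-map lookup with a linear scan over a small array. On many machines the cache-friendly scan wins at small sizes; results vary with n and hardware, which is exactly why you measure:

```go
package bench

import "testing"

const n = 16 // small collection: a linear scan stays in cache

var (
	keys  [n]int
	table = map[int]int{}
	arr   [n]int
)

func init() {
	for i := 0; i < n; i++ {
		keys[i] = i
		table[i] = i
		arr[i] = i
	}
}

func BenchmarkMapLookup(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = table[keys[i%n]] // hashing plus bucket probe per lookup
	}
}

func BenchmarkArrayScan(b *testing.B) {
	for i := 0; i < b.N; i++ {
		want := keys[i%n]
		for _, v := range arr { // sequential scan over contiguous memory
			if v == want {
				break
			}
		}
	}
}
```

Run with `go test -bench=.` and compare the ns/op columns before deciding which structure your hot path should use.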
-
⚙️ Understanding Go Slices vs. C Dynamic Arrays

In Go, slices are a high-level abstraction over arrays, offering more flexibility and ease of use compared to dynamic arrays in C, which require manual memory management. Here's a comparison between Go slices and dynamic arrays in C:

1. Memory Management:
- Go Slices: memory management is automatic. When a slice grows beyond its capacity, Go allocates a larger underlying array and moves the data to it. Developers don't need to worry about allocating or freeing memory.
- C Dynamic Arrays: you must manually manage memory using functions like `malloc()` and `realloc()`. Done incorrectly, this can lead to memory leaks or crashes due to improper allocation or deallocation.

2. Ease of Use:
- Go Slices: slices come with built-in functions such as `append()`, which seamlessly expands the slice. You also have `len()` and `cap()` to get the length and capacity, making operations straightforward.
- C Dynamic Arrays: you have to manually resize arrays with `realloc()`. Keeping track of the current size and capacity is also up to the developer, which adds complexity and can lead to errors.

3. Safety:
- Go Slices: Go has runtime bounds checking, which prevents out-of-bounds access, improving safety and reducing potential bugs.
- C Dynamic Arrays: accessing memory outside the bounds of the array leads to undefined behavior; neither the compiler nor the runtime enforces bounds checking.

4. Performance:
- Go Slices: while Go abstracts away manual memory management, slices are still efficient. However, when slices grow, the internal resizing can cause temporary performance hits.
- C Dynamic Arrays: dynamic arrays in C can be more performant in certain low-level cases since they give developers full control over memory allocation. However, this requires more effort to optimize.

Conclusion: Go slices provide a simpler, safer, and more user-friendly way to handle dynamic collections of data, automating much of the complexity involved in resizing and memory management. C's dynamic arrays offer more control, but at the cost of increased complexity and risk of errors. A small demo of slice growth follows below.
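As a quick illustration of the automatic growth described above, this minimal Go program (my own example) prints len and cap while append reallocates behind the scenes:

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 2) // len 0, cap 2: room for two ints before a regrow

	for i := 0; i < 8; i++ {
		s = append(s, i) // append reallocates automatically once cap is exceeded
		fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
	}
	// The exact cap sequence depends on the Go version's growth policy,
	// but you will see cap jump (e.g. 2 -> 4 -> 8) while len rises by one.
	// The C equivalent would require tracking size and capacity by hand
	// and calling realloc() at each growth step.
}
```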
Senior Software Engineer at Grasshopper
I love this blog from Denis Bakhvalov! Also, his book and the Perf Ninja challenges are awesome! Learnt a lot from there.