🔍 A Beginner’s Guide to Threads, Processes, and Context Switching 🔍 Hello, fellow developers! Ever wondered how threads and processes work in computing? Or what context switching means? Check out my latest Medium article, where I break down these concepts with easy-to-understand examples! https://lnkd.in/eB6FhZyc In this article, I explain threads, processes, and context switching with simple, relatable examples, making it perfect for beginners and for those looking to refresh their knowledge. #Threads #Processes #ContextSwitching #Multithreading #Concurrency #SoftwareDevelopment
Orkhan Mikayilov’s Post
More Relevant Posts
-
Senior .NET Developer 👉 I provide white label software development services to software agencies | Software Design, Database Design, Docker, Microservices
Concurrency vs Parallelism (part 1)

💎 Concurrency: dealing with lots of things at once
💎 Parallelism: doing lots of things at once

---------------------------

✔️ Concurrency refers to the ability of a system to handle multiple tasks simultaneously, but it doesn’t necessarily mean that those tasks are being executed at the same exact moment.
✔️ It’s more about task switching — multiple tasks are interleaved, and progress appears to happen simultaneously, even if only one task is actually being worked on at any given instant.
Example: think of a single-core CPU handling multiple tasks.

---------------------------

✔️ Parallelism is about actually executing multiple tasks simultaneously.
✔️ In parallelism, two or more tasks are executed at the same time in different threads, typically on different CPU cores or processors.
✔️ Parallelism is focused on doing many tasks at exactly the same time.
Example: on a multi-core CPU, parallelism happens when different tasks are assigned to different CPU cores, so each core runs its assigned task at the same time.

-----------------------------------------------------------------------------------

#Concurrency #Parallelism #Multithreading #AsynchronousProgramming #PerformanceOptimization #HighPerformanceComputing #Threading
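The distinction above can be made concrete in Go by pinning the scheduler to one core and then releasing it. This is a minimal sketch, not from the original post; the helper names (`busyWork`, `runTasks`) and the iteration counts are my own illustrative choices, and the actual timing difference depends on your hardware.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// busyWork burns CPU so the scheduler has something real to divide up.
func busyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i % 7
	}
	return sum
}

// runTasks launches `tasks` CPU-bound goroutines and waits for all of them.
func runTasks(tasks int) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < tasks; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			busyWork(50_000_000)
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	// Concurrency only: one execution context, the four tasks interleave.
	runtime.GOMAXPROCS(1)
	fmt.Println("1 core: ", runTasks(4))

	// Parallelism: the same tasks can truly run at the same time on multiple cores.
	runtime.GOMAXPROCS(runtime.NumCPU())
	fmt.Println("N cores:", runTasks(4))
}
```

On a multi-core machine the second run typically finishes noticeably faster, because the work is actually happening simultaneously rather than being interleaved on one core.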
-
Two Threads, One Core: How Simultaneous Multithreading Works Under the Hood https://lnkd.in/eQTs42Mu
-
What do you think about this: Two Threads, One Core: How Simultaneous Multithreading Works Under the Hood (https://lnkd.in/gTJJmjhE)
-
Senior Platform Engineer | Senior Cloud Infrastructure Engineer | Data Platform Engineer | AWS | GCP | Kubernetes | Terraform | Golang | Kafka | Flink | Java
I think that outside of data structures and algorithms, which are fundamentals, what people really need to wrap their heads around is the difference between "in memory" and "on disk." This distinction is profound when building low-latency, high-performance stateful applications.

The challenge you deal with most of the time in stateful applications is how much you can keep in memory, and how you ensure low latency while guaranteeing fault tolerance and high availability — balancing reads from memory against reads from disk. Memory is volatile: no matter how much speed you get, if you don't flush to disk, you have no way to guarantee fault tolerance when the process restarts. At the same time, disk I/O is an expensive operation that can consume all of your CPU cycles if you're not careful. You can't afford to stream every single operation between memory and disk, so you need to buffer your reads and writes and choose your buffer size carefully. What's more, you need to consider the network bandwidth of the bytes transferred and the maximum transmission unit within your network.

I'll keep it short here. But a deeper understanding of in-memory and on-disk operations is really crucial. #softwareengineering #programming #coding #cloudcomputing
-
Two Threads, One Core: How Simultaneous Multithreading Works Under the Hood
blog.codingconfessions.com
-
Polyglot Software Engineer (primarily JavaScript/Node) - I write 20% of code to bring 80% of the profit
Caught up with this today: Two Threads, One Core: How Simultaneous Multithreading Works Under the Hood (https://lnkd.in/gVPrEtEy)
-
In the fourth edition of "Computer Organization and Design", published in 2008, Patterson & Hennessy discussed multicore processors and programmers' understanding of hardware:

“Modern computer technology requires professionals of every computing specialty to understand both hardware and software. The interaction between hardware and software at a variety of levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas in computer organization and design are the same. … The recent switch from uniprocessor to multicore microprocessors confirmed the soundness of this perspective, given since the first edition. While programmers could ignore the advice and rely on computer architects, compiler writers, and silicon engineers to make their programs run faster or be more energy efficient without change, that era is over. For programs to run faster, they must become parallel. While the goal of many researchers is to make it possible for programmers to be unaware of the underlying parallel nature of the hardware they are programming, it will take many years to realize this vision. Our view is that for at least the next decade, most programmers are going to have to understand the hardware/software interface if they want programs to run efficiently on parallel computers.”

Nowadays, with the availability of Golang (announced in 2009, version 1.0 released in 2012) and Elixir (initial commit in 2009, version 1.0 released in 2014), it seems that researchers have been quite successful in making parallel programming easier than ever. However, the prediction made by Patterson & Hennessy was not so precise in terms of timing — everything happened much faster. https://lnkd.in/eEZg9b23 https://lnkd.in/eKMzZc4d
-
Recently, I came across this post: https://lnkd.in/epEHzRGj. It reminded me of when I was building low-level applications and had to take microprocessor cycles and memory usage into consideration. As a mobile developer, you always need to think of users with low-spec devices, where performance and memory are key considerations. What's your experience with performance when developing your applications? I'll read you in the comments.
"Clean" Code, Horrible Performance
computerenhance.com
-
I realized that I need to read "Project Oberon: The Design of an Operating System, a Compiler, and a Computer" for inspiration for the native OS for the 8/16-bit retro computer I want to build. The original book was released in 1992, and it was updated in 2013 to target a custom RISC CPU implemented in an FPGA, rather than the NS32032, which lost out to the Motorola 680x0 and to RISC chips. The language, OS, and compiler are all open source. I suspect the Oberon system arrived too late to make an impact because users and developers already had too much invested in more complex and more powerful OSes. Being single-tasking and single-user, with a tiled text-window UI, it was probably too weird to catch on. Smalltalk and Lisp suffered similar fates, despite the enthusiasm of their communities. I'm mostly interested in the underlying architectural decisions, because I think the way to make users comfortable is to give them something more familiar, like an MS-DOS lookalike command interpreter with multiple virtual desktops and the flexibility to build any GUI on top. But there are still details I need to sort out on my own, like how linking and loading should work. https://lnkd.in/gX_6_QFw
Oberon (operating system) - Wikipedia
en.wikipedia.org
-
Goroutines

A Goroutine in Go (Golang) is a lightweight thread managed by the Go runtime. It allows functions to run concurrently, though not necessarily in parallel, as the Go runtime multiplexes Goroutines onto multiple OS threads for better performance.

Key points about Goroutines in Go:

1. Concurrency: Goroutines enable concurrent programming in Go, allowing multiple functions to execute independently and asynchronously.

2. Lightweight: Goroutines are lightweight compared to system threads, making it practical to create thousands of Goroutines in a single application.

3. Syntax: Goroutines are created using the "go" keyword followed by a function call. For example:

```go
go functionName()
```

4. Asynchronous Execution: Goroutines execute asynchronously, allowing the main program to continue running without waiting for the Goroutine to finish.

5. Communication: Goroutines communicate with each other using channels, which are Go's built-in synchronization primitives.

6. Efficient Scheduling: The Go runtime scheduler manages the execution of Goroutines, scheduling them onto available OS threads based on the number of CPU cores.

Overall, Goroutines in Go provide a convenient and efficient way to achieve concurrency and parallelism in Go programs.
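Points 3–5 above can be combined into one small, runnable sketch: a pool of goroutines that receive work and send results over channels. The names (`square`, `sumSquares`) and the worker count are my own illustrative choices, not anything from the post.

```go
package main

import "fmt"

// square reads jobs from `in` and sends results on `out`.
// The channels themselves provide the synchronization: no locks needed.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
}

// sumSquares fans work out to `workers` goroutines and collects the results.
func sumSquares(nums []int, workers int) int {
	in := make(chan int)
	out := make(chan int)

	// Launch a small pool of goroutines with the `go` keyword.
	for i := 0; i < workers; i++ {
		go square(in, out)
	}

	// Feed jobs from another goroutine so main can keep receiving below.
	go func() {
		for _, n := range nums {
			in <- n
		}
		close(in) // closing `in` ends each worker's range loop
	}()

	// Receive exactly one result per job, in whatever order they finish.
	sum := 0
	for range nums {
		sum += <-out
	}
	return sum
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3, 4}, 3)) // 1+4+9+16 = 30
}
```

Note the sum is the same regardless of how the scheduler interleaves the workers — the channels make the result deterministic even though the execution order is not.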
(Reverse) Android Engineer
3mo · The best explainer of all time! Perfect article 👍