Main Differences Between The Windows and Linux Operating System
Report including supporting examples: Both Linux and Windows are operating systems with their own advantages and disadvantages, and they differ in many areas. At first there was no operating system with a graphical user interface; DOS (Disk Operating System) was the main operating system for IBM PC compatibles during the 1980s and 1990s. Everything changed in 1983, when Microsoft announced the development of Windows, a graphical user interface for its own MS-DOS (Microsoft Disk Operating System). MS-DOS is a non-graphical command-line operating system that is no longer used; however, its command line lives on as the Windows command line and is still used by many users. Linux also uses a command line, similar to that of MS-DOS but far more powerful and flexible, so you can do many more things with it. The Linux command line provides insight into how computers really work, and moreover it is virtually identical to the command line used on every other Unix-like operating system, so learning it means learning several operating systems at once. Windows command line:
The Linux kernel was first created by a student, Linus Torvalds, who wanted to build a new free operating system as an alternative to the already existing Windows. Since then, the main difference between these two operating systems has been the fact that Linux distributions are free, while a Microsoft Windows license typically costs somewhere between $50 and $150 per copy. The graphical user interfaces of Windows and Linux also differ in many ways, starting with the fact that Linux lets you choose your GUI. Linux itself doesn't have a specific GUI; these are separate programs running on top of it, applications that give the OS its desktop features. By contrast, the Windows GUI is an integral component of the OS and is not replaceable. Windows interface:
Zorin OS is the Linux distribution that I use, because its interface is very similar to Windows'. It allows me to use it alongside Microsoft Windows, while giving me access to all my Microsoft Windows files. Another distinctive feature is its four virtual workspaces: you can switch from one to another just as if you had four monitors, and this still doesn't affect its speed.
Another important difference is that there are only around 60 to 100 known viruses for Linux, none of them actively spreading nowadays, while there are more than 60,000 viruses for Windows, and to get rid of them you have to pay about $20 to $400 for an antivirus. Staying in the software field, Linux offers a large variety of programs, utilities, and games, but Windows has a much larger selection of available software: being the more commonly used operating system, most commercial software is compatible only with Microsoft Windows. An example is Microsoft Office, which was designed especially for Windows, while Linux typically uses LibreOffice. Moreover, Linux is open source, which gives you the opportunity to learn how the system works by checking out the code of the kernel and playing around with it to see what will happen. Overall, Linux is an OS for developers, with lots of tools for developing software, managing devices, and system administration, while Windows was designed for home use and has a friendlier user interface.
Report: A typical memory hierarchy starts with a small, expensive, and relatively fast cache, followed by a larger, cheaper, and relatively slow main memory; next in the hierarchy come the far larger, cheaper, and much slower magnetic memories, typically the disk and the tape. The hierarchy's ability to move information into fast memory and access it many times before replacing it is possible due to a phenomenon called locality of reference. Memory hierarchies exploit two forms of locality: spatial (for example, consecutive instructions in a straight-line program) and temporal (for example, an instruction in a program loop). The memory system consists of two broad classes:
- central memory, the working memory;
- auxiliary memory, the secondary memory or backing store.
Both memory types have the following key properties:
- speed, which covers the access time (the average time to reach a storage location and access its contents) and the cycle time (how fast the memory can be accessed on a continuous basis);
- type of access: whether data can be accessed directly or only after crossing over other data;
- density and capacity: how many bits can be stored per memory unit;
- volatility: how long the memory can retain data in a readable form;
- component cost (cost per bit): the faster a memory device is, the more expensive it is;
- power dissipation, with reliability and cost implications.
One strength of a memory hierarchy is that it takes advantage of temporal locality: data is moved from slower memory to faster memory, and unused data is moved from faster memory back to slower memory. Another is that it takes advantage of spatial locality: when a word is moved from slower memory to faster memory, adjacent words are moved at the same time. But I think the most important strength is that a large amount of memory, costing as little as the cheap storage near the bottom, serves data to programs at the rate of the fast storage near the top. To conclude, the performance of a computer system is related directly to its execution time:
CPU time = IC x CPI x clock period
where IC is the number of instructions executed and CPI is the average number of clock cycles required per instruction. The faster the processor, the faster the data and programs have to be accessed and delivered. Therefore, the goal for the coming years is to provide a memory system with a cost almost as low as the cheapest level of memory and a speed almost as fast as the fastest level.
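The CPU-time formula above can be checked with a short worked example. The instruction count, CPI, and clock rate below are illustrative assumptions, not measurements from any real processor.

```python
def cpu_time(instruction_count, cpi, clock_hz):
    """CPU time = IC x CPI x clock period, where clock period = 1 / clock rate.

    Written as IC * CPI / clock_hz so the division is done once, exactly.
    """
    return instruction_count * cpi / clock_hz

# Assumed workload: 1 billion instructions, 1.5 cycles per instruction,
# on an assumed 2 GHz clock (period = 0.5 ns).
ic = 1_000_000_000
cpi = 1.5
clock = 2_000_000_000  # 2 GHz

print(cpu_time(ic, cpi, clock))  # 0.75 seconds
```

Doubling the clock rate in this sketch halves the execution time, which is exactly why the memory system must keep up with the processor, as the paragraph above argues.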
REFERENCES:
Blanchet, Gerard; Dupouy, Bertrand. Computer Architecture, 2013, pp. 156-158.
Newman, Robert; Gaura, Elena; Hibbs, Dominic. Computer Systems Architecture, 2002, pp. 50-55.
Results with supporting examples: The FIFO technique takes the time spent by a block in the cache as the measure for replacement. The block that has been in the cache the longest is selected for replacement, regardless of the recent pattern of access to the block. This technique requires keeping track of the lifetime of each cache block, so it is not as simple as the random selection technique. The basis for block replacement in this technique is therefore the time spent in the cache, rather than the pattern of usage of the block. Intuitively, the FIFO technique is reasonable to use for straight-line programs, where locality of reference is not of concern.
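The FIFO policy described above can be sketched in a few lines. This is a minimal simulation of a fully associative cache, assuming an illustrative cache size and block reference strings of my own choosing; note that a hit does not change a block's position in the FIFO queue, which is exactly the "regardless of the recent pattern of access" property.

```python
from collections import deque

def fifo_misses(block_refs, cache_size):
    """Count misses under FIFO replacement: on a miss with a full cache,
    evict the block that entered earliest, ignoring how recently it was used."""
    cache = deque()   # front = oldest resident block
    resident = set()  # same blocks, for O(1) membership tests
    misses = 0
    for block in block_refs:
        if block in resident:
            continue  # hit: FIFO order is NOT updated on access
        misses += 1
        if len(cache) == cache_size:
            evicted = cache.popleft()  # oldest block leaves first
            resident.remove(evicted)
        cache.append(block)
        resident.add(block)
    return misses

# Straight-line pattern (no re-use): every reference misses, and FIFO
# behaves as well as any other policy.
print(fifo_misses([1, 2, 3, 4, 5], 3))    # 5 misses
# A loop re-visiting recent blocks: hits occur while blocks are resident.
print(fifo_misses([1, 2, 1, 3, 1, 4], 3))  # 4 misses
```

The second reference string shows the weakness the text hints at: block 1 is the most heavily used, yet FIFO evicts it first when block 4 arrives, simply because it entered the cache earliest.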
REFERENCES:
Abd-El-Barr, Mostafa; El-Rewini, Hesham. Fundamentals of Computer Organization and Architecture, 2005, p. 138.