Saturday, June 12, 2010

AN OPERATING SYSTEM

Components
The components of an operating system all exist in order to make the different parts of a computer work together. All software—from financial databases to film editors—needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an internet connection.
 
The user interface


An example of the command line. Each command is typed out after the 'prompt', and then its output appears below, working its way down the screen. The current command prompt is at the bottom.

An example of a graphical user interface. Programs take the form of images on the screen, and the files, folders, and applications take the form of icons and symbols. A mouse is used to navigate the computer.

Every computer that receives some sort of human input needs a user interface, which allows a person to interact with the computer. While devices like keyboards, mice and touchscreens make up the hardware end of this task, the user interface makes up the software for it. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly with windows, buttons, and icons) is present.
 
Graphical user interfaces
Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.

While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS and many others were built this way. GNU/Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however, in versions between Windows NT 4.0 and Windows Server 2003, the graphics drawing routines exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.

Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, GNU/Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.

Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort to standardize in the 1990s to COSE and CDE failed for the most part due to various reasons, eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).

Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.
 
The kernel
A kernel connects the application software to the hardware of a computer.

Outside of firmware, the operating system provides the most basic level of control over the hardware. It manages memory addresses in RAM, it controls which processes access the different modes of the CPU, and it organizes the data on disks with file systems. These services not only streamline the running of many different programs at once on the same hardware; they also ensure that faulty or malicious code cannot damage the rest of the system.
 
Program execution

The operating system acts as an interface between an application and the hardware. The user interacts with the hardware from "the other side". The operating system is a set of services which simplifies development of applications. Executing a program involves the creation of a process by the operating system. The kernel creates a process by assigning memory and other resources, establishing a priority for the process (in multi-tasking systems), loading program code into memory, and executing the program. The program then interacts with the user and/or other devices and performs its intended function.
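
On a Unix-like system this sequence is visible directly in the programming interface. The sketch below, a minimal example assuming a POSIX environment, asks the kernel to create a new process with fork() and then load and run a different program with execlp():

```c
/* Minimal sketch of program execution on a Unix-like system:
   the parent asks the kernel to create a new process (fork),
   and the child replaces its program image (execlp). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* kernel creates a new process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: kernel loads the new program's code into memory. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }
    /* Parent: wait for the child; the kernel then reclaims its resources. */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```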
 
Interrupts
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative—having the operating system "watch" the various sources of input for events (polling) that require action—can be found in older systems with very small stacks (50 or 60 bytes) but is fairly unusual in modern systems with fairly large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.

When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.

When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.

A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may trigger an interrupt, which causes control to be passed to the kernel. The kernel then processes the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
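
On Linux, for instance, this kind of software interrupt is packaged as a system call. The sketch below is Linux-specific; syscall() and SYS_write form the raw interface that the C library's write() normally wraps. It traps into the kernel, which performs the hardware access on the program's behalf:

```c
/* Sketch: a program requesting a kernel service directly.
   On Linux, SYS_write issues a system call -- a software interrupt
   that transfers control to the kernel, which writes to the device
   behind file descriptor 1 (standard output). */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a system call\n";
    /* Trap into the kernel: ask it to write our buffer to stdout. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```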
 
Protected mode, supervisor mode, and virtual modes
Main articles: Protected mode and Supervisor mode



Privilege rings for the x86 available in protected mode. Operating systems determine which processes run in each mode.

Modern CPUs support multiple modes of operation. CPUs with this capability use at least two modes: protected mode and supervisor mode. The supervisor mode is used by the operating system's kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is written and erased, and communication with devices like graphics cards. Protected mode, in contrast, is used for almost everything else. Applications operate within protected mode, and can only use hardware by communicating with the kernel, which controls everything in supervisor mode. CPUs may have other modes similar to protected mode as well, such as the virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.

When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer (the BIOS, the bootloader, and the operating system) have unlimited access to hardware; this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.

In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.

The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally causes a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
 
Memory management

Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.

Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system.

Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU) which doesn't exist in all computers.

In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses will trigger an interrupt which will cause the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation and usually a sign of a misbehaving program, the kernel will generally terminate the offending program and report the error.
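
A tiny sketch, assuming a Unix-like system, makes this concrete: writing through a pointer to an address the kernel never granted triggers the fault, and the default response is termination (the familiar "Segmentation fault" message):

```c
/* Sketch: memory protection in action. Touching an address outside the
   ranges the kernel has granted this process triggers a fault; the kernel
   re-enters supervisor mode and, by default, terminates the program. */
#include <stdio.h>

int main(void) {
    int *forbidden = (int *)0x1;    /* not mapped for this process */
    printf("about to violate memory protection...\n");
    fflush(stdout);
    *forbidden = 42;                /* kernel delivers SIGSEGV: "Seg-V" */
    return 0;                       /* never reached */
}
```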

Windows 3.1 through Windows Me had some level of memory protection, but programs could easily circumvent it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
Virtual memory
Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it were one continuous chunk of memory, called virtual memory.
Main article: Virtual memory

The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.

If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.

When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.

In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
Further information: Page fault
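
The following sketch, assuming a Linux or BSD system where mmap() supports anonymous mappings, shows this lazy allocation in action: the kernel hands out a large virtual range instantly and only attaches physical pages as each one is first touched, servicing a page fault per page behind the scenes:

```c
/* Sketch: virtual memory on a Unix-like system. mmap() reserves a large
   virtual range, but the kernel allocates physical pages lazily: the
   first touch of each page triggers a page fault, which the kernel
   services transparently by mapping a real page in. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;  /* 64 MiB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0xAB, len);           /* each new page faults once, invisibly */
    printf("touched %zu MiB without handling a single fault ourselves\n",
           len >> 20);
    munmap(p, len);
    return 0;
}
```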
 
Multitasking
Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.

An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch.

An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.

The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt: the kernel sets a protected mode timer which triggers a return to supervisor mode after the specified time has elapsed. (See the sections on interrupts and dual mode operation above.)

Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.
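
A rough user-space analogy, assuming a POSIX system, is sketched below: a periodic timer delivers a SIGALRM signal that interrupts the running loop at fixed intervals, much as a hardware timer interrupt wrests control from a running program and returns it to the kernel's scheduler:

```c
/* Sketch of the timed-interrupt idea behind preemptive multitasking:
   a periodic timer "interrupts" this process via SIGALRM, much as a
   hardware timer interrupt returns control to the kernel's scheduler. */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig) {
    (void)sig;
    ticks++;                        /* the "scheduler" gets control here */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = { {0, 100000}, {0, 100000} }; /* 100 ms period */
    setitimer(ITIMER_REAL, &tv, NULL);

    while (ticks < 10)              /* the "running program" */
        pause();                    /* wait for the next interrupt */
    printf("interrupted %d times\n", (int)ticks);
    return 0;
}
```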

On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. Windows NT was the first version of Microsoft Windows to enforce preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).

Wednesday, June 9, 2010

INTERNET PROTOCOL BASICS

The Internet Protocol (or IP) is the main computing protocol that allows for the communication of data across a network. Using this protocol, computers can deliver "packets," or units of data, to other computers and devices based on their unique IP addresses. It is the standard used by home and business computers, routers, browsers, and all networking software, and is the foundation of the Internet Protocol Suite.

Protocols Defined

At its most basic, a protocol is a set of rules that enables two computers to talk to each other. It's a computing standard that defines the syntax and regulations of a connection across a network: how to detect the other computer, how to send a message to it, how to format that message, and so on.
Internet Protocol Addresses

The Internet Protocol uses unique addresses, simply called IP addresses, to identify computers and devices on a network. There are two standards of IP addresses. The most common is IPv4 (Internet Protocol version 4), which consists of four bytes, each represented by a value between 0 and 255 and separated by periods: 127.0.0.1, for example. A newer standard, IPv6, has also emerged; it consists of 16 bytes, resulting in longer addresses and far more possible values. Every networked computer is assigned an IP address, although addresses assigned through a modem or router often change dynamically.
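
A short C sketch, assuming a POSIX system where inet_pton() is available, shows that the dotted-quad text is just a human-readable spelling of those four bytes:

```c
/* Sketch: an IPv4 dotted-quad address is just four bytes. inet_pton()
   converts the text form into the 4-byte binary form used on the wire. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void) {
    struct in_addr addr;
    if (inet_pton(AF_INET, "127.0.0.1", &addr) != 1) {
        fprintf(stderr, "not a valid IPv4 address\n");
        return 1;
    }
    unsigned char *b = (unsigned char *)&addr;  /* the raw four bytes */
    printf("bytes: %u.%u.%u.%u\n", b[0], b[1], b[2], b[3]);
    return 0;
}
```
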
The Internet Protocol Suite

The Internet Protocol Suite is a set of protocols used in combination for different networking tasks. The Internet Protocol and the Transmission Control Protocol (TCP) are the two underlying protocols that all other protocols utilize, so the suite is commonly referred to as TCP/IP. While the Internet Protocol handles the transmission of each packet of data, TCP acts as an overseer, organizing data into packets and handing them to the IP and, on the receiving side, rebuilding files from the individual packets the IP delivered.
Layers of the Internet Protocol Suite

The Internet Protocol Suite is divided into four "layers" of communication: from bottom to top, the Link Layer, the Internet (or Network) Layer, the Transport Layer, and the Application Layer. Occasionally, the actual hardware involved is treated as a bottommost fifth layer, called the Physical Layer. Very simply, the Link Layer links the computers, the Internet Layer allows the IP to transfer packets across the link, the Transport Layer uses TCP to organize the packets, and the Application Layer consists of protocols for specific types of transfers.
Application Layer Protocols

While the Internet Protocol itself is the foundation for network communication, many other protocols you may recognize exist in the topmost Application Layer. These include HTTP (HyperText Transfer Protocol, which allows Web users to request websites from remote servers), FTP (File Transfer Protocol, which allows for the rapid transfer of files across the web), and POP3 and SMTP (Post Office Protocol 3 and Simple Mail Transfer Protocol, two protocols for receiving and sending e-mail).
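
To make the layering concrete, here is a minimal, illustrative sketch in C, assuming a POSIX sockets environment; the host example.com and the hand-written request are placeholders, not any particular application's code. The Transport Layer (TCP) carries the Application Layer protocol (HTTP), while the Internet and Link Layers do their work invisibly underneath:

```c
/* Sketch: an Application Layer protocol (HTTP) riding on TCP/IP. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_INET;          /* IPv4: the Internet Layer */
    hints.ai_socktype = SOCK_STREAM;    /* TCP: the Transport Layer */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;

    /* The Application Layer: a minimal HTTP/1.1 request. */
    const char req[] = "GET / HTTP/1.1\r\nHost: example.com\r\n"
                       "Connection: close\r\n\r\n";
    write(fd, req, sizeof req - 1);

    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* first chunk of the reply */
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```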

Tuesday, June 8, 2010

Introduction to Hardware

What is Hardware?

Your PC (Personal Computer) is a system, consisting of many components. Some of those components, like Windows XP, and all your other programs, are software. The stuff you can actually see and touch, and would likely break if you threw it out a fifth-story window, is hardware.

Not everybody has exactly the same hardware. But those of you who have a desktop system, like the example shown in Figure 1, probably have most of the components shown in that same figure. Those of you with notebook computers probably have most of the same components. Only in your case the components are all integrated into a single book-sized portable unit.
Figure 1
The system unit is the actual computer; everything else is called a peripheral device. Your computer's system unit probably has at least one floppy disk drive, and one CD or DVD drive, into which you can insert floppy disks and CDs. There's another disk drive, called the hard disk, inside the system unit, as shown in Figure 2. You can't remove that disk, or even see it. But it's there. And everything that's currently "in your computer" is actually stored on that hard disk. (We know this because there is no place else inside the computer where you can store information!)
Figure 2
The floppy drive and CD drive are often referred to as drives with removable media or removable drives for short, because you can remove whatever disk is currently in the drive, and replace it with another. Your computer's hard disk can store as much information as tens of thousands of floppy disks, so don't worry about running out of space on your hard disk any time soon. As a rule, you want to store everything you create or download on your hard disk. Use the floppy disks and CDs to send copies of files through the mail, or to make backup copies of important items.

Random Access Memory (RAM)

There's too much "stuff" on your computer's hard disk to use it all at the same time. During the average session sitting at the computer, you'll probably use only a small amount of all that's available. The stuff you're working with at any given moment is stored in random access memory (often abbreviated RAM, and often called simply "memory"). The advantage of using RAM to store whatever you're working on at the moment is that RAM is very fast. Much faster than any disk. For you, "fast" translates to less time waiting and more time being productive.

So if RAM is so fast, why not put everything in it? Why have a hard disk at all? The answer to that lies in the fact that RAM is volatile. As soon as the computer is shut off, whether intentionally or by an accidental power outage, everything in RAM disappears, just as quickly as a light bulb goes out when the plug is pulled. So you don't want to rely on RAM to hold everything. A disk, on the other hand, holds its information whether the power is on or off.

The Hard Disk

All of the information that's "in your computer", so to speak, is stored on your computer's hard disk. You never see that actual hard disk because it's sealed inside a special housing and needs to stay that way. Unlike RAM, which is volatile, the hard disk can hold information forever -- with or without electricity. Most modern hard disks have tens of billions of bytes of storage space on them. Which, in English, means that you can create, save, and download files for months or years without using up all the storage space it provides.
In the unlikely event that you do manage to fill up your hard disk, Windows will start showing a little message on the screen that reads "You are running low on disk space" well in advance of any problems. In fact, that message won't appear until you're down to about 800 MB of free space. And 800 MB of empty space is equal to about 550 blank floppy disks (at 1.44 MB apiece). That's still plenty of room!

The Mouse

Obviously you know how to use your mouse, since you must have used it to get here. But let's take a look at the facts and buzzwords anyway. Your mouse probably has at least two buttons on it. The button on the left is called the primary mouse button, the button on the right is called the secondary mouse button or just the right mouse button. I'll just refer to them as the left and right mouse buttons. Many mice have a small wheel between the two mouse buttons, as illustrated in Figure 3.
Figure 3
 
The idea is to rest your hand comfortably on the mouse, with your index finger touching (but not pressing on) the left mouse button. Then, as you move the mouse, the mouse pointer (the little arrow on the screen) moves in the same direction. When moving the mouse, try to keep the buttons aimed toward the monitor -- don't "twist" the mouse as that just makes it all the harder to control the position of the mouse pointer.
If you find yourself reaching too far to get the mouse pointer where you want it to be on the screen, just pick up the mouse, move it to where it's comfortable to hold it, and place it back down on the mousepad or desk. The buzzwords that describe how you use the mouse are as follows:
  • Point: To point to an item means to move the mouse pointer so that it's touching the item.
  • Click: Point to the item, then tap (press and release) the left mouse button.
  • Double-click: Point to the item, and tap the left mouse button twice in rapid succession - click-click as fast as you can.
  • Right-click: Point to the item, then tap the mouse button on the right.
  • Drag: Point to an item, then hold down the left mouse button as you move the mouse. To drop the item, release the left mouse button.
  • Right-drag: Point to an item, then hold down the right mouse button as you move the mouse. To drop the item, release the right mouse button.

The Keyboard

Like the mouse, the keyboard is a means of interacting with your computer. You really only need to use the keyboard when you're typing text. Most of the keys on the keyboard are laid out like the keys on a typewriter. But there are some special keys like Esc (Escape), Ctrl (Control), and Alt (Alternate). There are also some keys across the top of the keyboard labeled F1, F2, F3, and so forth. Those are called the function keys, and the exact role they play depends on which program you happen to be using at the moment.
Most keyboards also have a numeric keypad with the keys laid out like the keys on a typical adding machine. If you're accustomed to using an adding machine, you might want to use the numeric keypad, rather than the numbers across the top of the keyboard, to type numbers. It doesn't really matter which keys you use. The numeric keypad is just there as a convenience to people who are accustomed to adding machines.
Figure 4
Most keyboards also contain a set of navigation keys. You can use the navigation keys to move around through text on the screen. The navigation keys won't move the mouse pointer. Only the mouse moves the mouse pointer.
On smaller keyboards where space is limited, such as on a notebook computer, the navigation keys and numeric keypad might be one and the same. There will be a Num Lock key on the keypad. When the Num Lock key is "on", the numeric keypad keys type numbers. When the Num Lock key is "off", the navigation keys come into play. The Num Lock key acts as a toggle. Which is to say, when you tap it, it switches to the opposite state. For example, if Num Lock is on, tapping that key turns it off. If Num Lock is off, tapping that key turns Num Lock on.

Combination Keystrokes (Shortcut keys)

Those mysterious Ctrl and Alt keys are often used in combination with other keys to perform some task. We often refer to these combination keystrokes as shortcut keys, because they provide an alternative to using the mouse to select menu options in programs. Shortcut keys are always expressed as:
key1+key2
where the idea is to hold down key1, tap key2, then release key1. For example, to press Ctrl+Esc hold down the Ctrl key (usually with your pinkie), tap the Esc key, then release the Ctrl key. To press Alt+F you hold down the Alt key, tap the letter F, then release the Alt key.

Sunday, June 6, 2010

AN OPERATING SYSTEM

Mainframes
Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupt, buffering, multitasking, spooling, and runtime libraries. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704 and IBM 709 mainframes.

In 1964, IBM produced the System/360 family of mainframe computers, available in widely differing capacities and price points, for which a single operating system, OS/360, was provided, eliminating costly, incompatible, ad-hoc programs for every individual model. This concept of a single OS spanning an entire product line was crucial for the success of System/360 and, in fact, IBM's current mainframe operating systems are distant descendants of this original system; applications written for OS/360 can still be run on modern machines. In the mid-1970s, MVS, the descendant of OS/360, offered the first implementation of using RAM as a transparent cache for data.

OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When the process is terminated for any reason, all of these resources are re-claimed by the operating system.

An alternative CP-67 system started a whole line of operating systems focused on the concept of virtual machines.

Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).

Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.

In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility proved to be more significant.

The enormous investment in software for these systems, made since the 1960s, caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
  • Burroughs MCP: B5000, 1961, to Unisys ClearPath/MCP, present
  • IBM OS/360: IBM System/360, 1966, to IBM z/OS, present
  • IBM CP-67: IBM System/360, 1967, to IBM z/VM, present
  • UNIVAC EXEC 8: UNIVAC 1108, 1967, to OS 2200 on Unisys ClearPath Dorado, present
 
Microcomputers

PC-DOS was an early OS for personal computers that featured a command line interface.


Mac OS by Apple Computer became the first widespread OS to feature a graphical user interface. Many of its features such as windows and icons would later become commonplace in GUIs.

The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk-based operating system was CP/M, which was supported on many early microcomputers and was closely imitated by MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). MS-DOS and its successors made Microsoft one of the world's most profitable companies. In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) in the Mac OS.

The introduction of the Intel 80386 CPU chip, with its 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the Unix-like NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.

The GNU project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement for the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU userland and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to simply as "Linux" by the software industry, a naming convention which Stallman and the Free Software Foundation remain opposed to, preferring the name "GNU/Linux" instead. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.