
Monday, July 28, 2008

Multiprogrammed Batched Systems

Spooling provides an important data structure: a job pool. Spooling will generally result in several jobs that have already been read waiting on disk, ready to run. A pool of jobs on disk allows the operating system to select which job to run next, to increase CPU utilization. When jobs come in directly on cards or even on magnetic tape, it is not possible to run jobs in a different order. Jobs must be run sequentially, on a first-come, first-served basis. However, when several jobs are on a direct-access device, such as a disk, job scheduling becomes possible.
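The difference between sequential and direct-access devices can be sketched in a few lines. The job names and run-time estimates below are invented for illustration: with jobs on cards or tape, only first-come, first-served order is possible, while a pool on disk lets the scheduler apply any policy, such as shortest-job-first.

```python
# A minimal sketch of a job pool. Job names and estimated run times are
# invented for the example; they are not from any real system.

from collections import namedtuple

Job = namedtuple("Job", ["name", "estimated_minutes"])

pool = [Job("payroll", 30), Job("report", 5), Job("backup", 60)]

# First-come, first-served: the only order a sequential device permits.
fcfs_order = [job.name for job in pool]

# With a pool on disk, the scheduler may instead pick the shortest job first.
sjf_order = [job.name for job in sorted(pool, key=lambda j: j.estimated_minutes)]

print(fcfs_order)  # → ['payroll', 'report', 'backup']
print(sjf_order)   # → ['report', 'payroll', 'backup']
```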

The most important aspect of job scheduling is the ability to multiprogram. Off-line operation and spooling for overlapped I/O have their limitations. A single user cannot, in general, keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU utilization by organizing jobs such that the CPU always has one to execute.

The idea is as follows. The operating system keeps several jobs in memory at a time (Figure 1.2). This set of jobs is a subset of the jobs kept in the job pool (since the number of jobs that can be kept simultaneously in memory is usually much smaller than the number of jobs that can be in the job pool). The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some task, such as a tape to be mounted or an I/O operation to complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogramming system, the operating system simply switches to and executes another job. When that job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back. As long as there is always some job to execute, the CPU will never be idle.
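The switching just described can be sketched as a toy simulation. The job names and burst lengths are invented for illustration, and I/O completion is modeled with a simple clock rather than real device interrupts.

```python
# A toy event loop: the OS keeps several jobs "in memory" and switches the
# CPU whenever the running job blocks on I/O. All figures are invented.

from collections import deque

# Each job alternates CPU bursts and I/O waits: ("cpu", t) then ("io", t).
jobs = {
    "A": deque([("cpu", 2), ("io", 3), ("cpu", 1)]),
    "B": deque([("cpu", 4), ("io", 1), ("cpu", 2)]),
}

ready = deque(["A", "B"])   # jobs whose next burst needs the CPU
waiting = []                # (finish_time, job) pairs for outstanding I/O
clock = 0
log = []                    # (start_time, job, burst_length) per CPU burst

while ready or waiting:
    # Reclaim any jobs whose I/O has completed by now.
    done = [w for w in waiting if w[0] <= clock]
    waiting = [w for w in waiting if w[0] > clock]
    ready.extend(name for _, name in done)
    if not ready:           # nothing runnable: the CPU really is idle
        clock = min(t for t, _ in waiting)
        continue
    name = ready.popleft()  # switch the CPU to the next ready job
    kind, t = jobs[name].popleft()
    log.append((clock, name, t))
    clock += t
    if jobs[name]:          # the job now blocks on I/O; run someone else
        _, io_t = jobs[name].popleft()
        waiting.append((clock + io_t, name))

print(log)  # → [(0, 'A', 2), (2, 'B', 4), (6, 'A', 1), (7, 'B', 2)]
```

Note that while job A waits for its I/O (times 2 through 5), the CPU runs job B instead of sitting idle, which is exactly the overlap multiprogramming provides.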


Figure 1.2 - Memory Layout For A Multiprogramming System

This idea is common in other life situations. A lawyer does not have only one client at a time. Rather, several clients may be in the process of being served at the same time. While one case is waiting to go to trial or to have papers typed, the lawyer can work on another case. If she has enough clients, a lawyer never needs to be idle. (Idle lawyers tend to become politicians, so there is a certain social value in keeping lawyers busy.)

Multiprogramming is the first instance where the operating system must make decisions for the users. Multiprogrammed operating systems are therefore fairly sophisticated. All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on mass storage awaiting allocation of main memory. If several jobs are ready to be brought into memory and there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling. When the operating system selects a job from the job pool, it loads that job into memory for execution. Having several programs in memory at the same time requires having some form of memory management. In addition, if several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling. Finally, multiple jobs running concurrently require that their ability to affect one another be limited in all phases of the operating system, including process scheduling, disk storage, and memory management. These considerations are discussed throughout the text.
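Job scheduling, choosing which pool jobs to bring into limited memory, can be sketched as a simple first-fit admission pass. The job names and memory sizes below are invented for illustration.

```python
# A toy job-scheduling pass: admit jobs from the pool into a fixed amount of
# main memory until no more fit. All names and sizes are invented.

MEMORY_KB = 256

pool = [("compile", 100), ("payroll", 120), ("simulate", 200), ("print", 30)]

in_memory, free = [], MEMORY_KB
for name, size in pool:          # first-fit admission, in pool order
    if size <= free:
        in_memory.append(name)
        free -= size

print(in_memory, free)  # → ['compile', 'payroll', 'print'] 6
```

Real job schedulers could of course apply any selection policy here; first-fit in arrival order is only the simplest possibility.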

Simple Batch Systems

Early computers were physically enormous machines run from a console. The common input devices were card readers and tape drives. The common output devices were line printers, tape drives, and card punches. The users of such systems did not interact directly with the computer systems. Rather, the user prepared a job, which consisted of the program, the data, and some control information about the nature of the job (control cards), and submitted it to the computer operator. The job would usually be in the form of punch cards. At some later time (perhaps minutes, hours, or days), the output appeared. The output consisted of the result of the program, as well as a dump of memory and registers in case of program error.

The operating system in these early computers was fairly simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory (Figure 1.1).

To speed up processing, jobs with similar needs were batched together and were run through the computer as a group. Thus, the programmers would leave their programs with the operator. The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer.

A batch operating system, thus, normally reads a stream of separate jobs (from a card reader, for example), each with its own control cards that predefine what the job does.


Figure 1.1 - Memory Layout For A Simple Batch System

When the job is complete, its output is usually printed (on a line printer, for example). The definitive feature of a batch system is the lack of interaction between the user and the job while that job is executing. The job is prepared and submitted, and at some later time, the output appears. The delay between job submission and job completion (called turnaround time) may result from the amount of computing needed or from delays before the operating system starts to process the job.

In this execution environment, the CPU is often idle. This idleness occurs because the speeds of the mechanical I/O devices are intrinsically slower than those of electronic devices. Even a slow CPU works in the microsecond range, executing thousands of instructions every millisecond. A fast card reader, on the other hand, might read 1200 cards per minute (20 cards per second). Thus, the difference in speed between the CPU and its I/O devices may be three orders of magnitude or more. Over time, of course, improvements in technology resulted in faster I/O devices. Unfortunately, CPU speeds increased even faster, so that the problem was not only unresolved, but also exacerbated.
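A back-of-the-envelope check of that speed gap, assuming a hypothetical slow early CPU rate of 20,000 instructions per second (an illustrative figure, not a measurement):

```python
# Sanity check of the CPU/I-O speed gap. The instruction rate is an assumed
# figure for a slow early CPU, not a measured value.

cards_per_second = 1200 / 60           # fast card reader: 1200 cards/minute
instructions_per_second = 20_000       # hypothetical slow CPU

gap = instructions_per_second / cards_per_second
print(f"{cards_per_second:.0f} cards/s, gap = {gap:.0f}x")  # → 20 cards/s, gap = 1000x
```

A factor of 1000 is exactly the "three orders of magnitude" cited above; with faster CPUs the gap only widens.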

The introduction of disk technology has helped in this regard. Rather than the cards being read from the card reader directly into memory, and then the job being processed, cards are read directly from the card reader onto the disk. The location of card images is recorded in a table kept by the operating system. When a job is executed, the operating system satisfies its requests for card-reader input by reading from the disk. Similarly, when the job requests the printer to output a line, that line is copied into a system buffer and is written to the disk. When the job is completed, the output is actually printed. This form of processing is called spooling; the name is an acronym for simultaneous peripheral operation on-line. Spooling, in essence, uses the disk as a huge buffer, for reading as far ahead as possible on input devices and for storing output files until the output devices are able to accept them.
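The card-to-disk and line-to-disk paths just described can be sketched as follows. The job names, card text, and function names are invented for illustration; the point is only that the running job's I/O requests touch the disk tables, never the slow physical devices.

```python
# A schematic of spooling. Card images are copied from the reader onto disk,
# the job's reads are satisfied from a disk table, and printed lines are
# buffered on disk until the job completes. All names here are invented.

disk_in = {}    # job -> card images spooled in from the card reader
disk_out = {}   # job -> output lines buffered until the job completes

def spool_input(job, cards):
    disk_in[job] = list(cards)        # card reader -> disk, recorded in a table

def read_card(job):
    return disk_in[job].pop(0)        # request served from disk, not the reader

def print_line(job, line):
    disk_out.setdefault(job, []).append(line)   # buffered, not yet printed

def finish(job):
    return "\n".join(disk_out.pop(job, []))     # actually printed at job end

spool_input("job1", ["card 1", "card 2"])
print_line("job1", read_card("job1"))
print(finish("job1"))   # → card 1
```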

Spooling is also used for processing data at remote sites. The CPU sends the data via communications paths to a remote printer (or accepts an entire input job from a remote card reader). The remote processing is done at its own speed, with no CPU intervention. The CPU just needs to be notified when the processing is completed, so that it can spool the next batch of data.

Spooling overlaps the I/O of one job with the computation of other jobs. Even in a simple system, the spooler may be reading the input of one job while printing the output of a different job. During this time, still another job (or jobs) may be executed, reading their "cards" from disk and "printing" their output lines onto the disk.

Spooling has a direct beneficial effect on the performance of the system. For the cost of some disk space and a few tables, the computation of one job can overlap with the I/O of other jobs. Thus, spooling can keep both the CPU and the I/O devices working at much higher rates.

Operating System

An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs. The primary goal of an operating system is thus to make the computer system convenient to use. A secondary goal is to use the computer hardware in an efficient manner.

What Is an Operating System?

An operating system is an important part of almost every computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the applications programs, and the users.

The hardware — the central processing unit (CPU), the memory, and the input/output (I/O) devices — provides the basic computing resources. The applications programs — such as compilers, database systems, games, and business programs — define the ways in which these resources are used to solve the computing problems of the users. There may be many different users (people, machines, other computers) trying to solve different problems. Accordingly, there may be many different applications programs. The operating system controls and coordinates the use of the hardware among the various applications programs for the various users.

An operating system is similar to a government. The components of a computer system are its hardware, software, and data. The operating system provides the means for the proper use of these resources in the operation of the computer system. Like a government, the operating system performs no useful function by itself. It simply provides an environment within which other programs can do useful work.

We can view an operating system as a resource allocator. A computer system has many resources (hardware and software) that may be required to solve a problem: CPU time, memory space, file storage space, I/O devices, and so on. The operating system acts as the manager of these resources and allocates them to specific programs and users as necessary for their tasks. Since there may be many, possibly conflicting, requests for resources, the operating system must decide which requests to grant so that it can operate the computer system efficiently and fairly.

A slightly different view of an operating system focuses on the need to control the various I/O devices and user programs. An operating system is a control program. A control program controls the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

There is also no universally accepted definition of what is part of the operating system and what is not. A simple viewpoint is that everything a vendor ships when you order "the operating system" should be considered. The memory requirements and features included, however, vary greatly across systems. Some take up less than 1 megabyte of space (1 megabyte is 1 million bytes) and lack even a full-screen editor, whereas others require hundreds of megabytes of space and include spell checkers and entire "window systems." A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being applications programs. This last definition is more common and is the one we generally follow.

It is easier to define operating systems by what they do than by what they are. The primary goal of an operating system is convenience for the user. Operating systems exist because they are supposed to make it easier to compute with them than without them. This view is particularly clear when you look at operating systems for small personal computers.

A secondary goal is efficient operation of the computer system. This goal is particularly important for large, shared multiuser systems. These systems are typically expensive, so it is desirable to make them as efficient as possible. These two goals, convenience and efficiency, are sometimes contradictory. In the past, efficiency considerations were often more important than convenience. Thus, much of operating-system theory concentrates on optimal use of computing resources.

To see what operating systems are and what operating systems do, let us consider how they have developed over the past 35 years. By tracing that evolution, we can identify the common elements of operating systems, and see how and why these systems have developed as they have.

Operating systems and computer architecture have had a great deal of influence on each other. To facilitate the use of the hardware, operating systems were developed. As operating systems were designed and used, it became obvious that changes in the design of the hardware could simplify them. In this short historical review, notice how operating-system problems led to the introduction of new hardware features.