Processor Management Functions Of An Operating System.

Q2. Explain the processor management functions of an operating system.

Ans. As the name suggests, Processor Management means managing the processor, that is, the CPU. This function is therefore also termed CPU Scheduling.

Multiprogramming undoubtedly improves the overall efficiency of the computer system by getting more work done in less time, since the CPU is shared among a number of active programs present in memory at the same time. While the CPU is executing a job, the job may require an Input/Output operation; the CPU then waits for the Input/Output operation to finish, and that wait is the CPU's idle time. Instead of letting the CPU sit idle, another job takes over the CPU, thereby increasing efficiency and reducing CPU idle time.

The benefits of multiprogramming are as follows:

  • Increased CPU utilization
  • Higher total job throughput

Throughput is the amount of work accomplished in a given time interval, for example, 15 jobs per hour.

Throughput is an important measure of system performance. It is calculated as follows:

Throughput = (The number of jobs completed) / (Total time taken to complete the jobs)
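
For instance, if 30 jobs are completed in 2 hours, the throughput is 30 / 2 = 15 jobs per hour.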

Another important factor that influences throughput is the priority assigned to different jobs, that is, job scheduling.

  1. Job Scheduling

Job scheduling not only assigns priority to jobs but also admits new jobs for processing at appropriate times. Before we look at job scheduling techniques, let us first understand the basic terminology.

A program is a set of instructions submitted to the computer. A process is a program in execution. The terms job and process are used almost interchangeably.

Process State

A process is a program in execution. During execution, a process changes its state. The state of a process is defined by its current activity. A process can be in one of the following states:

new, active, waiting or halted.

 

Figure: Process States

Process Control Block

Each process is represented in the Operating System by a data block called the Process Control Block (PCB). A PCB contains the following information:

  • Process State
  • Program counter: It indicates the address of next instruction to be executed.
  • CPU registers: Store the contents of the processor's registers so that the process can later resume from where it left off.
  • Memory limits
  • List of open files
Figure: Process Control Block (pointer, process state, process number, program counter, CPU registers, memory limits, list of open files, ...)
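
Below is a minimal sketch, in Python, of how the PCB fields listed above might be grouped into a single record. The dataclass, the field names and the ProcessState values are illustrative assumptions made for this example, not an actual operating-system structure.

```python
# Illustrative sketch of a Process Control Block (PCB) as a Python record.
# All names and defaults here are assumptions made for this example.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ProcessState(Enum):          # the states named in the text
    NEW = "new"
    ACTIVE = "active"
    WAITING = "waiting"
    HALTED = "halted"


@dataclass
class PCB:
    process_number: int                                   # unique process identifier
    state: ProcessState = ProcessState.NEW                # current process state
    program_counter: int = 0                              # address of the next instruction
    registers: dict = field(default_factory=dict)         # saved CPU register contents
    memory_limits: tuple = (0, 0)                         # (base, limit) of allocated memory
    open_files: List[str] = field(default_factory=list)   # list of open files
    next: Optional["PCB"] = None                          # pointer to the next PCB in a queue


# Example: a PCB for a newly admitted process that then becomes active.
pcb = PCB(process_number=1)
pcb.state = ProcessState.ACTIVE
```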

Many factors determine which scheduling technique should be used in order to get the best possible results. The criteria are as follows:

  • CPU Utilization. There should be maximum possible CPU utilization.
  • Turnaround time. There should be minimum possible turnaround time.
  • Waiting time. Waiting time should be minimum.
  • Response time. The system should give the fastest possible response time.
  • Throughput. Throughput should be maximum possible.

 

Considering all these factors, the scheduling technique is chosen. There are two types of scheduling:

  • Non-Preemptive scheduling
  • Preemptive scheduling

 

  • Non-Preemptive Scheduling

In this type of scheduling, a scheduled job always runs to completion before another scheduling decision is made. Therefore, the finishing order of the jobs is the same as their scheduling order. The scheduling techniques which use non-preemptive scheduling are:

  • First Come First Served (FCFS) Scheduling
  • Shortest Job Next (SJN) Scheduling
  • Deadline Scheduling

First Come First Served (FCFS) Scheduling

This is the simplest scheduling technique and is managed as FIFO (First In, First Out). That is, the process which requests the CPU first is allocated the CPU first. A queue called the ready queue is maintained, in which all processes that want CPU time are entered. The CPU executes the jobs in the ready queue one by one. Batch processing is one obvious example of FCFS scheduling, in which all jobs in the batch are executed one by one. Turnaround time is best for the very first job in the batch and worst for the very last job. (Turnaround time is the delay between job submission and job completion.)
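
A minimal sketch of FCFS scheduling in Python is given below. The job names and burst times are made-up values, and all jobs are assumed to arrive at time 0, so each job's turnaround time equals its completion time.

```python
# FCFS sketch: jobs are served strictly in the order they joined the ready queue.
# Job names and burst times are illustrative; all jobs are assumed to arrive at time 0.
jobs = [("J1", 10), ("J2", 4), ("J3", 6)]        # (job name, CPU burst time)

clock = 0
for name, burst in jobs:                         # ready queue processed in FIFO order
    clock += burst                               # the job runs to completion
    print(f"{name}: turnaround time = {clock}")  # arrival at 0, so turnaround = finish time
```

Note how the first job in the queue gets the best turnaround time (10) and the last one the worst (20), as described above.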

Shortest Job Next (SJN) Scheduling

In SJN Scheduling, whenever a new job is to be admitted, the shortest of the arrived jobs is selected and given the CPU time. Throughput remains the same as in FCFS scheduling but waiting time improves. SJN associates with each job the length of its next CPU burst. (CPU burst is the CPU time required by a job to execute its continuous executable part.)
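
A minimal sketch of SJN with the same illustrative jobs follows; it assumes all jobs have already arrived and simply picks the shortest burst next, which lowers the average waiting time compared with the FCFS order.

```python
# SJN sketch: among the arrived jobs, the one with the shortest next CPU burst
# is selected each time. Job names and burst times are illustrative values.
jobs = [("J1", 10), ("J2", 4), ("J3", 6)]

clock = 0
total_wait = 0
for name, burst in sorted(jobs, key=lambda j: j[1]):   # shortest burst first
    total_wait += clock                                 # time this job spent waiting
    clock += burst
print(f"average waiting time = {total_wait / len(jobs):.1f}")   # about 4.7 vs 8.0 in FCFS order
```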

Deadline Scheduling

In deadline scheduling, the job with the earliest deadline is selected for scheduling. The deadline of a job is the time limit within which the job must be completed. If a job overshoots its deadline, the excess is called deadline overrun. Deadline overrun is calculated as

K = C - D

where K is the deadline overrun, C is the job completion time and D is the deadline of the job.

Erratic behavior might result if reliable deadline data is not available.
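
A minimal sketch of deadline scheduling and of the overrun formula K = C - D is given below; the job names, burst times and deadlines are illustrative assumptions.

```python
# Deadline scheduling sketch: jobs are served in order of earliest deadline, and
# the deadline overrun K = C - D is reported for each. Values are illustrative.
jobs = [("J1", 5, 12), ("J2", 3, 6), ("J3", 4, 20)]     # (name, burst time, deadline)

clock = 0
for name, burst, deadline in sorted(jobs, key=lambda j: j[2]):  # earliest deadline first
    clock += burst                       # C: completion time of this job
    overrun = clock - deadline           # K = C - D (positive means the deadline was missed)
    print(f"{name}: completed at {clock}, overrun K = {overrun}")
```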

 

  • Preemptive Scheduling

In contrast to non-preemptive scheduling, where a decision is made only after a job completes its execution, in preemptive scheduling a scheduling decision can be made even while a job is executing. Therefore, preemptive scheduling may force a job in execution to release the processor so that some other job can be executed, which can improve throughput considerably. The techniques which use preemptive scheduling are:

  • Round Robin Scheduling
  • Response Ratio Scheduling

Round Robin Scheduling

Round Robin (RR) Scheduling is aimed at giving all programs an equal opportunity to make progress. This is implemented by ensuring that no program gets a second opportunity to execute until all other programs have had at least one opportunity. A small unit of time, called a time quantum or time slice, is defined. The ready queue (the queue of programs waiting for CPU time) is treated as a circular queue, and the programs in it are processed one by one, each for at most the defined time slice.
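
A minimal sketch of Round Robin follows; the time quantum, job names and remaining burst times are illustrative assumptions.

```python
from collections import deque

# Round Robin sketch: the ready queue is treated as a circular queue and each job
# receives at most one time quantum per turn. All values are illustrative.
quantum = 3
ready = deque([("J1", 7), ("J2", 4), ("J3", 2)])   # (name, remaining burst time)

clock = 0
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)          # run for one quantum or until the job finishes
    clock += run
    remaining -= run
    if remaining > 0:
        ready.append((name, remaining))    # not finished: back to the end of the queue
    else:
        print(f"{name}: finished at time {clock}")
```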

Response Ratio Scheduling

Response Ratio is calculated as follows:

Response Ratio = (Elapsed time) / (Execution time received)

The job with the highest response ratio is preferred over others. When a short job arrives, its response ratio is high, so it is scheduled for execution immediately. A longer job achieves a high enough ratio only after a sufficient wait.
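
A minimal sketch of the response-ratio calculation defined above is given below. The job figures are illustrative assumptions, and a job that has not yet received any CPU time is treated here as having an unbounded ratio so that it is scheduled immediately, matching the behaviour described for a newly arrived short job.

```python
# Response ratio sketch: at each scheduling point, the job with the highest ratio of
# elapsed time to CPU time received is chosen. All figures are illustrative values.
jobs = [
    {"name": "J1", "elapsed": 40, "executed": 25},   # long job that has already run a while
    {"name": "J2", "elapsed": 30, "executed": 10},
    {"name": "J3", "elapsed": 5,  "executed": 0},    # newly arrived short job
]

def response_ratio(job):
    if job["executed"] == 0:
        return float("inf")                # no CPU time received yet: schedule it at once
    return job["elapsed"] / job["executed"]

chosen = max(jobs, key=response_ratio)
print(f"next job to run: {chosen['name']}")   # J3, since its response ratio is highest
```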
