SCHEDULING
5.1 SCHEDULING
Definition:
Scheduling is a central concept in the design of multitasking, multiprocessing, and real-time operating systems. In a modern operating system, more processes are typically runnable than there are CPUs available to run them. Scheduling refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as a scheduler, sometimes referred to as a dispatcher.
Objectives of scheduling are:
Maximize
CPU utilization: fraction of time the CPU is doing useful work.
Throughput: jobs completed per unit time.
Minimize
Turnaround time: total time from submission of a task to its completion.
Waiting time: total time a job waits in the ready queue for the CPU.
Response time: time from when a request is submitted until the first response is produced.
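The metrics above can be illustrated with a minimal sketch. The process data here is hypothetical; jobs are assumed to run to completion in the listed order, so a job's first CPU access is also its first response:

```python
# Hypothetical processes, listed in the order they run: (arrival, burst) times.
processes = [(0, 5), (1, 3), (2, 8)]

time = 0
metrics = []
for arrival, burst in processes:
    start = max(time, arrival)            # CPU may sit idle until the job arrives
    completion = start + burst
    turnaround = completion - arrival     # submission to completion
    waiting = start - arrival             # time spent in the ready queue
    response = start - arrival            # first response = first CPU access here
    metrics.append((turnaround, waiting, response))
    time = completion

print(metrics)  # one (turnaround, waiting, response) tuple per process
```

Note that under run-to-completion scheduling, waiting time and response time coincide; they differ once preemption lets a job start, pause, and resume.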
5.2 Some Fundamental Scheduling Algorithms:
First Come First Served: First Come, First Served (FCFS) is the simplest scheduling algorithm: it simply queues processes in the order that they arrive in the ready queue (FIFO).
Since context switches occur only upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
Throughput can be low, since long processes can hog the CPU.
Turnaround time, waiting time, and response time can be high for the same reason.
No prioritization occurs, so this scheme has trouble meeting process deadlines.
The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
It is based on a FIFO queue.
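A minimal FCFS sketch, with a hypothetical workload chosen to show how one long process at the head of the queue inflates everyone's waiting time (all jobs are assumed to arrive at time 0):

```python
from collections import deque

# Hypothetical workload: (name, burst). A long first job delays all the others.
ready = deque([("long", 100), ("short1", 1), ("short2", 1)])

time = 0
waiting_times = []
while ready:
    name, burst = ready.popleft()   # FCFS: always take the head of the queue
    waiting_times.append(time)      # all arrived at t=0, so waiting = start time
    time += burst                   # run to completion; no preemption

avg_wait = sum(waiting_times) / len(waiting_times)
print(f"average waiting time = {avg_wait:.1f}")
```

Here the average waiting time is (0 + 100 + 101) / 3 ≈ 67; had the two short jobs run first, it would be (0 + 1 + 2) / 3 = 1. This is the effect the section describes as long processes hogging the CPU.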
Shortest Job First (SJF): With this strategy the scheduler arranges processes so that the one with the least estimated processing time remaining runs next. This requires advance knowledge of, or an estimate of, the time required for a process to complete.
If a shorter process arrives during another process' execution, the currently running process may be interrupted; this preemptive variant is known as Shortest Remaining Time First (SRTF). Without preemption, the running process always finishes before the next shortest job is selected.
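A sketch of the non-preemptive variant, using hypothetical processes whose burst values stand in for the required time estimates; whenever the CPU frees, the shortest ready job runs to completion:

```python
# Non-preemptive SJF sketch. Hypothetical processes: (name, arrival, burst),
# where burst is the (assumed-known) estimate of processing time.
procs = [("A", 0, 7), ("B", 1, 4), ("C", 2, 1), ("D", 3, 4)]

time, order = 0, []
while len(order) < len(procs):
    ready = [p for p in procs if p[1] <= time and p[0] not in order]
    if not ready:
        time += 1                   # CPU idles until the next arrival
        continue
    name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job first
    time += burst                   # run to completion (no preemption)
    order.append(name)

print(order)
```

A runs first (it is alone at t=0); when it finishes at t=7, the scheduler picks C (burst 1) ahead of the earlier-arrived B, which is exactly the reordering that distinguishes SJF from FCFS.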