Operating System

An operating system is software that acts as an interface between the end user and the computer hardware. Every computer must have at least one OS to run other programs. Applications like Chrome, MS Word, and games need an environment in which to run and perform their tasks.

Types of Operating Systems :

Batch Operating System
Multitasking/Time Sharing OS
Multiprocessing OS
Real Time OS
Distributed OS
Network OS
Mobile OS

1. Batch Operating System :

A batch operating system is an operating system in which jobs of the same type are batched together for execution. It is relatively faster than a traditional system.

The user of a batch operating system never directly interacts with the computer. In this type of OS, every user prepares his or her job on an offline device like a punch card and submits it to the computer operator.

2. Multitasking/Time Sharing OS :

A time-sharing system is one where each process is allotted a particular time span and has to finish its execution within that time span. If it fails to complete its execution within that span, CPU control passes to the immediately next process.

A time-sharing operating system enables people located at different terminals (shells) to use a single computer system at the same time. Sharing the processor (CPU) time among multiple users is what is termed time sharing.

3. Real Time OS :

A real-time system is defined as a data processing system in which each task has a deadline to complete. A Real Time Operating System (RTOS) adheres to these deadlines, as missing a deadline can cause effects ranging from undesired to catastrophic. A real-time system has well-defined, fixed time constraints.

In a real-time operating system, the time interval required to process and respond to inputs is very small. Examples: military software systems, space software systems.

4. Distributed Operating System :

A distributed operating system is an extension of the network operating system that supports higher levels of communication and integration of the machines on the network.

A distributed operating system is a system where distributed applications run on multiple computers linked by communication lines, such as high-speed buses or telephone lines. Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.

In simple words, distributed systems use many processors located in different machines to provide very fast computation to their users.

5. Network Operating System :

A network operating system, or NOS, is system software designed primarily to control devices such as printers and disk drives on a computer network and to manage how they communicate with each other.

A network operating system runs on a server. It provides the capability to manage data, users, groups, security, applications, and other networking functions.

Some Important Points :

An operating system is software that acts as an interface between the end user and the computer hardware

Operating systems were first developed in the late 1950s to manage tape storage

The kernel is the central component of a computer operating system. The only job performed by the kernel is to manage the communication between the software and the hardware

The two most popular kernel types are monolithic kernels and microkernels

Process, device, file, I/O, secondary-storage, and memory management are various functions of an Operating System

Batch, Multitasking/Time Sharing, Multiprocessing, Real Time, Distributed, Network, and Mobile are various types of Operating Systems

Inter-Process Communication :

Inter-process communication (IPC) is a mechanism that allows different processes to communicate with each other and synchronize shared actions, using message passing and shared memory.

This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user’s behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a single program to use all of the IPC methods.

Types of Processes :

Independent process.
Co-operating process.

An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Although one might think that processes running independently will execute very efficiently, in practice there are many situations where the co-operative nature can be utilised to increase computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in the following two ways:

1. Shared Memory

2. Message passing

An operating system can implement both methods of communication.

Communication between processes using shared memory requires the processes to share a common region of memory, and how it is used depends entirely on how the programmer implements it. One way of communication using shared memory can be imagined like this:

Suppose process1 and process2 are executing simultaneously and share some resources or use information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly.

Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to another process.
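As a minimal sketch of this idea (assuming a POSIX system; the record layout and values are purely illustrative), the following C program has a parent process play the role of process1, which records a value in a shared region created with mmap(), while a child process plays the role of process2 and reads it back:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    struct record {
        volatile int ready;   /* set to 1 once the data is valid          */
        int value;            /* the "information" that process1 produces */
    };

    int main(void) {
        /* Create a region of memory shared between parent and child. */
        struct record *shared = mmap(NULL, sizeof(*shared),
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); exit(1); }
        shared->ready = 0;

        if (fork() == 0) {              /* child: plays the role of process2 */
            while (!shared->ready)      /* illustrative busy-wait; real code
                                           would use a semaphore instead     */
                usleep(1000);
            printf("process2 read value %d from shared memory\n", shared->value);
            exit(0);
        }

        /* parent: plays the role of process1 */
        shared->value = 42;             /* record the computed information */
        shared->ready = 1;
        wait(NULL);                     /* wait for process2 to finish */
        munmap(shared, sizeof(*shared));
        return 0;
    }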

For communication between processes via message passing, the processes communicate with each other without using any kind of shared memory. If two processes P1 and P2 want to communicate with each other, they proceed as follows:

Establish a communication link (if a link already exists, there is no need to establish it again).

Start exchanging messages using basic primitives.

An IPC facility provides at least two operations:

1. send(message, destination) or send(message)

2. receive(message, host) or receive(message)

• Messages sent by a process can be of either fixed or variable size.

• If only fixed-sized messages can be sent, the system-level implementation is straightforward. However, it makes the task of programming more difficult.

• If variable-sized messages are used, a more complex system-level implementation is required, but the programming task becomes simpler.

• If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link (such as shared memory, a hardware bus, or a network) must exist between them.
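As a sketch of the message-passing style (again assuming a POSIX system), a pipe can serve as the communication link, with write() playing the role of send(message) and read() the role of receive(message). Fixed-size messages keep the example simple:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int link[2];                    /* link[0] = read end, link[1] = write end */
        char message[64];

        if (pipe(link) == -1) { perror("pipe"); exit(1); }   /* establish the link */

        if (fork() == 0) {              /* child: process P2, the receiver */
            close(link[1]);             /* P2 only receives */
            read(link[0], message, sizeof(message));         /* receive(message) */
            printf("P2 received: %s\n", message);
            close(link[0]);
            exit(0);
        }

        /* parent: process P1, the sender */
        close(link[0]);                 /* P1 only sends */
        strcpy(message, "hello from P1");
        write(link[1], message, sizeof(message));            /* send(message) */
        close(link[1]);
        wait(NULL);
        return 0;
    }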

CPU scheduling :

CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler). The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

Another component involved in the CPU scheduling function is the Dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler.

Different times in CPU scheduling :

1. Arrival Time :
The time at which a process enters the ready queue or state.
Real life example —
Let’s say you go to the bank to deposit money.
You enter the bank at 10 o’clock.
10 o’clock is your arrival time.

2. Burst Time (Duration time) :
The time required by a process to get executed on the CPU.
Real life example —
The actual time it takes to deposit the money with the cashier.
If the cashier takes 15 minutes to update your bank balance.
15 minutes is your Burst time.

3. Completion time :
The time at which a process completes its execution.
Real life example —
Let’s assume you come out of the bank at 11 o’clock.
11 o’clock is your completion time.

4. TurnAround Time :
The amount of time taken to execute a particular process.
Real life example —
Let’s say you enter the bank at 10 o’clock and exit at 11 o’clock.
You spend 1 hour inside the bank.
1 hour is your turnaround time.
Turnaround time = (completion time – arrival time)

5. Waiting Time :
The amount of time a process has been waiting in the ready queue.
Real life example —
Suppose you spend 1 hour inside the bank but your work is done in 15 minutes. 45 minutes is your waiting time.
Waiting Time = (turnaround time – burst time)

6. Response Time :
The amount of time from when a request was submitted until the first response is produced.
Real life example —
The moment the cashier first attends to you marks your response; the time from entering the bank until then is your response time.
Response Time = (time at which a process gets the CPU for the first time – arrival time)
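The small C sketch below applies the formulas above to the bank example, with all times expressed in minutes past 10 o’clock; the moment at which the cashier first attends to you is an assumed value chosen to be consistent with the 45-minute wait:

    #include <stdio.h>

    int main(void) {
        int arrival    = 0;    /* you enter the bank at 10:00                  */
        int burst      = 15;   /* the cashier takes 15 minutes                 */
        int completion = 60;   /* you leave the bank at 11:00                  */
        int first_cpu  = 45;   /* minute at which the cashier first attends you
                                  (assumed; equals the waiting time here
                                  because there is no preemption)              */

        int turnaround = completion - arrival;   /* 60 minutes */
        int waiting    = turnaround - burst;     /* 45 minutes */
        int response   = first_cpu - arrival;    /* 45 minutes */

        printf("turnaround = %d, waiting = %d, response = %d\n",
               turnaround, waiting, response);
        return 0;
    }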

Non-Preemptive Scheduling :

Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.

Non-preemptive scheduling is used when a process terminates or switches from the running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits until the process completes its CPU burst and then allocates the CPU to another process.

Preemptive Scheduling :

In this type of scheduling, the tasks are usually assigned priorities. At times it is necessary to run a task with a higher priority before another task even though that other task is currently running. Therefore, the running task is interrupted for some time and resumed later when the higher-priority task has finished its execution.

Difference between Preemptive Scheduling and Non Preemptive Scheduling :

1. In preemptive scheduling, the CPU is allocated to a process for a limited time, whereas in non-preemptive scheduling, the CPU is allocated to the process until it terminates or switches to the waiting state.

2. The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not interrupted in the middle of execution and runs until it finishes.

3. In preemptive scheduling, there is the overhead of switching processes between the ready and running states (and vice versa) and of maintaining the ready queue, whereas non-preemptive scheduling has no such overhead of switching a process from the running state to the ready state.

4. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, then a low-priority process has to wait for a long time and may starve. On the other hand, in non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, then processes with small burst times may have to starve.

Preemptive scheduling is flexible, as it allows critical processes to access the CPU as soon as they arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling is called rigid because even if a critical process enters the ready queue, the process running on the CPU is not disturbed.

Preemptive scheduling has to maintain the integrity of shared data, which adds cost; this is not the case with non-preemptive scheduling.

Scheduling Criteria :

There are many different criteria to consider when looking for the “best” scheduling algorithm; they are:

1. CPU Utilization :
To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).

2. Throughput :
It is the total number of processes completed per unit time, or in other words the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.

3. Turnaround Time :
It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion (wall-clock time).

4. Waiting Time :
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

5. Load Average :
It is the average number of processes residing in the ready queue waiting for their turn to get into the CPU.

6. Response Time :
The amount of time it takes from when a request was submitted until the first response is produced. Remember, it is the time until the first response, not the completion of process execution (the final response).

In general, CPU utilization and throughput are maximized while the other factors are minimized for proper optimization.

Scheduling Algorithms :

To decide which process to execute first and which to execute last so as to achieve maximum CPU utilisation, computer scientists have defined some algorithms; they are:

1. FCFS :
First Come, First Served (FCFS) Scheduling Algorithm: This is the simplest CPU scheduling algorithm. In this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS algorithm is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the rear of the queue. The average waiting time under the FCFS policy is often quite long. Consider the following example:

        Process          CPU time
        P1                  5
        P2                  3
        P3                  2
        P4                  4

1. First Come First Serve is just like a FIFO (First In, First Out) queue data structure: the data element added to the queue first is the one that leaves the queue first.

2. A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

3. This is used in Batch Systems.

4. It’s easy to understand and implement programmatically, using a Queue data structure, where a new process enters through the tail of the queue and the scheduler selects a process from the head of the queue.


Disadvantages :
1. It is a non-preemptive algorithm, which means process priority doesn’t matter.

2. The average waiting time is not optimal.

3. Resources cannot be utilized in parallel, which leads to the Convoy Effect and hence poor resource (CPU, I/O, etc.) utilization.

The Convoy Effect is a situation where many processes that need to use a resource for a short time are blocked by one process holding that resource for a long time.
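The C sketch below computes per-process and average waiting and turnaround times under FCFS for the process table above, assuming all four processes arrive at time 0 and are served in the order P1 to P4:

    #include <stdio.h>

    int main(void) {
        int burst[] = {5, 3, 2, 4};                 /* CPU times of P1..P4 */
        int n = 4;
        int waiting = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int turnaround = waiting + burst[i];    /* completion - arrival (arrival = 0) */
            printf("P%d: waiting = %2d, turnaround = %2d\n", i + 1, waiting, turnaround);
            total_wait += waiting;
            total_tat  += turnaround;
            waiting    += burst[i];                 /* the next process also waits for this one */
        }
        printf("average waiting = %.2f, average turnaround = %.2f\n",
               (double)total_wait / n, (double)total_tat / n);
        return 0;
    }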

2. Shortest Job First(SJF) Scheduling :
Shortest Job First scheduling runs the process with the shortest burst time (duration) first.

This is the best approach to minimize waiting time.
This is used in Batch Systems.
It is of two types:
Non Pre-emptive
Pre-emptive

To implement it successfully, the burst time/duration of the processes should be known to the processor in advance, which is not always practically feasible.

This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).
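A minimal non-preemptive SJF sketch in C, reusing the burst times from the FCFS example: when every job is available at time 0, sorting by burst time and then serving the jobs in that order yields a lower average waiting time than FCFS does:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;   /* ascending burst time */
    }

    int main(void) {
        int burst[] = {5, 3, 2, 4};                 /* same jobs as the FCFS example */
        int n = 4;
        qsort(burst, n, sizeof(int), cmp);          /* shortest job first */

        int waiting = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            printf("job with burst %d waits %d\n", burst[i], waiting);
            total_wait += waiting;
            waiting    += burst[i];
        }
        printf("average waiting = %.2f\n", (double)total_wait / n);
        return 0;
    }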

3. Priority Scheduling :
A priority is assigned to each process.
The process with the highest priority is executed first, and so on.
Processes with the same priority are executed in FCFS order.
Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
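The sketch below illustrates non-preemptive priority scheduling in C with assumed process data: a lower number means a higher priority, and ties are broken in FCFS order:

    #include <stdio.h>
    #include <stdlib.h>

    struct proc { int id; int priority; int burst; };

    static int cmp(const void *a, const void *b) {
        const struct proc *x = a, *y = b;
        if (x->priority != y->priority)
            return x->priority - y->priority;   /* smaller number = higher priority */
        return x->id - y->id;                   /* same priority: FCFS order */
    }

    int main(void) {
        struct proc p[] = { {1, 2, 5}, {2, 1, 3}, {3, 2, 2}, {4, 3, 4} };
        int n = 4, clock = 0;

        qsort(p, n, sizeof(p[0]), cmp);         /* order the ready queue by priority */
        for (int i = 0; i < n; i++) {
            printf("t=%2d: run P%d (priority %d) for %d units\n",
                   clock, p[i].id, p[i].priority, p[i].burst);
            clock += p[i].burst;
        }
        return 0;
    }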

4. Round Robin Scheduling :
A fixed time, called a quantum, is allotted to each process for execution.
Once a process has executed for the given time period, it is preempted and another process executes for its time period.
Context switching is used to save the states of preempted processes.
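Finally, a minimal round-robin sketch in C with an assumed time quantum of 2 units and the same burst times as before: each process runs for at most one quantum per pass around the queue and is then preempted:

    #include <stdio.h>

    int main(void) {
        int burst[] = {5, 3, 2, 4};          /* remaining CPU time of P1..P4 */
        int n = 4, quantum = 2, clock = 0, remaining = n;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;                /* already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                printf("t=%2d: P%d runs for %d unit(s)\n", clock, i + 1, slice);
                clock    += slice;
                burst[i] -= slice;                          /* preempted after the quantum */
                if (burst[i] == 0) remaining--;
            }
        }
        printf("all processes finished at t=%d\n", clock);
        return 0;
    }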