1 Layered Approach
A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. This layering structure is depicted in Figure 1.2.1.
Fig. 1.2.1 Layered Operating System
An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.
The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.
The major difficulty with the layered approach involves appropriately defining the various layers. Because a layer can use only lower-level layers, careful planning is necessary. For example, the device driver for the backing store (disk space used by virtual-memory algorithms) must be at a lower level than the memory-management routines, because memory management requires the ability to use the backing store.
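As a rough illustration of the layering rule, the sketch below (in Python, with invented layer names and operations) builds a three-layer stack in which each layer holds a reference only to the layer directly below it and invokes only that layer's operations:

```python
# A minimal sketch of the layered idea: each layer exposes operations and
# may call only the layer directly below it. Layer names are illustrative.

class Hardware:                      # layer 0: the bottom layer
    def read_block(self, n):
        return f"raw-block-{n}"

class MemoryManager:                 # layer 1: uses only Hardware
    def __init__(self, hw):
        self._hw = hw
    def page_in(self, page):
        return self._hw.read_block(page)

class FileSystem:                    # layer 2: uses only MemoryManager
    def __init__(self, mm):
        self._mm = mm
    def read_file(self, name, page):
        return (name, self._mm.page_in(page))

# Build the stack bottom-up, mirroring the order in which the
# layers would be debugged.
fs = FileSystem(MemoryManager(Hardware()))
print(fs.read_file("notes.txt", 7))
```

Because `FileSystem` never touches `Hardware` directly, a bug observed while testing `FileSystem` must lie in that layer once the two layers beneath it have been verified.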
2 Virtual Machine (VM)
IBM VM370 divided a mainframe into multiple virtual machines, each running its own operating system. A major difficulty with the VM virtual-machine approach involved disk systems. Suppose that the physical machine had three disk drives but wanted to support seven virtual machines. Clearly, it could not allocate a disk drive to each virtual machine, because the virtual-machine software itself needed substantial disk space to provide virtual memory and spooling. The solution was to provide virtual disks, termed minidisks in IBM's VM operating system, that are identical to physical disks in all respects except size. The system implemented each minidisk by allocating as many tracks on the physical disks as the minidisk needed. Once these virtual machines were created, users could run any of the operating systems or software packages that were available on the underlying machine. For the IBM VM system, a user normally ran CMS, a single-user interactive operating system.
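The minidisk idea can be viewed as simple bookkeeping over a shared pool of physical tracks. The sketch below is an invented illustration of that bookkeeping, not IBM's actual implementation; the class name, drive counts, and track sizes are all assumptions:

```python
# Hypothetical sketch of minidisk allocation: a fixed pool of physical
# tracks is carved up among virtual disks, each getting only what it needs.

class PhysicalDisks:
    def __init__(self, drives, tracks_per_drive):
        self.free_tracks = drives * tracks_per_drive

    def allocate_minidisk(self, tracks_needed):
        if tracks_needed > self.free_tracks:
            raise RuntimeError("not enough physical tracks")
        self.free_tracks -= tracks_needed
        return {"size_tracks": tracks_needed}   # the new minidisk

# Three physical drives can back seven smaller minidisks, which a
# one-drive-per-VM scheme could not do.
pool = PhysicalDisks(drives=3, tracks_per_drive=1000)
minidisks = [pool.allocate_minidisk(400) for _ in range(7)]
print(len(minidisks), pool.free_tracks)
```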
The main advantage of VM is being able to share the same hardware yet run several different execution environments (that is, different operating systems) concurrently. Another important advantage is that the host system is protected from the virtual machines, just as the virtual machines are protected from each other. A virus inside a guest operating system might damage that operating system but is unlikely to affect the host or the other guests. Because each virtual machine is completely isolated from all other virtual machines, there are no protection problems. A virtual-machine system is a perfect vehicle for operating-systems research and development.
Fig. 1.2.2 (a) Non-virtual Machine, (b) Virtual Machine
- A major advantage of virtual machines in production data-center use is system consolidation, which involves taking two or more separate systems and running them in virtual machines on one system. Such physical-to-virtual conversions result in resource optimization, as many lightly used systems can be combined to create one more heavily used system.
3 Kernel Based
A kernel is the central component of an operating system. It acts as an interface between the user applications and the hardware. The sole aim of the kernel is to manage the communication between the software (user-level applications) and the hardware (CPU, disk, memory, etc.). The main tasks of the kernel are: process management, device management, memory management, interrupt handling, I/O communication, file-system management, etc.
There is a difference between a kernel and an operating system.
Linux, for example, is a kernel: it does not include applications like file-system utilities, windowing systems and graphical desktops, system-administrator commands, text editors, compilers, etc. Various companies add these kinds of applications on top of the Linux kernel and provide their own operating systems, such as Ubuntu, SUSE, CentOS, and Red Hat.
Kernels may be classified mainly into two categories:
- Monolithic kernel
- Microkernel
1 Monolithic Kernels
In the earlier form of this kernel architecture, all the basic system services like process and memory management, interrupt handling, etc. were packaged into a single module in kernel space.
This type of architecture led to some serious drawbacks:
1) The size of the kernel, which was huge.
2) Poor maintainability: bug fixing or the addition of new features required recompilation of the whole kernel, which could consume hours.
In the modern approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded.
This modular approach allows easy extension of the OS's capabilities. It also makes the kernel much easier to maintain, as only the concerned module needs to be unloaded and reloaded whenever there is a change or bug fix in it; there is no need to bring down and recompile the whole kernel for the smallest change. Stripping the kernel down for various platforms (say, for embedded devices) also becomes easy, as the modules that are not wanted can simply be left unloaded.
Linux follows the monolithic modular approach.
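As a user-space analogy of loadable modules (in Linux itself this is done with tools such as `insmod`, `rmmod`, and `modprobe` on kernel objects), the Python sketch below registers and removes a stand-in "driver" module at runtime without restarting the running process; the module name and its `probe` function are invented for illustration:

```python
# User-space analogy of loadable kernel modules: code is brought in and
# dropped at runtime without restarting the "kernel" (this process).

import sys
import types

# Create a throwaway module object standing in for a driver module.
driver = types.ModuleType("toy_driver")
exec("def probe():\n    return 'driver loaded'", driver.__dict__)
sys.modules["toy_driver"] = driver          # analogue of "insmod"

import toy_driver                           # now importable by name
print(toy_driver.probe())

del sys.modules["toy_driver"]               # analogue of "rmmod"
```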
2 Microkernels
This architecture mainly addresses the problem of the ever-growing size of kernel code, which could not be controlled in the monolithic approach. It allows some basic services like device-driver management, protocol stacks, file systems, etc. to run in user space. This reduces the kernel code size and also increases the security and stability of the OS, as only the bare minimum of code runs in the kernel. So if, for example, a basic service like the network service crashes due to a buffer overflow, only the networking service's memory is corrupted, leaving the rest of the system still functional.
In this architecture, the basic OS services that are made part of user space run as servers, which other programs in the system use through inter-process communication (IPC). For example, there are servers for device drivers, network protocol stacks, file systems, graphics, etc. Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs. This allows some servers, particularly device drivers, to interact directly with hardware. These servers are started at system start-up.
So, what is the bare minimum that the microkernel architecture recommends in kernel space?
• Managing memory protection
• Process scheduling
• Inter Process communication (IPC)
Apart from the above, all other basic services can be made part of user space and can be run in the form of servers.
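The server-plus-IPC pattern described above can be mimicked in ordinary user space. The sketch below (with made-up message formats and service names) runs a toy file-system server in a thread and communicates with it only through message queues, the way a microkernel client would reach a service through IPC:

```python
# Toy simulation of the microkernel pattern: the "kernel" only passes
# messages (IPC); the file-system service runs as an ordinary user-space
# server. Names and message formats are invented for illustration.

import queue
import threading

requests = queue.Queue()    # IPC channel into the server
replies = queue.Queue()     # IPC channel back to the client

def fs_server():
    """User-space file-system server: loops handling IPC requests."""
    while True:
        op, arg = requests.get()
        if op == "stop":
            break
        if op == "read":
            replies.put(f"contents-of-{arg}")

t = threading.Thread(target=fs_server)
t.start()

requests.put(("read", "passwd"))      # client sends an IPC request
reply = replies.get()                 # ...and blocks for the reply
print(reply)

requests.put(("stop", None))          # shut the server down
t.join()
```

If `fs_server` died on a bad request, only its queue would go silent; the client process and the rest of the system would keep running, which is the isolation benefit the text describes.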