All distributed systems consist of multiple CPUs, and the hardware can be organized in several different ways. What matters most is how the machines are interconnected and how they communicate, so it is worth taking a closer look at distributed system hardware from that point of view.
Many classification schemes for multiple-CPU computer systems have been proposed over the years, but none of them has really caught on. The most commonly used taxonomy is still Flynn's (1972), even though it is fairly rudimentary. Flynn considered only two characteristics: the number of instruction streams and the number of data streams.
Single Instruction, Single Data Stream (SISD)
A computer with a single instruction stream and a single data stream is called SISD. All traditional uniprocessor computers (i.e., those having only one CPU) fall into this category, from personal computers to large mainframes. The SISD flow concept is shown in the figure below.
Single Instruction, Multiple Data Stream (SIMD)
The next category is SIMD, i.e. single instruction stream, multiple data stream. This type uses an array of processors with a single instruction unit that fetches an instruction and multiple data units that carry it out in parallel. Such machines are useful when the same instruction must be applied to many pieces of data, for example adding up the elements of 64 independent vectors. Some supercomputers are SIMD. The figure below shows the SIMD flow structure.
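As a minimal sketch of the SIMD idea, the C loop below applies the same add operation to every element of the data in lock-step; on a real SIMD machine (or with an auto-vectorizing compiler) the loop body would be issued as one instruction acting on many elements at once. The names (N, a, b, sum) are illustrative only.

```c
#include <stdio.h>

#define N 64

int main(void) {
    float a[N], b[N], sum[N];

    for (int i = 0; i < N; i++) {   /* set up some sample data */
        a[i] = (float)i;
        b[i] = (float)(2 * i);
    }

    /* Single instruction (add), multiple data (all N element pairs). */
    for (int i = 0; i < N; i++)
        sum[i] = a[i] + b[i];

    printf("sum[0] = %.1f, sum[%d] = %.1f\n", sum[0], N - 1, sum[N - 1]);
    return 0;
}
```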
Multiple Instruction, Single Data Stream (MISD)
The next category is MISD, i.e. multiple instruction streams, single data stream. This structure would apply when several different instructions operate on the same stream of data. In practice, MISD architectures are rarely used.
Multiple Instruction, Multiple Data Stream (MIMD)
The next category is MIMD, in which multiple instruction streams operate on multiple data streams. This essentially means a group of independent computers, each with its own program counter, program, and data. All distributed systems are MIMD, so this classification is not very useful for our purposes.
That is Flynn's classification. To go further, we can divide all MIMD computers into two groups: those that have shared memory, usually called multiprocessors, and those in which each machine has its own local memory, usually called multicomputers.
The key idea of a multiprocessor is that there is a single virtual address space shared by all CPUs. If one CPU writes, for example, the value 44 to address 1000, any other CPU that subsequently reads from its address 1000 will get the value 44 (assuming no other write to that location has occurred in between). All the machines share the same memory.
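The following is a minimal sketch of this shared-memory behaviour using POSIX threads (the variable names and the value 44 are illustrative). Both threads run in the same address space, so after the writer thread stores 44 into the shared variable, the reader sees that value at the same location.

```c
/* Compile with: gcc shared.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static int shared_word = 0;      /* one "memory location" visible to all CPUs */

static void *writer(void *arg) {
    (void)arg;
    shared_word = 44;            /* CPU A writes 44 to the shared address */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);       /* wait, so the read below happens after the write */
    printf("read back: %d\n", shared_word);   /* CPU B reads the same location: 44 */
    return 0;
}
```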
In a multicomputer, on the other hand, every machine has its own private memory, i.e. the memory is distributed. If one CPU writes the value 44 to its address 1000, another CPU reading its own address 1000 will get whatever value was there before; the write of 44 does not affect its memory at all. A simple example of a multicomputer is a collection of personal computers connected by a network.
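To contrast with the shared-memory case, here is a minimal message-passing sketch (using MPI, assumed to be installed; names are illustrative). Each process has private memory, so the second node only learns the value 44 when the first node explicitly sends it in a message.

```c
/* Build and run, e.g.: mpicc msg.c -o msg && mpirun -np 2 ./msg */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;         /* "address 1000" in each node's private memory */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 44;              /* writing here changes only node 0's memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("node 1 received: %d\n", value);   /* visible only after the message arrives */
    }

    MPI_Finalize();
    return 0;
}
```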
As shown in the figure above, each of these two categories can be further divided on the basis of the architecture of the interconnection network. The two categories at the bottom of the figure are bus and switched.
In a bus-based system, there is a single network, backplane, bus, cable, or other medium that connects all the machines. Cable television works this way: the cable company runs a wire down the street, and all the subscribers have taps running to it from their television sets.
Switched systems do not have a single backbone like cable television. Instead, there are individual wires from machine to machine, with many different wiring patterns in use. Messages move along the wires, and an explicit switching decision is made at each step to route the message along one of the outgoing wires. The worldwide public telephone system is organized in this way.
Another dimension of our taxonomy is that in some systems the machines are tightly coupled, while in others they are loosely coupled. In a tightly coupled system, the delay when a message is sent from one machine to another is short and the data rate is high; that is, the number of bits per second that can be transferred is large. In a loosely coupled system, the opposite holds: the inter-machine message delay is large and the data rate is low.
For example, two CPU chips on the same printed circuit board, connected by wires etched onto the board, are likely to be tightly coupled, whereas two computers connected by a 2400 bit/sec modem over the telephone system are certain to be loosely coupled. Tightly coupled systems tend to be used as parallel systems (working on a single problem), and loosely coupled ones tend to be used as distributed systems (working on many unrelated problems), although this is not always true. One well-known example is a project in which hundreds of computers all over the world worked together to factor a huge number (about 100 digits). Each computer was assigned a different range of divisors to try; the computers worked on the problem in their spare time and reported their results back by electronic mail when they finished.
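The sketch below shows, under assumed names and a toy number, how such a project could split the work: the full range of trial divisors is cut into equal chunks and each participating computer independently tests its own chunk. This is only an illustration of the partitioning idea, not the project's actual code.

```c
#include <stdio.h>

int main(void) {
    unsigned long long number = 1000003ULL * 999983ULL;  /* toy composite to factor */
    unsigned long long limit  = 1000100ULL;              /* try divisors up to here */
    int workers = 4;                                      /* "computers" in the project */
    unsigned long long chunk = (limit - 2) / workers + 1;

    for (int w = 0; w < workers; w++) {                   /* each worker gets one range */
        unsigned long long lo = 2 + (unsigned long long)w * chunk;
        unsigned long long hi = lo + chunk;
        if (hi > limit) hi = limit;
        for (unsigned long long d = lo; d < hi; d++)
            if (number % d == 0)
                printf("worker %d found factor %llu\n", w, d);  /* "mail the result back" */
    }
    return 0;
}
```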
On the whole, multiprocessors tend to be more tightly coupled than multicomputers, because they can exchange data at memory speeds, but some fiber-optic based multicomputers can also work at memory speeds.