Mumbai University > Computer Engineering > Sem 8 > Parallel and Distributed Systems
Marks: 5M
Amdahl's Law states that the potential program speedup is defined by the fraction of code (P) that can be parallelized:

speedup = 1 / (1 - P)
- If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).
- If all of the code can be parallelized, P = 1 and the speedup is infinite (in theory).
- If 50% of the code can be parallelized, the maximum speedup = 2, meaning the code will run at most twice as fast.
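As a quick sanity check, here is a minimal Python sketch of this formula (the `speedup` helper is illustrative, not part of any library); it reproduces the cases above:

```python
def speedup(P):
    """Maximum theoretical speedup for a parallelizable fraction P."""
    if P >= 1.0:
        return float("inf")  # fully parallel: unbounded in theory
    return 1.0 / (1.0 - P)

print(speedup(0.0))  # 1.0 -> no speedup
print(speedup(0.5))  # 2.0 -> at most twice as fast
```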
Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:
speedup = 1 / (P/N + S)

where P = parallel fraction, N = number of processors, and S = serial fraction (S = 1 - P).
It soon becomes obvious that there are limits to the scalability of parallelism. For example:
Speedup as a function of N and P:

| N       | P = .50 | P = .90 | P = .99 |
|--------:|--------:|--------:|--------:|
| 10      | 1.82    | 5.26    | 9.17    |
| 100     | 1.98    | 9.17    | 50.25   |
| 1,000   | 1.99    | 9.91    | 90.99   |
| 10,000  | 1.99    | 9.99    | 99.02   |
| 100,000 | 1.99    | 9.99    | 99.90   |
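The table values can be checked with a short Python sketch of the formula above (again, the function name is purely illustrative):

```python
def speedup(P, N):
    """Speedup with parallel fraction P on N processors (S = 1 - P)."""
    S = 1.0 - P
    return 1.0 / (P / N + S)

for N in (10, 100, 1_000, 10_000, 100_000):
    row = "  ".join(f"{speedup(p, N):6.2f}" for p in (0.50, 0.90, 0.99))
    print(f"{N:>7}: {row}")
```

Note how the P = .50 column saturates at 2 almost immediately: no matter how many processors are added, the serial half of the program caps the speedup, which is exactly the limit the table illustrates.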
However, certain problems demonstrate increased performance by increasing the problem size. For example:
| | Time | Fraction of total |
|---|---|---|
| 2D Grid Calculations | 85 seconds | 85% |
| Serial fraction | 15 seconds | 15% |
We can increase the problem size by doubling the grid dimensions and halving the time step. This gives four times the number of grid points and twice the number of time steps, i.e. roughly eight times the parallel work (85 × 8 = 680 seconds), while the serial time stays fixed. The timings then look like:
| | Time | Fraction of total |
|---|---|---|
| 2D Grid Calculations | 680 seconds | 97.84% |
| Serial fraction | 15 seconds | 2.16% |
Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.
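A short sketch of the arithmetic behind the grid example, assuming (as above) a fixed 15-second serial portion while the parallel work grows 8x (4x grid points × 2x time steps):

```python
serial = 15.0    # seconds; unchanged by problem size
parallel = 85.0  # seconds of parallel work for the original grid

for scale in (1, 8):  # 1 = original problem, 8 = scaled-up problem
    p_time = parallel * scale
    fraction = p_time / (p_time + serial)
    print(f"scale {scale}x: parallel time = {p_time:.0f} s, "
          f"parallel fraction = {fraction:.2%}")
# scale 1x: parallel time = 85 s, parallel fraction = 85.00%
# scale 8x: parallel time = 680 s, parallel fraction = 97.84%
```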