- The sequential consistency model is a weaker consistency model, representing a relaxation of the rules of strict consistency.
- It is also much easier (in fact, possible) to implement.
- Definition of “Sequential Consistency”:
The result of any execution is the same as if the (read and write) operations by all processes on the data-store were executed in the same sequential order and the operations of each individual process appear in this sequence in the order specified by its program.
- In other words: all processes see the same interleaving of operations, regardless of what that interleaving is.
- In a sequentially consistent data-store, the write issued "first" may be observed after the "second" write, as long as every process observes the writes in the same order on all replicas.
- In a data-store that is not sequentially consistent, different processes observe the writes in different orders, and this is NOT allowed (a small checker for these two cases is sketched below).
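To make the two example executions concrete, here is a small, hypothetical Python checker (not from the source): it brute-forces every interleaving that respects each process's program order and reports whether some single interleaving explains all the reads. The operation tuples, the `is_sequentially_consistent` name, and the histories `p1`–`p4` are illustrative assumptions, not part of any real system.

```python
def is_sequentially_consistent(histories, initial=None):
    """Each history is one process's program: a list of ('W', var, val) or
    ('R', var, val) tuples.  Returns True iff some single interleaving that
    preserves every process's program order explains all reads (each read
    returns the most recently written value, or `initial` before any write).
    Brute force and exponential -- for illustration only."""
    def search(positions, store):
        finished = True
        for p, history in enumerate(histories):
            i = positions[p]
            if i == len(history):
                continue                      # this process is done
            finished = False
            op, var, val = history[i]
            if op == 'R' and store.get(var, initial) != val:
                continue                      # this read cannot happen next
            next_store = dict(store)
            if op == 'W':
                next_store[var] = val
            next_positions = list(positions)
            next_positions[p] += 1
            if search(next_positions, next_store):
                return True
        return finished                       # True only if every op was placed

    return search([0] * len(histories), {})


# The classic two-writer example: P1 writes x=a, P2 writes x=b.
p1 = [('W', 'x', 'a')]
p2 = [('W', 'x', 'b')]
p3 = [('R', 'x', 'b'), ('R', 'x', 'a')]       # sees b, then a

# All readers observe the two writes in the same order: allowed.
p4_ok = [('R', 'x', 'b'), ('R', 'x', 'a')]
print(is_sequentially_consistent([p1, p2, p3, p4_ok]))    # True

# Readers disagree on the order of the two writes: not allowed.
p4_bad = [('R', 'x', 'a'), ('R', 'x', 'b')]
print(is_sequentially_consistent([p1, p2, p3, p4_bad]))   # False
```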
- The sequential consistency model as defined by Lamport is a weaker memory model than strict consistency.
- Problem with Sequential Consistency
- With this consistency model, adjusting the protocol to favour reads over writes (or vice versa) can have a devastating impact on the performance of the disfavoured operation (refer to the textbook for the gory details).
- For this reason, other weaker consistency models have been proposed and developed.
- Again, a further relaxation of the rules is what makes these weaker models possible.
- Sequential Consistency Requirements
- Each processor issues requests in the order specified by its program – the next request is not issued until the previous one has finished
- Requests to an individual memory location (storage object) are served from a single FIFO queue.
– Writes occur in a single order
– Once a read observes the effect of a write, it is ordered behind that write (a toy sketch illustrating these requirements follows this list).
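As a concrete (and purely hypothetical) illustration of these two requirements, the Python sketch below serialises all operations on a storage object through a single FIFO queue served by one worker thread, and each client call blocks until its request has been served, so a client cannot issue its next request before the previous one finishes. This is a single-node toy, not a replication protocol; names such as `SequentialStore` are invented for the example.

```python
import threading
import queue

class _Request:
    """One read or write, plus an event the issuing client waits on."""
    def __init__(self, op, value=None):
        self.op, self.value = op, value
        self.result = None
        self.done = threading.Event()

class SequentialStore:
    """Toy single-node store: one FIFO queue (and one worker) per object, so
    all writes to an object are applied in a single order, and each client
    blocks on its request, so requests are issued in program order."""
    def __init__(self):
        self._values = {}
        self._queues = {}
        self._lock = threading.Lock()

    def _queue_for(self, key):
        with self._lock:
            if key not in self._queues:
                q = queue.Queue()
                threading.Thread(target=self._serve, args=(key, q),
                                 daemon=True).start()
                self._queues[key] = q
            return self._queues[key]

    def _serve(self, key, q):
        while True:                  # single consumer: one order per object
            req = q.get()
            if req.op == 'W':
                self._values[key] = req.value
            else:                    # 'R'
                req.result = self._values.get(key)
            req.done.set()           # unblock the waiting client

    def write(self, key, value):
        req = _Request('W', value)
        self._queue_for(key).put(req)
        req.done.wait()              # next request only after this one finishes

    def read(self, key):
        req = _Request('R')
        self._queue_for(key).put(req)
        req.done.wait()
        return req.result

store = SequentialStore()
store.write('x', 'a')
print(store.read('x'))               # 'a'
```

In a replicated store the same ordering discipline is typically obtained by other means, for example routing all writes through a primary copy or using totally-ordered multicast; the sketch only shows the ordering rules themselves.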