CAP and PACELC Theorems

Siddheshwar Kumar
4 min read · Feb 20, 2023


This post talks about theorems that are a direct consequence of the physical limitations of a distributed data system (slow and unreliable networks, an upper bound on the speed at which data can travel, the possibility of node failures). These limitations force us to give up having all the good things at once and instead pick one at the cost of another.

CAP is well known across the industry but also hugely misunderstood. This post tries to clarify some of the confusing aspects and also covers the theorem that builds on CAP (i.e. PACELC).

In a separate post, I talk about these physical constraints. Read more about distributed system patterns here.

CAP Theorem

CAP, or Brewer's theorem, is one of the most popular ways to reason about a shared data system. CAP stands for Consistency, Availability and Partition tolerance (or, more appropriately, netsplit tolerance). Partition tolerance means that you are communicating over an asynchronous network that may delay or drop an arbitrary number of messages.

Network partitioning can't be ignored in a distributed system, so you can only choose between Consistency and Availability (i.e. CP or AP). Practically, there are just two choices, not three! The theorem was proposed by Eric Brewer in 2000 and formally proved by Gilbert and Lynch in 2002.

1. (Linearizable) Consistency or Atomic Data Objects

Under this consistency guarantee, there must exist a total order on all operations such that each operation looks as if it were completed at a single instant. This means that all read operations that occur after a write operation completes must return the value of this (or a later) write operation. Keep in mind that CAP consistency means linearizable consistency. More consistency models are explained here.
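To make the read-after-write rule concrete, below is a minimal single-process sketch in Python; the Register class is a toy of my own, not a distributed implementation. Every operation takes effect atomically at one instant (inside the lock), so a read that starts after a write has completed must observe that write. In a real replicated system, the hard part is making reads served by other replicas obey the same rule.

```python
import threading

class Register:
    """Toy register: each operation takes effect atomically at a single
    instant (while holding the lock), which is what linearizability asks for."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def write(self, value):
        with self._lock:           # the write "takes effect" here
            self._value = value

    def read(self):
        with self._lock:           # a read that starts after a completed write
            return self._value     # must return that write's value (or a later one)

r = Register()
r.write(42)                 # once this call returns, the write has completed
assert r.read() == 42       # so any subsequent read must see 42 (or a newer value)
```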

2. Availability

Every request received by a non-failing node must result in a response. This means any algorithm used by the service must eventually terminate. In some ways it's a weak definition of availability, as it puts no upper bound on the time to serve a request. On the other hand, it's a strong definition of availability during severe network failure: every request must terminate. If we go strictly by this definition, CAP availability and availability as commonly understood are NOT the same.

3. Partition Tolerance (Tolerance to network split)

The network is allowed to lose arbitrarily many messages sent from one node to another. When a network is partitioned, all messages sent from nodes in one component to nodes in another component are lost. The atomicity requirement, therefore, implies that every response will be atomic, even though arbitrary messages sent might not be delivered. The availability requirement implies that every node receiving a request from a client must respond, even though arbitrary messages that are sent may be lost.

The choice is quite simple: if a network split happens, do you want the system to keep answering, possibly with old/stale data (AP), or should it stop responding unless it can return the latest copy (CP)?
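As a rough illustration (not any particular database's API), here is a toy Python sketch of those two behaviours on a single replica during a partition; can_reach_quorum and the hard-coded QUORUM_REACHABLE flag are made up for the example.

```python
# Pretend a network split has cut this replica off from the others.
QUORUM_REACHABLE = False

def can_reach_quorum():
    """Made-up helper: can we reach enough replicas to confirm the latest value?"""
    return QUORUM_REACHABLE

def read_ap(local_copy):
    """AP choice: always answer, even if the local copy may be stale."""
    return local_copy

def read_cp(local_copy):
    """CP choice: refuse to answer unless the latest copy can be guaranteed."""
    if not can_reach_quorum():
        raise RuntimeError("unavailable: cannot guarantee the latest value")
    return local_copy

print(read_ap("possibly-stale-value"))    # keeps responding during the partition
try:
    read_cp("possibly-stale-value")
except RuntimeError as err:
    print(err)                            # gives up availability instead
```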

Confusions/Misses from CAP

  1. CAP describes a very specific model of the system: communication over an asynchronous network where packets might get lost. What if nodes crash or get rebooted? What if a node runs out of space or hits a bug?
  2. It gives the impression that we can choose any two out of three, which is certainly not the case (i.e. there is NO CA) if we are talking about distributed systems.
  3. It gives the impression that all is well if there is no network partitioning.

The above list is not complete (you can find the full list here), and these limitations/confusions led to PACELC!

PACELC Theorem (pronounced as “pass-elk”)

Abadi generalized Brewer's CAP principle by defining PACELC. It covers some of the omissions and confusions created by CAP.

If there is a partition (P), how does the system trade off availability and consistency (A and C); else (E), when the system is running normally in the absence of partitions, how does it trade off latency and consistency (L and C)?

Note that the latency/consistency tradeoff (ELC) only applies to systems that replicate data. Otherwise, the system suffers from availability issues upon any type of failure or overloaded node. Because such issues are just instances of extreme latency, the latency part of the ELC tradeoff can incorporate the choice of whether or not to replicate data. More details here.

The default versions of Dynamo, Cassandra, and Riak are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation, they give up consistency for lower latency.
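Cassandra makes this dial explicit through per-request consistency levels. The sketch below uses the DataStax Python driver (pip install cassandra-driver); the cluster address, keyspace and table names are hypothetical. ConsistencyLevel.ONE waits for just one replica (lower latency), while QUORUM waits for a majority (stronger consistency at higher latency); Riak and Dynamo expose similar per-request R/W quorum knobs.

```python
# Assumes a reachable Cassandra cluster with a keyspace "demo_keyspace"
# and a table "users" -- both hypothetical names for this example.
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_keyspace")

# EL side of PACELC: wait for one replica -> lower latency, weaker consistency.
fast_read = SimpleStatement(
    "SELECT * FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE,
)

# EC side: wait for a quorum of replicas -> stronger consistency, higher latency.
safe_read = SimpleStatement(
    "SELECT * FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)

row_fast = session.execute(fast_read, ["user-1"]).one()
row_safe = session.execute(safe_read, ["user-1"]).one()
```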

This post covered some physical and logical limitations of distributed data systems.

Happy Learning! If you enjoyed it, please clap 👏 for it.
