The Trust Machine — Part 2: A model of the entire DLT space

Designing humanity's trust layer for the digital world

The Fundamentals

To be able to develop such a model, we first need to understand some fundamental facts about distributed ledgers.

1. Two forms of communication

All existing DLTs use two forms of communication that have very different properties: direct point-to-point queries between nodes, and the gossip protocol, where messages are forwarded from node to node until they reach the whole network.

2. Nodes in a DLT have a relativistic perception of time

Nodes in a decentralized network are by definition spatially separated (distributed). Since information can only travel at a finite speed, they will naturally see messages in a different order.

3. Consensus means voting

A direct consequence of nodes seeing messages in a different order is that, to reach consensus in the presence of conflicts, we need to vote on which of the conflicting messages is supposed to win.
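To make points 2 and 3 concrete, here is a minimal Python sketch (all latencies and node names are made up) in which three nodes receive the same pair of conflicting messages in different orders and then resolve the conflict by a simple majority vote:

```python
from collections import Counter

# Hypothetical latencies (in ms) from each sender to each receiving node.
latency = {
    ("Alice", "node1"): 10, ("Alice", "node2"): 90, ("Alice", "node3"): 20,
    ("Bob",   "node1"): 80, ("Bob",   "node2"): 15, ("Bob",   "node3"): 70,
}

# Two conflicting messages (e.g. a double spend), sent at almost the same time.
messages = [("Alice", "msg A", 0), ("Bob", "msg B", 1)]

# 1) Relativistic perception of time: each node sees the messages in the
#    order in which they arrive, and that order differs from node to node.
first_seen = {}
for node in ("node1", "node2", "node3"):
    arrivals = sorted(
        (sent_at + latency[(sender, node)], payload)
        for sender, payload, sent_at in messages
    )
    first_seen[node] = arrivals[0][1]
    print(node, "sees", [payload for _, payload in arrivals])

# 2) Consensus means voting: each node votes for the message it saw first,
#    and the message with the most support wins.
winner, votes = Counter(first_seen.values()).most_common(1)[0]
print(f"{winner} wins with {votes} of {len(first_seen)} votes")
```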

Classical consensus

Contrary to popular belief, consensus research did not start with Satoshi in 2008 but decades earlier, in the 1980s, with the work on the algorithms of the Paxos family and its successors. Early on, these algorithms provided only fault tolerance (resilience against crashed or non-responsive validators) but were later extended to be secure against arbitrary byzantine faults (validators that lie or actively try to break the system). Despite this maturity, classical consensus algorithms come with a number of limitations:

1. They only work with a few dozen validators

The algorithms are based on knowing the opinions of all other validators, which requires nodes to regularly query each other using point-to-point communication. The resulting message overhead grows quadratically with the number of validators (see the sketch after this list), which limits these protocols to small committees.

2. They only work in a fixed committee setting

All network participants need to agree upfront on the identities of all validators which prevents a naive deployment in an open and permissionless setting.

3. They are vulnerable to DDoS attacks

Since the protocols rely on validators directly querying each other over point-to-point connections, the validators have to be reachable at known network addresses, which makes them susceptible to targeted DDoS attacks.
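The first of these limitations is easy to quantify: if every validator queries every other validator once per voting round, the number of point-to-point messages grows quadratically with the size of the committee. A back-of-the-envelope sketch (the committee sizes are just examples):

```python
# If every validator queries every other validator once per round,
# the message count grows quadratically with the committee size.
def messages_per_round(validators: int) -> int:
    return validators * (validators - 1)

for n in (10, 100, 1000):
    print(f"{n:>5} validators -> {messages_per_round(n):>8} messages per round")

# A few dozen validators are manageable; thousands are not:
#    10 validators ->       90 messages per round
#   100 validators ->     9900 messages per round
#  1000 validators ->   999000 messages per round
```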

Nakamoto consensus

For a long time it seemed like we knew everything about consensus and its limitations, until something unexpected happened: in 2008, Satoshi Nakamoto published the Bitcoin whitepaper. Instead of having validators query each other for their individual opinions, Nakamoto consensus has every node consider only a single opinion (the longest chain of blocks) and distributes that opinion through the gossip protocol.

Benefits

These two small changes (considering only a single opinion and using the gossip protocol to distribute it) had a huge impact on the properties of the consensus algorithm: the three limitations of classical consensus essentially disappear. The protocol scales to networks with a very large number of validators, it works in an open and permissionless setting without a fixed committee, and since validators no longer have to be queried directly, targeted DDoS attacks become much harder.
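The gossip part can be illustrated in a few lines. In the sketch below (the topology and node names are made up), every node forwards a message to its neighbours exactly once, so it eventually reaches the whole network without any node having to query another node directly:

```python
# A toy gossip ("flooding") protocol: each node forwards a message to its
# neighbours exactly once; nobody is ever queried directly.
neighbours = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def gossip(origin: str) -> set:
    seen = {origin}
    frontier = [origin]
    while frontier:
        node = frontier.pop()
        for peer in neighbours[node]:
            if peer not in seen:      # forward only to peers that haven't seen it yet
                seen.add(peer)
                frontier.append(peer)
    return seen

print(sorted(gossip("A")))  # ['A', 'B', 'C', 'D', 'E'] -- the message reaches every node
```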

Trade-offs

Even though Nakamoto consensus has a lot of really compelling features and is the first approach that enables robust distributed consensus in large-scale networks, it also has its trade-offs:

  1. Probabilistic Finality
    The first trade-off that Nakamoto consensus makes compared to classical consensus is its probabilistic finality. This means that things in Nakamoto consensus never become truly final, they just get harder and harder to revert. In Bitcoin, a transaction is usually considered final once it has been confirmed by 6 blocks (see the sketch after this list).
  2. Slow confirmations
    Since transactions have to be confirmed by a certain number of blocks before they can be considered "irreversible", and since blocks are issued with a relatively large delay, it takes a long time until a transaction can be considered confirmed.
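How quickly a reversal becomes impractical can be estimated with the attacker catch-up calculation from section 11 of the Bitcoin whitepaper. The sketch below reproduces that calculation in Python; the 10% and 30% attacker shares are just illustrative values:

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling a share q of the hash rate
    ever catches up after a transaction is buried under z blocks
    (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With a 10% attacker the probability drops below 0.1% after ~5-6 blocks;
# with a 30% attacker it takes considerably longer.
for q in (0.10, 0.30):
    for z in (0, 1, 2, 6):
        print(f"q={q:.2f}, z={z}: {attacker_success_probability(q, z):.7f}")
```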

Defining the model

The two consensus mechanisms described above are so different in their properties that they most likely sit at opposite ends of the DLT spectrum.

Probabilistic vs deterministic finality

One of the simplest and most obvious classifications is the separation between protocols that have a:

  • deterministic finality, which means that things are truly final and there is no chance for rollbacks in the system.
  • probabilistic finality, which means that things get harder and harder to revert but never become truly final.

Time to finality

Another important metric is the time that it takes for transactions to be finalized.

Scalability (supported network size)

Every DLT is based on a peer-to-peer network of distributed nodes where at least some of these nodes act as validators (or consensus producers). The scalability of a protocol is therefore determined by the size of the network (and the number of validators) it can support.

Security & Robustness

Another very important metric is the security and robustness of the protocol. Following the previous example, we are going to discuss each quadrant of the model individually.

Level of freedom (sybil protection)

The DLT revolution was started by lowering the requirement for an absolute consensus on the identities of the validators to an absolute consensus on the weight of the gossiped messages.
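A minimal sketch of what this can mean in practice, using proof of work as the weight and made-up numbers: nodes never need to know who produced a block, they only need a shared rule for weighing the blocks that reach them via gossip.

```python
# Hypothetical example: two competing chains, each block carrying the amount
# of proof of work attached to it. Nobody needs to know the block producers'
# identities -- only how to weigh the gossiped blocks.
chain_a = [{"id": "block 1",  "work": 70}, {"id": "block 2",  "work": 65}]
chain_b = [{"id": "block 1'", "work": 80}, {"id": "block 2'", "work": 90},
           {"id": "block 3'", "work": 40}]

def total_weight(chain) -> int:
    # The weight of a chain is simply the accumulated work of its blocks.
    return sum(block["work"] for block in chain)

preferred = max((chain_a, chain_b), key=total_weight)
print("preferred chain:", [block["id"] for block in preferred])  # the heavier chain wins
```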

Summary

We have introduced a model that allows us to classify DLTs into 5 different categories. These categories dictate most of their fundamental properties (independently of the remaining design decisions).

Implications

Nakamoto consensus is the most robust, most secure and most scalable consensus mechanism that we know. Its probabilistic finality, achieved through a replicated data structure like the blockchain, has not only given us a tool that completely eliminates agreement failures but has also proven to be secure as long as more than 50% of the validators are honest.

The emergence of Bitcoin maximalism

The realization that a new technology can only be successful if it is better than existing solutions, combined with the previous observation, led to a huge rift in the crypto community.

Hybrid solutions

To overcome these limitations, most contemporary DLTs try to combine different solutions to get the best of multiple worlds.

The unexplored quadrant

If we look at our model, we realize that there is a single quadrant (subset of validators / gossip protocol) that is currently not covered by any consensus mechanism.

Conclusion

We have developed a model that allows us to judge the potential benefits and drawbacks of different consensus mechanisms independent of their specific design decisions.
