The Trust Machine — Part 1: Sybil Protection
Designing humanity's trust layer for the digital world
This is the first of a series of blog posts that will gradually introduce the concepts and ideas behind a novel DLT architecture that aims to solve all existing inefficiencies of contemporary DLTs.
The current effort of the IOTA Foundation is to make the network decentralized in a secure way, so we have been focusing on vetted principles first. With Nectar around the corner, we are finally going to have the time to look at each building block and improve or simplify key elements.
I believe that going the safe route is definitely the best option for a project of the size of IOTA, but I also believe that Coordicide is not the end of the road but merely the beginning.
All of the concepts and ideas that I am going to describe are very radical and not backed by extensive simulations or research.
They are heavily inspired by conversations with early developers of IOTA like Paul Handy or Come from Beyond, so this will be an attempt to develop a full and clear specification of my vision of the “ultimate” version of IOTA.
The reason why IOTA has taken so long to become decentralized is that we didn’t want to take any shortcuts. The network that we are going to design is not only going to be faster and simpler but also more efficient, more secure, more robust, more decentralized and more scalable than any existing technology.
The first part of this series discusses the Sybil protection problem, which most projects currently do not even perceive as an actual problem, and which I consider to be the reason why contemporary DLTs are so hard to scale.
Society is the first decentralized network
If we see humans as nodes that transact with each other and communicate via gossip, then the Scalability Trilemma was solved tens of thousands of years ago with the rise of the first societies.
Society scales because it has established a way to tell honest and malicious actors apart. It uses a mechanism called trust, which is a funny name for:
“I think that somebody is better off being honest than betraying me, because he would either lose his reputation, face legal consequences or miss out on the good things that I would be willing to do for him.”
This intuitive perception of real world game theory in combination with time forms the basis for a reputation system (trust) that allows us to build complex societies even though our capacity of getting to know other individuals is limited.
It captures all kinds of human relations from family and friends to relations between businesses, corporations and nation states.
Let’s look at an example:
If Audi didn’t deliver the cars that people buy and instead just stole their money, they wouldn’t be able to sell cars for much longer and would lose trust very fast. It is therefore in their own interest to be honest and continue their business.
Virtual trust in the digital world
Bitcoin and its corresponding Proof of Work create a game that mimics a similar mechanism in the digital world. A miner with access to a certain amount of hash power is always better off securing the network than attacking it.
Bitcoin is equivalent to a society that agreed to a rule that whenever somebody wants to perform an economic activity, he first needs to find a company that is willing to dig a hole where he is going to put his receipt.
Since digging is a lucrative business, digging companies are competing to make bigger and bigger holes, which is starting to create problems for the environment.
Proof of Stake tries to solve these inefficiencies by paying rich people to confirm transactions instead. This is indeed more energy efficient but it comes with its own trade-offs and implications for the resulting system:
- The rich keep getting richer.
- If the rich ever decide to censor somebody or roll back history, then we no longer have the dug holes to convince people of the truth.
Nobody would build a real-world society on these principles, for obvious reasons, but when it comes to securing an open and permissionless DLT, proving access to a scarce resource is considered state of the art.
But this kind of attack protection is not just inconvenient and inefficient; it also has several other problems:
1. It is hard to shard
Since the processing capabilities of nodes are limited by their hardware, the only way to reliably scale a DLT is sharding.
With sharding, nodes distribute the total amount of work among all network participants so that every node only has to process a subset of all transactions.
Only seeing a subset of all transactions automatically means: only seeing a subset of the statements of the validators. If hash power or staked tokens can be seen like a wall that protects the DLT, then sharding means that you have to split that wall into a lot of smaller pieces which will then be used to build the walls of the additional shards.
If the number of shards gets really large, then at some point the wall will be so small that it is virtually non-existent. This problem is known as the Scalability Trilemma.
Contemporary DLTs try to work around this issue by randomly reassigning validators between shards at regular intervals, which makes it harder to plan an attack.
This not only complicates the protocols considerably, it also doesn’t really solve the underlying problem. Sharding can only scale reliably to an arbitrary size if creating new shards does not lower the height of the wall.
2. You exclude a lot of honest actors
If participation as a validator requires access to a scarce resource, then you automatically exclude a lot of potential validators who are simply too poor but otherwise perfectly honest.
3. Economies of scale lead to increasing centralization
Anything that is based on a scarce resource automatically favors actors that have a better or cheaper access to the underlying resource.
The resulting cost advantage in operating a validator leads to a centralization of power around the actors with the cheapest access to the underlying resource.
4. Game theory does not always hold
Humans are not always rational players (in the context of game theory) which means that there might be situations where they deviate from the expected behavior.
Popular examples are the usual “gun to the head” scenario as well as the “I want to see the world burn” scenario, where very powerful actors are “unexpectedly” incentivized to break the protocol.
The more centralized the system is, the worse these problems become.
5. The network is attackable from the outside
Since it is possible to acquire the resource that is used as a Sybil protection mechanism in secret, a very powerful attacker might be able to break the system if he is willing to spend enough money (e.g. before a war). The network would, as a result, be rendered completely unusable.
Real trust in the digital world
We have discussed the problems of virtual trust in the digital world, but what about real trust?
Trust isn’t limited to the real world. In fact, every time we buy something online, we trust the merchant to deliver the goods. Furthermore, trust isn’t limited to a one-to-one relation between two peers; it can also form complex networks.
Let me give you an example: I once had the chance to dine with one of the founders of AirBNB (long before it was big). A man who owned a huge hotel brand in the US happened to be sitting at the same table as us.
When we discussed the ideas behind AirBNB, he said that he was 100% certain that a system like that would never work. He had been running hotels for decades, and from first-hand experience he could tell us that guests tend to be horrible: they regularly steal equipment or even destroy hotel rooms.
As it turned out, he was wrong, and the introduction of a reputation system that rewarded honest behavior with something as simple as yellow stars was enough to turn the supposedly doomed business into a gold mine.
What the yellow stars achieve is to allow two people who do not otherwise know each other to establish a trust relationship based on the past experiences of other members of the same platform. AirBNB becomes the middleman for establishing this trust relationship.
Distributed trust as a middleman
DLTs work in a very similar way: they act as a middleman between parties that do not know each other but want to transact. In contrast to AirBNB, however, we do not trust a single entity; we trust the protocol and distribute the trust among the validators.
What is nice about using such a protocol is that it is much harder to misuse trust. In the real world we regularly see things like corruption, where people misuse the trust that others put in them to do bad things, but this is just a result of too little supervision by society.
Good examples are Wirecard and Enron. If their books had been open and accessible to anybody, then they wouldn’t have had a chance to get away with it. It is this fact, that things in a DLT happen “out in the open”, which makes them so secure.
Actors only act maliciously if there is a chance that they get away with it. The more validators there are, the harder it becomes to get away with it and the less likely it becomes that actors even try to be malicious. In other words: opportunity makes the thief! More decentralization is good for security.
Integrating trust directly into the DLT
So if trust is not limited to the real world, then why has nobody ever tried to build a system that uses trust as a Sybil protection mechanism?
The answer is simple: we currently have no technology that would be powerful enough to do this in an open and permissionless way. Trust is too subjective, fuzzy, and asymmetric for us to easily use it to reach consensus in a blockchain.
The benefits of such a system would however be so huge that I would even say that:
The first DLT that manages to integrate real-world identities and trust in an open and permissionless way as a Sybil protection mechanism will make all other cryptocurrencies obsolete.
It wouldn’t just be orders of magnitude more decentralized but also orders of magnitude more efficient and secure. Everybody could be a validator, even without owning any tokens or expensive hardware.
The only thing that would be necessary to become a validator is to create a decentralized identity (a public/private key pair) and publish the public key so people know that this identity belongs to its issuer (very much like communicating your phone number).
Node operators can then add the public keys of actors they trust to a local list of trusted entities. This list is completely subjective and not limited in size. It can contain everything from family and friends, through colleagues and local businesses, to actors like governments and international organizations. It is completely up to the node operator to decide who they deem trustworthy.
To break the protocol, an attacker would need to corrupt more than 50% of a node’s trusted actors. The more people start using IOTA and publish their decentralized identity, the more options there are to select trusted actors and the more secure the network becomes.
But even a situation where somebody manages to corrupt more than 50% of the trusted nodes is not really a problem, because it is possible to edit the list and ignore the attack. This renders the network virtually unattackable. Since IOTA uses a data structure that votes on every single transaction individually, it is even possible to revert individual decisions without affecting other transactions (if there is a social consensus that this should be done).
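To make this more concrete, here is a minimal Python sketch of the mechanism described above, under the assumption of a simple majority rule over each node’s local trust set. The `new_identity` helper, the `Node` class and its methods are all illustrative names invented for this post, not part of any actual IOTA implementation, and the key generation is a placeholder rather than a real signature scheme.

```python
import secrets


def new_identity():
    """Create a toy decentralized identity.

    A real implementation would derive the public key from the private key
    with a signature scheme such as Ed25519; here both are just random hex
    strings standing in for the key material.
    """
    private_key = secrets.token_hex(32)
    public_key = secrets.token_hex(32)  # placeholder, not actually derived
    return private_key, public_key


class Node:
    """A node with a purely local, subjective list of trusted identities."""

    def __init__(self):
        self.trusted = set()

    def trust(self, public_key):
        self.trusted.add(public_key)

    def distrust(self, public_key):
        # Editing the list is how an operator ignores a corrupted majority.
        self.trusted.discard(public_key)

    def accepts(self, approvals):
        """Accept a statement once more than 50% of the identities this
        node trusts have approved it."""
        if not self.trusted:
            return False
        return len(approvals & self.trusted) * 2 > len(self.trusted)
```

Note that the quorum is evaluated against each node’s own list, so two operators with different lists can legitimately reach different verdicts; nothing global is assumed.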
This is a very powerful concept because it softens the cold-hearted “code is law” principle into “code is law with social supervision”.
Imagine a smart contract being hacked that manages the retirement funds of a whole country. With IOTA and a trust-based model, society as a whole could decide to simply issue a second spend of the same funds back to the smart contract, and then everybody approves this spend instead. All other transactions are completely unaffected by this.
It is important to note that node operators who do not agree with the rollback can simply remove the validators that performed it from their list. The resulting two factions of the network that hold different beliefs regarding the funds in question can still cooperate on all other payments and don’t need to hard-fork the whole system. Instead, they would just mutually not accept payments that originate from the questionable incident. It is reasonable to assume that society will find a social consensus on how to proceed with the situation.
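The two-faction scenario can be sketched in a few lines of Python, again assuming a simple local majority rule. All validator names, the faction compositions and the `accepted` helper are made up for illustration; the point is only that the contested transaction diverges while ordinary payments remain accepted by both sides.

```python
def accepted(trusted, approvals):
    """A node accepts a transaction once more than half of the
    validators it trusts have approved it."""
    return len(approvals & trusted) * 2 > len(trusted)


# Faction A still trusts the validators that performed the rollback
# ("r1", "r2"); faction B has removed them from its local list.
faction_a = frozenset({"r1", "r2", "v1", "v2", "v3"})
faction_b = frozenset({"v1", "v2", "v3"})

# The contested rollback is approved by the rollback validators plus one.
rollback_approvals = frozenset({"r1", "r2", "v1"})
# An ordinary payment is approved by validators both factions trust.
payment_approvals = frozenset({"v1", "v2", "v3"})

accepted(faction_a, rollback_approvals)  # True: 3 of 5 trusted approve
accepted(faction_b, rollback_approvals)  # False: only 1 of 3 trusted approve
accepted(faction_a, payment_approvals)   # True for faction A...
accepted(faction_b, payment_approvals)   # ...and for faction B alike
```

Because per-transaction voting is local, the disagreement stays confined to the contested funds instead of forcing a hard fork of the whole ledger.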
It is also reasonable to assume that the community would maintain a list of collectively trusted entities (like companies, governments and so on), so that even node operators who do not want to put too much effort into maintaining this list could conveniently set up a node.
If we want to build a system that uses real-world trust, then I believe that such a subjectively maintained list of other self-proclaimed decentralized identities is the simplest and most straightforward way to model the trust network in an open and permissionless way.
We have discussed a “new” form of Sybil protection that uses trust in real-world identities to defend against potential attackers.
By making this unconventional step, we not only create a network with superior properties but also a powerful tool for society to manage its financial truth in times of disaster, making the digital world a natural extension of the real one.
PS: I am not entirely sure how I will structure the remaining content and it might take a few days until I’ve had the time to write the next part but I will try to write at least one post every 1–2 weeks.