What Holds Real Decentralization Back

Opinion
21.04.2020

Vladimir Popov, the founder of Synergis and co-author of a book on Web 3.0, highlighted the principal problems of the DPoS and LPoS algorithms, explained the flaws in their architecture, and hypothesized about the decentralized networks of the future. This is an adaptation of the original Russian-language article Vladimir wrote exclusively for ForkLog.

Concentration of Wealth

Many people still think that blockchain is about trust, although it was actually created to build a trustless environment.

The latest version of the Cardano client does certain things only if there are enough honest participants. What if there aren’t? What if somebody wants to take over the system, no matter how expensive it is?

Things like this have already happened within blockchain communities: EOS whales attacked Ethereum, Justin Sun tried to take over Steem through Steemit, and so on.

In my opinion, this can lead to grim consequences given how fast the space grows. Currently, the cryptocurrency market is small, so now is the time to polish security at the architecture level.

The share of assets owned by the top 10 addresses in Bitcoin and Cardano

The contrast is stark: Cardano’s 33.05% against Bitcoin’s 5.61%.

Protection against a 51% attack therefore comes down to the top 10, 20, or 30 addresses. This is particularly prominent in EOS.
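
To make the comparison concrete, here is a minimal Python sketch of how such figures are computed; the balances are made up for illustration, not real chain data, and real numbers would come from an explorer or a full node.

    def top_share(balances, n=10):
        """Share of the total supply held by the n richest addresses."""
        ranked = sorted(balances, reverse=True)
        return sum(ranked[:n]) / sum(ranked)

    def addresses_to_majority(balances, threshold=0.5):
        """Minimum number of richest addresses whose combined holdings exceed the threshold."""
        ranked = sorted(balances, reverse=True)
        total, running = sum(ranked), 0.0
        for i, balance in enumerate(ranked, start=1):
            running += balance
            if running / total > threshold:
                return i
        return len(ranked)

    # Hypothetical distribution: a few whales plus a long tail of small holders.
    sample = [9_000, 7_500, 6_000] + [10] * 10_000
    print(top_share(sample))              # share controlled by the top 10
    print(addresses_to_majority(sample))  # how few addresses cross 50%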

Of course, metrics like total supply, levels of network decentralization or distribution across nodes, number of users, etc. should be taken into account. But it doesn’t change the nature of the problem.

Another, even more complicated, fundamental question is how to account for the different aspects of decentralization when building decentralized social networks.

Hard and Soft

If a cryptocurrency isn’t prone to a 51% attack at all, it is centralized: immunity means that some privileged party, rather than the open majority, decides consensus. The same goes for Sybil attacks.
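
Both observations come back to the honest-majority assumption. Section 11 of the Bitcoin whitepaper quantifies how quickly that assumption stops helping: as the attacker’s share of hash power approaches one half, no number of confirmations keeps the double-spend probability low. A minimal Python rendering of that calculation:

    import math

    def attacker_success(q, z):
        """Probability that an attacker with hash-power share q overtakes the
        honest chain after z confirmations (Nakamoto, section 11)."""
        p = 1.0 - q
        if q >= p:
            return 1.0                       # a majority attacker always catches up
        lam = z * (q / p)
        prob = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            prob -= poisson * (1 - (q / p) ** (z - k))
        return prob

    for q in (0.10, 0.30, 0.45):
        print(q, attacker_success(q, z=6))   # rises sharply as q approaches 0.5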

Despite the fact that blockchain can be instrumental in eliminating the threat of Sybil attacks on p2p networks (Tor, I2P, torrent networks, etc.), it remains vulnerable to such attacks itself.

PoS projects always arrive at the same conclusion: today there can be roughly a thousand supernodes or similar units at most. EOS has 21 block producers with 72 on the waiting list, Cosmos has 100 validators, and so on.
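
With validator sets that small, the arithmetic of collusion is simple. The sketch below assumes equal voting power (a simplification; real weights depend on stake and votes) and shows how many validators are enough to cross the classic BFT thresholds: more than ⅓ to halt finality, more than ⅔ to control it.

    import math

    def colluders_needed(validators, fraction):
        """Smallest number of validators whose share strictly exceeds `fraction`,
        assuming equal voting power."""
        return math.floor(validators * fraction) + 1

    for name, n in (("EOS block producers", 21), ("Cosmos validators", 100)):
        print(name,
              "halt finality:", colluders_needed(n, 1 / 3),
              "control finality:", colluders_needed(n, 2 / 3))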

Under normal conditions, these few may be more effective than hundreds of thousands of miners or tens of thousands of full nodes. But operation isn’t always normal and stable, and the system isn’t always protected. This is the premise of Nassim Taleb’s black swan and antifragility theories, which apply equally well to medicine, education, and technology.

What if the network throughput in many DPoS and LPoS systems deviates further than the protocol accounts for? What if Facebook, with its half a million servers, migrated to EOS, Cosmos, Cardano, or Ethereum? Would any p2p network hold up under such load?

We’ve already seen networks fail not because of architectural problems, but because of trivial hardware bugs. In the case of Ripple and MoneyGram, the load can be monitored online.

Studying these cases, I’ve come to a simple but bitter conclusion: we still hope that mathematics will prevail over the physical world. It won’t. What do Facebook and Telegram do to scale further when the software is at its limit? They buy hardware and “expand” network capacity that way.

What About P2P?

First of all, there are no cheap servers, and the problems left unsolved by Golem and others will come back. Among them, dealing with high latency, which is critical for decentralized social networks (DSN), and replacing failed hardware are top priorities.

Then, even if everything goes as planned, approval from ⅔ of the voters will be needed, and that takes time. If there is no approval, ideological problems arise.

Finally, all of the available solutions (moving to the Dapp level, introducing sidechains, etc.) have their own problems with centralization and trust. This diverges from the trustless environment blockchain is meant for.

Problems with hardware are related to the software-based limit on network effectiveness and the network effect coefficient. According to Metcalfe’s law, “the effect of a telecommunications network is proportional to the square of the number of connected users of the system.” Therefore, the network effect grows quadratically rather than linearly, and all the related problems will eventually explode along with it.
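
A quick illustration of that growth: with n users there are n(n−1)/2 potential pairwise links, so every tenfold increase in users means roughly a hundredfold increase in the connections the infrastructure has to carry.

    def potential_links(n):
        """Number of pairwise connections between n users (Metcalfe's n*(n-1)/2)."""
        return n * (n - 1) // 2

    for n in (1_000, 10_000, 100_000):
        print(n, potential_links(n))   # ~100x more links for every 10x more users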

The problem gets more complicated when all the levels of decentralization and distribution are considered in terms of the following (one way to quantify this is sketched after the list):

  • the number of users
  • their ownership shares
  • nodes
  • miners
  • developers.
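
One illustrative way to put a number on this, across all of the dimensions above, is a Nakamoto-coefficient-style measure: for each subsystem, count the minimum number of independent entities that together control more than half of it, then look at the smallest count, which marks the weakest link. The shares below are invented purely for illustration.

    def nakamoto_coefficient(shares, threshold=0.5):
        """Minimum number of entities whose combined share exceeds the threshold."""
        ranked = sorted(shares, reverse=True)
        total, running = sum(ranked), 0.0
        for i, share in enumerate(ranked, start=1):
            running += share
            if running / total > threshold:
                return i
        return len(ranked)

    # Invented shares for several subsystems of one hypothetical network.
    subsystems = {
        "mining pools": [0.22, 0.18, 0.15, 0.15, 0.12, 0.10, 0.08],
        "node hosting": [0.40, 0.25, 0.20, 0.15],
        "client implementations": [0.85, 0.10, 0.05],
    }
    coefficients = {k: nakamoto_coefficient(v) for k, v in subsystems.items()}
    print(coefficients, "weakest link:", min(coefficients, key=coefficients.get))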

Let’s take numbers similar to those of the Bitcoin network:

It is apparent that an attack on mining pools isn’t a good idea, considering the mining difficulty and the prices of the S17+ and S19 ASICs. Looking for bugs akin to the epic bug in The DAO would be more reasonable. The weakest link is the centralized solutions that profit from p2p networks.

Another way is not to destroy the resource but to harness it, whether through mining malware or ransomware targeting ASIC operators.

In this regard, having more tokens has a positive effect on the capabilities of a decentralized or distributed network.

Atomic swaps, decentralized applications, and sidechains all aim to increase the general stability and interconnection.

Supernode Hypothesis and Advantages of Thinking In Advance

In some blockchains like Bitcoin, Ethereum, and their forks, any user can set up a full node and check everything themselves. In this case, there are protection mechanisms like Winkle, a second-layer, client-driven validation scheme where each client adds the hash of a previous block to the transactions they sign.
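
To give a feel for the idea, here is a simplified sketch, not the actual Winkle specification; the field names and the tallying rule are illustrative. Each signed transaction carries the hash of a block its sender considers settled, and those implicit votes, weighted by the senders’ balances, confirm a checkpoint once enough stake stands behind it.

    from collections import defaultdict

    def confirmed_checkpoint(transactions, balances, quorum=2 / 3):
        """Return the block hash endorsed by senders holding at least `quorum`
        of the tracked stake, or None if no checkpoint is heavy enough yet.
        Assumes at most one transaction per sender in this simplified model."""
        votes = defaultdict(float)
        for tx in transactions:
            votes[tx["checkpoint_hash"]] += balances.get(tx["sender"], 0.0)
        total = sum(balances.values())
        for block_hash, weight in votes.items():
            if weight / total >= quorum:
                return block_hash
        return None

    balances = {"a": 50.0, "b": 30.0, "c": 20.0}
    txs = [
        {"sender": "a", "checkpoint_hash": "0xabc"},
        {"sender": "b", "checkpoint_hash": "0xabc"},
        {"sender": "c", "checkpoint_hash": "0xdef"},
    ]
    print(confirmed_checkpoint(txs, balances))   # "0xabc", backed by 80% of stake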

If a regular user can’t do that, they have to trust supernodes. In that case, pulling off a destructive Sybil attack on the system takes much less than 51%.

The number of possible attacks will never be infinite, but it is inversely proportional to the number of supernodes: the fewer supernodes there are, the easier it is to compromise enough of them.

This scenario of compromised supernodes aiming to harm the system may look far-fetched, but there are real examples to look at:

  • The first computer viruses weren’t making any profit but were disrupting all sorts of systems. Simply because some people wanted to test their skills in creating something “living.”
  • At the peak of the ICO hype, about 10% of the offerings were getting hacked. Except for a few cases of exploiting bugs and vulnerabilities of the blockchain, most attacks involved hacking the administrator’s computer, phishing, social engineering, etc.

It is quite possible to attack sophisticated high-tech systems through primitive ones.

Here lies the main problem with building decentralized networks: we see a node as something abstract, while it is still a PC sitting somewhere, or even a virtual server running the software, which is how between 25% and 60% of all Ethereum nodes operate. We talk about buying out, reserving, or leasing a share of network capacity while forgetting about the limitations and discrepancies at the physical level.

Hence the next conclusion. In PoW systems, hardware-based scaling is built in algorithmically in the form of mining difficulty. Although new ASICs and GPUs don’t directly influence the number of nodes, nobody would create a large pool without setting up full nodes, so the correlation is direct. In DPoS and LPoS systems, this ratio relies on trust, which is both a vulnerable aspect of such networks and a debated topic.
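
That feedback loop is the difficulty adjustment: roughly every 2016 blocks, Bitcoin rescales the difficulty so that block production stays near one block per ten minutes no matter how much hardware joins. A simplified version of the rule (the real implementation works on the compact target and has a well-known off-by-one in the window):

    def retarget_difficulty(old_difficulty, actual_timespan_s,
                            expected_timespan_s=2016 * 600):
        """Bitcoin-style retarget: scale difficulty so 2016 blocks take ~2 weeks,
        clamping each adjustment to a factor of 4 in either direction."""
        actual = min(max(actual_timespan_s, expected_timespan_s // 4),
                     expected_timespan_s * 4)
        return old_difficulty * expected_timespan_s / actual

    # If the last 2016 blocks took only one week, difficulty doubles.
    print(retarget_difficulty(1.0, 7 * 24 * 3600))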

This leads to the notion that the next step of decentralization is to make supernodes and regular users as equal as possible. Have a smartphone? You can be a miner, a node, a validator, an oracle, etc.

Meanwhile, I am still confident that such a transition will only work if the paradigm changes from subjective to transactional reputation: you did something useful, you get a rating. Passports, biometric verification, and similar things look more like digital slavery than a step toward greater decentralization.
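
A toy example of what transactional reputation could mean in practice; the action names and weights are entirely hypothetical, and the point is only that the score derives from verifiable useful actions rather than from identity documents.

    # Hypothetical action weights, invented for illustration.
    ACTION_WEIGHTS = {
        "block_validated": 1.0,
        "storage_proof_served": 0.5,
        "dispute_resolved": 2.0,
    }

    def transactional_reputation(history):
        """Sum of weights of verifiable useful actions; identity plays no role."""
        return sum(ACTION_WEIGHTS.get(action, 0.0) for action in history)

    print(transactional_reputation(
        ["block_validated", "block_validated", "dispute_resolved"]))   # 4.0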

To model the future of DSN, we have to understand that PoW and PoS have architectural limits on the extent to which they can be decentralized.

While things are progressing at the transport, session, presentation, and application layers of the OSI model, there are plenty of open questions at the physical, data link, and network layers. This means the main problem remains: there is a kill switch that lets governments, internet providers, and others technically cut our access to the underlying node, pool, and so on. Wi-Fi, 5G, and mesh networks offer only conceptual solutions so far, with few actual implementations.
