
Why Standard Decentralization Metrics Are Needed for Measuring Blockchain Decentralization

Exploring existing decentralization metrics for blockchains and why a standard overarching metric is needed.
Louise Velayo
October 4, 2023

I started my journey with Aviate Labs almost a year ago, managing infrastructure for one of the node providers for the Internet Computer blockchain network. It quickly became apparent that my role existed due to the need for the network to decentralize the infrastructure layer of the blockchain. This meant that node providers would have to maintain, update, and perform other interventions on their node machines themselves.

As I gained more experience with and knowledge of blockchains, I learned that infrastructure is not the only subsystem that needs to be decentralized for the blockchain as a whole to be considered decentralized.

It became clear that decentralization is a fundamental characteristic of blockchains, yet it seems to lack both a standard definition and a standard method of measurement. This captured my interest and motivated me to write an academic paper as my master's thesis, which is ongoing.

This article summarizes the insights and findings of my initial research.

Defining Decentralization 

Let’s see what Google shows when I type “define decentralization”. The answer: the transfer of control of an activity or organization to several local offices or authorities rather than one single one. Essentially, this describes how power is distributed instead of concentrated. Though this explanation is correct, I knew it did not capture the full picture in a blockchain context. I decided to dig deeper.

Seeing as I work closely with the Internet Computer network, I was curious to know how it defines decentralization. The definition reads as follows: “By a decentralized public network, we mean a network of computers that is publicly accessible, geographically distributed, and not under the control of a small number of individuals or organizations.” This definition highlights infrastructure decentralization. Thus, it becomes clear that decentralizing the infrastructure behind a blockchain network is critical to achieving decentralization. But is that the only subsystem of the network where decentralization needs to be achieved?

Seeing as the definition comes directly from the Internet Computer Whitepaper, I would say its scope is limited to the Internet Computer network, and it is thus too specific to apply to other blockchain networks. Consequently, I continued my search for a higher-level definition.

Upon the recommendation of a colleague, I read a report by Trail of Bits, a company that provides technical security assessments and advisory services to some of the world's most targeted organizations.

Their report investigates the extent to which blockchains are decentralized, focusing on the two most popular blockchains, Bitcoin and Ethereum. As they are the progenitors of proof-of-work and proof-of-stake blockchains respectively, it can be inferred that conclusions about the state of their decentralization apply to other blockchains following the same consensus mechanisms, factoring in certain nuances of course. Because of this, I believe their definition of decentralization can be a starting point for a high-level definition that can be applied to other blockchains.

Instead of providing a one-paragraph definition, Trail of Bits identifies six sources of centralization in blockchains. As stated in their report, they are as follows: 

  • Authoritative centrality: What is the minimum number of entities necessary to disrupt the system? This number is called the Nakamoto coefficient, and the closer this value is to one, the more centralized the system. This is also often referred to as “Governance Centrality”.
  • Consensus centrality: Similar to authoritative centrality, to what extent is the source of consensus (e.g., proof-of-work [PoW]) centralized? Does a single entity (like a mining pool) control an undue amount of the network’s hashing power?
  • Motivational centrality: How are participants disincentivized from acting maliciously (e.g., posting malformed or incorrect data)? To what extent are these incentives centrally controlled? How, if at all, can the rights of a malicious participant be revoked?
  • Topological centrality: How resistant is the consensus network to disruption? Is there a subset of nodes that form a vital bridge in the network, without which the network would become bifurcated?
  • Network centrality: Are the nodes sufficiently geographically dispersed such that they are uniformly distributed across the internet? What would happen if a malicious internet service provider (ISP) or nation-state decided to block or filter all DLT traffic?
  • Software centrality: To what extent is the safety of the DLT dependent on the security of the software on which it runs? Any bug in the software (either inadvertent or intentional) could invalidate the invariants of the DLT, e.g., breaking immutability. If there is ambiguity in the DLT’s specification, two independently developed software clients might disagree, causing a fork in the blockchain. An upstream vulnerability in a dependency shared by the two clients can similarly affect their operation.

These sources of centrality shed light on the different ways concentrations of power can arise. However, they raised a couple of questions for me. Are these all the sources of centrality? Do they contribute equally to the decentralization of the blockchain? Does a blockchain need to achieve decentralization in all sources to be considered decentralized?

It seems that to answer these questions, we need to quantify decentralization in the context of each of the sources mentioned above. In other words…

We Need Standard Metrics!

“One accurate measurement is worth a thousand expert opinions.” - Grace Hopper

Currently, most of us in this industry are working under the assumption that blockchain projects are decentralized because, well, blockchains are decentralized. This is largely due to the lack of an overarching analytical definition.

The seemingly naive use of the term can create an image of power distribution in a blockchain that looks more ideal than it is. In practice, it gives participants in the system the benefits of organizational law without its accompanying obligations.

Angela Walch coined the term “Veil of Decentralization” to describe this issue. 

This “veil” prevents many from seeing the actions of key actors within the system, effectively shielding them from liability. At this point, you realize that concentrations of power may still exist in a blockchain, but our romanticized view of decentralization prevents us from seeing behind the smoke and mirrors.

An example of “the veil” in action concerns the software developers who write the code that ends up on the blockchain. Each time they commit code, it can be seen as an exercise of their power. Seeing as only a few developers tend to hold the commit keys that make this possible, we can say that there is a high concentration of power in the developer component of the blockchain. In her paper, Walch provides real examples of actions within the Bitcoin and Ethereum networks that demonstrate this concentration of power and how it is shielded by the veil of decentralization.

Having concrete numbers to measure decentralization can be a step toward lifting this veil. It will enable the community to objectively evaluate different blockchains and blockchain projects. Furthermore, it will increase the transparency of the industry and allow everyone, from blockchain enthusiasts to governments, to gain a better understanding of the overall state of the industry.

Lastly, having metrics will help to answer the questions I posed at the end of the previous section. Quantifying as much as possible can be a starting point in determining what sources of centrality have a significant impact on a blockchain’s decentralization.

There have been some attempts at establishing metrics to define decentralization. Below, we explore the approaches of ConsenSys Research and Balaji S. Srinivasan.

What Metrics are Available Now?

Similar to Trail of Bits, ConsenSys and Srinivasan break down a blockchain into what they call “subsystems of decentralization”. ConsenSys identifies factors of each subsystem that can be measured, whereas Srinivasan proposes a method of quantifying decentralization, which he then applies to the different subsystems of a blockchain. I will first share my findings on ConsenSys’s approach and then on Srinivasan’s.

ConsenSys: Subsystems of Decentralization

[Table from ConsenSys Research: subsystems of decentralization and the metrics proposed for each.]

The table above is from ConsenSys Research, one of the leading software companies in the Ethereum ecosystem. What strikes me initially is that a lot of data is missing. But what is more important to note is that the metrics listed above are incomplete. This is acknowledged in the footnotes of their research, where they indicate that the table remains incomplete because the importance of the listed metrics is still in question.

Another reason the table may be incomplete is that a measurement technique may simply not exist yet for a specific metric on a specific blockchain.

Nevertheless, I wanted to include this table in this article as it highlights a bottom-up approach that could help standardize the definition of decentralization. By starting with the individual and quantifiable metrics, each subsystem’s level of decentralization can be quantified. However, reaching this stage still requires an evaluation of the significance of each metric being measured.

Srinivasan: Quantifying Decentralization

In his attempt at quantifying decentralization, Srinivasan has, in my opinion, proposed metrics that are intuitive to use and easy to apply to other blockchains. However, his proposed metrics have limitations, which I will discuss later.

[Figure from Matthew John: the Lorenz curve, shown in red. As the cumulative distribution diverges from the straight line of equality, the Gini coefficient (G) increases from 0 to 1.]

Srinivasan proposes two metrics/coefficients based on the Lorenz curve and the Gini coefficient. A full explanation of these is out of the scope of this article, but you can find one here.

You may be wondering, isn’t this used by economists to measure wealth or income inequality within a population? That’s right. But thinking about it another way, too much inequality can be analogous to too much centralization.

[Figure from Balaji S. Srinivasan: the six subsystems, and their corresponding metrics, which Srinivasan used to measure the decentralization of Bitcoin and Ethereum.]

As I mentioned earlier, Srinivasan also identifies different subsystems within a blockchain in his analysis. So, to apply the Lorenz curve and the Gini coefficient, you can think of each subsystem as “the economy” and the metric of that subsystem as “the wealth” or “income” being measured.
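To make the analogy concrete, here is a minimal sketch in Python of how one could compute the Gini coefficient for a single subsystem. The hash-power distribution across mining pools is a hypothetical example; any per-entity measure (stake, node count, trading volume) could be substituted.

```python
def gini(shares: list[float]) -> float:
    """Gini coefficient of a per-entity distribution (e.g., hash power per
    mining pool). Returns 0 for perfect equality, approaching 1 for
    maximal concentration."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    # Standard formula over a sorted sample: G = sum_i((2i - n - 1) * x_i) / (n * sum(x))
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

# Hypothetical mining subsystem: hash power share per mining pool (made-up numbers).
mining = [30, 20, 15, 10, 10, 5, 5, 3, 1, 1]
print(f"Gini of mining subsystem: {gini(mining):.2f}")  # ~0.47
```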

Maximum Gini Coefficient

To obtain this value, one must measure the Gini coefficient for all the subsystems identified. By default, the values range from zero to one. The subsystem with the highest coefficient is the most centralized subsystem of the blockchain.
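Continuing the sketch above, the Maximum Gini Coefficient is then just the largest coefficient across the subsystems. The subsystem names and distributions below are made up purely for illustration:

```python
# Hypothetical per-entity distributions for a few subsystems (made-up numbers).
subsystems = {
    "mining": [30, 20, 15, 10, 10, 5, 5, 3, 1, 1],  # hash power per pool
    "clients": [70, 25, 5],                          # node share per client codebase
    "exchanges": [25, 20, 15, 10, 10, 8, 7, 5],      # trading volume per exchange
}

ginis = {name: gini(dist) for name, dist in subsystems.items()}
most_centralized = max(ginis, key=ginis.get)
print(ginis)
print(f"Most centralized subsystem by Gini: {most_centralized}")  # mining, ~0.47
```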

Choosing different subsystems will change the Maximum Gini Coefficient. Thus, the relevance of this metric could be questioned. Does centralization in this specific subsystem mean that the blockchain is centralized? The answer to this can vary between blockchains.

“If you cannot measure it, you cannot improve it” - Lord Kelvin

Moreover, this metric does not factor in how many entities it requires to compromise the system. Consider the following example explained by Srinivasan: “Specifically, for a given blockchain suppose you have a subsystem of exchanges with 1000 actors with a Gini coefficient of 0.8, and another subsystem of 10 miners with a Gini coefficient of 0.7. It may turn out that compromising only 3 miners rather than 57 exchanges may be sufficient to compromise this system, which would mean the maximum Gini coefficient would have [falsely] pointed to exchanges rather than miners as the decentralization bottleneck.”

Though this is not the most robust metric, it does serve as an indicator of where the blockchain is most centralized, which already provides more insight into the state of decentralization in certain blockchains.

Minimum Nakamoto Coefficient

This metric improves upon the Maximum Gini Coefficient. For a given subsystem, it is defined as the minimum number of entities needed to control 51% of that subsystem's total capacity. If this is computed for all the subsystems in a blockchain, the minimum across those subsystems is the Minimum Nakamoto Coefficient. In other words, it is the minimum number of entities that, when compromised, can compromise the system as a whole. Intuitively, the higher this number, the more decentralized the blockchain.
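Again as a minimal sketch, reusing the hypothetical subsystems dict from the Maximum Gini example above, the Nakamoto coefficient of a subsystem can be computed by greedily adding the largest entities until they jointly control 51% of capacity:

```python
def nakamoto(shares: list[float], threshold: float = 0.51) -> int:
    """Minimum number of entities whose combined share reaches `threshold`
    (51% by default) of the subsystem's total capacity."""
    xs = sorted(shares, reverse=True)
    target = threshold * sum(xs)
    cumulative = 0.0
    for count, x in enumerate(xs, start=1):
        cumulative += x
        if cumulative >= target:
            return count
    return len(xs)  # degenerate case: threshold never reached

# Reusing the hypothetical `subsystems` dict from the previous sketch:
coefficients = {name: nakamoto(dist) for name, dist in subsystems.items()}
print(coefficients)  # {'mining': 3, 'clients': 1, 'exchanges': 3}
print(f"Minimum Nakamoto coefficient: {min(coefficients.values())}")  # 1
```

Notice that, with these made-up numbers, the Gini coefficient flags mining as the bottleneck while the Nakamoto coefficient flags the client subsystem, echoing Srinivasan's warning above that the two metrics can point in different directions.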

Similar to the Maximum Gini Coefficient, this value depends on how the subsystems are defined, and the importance given to each of the subsystems.

For example, if the Minimum Nakamoto Coefficient was determined by the number of codebases, some may argue that this has a negligible effect on the decentralization of the blockchain as a whole.

So What’s the Problem?

It seems that there are metrics available. Why don’t we have a standard metric then? 

It was mentioned briefly throughout the analysis of the different metrics, so you may already know the answer.

Essentially, to have a standard metric, we need a standard definition of the subsystems across all blockchains. This can prove difficult, as not all blockchains are made equal: the effect of each subsystem on decentralization varies from blockchain to blockchain.

Furthermore, I believe that blockchain-specific communities should establish which sources/subsystems of (de)centralization need to be sufficiently decentralized in order to confidently claim that the blockchain in question is decentralized. By doing so, a standard list of subsystems can be identified. Without one, metrics like the Minimum Nakamoto Coefficient will be meaningless, as they would always change depending on how the subsystems are defined.

For example, a case can be made to consider the spokesperson of a blockchain to be a source of centralization seeing as their opinions can heavily influence user behavior. In Ethereum’s case, that would reduce its Minimum Nakamoto Coefficient to one as there is one main spokesperson, Vitalik Buterin. 

Additionally, the metrics used to measure the decentralization of each subsystem still need to be discussed. Comparing ConsenSys’s and Srinivasan’s approaches, you realize that ConsenSys identifies more metrics per subsystem. Are some of these extra metrics redundant? Or does Srinivasan’s approach fail to capture the full picture?

Finally, the method of gathering the data for each metric still needs to be determined. A standard and objective approach needs to be agreed upon. For some metrics, this is easier said than done. For example, how would one quantify the governance of the network layer of a blockchain? Count the number of ISPs? Would such a number even be meaningful if the geographical locations of the nodes are not considered, or if we don't understand what permissions the ISPs have over the data passing through them?

Closing Thoughts

“Measurement is fabulous. Unless you’re busy measuring what’s easy to measure as opposed to what’s important.” - Seth Godin

The cat is out of the bag. Blockchain is here to stay, and its use cases, namely cryptocurrencies and NFTs, are hard to ignore. In fact, in recent years we have seen multiple hearings in the US about crypto, entire countries adopting crypto as legal tender, and many individuals making a living from NFT games.

It seems to me that blockchain has advanced at a rate that is difficult for regulation to keep up with. Nevertheless, regulation is on the way, and for it to be effective, analytical metrics should be the backbone of any discourse that concerns decentralization.

Moving forward, if decisions are not made using such metrics, but instead on the preliminary assumption that “blockchains are decentralized”, they will be susceptible to the “streetlight effect”. That is, the decisions will be based on matters that have been illuminated, and not the ones remaining in the dark, which could have equal, if not more, importance.

Furthermore, in the pursuit of metrics, we must be careful not to let easy-to-calculate quantitative metrics trump the more relevant but difficult-to-measure ones. This is known as Gresham’s Law of Measurement. Nevertheless, this shouldn’t stop the pursuit of those easy-to-calculate metrics, as (1) they can already be used to increase transparency and (2) they can function as objective starting points for the difficult-to-measure metrics.

I believe the bottom line is that if we want standard metrics, the blockchain community needs to agree upon the subsystems that contribute to a blockchain’s decentralization. From there, we can confidently search for and define the metrics that characterize these subsystems and indicate how decentralized each is. If this is done for all subsystems, then the decentralization of the blockchain as a whole can be evaluated as well.

Establishing a standard metric is one of the factors I believe will drive increased adoption of blockchain technology. Companies would be more willing to create blockchain-based projects, consumers would feel safer owning crypto-assets, and legal or regulatory decisions would be better informed. Because of this, I believe that the pursuit of a standard decentralization metric should be a high priority for the blockchain industry.

Let’s go after it!