
15 posts tagged with "blogs"


· 4 min read
NEST Protocol

Unlike products in the real world, the value of a public chain is determined by its consensus cost, not by its applications. Applications only affect the risk attached to that value, which is a rather disruptive view, so let us explore it a little more closely.

Imagine we have a body of code, like the Bitcoin code, and we ask what it does; that is product thinking. Product thinking matches our intuition: if something has no use value, it does not deserve our attention. But product thinking has a downside, which is that the products we see are provided by companies or individuals from a strongly centralized point of view. It makes us feel that when we use a product, we are only interacting with the product itself and the service provider behind it, and that the product's use value resides mainly in the product or that provider. Sometimes good product development alone may determine how many users there are.

However, Bitcoin is different. When we use Bitcoin, it is not the Bitcoin code or some organization that provides the service, but the millions of miners behind the code, which is very different from a centralized product. First, the code itself does not carry much value. (Note: if the code held the value, then since it is fully open source, anyone could copy it and provide the service for free; would everyone switch to the copy?) Second, a single person running this code so that your transactions go through is also not worth much. Only when the miners' computing power is large enough, and the miners are dispersed enough, does the transfer acquire "value".

This is similar to the difference between a weather-forecast app and WeChat. As long as the weather-forecast company keeps operating, anyone who opens the app gets the service they want; WeChat is not like that. If the friends around you do not use it, then even if Tencent runs a million servers, it is of no value to you. This interdependence among users is somewhat similar to the dependence on consensus.

So, when we "consume" Bitcoin, we are not "consuming" the Bitcoin code. Because the code can be copied by anyone, it has very little value, but rather we are "consuming" the consensus provided by millions of miners. This consensus includes the unity, authenticity, and immutability of the data on the chain, as well as the recognition of the intrinsic value of bitcoin and the expectations of future use scenarios, which is the difference between product thinking and consensus thinking. Consensus does not necessarily point to a definite application, and a definite application does not determine the value of the blockchain.

What a blockchain offers is not predetermined by its developer at the outset. While the vast majority of blockchain systems start with a functional definition, that functionality eventually moves toward abstraction: the application may go beyond the designer's imagination and match the original intent only at the most abstract level. Bitcoin, for example, was intended for payments; it now looks ill-suited for payment and has moved toward value storage, yet both are consistent at the level of transferring value. Ethereum is another example: the applications its whitepaper promoted did not flourish, while various forms of decentralized financing and decentralized finance developed, but all of these applications still use smart contracts as the core feature.

Expecting a specific application to drive the development of a decentralized system gets things backwards. If the consensus itself has not been built up, that is, if the consensus cost it embodies has not risen, it is hard to develop applications on top of a bare software function; and even with heavy subsidies, things will eventually revert to their original nature. Ultimately it depends on consensus building; "rigidly" using the back end to pull the front end along is very unlikely to succeed.

Let us focus on consensus building, which each blockchain system defines for itself. Some build consensus on on-chain transaction data, some on smart contracts, some on market prices, and so on. Each consensus has its own intrinsic value, and each comes at a cost. Moreover, consensus is not like ordinary products, which remove friction from the world and let water flow downhill; consensus is the formation of new value, water flowing uphill. So the higher the cost of consensus, the greater the value, and this is what makes decentralized blockchain systems special.

· 4 min read
NEST Protocol

DeFi refers to decentralized financial protocols implemented with smart contracts, covering asset trading, lending, insurance, various derivatives, and so on. With the exception of credit services, most real-world financial services can be implemented through DeFi protocols. These protocols are decentralized and automated, with no third-party institution managing or maintaining them, so risk control of the contracts becomes an industry-wide challenge.

DeFi has both financial and technological attributes and contains the following risks:

  • Code risk. This includes the risk of Ethereum's underlying code, of smart contract code, of wallet code, and so on. The famous DAO incident, the Uniswap vulnerability attack, and the various wallet-theft incidents were all caused by code risk.

  • Arbitrageur risk. This mainly arises when the business design leaves loopholes that can be "legitimately" attacked or manipulated. For example, FOMO3D suffered a block-stuffing attack, and bZx wrongly used the Uniswap oracle, which is not attack-resistant, and had the price legitimately suppressed so that assets could be drained. People who exploit such loopholes are called arbitrageurs; arbitrageurs have both a harmful and a beneficial side for a DeFi project.

  • Market volatility risk. A DeFi design may lack certain counteracting variables, so that under extreme market conditions positions are wiped out. MakerDAO, for example, has experienced the risk of extreme market volatility.

  • Oracle risk. Oracles provide global variables and are the foundation of most DeFi. If an oracle is attacked or stops operating, the DeFi built downstream of it will collapse. We believe oracles will become the most important infrastructure for future DeFi, and oracles carrying any centralization risk will eventually die out.

  • Technology risk. This refers to the risk borne by ordinary users who are unfamiliar with smart contracts and blockchain and who rely on a "convenient" interaction tool developed by a centralized team.

Any DeFi project should be designed with the risks of code vulnerabilities, hackers, market volatility, arbitrageurs, and so on in mind. A complete process is not just about good tips in the documentation but about risk-management tools. Most of these tools work in a decentralized way, and a few work through community governance (mainly on-chain governance). Here we propose a DeFi risk-management framework divided into three stages: ex ante, in-process, and ex post.

  • Ex ante: The main focus is formal verification of the contract code. This includes working out the boundaries of the methods, resources, and even instructions the contract uses, and the effect of how those methods, instructions, and resources relate to each other when combined. This is not traditional software-testing thinking; it is a concept close to mathematical proof. Good contract development should be built from combinations of methods that have already been verified.

  • In-process: The focus here is on shutdown (downtime) design and exception-trigger design, i.e. the contract should be able to identify and intervene in attacks, with both automatic shutdown and governance-driven shutdown. An exception trigger is a control over the contract's operation when something exceeds expectations; triggering is generally automatic, and some risk-management variables are corrected through it. See the beta factor and the anti-block-stuffing settings in the NEST oracle system, one of the industry's pioneering practices in shutdown and exception triggering. A minimal sketch of such an exception trigger appears after this list.

  • Ex post: Ex post risk management has several components. First, a vulnerability in the code needs to be fixed, typically through on-chain governance, i.e. DAO governance. Second, the governance asset itself may be attacked, at which point a contract fork is required; this is a blind spot the industry has overlooked. Next, losses can be reduced by insuring the contract against possible risks through an insurance mechanism. Finally, the community can work with various institutions to trace losses through on-chain data. For on-chain governance and contract forking, see the design of NEST, which is an innovation here.
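To make the in-process stage concrete, here is a minimal sketch, in Python, of an exception trigger that pauses further operation when an input moves beyond an expected bound. The class name PriceGuard, the 20% threshold, and the pause/resume flow are illustrative assumptions for this post, not NEST's actual beta factor or parameters.

```python
class ContractPausedError(Exception):
    """Raised when an operation is attempted while the contract is paused."""


class PriceGuard:
    """Toy exception trigger: pause updates when a price jump exceeds a threshold."""

    def __init__(self, initial_price: float, max_deviation: float = 0.20):
        self.last_price = initial_price
        self.max_deviation = max_deviation   # e.g. a 20% jump between updates
        self.paused = False                  # "shutdown" flag

    def submit_price(self, new_price: float) -> None:
        if self.paused:
            raise ContractPausedError("contract is paused, awaiting governance action")
        deviation = abs(new_price - self.last_price) / self.last_price
        if deviation > self.max_deviation:
            # Exception trigger: stop accepting updates instead of passing a
            # possibly manipulated price on to downstream contracts.
            self.paused = True
            raise ContractPausedError(f"deviation {deviation:.0%} exceeds limit")
        self.last_price = new_price

    def resume(self) -> None:
        """Governance-driven restart after the anomaly has been reviewed."""
        self.paused = False


guard = PriceGuard(initial_price=100.0)
guard.submit_price(105.0)      # accepted: a 5% move
try:
    guard.submit_price(150.0)  # rejected: a ~43% move trips the breaker
except ContractPausedError as exc:
    print("paused:", exc)
```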

The industry's current understanding of security is too immature and too traditional. If we do not change our thinking and introduce ideas such as boundaries, completeness, consistency, formal verification, shutdown, exception triggering, governance, and forking, we cannot adapt to future development.

· 5 min read
NEST Protocol

Since the emergence of BTC, more and more people have been using "wallets". A blockchain wallet is essentially a private-key management tool. Once the developer has built the wallet, it is published on the internet and needs no updates; users download it and have nothing further to do with the developer, and neither the private key nor any wallet information is available to the developer. But since the advent of ETH, the chain allows complex logic, so DAPPs have come into view. Interaction is a little more complex than transferring money: the wallet is no longer developed once and never updated, but gradually becomes a DAPP platform. As a result, the interaction between users and developers has suddenly increased.

However, there is a legal issue to explore here. The problem is a bit of a paradox; we will give a scenario later to deal with this ethical dilemma.

Case

Let us describe the simplest case first: if a wallet completely abandons interaction and is fully open source, then logically the developer can declare that it is not responsible for any risk of that wallet, and in fact the developer cannot be held responsible either. As an example: a bug in the wallet that transfers assets meant for address A to address B could be a disaster for many people. But there is also a situation where anyone can use the bug to fabricate a "loss" (i.e. address B is also their own address) and submit evidence of the bug to claim damages from the developer. This raises the problem of how to determine whether the "loss" is genuine.

In a traditional centralized structure, such a loss can be traced through the back-end server. In blockchain, complete anonymity and the uncontrolled "server" (the blockchain itself) make it impossible to determine whether the loss is genuine at all! This leads to a dilemma over who bears the burden of proof. Worse still, if left unchecked, a developer may intentionally publish a free "phishing" wallet that quietly transfers users' funds to an unknown address in some obscure way; how can that be proven and accounted for?

The case above is the simplest one. Back in reality, the problem is even more serious when the wallet interacts frequently with the user and when there is a server involved.

Paradox

First of all, a server exists to provide information services, so there must be a legal subject behind it; otherwise, who pays for the server? Once that subject exists, is its relationship with users a legal relationship, and if so, what kind? Second, similar to the simple example above and more directly: someone reverse-engineers the wallet, builds the "same" wallet, and then "proves" that it caused damage (for example by recording a video of the use that caused the damage). Can they claim compensation from the developer or from the entity providing the server? That is the first step, and the question there is whether such evidence is credible and who is to verify it. Is it too much to ask a non-technical person to prove that the bug really is in the wallet (and not in the reverse-engineered "wallet") and that the loss is real (and not self-dealing)? The second step is: if the problem really does lie with the wallet, is such a process fair to the user, or will it let all kinds of "phishing" programs run wild? And if the whole process instead requires the developer and the server provider to prove their innocence, does that not already assume that the developer is liable? Then the scale of the loss is not defined by the developer but staged by the user, and who can bear such exposure?

The above is the paradox: users need security, and developers cannot bear the "losses" caused by insecurity; a link is missing.

In the traditional internet world, this risk is borne by the company or project owner, because they control the whole process of the product or service. But in blockchain, these so-called developers cannot fully control the process of providing the interaction: the most important part happens on the chain, and they really only provide a "front-end tool".

How to end the wallet paradox?

There are two likely directions for solving this problem in the future.

The first is for users to entrust everything to developers or project owners, as with centralized exchanges: I give you my coins and give up private-key management, so the responsibility rests entirely with you. This direction closes the logical loop.

The second is for developers to open source the code and define the program precisely (proving that code and program are consistent), upload it to a neutral third-party platform, and leave the evaluation to the market. Those who want to use it accept the disclaimer, and the developer has nothing further to do with it. This also closes the logical loop, but the holes in the code are left for users in the market to absorb. The harder question is how a DAPP that also provides information services, i.e. a product with a server, should allocate liability for the information-service part. That requires legal follow-up: it is not as severe as the first problem, nor as fully detached as the second problem, and it is the most complicated situation in the whole wallet issue. It will be a long process of infrastructure building that requires defining the boundary of responsibility for applications on a public chain, which involves a whole package of legal work.

· 4 min read
NEST Protocol

Blockchain technology builds a decentralized world. Imagine a complete network that depends on no individual and on no particular time, on which contracts, AI, and other agents interact frequently. Value flows between them, yet not only is there no middleman in the process, the process does not even need to be operated. Information and data automatically get to where they are supposed to go, and all it takes is for you to submit a line of code. What people agree on will still happen 500 years from now. What an amazing world that is. If this does not represent progress, what does?

However, the decentralized world is not easy to build or to understand. And there is a strict rule, which we propose as the first principle of decentralization: in the whole process of value exchange, as long as one link is centralized, the whole process is centralized.

The idea of centralization always interferes with your perception of decentralization, so we need criteria to judge whether something is decentralized, just as we judge the purity of gold. In economic terms, the difference between centralization and decentralization lies in whether some individual or organizational agency risk is included. Risk means that if those individuals or organizations do not act according to the principal's wishes, the principal may suffer losses. In a decentralized process there is only trust in the mechanism and the code, and no dependence on any specific individual or organization. This feature is what inspired the first principle of decentralization stated above: in the whole process of value exchange, as long as one link is centralized, the whole process is centralized.

This seemingly simple conclusion is actually very powerful and can directly test every self-proclaimed decentralized project on the market: are they as decentralized as they claim? Try to find the "centralization" in any one of them. Even if an operator only has the power to upload and download, not to modify (as in some off-chain aggregation with zero-knowledge proofs), it is still centralized, because that operator can refuse to perform.

No matter how advanced the smart-contract development seems, or how much the words "blocks" and "chains" are used, as long as there is centralized control, it is centralized; do not simply advertise it as decentralized. Examples include MakerDAO (which directly controls an important parameter of the contract: the price variable), bZx, which claims it can freeze user assets, the various private chains, and DPOS "public chains" that keep most of the nodes in their own hands while claiming to be distributed. These can only be described as blockchain or smart-contract applications, not decentralized systems, because their creators can exert human control over the assets or the important variables on the chain. Of course, to make this control less obvious, they usually claim to use multi-signature schemes or decentralized node arrangements. If those arrangements cannot be open-sourced, or cannot be joined without vetting, they are not self-certifying, and so they are classified as centralized rather than weakly decentralized.

The first principle of decentralization is so important and so simple, yet no one in the industry has seriously refined it or appealed to it as a criterion of judgment. This is mainly because more and more project teams are carrying out various centralized operations, and some so-called realists have adopted a compromising attitude, caring neither about how decentralized a project really is nor about classifying and standardizing projects, which is a huge loss and drain on the industry.

In fact, among all the blockchain innovations, the truly disruptive idea is the insistence on decentralization, as in NEST's oracle. It is not just Satoshi Nakamoto's whim; it is the precondition for the survival of a whole new kind of technical network, one that gives a pure technical creation a self-evolving "life". Exploration in this direction has only just begun, and we slide easily into eclecticism, which is a lazy attitude in the face of difficulty and a speculative mindset amid industry chaos that needs to be guarded against.

· 5 min read
NEST Protocol

Introduction

In the world of blockchain, the cost of consensus determines the value of the blockchain; under the current BTC model, that cost is computing power. Many people focus instead on applications, thinking that as long as there are enough applications on a chain its value is maximized. That is typical thinking about operating a product, and it does not apply to blockchain. If a public chain's consensus cost is high enough, i.e. its computing power is large enough, it is indeed more valuable than a public chain with low computing power. Many will question this, asking how computing power maintained with the money of centralized institutions can represent value. This objection confuses value with risk. Computing power determines the value; the composition of the computing power and the applications on the chain determine the risk.

The cost of consensus determines the value of the blockchain

The consensus mechanism is the most distinctive thing about blockchain. Unlike real-world consensus, blockchain uses an algorithmic process to agree on the transaction data on the chain, and this in turn creates value. The value in our lives is also based on consensus, but apart from prices on exchanges, most consensus has no rigorous process or binding force, so it is hard to quantify. The interesting thing about blockchain is that its consensus can be measured: BTC uses a computing-power metric, POS a test of coin holdings, DPOS voting, and so on. Do these differences in consensus represent differences in value and risk?

Usage scenario and the consensus risk

We believe it does hold, i.e. the cost of consensus determines the value of the blockchain. If the public chain or its token has usage scenarios, those scenarios also indirectly reduce consensus risk: on the one hand, token consumption is settled; on the other, the secondary market for the token has stable expectations, which strengthens purchasing power and gives certainty to the providers of computing power. Both aspects regulate a public chain's consensus risk, but the consensus cost is always what determines the value.

Consensus mechanism and consensus risk

Under the design of BTC, the cost of a 51% attack becomes enormous once the computing power is large enough. Even if a computing-power attack succeeds, miners can fork away from the attacked block and re-establish the original consensus; it makes no sense for an attacker to end up with a blockchain system that has no consensus. So once a public chain is maintained by decentralized consensus it becomes very robust, and this consensus cost, which seems irrelevant to actual production, is the root of the value, while production activity merely lowers the risk of the consensus.

To exaggerate: even if BTC could not be used anywhere, or no one wanted to use it, it would still hold its current value as long as its consensus cost remained in the tens of billions per year.

What about ETH

ETH 2.0 shifted the consensus mechanism from POW to POS, and this has a great impact on ETH, an impact that shows up mainly in how high or low the cost of consensus is assessed to be. The industry is not yet fully aware of this issue and simply analyzes the impact of the consensus change on applications and development. I think this puts the cart before the horse, because how many applications a blockchain system hosts is only a matter of risk, not of value. The merge switches Ethereum's value point, turning the token's value story into one about reduced token supply.

A public chain with applications bears a lower risk in maintaining its value; a public chain without applications may carry hidden risk at the same consensus cost. We need to spend more effort on a comparative analysis of POW and POS, and on how to compare the cost of both within one framework. In fact, as many people know, for a long time the applications on EOS, measured by number of participants, were no worse than those on ETH, yet its market cap was less than 20% of ETH's. Some attribute this to the fact that asset-heavy projects like USDT or DeFi were not issued on EOS, but that is not the reason.

Conclusion

The fundamental reason is that the consensus cost of POS is much lower than that of POW. No matter how many applications run on it, they can only show that the risk is lower for a given consensus cost; they do not represent higher value. However, there is still no good framework for comparing the cost of POS and POW, and more people need to study and improve this.

· 6 min read
NEST Protocol

Introduction

The flash loan is an on-chain innovation. A successful call returns more assets to the contract than were lent for that call, while a failed call means no change in assets at all. Within that one transaction you may use the contract's assets however you like, as long as they are repaid before the transaction ends. The process places no restrictions on how the assets are used and offers great freedom, similar to real-world liquidity financing. One could argue that this structure grants credit within a single transaction; since the transaction completes in one block, we can call this credit "one block of credit". Some may think one block of credit has no advantage in time and thus little value. Indeed, compared with the many credit assets of the real world, this "one block of credit" looks unattractive, but there is one situation in which it becomes very important, and that situation is arbitrage!

How do flash loans work?

Flash loans exploit the transactional (ACID: Atomicity, Consistency, Isolation, Durability) nature of an on-chain transaction, using a callback design for the lending model, as follows.

  1. In addition to specifying the loan amount, the loan entry function must specify a receiving contract address.

  2. The receiving contract must satisfy the interface required by the loan specification.

  3. The loan interface calls the specified callback function of the receiving contract and transfers the funds, while recording the balance before the transfer.

  4. When the receiving contract receives the loan transfer, its callback function is invoked and can use the received assets to execute custom contract logic or call other contract functions.

  5. After the receiving contract's function finishes, execution returns to the point after the callback, where the pre-transfer balance recorded in step 3 is available; a check is made on whether the loan has been repaid, and if not, the call enters an exception state and the whole transaction is reverted.

As explained above, the callback mechanism takes advantage of the transactional nature of the trade to complete a supervised "credit lending process".
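To make the five steps concrete, here is a minimal sketch in Python of the lender side of this callback design. The names (FlashPool, flash_loan, execute), the fee rate, and the exception-as-revert model are illustrative assumptions, not any specific protocol's interface.

```python
class FlashPool:
    """Toy lender: lends its balance within a single 'transaction' and reverts if not repaid."""

    def __init__(self, balance: float, fee_rate: float = 0.0009):
        self.balance = balance
        self.fee_rate = fee_rate

    def flash_loan(self, receiver, amount: float) -> None:
        if amount > self.balance:
            raise RuntimeError("insufficient liquidity")
        balance_before = self.balance          # step 3: record the balance before transfer
        self.balance -= amount                 # transfer the loan out
        receiver.execute(self, amount)         # step 4: callback into the receiving contract
        fee = amount * self.fee_rate
        # step 5: check repayment; if it is missing, revert the whole "transaction"
        if self.balance < balance_before + fee:
            self.balance = balance_before      # stand-in for an on-chain revert
            raise RuntimeError("loan not repaid, transaction reverted")

    def repay(self, amount: float) -> None:
        self.balance += amount


class Borrower:
    """Toy receiving contract (step 2): implements the callback the pool expects."""

    def execute(self, pool: FlashPool, amount: float) -> None:
        # Custom logic would go here (e.g. arbitrage across two markets);
        # whatever happens, principal plus fee must flow back to the pool.
        pool.repay(amount + amount * pool.fee_rate)


pool = FlashPool(balance=1_000_000.0)
pool.flash_loan(Borrower(), 100_000.0)         # step 1: specify amount and receiver
print(pool.balance)                            # the pool ends up with its balance plus the fee
```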

Credit on the blockchain

The anonymity of the blockchain makes the concept of "credit" redundant: all on-chain lending and leverage requires full collateral. Even if someone went the other way and tried to create weak credit through some kind of "identity" asset, that would essentially be another type of collateral (note the word asset: it is still collateral). In the blockchain world you cannot have credit if you cannot guarantee performance, and in general the only way to guarantee performance is collateral. The various decentralized lending and on-chain stablecoin designs all follow this logic.

Risk-free arbitrage opportunities have never completely disappeared in the real world

Unlike the traditional commodity economy, in finance, once a risk-free arbitrage opportunity arises, demand is no longer a downward-sloping curve but a vertical line above the horizontal axis (arbitrage yield), i.e. demand is effectively infinite: any participant who has capital, or can finance it, will try to join the opportunity in any way possible. That is the ideal situation. In the real world, when genuine arbitrage opportunities appear, there is often an important constraint, the "borrowing constraint". The person who finds the opportunity may have no capital and no way to finance it, so the opportunity may be exploited by only a few, or go unexploited for a long time. Most financiers will not tolerate such an inefficient allocation of resources, hence the unrelenting critique of lending restrictions. But because in the real world it is difficult to determine whether an arbitrage opportunity is truly risk-free and whether the lent liquidity will be returned in time, lending restrictions and partial "market inefficiencies" have never completely disappeared.

Flash loans have solved "market inefficiencies"

But the existence of flash loans on the blockchain could make the lending restrictions, and the resulting "market inefficiencies", of the traditional financial world disappear altogether. On the chain, the flash loan solves both of the aforementioned problems: 1) whether the arbitrage opportunity is real, and 2) whether the loan can be repaid in time. Suppose a contract A exposes an arbitrage opportunity, O is any pool in the blockchain world that supports flash loans, and s is a participant who spots the opportunity. s writes an arbitrage contract B for it: borrow asset X from O and interact with contract A; if the arbitrage succeeds, asset X is returned to O, and if it fails, i.e. the assets returned to O would be less than X, contract B is reverted. In such a contract, the authenticity of the arbitrage opportunity is verified by whether the borrowed assets can actually be returned, which satisfies conditions 1 and 2 at once. Note that O and s can be any pool of funds and any individual, meaning the whole process is free of barriers and of selection; it fully reflects the homogeneity of participants and of financing demand, and neatly resolves the problem of inefficient resource allocation.
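Continuing the illustration, here is a minimal, self-contained sketch of what contract B does with the borrowed funds. The two venue prices, the fee rate, and the function name are hypothetical, and a raised exception stands in for an on-chain revert.

```python
def arbitrage_contract_b(borrowed_x: float, buy_price: float, sell_price: float,
                         fee_rate: float = 0.0009) -> float:
    """Toy contract B: borrow X from pool O, trade against contract A, revert unless repaid."""
    tokens = borrowed_x / buy_price            # use the flash-loaned X to buy where it is cheap
    proceeds = tokens * sell_price             # sell where contract A misprices the asset
    owed = borrowed_x * (1 + fee_rate)         # amount that must go back to pool O
    if proceeds < owed:
        # The "opportunity" was not real: repayment fails, so the whole
        # transaction reverts and pool O never loses its assets.
        raise RuntimeError("repayment short, transaction reverted")
    return proceeds - owed                     # profit kept by s; the opportunity is verified


print(arbitrage_contract_b(100_000, buy_price=1.00, sell_price=1.02))  # real gap: profit kept
try:
    arbitrage_contract_b(100_000, buy_price=1.00, sell_price=0.99)      # no gap: reverts
except RuntimeError as exc:
    print(exc)
```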

Conclusion

Of course, the smooth operation of flash loans also rests on certain basic conditions: a large pool of assets that supports a flash-loan interface, and the ability of contracts to call one another. If these two conditions are met, they help build a highly efficient decentralized financial system, one whose efficiency would far exceed traditional finance, not just because code executes automatically, but because the risk of arbitrage and the credit required for arbitrage are resolved so cleanly. It is worth noting, however, that if contract calls themselves create new arbitrage opportunities, that is not the behavior we would expect to drive market efficiency. It is a completely new type of financial attack: when the transactional nature of contracts is exploited and recombined externally to create a new transaction, what is "arbitraged" is not a genuine market inefficiency; rather, market effectiveness itself is undermined. In any case, the creation of this new type of credit, which helps overcome the "borrowing constraint", is an important discovery in the blockchain space and will be of great help to subsequent development.

· 3 min read
NEST Protocol

Introduction

People new to blockchain often struggle to find the right economic language to describe its significance. Some say it is about cost, yet cannot point to an industry whose costs blockchain has improved; others say it is about trust, yet cannot describe how trust is quantified under blockchain. So many people settle on the idea that the value of blockchain lies in immutability. These are all superficial understandings. In the language of economics, what blockchain essentially changes is agency risk in principal-agent relationships.

Agency risk in the real world.

Agency risk is everywhere. You entrust someone to do something and they may not do it; you ship something and it is not looked after; you pay for a service and it goes wrong; these are all agency risks. What is really worrying is when assets entrusted to someone else suffer a huge loss, as when we invest in funds or in P2P lending. Agency risk is the risk that the agent lacks the ability, or does not follow the rules, to fulfill the principal-agent relationship.

The code is the law, and the code is the primary risk.

All of the above seems commonplace and unrelated to blockchain; but in fact this is exactly where blockchain's real meaning lies: through the public ledger and decentralized consensus, agency risk is made algorithmic. The agency risk is embedded in the code, and since the code is open source and deterministic from the start, the risk is completely known. That is the real value of blockchain. Trustlessness means removing agency risk in the ordinary sense: the risk of trusting people and institutions. Reducing cost means avoiding the various risks of adverse selection, which lowers the cost to society. So blockchain is essentially a matter of the principal-agent framework: the change in technology has brought about a revolution in principal-agent relations, and we no longer place the core risk with some third-party institution. The code is the law, and the code is the primary risk.

Conclusion

In a world full of agency risk, we have built a whole system of incentives, laws, and institutions to keep the principal-agent structure working so that the world can function, but the cost is enormous. Think of the corporate scandals and regulatory corruption that are inevitable risks of past models: however well designed our incentives are, they cannot eliminate this fundamental risk. One could say the evolution of human society is a story of constantly changing models for dealing with agency risk. The change brought by blockchain technology, however, dissolves agency risk most radically: there is no longer any need to trust any third party, be it authority, hero, or saint, but only the code. Trust the algorithm! It is a dazzling moment in financial history that deserves deep and thorough excavation.

· 4 min read
NEST Protocol

Introduction

A few examples will illustrate agency risk on the blockchain. Start with the best-known project in the industry, Bitcoin: ideally we rely on no third party either to hold our assets or to transfer them. This is what many describe as the "sanctity of private property" guaranteed by cryptography. It is not without risk; the code can be wrong (it has been tested for ten years, but that is not impossible), yet the code has been open source from the beginning, so this risk is fully disclosed to everyone and cannot be altered (except by forking). More importantly, no individual or institution can influence the transfer of BTC in this process, and we can credibly complete peer-to-peer payments. This major technological innovation has changed the economic model of the past and brought us into a new era of trusting algorithms rather than individuals.

USDT

Now look at another famous blockchain project, USDT, the digital dollar issued on Ethereum. While USDT lives on the blockchain, the system guaranteeing its value is off-chain: a promise by the issuer, Tether, that each USDT equals one dollar. Tether has done a great deal to back this promise, such as escrowed bank accounts and audits, but for the whole process to run smoothly we must trust Tether, the auditors, the custodian banks, and others. That is a big difference from BTC, which trusts the algorithm completely. Although blockchain is used, USDT's value contains a huge agency risk: if Tether and the others fail to redeem, USDT becomes a string of code, not a dollar.

Platform coins

There is a special class of assets in the blockchain industry called platform coins, which reflect the value of a particular exchange platform through fee discounts, transaction pricing, profit rebates, and so on. An asset of this class, such as BNB, serves as the token of the Binance platform and, regardless of whether it uses blockchain technology, essentially contains the same huge agency risk as USDT: the Binance exchange can change, revoke, or even replace the value embodied in the token. What can we do? All we can do is trust the platform.

Stable coin

In addition, there is a special class of assets such as MakerDAO's stablecoin DAI. Essentially it is a stabilized derivative: DAI generated against ETH collateral is an option written on ETH. It has a strict pricing formula, consistent with the ideal design in which there is no agency risk, only algorithm risk. But DAI has a problem: the price variable that determines its value at risk is entered manually, that price has no good validation mechanism and is only fed in from time to time by a few internal nodes, and positions are liquidated on the basis of these so-called nodes' inputs. Clearly this risk is not algorithmic; it requires trusting that these nodes will not do evil and will not make mistakes. Maker does have a rollback mechanism, but that again introduces trust in the rollback: who decides a rollback is needed, and on what basis can they be trusted?

Decentralized and agency risk

In the blockchain world, as long as one link of a complete value-interaction process carries agency risk, the process is of a different kind from BTC: we have reintroduced human risk rather than code risk. So in the blockchain world, the degree of decentralization and agency risk are two sides of the same coin, two ways of expressing the same thing; the former is an emotive description, the latter a rational definition. It is entirely appropriate to use the removal of agency risk to measure the degree of decentralization. Not only can we use it to judge how decentralized a system is, we can also use it to find the areas of agency risk that blockchain can actually solve, so that we truly enter the era of blockchain applications.

· 5 min read
NEST Protocol

The difference between DAI and USDT

As collateralized assets, USDT borrowed against collateral and DAI generated against collateral carry similar risks, but there is a huge difference in what you get back. On-chain acceptance is clearly higher for USDT: whether in financing, on-chain quotes, or valuation of DEX trades, USDT dominates. Off-chain, centralized exchanges use USDT directly as the denominating currency, while DAI hardly gets the same treatment. There are multiple reasons for this, including how long the consensus has had to form and the difficulty of scaling up. Moreover, entering the fiat world, USDT has an (approximately) 1:1 USD reserve available for redemption, while DAI must be converted into USD by trading into USDT or through some OTC service provider, and both the cost of this conversion and the scale of its liquidity are far inferior to USDT's. So from a purely usage perspective, the demand for collateral-generated DAI depends on the cooperation of the various project teams on the chain, which is what everyone calls acceptance within DeFi.

Can DAI be copied?

Since DAI itself is a derivative, there is nothing unique about it: if we copy the contract and generate a DAI2, it is almost identical to DAI. Moreover, from a game-theoretic point of view: suppose the collateral asset starts at a price of 100 and the first person mints 50 DAI at the maximum 50% collateralization rate. When the price falls to 80, a second person who keeps minting into the same DAI pool can mint 40 DAI, so on average each unit of collateral now backs 45 DAI. But if the second person instead copies the contract and mints DAI2 under the same rules, each unit of collateral behind DAI2 backs only 40 DAI2. Ask a third person: for the same $1 claim, would you rather accept a DAI whose collateral unit backs 45 DAI, or a DAI2 whose collateral unit backs 40 DAI2? The choice is unquestionably DAI2. So DAI can only hope to attract people who do not care about the size of the collateral backing, and what needs to be done to achieve that?
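A quick check of the arithmetic in the game above, using the same hypothetical numbers (starting price 100, price falling to 80, a 50% maximum collateralization rate):

```python
# Collateral game from the paragraph above: two minters, price drops from 100 to 80.
max_ratio = 0.50

# First minter at price 100
dai_first = 100 * max_ratio                # 50 DAI against one collateral unit

# Second minter at price 80, same DAI pool
dai_second = 80 * max_ratio                # 40 more DAI against a second unit
dai_per_unit = (dai_first + dai_second) / 2  # 45 DAI backed by each unit on average

# Second minter instead copies the contract and issues DAI2 at price 80
dai2_per_unit = 80 * max_ratio             # 40 DAI2 per collateral unit

print(dai_per_unit, dai2_per_unit)             # 45.0 vs 40.0 tokens per collateral unit
print(80 / dai_per_unit, 80 / dai2_per_unit)   # collateral value per token: ~1.78 vs 2.0
```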

DAI's ecology is fragile

It is conceivable that the first step is to expand over-the-counter USD exchange so that DAI can be turned into USD at any time. On this path DAI is really only an intermediate credential, and the job the Maker community actually does is USD lending or USD liquidity provision, something every collateralized stablecoin arguably has to consider. The next step is to find ways to increase DAI's applications, for example feeding it into various mining schemes to capture excess returns. But obviously: why would anyone let you capture that value? Why would they need you where USDT will do? The answers depend entirely on the value the Maker team provides when integrating the contract with other DeFi. And that value is certainly unstable, because downstream contracts can always replace DAI, or even write a DAI2 of their own. The picture that looks so harmonious today is actually very fragile: a system always needs someone to provide value, and it is not sustainable to run over to another system and grab its revenue without contributing anything.

Think about Parasset

Therefore, there must be some basic hard demand supporting the intrinsic value of a collateralized stablecoin. Looking at the Maker community alone, we do not see such demand: as noted above, DAI is too easily replicated and lacks scarcity and self-reinforcing properties, so it cannot constitute a unique asset on the chain. Nor does the demand originate within the community: what project actually needs DAI, and must it be DAI? Maker cannot answer this. It is not like USDT, where a copy made after the original has scaled up cannot guarantee the same credit standing; a trustless copy of DAI is completely equivalent to DAI. A more reasonable idea is that the collateral asset used to mint the stablecoin should itself need that stablecoin to complete some logical closed loop, which creates the fundamental supply-demand relationship: collateral asset, stablecoin, collateral asset generation. Among collateralized stablecoin designs we only see Parasset built with this in mind: PUSD and PETH, minted against NEST collateral, are themselves used effectively in the quoting system to mine more NEST or to post NEST quotes, systematically reducing the financial stress and the asset-volatility risk of NEST eco-miners; using DAI would not provide this correlation and dependency. Moreover, the more NEST is collateralized, the more PUSD will be used in preference, which raises the difficulty of manipulating or influencing prices and indirectly creates a self-reinforcing property, whereas a Maker completely isolated from ETH is just a standard contract (as is any copy of it).

There is an inherent demand in Parasset

If demand is a set of concentric circles, collateralized operations need to build the innermost circle. For a collateralized stablecoin in particular, there must be inherent demand from the collateral assets themselves to differentiate it from the various DAI1s and DAI2s and to form self-reinforcing properties. This stable demand also gives the insurance fund a clear premium income, forming the origin of demand and liquidity without the project team having to run OTC services off-chain (which is essentially a centralized affair). Once this innermost circle is formed, usage gradually spreads to projects within the ecosystem, such as trading, lending, and derivatives, and from there to partner communities, the general community, and the off-chain market. These basic logics determine the long-term rationality of a blockchain project, which keeps accumulating favorable information over a long evolution. Success is measured by how much information will be in its favor in the future, not by how much noise is currently chasing it; of that we are convinced.

· 3 min read
NEST Protocol

Introduction

The decentralized world is not easy to build or to understand. And there is a strict rule, which we propose as the first principle of decentralization: in the whole process of value exchange, as long as one link is centralized, the whole process is centralized.

It seems a bit too "cruel" to use the first principle to demand the current decentralized world (no project except the public chain itself satisfies it), so can we lower the standard a bit and classify decentralization according to its degree? Here, we propose several criteria of decentralization, drawing on the effective market.

Strong decentralization

Every step is free from any risk of a centralized agent and fully complies with the first principle. Examples are public chains implemented with POW or POS mechanisms, and assets written on the chain that do not depend on off-chain facts, for instance an ERC20 such as NETH, where paying 1 ETH into the contract generates 1 NETH and paying 1 NETH back into the contract returns 1 ETH. The contract is open source and has no administrative authority of any kind. Such an asset, homogeneous with ETH and dependent on no off-chain team, is decentralized. By contrast, assets such as OMG or HT must rely on a project team to "capture" value; although they are written on a blockchain, they are not decentralized.
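As a rough sketch of the NETH-style wrapper described above (the class name and bookkeeping are assumptions, and a real implementation would be an Ethereum contract rather than Python):

```python
class WrappedEth:
    """Toy 1:1 wrapper: deposit 1 ETH to mint 1 NETH, burn 1 NETH to withdraw 1 ETH.

    There is no owner, no admin method, and no parameter anyone can change,
    which is what places such an asset in the strong-decentralization class.
    """

    def __init__(self):
        self.eth_reserve = 0.0
        self.balances = {}                   # address -> NETH balance

    def deposit(self, addr: str, eth_amount: float) -> None:
        self.eth_reserve += eth_amount
        self.balances[addr] = self.balances.get(addr, 0.0) + eth_amount  # mint 1:1

    def withdraw(self, addr: str, neth_amount: float) -> float:
        if self.balances.get(addr, 0.0) < neth_amount:
            raise ValueError("insufficient NETH balance")
        self.balances[addr] -= neth_amount   # burn 1:1
        self.eth_reserve -= neth_amount
        return neth_amount                   # ETH returned to the caller


pool = WrappedEth()
pool.deposit("alice", 1.0)
print(pool.withdraw("alice", 1.0))           # 1.0 ETH back, no third party involved
```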

Semi-strong decentralization

Considering that the value recognition of the vast majority of assets may contain centralization risk, we relax the standard of strong decentralization by setting aside the step of value recognition and considering only the interaction of assets on the chain. Various DAPPs and applications emphasize that value or services are provided on the chain in a decentralized way, so semi-strong decentralization means that the whole process of these on-chain services contains no further centralization risk. For example, an ERC20 token as a service is itself semi-strongly decentralized; likewise, the fully open-source NEST lending contract is semi-strongly decentralized. In short, the only difference from strong decentralization is that the one centralization risk left in the value-exchange process is the source of value, while the rest of the process is completely decentralized. A common saying is that once a thing is on the chain it is completely decentralized; more precisely, semi-strong decentralization preserves the centralization risk carried in the step of bringing value onto the chain, but adds no additional centralization risk.

Weak decentralization

Regardless of whether the source of value is decentralized, some steps of the overall value exchange are still completed through voting, multi-signature schemes, or "decentralized nodes"; this we call weak decentralization. Weak decentralization is not a further weakening of semi-strong decentralization: even if the native asset you use is ETH, the process is still weakly decentralized if voting, multi-signature, or node-based solutions sit in the path of value exchange. Weak decentralization therefore emphasizes a kind of semi-centralization risk in the on-chain interaction itself. Chainlink, for example, which uses a set of distributed nodes to provide prices, is weakly decentralized because that kind of arrangement still leaves the possibility of cheating (many people point to collateral, but collateral raises the cost of cheating; it does not remove the possibility).