https://preview.redd.it/zqkx7xvg5ii71.png?width=689&format=png&auto=webp&s=4e57e1f3068e498231213b2fd582c020c49aa55b
As mobile devices for communication and visualization become ever more widespread, people are flocking to the concepts of Web3 and the metaverse. But how do we reach this new world? There is no clear answer yet. Through five years of exploration, DFINITY has pursued the path of the “Internet Computer.” Will it prove an effective solution?
Planet invited Paul Liu, a core engineer at DFINITY, to explain it for us from the technology’s foundations.
Paul Liu is a core engineer at DFINITY. Before joining DFINITY, Paul worked as a research scientist at Intel Labs for 7 years, where he built a highly optimized Haskell compiler for the x86 architecture. Paul holds a Ph.D. from Yale University, where he studied under Dr. Paul Hudak, one of the designers of the Haskell language. Paul is a member of the Haskell Symposium and IFL communities and has published numerous academic papers.
Introduction
First, let me introduce DFINITY. It is a non-profit organization headquartered in Switzerland, and all of its revenue can be used for only one purpose: participating in the development and promotion of the Internet Computer, a decentralized open-source network project. Although the project is led by DFINITY, its governance system has been live since launch day, and the network’s physical nodes are independently operated by many third-party operators. Three months after launch, 53 operators have deployed 209 nodes across 20 data centers.
The entire project belongs to the holders of its governance token, that is, to the whole community. DFINITY will continue to participate in developing and promoting the platform as a major technical contributor, but we are only one contributor among many. In just over three months since launch, many other community teams have already joined in. The platform’s development is inseparable from the community’s contributions, and furthering decentralization is our main goal at the moment.
For common questions I prepared answers in advance, so I may move quickly; I also want to leave plenty of time for live questions. Some of the friends in this AMA may already know the Internet Computer well, but many may be encountering these concepts for the first time, so I will spend some time on background at the start. I hope you’ll bear with me.
As the creator of the Internet Computer platform, DFINITY’s vision is the blockchain singularity: every application that can run on the Internet should be built with blockchain technology.
To achieve this, we added a protocol layer built on blockchain consensus technology, sitting above TCP/IP and below the application level. We call it the Internet Computer Protocol (ICP). This protocol constructs virtual subnets by exchanging data among multiple physical nodes (computers).
https://preview.redd.it/j0g8rphh5ii71.png?width=689&format=png&auto=webp&s=422f48bedca3d02f907e6424e841147b9bc36ab3
The nodes in a subnet reach consensus on inputs and outputs, verify each other’s computation results, and can communicate with other subnets. Multiple subnets combine into one virtual computer whose capacity grows as subnets are added. Anyone can run programs on it, access other people’s programs, and so on.
But this sounds no different from today’s Internet, especially the concept of microservices. So why can’t the current Internet be called an Internet Computer?
The difference lies in the ICP protocol. Its purpose is to ensure that all programs execute correctly and that their state cannot be tampered with. When one program calls another, it can trust that the call will be executed correctly. Because today’s Internet lacks this layer, every program has to solve the cumbersome problems of availability, reliability, mutual authorization, and so on, which brings all kinds of incompatibility and security burdens.
The core of this is trustworthy computing. There is a phrase, “trustless trust,” that I find very apt: trust in the whole without trusting any individual part. The development of blockchain from Bitcoin to today proves the power of trusted computing. But most applications are still concentrated in finance, and our goal is to expand into the broader Internet. Why can’t a website run directly on a blockchain? Why should verifying a computation on a blockchain require the chain’s entire history? Only by solving these problems head-on can blockchain become a core technology of the Internet, rather than just a ledger for record-keeping and transfers.
Community Q&A
Q1: The Internet Computer offers a brand-new paradigm for building programs and comes with its own set of “jargon.” Can you briefly introduce these terms, and which pieces of infrastructure do you find most useful for developers?
Paul: You can look at it from several angles. From the end user’s point of view, accessing an application on the Internet Computer is basically the same as visiting an ordinary website, and the user pays no fees. As with traditional cloud services, the cost is borne by the project team. Most other blockchains charge users gas, which requires pre-installed wallet software and raises the barrier to entry considerably.
The cost of running an application, including compute and storage, is metered in cycles, which are obtained by converting the Internet Computer’s native token. The price of cycles is pegged to the SDR: 1 SDR = 1 trillion cycles. The SDR’s value is a weighted basket of currencies set by the International Monetary Fund, including the US dollar and the RMB, and is relatively stable.
Back to the user’s perspective: users don’t have to care about cycles at all. But many applications need to handle user login, so the Internet Computer also launched an anonymous identity management system we call Internet Identity. It is built entirely on web standards, and users don’t need to install wallet software to use it.
All of this lowers the barrier for users, so blockchain applications can really reach the mainstream. Internet Identity mainly solves logging into one identity from multiple devices. Moreover, the identity appears under a different pseudonym in each application, which prevents a user’s behavior from being maliciously tracked across applications.
Finally, users may also want to participate in the governance of the Internet Computer. This happens through a neuron voting system called the NNS, one of our innovations. It also lives at the application level, but it has a special permission: it manages all subnets of the Internet Computer and every aspect of the system, including the code nodes run, version upgrades, creating new subnets, admitting new nodes, and so on.
To participate in voting, you first hold ICP tokens and lock some of them to obtain a neuron. Voting weight depends on the amount locked, the lock duration, and the neuron’s age. Voting is also rewarded, and the reward does not depend on whether you vote for or against. A neuron can also follow other neurons’ decisions and vote automatically. Overall, these mechanisms tie users’ voting behavior to the platform’s long-term interests and reward users for their contributions.
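For illustration only, here is a rough Motoko sketch of how such a voting-weight formula might look. The specific multipliers (a bonus of up to 2x at an 8-year lock and up to 1.25x at 4 years of neuron age) are assumptions based on commonly cited NNS parameters, not an authoritative implementation:

```motoko
import Float "mo:base/Float";

actor {
  // Illustrative only: stake weighted by a dissolve-delay bonus
  // (up to 2x at 8 years) and an age bonus (up to 1.25x at 4 years).
  public query func votingPower(stake : Float, delayYears : Float, ageYears : Float) : async Float {
    let delayBonus = 1.0 + Float.min(delayYears, 8.0) / 8.0;
    let ageBonus = 1.0 + 0.25 * (Float.min(ageYears, 4.0) / 4.0);
    stake * delayBonus * ageBonus
  };
}
```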
Having covered the user’s perspective, let’s look at the developer’s. Applications on the Internet Computer are packaged into lightweight containers called canisters. This differs a bit from the Docker containers everyone knows: a canister encapsulates not just code but also automatically persists the container’s state. You can simply think of it as a long-running operating system process whose state, including memory and message queues, is saved automatically and is never lost to restarts or shutdowns. This means the file system concept has been stripped out of the Internet Computer; developers never think about reading and writing files or disks to save data, which is a considerable simplification.
Canister code is WebAssembly (Wasm) bytecode, the latest lightweight virtual machine technology, so any language that compiles to Wasm can be used for development. The two best-supported languages today are Rust and Motoko, though C is also possible. Motoko is a programming language we developed. It takes advantage of several Internet Computer features and comes with automatic memory management. Unlike the lower-level Rust and C, it sits at roughly the abstraction level of JavaScript, TypeScript, or Swift and is easier to pick up. Of course, its ecosystem is still young, and its libraries have yet to fill out.
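To make this concrete, here is a minimal Motoko sketch of a canister (names are illustrative): an actor whose state persists automatically, with `stable` additionally carrying the value across code upgrades, and a `query` method for fast read-only access:

```motoko
// A minimal counter canister. The actor's in-memory state is persisted
// automatically; `stable` additionally preserves it across code upgrades.
actor Counter {
  stable var count : Nat = 0;

  // Update call: goes through consensus and may modify state.
  public func increment() : async Nat {
    count += 1;
    count
  };

  // Query call: read-only, answered quickly without consensus.
  public query func get() : async Nat {
    count
  };
}
```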
Another thing developers need to understand is that communication between canisters is asynchronous, following the actor model: each canister is its own process and talks to other canisters by sending messages, i.e., asynchronous method calls. Each canister processes its internal message queue single-threaded, so there are no locks to think about, and each method call is atomic. Anyone familiar with actor-model programming will find it easy to get started.
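A small hedged sketch of an inter-canister call in Motoko, assuming the `Counter` canister above is deployed and registered under the name `counter`:

```motoko
import Counter "canister:counter"; // assumes a deployed canister named "counter"

actor Client {
  // An asynchronous inter-canister call. `await` suspends this handler,
  // but between awaits each handler runs atomically, so no locks are needed.
  public func bump() : async Nat {
    await Counter.increment()
  };
}
```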
To build an application, a canister usually serves as the back end, while the front-end interaction happens in a browser or a separate app. As mentioned earlier, the Internet Computer can serve websites directly: a canister can implement the HTTP request interface itself and return web pages, JavaScript included, to the user’s device. Front end and back end can be packaged together into one canister and deployed on the Internet Computer.
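As a simplified sketch of how a canister can answer web requests directly, assuming the HTTP gateway’s `http_request` convention (the real interface also supports streaming and upgrading to update calls, omitted here):

```motoko
import Text "mo:base/Text";

actor {
  type HeaderField = (Text, Text);
  type HttpRequest = {
    method : Text;
    url : Text;
    headers : [HeaderField];
    body : Blob;
  };
  type HttpResponse = {
    status_code : Nat16;
    headers : [HeaderField];
    body : Blob;
  };

  // Serve a web page straight from the canister; browsers reach this
  // through the Internet Computer's HTTP gateway.
  public query func http_request(req : HttpRequest) : async HttpResponse {
    {
      status_code = 200;
      headers = [("Content-Type", "text/html")];
      body = Text.encodeUtf8("<h1>Hello from a canister</h1>");
    }
  };
}
```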
For front-end development we have ready-made libraries, in both JavaScript and Rust. When the front end needs to call back-end code, it simply makes an asynchronous function call with await; the plumbing underneath is handled by the libraries. If you want to dig deeper, there is an interface description and data encoding format called Candid, with implementations in multiple languages; canisters use Candid to describe their external interfaces and data types.
In general, what developers need to understand revolves around the canister concept: WebAssembly, the actor model, orthogonal persistence (automatic persistence), Motoko, and Candid. I also recommend reading the System API, the standard specification of the Internet Computer’s interface: https://sdk.dfinity.org/docs/interface-spec/
That document is very detailed, covering every aspect of the system; we put a lot of formal effort into defining the semantics of the interface, which makes it easier for developers to understand the system’s behavior.
If you are doing system-level development, such as consensus protocols or virtual machines, there is much more to discuss. See the technical video series on the DFINITY website: https://dfinity.org/technicals
Q2: Compared with traditional platforms such as Alibaba Cloud, Tencent Cloud, and AWS, how is the Internet Computer different? Those are self-built cloud services run by companies that likewise use data centers, off-site backups, and multi-node operation.
Paul: Today’s cloud platforms all rest on one basic premise: you must rely on the platform provider to keep the platform secure, the network connected, the computation uninterrupted, the data intact, and so on.
The interests of a commercial platform and of the users it serves, while rarely in outright conflict, are not fully aligned. There is a familiar term for this, platform risk, so I won’t dwell on it here.
But the most important point is that these cloud infrastructure providers do not want to become commodities (interchangeable goods), and they do everything they can to retain and lock in customers.
The Internet Computer, first of all, exists as a decentralized network. Its nodes are operated by third parties and run in different data centers. Governance of the network is genuinely handed to users, not dominated by node operators or data centers.
So there is no centralized commercial organization making all the decisions. The whole governance system is designed to take the long-term view as much as possible, keeping users’ interests aligned with the platform’s development. The platform pays node operators, and whether a given node is run by Alice or by Bob makes no difference at all; it is a free market. For the Internet Computer, then, hardware and network infrastructure have become commodities.
Looking back at the history of the PC industry, infrastructure (such as PC hardware) becoming a commodity has proven an inevitable law of history, and I believe cloud services will be no exception.
In other words, computing platforms like the Internet Computer have been decoupled from building hardware infrastructure. Such a business model would be unimaginable without decentralization and without blockchain technology. That it can become reality today is the best illustration of how the times have progressed.
All the way from Bitcoin to Ethereum, some people saw only price speculation and Ponzi scams and took a dim view of this emerging thing. In fact, the turning of an era is already in sight.
Beyond aligning interests, another aspect is using more advanced technology to simplify system redundancy, reducing overhead across the platform, which also means savings for users.
We talked earlier about the advantages of trusted computing. There are also the advantages of distribution and of cutting-edge cryptography. Together they mean that much traditional maintenance work, such as firewalls, is largely no longer necessary. To use today’s cloud platforms well, a customer must invest heavily in operations; the Internet Computer saves a great deal of that cost.
https://preview.redd.it/sqtxuxuj5ii71.png?width=691&format=png&auto=webp&s=6cda6d08d909bdef2c8396f10118b64cc6d9e568
The third point is tokenization, that is, the tokenization of applications. This is arguably the next, unstoppable trend in Internet applications. Traditional cloud providers at best offer components that bridge to blockchains, and the resulting architecture is inevitably bloated. Since the Internet Computer runs websites and applications directly, as a native blockchain it integrates tokenization very easily.
Q3: Every smart contract on the Internet Computer is “scalable.” Specifically, how does the protocol’s scaling work at the technical level? Are there examples of it in practice?
Paul: Scalability has several dimensions: storage space, network traffic, and computing power, i.e., how many transactions can be processed per unit of time. Whether something is scalable mostly comes down to whether known bottlenecks can be bypassed. On a public platform we also have to consider how to allocate limited resources among different users and different applications.
The main idea in the Internet Computer’s design is to scale out: relieve bottlenecks by adding resources and creating new subnets. This is essentially the same idea as mainstream web applications. When one canister can’t handle all user requests, the reasonable approach is to use multiple canisters at the application level, each handling a share of the requests (see the sketch below). In other words, you should take this into account when designing an application, or at least leave room to migrate to such an architecture. As far as I know, OpenChat is designed around multiple canisters; DSCVR has such room too, though it is still concentrated in one canister.
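A minimal sketch of that application-level sharding pattern in Motoko, with hypothetical bucket canisters; the principals below are placeholders, and a real design would also handle adding buckets and rebalancing:

```motoko
import Text "mo:base/Text";
import Nat32 "mo:base/Nat32";

actor Router {
  // Placeholder principals of pre-deployed bucket canisters.
  let buckets : [Text] = [
    "ryjl3-tyaaa-aaaaa-aaaba-cai",
    "rrkah-fqaaa-aaaaa-aaaaq-cai"
  ];

  // Route each user to a bucket by hashing the user id, so no single
  // canister has to absorb all requests.
  public query func bucketFor(user : Text) : async Text {
    let h = Text.hash(user); // Nat32 hash of the user id
    buckets[Nat32.toNat(h) % buckets.size()]
  };
}
```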
At the system level, spreading across canisters lets an application exceed the current 4GB memory limit per canister. For compute, the guiding principle is likewise concurrency rather than Ethereum’s globally atomic design: each canister processes its own messages on its own thread, so as long as the hardware load allows, one canister’s work doesn’t affect another’s performance. For the network, bandwidth ultimately caps scaling; no blockchain can escape this physical bottleneck, and the Internet Computer’s answer is to split into different subnets.
Of course, there are also various system-level optimizations that improve performance. We keep working on this, hoping to extract the hardware’s full potential.
Q4: Which types of dapps are better suited to run on it? We notice relatively few DeFi protocols on the Internet Computer. Going forward, in which directions will dapps on the Internet Computer develop?
Paul: DeFi mainly needs liquidity to drive it. For security reasons, the ability for canisters to transfer ICP has not yet been enabled, which limits liquidity. But this restriction is temporary. Since launch, the network’s stability has been good, and I believe the restriction will be lifted by NNS vote at an appropriate time. Many developers are ready, and an explosion of DeFi applications is only a matter of time.
Personally, I’m very optimistic about social dapps on the Internet Computer. Once this track gets the boost of tokenization, it will grow very quickly, and it certainly won’t lag behind DeFi or NFT games. There are social dapps on other blockchains too, but they all suffer from a steep onboarding threshold; using a wallet correctly already stumps many users. Dapps on the Internet Computer use standard web technology and can be accessed from any browser.
Another direction I’m optimistic about is applications for individual users and small and medium-sized businesses: project management, file sharing, the creator economy (podcasts, vlogs, web documents, and so on). Mature Internet solutions exist, but they always carry platform risk; we touched on the platform risk of cloud services earlier, and I’m sure everyone has personal experience of the giants’ monopolies in other fields. The decentralized architecture now offers a new possibility: the platform itself should become transparent, rather than squatting at the top of the food chain and swallowing users’ interests with one-sided terms.
In the end, which track has a future depends on whether its applications can accumulate value quickly. This value is not how much your project has locked up, because that amount can change at any moment. It’s how many connections you have built with users and with other applications. As trust deepens and usage grows, those connections become more and more valuable. Code can be copy-pasted, but these connections cannot. Used well, tokens can accelerate the accumulation of value to some extent, but ultimately it comes down to the project’s intrinsic value.
Q5: The canister, as the WebAssembly-based container, carries the on-chain runtime environment for dapps. What are the latest developments around canisters?
Paul: Just this Monday, DFINITY released a development roadmap, and the community is welcome to participate: https://dfinity.org/roadmap
The canister-related items include:
- Stable memory expansion
- Threshold ECDSA signatures for canisters
- Apply AMD SEV to protect data privacy
The expansion currently targets stable memory, the memory that is unaffected by code upgrades. It was previously constrained by the Wasm virtual machine’s 4GB limit, but that constraint can now be lifted; the ceiling becomes the subnet’s total memory, currently around 300GB.
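For illustration, Motoko exposes this upgrade-surviving memory through a low-level library; a minimal sketch (the library is experimental, and higher-level wrappers are usually preferable):

```motoko
import StableMemory "mo:base/ExperimentalStableMemory";
import Text "mo:base/Text";

actor {
  // Write into stable memory, which survives code upgrades and is not
  // bounded by the Wasm heap's 4GB address space.
  public func save() : async () {
    ignore StableMemory.grow(1); // grow by one 64KiB page
    StableMemory.storeBlob(0, Text.encodeUtf8("hello"));
  };

  public query func load() : async Text {
    switch (Text.decodeUtf8(StableMemory.loadBlob(0, 5))) {
      case (?t) t;
      case null "";
    }
  };
}
```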
Threshold ECDSA signature technology, simply put, lets each canister sign data without storing a private key anywhere, with the signature verifiable against a public key; each canister gets its own unique public key. This follows directly from the Chain Key technology we have already built, and it has broad applications: for example, a canister could directly construct and sign a Bitcoin or Ethereum transaction.
This means that work that used to require handing a private key to a program in a trusted, private environment can now be done in a decentralized environment. It can also be used for issuing SSL certificates, custom DNS domain names, and so on.
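A hedged Motoko sketch of what requesting such a signature might look like; the feature was still on the roadmap at the time, so the management-canister method and fields below follow the eventual `sign_with_ecdsa` design and should be read as an assumption (cycle payment is also omitted):

```motoko
actor {
  // Minimal assumed binding to the management canister's
  // threshold-ECDSA method.
  let ic : actor {
    sign_with_ecdsa : ({
      message_hash : Blob;
      derivation_path : [Blob];
      key_id : { curve : { #secp256k1 }; name : Text };
    }) -> async ({ signature : Blob });
  } = actor ("aaaaa-aa");

  public func sign(hash : Blob) : async Blob {
    // No private key exists anywhere; nodes jointly produce the
    // signature using threshold cryptography.
    let res = await ic.sign_with_ecdsa({
      message_hash = hash;
      derivation_path = [];
      key_id = { curve = #secp256k1; name = "key_1" };
    });
    res.signature
  };
}
```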
Applying AMD SEV technology is mainly to protect canister data privacy to a degree, so that even node operators cannot snoop on user data. We have been preparing for this for a while, and it is fairly difficult. Fortunately, the hardware the nodes already use supports SEV, so I hope it will be a smooth upgrade when the time comes.
Q6: “Open Internet Services” can expose permanent APIs, letting developers build on data or functionality from other services without the risk of it being revoked. How does one deploy an Open Internet Service on the Internet Computer?
Paul: The easiest way to provide a permanent API is to make the code unmodifiable by setting the canister’s controller list to the empty set.
I personally made a very simple canister called blackhole. Its main purpose is to let other canisters set their controller to blackhole: not only does the code become unmodifiable, blackhole also provides extra query functions, such as checking a canister’s cycle balance or the hash of its code. Blackhole’s own controller is set to itself, and its code is public, so the hash is easy to verify. If you need others to trust your canister, setting its controller to blackhole is a simple way.
https://preview.redd.it/uhgu81yl5ii71.png?width=691&format=png&auto=webp&s=678280fdc06b0e8bf9ff04abf205c68876b944fe
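As a hedged sketch, handing a canister over to blackhole can be done through the management canister’s `update_settings` method; the binding below is trimmed to the one field it needs, and the calling canister must currently be a controller of the target:

```motoko
actor {
  // Trimmed binding to the management canister ("aaaaa-aa"); Candid
  // subtyping lets us omit the other optional settings fields.
  let ic : actor {
    update_settings : ({
      canister_id : Principal;
      settings : { controllers : ?[Principal] };
    }) -> async ();
  } = actor ("aaaaa-aa");

  // Make `target` effectively immutable by leaving `blackhole` as its
  // only controller (blackhole never upgrades anyone's code).
  public func giveToBlackhole(target : Principal, blackhole : Principal) : async () {
    await ic.update_settings({
      canister_id = target;
      settings = { controllers = ?[blackhole] };
    });
  };
}
```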
But if you still need the ability to upgrade the code, that calls for community governance. The Service Nervous System (SNS) we are developing lets an application create neurons by locking tokens and then govern every aspect of the application by vote, including code upgrades. The SNS is still under development with no live instance yet, and it is only one candidate solution; the community has already made other attempts in this area, and I believe they will gradually mature.
Q7: Security is a vital issue for computers. What mechanisms does the Internet Computer use to replace things like firewalls? For tamper resistance, how does DFINITY compare with other blockchain foundations?
Paul: A firewall’s main job is to keep hackers from breaking into the system and gaining intranet privileges in order to steal or tamper with data. But dividing authority between intranet and extranet is deeply problematic and quite fragile: once the perimeter is breached, all the intranet’s default permissions are exposed to the attacker. That’s why many companies have abandoned this practice in favor of per-service permissions and unified identity management to authorize users.
The counterpart on the Internet Computer is its identity management. A public key corresponds to a user identity, and every canister can learn the identity of its caller. That identity cannot be forged by a third party, whether a user is calling a canister or one canister is calling another. This is possible because such calls must pass through the consensus protocol; for cross-subnet calls in particular, both the initiating and responding sides go through consensus, and the call is accepted and executed only after verification.
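In Motoko, this authenticated identity is available on every call with no extra machinery; a minimal sketch:

```motoko
import Principal "mo:base/Principal";

actor {
  // msg.caller is the authenticated identity of whoever called us,
  // user or canister; the protocol guarantees it cannot be forged.
  public shared (msg) func whoami() : async Text {
    Principal.toText(msg.caller)
  };
}
```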
Verifying the validity of any subnet’s signature quickly and efficiently relies on the Chain Key technology we developed. It supports nodes joining and leaving dynamically while keeping the threshold-signature public key unchanged. No other blockchain can currently do this, so the Internet Computer leads in transaction verification: its subnets need to synchronize essentially no data with one another, apart from the subnets’ public keys and the nodes’ public keys.
To tamper with data on the Internet Computer, it is not enough to compromise one node; an attacker must control more than 2/3 of the nodes in a subnet. A subnet’s security therefore depends in part on its node count, and it can be hardened further by rotating nodes at irregular intervals. Even if one subnet is compromised, it cannot forge the identity of other subnets, so the blast radius is contained.
Keeping data authentic and tamper-proof is one aspect; protecting data privacy is another. Most blockchain data is public, so there is no privacy protection. True privacy protection can be built at the application level with technologies like homomorphic encryption, but today’s efficiency isn’t there yet. So our current plan is to apply AMD SEV technology to encrypt at the hardware level. That said, the Internet Computer’s security does not depend on the hardware; SEV’s guarantee is a bonus.
Q8: The DFINITY name actually goes back 6 years. Although the mainnet launch was relatively slow in coming, we can see that the DFINITY team genuinely wants to do something disruptive, and the consensus behind it is strong. What drove the transition from “Ethereum’s sister chain” to “world-class Internet computer”?
Paul: The World Computer slogan was first put forward by Ethereum, and it inspired many people, even though Ethereum is now more focused on DeFi and digital assets. The direction of a “world-class Internet computer” has always been DFINITY’s goal, not a route changed after fundraising.
At first, constrained by the size of the team, our clear innovations were only in BLS and the consensus protocol, so the first step was to start there: launch a chain and iterate. But we then realized that without solving cross-subnet communication, we would forever remain stuck as “yet another blockchain,” and real innovation would be hard. It was the team’s persistence that produced the Chain Key breakthrough, solved cross-subnet verification, and delivered on the promise of scalability.
Looking back, we really just had to keep asking ourselves one question: why can’t a decentralized blockchain run a website?
First we had to solve an efficiency problem: visiting a website demands responses at the millisecond level. How can that be done? Our answer was to separate read-only queries from state modification. Some 99% of web traffic is read-only and can be answered in milliseconds; for state changes, innovations in the consensus protocol brought responses down to two or three seconds.
With efficiency in hand, how do you verify the correctness of the content, and how do you make an ordinary browser capable of it? The requirements for verification must be pared down. Can we abandon historical blocks and pass along just one public key? With a BLS public key, how do we handle nodes changing dynamically? How do we solve centralized domain names and SSL certificates? How do we add capacity as traffic grows? Where are the bottlenecks and limits of scaling? What happens when scaling requirements conflict with synchronous contract calls?
Keep asking questions and searching for answers, and a practical design gradually emerges. That is what DFINITY has been doing these past few years.
Q9: Ethereum has just completed the EIP-1559 upgrade, taking its first step toward deflation, and the token price has climbed. For decentralized infrastructure, do you think token performance or technological disruption does more to motivate supporters? How do you strike a balance between the two?
Paul: I see it this way: a token’s short-term performance depends on market participants’ confidence and expectations, while long-term performance must return to the value of the platform itself. Ethereum’s technology has stood the test of time; despite its various shortcomings, it has won the recognition of the entire crypto market.
As for deflation versus inflation, each has its drawbacks. I can’t quite agree with the rhetoric of Bitcoin maximalists. DeFi’s innovations in liquidity and incentives are exciting too, but in the long run most projects don’t actually create value; they’re more of a numbers game. Users acquired through a short-term token price rise can just as easily be lost to a price drop or to the rise of another project.
Technological innovation is easily copied by competitors, yet taken as a whole these innovations keep pushing the industry forward; whether any single project profits from pure technical innovation is hard to say. The industry talks about building ecosystems, but how much protection an ecosystem project really gets from a platform, especially a young platform, and how to convince developers to commit, is no easy matter.
I think the direction most worth pursuing is expanding the circle as far as possible. From payments and transfers to DeFi to NFTs and games, it has been a continuous expansion of territory. Following this trend, we should push blockchain technology into even wider fields, such as the goal of running native websites on a blockchain. Only by using both technological innovation and token incentives to win new users can we make the ecosystem flourish and grow its value.
Q10: Many consider the Internet Computer the main stage for Web3 applications. Each public chain has its own take on Web3 and its own technical path, for example Polkadot and Ethereum. What are DFINITY’s views and future plans/roadmap on the road to Web3?
Paul: DFINITY’s aim is to set aside all unnecessary baggage and head for the destination of the blockchain singularity. The Internet Computer still has many imperfections, and there is some way to go; we hope more people will join in, advancing the platform’s technology and building ever more colorful projects on top of it that win over users.
Each public chain has its own focus. We believe that everything that can be built with blockchain will eventually be built with blockchain, so we don’t rule out combining with other chains’ technologies. For example, the roadmap released on Monday includes deep integrations with Ethereum and Bitcoin, which complement both sides well. This will further stimulate cross-chain flow and integration of assets, simplify application architecture, and shed the centralized baggage of cloud services, improving applications’ overall security and robustness.
Running a website is an important step, but only the first step for the Internet Computer. I believe the foundation it lays will become part of the grand puzzle of the blockchain singularity.
Q11: What is a canister signature? Where is the private key a canister signs with? Also, do canisters support an event mechanism like Ethereum smart contracts, where one can subscribe to an update call, and does the caller get it from the return value? Finally, when will ordinary canisters be able to handle ICP tokens?
Paul: A canister signature means the subnet attests to a canister’s computation result (or contract state), with the signature verifiable against the subnet’s public key. We currently use BLS threshold signatures. They have a nice property, uniqueness of both the public key and the signature, that other aggregate-signature technologies lack (BLS can also serve as an aggregate signature, and we use it that way in the protocol as well).
In a threshold signature scheme, simply put, each node holds its own private key share and signs the computation result with it. Once enough (a threshold of) signature shares are collected, a unique threshold signature can be assembled and verified with a single public key, which we therefore treat as the subnet’s public key. There is no single corresponding private key for the subnet; each node keeps its own share, and every share is different.
A subnet runs many canisters. Using a Merkle tree, it is easy to produce a path to any one canister’s results, so the subnet’s signature plus that path can be regarded as the canister’s signature over a particular piece of data.
A canister signature is, to an extent, equivalent to an event log or receipt. Because we don’t require nodes to keep all historical blocks, keeping them just for event logs makes little sense; the same functionality can be achieved with query calls and certified variables, which are more powerful.
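For reference, Motoko’s base library exposes certified variables directly; a minimal sketch (the certified value is limited to 32 bytes, so in practice one certifies a hash of the data):

```motoko
import CertifiedData "mo:base/CertifiedData";
import Text "mo:base/Text";

actor {
  // Register up to 32 bytes to be certified by the subnet at the end
  // of this update call.
  public func setValue(v : Text) : async () {
    CertifiedData.set(Text.encodeUtf8(v));
  };

  // In a query call, hand back the system certificate so the client
  // can verify the value without waiting for consensus.
  public query func getCertificate() : async ?Blob {
    CertifiedData.getCertificate()
  };
}
```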
Canisters handling ICP tokens has long been unproblematic technically; the permission simply hasn’t been opened, out of security considerations. As the system has proven stable, our confidence has grown a great deal, so barring surprises, I expect the community to enable it by vote in the near future.