More Thoughts on Scripting and Future-Compatibility

My previous post introducing Ethereum Script 2.0 was met with a number of responses, some highly supportive, others suggesting that we switch to their own preferred stack-based / assembly-based / functional paradigm, and others offering various specific criticisms that we are looking hard at. Perhaps the strongest criticism this time came from Sergio Damian Lerner, Bitcoin security researcher and developer of QixCoin, to whom we are grateful for his analysis of Dagger. Sergio particularly criticizes two aspects of the change: the fee system, which is changed from a simple one-variable design where everything is a fixed multiple of the BASEFEE to a more fluid, miner-voted model, and the loss of the crypto opcodes.

The crypto opcodes are the more important part of Sergio’s argument, and I will handle that issue first. In Ethereum Script 1.0, the opcode set had a collection of opcodes that were specialized around certain cryptographic functions – for example, there was an opcode SHA3, which would take a length and a starting memory index off the stack and then push the SHA3 of the string taken from the desired number of blocks in memory starting from the starting index. There were similar opcodes for SHA256 and RIPEMD160, and there were also crypto opcodes oriented around secp256k1 elliptic curve operations. In ES2, those opcodes are gone. Instead, they are replaced by a fluid system where people will need to write SHA256 in ES manually (in practice, we would offer a commission or bounty for this), and then later on smart interpreters can seamlessly replace the SHA256 ES script with a plain old machine-code (or even hardware) version of SHA256 of the sort that you use when you call SHA256 in C++. From an outside view, ES SHA256 and machine-code SHA256 are indistinguishable; they both compute the same function and therefore make the same transformations to the stack, and the only difference is that the latter is hundreds of times faster, giving us the same efficiency as if SHA256 were an opcode. A flexible fee system can then also be implemented to make SHA256 cheaper to reflect its reduced computation time, ideally making it as cheap as an opcode is now.
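To make the substitution idea concrete, here is a minimal sketch in Python of how an interpreter might transparently swap a known ES implementation of SHA256 for a native one. The names, the lookup-by-hash rule and the stack layout are illustrative assumptions, not part of the ES2 design:

```python
import hashlib

def script_hash(script: bytes) -> str:
    """Identify an ES script body by its hash (illustrative, not the actual ES2 rule)."""
    return hashlib.sha256(script).hexdigest()

# Hypothetical substitution table: the hash of a known ES-level SHA256 script
# maps to a native routine that performs the identical stack transformation.
NATIVE_SUBSTITUTES = {
    "placeholder-hash-of-the-es-sha256-script": lambda data: hashlib.sha256(data).digest(),
}

def execute(script: bytes, stack: list, memory: bytearray):
    """Run a script, transparently swapping in a native version when one is known."""
    native = NATIVE_SUBSTITUTES.get(script_hash(script))
    if native is not None:
        # Same inputs popped off the stack, same output pushed back on -- only faster.
        length = stack.pop()
        start = stack.pop()
        stack.append(native(bytes(memory[start:start + length])))
        return stack
    return interpret_es(script, stack, memory)   # slow path: ordinary opcode-by-opcode interpretation

def interpret_es(script, stack, memory):
    """Placeholder for the ordinary ES interpreter."""
    raise NotImplementedError
```

Because the native routine makes exactly the same stack transformation as the interpreted script, callers cannot tell the difference; only the cost changes.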

Sergio, however, prefers a different approach: coming with lots of crypto opcodes out of the box, and using hard-forking protocol changes to add new ones if necessary further down the line. He writes:

First, after 3 years of watching Bitcoin closely I came to understand that a cryptocurrency is not a protocol, nor a contract, nor a computer-network. A cryptocurrency is a community. With the exception of a very few set of constants, such as the money supply function and the global balance, anything can be changed in the future, as long as the change is announced in advance. Bitcoin protocol worked well until now, but we know that in the long term it will face scalability issues and it will need to change accordingly. Short term benefits, such as the simplicity of the protocol and the code base, helped the Bitcoin get worldwide acceptance and network effect. Is the reference code of Bitcoin version 0.8 as simple as the 0.3 version? not at all. Now there are caches and optimizations everywhere to achieve maximum performance and higher DoS security, but no one cares about this (and nobody should). A cryptocurrency is bootstrapped by starting with a simple value proposition that works in the short/mid term.

This is a point that is often brought up with regard to Bitcoin. However, the more I look at what is actually going on in Bitcoin development, the more firmly I become set in my position that, with the exception of very early-stage cryptographic protocols that are in their infancy and seeing very low practical usage, the argument is absolutely false. There are currently many flaws in Bitcoin that could be fixed if only we had the collective will to do so. To take a few examples:

  1. The 1 MB block size limit. Currently, there is a hard limit that a Bitcoin block cannot contain more than 1 MB of transactions – a cap of about seven transactions per second (see the back-of-the-envelope calculation after this list). We are already starting to brush against this limit, with about 250 KB in each block, and it is putting pressure on transaction fees. For most of Bitcoin’s history, fees were around $0.01, and every time the price rose the default BTC-denominated fee that miners accept was adjusted down. Now, however, the fee is stuck at $0.08, and the developers are not adjusting it down, arguably because pushing the fee back down to $0.01 would cause the number of transactions to brush against the 1 MB limit. Removing this limit, or at the very least setting it to a more appropriate value like 32 MB, is a trivial change; it is only a single number in the source code, and it would clearly do a lot of good in making sure that Bitcoin continues to be used in the medium term. And yet, Bitcoin developers have completely failed to do it.
  2. The OP_CHECKMULTISIG bug. There is a well-known bug in the OP_CHECKMULTISIG operator, used to implement multisig transactions in Bitcoin, where it requires an additional dummy zero as an argument which is simply popped off the stack and not used. This is highly non-intuitive and confusing; when I was personally working on implementing multisig for pybitcointools, I was stuck for days trying to figure out whether the dummy zero was supposed to go at the front or take the place of the missing public key in a 2-of-3 multisig, and whether there are supposed to be two dummy zeroes in a 1-of-3 multisig (see the illustration after this list). Eventually, I figured it out, but I would have figured it out much faster had the operation of the OP_CHECKMULTISIG operator been more intuitive. And yet, the bug has not been fixed.
  3. The bitcoind client. The bitcoind client is well-known for being a very unwieldy and non-modular contraption; in fact, the problem is so serious that everyone looking to build a bitcoind alternative that is more scalable and enterprise-friendly is not using bitcoind at all, instead starting from scratch. This is not a core protocol issue, and theoretically changing the bitcoind client need not involve any hard-forking changes at all, but the needed reforms are still not being done.
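For reference, the “about seven transactions per second” figure in point 1 is just back-of-the-envelope arithmetic, assuming roughly 250-byte transactions and ten-minute blocks:

```python
BLOCK_SIZE_LIMIT = 1_000_000   # bytes: the 1 MB cap
AVG_TX_SIZE = 250              # bytes per transaction (rough assumption)
BLOCK_INTERVAL = 600           # seconds: ten-minute block target

tx_per_block = BLOCK_SIZE_LIMIT / AVG_TX_SIZE     # ~4000 transactions per block
tx_per_second = tx_per_block / BLOCK_INTERVAL     # ~6.7, i.e. "about seven"
print(round(tx_per_second, 1))
```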
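And to illustrate point 2, this is roughly the stack layout for spending a bare 2-of-3 multisig output. The values are placeholders; the point is the position and count of the dummy element:

```python
# Illustrative only: the items a scriptSig pushes when spending a bare 2-of-3
# multisig output. sig1/sig2 stand in for real DER-encoded signatures.
sig1 = b"<signature-1>"
sig2 = b"<signature-2>"

script_sig_stack = [
    b"",     # the dummy element: OP_CHECKMULTISIG pops one extra item off the stack
    sig1,    # signatures must appear in the same order as the public keys
    sig2,    # no placeholder is needed for the key that did not sign
]

# The output (locking) script is:
#   OP_2 <pubkey1> <pubkey2> <pubkey3> OP_3 OP_CHECKMULTISIG
# There is always exactly one dummy element, whether the policy is 1-of-3 or 2-of-3.
```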

These problems are not there because the Bitcoin developers are incompetent. They are not; in fact, they are very skilled programmers with deep knowledge of cryptography and of the database and networking issues inherent in cryptocurrency client design. The problems are there because the Bitcoin developers very well realize that Bitcoin is a 10-billion-dollar train hurtling along at 400 kilometers per hour, and if they try to change the engine midway through and even the tiniest bolt comes loose the whole thing could come crashing to a halt. A change as simple as swapping the database library back in March 2013 almost did. This is why, in my opinion, it is irresponsible to leave a poorly designed, non-future-proof protocol in place and simply say that the protocol can be updated in due time. On the contrary, the protocol must be designed with an appropriate degree of flexibility from the start, so that changes can be made automatically, by consensus, without needing to update any software.

Now, to address Sergio’s second issue, his main qualm with modifiable fees: if fees can go up and down, it becomes very difficult for contracts to set their own fees, and if a fee goes up unexpectedly that may open up a vulnerability through which an attacker may even be able to force a contract to go bankrupt. I must thank Sergio for making this point; it is something that I had not yet sufficiently considered, and we will need to think about it carefully when making our design. However, his solution, manual protocol updates, is arguably no better; protocol updates that change fee structures can expose new economic vulnerabilities in contracts as well, and they are arguably even harder to compensate for because there are absolutely no restrictions on what content manual protocol updates can contain.
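As a toy illustration of the concern (the numbers and the contract model are entirely made up, not Ethereum semantics), a contract that budgets for a fixed per-transaction fee can be run into the ground if that fee later rises:

```python
# Toy model: a contract whose author budgeted for a fixed per-transaction fee.
# If the protocol-level fee rises unexpectedly, the same reserve covers far
# fewer transactions, and the contract can be drained into insolvency simply
# by invoking it repeatedly.
def transactions_until_bankrupt(balance: int, fee_per_tx: int) -> int:
    return balance // fee_per_tx

FEE_AT_DESIGN_TIME = 1   # fee the contract author planned around (placeholder)
FEE_AFTER_CHANGE = 5     # fee after an unexpected increase (placeholder)
BALANCE = 1_000          # funds the contract holds for operating costs

print(transactions_until_bankrupt(BALANCE, FEE_AT_DESIGN_TIME))  # 1000 invocations
print(transactions_until_bankrupt(BALANCE, FEE_AFTER_CHANGE))    # only 200
```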

So what can we do? First of all, there are many intermediate solutions between Sergio’s approach – shipping with a limited, fixed set of opcodes that can be added to only via a hard-forking protocol change – and the idea I provided in the ES2 blogpost of having miners vote on fluidly changing fees for every script. One approach might be to make the voting system more discrete, so that there would be a hard line between a script having to pay 100% fees and a script being “promoted” to opcode status, where it only needs to pay a 20x CRYPTOFEE. This could be done via some combination of usage counting, miner voting, ether-holder voting or other mechanisms. This is essentially a built-in mechanism for doing hardforks that does not technically require any source code updates to apply, making it much more fluid and non-disruptive than a manual hardfork approach. Second, it is important to point out once again that the ability to efficiently do strong crypto is not gone, even from the genesis block; when we launch Ethereum, we will create a SHA256 contract, a SHA3 contract, etc. and “premine” them into pseudo-opcode status right from the start. So Ethereum will come with batteries included; the difference is that the batteries will be included in a way that seamlessly allows for the inclusion of more batteries in the future.
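Here is one way to picture such a promotion mechanism, purely as a sketch; the thresholds, the vote mechanics and the exact fee multiple below are placeholders rather than a proposal:

```python
CRYPTOFEE = 1             # placeholder base fee unit for crypto operations
PROMOTED_MULTIPLE = 20    # a promoted script pays 20x CRYPTOFEE, as described above
USE_THRESHOLD = 100_000   # placeholder usage-count threshold for promotion
VOTE_THRESHOLD = 0.67     # placeholder share of miner votes needed for promotion

def fee_for(stats: dict, full_interpreted_fee: int) -> int:
    """Charge full interpreted-execution fees until the script is 'promoted'."""
    promoted = (stats["uses"] >= USE_THRESHOLD
                and stats["miner_vote_share"] >= VOTE_THRESHOLD)
    return PROMOTED_MULTIPLE * CRYPTOFEE if promoted else full_interpreted_fee

# A heavily used, widely endorsed SHA256 script drops to the flat opcode-like fee:
print(fee_for({"uses": 250_000, "miner_vote_share": 0.8}, full_interpreted_fee=4000))  # 20
```

The hard line between the two fee levels is what makes this "discrete": a contract author knows the fee is either the full interpreted cost or the flat promoted cost, with nothing drifting in between.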

But it is important to note that I consider this ability to add in efficient, optimized crypto ops in the future to be mandatory. Theoretically, it is possible to have a “Zerocoin” contract inside of Ethereum, or a contract using cryptographic proofs of computation (SCIP) and fully homomorphic encryption so that you could actually use Ethereum as the “decentralized Amazon EC2 instance” for cloud computing that many people now incorrectly believe it to be. Once quantum computing arrives, we might need to move to contracts that rely on NTRU; once SHA4 or SHA5 come out, we might need to move to contracts that rely on them. Once obfuscation technology matures, contracts will want to rely on it to store private data. But in order for all of that to be possible with anything less than a $30 fee per transaction, the underlying cryptography would need to be implemented in C++ or machine code, and there would need to be a fee structure that reduces the fee for the operations appropriately once the optimizations have been made. This is a challenge to which I do not see any easy answers, and comments and suggestions are very much welcome.

