Optimistic rollups / STARK-based validity rollups and binary partitioning.
Continuing the fascination with the binary tree shape from my last post, this post looks at two approaches to scaling blockchains: optimistic rollups and STARK-based validity proofs. I’ve done quite a lot of reading on both, and I found it very interesting to see the same pattern appear in each.
An optimistic rollup is an approach to blockchain scalability where a 2nd blockchain (an L2) is created that processes transactions independently of the L1, and periodically “rolls them up” and submits them to the L1 for settlement. The L2 executes transactions and submits the resulting state root to the L1. These transactions can be proven (in)correct by replaying them on L1 - if one of the transactions was not processed correctly (perhaps the L2 operator wants to steal funds), then you can prove this. This is called a “fraud proof”.
A STARK-based rollup does not rely on an “optimistic” assumption of validity, and instead outright proves the integrity of the state root by submitting a STARK proof to the L1. This proof is a zero-knowledge proof that the state root is correct, and that the transactions were processed correctly.
What is interesting is that both approaches are essentially an example of a binary partitioning:
Binary partitioning: optimistic rollups
In an optimistic rollup, the fraud proof naively involves replaying the transaction on L1, which takes O(n) steps. Naively, this means re-executing the bytecode inside an EVM implemented atop the EVM, since L1 Ethereum has no generic EVAL opcode we can use. However, modern rollups use binary partitioning to reduce this to O(log n) steps. The idea is that you run an interactive protocol between the honest verifier (the EVM) and the potentially-dishonest prover. Instead of replaying all N steps of the transaction, we bisect the transaction to find the single VM step (read: opcode) which was not computed correctly.
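The bisection game can be sketched as follows. This is a minimal sketch assuming a toy VM where a step is just a function from state to state; the names (`step`, `state_hash`, `bisect_dispute`) are illustrative, not any rollup’s real API, and a real protocol exchanges commitments rather than holding both full traces in one place.

```python
# Minimal sketch of the bisection ("fraud proof") game over a toy VM.
# The verifier narrows the dispute to one step in O(log n) comparisons,
# then only that single step needs to be re-executed on-chain.

import hashlib

def step(state: int) -> int:
    """One honest VM step (toy: just increment the state)."""
    return state + 1

def state_hash(state: int) -> str:
    return hashlib.sha256(str(state).encode()).hexdigest()

def honest_trace(initial: int, n_steps: int) -> list[int]:
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

def bisect_dispute(claimed: list[int], initial: int) -> int:
    """Find the first step where the prover's claimed trace diverges from
    honest execution. Precondition: the initial states agree and the final
    states disagree (otherwise there is nothing to dispute)."""
    honest = honest_trace(initial, len(claimed) - 1)
    lo, hi = 0, len(claimed) - 1  # claimed[lo] agrees, claimed[hi] disagrees
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if state_hash(claimed[mid]) == state_hash(honest[mid]):
            lo = mid
        else:
            hi = mid
    # claimed[hi] is the first disputed state; the L1 need only
    # re-execute the single step from claimed[lo] to adjudicate.
    return hi

# A dishonest trace: correct up to step 5, wrong from step 6 onwards.
trace = honest_trace(0, 10)
trace[6:] = [999] * len(trace[6:])
print(bisect_dispute(trace, 0))  # → 6
```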
Binary partitioning: STARK-based validity rollups
In a STARK-based validity proof, this same process occurs, but within the STARK itself. This one might be slightly slower to explain but let’s go:
- The transaction is run inside a VM.
- The output of this VM is a trace - the history of the registers, a table of tuples (t, register1_value, register2_value).
- This trace is turned into a set of arithmetic equations - arithmetization.
- These arithmetic equations are turned into a set of polynomials - interpolation.
- These polynomials are turned into one big polynomial.
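As a toy illustration of the interpolation step (not a real arithmetization - I’m assuming a tiny prime field and a one-register trace for simplicity), we can take one register column of a trace and recover the unique low-degree polynomial that passes through it:

```python
# Toy illustration of trace interpolation: one register column of an
# execution trace is interpolated into the unique polynomial through its
# points, over a small prime field as STARKs use. Purely illustrative.

P = 97  # a small prime field F_97

def lagrange_interpolate(points, x, p=P):
    """Evaluate at x the unique polynomial through `points`, over F_p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Trace of a toy VM whose one register doubles each step: (t, 2^t mod 97).
trace = [(t, pow(2, t, P)) for t in range(8)]

# The interpolated polynomial agrees with the trace on the trace domain...
assert all(lagrange_interpolate(trace, t) == v for t, v in trace)
# ...and can also be evaluated far outside it, which is what the
# redundant "blown up" encoding used later by FRI relies on.
print(lagrange_interpolate(trace, 20))
```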
The satisfiability (read: $y = 0$) of the polynomial is equivalent to the correct computation of the transaction. The trick with STARKs is that satisfiability can also be related to the low degree of the polynomial, via the polynomial uniqueness theorem. So proving a polynomial is of a specific (low) degree is equivalent to proving its satisfiability.
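To make the satisfiability/low-degree connection concrete, here is a toy demonstration of the divisibility trick (a simplification I’m assuming for illustration): a constraint polynomial that vanishes on every point of the trace domain D is exactly divisible by the vanishing polynomial Z_D(x) = ∏(x − d), with a low-degree quotient, while a single bad row breaks divisibility.

```python
# Toy demonstration: "constraints hold on the trace domain" is equivalent
# to "the constraint polynomial is divisible by the vanishing polynomial
# Z_D(x)". Coefficient lists are low-to-high, over the field F_97.

P = 97
DOMAIN = list(range(8))  # toy trace domain D = {0, ..., 7}

def poly_eval(coeffs, x, p=P):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def poly_mul(a, b, p=P):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_divmod(num, den, p=P):
    """Polynomial long division over F_p; returns (quotient, remainder)."""
    num, inv = num[:], pow(den[-1], -1, p)
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * d) % p
    return quot, num[: len(den) - 1]

# Z_D(x) = prod over D of (x - d).
z = [1]
for d in DOMAIN:
    z = poly_mul(z, [(-d) % P, 1])

# A "valid" constraint polynomial: c = Z_D * q for a low-degree quotient q.
q = [3, 1, 4]
c = poly_mul(z, q)
assert all(poly_eval(c, d) == 0 for d in DOMAIN)  # vanishes on the domain
_, rem = poly_divmod(c, z)
print(all(r == 0 for r in rem))  # → True: divisible, constraints satisfied

# Corrupt the polynomial slightly: it no longer vanishes on the domain,
# and the division leaves a nonzero remainder.
c_bad = c[:]
c_bad[0] = (c_bad[0] + 1) % P
_, rem = poly_divmod(c_bad, z)
print(all(r == 0 for r in rem))  # → False
```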
Polynomial uniqueness theorem. There exists a unique polynomial of degree at most $n$ that interpolates the $n+1$ data points $(x_0,y_0),\dotsc,(x_n,y_n) \in \mathbb{R}^2$, where no two $x_j$ are the same.
So can we efficiently prove the degree of a polynomial? Yes. This is the FRI protocol - Fast Reed-Solomon Interactive Oracle Proofs of Proximity. How does FRI work?
FRI is a bit complicated, but the essence of it is a logarithmic process. Imagine our polynomial’s evaluations as a number line. Our goal is to sample enough points on this number line to check the polynomial’s degree. To make this harder to fake, we “blow up” the polynomial by redundantly encoding it using Reed-Solomon coding (the same technology as RAID backups). FRI then proceeds round-by-round: the verifier sends a random challenge, the polynomial is split into two halves (its even-indexed and odd-indexed parts), the two halves are folded into a new polynomial of half the degree, and the process repeats until the degree claim can be checked directly.
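The round-by-round halving can be sketched as follows. This is my own rough simplification of a single FRI fold over coefficient lists (a real FRI works over committed Reed-Solomon evaluations, with consistency checks between rounds): write f(x) = f_e(x²) + x·f_o(x²), then combine the halves with a random challenge β into f′(y) = f_e(y) + β·f_o(y).

```python
# Rough sketch of FRI's folding rounds: each round splits the polynomial
# into even and odd coefficient parts and folds them with a random
# verifier challenge, halving the degree - hence log(n) rounds total.

import random

P = 97

def fold(coeffs, beta, p=P):
    """One fold: f(x) = f_e(x^2) + x*f_o(x^2)  ->  f'(y) = f_e(y) + beta*f_o(y)."""
    evens = coeffs[0::2]
    odds = coeffs[1::2]
    odds = odds + [0] * (len(evens) - len(odds))  # pad if lengths differ
    return [(e + beta * o) % p for e, o in zip(evens, odds)]

coeffs = [random.randrange(P) for _ in range(16)]  # a degree-15 polynomial
rounds = 0
while len(coeffs) > 1:
    beta = random.randrange(P)  # the verifier's random challenge
    coeffs = fold(coeffs, beta)
    rounds += 1
print(rounds)  # → 4: log2(16) rounds to reach a constant
```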
(There is a lot more to FRI and I hope I haven’t gotten something entirely wrong - but this is the rough intuition for it.)
In this manner, STARKs are more akin to randomly sampling the transaction to check its correctness, whereas optimistic rollups are more akin to bisecting the transaction to find the single point that is incorrect. Due to the blowup factor, though, the STARK proof remains probabilistically sound.