ZKVM Compilers: Motivation

This is a continuation of Schemes for recursive proof composition and (N)IVC-based ZKVMs in the series “Compiling to (N)IVC ZKVMs”. Feedback is welcome 🙂

Motivation

While a ZKVM is not a compiler, any end-to-end architecture for proving the computation of a program likely involves designing a compiler alongside the ZKVM. A ZKVM is often regarded as “something” that can prove the execution of a generic CPU program. Since programs are written in a high-level language, this seems to imply that a compiler is a component of a ZKVM, but it is not. For a given program, a compiler may output different circuits for the same instruction set of a ZKVM or, more interestingly, generate different instruction sets or ZKVMs that are optimised for that particular program.

What is a program? One very common abstraction used in compilers is the control flow graph, which splits a program into basic blocks connected by edges that represent jumps. A compiler takes a program written in a high-level language and outputs a circuit whose inputs and outputs are finite field elements. A compiler should be able to \textit{decide} which circuits or blocks are derived from a program, and a prover should aim to minimise the cost of computing the subset of circuits that actually gets executed for a given input.

For example, given a program C=\{A, B, C, D, E, F, G\} and inputs x, w for which the path P=[A, C, F] is taken, a compiler will output the full set of circuits, while the prover’s costs will only depend on this path P, not on the blocks that are not taken. With folding schemes, the blocks along P can be folded into a single instance.
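
As a rough sketch of this claim (all names and constants here are hypothetical, not tied to any particular folding library), the compiler emits one circuit per block, but the prover’s cost is a function of the executed path only:

```rust
/// A basic block of the control flow graph, with an approximate
/// circuit size (number of constraints it compiles to).
struct Block {
    label: &'static str,
    num_constraints: u64,
}

/// Hypothetical per-step folding overhead, in "constraint-equivalent" units.
const FOLD_OVERHEAD: u64 = 10_000;

/// The prover only pays for the blocks on the executed path P,
/// plus a fixed folding overhead per folded step.
fn prover_cost(path: &[&Block]) -> u64 {
    path.iter().map(|b| b.num_constraints + FOLD_OVERHEAD).sum()
}

fn main() {
    // Program C = {A, B, C, D, E, F, G}: the compiler emits a circuit per block.
    let blocks: Vec<Block> = ["A", "B", "C", "D", "E", "F", "G"]
        .iter()
        .map(|&label| Block { label, num_constraints: 50_000 })
        .collect();

    // For inputs (x, w), only the path P = [A, C, F] is taken.
    let path: Vec<&Block> = blocks
        .iter()
        .filter(|b| ["A", "C", "F"].contains(&b.label))
        .collect();

    // Cost depends on the 3 executed blocks, not on all 7 compiled ones.
    println!("prover cost ~ {} constraints", prover_cost(&path));
}
```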

Different back-ends provide different trade-offs that affect the output of the compiler. This dance between compilers and proving systems will be the focus of this report.

In fact, we are not designing a ZKVM, but a compiler to a ZKVM. Whichever ZKVM is suitable for a given program is determined at compile time. Another way of seeing this is that the set of instructions \{ F_1, ..., F_N\} of the ZKVM is set \textit{dynamically} at compile time. A question may arise: what really is a ZKVM? Does a different instruction set render a different ZKVM, or can the same ZKVM hold different instruction sets?

We want to design a compiler that leverages the state-of-the-art work on folding schemes, thus outputting an NIVC-based ZKVM.

In the design of a ZKVM compiler, it is important to strike a balance between the size of each step function F and the overhead induced by recursion, accumulation or folding. At one end of this spectrum, the whole program is one big circuit; at the other end, we fold all primitive operations such as addition, equality or multiplication. In the former case there is no folding at all, and the resulting circuit will, in most cases, exceed the memory a prover typically has access to. In the latter case, the base overhead of folding may be comparable to the cost of proving itself, because the circuits are so small (and there are many of them!). Adding a small folding overhead to many small circuits removes almost all the advantages of folding. So, where is the balance?
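
To make this spectrum concrete, here is a purely illustrative cost model (assuming, as discussed further below, that proving a circuit of n gates costs on the order of n \log n, plus a fixed per-step folding overhead; every constant is made up):

```rust
/// Illustrative only: proving an n-gate circuit is modelled as n * log2(n),
/// and each folded step adds a fixed overhead. All constants are made up.
fn prove_cost(gates: f64) -> f64 {
    gates * gates.log2()
}

fn total_cost(num_steps: u64, gates_per_step: f64, fold_overhead: f64) -> f64 {
    num_steps as f64 * (prove_cost(gates_per_step) + fold_overhead)
}

fn main() {
    let program_gates = 2f64.powi(24); // whole program, ~16M gates
    let fold_overhead = 20_000.0;      // hypothetical per-step folding cost

    // One monolithic circuit: no folding overhead, but a huge memory footprint.
    println!("monolithic:   {:.2e}", total_cost(1, program_gates, 0.0));

    // Fold every primitive op (~8 gates each): the folding overhead dominates.
    let tiny = 8.0;
    println!("per-op folds: {:.2e}",
        total_cost((program_gates / tiny) as u64, tiny, fold_overhead));

    // Intermediate split into 2^12-gate steps: the overhead is amortised.
    let step = 2f64.powi(12);
    println!("2^12 steps:   {:.2e}",
        total_cost((program_gates / step) as u64, step, fold_overhead));
}
```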

NIVC-based ZKVMs can benefit from compiler passes. While it is common to \textit{fix} the set of functions \{F_1, ..., F_n \} to a basic instruction set, it doesn’t have to be so. A compiler can leverage the information it gathers from a given program to create these step functions F_i at compile time. They no longer need to be small instructions, but can be subsets of the whole program carved out at compile time. Given some design constraints, the compiler, acting as a front-end to a SNARK, can split the program F into a set of subprograms \{ F_1, ..., F_n \}. The compiler must be given a heuristic for the desired balance between circuit size and number of circuits. For example, an equilibrium might be found by creating circuits bounded at 2^{12} gates with only one witness column. The compiler must make an educated guess about how to decompose larger circuits into smaller ones.
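
One naive way such a heuristic could look, sketched below with made-up gate counts (a real compiler would also have to respect control-flow boundaries, reuse across blocks and witness-layout constraints): greedily pack consecutive blocks into step functions until the 2^{12}-gate budget would be exceeded.

```rust
/// Greedily pack consecutive basic blocks into step functions F_i,
/// each bounded by `budget` gates. A sketch of a splitting heuristic only.
fn split_into_steps(block_sizes: &[u64], budget: u64) -> Vec<Vec<usize>> {
    let mut steps: Vec<Vec<usize>> = vec![vec![]];
    let mut used = 0;
    for (i, &size) in block_sizes.iter().enumerate() {
        // Start a new step function if this block would overflow the budget.
        if used + size > budget && !steps.last().unwrap().is_empty() {
            steps.push(vec![]);
            used = 0;
        }
        steps.last_mut().unwrap().push(i);
        used += size;
    }
    steps
}

fn main() {
    // Hypothetical gate counts for blocks A..G of the running example.
    let block_sizes = [900, 3_000, 1_200, 2_500, 700, 3_800, 400];
    let budget = 1 << 12; // 2^12 gates per step function

    for (i, step) in split_into_steps(&block_sizes, budget).iter().enumerate() {
        let gates: u64 = step.iter().map(|&b| block_sizes[b]).sum();
        println!("F_{}: blocks {:?}, {} gates", i + 1, step, gates);
    }
}
```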

This holistic ZKVM compiler approach is not commonly seen, since most projects focus on building on top of a fixed instruction set, such as RISC-V or one compatible with the EVM. However, we do find this integral frontend-backend approach in projects like Jolt and Lasso.

Example

Let’s use an example to highlight the different ways of compiling a program. Let multi_algorithms be a program that serves as a database of algorithms: given a program identifier, it proves knowledge of the outcome of that algorithm. We have mixed some algorithms together to illustrate how real-world applications tend to be assembled.

For example, in the first program (i.e. program_id=0) we prove both knowledge of the n-th Fibonacci number and that it is a prime number.
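
The source of multi_algorithms is not reproduced here, so the following is a hypothetical sketch of what the branch for program_id = 0 might look like in plain Rust, before any compilation to circuits:

```rust
/// Hypothetical sketch of the `multi_algorithms` database: given a program
/// identifier, run (and later prove) the corresponding algorithm. Only
/// program_id = 0 is shown: the n-th Fibonacci number plus a primality check.
fn fibonacci(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn is_prime(x: u64) -> bool {
    if x < 2 {
        return false;
    }
    let mut d = 2;
    while d * d <= x {
        if x % d == 0 {
            return false;
        }
        d += 1;
    }
    true
}

fn multi_algorithms(program_id: u64, n: u64) -> bool {
    match program_id {
        // program_id = 0: prove knowledge of fib(n) and that it is prime.
        0 => is_prime(fibonacci(n)),
        // ... the other algorithms in the "database" would go here ...
        _ => unimplemented!("unknown program_id"),
    }
}

fn main() {
    // fib(11) = 89, which is prime.
    assert!(multi_algorithms(0, 11));
}
```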

Let’s revisit the three different compiling and proving approaches studied so far:

Monolithic circuits

This would be a direct compilation of the whole program into a single circuit for a given arithmetisation (e.g. Halo2’s Plonkish).

The circuit derived from this approach contains all the unused branches, since the compiler doesn’t know the inputs of the program. For a complex application, this results in a large circuit with high memory requirements. Since proving time is at least O(n \log n), where n is the number of gates (including all the unused branches), this approach turns out to be highly inefficient for large applications.

Fixed instruction sets

This is the approach taken by STARK ZKVMs: the set of (primitive) instructions is pre-determined, i.e. independent of the program. Underlying any computation over finite fields there are primitive operations such as addition or multiplication, as well as other common but more complex operations such as range checks.

The prover doesn’t prove the whole program at once, but proves each of these primitive operations, one at a time. In the case of STARKs, full recursion is used to aggregate these per-operation proofs.

Having a fixed set of instructions for every possible program turns out to be quite inefficient when the prover produces a proof for each of these opcodes, since the opcodes are too small to justify the per-proof overhead.

Dynamic instruction sets

Fixed-instruction-set zkVMs don’t have knowledge of the program (so they include instructions that a particular program might never use).

In contrast, using some heuristic, a compiler may determine the set of instructions for a given program. For our example, a possible instruction set is depicted below. The aim of such a compiler is to choose blocks of computation that are big enough to justify the overhead of folding, yet small enough to avoid blowing up memory and the other issues that come with large circuits.

This set must be crafted with some considerations in mind. Notice that functions such as range_check, which checks that a value is between 0 and 2^n, or is_prime are independent of the rest of the computation. This means they can be treated as separate “opcodes” in this dynamic instruction set. They are neither too complex nor too small, and they are used multiple times.
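
A possible shape of this dynamic instruction set for the example, sketched in plain Rust (the names are hypothetical; in an NIVC scheme each variant would correspond to one step function F_i, and step functions that never appear in the trace cost the prover nothing):

```rust
/// Hypothetical dynamic instruction set the compiler might emit for
/// multi_algorithms: range_check and is_prime are independent of the rest
/// of the computation, so they become their own "opcodes", while FibStep
/// covers the Fibonacci loop body.
#[derive(Debug)]
enum StepFunction {
    FibStep { iterations: u64 },
    RangeCheck { bits: u32 },
    IsPrime,
}

/// The executed trace for program_id = 0 as a sequence of step functions;
/// each entry would be folded into the running instance for its circuit.
fn trace_for_program_0(n: u64) -> Vec<StepFunction> {
    vec![
        StepFunction::FibStep { iterations: n },
        StepFunction::RangeCheck { bits: 64 },
        StepFunction::IsPrime,
    ]
}

fn main() {
    for step in trace_for_program_0(11) {
        println!("{:?}", step);
    }
}
```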

Splitting a program into subprograms that serve as an instruction set for an NIVC ZKVM is more an art than a science.


This post continues here


Very helpful comparisons! Some comments:

I think we need to clarify here when the compiler is run, since the word itself is ambiguous in this regard. Let’s say that there are two cases to start:

  • the compiler is run on the whole program, without knowing (x, w). Here the compiler cannot select only path P, since P is unknown, but the prover can (potentially) make proofs only for P, since the prover always knows (x, w). The verifier doesn’t need to do anything special, because the verifier knows as much as the compiler and can check what the compiler did.
  • the compiler is run once on the whole program to produce some intermediate output, and then the compiler (or at least a part of it) is run on that intermediate output for each instance, e.g. with knowledge of (x, w). Here the compiler can select path P, but the verifier also has to check (somehow) that what the compiler did was correct, since the verifier (presumably) does not know (x, w).

By comparison, fixed-instruction-set zkVMs don’t even have knowledge of the program (so they include instructions that a particular program might never use).

A diagram of what happens at various stages in time might be helpful here.

This sounds like it would be dependent on something like Kolmogorov complexity - is this the case?


In general, my takeaway from this post is that:

  • decomposition of a program into subprograms,
  • potential usage of a folding scheme (IVC/NIVC), and
  • function privacy via a layer of recursion

are separate choices, but that certain decisions may make sense only if done together, e.g. program decomposition makes sense if you have a flexible NIVC folding scheme, but if you have a fixed instruction set, you just have to decompose the program into the instructions which are available. Is that accurate? Is there any coupling between function privacy and either of the other two?


I don’t think so, but I may be wrong. Can you elaborate?

The idea of this decomposition is as follows: one of the costs in Lasso is proportional to the size of your lookup table (its number of entries) – for a table over 64-bit values this would be 2^{64}, which is too big. If instead we split it into 4 chunks, this cost goes down to 2^{16} per chunk, which is entirely practical. The reduction is exponential.
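
Spelled out with those numbers (this is just my reading of the argument above): a table over w-bit values has 2^{w} entries, and splitting it into c chunks of w/c bits each gives c tables of 2^{w/c} entries:

2^{w} \longrightarrow c \cdot 2^{w/c}, \qquad w = 64,\ c = 4: \quad 2^{64} \longrightarrow 4 \cdot 2^{16} = 2^{18}

so each per-chunk table has size 2^{16}, while the number of tables only grows from 1 to 4.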

Some of the feedback on this forum refers to: ZKVM Compilers: Goals

Imagine that we have a completely random function, such that to encode its evaluation table (regardless of how many tables we split it into) we need a number of bits on the order of:

\log_2\left(S_{codomain}^{S_{domain}}\right) = S_{domain} \cdot \log_2 S_{codomain}

As I understand it, chunking would split this evaluation table into many lookup tables - do we pay any cost proportional to the number of lookup tables, or not? I guess that’s a better way of phrasing my question.