by Gavi Galloway

In the fast-growing field of zero-knowledge proofs, a multitude of new proving stacks is emerging, each with its unique attributes, advantages, and limitations.

The demand for ZK applications is skyrocketing, driven by urgent needs across diverse sectors — from outsourcing computation for machine learning tasks, to combating the rise of deepfake technology, to scaling blockchains for greater efficiency and throughput.

Which brings us to the important choice of a proving stack. This is more than just a technical decision: It's a strategic one that can significantly influence what you can build, how you go about building it, the resilience of the end product, and how easily new developers can join the project as contributors.

Just as web developers benefit from understanding the key areas of differentiation between various JavaScript front-end stacks, anyone entering the ZK space — be it a developer choosing a framework to learn or a tech-savvy individual invested in understanding the trajectory of this industry — will find it invaluable to have explicit criteria for evaluating the sprawling landscape of ZK frameworks.

At Standard Crypto we've been gathering firsthand experience with different ZK developer frameworks, explicitly aiming to understand these crucial differentiators and share our insights with the broader community.

Here we're highlighting one such important distinction: The various models for computations performed "outside" of the circuit. This distinction can greatly affect both prover performance and developer experience. Let's jump in, shall we?

Let's unpack the meaning of computations that occur either "inside" or "outside" of the circuit, a feature we'll refer to as the proving stack’s **execution model**.

To grasp the significance of this, consider a simple example:

Problem Statement: *Given a positive integer X, prove that X is not prime*

Imagine you have two Python functions. The first, *find_factors*, takes an integer *X* and returns two numbers *i* and *j* that are factors of *X*, thereby proving *X* is composite.
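The original listing isn't reproduced here, but a minimal sketch of such a function (our own illustration, using trial division) could look like this:

```python
def find_factors(x: int) -> tuple[int, int]:
    """Return a pair (i, j) of nontrivial factors with i * j == x,
    proving that x is composite. Raises ValueError if x is prime."""
    # Trial division up to sqrt(x) -- this is the "heavy lifting".
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d, x // d
        d += 1
    raise ValueError(f"{x} has no nontrivial factors")
```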

The second function, *assert_composite*, simply verifies that *i* and *j* are indeed factors of *X*.
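A corresponding sketch of the checker (again our own illustration) needs only a few comparisons:

```python
def assert_composite(x: int, i: int, j: int) -> None:
    """Verify that i and j are nontrivial factors of x; raise otherwise."""
    assert 1 < i < x, "i must be a nontrivial factor of x"
    assert 1 < j < x, "j must be a nontrivial factor of x"
    assert i * j == x, "i * j must equal x"
```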

Note that an error-free run of either of these two functions alone is sufficient to prove that *X* is not prime.

The first program does the heavy lifting — it finds the factors. The second program simply checks that the factors are correct. However, it needs *i* and *j* to be provided in advance, which means either the first or an alternate program must have already done the work to find them.

This distinction mirrors the concept of in-circuit and out-of-circuit computation in proving stacks. Some frameworks allow (or even require) a developer to perform all the computational work "inside" the circuit, while others enable performing some of the work in advance, providing the results to the circuit and merely checking the constraints.

Let's begin by understanding the most straightforward way to model the roles of the prover and the verifier:

*The Prover runs a program on some input to produce an (optional) output and a proof of its execution. The Verifier confirms that each instruction of that program was run correctly on that input, without the Verifier necessarily knowing the value of the input itself.*

This model is the most approachable for developers familiar with traditional programming but new to ZK contexts. A developer writes a single version of their program that captures the entire computation process. The proving stack handles the intricacies of arithmetization and witness generation, shielding the developer from the nuances of how the prover and verifier utilize the source code. In this post we refer to this as the **simple execution model**, emphasizing its straightforward nature for the developer, even though its underlying implementation can be quite complex.

The *find_factors* function introduced earlier is a clear example of the simple execution model if we were to plug it into a hypothetical proving stack compatible with Python code. For an example in a real-world proving stack, consider the following implementation of the same program in Cairo 1.0, the language of Starknet:

*A Cairo 1.0 program that proves X is composite by computing a pair of its divisors. Note that we use recursion instead of loops.*

The simple execution model is particularly useful when the goal is to attract developers seamlessly. It's familiar to non-ZK developers and offers minimal friction for porting existing code. However, this ease comes with trade-offs: larger and more complex circuits, which can lead to slower prover performance.

Projects that commonly adopt the simple execution model include programmable blockchains and ZK virtual machines. Starknet, Aleo, RISC0, and the multiple zkEVM efforts stand out as notable examples. For these projects, the ability to support a wide range of programs and foster a robust developer ecosystem is paramount. The advantages of a more complex model that exposes the separation between prover and verifier might not outweigh the benefits of simplicity in these cases, though the performance tradeoff remains: the circuit can grow large for complex applications.

Another model for the prover and verifier runtimes can be described as follows:

*The Prover runs a known program on some secret input. Certain instructions in that program are flagged as Prover-only. These special instructions assist the Prover in filling in intermediate values during computation for the Verifier's reference. The Verifier, in turn, disregards the Prover-only instructions and verifies that all other instructions were executed correctly.*

The **prover hints** model offers a balance between the conventional computer architecture model and the intricacies of ZK circuit implementations.

Consider the following snippet in the Circom DSL:

*A sample Circom implementation of the assert_composite program*

In this example, the developer encounters the notion that a prover might be *advised* on certain computations to perform externally. The results of these computations are then used by an honest prover to populate specific parts of the circuit’s memory. It's essential to note that the verifier should not blindly trust the results of out-of-circuit computations; they must be verified in-circuit.

The prover hints model introduces developers to the notions of *constraints* and *signals*, where "signals" are the program's variables, over which constraints are defined. It emphasizes the difference in assumptions and threat models between the prover and verifier. The verifier focuses on the results of the prover's work, often without delving into the methodology. Importantly, prover-only computations are mere "hints" and can be disregarded by a malicious prover. This can lead to vulnerabilities if the circuit's constraints aren't meticulously defined.

While the prover hints model presents a steeper learning curve than the simple execution model, it's a worthy investment, akin to a developer transitioning into functional or asynchronous programming paradigms in other languages. This model offers a flexible abstraction, allowing developers to decide the extent to which they utilize prover hints. Avoiding prover hints entirely reduces back to the simple execution model, while extensive use can significantly optimize performance by offloading substantial logic from the circuit.
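To make this split concrete without Circom syntax, here is an illustrative Python sketch (all names are our own, not drawn from any real framework) that separates the prover-only hint from the constraints the verifier actually checks:

```python
def find_factors_hint(x: int) -> tuple[int, int]:
    # Out-of-circuit trial division. The verifier never sees this code,
    # so its output must not be trusted without in-circuit checks.
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d, x // d
        d += 1
    raise ValueError("x is prime; no witness exists")

def prove_composite(x: int) -> dict:
    # Prover-only step: use the hint to fill in witness values.
    i, j = find_factors_hint(x)
    return {"x": x, "i": i, "j": j}

def verify_constraints(witness: dict) -> bool:
    # In-circuit constraints: the only checks the verifier relies on.
    x, i, j = witness["x"], witness["i"], witness["j"]
    return i * j == x and 1 < i < x and 1 < j < x
```

A malicious prover can skip the hint and supply arbitrary values for *i* and *j*, which is exactly why `verify_constraints` must fully pin down what a valid witness looks like.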

Moving even further away from the simple execution model gives us, in turn, a better understanding of the underlying details of these proving stacks.

A more precise model of the prover and verifier “runtimes” is thus:

*Both parties agree on a set of constraints defined over a set of variables. The Prover must know or derive values for all variables such that every constraint is met, and must provide to the Verifier a “witness” attesting to its knowledge of those values. The Verifier knows only the constraints and does not learn the values used to satisfy them.*

As its name implies, the **fully segregated** model distinctly separates the task of determining valid variable values from the task of defining a circuit’s constraints. Code written in this model will have a section that defines the circuit, separate from any code used in creating witnesses.
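As a toy illustration of that separation (our own sketch, with constraints modeled as multiplication triples in the spirit of R1CS), the constraint definition and the witness generation can live in entirely different pieces of code:

```python
# Constraint system: each triple (a, b, c) asserts witness[a] * witness[b] == witness[c].
COMPOSITE_CONSTRAINTS = [("i", "j", "x")]

def satisfies(constraints, witness):
    """Check every constraint against a candidate witness assignment."""
    return all(witness[a] * witness[b] == witness[c] for a, b, c in constraints)

# Witness generation is written separately, in whatever way the prover likes.
witness = {"x": 15, "i": 3, "j": 5}
assert satisfies(COMPOSITE_CONSTRAINTS, witness)
```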

This model grants developers unparalleled flexibility, allowing them to optimize their circuits by focusing solely on the essential constraints. Given the current limitations on prover time in ZK applications, streamlining the circuit's logic can have a profound impact on performance.

However, this power is not without its challenges. The fully segregated model demands the most from developers compared to the other models. They must independently manage both the constraint-generation and witness-generation processes. Furthermore, circuits designed in this model might be more challenging to decipher and audit than those crafted using the other models. It also poses the steepest learning curve for developers transitioning from non-ZK backgrounds, as it deviates the most from conventional programming paradigms.

The choice of execution model significantly influences the design and deployment of ZK applications. Let's delve into the themes and questions that determine the suitability of each approach.

ZK applications can vary in their expectations of who produces proofs. Some expect end-users to generate their own, while others delegate this to third parties due to computational costs. This distinction raises the question: Who performs any out-of-circuit computation, the user or the prover?

Imagine integrating our *assert_composite* function into a blockchain app, where we expect to calculate *X*'s factors out-of-circuit for efficiency.

If a third-party prover is tasked with proving a call to *find_factors* for a large number as *X*, they might waste computational resources only to discover *X* is actually prime — therefore the transaction is unprovable and they cannot collect fees for it on-chain. To compensate for this, the blockchain’s protocol could require a proof of the out-of-circuit computation to receive payment — defeating the purpose of hoisting it from the circuit body to begin with. Or the protocol could impose constraints on the types of out-of-circuit computations that can be written, which guarantees provers their efforts will be worthwhile but limits the power of what can be expressed in those computations.

Conversely, if the *transaction signer* undertakes the factorization rather than the prover, they face the same risk of a potentially extensive and unfeasible computation. This becomes especially problematic when *any* transaction signer could indirectly interact with our application and thus become exposed to this computational burden.

Consider a ZK application where someone discloses a sanitized version of sensitive documents to an auditor, accompanied by a proof of their faithful edits. Here, privacy demands that the document owner produces the proof, merging the roles of prover and end-user. This scenario naturally supports out-of-circuit computations without complications.

The simple execution model is beginner-friendly, offering a familiar environment that abstracts many ZK complexities. If performance allows, this model is excellent for attracting developers. Conversely, the fully segregated model shines in performance-critical applications where out-of-circuit computations offer tangible benefits and where developers can handle the added complexity.

Navigating the landscape of ZK tooling can be a challenge. The choices made around in-circuit and out-of-circuit computation aren't just technical details — they can directly impact performance, code maintainability, and overall developer workflow.

We've focused on one key aspect here, but there's more ground to cover. If you're building or investing in this space, understanding these nuances is crucial. Keep an eye out for our upcoming material as we dive deeper into the mechanics and best practices of ZK development.