Modular Prover Design

Zircuit operates a comprehensive zero-knowledge proving system built on a modular architecture to efficiently process L2 blocks and submit proofs to L1.

Core Architecture

The Proof Orchestrator serves as the central coordinator, monitoring L2 blocks and organizing them into proving batches. It uses NATS as a message bus to queue blocks awaiting proof, stores proof artifacts in a database, and manages block traces in S3. The team is transitioning to a more modular, configuration-driven version that decouples the orchestrator from specific proving strategies.
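The batching step the orchestrator performs can be sketched in a few lines. This is a hypothetical illustration, not the real implementation: the in-memory deque stands in for the NATS queue of blocks awaiting proof, and `BATCH_SIZE` is an invented, illustrative constant.

```python
from collections import deque

# Hypothetical stand-in for the NATS queue of L2 blocks awaiting proof.
pending_blocks = deque(range(100, 108))  # L2 block numbers

BATCH_SIZE = 4  # illustrative; the real batching policy is configurable

def next_batch(queue, size):
    """Drain up to `size` blocks from the queue into one proving batch."""
    batch = []
    while queue and len(batch) < size:
        batch.append(queue.popleft())
    return batch

batches = []
while pending_blocks:
    batches.append(next_batch(pending_blocks, BATCH_SIZE))
# batches == [[100, 101, 102, 103], [104, 105, 106, 107]]
```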

The Proposer handles the final step of submitting output roots and their corresponding proofs to L1 smart contracts. Running every 5 minutes, it consumes proofs from the orchestrator, forms transactions, signs them with a secure private key, and submits them to L1 nodes via proxyd connections.
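One Proposer iteration (consume a proof, form and sign a transaction, submit it) can be sketched as below. All three callables are hypothetical stand-ins for the orchestrator feed, the key signer, and the proxyd connection; none of the field names come from the real system.

```python
PROPOSAL_INTERVAL_SECS = 5 * 60  # the Proposer runs every 5 minutes

def run_proposer_once(fetch_proof, sign, submit):
    """One Proposer pass: consume a proof from the orchestrator, form a
    transaction, sign it, and submit it to L1. Returns None if no proof
    is ready yet."""
    proof = fetch_proof()
    if proof is None:
        return None  # nothing ready to propose this round
    tx = {"output_root": proof["output_root"], "proof": proof["bytes"]}
    return submit(sign(tx))

# Minimal usage with in-memory fakes:
queue = [{"output_root": "0xabc", "bytes": b"\x01"}]
receipt = run_proposer_once(
    fetch_proof=lambda: queue.pop(0) if queue else None,
    sign=lambda tx: {**tx, "signature": "0xsig"},
    submit=lambda signed: {"status": "included", "tx": signed},
)
```

In production the loop would sleep `PROPOSAL_INTERVAL_SECS` between passes rather than run once.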

Proving Technology Stack

Zircuit has evolved from dedicated zkEVM circuits to a zkVM-based approach. This represents a significant architectural improvement, allowing any program to be proven by simply running it in the VM rather than requiring hardcoded circuit constraints. The system uses Kona (implementing Optimism's state transition) as the program being proven, with Zircuit-specific modifications layered on top.

Two-Phase Proving Pipeline

  1. Range Proofs: Generate proofs for multiple L2 blocks, ensuring correct state transitions, balance computations, and smart contract execution

  2. Aggregate Proofs: Combine multiple range proofs to maximize the number of L2 transactions rolled up in a single L1 proof verification
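The two phases compose as follows; this is a shape-only sketch in which the "proofs" are plain dictionaries, and the chunk size and field names are invented for illustration.

```python
def range_proof(blocks):
    """Hypothetical range proof: attests to correct state transitions
    across a contiguous run of L2 blocks."""
    return {"kind": "range", "first": blocks[0], "last": blocks[-1]}

def aggregate_proof(range_proofs):
    """Hypothetical aggregate proof: wraps many range proofs so a single
    L1 verification covers all of their blocks."""
    return {
        "kind": "aggregate",
        "first": range_proofs[0]["first"],
        "last": range_proofs[-1]["last"],
        "ranges": len(range_proofs),
    }

blocks = list(range(200, 212))
chunk = 4  # illustrative range size
ranges = [range_proof(blocks[i:i + chunk]) for i in range(0, len(blocks), chunk)]
final = aggregate_proof(ranges)
# one L1 verification now covers blocks 200..211 via 3 range proofs
```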

Advanced Components

Zircuit uses a zkVM abstraction layer that enables programs to run across interchangeable backends (currently SP1). It separates execution into three components:

  • Program (P): Core computation running in the zkVM

  • Host Program (HP): External setup, key generation, input preparation

  • Guest Program (GP): In-zkVM execution reading hints and committing outputs
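The Program / Host / Guest split can be sketched as below. This is a toy model under loud assumptions: the "zkVM" is just a function call, the hint contents are invented, and `prove` is a stand-in for a backend's proving entry point, not SP1's actual API.

```python
def host_program(l2_blocks):
    """Host (HP): runs outside the zkVM, preparing inputs and hints."""
    return {"blocks": l2_blocks, "hints": {"state_root": "0x..."}}

def guest_program(inputs):
    """Guest (GP): runs inside the zkVM, reads hints, commits public outputs."""
    return {"first": inputs["blocks"][0], "last": inputs["blocks"][-1]}

def prove(program, inputs):
    """Stand-in for a backend's prove(): execute the guest program and
    emit a receipt pairing its committed outputs with proof bytes."""
    output = program(inputs)
    return {"public_values": output, "proof_bytes": b"..."}

receipt = prove(guest_program, host_program([7, 8, 9]))
```

Because the guest is an ordinary program, swapping the zkVM backend only changes `prove`, not the program itself, which is the point of the abstraction.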

The Modular Orchestrator represents the next-generation architecture with four key components:

  • Dispatcher: Processes individual messages and updates status

  • Collector: Groups messages using strategies (Sequential for blocks, Match for shared fields)

  • Message Bus: Handles message delivery and state management

  • Executor: Encapsulates the actual execution logic (currently invokes binaries that accept and emit JSON)
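The two Collector strategies can be sketched as follows. This is an illustrative model, not the real orchestrator code: message shapes and field names are invented, and each strategy is reduced to a grouping function.

```python
def collect_sequential(messages, key="block"):
    """Sequential strategy sketch: group messages whose block numbers
    form contiguous runs, preserving block order."""
    groups, run = [], []
    for msg in sorted(messages, key=lambda m: m[key]):
        if run and msg[key] != run[-1][key] + 1:
            groups.append(run)  # gap found: close the current run
            run = []
        run.append(msg)
    if run:
        groups.append(run)
    return groups

def collect_match(messages, field):
    """Match strategy sketch: group messages sharing the same field value."""
    groups = {}
    for msg in messages:
        groups.setdefault(msg[field], []).append(msg)
    return groups

msgs = [{"block": 1, "batch": "a"}, {"block": 2, "batch": "a"},
        {"block": 5, "batch": "b"}]
seq = collect_sequential(msgs)           # [[blocks 1, 2], [block 5]]
by_batch = collect_match(msgs, "batch")  # {"a": [two msgs], "b": [one msg]}
```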

Development Considerations

The architecture emphasizes modularity and flexibility, allowing for easy updates to proving strategies and backend systems without requiring complete infrastructure overhauls. This design supports our goal of maintaining cost-effective, frequent state root submissions while handling the computational demands of ZK proof generation.
