RLN Partial Proofs in Practice

Introduction and context

In a recent post, a technique was proposed to split RLN proof generation into two phases: a precomputable “partial” proof, and a lightweight finishing step for the per-message inputs.

The idea exploits the fact that over 90% of the RLN witness only changes when the membership tree is modified. Since users typically send many messages between tree updates, the expensive multi-scalar multiplication (MSM) over the fixed witness elements can be cached, yielding a ~3x speedup.

We implemented this optimization in Zerokit v2.0.0, along with an example. This post covers the usage and best practices that arise when integrating partial proofs into a production system, in the context of a “Centralized Prover Service” like the rln-prover in the Status L2 architecture.

How to use partial proofs in Zerokit

In standard RLN, you build a full witness and generate the entire Groth16 proof in one shot. With partial proofs, you split this into two steps:

  • Precompute: Take the slow-changing parts of the RLNPartialWitnessInput (identity_secret, user_message_limit, path_elements, identity_path_index) and generate a PartialProof. This is the expensive step, but you only need to redo it when the root is stale.
  • Finish: When you actually want to send a message, take the cached PartialProof and combine it with RLNWitnessInput, which contains the per-message inputs (signal_hash, message_id, external_nullifier) to produce the final Proof. This step is cheap, roughly 3x faster than generating from scratch.

The resulting proof is a normal Groth16 proof. Verifiers can’t tell it was generated in two phases, and verification works exactly the same way.

In the Zerokit API, this is just two calls:
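To make the split concrete, here is a self-contained toy model of the two phases. All names (`precompute_partial`, `finish_proof`, `full_proof`) and the hashing stand-in are illustrative, not the actual Zerokit signatures; the point is only that the expensive step runs once per root, the finishing step runs per message, and the composed result matches one-shot generation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Fixed witness parts: these only change when the membership tree changes.
#[derive(Hash)]
struct FixedWitness {
    identity_secret: u64,
    user_message_limit: u64,
    path_elements: Vec<u64>,
    identity_path_index: Vec<u8>,
}

// Per-message inputs.
#[derive(Hash)]
struct MessageInputs {
    signal_hash: u64,
    message_id: u64,
    external_nullifier: u64,
}

// Stand-in for the expensive MSM over the fixed witness elements.
fn precompute_partial(fixed: &FixedWitness) -> u64 {
    let mut h = DefaultHasher::new();
    fixed.hash(&mut h);
    h.finish()
}

// Cheap finishing step: fold the per-message inputs into the cached result.
fn finish_proof(partial: u64, msg: &MessageInputs) -> u64 {
    let mut h = DefaultHasher::new();
    partial.hash(&mut h);
    msg.hash(&mut h);
    h.finish()
}

// One-shot generation, for comparison.
fn full_proof(fixed: &FixedWitness, msg: &MessageInputs) -> u64 {
    finish_proof(precompute_partial(fixed), msg)
}

fn main() {
    let fixed = FixedWitness {
        identity_secret: 42,
        user_message_limit: 10,
        path_elements: vec![1, 2, 3],
        identity_path_index: vec![0, 1, 0],
    };
    // Cache the partial result once, reuse it for every message.
    let partial = precompute_partial(&fixed);
    let m1 = MessageInputs { signal_hash: 7, message_id: 0, external_nullifier: 99 };
    // The two-phase result matches one-shot generation.
    assert_eq!(finish_proof(partial, &m1), full_proof(&fixed, &m1));
}
```

In the real API the cached value is a `PartialProof` tied to a specific root, which is exactly why staleness (next section) matters.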

When is the root “stale”?

PartialProof is tied to a specific Merkle root. The moment someone registers or gets removed, the root changes, and your cached partial proof points at a root that no longer exists.

The fix is to keep a small window of recent roots instead of only accepting the current one. Zerokit supports this with verify_with_roots, which checks a proof against a set of acceptable roots, unlike verify_rln_proof, which only checks against the current tree root.
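The root window itself is just a bounded queue. A minimal sketch (the `RootWindow` type and its methods are hypothetical, not a Zerokit API):

```rust
use std::collections::VecDeque;

type Root = u64; // stand-in for a Merkle root

// A bounded window of recently seen roots.
struct RootWindow {
    roots: VecDeque<Root>,
    capacity: usize,
}

impl RootWindow {
    fn new(capacity: usize) -> Self {
        Self { roots: VecDeque::new(), capacity }
    }

    // Called whenever the tree produces a new root (e.g. a registration).
    fn push(&mut self, root: Root) {
        if self.roots.len() == self.capacity {
            self.roots.pop_front(); // evict the oldest root
        }
        self.roots.push_back(root);
    }

    // A proof is acceptable if its root is still inside the window.
    fn accepts(&self, root: Root) -> bool {
        self.roots.contains(&root)
    }
}

fn main() {
    let mut window = RootWindow::new(3);
    for r in [10, 11, 12, 13] {
        window.push(r);
    }
    assert!(!window.accepts(10)); // evicted: only the last 3 roots remain
    assert!(window.accepts(11) && window.accepts(13));
}
```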

When a member leaves or gets slashed

Unlike the register case, where your leaf is still in the tree and an old root stays harmless regardless of the window size, leaf removal is a problem: the old root that still included that member doesn’t just go away. Any proof generated against it, partial or full, still verifies.

The fix is straightforward: when a leaf is removed, reset the root history window so you only accept roots from after the removal. This also invalidates all cached partial proofs tied to those old roots; you’ll need to regenerate them against the new root.
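The two rules can be sketched together in one piece of state (all types here are illustrative stand-ins, not Zerokit code): registration only appends to the root window, while removal clears the window and drops every cached partial proof that was built against a now-stale root.

```rust
use std::collections::{HashMap, VecDeque};

type Root = u64;
type MemberId = u64;

// A cached partial proof is only valid for the root it was built against.
struct CachedPartial {
    root: Root,
    partial_proof: Vec<u8>, // opaque stand-in for the real PartialProof
}

struct ProverState {
    root_window: VecDeque<Root>,
    partials: HashMap<MemberId, CachedPartial>,
}

impl ProverState {
    // Registration: the root changes, but old roots stay harmless,
    // so just append (capacity bounding omitted for brevity).
    fn on_register(&mut self, new_root: Root) {
        self.root_window.push_back(new_root);
    }

    // Removal/slashing: old roots still contain the removed member,
    // so wipe the window and every partial proof tied to a stale root.
    fn on_removal(&mut self, new_root: Root) {
        self.root_window.clear();
        self.root_window.push_back(new_root);
        self.partials.retain(|_, cached| cached.root == new_root);
    }
}

fn main() {
    let mut state = ProverState {
        root_window: VecDeque::from([1]),
        partials: HashMap::from([(42, CachedPartial { root: 1, partial_proof: vec![0u8] })]),
    };
    state.on_register(2); // window is now [1, 2]; old partials still usable
    state.on_removal(3);  // reset: only root 3 is accepted from here on
    assert!(state.partials.is_empty()); // everything tied to roots 1/2 is gone
    assert_eq!(state.root_window, VecDeque::from([3]));
}
```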

Best practices for partial proofs when membership changes

The most naive approach is to cache each partial proof along with the latest (root, path_elements, path_index) for that member, for use in full-proof finalization, and to regenerate all partial proofs whenever the Merkle tree changes. This is what the example does: it is simple to reason about, but comes with a lot of overhead when you have many users or frequent tree updates.

A few ways to do better:

  • If you have a small number of members and most root changes are from registrations, you can safely keep all the roots around for a while. Over time, dequeue the oldest roots from the window and only regenerate partial proofs that were still tied to those removed roots.

  • If you have more members, you can keep the same dequeue architecture and introduce a more sophisticated caching layer (something like LRU or LFU) to only precompute partial proofs for members who generate proofs frequently, and fall back to normal full proof generation for the occasional ones.

  • In the case of a member leaving or being slashed, you have to reset the cached-roots window and invalidate all cached partial proofs. But you don’t need to regenerate everything at once: prioritize higher-tier users first (e.g., Status L2 manages users by tier according to their accumulated Karma tokens), then regenerate the rest gradually based on the current rate of incoming requests and on who was generating the most proofs before the cache was invalidated. That way, you don’t spend your entire hardware budget recomputing partial proofs all at once, which can stall the system and block incoming requests for new proofs.
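The prioritization in the last bullet amounts to a sort: tier first, recent activity as a tie-breaker. A minimal sketch (the `Member` fields, including the Karma-derived tier, are hypothetical stand-ins):

```rust
// A member eligible for partial-proof regeneration after a cache-wide
// invalidation.
struct Member {
    id: u64,
    tier: u8,           // e.g. derived from accumulated Karma tokens
    recent_proofs: u32, // proofs generated before the cache was invalidated
}

// Order members for regeneration: higher tier first, then higher
// recent proof rate within the same tier.
fn regeneration_order(mut members: Vec<Member>) -> Vec<u64> {
    members.sort_by(|a, b| {
        b.tier
            .cmp(&a.tier)
            .then(b.recent_proofs.cmp(&a.recent_proofs))
    });
    members.into_iter().map(|m| m.id).collect()
}

fn main() {
    let members = vec![
        Member { id: 1, tier: 0, recent_proofs: 50 },
        Member { id: 2, tier: 2, recent_proofs: 3 },
        Member { id: 3, tier: 2, recent_proofs: 9 },
    ];
    // Tier wins; within a tier, the more active member comes first.
    assert_eq!(regeneration_order(members), vec![3, 2, 1]);
}
```

A worker can then drain this ordered list gradually, interleaved with serving incoming proof requests.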

Some other ideas:

  • Lazy regeneration: Only maintain partial proofs for active users. Combine with LRU and a threshold (e.g., only cache partial proofs for users who generated more than 2 proofs recently). Everyone else falls back to normal proof generation.

  • Background job: Kick off partial proof regeneration in a background thread during idle periods.

  • Multiple trees: If your system can use multiple smaller trees instead of one large tree, partial proofs are cheaper to regenerate and go stale less often.
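The lazy-regeneration threshold from the first idea above can be sketched as follows (the type names and the threshold of 2 proofs are illustrative, taken from the example in the bullet):

```rust
use std::collections::HashMap;

// Only members who generated more than this many proofs in the recent
// window get a precomputed partial proof; everyone else falls back to
// normal full proof generation.
const CACHE_THRESHOLD: u32 = 2;

struct ActivityTracker {
    recent_counts: HashMap<u64, u32>, // member id -> proofs in recent window
}

impl ActivityTracker {
    fn new() -> Self {
        Self { recent_counts: HashMap::new() }
    }

    // Record one generated proof for a member.
    fn record_proof(&mut self, member: u64) {
        *self.recent_counts.entry(member).or_insert(0) += 1;
    }

    // Decide whether this member is active enough to deserve a cached
    // partial proof.
    fn should_cache_partial(&self, member: u64) -> bool {
        self.recent_counts.get(&member).copied().unwrap_or(0) > CACHE_THRESHOLD
    }
}

fn main() {
    let mut tracker = ActivityTracker::new();
    for _ in 0..3 {
        tracker.record_proof(7); // active user: 3 proofs recently
    }
    tracker.record_proof(8); // occasional user: 1 proof
    assert!(tracker.should_cache_partial(7)); // 3 > threshold of 2
    assert!(!tracker.should_cache_partial(8)); // falls back to full proofs
}
```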

Conclusion

Partial proof generation is a nice speedup, but it’s not free: it adds caching logic, invalidation handling, and more moving parts to your system. Benchmark your actual workload: cache hit rate, tree churn, proofs per user between updates, and regeneration cost during bursts.
If your system removes users too frequently, partial proofs might not be worth the complexity; just stick with normal proof generation.
