TL;DR: no matter how clever we are about the graph construction, unsealing will cost over 4 hashes per label, unless we learn more about predecessor robustness or depth robustness. In a companion note, I explain why the cost of unsealing in SDR/SPR can’t be improved much; this note applies more generally to any graph construction.
This note is limited to graph-based PoS constructions, in which initialization computes a hash-based graph and execution probes some graph nodes (there's only one non-graph-based construction, and its concrete parameters are unclear). It is also limited to symmetric constructions (i.e., ones in which sealing and unsealing have the same cost or latency, depending on whether we are in the cost or latency model). We will assume a parameter $\delta$ that is an upper bound on the fraction of nodes that may be incorrectly computed (i.e., so-called "red pebbles").
Recall the definition of predecessor robustness: if you pebble any $\alpha$ fraction of nodes in the graph, there's a single-sink unpebbled subgraph of fractional size $\beta$. Recall the definition of depth robustness: same, but this subgraph is a path.
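To make these definitions concrete, here is a brute-force checker in Python (my illustration only; the toy DAG and the thresholds are made up, and the enumeration is exponential, so it is only usable on tiny graphs). The two helpers compute exactly the quantities the proof below relies on: the largest single-sink fully-unpebbled subgraph with sink $v$, and the longest fully-unpebbled path ending at $v$.

```python
from itertools import combinations

# dag maps each node to the list of its parents (edges point parent -> child).

def unpebbled_closure(dag, v, pebbles):
    """v plus every ancestor reachable from v without crossing a pebble.
    This is the largest single-sink fully-unpebbled subgraph with sink v."""
    if v in pebbles:
        return set()
    need, stack = {v}, [v]
    while stack:
        for u in dag[stack.pop()]:
            if u not in pebbles and u not in need:
                need.add(u)
                stack.append(u)
    return need

def unpebbled_depth(dag, v, pebbles, memo=None):
    """Length of the longest fully-unpebbled path ending at v."""
    memo = {} if memo is None else memo
    if v in pebbles:
        return 0
    if v not in memo:
        memo[v] = 1 + max((unpebbled_depth(dag, u, pebbles, memo) for u in dag[v]), default=0)
    return memo[v]

def is_robust(dag, alpha, beta, depth=False):
    """Brute-force predecessor (or, with depth=True, depth) robustness check:
    every pebbling of an alpha fraction of nodes must leave some node heading
    an unpebbled subgraph (or path) of fractional size at least beta."""
    nodes, n = list(dag), len(dag)
    for chosen in combinations(nodes, int(alpha * n)):
        p = set(chosen)
        if depth:
            worst = max(unpebbled_depth(dag, v, p, {}) for v in nodes)
        else:
            worst = max(len(unpebbled_closure(dag, v, p)) for v in nodes)
        if worst < beta * n:
            return False  # found a pebbling that defeats the bound
    return True

# Made-up example: a simple path 0 -> 1 -> ... -> 7.
path = {i: ([i - 1] if i else []) for i in range(8)}
print(is_robust(path, alpha=0.25, beta=0.25))              # predecessor robustness
print(is_robust(path, alpha=0.25, beta=0.25, depth=True))  # depth robustness
```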
Recall that the cost (or latency) ratio $\epsilon$ is the ratio between the cost (or latency) of
- answering some (reasonably likely) query given some storage, and
- pebbling the whole graph from scratch.
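In symbols, for a pebble set $P$ and a queried node $v$ (my notation, not necessarily the companion note's; I normalize both ratios by $n$, following the convention used in the proof below that pebbling the whole graph costs $n$ hashes):

$$
\epsilon_{\mathrm{cost}}(P, v) = \frac{\left|\{v\} \cup \mathrm{UnpebbledAncestors}(v)\right|}{n},
\qquad
\epsilon_{\mathrm{lat}}(P, v) = \frac{\mathrm{LongestUnpebbledPath}(v)}{n},
$$

where $\mathrm{UnpebbledAncestors}(v)$ is the set of ancestors of $v$ reachable from $v$ through unpebbled nodes, and $\mathrm{LongestUnpebbledPath}(v)$ is the length of the longest fully-unpebbled path ending at $v$.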
Suppose the PoS consists of pebbling some graph and then outputting some (or all) nodes of this graph. Let the number of nodes output be $m$ and the total graph size be $n$. For example, in SDR, the graph is 11 stacked layers of $m$ nodes each and the output is the last layer, so $n = 11m$. However, we do not assume that the graph is necessarily layered, or even that the output nodes are necessarily the tail end of the pebbling.
Note that $m/n$ relates directly to the cost of unsealing: to unseal, you need to compute the whole graph, so for every node of storage, you perform $n/m$ hashes.
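As a worked check with the SDR example above ($n = 11m$):

$$
\frac{n}{m} = \frac{11m}{m} = 11 \text{ hashes per stored label to unseal.}
$$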
Claim. If a graph has space gap $\Delta$ and cost (or latency) ratio $\epsilon$, then it is predecessor (or depth) robust for $\alpha = (1-\Delta)\frac{m}{n} + \delta$ and $\beta = \epsilon$.
Proof. Consider an adversary who places $(1-\Delta)m$ pebbles on the graph, plus up to $\delta n$ red pebbles, i.e., pebbles an $\alpha = (1-\Delta)\frac{m}{n} + \delta$ fraction of the nodes. Since the graph has space gap $\Delta$, we know that during the execution phase, in response to some queries, the adversary will have to do some fraction $\epsilon$ of the original work (in the cost model); in the latency model, we additionally know that this work is sequential. Let $k = \epsilon n$. Because queries are to specific nodes, that means that, no matter how the adversary places the pebbles, there must be at least one node that takes $k$ hashes (in the cost model) or $k$ sequential hashes (in the latency model) to compute. But if a node takes $k$ hashes to compute, then it is the single sink of a fully unpebbled subgraph of size $k$ (else it could be computed with fewer hashes, because only the nodes that are its ancestors need to be computed). Similarly, if it takes $k$ sequential hashes to compute, then it is the last node on a path of length $k$ (else it could be computed in less time, by greedily computing everything possible in parallel).
So a space gap of $\Delta$ and cost (or latency) ratio $\epsilon$ implies predecessor robustness (or depth robustness) for $\alpha = (1-\Delta)\frac{m}{n} + \delta$ and $\beta = \epsilon$. QED.
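Here is the proof's key step as runnable Python, on a made-up DAG and pebbling (illustrative only): the cheapest way to recompute a queried node costs exactly the size of its unpebbled ancestor closure, and the fastest parallel schedule takes exactly the length of the longest unpebbled path ending at it.

```python
# Toy run of the proof's key step (the DAG, pebbles, and outputs are made up):
# find the most expensive query node, then exhibit the single-sink unpebbled
# subgraph (cost model) and the unpebbled path (latency model) witnessing it.

dag = {0: [], 1: [0], 2: [1], 3: [0, 2], 4: [2, 3], 5: [3, 4]}  # node -> parents
pebbles = {1, 3}
outputs = [4, 5]  # the nodes the verifier may query

def closure(v):
    """v plus its unpebbled ancestors: the k hashes needed to recompute v."""
    if v in pebbles:
        return set()
    need, stack = {v}, [v]
    while stack:
        for u in dag[stack.pop()]:
            if u not in pebbles and u not in need:
                need.add(u)
                stack.append(u)
    return need

def longest_path_to(v):
    """The longest fully-unpebbled path ending at v: the k sequential hashes."""
    memo = {}
    def depth(w):
        if w in pebbles:
            return 0, []
        if w not in memo:
            d, p = max((depth(u) for u in dag[w]), default=(0, []))
            memo[w] = (d + 1, p + [w])
        return memo[w]
    return depth(v)[1]

worst = max(outputs, key=lambda v: len(closure(v)))
print("worst query:", worst)                                      # -> 5
print("single-sink unpebbled subgraph:", sorted(closure(worst)))  # -> [2, 4, 5]
print("longest unpebbled path:", longest_path_to(worst))          # -> [2, 4, 5]
```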
Corollary. If we don’t know how to reason about predecessor (or depth) robustness for $\alpha$ above some threshold $\alpha_0$, then we must set $(1-\Delta)\frac{m}{n} + \delta$ to $\alpha_0$ or less. And if we want a spacegap of $\Delta$, then $\frac{n}{m} \ge \frac{1-\Delta}{\alpha_0 - \delta}$, which for the parameter ranges we currently understand is more than 4 hashes per label. So unsealing will be at most $11/4 < 3$ times cheaper than it is now (taking SDR’s current 11 hashes per label as the baseline), no matter how clever we are about the graph construction.
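For instance, with the illustrative values $\alpha_0 = 0.2$, $\Delta = 0.2$, and $\delta \approx 0$ (these particular numbers are my example, not a claim about the best known robustness bounds):

$$
\frac{n}{m} \;\ge\; \frac{1-\Delta}{\alpha_0 - \delta} \;=\; \frac{0.8}{0.2} \;=\; 4 \text{ hashes per label.}
$$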
Remark. This note does not consider what happens when the verifier can take advantage of querying multiple nodes and adding up their costs, instead of just considering the maximum cost of a single node; however, all this will do is change $\beta$, but not $\alpha$, which is the focus of this note.