1. When will a rational adversary choose to store instead of recompute the PoRep for WindowPoST?
Assume r is the ratio between the cost of computation for our providers and the cost of computation for a hypothetical adversary. Let t be the time period between WindowPoSTs (currently 24 hours) and s be the cost of storage of a sector for time t. Let c be the cost of unsealing for our providers (the adversary’s cost may be up to r times less, i.e., c/r), which is the same as the cost of computing all 11 layers of the graph.
Question: when will a rational adversary store (most of) a sealed sector?
- Result by Fisch, Luca and Irene: it is rational to store at least 80% of a sealed sector whenever the unsealing cost c is a sufficiently large multiple of r·s.
- Result submitted to Eurocrypt: it is rational to store at least 80% of a sealed sector whenever c is still a large multiple of r·s.
- New result explained below: it is rational to store at least 87% of a sealed sector whenever c ≥ 1.1·r·s.
- Moreover, we can reduce the cost of WindowPoST from 10 queries per sector to only 3, and it is still rational to store at least 87% of a sealed sector whenever c ≥ 1.1·r·s.
Item 3 provides a 62x improvement overall in the cost assumption and a 35% improvement in the space gap, from 20% to 13%.
The simplest explanation for items 3 and 4 is that we now consider the combined cost of storage and computation over the entire range of possible storage amounts, rather than either the cost of storage alone or the cost of computation alone. I explain a bit more below. The details are in https://github.com/ninitrava/PoS/blob/main/SDR_improvement/tighterSDR.tex.
Security Margin. According to Section 5.1.3 of https://drive.google.com/file/d/1notObdkPT1BCztgspIpzSUAzWSrM8h81/view, our security margin under the first analysis was 3.15. According to item 3, it is 62x bigger, i.e., 195.
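As a quick arithmetic check (a sketch; 3.15 and 62x are the figures quoted above, and the security margin scales with the improvement in the cost assumption):

```python
# Security margin under the first analysis (Section 5.1.3 of the linked doc)
old_margin = 3.15
# Item 3 improves the cost assumption by a factor of 62, so the security
# margin grows by the same factor.
improvement = 62
new_margin = old_margin * improvement
print(round(new_margin))  # 195
```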
Room for Improvement?
We now know that assuming c ≥ 1.1·r·s is sufficient to ensure storage of at least 87% of a sector. On the other hand, assuming c ≥ 0.87·r·s is necessary — otherwise the adversary would pay c/r < 0.87·s to simply repebble the whole graph before each WindowPoST, and that would be cheaper than storing 87% of the sector. So the best we can hope for is to improve the constant from 1.1 to 0.87.
So the gap between an assumption we currently know to be sufficient and an assumption that is absolutely necessary is small — a factor of 1.1/0.87 ≈ 1.26. There is not much room for improvement from 1.1.
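The size of this remaining gap is easy to quantify (a quick check using the 1.1 and 0.87 constants from this section):

```python
sufficient = 1.1   # constant we can currently prove sufficient
necessary = 0.87   # below this, repebbling beats storing 87% of a sector
gap = sufficient / necessary
print(round(gap, 2))  # 1.26
```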
2. How We Achieve This Better Analysis
2.1 Summary of the traditional analysis that gives us items 1 and 2 in Section 1
Traditional Analysis. Typical proof-of-space security analysis goes like this. Suppose the space requirement is N, and we assume that the adversary can perform the PC1 computation (that is, pebbling the whole graph; equivalently, unsealing) at cost c/r. (Because we don’t know for sure what the best technology available to the adversary is, we must take c/r to be smaller than the typical cost of PC1 computation during sealing; assume c is the cost of the actual PC1 computation for typical storage providers, and thus r is the ratio of costs between what is typical and what is available to the adversary.) If the adversary stores less than (1−ε)N, then at execution time, the adversary will have to redo (in expectation) some fraction δ of the cost c/r. The fraction ε is known as the spacegap.
Application of the Traditional Analysis to the Latency model. This analysis seems sufficient in the latency model as long as the space gap is small enough: if the recomputation must be done sequentially and the verifier will time out before it’s done, then the verifier will catch any adversary who stores less than (1−ε)N; if the penalties for being caught are significant enough, the adversary will be incentivized to store at least (1−ε)N.
Application of the Traditional Analysis to the Cost Model. In the cost model, assume the cost of storing N for the entire time between executions is s. Then the adversary who erases everything will incur cost at most c/r per execution (perhaps less, depending on what it costs to compute query answers, which may be less than computing everything); to ensure a rational adversary does not do so, we need to at least ensure c/r ≥ s.
But what about an adversary who commits some amount of storage that is less than N but more than nothing? With the traditional analysis, here’s the best we can say:
1) If the adversary stores at least (1−ε)N, all we know is that the adversary will have cost at least 0.
2) If the adversary stores less than (1−ε)N, all we know is that the adversary will have expected cost at least δ·c/r; since we don’t know anything else about how this cost grows as storage shrinks, we get just this single bound.
Thus, to ensure that erasing more than εN is not a rational strategy, we need δ·c/r ≥ s. (Note that there is nothing we can do to ensure that erasing less than εN is not a rational strategy — given this type of analysis, it may be perfectly rational.) Noting that typical parties incur cost c to unseal, we get that the unsealing cost must be at least (1/δ)·r·s. Noting that δ is well below 1, that’s a pretty high unsealing cost.
In other words, if we want to ensure that storage, rather than computation, is a rational strategy, then the cost of unsealing must be at least 1/δ times the cost of storage between executions. In our current implementation, there are 10 queries per sector. The probability of being caught is 1 − 0.9^10 ≈ 0.65 (because the probability of being caught is 10% per challenge per our analysis, and there are 10 challenges), and if you are caught, you pay only some fraction of the full repebbling cost c/r (per the Eurocrypt submission analysis; the fraction was different in the previous analysis by Fisch), so the expected cost δ·c/r is 0.65 times that fraction of c/r. We thus get that the cost of unsealing must be a large multiple of the cost of storage between WindowPoSTs in order for storage to be rational.
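The catch-probability arithmetic above can be sketched as follows (assuming, as in the analysis quoted above, a 10% per-challenge probability of being caught):

```python
per_challenge = 0.10  # probability a single challenge catches the adversary

# With 10 queries per sector (current implementation):
p_caught_10 = 1 - (1 - per_challenge) ** 10
print(round(p_caught_10, 2))  # 0.65

# With only 3 queries (the reduction discussed in Section 2.2):
p_caught_3 = 1 - (1 - per_challenge) ** 3
print(round(p_caught_3, 2))  # 0.27
```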
2.2. Improving the traditional analysis by considering multiple space gaps and rationality to get items 3 and 4 in Section 1
In reality, a rational adversary can store a p fraction of the sector for any 0 ≤ p ≤ 1. What if instead of a single space-gap analysis for a fixed ε, we had a function f(p) which told the adversary the (expected) work needed to pass execution when storing a p fraction? We know f(1) = 0 (execution is trivial to pass) and f(0) ≤ c/r (even if you store nothing, you won’t have to do more work than pebbling the whole graph, or PC1). Then the rational (risk-neutral) adversary will choose the p that minimizes the cost of storage plus the expected cost of answering, i.e., p·s + f(p).
The proof described in the previous section tells us only that for p < 0.8, f(p) ≥ δ·c/r (assuming we use 10 queries in the execution phase, because 1 − 0.9^10 ≈ 0.65 is the probability of being caught). But in fact we can prove that for any p ≤ 0.87, f(p) ≥ (0.87 − p)·c/(1.1·r).

Then to argue that it’s irrational for the adversary to reduce storage by more than ε = 0.13, it suffices to argue that for all 0 ≤ p ≤ 1 − ε, p·s + f(p) ≥ (1−ε)·s. Because the left-hand side (after plugging in the linear lower bound on f) is linear in p, it’s minimized at the extremes p = 0 or p = 1 − ε. Note that at p = 1 − ε the inequality is definitely true (as it becomes (1−ε)·s ≥ (1−ε)·s). And at p = 0 the inequality becomes (1−ε)·c/(1.1·r) ≥ (1−ε)·s, i.e., c ≥ 1.1·r·s. Thus, it is not rational to save ε = 13% or more storage as long as c ≥ 1.1·r·s.
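The extremes argument can be checked numerically. The sketch below assumes a linear lower bound of the form f(p) ≥ (0.87 − p)·c/(1.1·r), a reconstruction consistent with the endpoints in this section (at p = 0.87 it is trivially 0, and at p = 0 it yields the condition c ≥ 1.1·r·s); at the threshold c = 1.1·r·s, the adversary's total cost never drops below the cost of honestly storing 87%:

```python
eps = 0.13  # the space gap

def f_bound(p, c, r):
    # assumed linear lower bound on the adversary's expected work
    return max(0.0, (1 - eps - p) * c / (1.1 * r))

def total_cost(p, s, c, r):
    # storage cost plus (lower bound on) expected recomputation cost
    return p * s + f_bound(p, c, r)

r, s = 10.0, 1.0
c = 1.1 * r * s            # take c exactly at the threshold
honest = (1 - eps) * s     # cost of just storing 87% of the sector

# The bound is linear in p, so a grid check over [0, 0.87] suffices:
costs = [total_cost(p / 100, s, c, r) for p in range(0, 88)]
assert min(costs) >= honest - 1e-9  # storing 87% is never beaten
print(round(min(costs), 2))  # 0.87
```

At the threshold the lower bound on total cost is flat at exactly (1−ε)·s; for any c > 1.1·r·s, reducing storage strictly loses.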
Even though it may seem that we are using the bounds on f only at the extremes p = 0 and p = 1 − ε, that is not actually the case. We need the bounds in between, too. In fact, the reason we are able to get away with looking only at the extremes is that we have a linear bound on f as a function of p. Basically, because we have a linear bound, we know the following: if it’s worth it for the adversary to reduce the storage a little below 1 − ε and take the hit in computation, then it’s also worth it to continue this reduction all the way to (almost) no storage, because the hit in computation is less than the savings in storage. If we didn’t have a linear bound, we would have to assume something about the relative costs of storage and computation in order to find the optimal storage amount for the adversary.
Moreover, if we have only 3 queries instead of 10, we can still prove a linear lower bound on f(p) of the same form for any p ≤ 0.87, and similar reasoning gives us bullet 4 above.
3. Further thoughts on space gaps
3.1 Why do we want a small space gap?
Why do we want a small space gap in the cost model? Because we want to have a proof of space, and the smaller the space gap, the more space we can be sure is occupied. In the extreme, imagine we had a huge space gap ε close to 1. Then it would be rational to store only a 1 − ε fraction of each replica, and that’s all we can prove. That doesn’t feel like much of a proof of space. Moreover, the typical honest participants store 100% of each replica, because they actually provide storage for data, which they can’t compress. This would make it easy for the adversary to take over the majority of the space in the system (WinningPoST helps with that, but that’s another story).
The new analysis technique above doesn’t help solve this problem — we still need a small space gap. It does, however, reduce the gap from 20% to 13%.
3.2 Why is the need for a small space gap a problem?
The need for small space gaps means that it will be harder to improve the cost of unsealing, because we don’t have constructions with small space gaps and low cost of unsealing. See “The Cost Of Latency and the Cost of Cost” note.
3.3. What if the space gap is large but everyone can take advantage of it?
In the analysis above, we assumed the adversary could erase an ε fraction of the storage without any penalty for answering queries. This, of course, raises the question of whether the honest parties can do the same.
It is certainly possible to come up with PoS constructions in which the honest parties can erase some amount of storage and reduce their costs of answering queries (see the end of this section for an example). Then they should do so! The result is simply a new PoS construction. In other words, after the honest parties take advantage of the space gap, all you have is a different PoS. In that new PoS, we won’t know how to take advantage of the space gap anymore (else, we can improve the PoS again). But this new PoS will still likely have a space gap against parties who don’t follow the protocol, because of the difference between what we know how to do and what we can prove is impossible to do (i.e., the difference between known upper and lower bounds, and the ratio of the known cost of computation to the adversarial cost of computation).
The situation is more subtle in a PoRep, because removing a portion of the PoS reduces your available PoRep storage accordingly (else, you are not actually saving on storage, and then you could just as well keep the PoS as is, since you are not gaining anything by taking advantage of the space gap). And if we care about the cost of sealing / unsealing, then removing a portion of the PoS and the corresponding storage may actually increase the per-byte cost of sealing / unsealing (since you may be erasing bytes that are cheaper to seal / unseal).
So even if you can take advantage of the space gap, in a PoRep context it may have other costs.
To sum up, we do not see a way to “take advantage” of a space gap other than to regard the construction in which everyone is “taking advantage of the space gap” as a new PoS / PoRep construction.
Example of a spacegap everyone can take advantage of. For example, suppose we have a single-layer PoS (full graph, not just the tail) of in-degree d. Take any node. Erase it. Mark its ancestors as unerasable. Now erase any unmarked node. Mark its ancestors as unerasable. Repeat until all nodes are marked as unerasable. At each step, you mark at most d new nodes and erase 1, so you can erase a 1/(d+1) fraction of the nodes before all the nodes are marked. (You can probably improve this to 1/d by erasing unmarked nodes that have at least one edge from a node already marked as unerasable, but you have to prove that such a node exists at every step.) Note that answering queries for unerased nodes is the same as before, and answering queries for erased nodes requires just a single hash of the node’s d ancestors. So for a random query, with probability d/(d+1) you do one lookup, and with probability 1/(d+1), you do d lookups plus one hash. Thus, the expected cost of answering a query is about two lookups and (with probability 1/(d+1)) one hash. So the honest parties can also erase those nodes if the cost of one extra lookup plus one hash is less than the cost of storing a 1/(d+1) fraction of the PoS. And thus we have a new PoS.
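This procedure is easy to simulate. The sketch below is my illustration, not the construction above: a random DAG in which node i draws d parents from earlier nodes, plus the greedy erase-and-mark loop, with one extra safeguard (a node is erased only if none of its parents have already been erased) so that every erased node really is recomputable with a single hash of stored values. The exact erased fraction depends on the graph and the order of erasure:

```python
import random

random.seed(1)
n, d = 100_000, 8
# node i's parents are drawn from nodes 0..i-1 (a random DAG of in-degree d)
parents = [random.sample(range(i), min(i, d)) for i in range(n)]

marked = [False] * n  # marked nodes are unerasable
erased = [False] * n

for v in range(n):  # parents always precede v, so this respects the topology
    if marked[v] or any(erased[p] for p in parents[v]):
        continue
    erased[v] = True          # erase v ...
    for p in parents[v]:
        marked[p] = True      # ... and mark its ancestors as unerasable

q = sum(erased) / n  # fraction of nodes erased

# Invariant: every erased node's parents are stored, so recomputing it
# takes d lookups plus a single hash.
assert all(not erased[p] for v in range(n) if erased[v] for p in parents[v])

# Expected lookups per uniformly random query: 1 if stored, d if erased
exp_lookups = (1 - q) * 1 + q * d
print(f"erased fraction ~ {q:.3f}, expected lookups ~ {exp_lookups:.2f}")
```

The qualitative conclusion matches the example: a constant fraction of the nodes can be erased at an expected cost of a couple of lookups and an occasional hash per query.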