Proofs of work are about one-and-done work: the prover is convincing the verifier that the work has been done. Note that some work may require a lot of space (for example, a lot of space is needed to compute a memory-hard function with sustained memory-hardness), but the space is used temporarily and freed up after the work is done. Thus, even when proofs of work are about space usage, they are about one-time space usage.
In contrast, proofs of space are about continuous space usage: the prover is repeatedly convincing the verifier that the space is still being used. However, since space is initially filled up via computation, we have an inherent problem: a malicious prover can temporarily give up space in exchange for taking on more computation in order to refill the space later.
1 Single-Resource Tightness
Naturally, the malicious prover who gives up space and then refills it will never need to do more work to refill the space than the honest one needs to fill the space initially. More precisely, whatever resource we are measuring, the malicious prover will never need more of that resource than the honest one at initialization. For example, the malicious prover will not need more time, more energy, more money, more cache misses, more bandwidth, etc. (We discuss a more sophisticated version of this view in the next section.) Ideally, the minimum that the malicious prover needs would be close to what the honest prover spends.
Let us make this a bit more precise. Let us assume that initialization for a proof of space consists of two steps: filling up the space, and proving that it was filled up correctly. (This assumption is a slight oversimplification: (1) the two steps can perhaps be combined, and (2) the second step may not even be needed, as in the proof of space by Abusalah et al.) Then, at various times, the prover and the verifier run a protocol that is easy to pass if the prover is still keeping the space filled up.
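To fix this structure in mind, here is a minimal runnable sketch of the interaction pattern (not any particular construction); the flat table of hashes, the trivial commitment, and all names are placeholders chosen purely for illustration.

# A toy, runnable illustration of the pattern above: two-step initialization
# (fill the space, then prove it was filled correctly) followed by periodic
# audits. This is NOT a secure proof of space; the flat table of hashes and
# the trivial "initial proof" are placeholders for a real construction.

import hashlib
import os
import random

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class Prover:
    def __init__(self, statement: bytes, n: int):
        # Initialization step 1: fill up n units of space (cost ~ r_upper).
        self.statement = statement
        self.space = [H(statement, i.to_bytes(8, "big")) for i in range(n)]

    def initial_proof(self) -> bytes:
        # Initialization step 2: prove the space was filled correctly
        # (cost ~ r_proof). Here, just a hash commitment to the whole table.
        return H(*self.space)

    def respond(self, challenge: int) -> bytes:
        # Periodic audit: cheap if the table is still stored; a prover that
        # discarded entries would have to recompute them (cost >= r_lower(eps)).
        return self.space[challenge]

class Verifier:
    def __init__(self, statement: bytes, n: int):
        self.statement, self.n = statement, n

    def audit(self, prover: Prover) -> bool:
        i = random.randrange(self.n)
        # In this toy the verifier recomputes the label itself; a real
        # construction would check against the initial proof instead.
        return prover.respond(i) == H(self.statement, i.to_bytes(8, "big"))

if __name__ == "__main__":
    statement = os.urandom(32)
    prover, verifier = Prover(statement, 1024), Verifier(statement, 1024)
    print(all(verifier.audit(prover) for _ in range(10)))  # True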
Let P be the honest prover and P∗ be the malicious prover who is trying to save an ε fraction of the space after initialization is over (i.e., to use less than a (1 − ε) fraction of the space). In order to analyze a proof of space, we will generally need to establish, for every relevant resource,
- an upper bound r_upper on the amount of this resource that P needs to fill up the space;
- an upper bound r_proof on the amount of this resource that P needs to prove initially that the space was filled up correctly;
- a lower bound r_lower(ε) on the amount of this resource that P∗ needs to pass the periodic verification.
The two upper bounds are needed to establish efficiency of initialization. The lower bound is needed for security: we would argue that P∗ will keep the space because r_lower(ε) is either not possible to acquire (for example, if the resource is time, and the verifier does not accept responses after a certain time) or not rational to use (for example, if the resource is money, and storing is cheaper than spending r_lower(ε)).
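As a toy illustration of the rationality argument when the resource is money, the sketch below compares the cost of simply keeping the space with the cost of paying r_lower(ε) worth of computation at every audit; all prices and parameters are made-up assumptions.

# Toy rationality check: is it cheaper for P* to keep the space, or to discard
# an eps fraction and pay r_lower(eps) worth of computation at every audit?
# All prices and parameters below are made-up assumptions for illustration.

def keeping_is_rational(space_gib: float,
                        storage_cost_per_gib_month: float,
                        audits_per_month: float,
                        r_lower_hashes: float,
                        cost_per_hash: float) -> bool:
    cost_of_storing = space_gib * storage_cost_per_gib_month
    cost_of_cheating = audits_per_month * r_lower_hashes * cost_per_hash
    return cost_of_storing <= cost_of_cheating

# Example with hypothetical numbers: a 32 GiB sector, 48 audits per day.
print(keeping_is_rational(space_gib=32,
                          storage_cost_per_gib_month=0.002,
                          audits_per_month=30 * 48,
                          r_lower_hashes=1e9,
                          cost_per_hash=1e-12))  # True: storing is cheaper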
It is necessarily the case that r_lower(ε) < r_upper for every ε such that a (1 − ε) fraction of the space exceeds the size of the initial input to P: P∗ can store just that input and, at verification time, redo whatever P does at initialization. Ideally, we would like r_lower(ε) to be close to r_upper for every ε, and r_proof to be small; then the resource usage of the honest prover is as low as possible. The closer r_lower(ε) is to r_upper + r_proof, the tighter our proof of space is.
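One possible way to quantify this (an illustrative convention, not a definition taken from prior work) is to define, for each resource and each ε, the tightness ratio

    τ(ε) = r_lower(ε) / (r_upper + r_proof).

By the observation above, τ(ε) < 1; the proof of space is tight with respect to this resource when τ(ε) is close to 1 for every ε of interest.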
2 Multi-Resource Tightness
Considering single resources in isolation doesn’t always make sense. In computation, time can often be traded for parallelism, cost can often be traded for time, memory bandwidth can often be traded for larger caches, etc. Thus, it makes more sense to consider tradeoffs. For instance, for k different resources we can ask how much of resource k is needed given a particular amount of resources 1 . . . k − 1. For example, how much time is required given a particular number of cores, cache sizes, and latencies? Again, P∗ never has a worse tradeoff than the honest prover, because P∗ can just do at verification time whatever P does at initialization.
Tightness, in this case, would mean that whatever tradeoffs are available to the honest prover, we know that the malicious prover won’t be able to do much better. For example, we may be able to argue that either P∗ spends an insane amount of money, or P∗ spends essentially as much time as P.
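As a toy illustration of what such a two-resource statement could look like, the sketch below checks a hypothesized lower-bound tradeoff curve for P∗ (time as a function of money spent) against the honest prover's time; the curve and all constants are made-up assumptions, not derived from any construction.

# Toy two-resource tightness check: for every money budget P* could rationally
# spend, its time must stay close to the honest prover's time. The tradeoff
# curve and all constants are made-up assumptions for illustration.

def adversary_time_lower_bound(money: float, honest_time: float) -> float:
    # Hypothetical lower bound on P*'s time given a money budget: spending
    # m times a baseline cost gives a speedup of at most 1 + 0.1*(m - 1).
    baseline_cost = 100.0
    speedup = 1.0 + 0.1 * max(money / baseline_cost - 1.0, 0.0)
    return honest_time / speedup

def is_tight(honest_time: float, max_rational_money: float, slack: float) -> bool:
    budgets = [max_rational_money * i / 100 for i in range(1, 101)]
    return all(adversary_time_lower_bound(m, honest_time) >= (1 - slack) * honest_time
               for m in budgets)

# Either P* spends more than max_rational_money, or it is within 20% of P's time.
print(is_tight(honest_time=3600.0, max_rational_money=200.0, slack=0.2))  # True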
3 Going Beyond Tightness
So far, we have argued that P∗’s resource usage is at most that of P. Can we ensure that P∗’s use of some resource is higher than P’s? One approach that may work is to consider the case when P gets to take advantage of parallelism and/or pipelining at initialization, by initializing multiple proofs of space at once (in Filecoin language, sealing multiple sectors). If we can ensure that P∗ is not able to do the same when performing the proofs, then it may be that r_lower > r_upper. This requires moving away from analyzing a single PoS in isolation.
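A back-of-the-envelope sketch of why pipelining could create such a gap (purely illustrative numbers; whether P∗ really cannot amortize recomputation at proof time is exactly the property one would need to prove):

# Back-of-the-envelope sketch: pipelined initialization amortizes wall-clock
# time across many sectors, while a prover that discarded a sector must
# regenerate it from scratch, alone, within the challenge response window.
# All numbers are made up for illustration.

stages = 4                  # hypothetical pipeline depth during sealing
hours_per_stage = 2.0       # hypothetical wall-clock time per pipeline stage
sectors = 100

# Honest, pipelined initialization: once the pipeline is full, one sector
# completes per stage interval, so the per-sector wall-clock cost is amortized.
total_init_hours = (stages + sectors - 1) * hours_per_stage
r_upper_per_sector = total_init_hours / sectors    # about 2.06 hours

# A prover regenerating one discarded sector on demand cannot amortize across
# sectors: it pays the full, unpipelined latency for that sector.
r_lower_per_sector = stages * hours_per_stage      # 8 hours

print(r_lower_per_sector > r_upper_per_sector)     # True in this toy model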
4 Prior Tightness Notions
The tightness notion in Ben Fisch’s SDR paper simply means the following: for every ε, he can construct a proof of space with a constant r_lower(ε)/n > 0.2. Here n is the amount of space used (measured in hash outputs) and the resource r is the number of sequential (i.e., non-parallelizable) applications of the hash function, which corresponds to what is known in complexity theory as “parallel time”.
Note that this does not measure the tightness of any particular resource in the sense above: r_lower(ε) is compared to the amount of space n rather than to r_upper.
In contrast, NSE is actually almost tight with respect to the number of hashes (or, more precisely, compression function invocations): r_lower/r_upper approaches 0.9ε (I may be wrong about the 0.9 constant, but it’s something like that).