A proposal to specify a defined term for FIL+ data cap allocations, independent of storage deals, so that deals can be extended indefinitely without the FIL+ power boost lasting forever.
Terminology
Allowance: data cap provided to a verifier by the root, or to a client by a verifier.
Allocation: data cap allocated by a client to a specific piece of data.
Term: period of time for which a sector/deal/allocation is active or valid.
Claim: a provider’s assertion that they are storing all or part of an allocation.
Background
A FIL+ data cap allocation doesn’t have any intrinsic term. An allocation is currently bounded in time only because (a) allocation claims are made via deals, (b) we don’t support deals longer than a sector’s life, and (c) a sector has a maximum lifespan. When we decouple FIL+ from the market actor or enable renewal/extension of deals, data cap allocations will last forever. More detail in FIL+ Forever.
On the assumption that this is not what we want, this proposal is a mechanism for adding explicit term limits to data cap allocations. Additionally, the current QA power mechanism for FIL+ consensus power and rewards introduces many Challenges with QA Power. This is an opportunity to resolve them in a simpler way.
Toward programmability
These ideas build on the Architecture for programmable storage markets. That structure removes the market actor from intermediating use of FIL+ data cap because we want alternative storage markets to be able to broker FIL+ deals, but can’t trust user-programmed contracts to enforce any network policy (like how much QA power some data is worth). In doing so, it breaks the linkage of the verified data to the term of the deal negotiated by the client. So even before actually implementing deal extension in a market, the FIL+ data cap would have unbounded term.
We need FIL+ data cap to have a term that is independent of the term of any deal through which a client pays for storage (because we want deal terms to be extensible indefinitely).
Goals
- Add limits on the duration for which a provider can command 10x power for storing verified data
- Limits can be set by policy of the FIL+ notaries, rather than network upgrades
- Compatibility with architecture to support programmable storage markets
- Don’t trust or rely on market contracts for FIL+ enforcement
- Scalability for a large expansion in FIL+ data
Design ideas
FIL+ term limits
The programmable architecture adds a record for each data cap allocation to the FIL+ verified registry actor. The record stores the piece CID, client, and provider, and is somewhat analogous to a storage deal, but simpler. To this we can add a verified term: the duration for which the data piece should qualify for a power/reward boost.
At its simplest, a verified term could be a single number of epochs. But with a little more structure, we can add a lot of flexibility. A verified term comprises:
- minimum term: the minimum period a provider must store the piece continuously to avoid early termination penalties
- maximum term: the maximum period for which a provider can earn quality-adjusted power for the piece
- expiration: the latest epoch by which a provider must commit data before the data cap allocation expires
The verified term is independent of any storage deal term. So, for example:
- A client could allocate data cap for a term of between 180 and 1000 days, then make a storage deal for 180 days.
- If the storage deal expires un-renewed, the client would stop paying, but the provider could choose to continue proving the data and retain the power boost. Or, after 180 days, allow the sector to expire, or even replace the data with other data.
- The storage deal could be extended, and the existing data cap allocation would continue to reward the data.
- A client could allocate data cap for a fixed 500-day period, and make a deal for 100 days.
- After the deal expires, the provider must continue proving the data for the 500-day term to avoid a termination penalty.
A verified term is relative to when data is committed, rather than an absolute epoch.
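For illustration, converting a relative term into absolute claim bounds at commitment time might look like the sketch below. The type and function names are assumptions for this sketch, not the actor API; it corresponds to the ExpirationMinimum/ExpirationMaximum values "computed from allocation term at commitment" in the implementation notes further down.
package verifreg

// ChainEpoch is an epoch number on the chain (illustrative alias for this sketch).
type ChainEpoch int64

// Term is a verified term expressed relative to the commitment epoch.
type Term struct {
    Minimum ChainEpoch // minimum continuous storage period
    Maximum ChainEpoch // maximum period earning the power/reward boost
}

// ClaimBounds are the absolute epochs computed when the data is committed
// (or re-committed into a sector, e.g. via SnapDeals).
type ClaimBounds struct {
    ExpirationMinimum ChainEpoch
    ExpirationMaximum ChainEpoch
}

// boundsAtCommit anchors a relative term at the epoch the data is committed.
func boundsAtCommit(term Term, commitEpoch ChainEpoch) ClaimBounds {
    return ClaimBounds{
        ExpirationMinimum: commitEpoch + term.Minimum,
        ExpirationMaximum: commitEpoch + term.Maximum,
    }
}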
Term policy
The policy for allowable terms is set by the FIL+ root key, as configuration of the FIL+ verified registry actor.
Term policies include:
- Minimum term: the smallest value a client may specify for a verified term minimum
- Maximum term: the largest value a client may specify for a verified term maximum
For example, we might set a policy minimum of 180 days (matching the sector minimum commitment) and maximum of three or five years. The policy constrains the values that a client may set when allocating data cap. Policy cannot change the minimum term for an already-committed piece of verified data.
We could support per-client policies. This would let the root key set different term policies for different data cap holders.
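A minimal sketch of the policy check at allocation time, assuming the policy is stored as a pair of bounds (names are illustrative; a per-client policy would simply select a different TermPolicy value per data cap holder):
package verifreg

import "errors"

// ChainEpoch is an epoch number on the chain (illustrative alias for this sketch).
type ChainEpoch int64

// TermPolicy is the allowable-term configuration set by the FIL+ root key.
type TermPolicy struct {
    MinimumTerm ChainEpoch // smallest value a client may specify for a term minimum
    MaximumTerm ChainEpoch // largest value a client may specify for a term maximum
}

// validateTerm checks a client-requested term against the active policy when
// an allocation is created. Policy changes do not retroactively affect the
// terms of already-committed pieces.
func validateTerm(policy TermPolicy, termMin, termMax ChainEpoch) error {
    if termMin < policy.MinimumTerm {
        return errors.New("term minimum below policy minimum")
    }
    if termMax > policy.MaximumTerm {
        return errors.New("term maximum above policy maximum")
    }
    if termMax < termMin {
        return errors.New("term maximum less than term minimum")
    }
    return nil
}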
Quality-adjusted power
The feature of the current quality-adjusted power mechanism that causes all the problems is that the power and reward boost from a verified deal is spread out over the sector lifespan. This makes it hard to recalculate when updating sector data. The deal term never exactly matches the sector’s lifespan, because the client and provider can’t coordinate that well. But because FIL+ terms are independent of deals, we don’t need to do it the same way.
The term for a verified piece starts when the data is committed. This exactly aligns the beginning of the term with either the sector’s activation or its re-activation through a mechanism like SnapDeals. The sector immediately gains 10x power according to the fraction of sector space occupied by verified data.
The term for a verified piece ends when the allocation term expires, regardless of the sector’s scheduled expiration. The sector immediately loses the power boost from the verified data.
Aligning the boosted period with the term exactly in this way removes the troublesome feature of QA power.
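In other words, a sector’s quality-adjusted power at any epoch becomes a direct function of the verified claims active in it at that epoch, rather than a weight fixed over the sector’s lifetime. A sketch, assuming the existing 10x multiplier for verified space:
package miner

// StoragePower is a power amount in byte-equivalent units for this sketch.
type StoragePower = uint64

// VerifiedMultiplier is the power multiplier for verified data (10x under current policy).
const VerifiedMultiplier = 10

// qaPower returns a sector's quality-adjusted power given the space covered by
// currently-active verified claims. Space whose claim term has ended (or that
// never had one) counts at 1x, so the boost tracks the claim terms exactly.
func qaPower(sectorSize, activeVerifiedSpace StoragePower) StoragePower {
    if activeVerifiedSpace > sectorSize {
        activeVerifiedSpace = sectorSize
    }
    return (sectorSize - activeVerifiedSpace) + VerifiedMultiplier*activeVerifiedSpace
}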
Enforcement by miner actor
The miner actor is trusted code and can enforce the verified term, independently of sector lifespans (so verified data is portable between sectors).
When a miner claims a data cap allocation, it records in state the sector that is serving the allocation, and the current epoch. It also schedules an expiration event for the end of the term in an event queue.
When a sector expires (as part of normal miner cron processing), the miner checks if it is currently serving a data cap claim. If the minimum term has not yet been met, the miner pays a termination penalty for the quality-boosted power that the claim represents. If the term minimum has been met, no penalty is needed.
A miner can commit to a term longer than the sector into which the data is initially sealed, and either (a) extend the sector if possible, or (b) seal the same piece into a new sector. When the verified piece is sealed into a new sector, the miner actor updates its records such that the old sector can now expire normally without paying a termination fee.
When the term expiration event is popped from the expiration queue, the miner actor reduces power for the sector in question during normal cron processing.
The miner actor’s records must also enforce that a data piece cannot serve two different FIL+ allocations simultaneously.
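A sketch of that bookkeeping, under the assumption that the miner records the claim’s absolute bounds alongside the serving sector (all names are illustrative; the real actor would integrate with the existing expiration queues and pledge accounting):
package miner

// Illustrative types for this sketch.
type ChainEpoch int64
type SectorID uint64
type ClaimID uint64
type TokenAmount uint64

// SectorClaimRecord is the state a miner might record when it claims an
// allocation: which sector serves the claim, and the absolute term bounds
// computed at the commitment epoch.
type SectorClaimRecord struct {
    Claim             ClaimID
    Sector            SectorID
    CommitEpoch       ChainEpoch
    ExpirationMinimum ChainEpoch // CommitEpoch + term minimum
    ExpirationMaximum ChainEpoch // CommitEpoch + term maximum
}

// penaltyOnSectorExpiry returns the termination penalty due when a sector
// expires while still serving the claim. qaPenalty is the penalty computed
// for the quality-boosted power the claim represents (calculation elided).
func penaltyOnSectorExpiry(rec SectorClaimRecord, nowEpoch ChainEpoch, qaPenalty TokenAmount) TokenAmount {
    if nowEpoch < rec.ExpirationMinimum {
        return qaPenalty // minimum term not yet met
    }
    return 0 // minimum term met: the sector may expire without penalty
}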
Term extension
A FIL+ verified client can extend the term for a data cap allocation that is already claimed by spending new data cap on it. The term maximum can then be extended by up to the verified registry’s policy maximum again. Increasing only the maximum doesn’t require agreement from the provider. The client extending the allocation need not be the one that originally made it.
In this way, the notaries can support the indefinite extension of FIL+ power and rewards without that being a default behaviour, and on a per-client basis.
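A hedged sketch of the extension rule. The exact bound is an assumption here, read as “each extension may add up to the policy maximum”; only the maximum moves, and neither the minimum term nor the provider’s obligations change.
package verifreg

import "errors"

// ChainEpoch is an epoch number on the chain (illustrative alias for this sketch).
type ChainEpoch int64

// extendClaimMaximum extends the maximum term of an already-claimed allocation.
// The data cap spent by the extending client is accounted for elsewhere and
// elided from this sketch.
func extendClaimMaximum(currentMax, newMax, policyMax ChainEpoch) (ChainEpoch, error) {
    if newMax <= currentMax {
        return 0, errors.New("extension must increase the maximum term")
    }
    if newMax-currentMax > policyMax {
        return 0, errors.New("extension exceeds policy maximum")
    }
    return newMax, nil
}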
Scaling up (optional)
We could support data cap allocations for pieces that are larger than a single sector. This would allow a scalable representation within the FIL+ registry actor.
Rather than claim an entire allocation at once, a provider would claim a range of the allocation for each sector by providing a sub-piece CID and a merkle inclusion proof into the root piece CID. The sectors with sub-pieces would gain power individually.
This technique could mesh nicely with the supersector technique from project Neutron. A miner could then efficiently claim a contiguous range of the data cap that is many sectors large, with a higher level inclusion proof.
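For illustration, the parameters for such a partial claim might look like the following (the inclusion-proof verification itself is elided; all names are assumptions):
package verifreg

// Cid stands in for a piece CID in this sketch.
type Cid string

// PartialClaimParams is roughly what a provider would submit to claim a
// sub-range of a large allocation for a single sector (or, with supersectors,
// a contiguous many-sector range).
type PartialClaimParams struct {
    AllocationID uint64
    SubPiece     Cid    // CID of the sub-piece sealed into the sector
    Offset       uint64 // offset of the sub-piece within the allocated root piece
    Size         uint64 // size of the claimed sub-piece
    Proof        []byte // merkle inclusion proof of SubPiece at Offset under the root piece CID
}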
Issues
- Want to support different providers taking different parts, or not?
- Don’t need to. Client could split dataset into shards to support multiple providers.
- What if some parts sealed before allocation expiration, but some miss it?
- Withhold power until all parts sealed? Quite difficult.
- Revert power gain if all parts not sealed by expiration? Also quite difficult.
- ∴ Just don’t worry, other parts are still eligible.
- What if some part is lost; want to incentivise re-sealing it to regain complete replica
- Yes, could allow this for simple allocations too?
- Similar to some parts missing expiration.
Aggregation directions:
- Multiple sectors → one big allocation
- Arbitrarily scattered chunks around sectors 😨, vs
- Completely full sectors (i.e. at most one chunk from one allocation in one sector)
- Multiple allocations → one sector
Implications
- Don’t need a deal for “free” FIL+ storage. Just make the allocation.
Implementation details
Allocations and claims in verified registry
type Allocation struct {
Client Address // can drop this if it's in map key
Provider Address // optional
Piece CID
Size uint64
TermMinimum uint64
TermMaximum uint64
Expiration Epoch
AllOrNothing bool // optional? requires atomic commit of entire piece
}
type Claim struct {
Provider Address // can drop this if it's in map key
Piece CID
Size uint64
ExpirationMinimum ChainEpoch // Computed from allocation term at commitment
ExpirationMaximum ChainEpoch
}
type VerifiedRegistryState struct {
// Verifiers, VerifiedClients as today
// Allocations by client, then by ID.
// Nesting by client promotes more efficient lookups at scale,
// and supports per-client iteration for revocation.
// Removed when claimed.
Allocations HAMT[Address]AMT[AllocID]Allocation
// Claims by provider ID, then Claim ID.
// Claim ID is inherited from allocation ID.
Claims HAMT[Address]AMT[ClaimID]Claim
NextAllocationId uint64
// In the future, for FIL+ premium.
// Map of sectors to claims so we can withhold rewards during faults.
// For Neutron, SupersectorID plus offset needed for partial faults.
Commitments HAMT[Address]AMT[SectorID][]{ClaimID, ClaimedSize}
}
Sector claim mapping in miner state
type MinerState struct {
// ... all the usual stuff
type SectorClaim {
Claim ClaimID
RangeStart uint64 // Range of the verified piece (implies size, power)
RangeEnd uint64
Pledge TokenAmount? // FIL+ pledge separate from sector
}
// Verified claims indexed by sector ID.
// With supersectors, the claim must also include supersector offset,
// so that if a part terminates, we can remove part of the claim.
// FIXME: how can this tell that ranges are not overlapping?
VerifiedPieceClaims HAMT[SectorID][]SectorClaim
type ClaimExpiration {
Sector SectorID
Claim ClaimID
// Penalty for sector expiring while holding claim
PledgePenalty TokenAmount
// Power drop for claim expiring while sector is live
PowerLoss StoragePower
}
// Contains an entry for each instance of either an allocation/claim
// term ending, or sector scheduled to expire.
// Scale problems? With big quantization, entries could get large.
// Fold into partition expiration queues?
ClaimExpirationQueue AMT[ChainEpoch][]ClaimExpiration
/////
// Alternative. Ranges for a claim co-located.
// When we have supersectors, the sector ID must include a
// range of the supersector.
type ClaimSectors {
Claim ClaimID
Sectors []{
Sector SectorID,
RangeStart uint64,
RangeEnd uint64,
Pledge TokenAmount?
}
}
// Verified claims indexed by claim ID.
VerifiedPieceClaims HAMT[ClaimID]ClaimSectors
// Expiration queue as above
}
The scalable (multi-sector piece) approach is hard because we need to record, for each sector, which range of the piece it stores.
Need to bound work in cron for a mass expiration.
Workflows:
- revoke expired allocations
- ✓ Needs client → allocations in registry
- commit new FIL+ data
- ✓ Needs some miner-alloc/claim association somewhere. Claiming removes the allocation and records it as a claim.
- sector expires before term expires - penalty due for the QA power (but not base power)
- With sector→claim in miner: during expiration look up sector, look up claims, call to registry to get minimum term (❌ expensive in cron)
- With a queue entry for the sector termination: add an entry on commit at the sector expiration epoch iff the sector expires before the claim (otherwise add it at claim expiration). Remove and replace the entry if the sector is extended, or the piece is moved to a new sector (needs the worker to identify the old sector). At sector expiration, if there’s anything in the queue then it’s a penalty. This limits the work needed to process a sector expiration in cron (see the sketch after this list).
- Need to delete claims in local state and registry. Especially to allow re-sealing of still-active claims.
- During partition compaction? Too late
- A method to clean them up, popping the queue?
- Lazy next time it’s processed manually?
- move data to different sector
- Update sector←→claim in miner, recalculate power for both sectors
- Remove any expiration queue entry from original sector expiring before claim minimum term, add a new one for the new sector.
- extend term max
- Lazy? Just update allocation in registry, wait for miner to poll it at expiration.
- Load problem if many expire together
- Eager? Push change to miner. Remove expiration queue entry and add new one
- claim expires while sector is hosting
- Need claim size to compute power drop for sector
- Need to propagate power drop to deadline/partition, uh oh!
- → Need this to be already in the per-partition expiration queue
- Sector initial pledge and penalty params must decrease
- Separate out into per-claim pledge and penalty values?
- sector terminates (and pays penalty), but re-sealed to regain incentive
- support for future transfer of alloc to a new miner (but might need client approval)
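As referenced above, a sketch of how the claim expiration queue might be processed in cron. The branch on sector liveness is one possible reading of the notes above, and all names are illustrative.
package miner

// Illustrative types for this sketch.
type ChainEpoch int64
type SectorID uint64
type ClaimID uint64
type TokenAmount uint64
type StoragePower uint64

// ClaimExpiration mirrors the queue entry sketched earlier: one entry per
// claim per sector, scheduled at whichever comes first of the sector's
// expiration and the claim's term bounds.
type ClaimExpiration struct {
    Sector        SectorID
    Claim         ClaimID
    PledgePenalty TokenAmount  // due if the sector expires before the claim's minimum term
    PowerLoss     StoragePower // dropped if the claim's term ends while the sector is live
}

// processClaimExpirations drains the entries scheduled for the current epoch
// during cron and returns the aggregate penalty and power drop to apply.
// Work is bounded by the number of entries scheduled at this epoch.
func processClaimExpirations(entries []ClaimExpiration, sectorLive func(SectorID) bool) (TokenAmount, StoragePower) {
    var penalty TokenAmount
    var powerDrop StoragePower
    for _, e := range entries {
        if sectorLive(e.Sector) {
            // The claim's term ended first: the sector stays but loses the boost.
            powerDrop += e.PowerLoss
        } else {
            // The sector ended first, before the claim's minimum term: penalty due.
            penalty += e.PledgePenalty
        }
    }
    return penalty, powerDrop
}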