Protocol v0.1 (Initial Consortium)


Public project doc:


Parts under discussion and not currently belonging to Protocol v0.1 are marked in brown

Protocol Overview

We have an initial consortium of N auditor parties P_1,...,P_N. At this stage, the aim of these nodes is to scan the network according to the storage deals recorded on-chain (each storage deal needs to provide retrieval).

In particular, this means that the parties being audited are essentially storage providers (SPs). In future versions of the protocol, we’ll open the possibility for any IPFS node to be audited. In order to do so, we’ll put in place mechanisms such as indices, which will allow non-SP parties to be audited according to what they claim to store and keep available for retrieval.

  • Step 0: Auditors join the Consortium
  • For v0.1 we assume auditor parties are pre-determined and known. Thus, there is no particular joining protocol to become an auditor. This will change in future versions of the protocol, where we will open the possibility for other parties to join as Auditors.

  • Step 1: Auditors scan the network
    Each party in the consortium periodically queries (ideally) all the SPs involved in a storage deal, checking whether retrieval of the file is successful. Each SP is then scored by auditors according to the following metrics:

    1. Time to first byte
      • Key Aspect: low latency
    2. Average Retrieval Speed (MB/s)
      • Key Aspect: high-speed retrieval is rewarded
    3. Actual Retrieval Success (percentage)
      • Key Aspect: SP reliability
    4. Retrieval Deals Acceptance (absolute value? TBD) [not for MVP]
      • Key Aspect: providing good service on only a limited number of files/deals should not be a rational strategy
        • Another option is to consider two different parameters (as Bedrock is doing):
          • Retrieval Deal Acceptance (percentage)
          • Minimum Deal Volume

      Note (TBD): Given that some of the metrics are influenced by proximity/location, it can be useful to ask Auditor parties to be spread all over the world (in local clusters) so that measurements are as accurate as possible. Note that accuracy of measurements is a priority for Auditors: the more Auditors’ measurements reflect reality, the more their work is valued by clients.
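The per-SP record an auditor collects in Step 1 can be sketched as follows. This is a minimal illustration, not part of the protocol spec; the type and field names (`RetrievalMeasurement`, `summarize`) are hypothetical, and we assume each auditor probes an SP several times per round and averages the results:

```python
from dataclasses import dataclass

@dataclass
class RetrievalMeasurement:
    """One auditor's probe of a single SP (illustrative; names are assumptions)."""
    sp_id: str
    time_to_first_byte_s: float      # metric 1: latency, lower is better
    avg_retrieval_speed_mbps: float  # metric 2: higher is better
    retrieval_success: float         # metric 3: 1.0 if the retrieval succeeded, else 0.0

def summarize(samples: list[RetrievalMeasurement]) -> dict:
    """Collapse repeated probes of one SP into the auditor's round-level scores."""
    n = len(samples)
    return {
        "sp_id": samples[0].sp_id,
        "ttfb_s": sum(s.time_to_first_byte_s for s in samples) / n,
        "speed_mbps": sum(s.avg_retrieval_speed_mbps for s in samples) / n,
        "success_rate": sum(s.retrieval_success for s in samples) / n,
    }
```

Averaging per-probe samples locally is one simple choice; an auditor could equally keep raw samples and let the survey step decide the statistic.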

  • Step 2: Auditors take part in a Survey Protocol and produce a Report
    • Each auditor fills in a survey according to the metrics above and gives a score to each SP they queried.
    • A local scoring table is filled in and committed on-chain.
    • Option 1: Fixed Aggregator (MVP). Results are sent to an aggregator node that aggregates survey results into a report, checking the correspondence between what it received from auditors and what auditors committed on-chain.

      • Aggregation
      • Note: Given that the metrics data are succinct, we assume the full opening of the metrics data contained in the commitment is sent to the aggregator node by the auditors.

      A report is valid once it is signed by the aggregator node. Once valid, the report is posted on-chain and serves as the metrics for that round.

      • Correctness of Aggregation:
        • Option 1: Dispute mechanism. There is a time window W (TBD) over which a report can be disputed by any Auditor party. If P_i disputes the report R, the validation protocol is the following:
          • P_i shares the parts of the report they want to dispute
          • The aggregator node shares the corresponding local survey results and sends them to P_i
          • P_i checks against the commitments
        • Option 2: Public Webpage listing metrics and SP IDs. The aggregator node maintains a webpage listing all the metrics and SP IDs, so that everyone can check them against the commitments on-chain
        • Option 3: Proof of Correct Aggregation. The aggregator node posts a proof of correct aggregation on-chain
    • Report Frequency: A new report is required every X epochs
    • Option 1b: Random Aggregator [not MVP]. The aggregator node is chosen at random and rewarded once the report is shipped. The rest of Step 2 then follows Option 1.

      Option 2: Consortium Internal Agreement [not MVP]. Results are shared inside the consortium and aggregated into a report (for instance via round robin). The report is signed by all the Auditor parties that agree. A report is valid if it is signed by at least 50% + 1 of the auditors. Once valid, the report is posted on-chain and serves as the metrics for that round.

      • Disputes: Any Auditor party can dispute a report (or part of it) before signing. In this case the report is checked against the commitments. If an auditor is not willing to open a commitment, their vote for that round is considered null.

      Why we ask auditors to commit and reveal: given that we are using a majority-based mechanism, we want to ensure Auditors cannot change their minds to follow the majority once results are aggregated. Moreover, in Options 1 and 1b this commit-and-reveal approach allows for public verifiability even in the presence of an aggregator node.
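The commit-and-reveal step above can be sketched with a plain hash commitment. This is a simplification for illustration (a production scheme would need, e.g., a binding encoding and a fresh random nonce per survey); the function names are assumptions, not part of the spec:

```python
import hashlib
import json

def commit(survey: dict, nonce: bytes) -> str:
    """Hash commitment an auditor posts on-chain before revealing its survey.

    The nonce hides the survey contents until the auditor chooses to open.
    """
    payload = json.dumps(survey, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()

def verify_opening(commitment: str, survey: dict, nonce: bytes) -> bool:
    """Check a revealed (survey, nonce) pair against the on-chain commitment.

    Used by the aggregator when building the report, and by any auditor
    during a dispute window.
    """
    return commit(survey, nonce) == commitment
```

Because the commitment is posted before aggregation, an auditor cannot later swap in a survey that matches the emerging majority: any altered opening fails `verify_opening`.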

  • Step 3: Metrics
    Metrics are still a work in progress, but a good example is what the Bedrock Project is considering:

    1. Maximum time to first byte: 5s
    2. Minimum Average Retrieval Speed: 1MB/s
    3. Retrieval Deal Acceptance Rate: 98%+ (in case of percentage; TBD in case of absolute number) [not for MVP]
    4. Actual Retrieval Success Rate: 95%+

    These metrics should be known and public.
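Since the thresholds should be known and public, they amount to a small shared configuration. A minimal sketch, using the Bedrock-style numbers above (the dict keys and the `meets_all` helper are illustrative, not spec):

```python
# Public, agreed-upon thresholds (illustrative values from the list above;
# the Retrieval Deal Acceptance metric is omitted as it is not for MVP).
THRESHOLDS = {
    "max_ttfb_s": 5.0,        # maximum time to first byte
    "min_speed_mbps": 1.0,    # minimum average retrieval speed
    "min_success_rate": 0.95, # minimum actual retrieval success rate
}

def meets_all(metrics: dict, t: dict = THRESHOLDS) -> bool:
    """True iff an SP's round-level metrics satisfy every public threshold."""
    return (metrics["ttfb_s"] <= t["max_ttfb_s"]
            and metrics["speed_mbps"] >= t["min_speed_mbps"]
            and metrics["success_rate"] >= t["min_success_rate"])
```

A check like this is what Step 4's "All or Nothing" option would evaluate per SP.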

  • Step 4: Retrieval SPs are ranked according to their metrics in the Report
  • Once the report is published, it is used to assign points to SPs for that round (the final result is a ranking acting as a reputation system).

    Option 0: SP are ranked according to their metrics

    Metrics gathered by Auditors and aggregated by the aggregator node result in a ranking of SPs for each metric. As a result, we’ll have a board showing each SP’s quality of service according to each metric.

    Option 1: All or Nothing (binary)

    There are base metrics and a fixed number of points given for meeting them. An SP gets 0 points if any of the metrics is not met. The global score of a miner is (TBD)

    • Option a): An average of the values gathered by each Auditor
    • Option b): An average of the values gathered by each Auditor, without considering tails (the best and the worst values)
    • Option c): The median of the values gathered

    Option 2: Decreasing Score [weighted average]

    Maximum points are granted to SPs who satisfy all the metrics. SPs who do not satisfy all the metrics are given points according to a (decreasing) scoring function (TBD)
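Options a), b), and c) above differ only in how one metric's values from all auditors are combined. A minimal sketch of the three aggregation rules (the function name and `method` strings are illustrative):

```python
from statistics import median

def aggregate(values: list[float], method: str = "median") -> float:
    """Combine one metric's values reported by all auditors for one SP.

    method="mean"          -> Option a): plain average
    method="trimmed_mean"  -> Option b): average without the best/worst value
    method="median" (def.) -> Option c): median
    """
    vs = sorted(values)
    if method == "mean":
        return sum(vs) / len(vs)
    if method == "trimmed_mean":
        core = vs[1:-1] if len(vs) > 2 else vs  # drop the two tail values
        return sum(core) / len(core)
    return median(vs)
```

The trimmed mean and the median both limit the influence a single outlier auditor (e.g. one far from the SP, or a malicious one) has on the final score, which matters given the proximity caveat in Step 1.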

  • (Step 5: Auditors are Scored with auditing points)[WIP, not mandatory for MVP]
    1. For each SP being audited, all auditors who agree with the majority are given points.

      In order to disincentivize collusion, we have a fixed number of points per round. This incentivizes auditors not to share their observations with other auditor parties, since sharing that information potentially lowers the number of points they get.

      Auditor Score Overview (WIP)

    2. There is a total number of points R for each auditing round.
    3. The total R is divided by the number of SPs being audited
      • Considering L Storage Providers being audited, we have then points R_1,...,R_L to be distributed, where R_i = R/L for all i
    4. For each Storage Provider SP_j, the corresponding R_j is divided among the W > N/2 Auditors that agreed on SP_j’s score and voted accordingly (we are using majority vote). This means that each of the W auditors gets R_j/W points.
      1. Auditors whose observations do not agree with the majority, and auditors who don’t sign or don’t open their commitment, do not get any points.

        Note: Irrationality of collusion: One might think that having a fixed number of points per round incentivizes Auditors to collude. Indeed, at first glance the number of points is maximized if 50%+1 auditors agree, and goes down if more than 50%+1 auditors do so. This can be true for a single R_i, but it does not hold in the grand scheme of things. Indeed, according to the above, the best strategy would be for Auditors to fully share their observations and artificially arrange that for each round exactly 50%+1 auditors agree on a certain outcome, while the remaining 50%-1 agree on the opposite. This would maximize the points earned by each winning Auditor and give 0 to the others.

        Given that all auditors would take part in this strategy, each auditor would earn the maximum amount of points for some SP_j and 0 for some other SP_j’. We show that this strategy is equivalent to the one where all the auditors always agree.

        Indeed, let’s consider the following toy example (which can be fully generalized):

        Example: We have 5 Auditors. They fully coordinate so that everyone is part of the majority the same number of times as the others, and part of the minority the same number of times as the others. To do so we need to consider N = 5 subsets of points R_1,...,R_5 (otherwise this is not achievable). This means that everyone earns R_i/3 points 3 different times, and wins 0 points 2 times. It follows that Total_Points = R_i.

        Now, if everyone agrees with each other in all of the 5 rounds, everyone has Total_Points = (R_i/5)*5 = R_i.
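The counting argument in the toy example can be checked numerically. A minimal sketch, with an illustrative value of R_i chosen so the divisions are exact:

```python
R_i = 300.0  # points at stake per SP (illustrative; any value works)

# Rotation (collusion) strategy: each of the 5 auditors is in the
# 3-of-5 majority for 3 of the SPs and in the minority for the other 2.
rotation_total = 3 * (R_i / 3) + 2 * 0

# Unanimous strategy: all 5 auditors agree on all 5 SPs,
# so each earns R_i/5 five times.
unanimous_total = 5 * (R_i / 5)

# Both strategies yield exactly R_i per auditor.
assert rotation_total == unanimous_total == R_i
```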

        This means that colluding to maximize the points earned across all the auditors corresponds to the strategy where everyone agrees on everything.

        Idea: Introducing Auditor Score Weights since v0.1. Even in Protocol v0.1 we could put in place some sort of Auditor Score in terms of weight (given that we are not envisioning an actual token reward for auditors at this stage). It could work as follows:

      2. Auditors are given some Weight Points (it can be collateral, or for now everyone can be given the same amount at the beginning of the consortium, given that requiring collateral without real incentives might not be rational).
      3. Every time Auditors are in the majority set, they gain an additional weight of R_j/W
      4. Every time an Auditor does not open a commitment upon request, their weight is decreased by a penalty fee of p points
      5. Auditors’ votes are not weighted, meaning that the scoring function does not take weights into account; however, weights can be taken into account once Auditor Rewards are introduced. [Another possibility is to have a scoring function that is influenced by weights, but this seems to open the door to attacks, since reports would then need a majority of weight rather than a majority of Auditor votes to be valid (WIP)]
      6. Pro: we could have a sort of reward (via points) even in v0.1 “for free”, in order to disincentivize collusion

        Cons: Bribing?
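The weight-update rules in points 3 and 4 above can be sketched as a single per-round function. The signature and parameter names are illustrative assumptions, not spec:

```python
def update_weight(weight: float, in_majority: bool, opened_commitment: bool,
                  r_j: float, w: int, penalty: float) -> float:
    """One auditor's weight update for one audited SP in one round.

    weight            -- auditor's current Weight Points
    in_majority       -- whether the auditor agreed with the majority on SP_j
    opened_commitment -- whether the auditor opened their commitment on request
    r_j               -- points allotted to SP_j this round (R_j = R/L)
    w                 -- size of the majority set (W > N/2)
    penalty           -- the fixed penalty fee p
    """
    if not opened_commitment:
        return weight - penalty   # rule 4: refusing to open costs p points
    if in_majority:
        return weight + r_j / w   # rule 3: majority members split R_j
    return weight                 # minority, but honest: weight unchanged
```

Keeping the vote itself unweighted (point 5) means this function only affects bookkeeping for a future reward scheme, not report validity.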

  • Auditors Anonymity (not for MVP)
  • For now we are working under the assumption that Auditors will find their own countermeasures to the fact that SPs can identify them as auditors and behave differently towards auditors and non-auditors.

    We stress that it is in the Auditors’ interest to do so, given that metrics that are accurate and reflect reality are what gives value to the consortium itself.