- Context
- Goals
- Share is Scalar Field
- Scalable
- Weights compatible
- One round / Non Interactive
- Public verification of successful DKG
- Potential "ad hoc" verification solution
- Forward Security
- Curve Compatibility
- Relaxation
- Prover Time
- Network
- Size of network

In threshold cryptography, participants must first run a Distributed Key Generation (DKG) protocol. Once the DKG has successfully completed, nodes can use their shares for threshold signing or encryption. The most famous and widely used DKG is Pedersen's, but it lacks several properties we would like to have in order to use threshold cryptography at scale in a permissionless environment.

This document highlights the properties of an ideal / perfect DKG. It first presents the setting in which the DKG would be run and some notation, then defines the properties.

# Context

We assume the DKG protocol is run in a permissionless setting where each participant has an associated known public key and a public stake. We assume the stakes can be expressed in "small units", i.e. the stake of each participant can be discretized to an integer (note that it may be that all stakes = 1). Participants in the DKG are chosen randomly in proportion to their stake. The random selection is assumed to be public (at first; we may want to relax that assumption to allow a private "election", as in Single Secret Leader Election).

We assume the existence of a bulletin board (BB). It does not necessarily have to be a blockchain with "computation features", although it is highly likely to be one. In case it is not, we assume clients can always refer to the BB and run any computation locally with the data stored on it.

We assume there is a proportion $f$ of faulty nodes and a proportion $t$ of byzantine nodes. (TODO: fix notation with common notations).

Note that in the context of stakes/weights, we consider $f$ and $t$ to be stake proportions rather than absolute numbers.

We assume the DKG participants can communicate via a gossip network.

# Goals

## Share is Scalar Field

We want the share of each node to be a scalar field element (using elliptic curves). More specifically, if we use an elliptic curve $E(F_q)$ of order $r$, we want the share of each node to be an element of $F_r$, as opposed to some publicly verifiable DKG schemes where the resulting share lies in the elliptic curve group $E(F_q)$ itself. Even better if this is the scalar field of a pairing-equipped curve such as BLS12-381. The end goal is to be able to rely on all the existing compatible signature & encryption schemes which have nice properties (random beacon with BLS combined with identity-based schemes, etc.).
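To make the property concrete, here is a minimal sketch (illustrative only, not a full DKG) of Shamir sharing over the scalar field $F_r$ of BLS12-381: each node's share is just an integer mod $r$, directly usable by BLS-style signing or encryption schemes.

```python
# Illustrative sketch: a Shamir share over the scalar field F_r of
# BLS12-381, so each node's share is a plain integer mod r.
import secrets

# Order r of the BLS12-381 G1/G2 subgroups (the scalar field modulus).
R = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares with threshold t (degree t-1 polynomial)."""
    coeffs = [secret] + [secrets.randbelow(R) for _ in range(t - 1)]
    def f(x):  # evaluate the polynomial at x in F_r
        return sum(c * pow(x, j, R) for j, c in enumerate(coeffs)) % R
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at 0 over F_r."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % R
                den = den * (xi - xj) % R
        acc = (acc + yi * num * pow(den, -1, R)) % R
    return acc

secret = secrets.randbelow(R)
shares = share(secret, t=3, n=5)
assert reconstruct(shares[:3]) == secret   # any 3 of 5 shares suffice
```

Because the share lives in $F_r$ rather than in the group, a node can feed it directly into e.g. a BLS partial signature without any $E(F_q)$-to-scalar conversion.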

While the aggregatable DKG is a great system (because of its aggregatability and verifiability), its biggest problem is that it relies on PVSS, and the associated signature + encryption schemes have higher complexity and require $G_T$ computations (3 $G_T$ computations to derive the randomness from the VUF).

TODO: give more detail on that, may actually be workable. Anoma is using this in prod.

## Scalable

We would like to gain an order of magnitude over current deployments: targeting a thousand nodes would be ideal.

## Weights compatible

Each node should have a weight proportional to its stake. The trivial solution is to give each node a number of shares proportional to its stake.

The threshold protocols (signature generation & "partial" encryption/decryption) should work with this notion of weights. (In the naive approach, a node signs once with each share it holds, for example.)

However, are there concrete optimizations one can make to "batch" operations per node instead of per share in this model? Are there other ways to restrict signing and encryption to succeed according to some weights?
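The naive weight-to-shares mapping above can be sketched as follows (illustrative only): a node with discretized stake $w_i$ simply holds $w_i$ consecutive Shamir share indices, and a set of nodes meets the threshold iff its total weight does.

```python
# Sketch of the trivial weights-as-multiple-shares solution: each node gets
# a contiguous block of share indices of size equal to its stake.
def assign_indices(stakes: dict[str, int]) -> dict[str, list[int]]:
    """Map each node to a contiguous block of Shamir share indices."""
    out, next_idx = {}, 1
    for node, w in stakes.items():
        out[node] = list(range(next_idx, next_idx + w))
        next_idx += w
    return out

def meets_threshold(stakes: dict[str, int], subset: set, t: int) -> bool:
    """A subset of nodes can sign/decrypt iff it holds >= t shares in total."""
    return sum(stakes[node] for node in subset) >= t

stakes = {"A": 3, "B": 1, "C": 2}     # discretized stakes, n = 6 shares total
idx = assign_indices(stakes)
assert idx["B"] == [4]                           # B holds the single index 4
assert meets_threshold(stakes, {"A", "C"}, t=4)      # weight 5 >= 4
assert not meets_threshold(stakes, {"B", "C"}, t=4)  # weight 3 < 4
```

The open question above is whether a node holding $w_i$ indices can do better than $w_i$ independent per-share operations.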

## One round / Non Interactive

- (1) Having multiple rounds is a hassle for the DKG because some nodes may be faulty during the different rounds. Running the protocol in one round is an easy way to decrease the probability of having faulty nodes during these steps, since it becomes a "one off" protocol. Even nodes that are faulty before the DKG starts have more time to recover: the non-interactivity allows for a buffer.
- (2) It makes the protocol more "stateless": given the list of input participants, do your local computation and send out the result. Having multiple rounds requires intermediate state and increases complexity.
- (3) It reduces the communication load / complexity on the gossip network as well.
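The "stateless, one-shot" flow in point (2) can be sketched with a toy Feldman-style dealing: the dealer performs one local computation and posts a single message (commitments plus shares) that anyone can verify. The group parameters here are tiny and purely illustrative, and a real non-interactive DKG would additionally encrypt each share to its recipient; that step is omitted for brevity.

```python
# One-shot "compute locally, post, done" flow, sketched with a toy
# Feldman-style dealing over a tiny group (p = 2039, subgroup order q = 1019).
# Toy parameters only; a real NI-DKG would also encrypt shares to recipients.
import secrets

p, q, g = 2039, 1019, 4   # p = 2q + 1, g = 2^2 generates the order-q subgroup

def deal(t: int, n: int):
    """The dealer's single local step: one message for the bulletin board."""
    coeffs = [secrets.randbelow(q) for _ in range(t)]
    commitments = [pow(g, a, p) for a in coeffs]            # public part
    shares = {i: sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
              for i in range(1, n + 1)}                     # private part
    return commitments, shares

def verify_share(i: int, s_i: int, commitments) -> bool:
    """Anyone can check share i against the posted commitments:
    g^{s_i} == prod_j C_j^{i^j}."""
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return pow(g, s_i, p) == rhs

commitments, shares = deal(t=3, n=5)
assert all(verify_share(i, s, commitments) for i, s in shares.items())
```

The dealer needs no intermediate state: the whole contribution is one message derived from the participant list.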

## Public verification of successful DKG

In the same vein as PVSS, being able to verify who actually ran the DKG successfully is a feature that would allow us to

- (1) remunerate "good" nodes for their honest behavior,
- (2) slash the "bad/faulty" nodes that should have participated in the DKG, and
- (3) count the actual number of nodes that hold a valid secret share.

The last point is to avoid situations where the size of the DKG network is close to the pre-specified threshold. We want to guarantee **liveness** of the network by having a large number of nodes: as close as possible to the intended $n$ participants, and as far above the threshold as possible. Slashing is an incentive that increases that gap.

One possible end goal is for a third party (think smart contract) to be able to "manage" DKG committees automatically.

**Succinct**: Verification should ideally be succinct. A naive scheme requires at least reading all individual public keys of the participants. Could we assume/design a succinct (log/constant-size) accumulator of all public keys that helps make the DKG verification more succinct (how exactly remains to be defined)?
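One candidate building block for the accumulator question, sketched here under the assumption that a hash-based accumulator suffices: a Merkle tree over the participants' public keys gives a constant-size root with $\log n$-size membership proofs. (This alone does not make DKG verification succinct; it is only the accumulator piece.)

```python
# Merkle accumulator sketch: constant-size commitment to all public keys,
# log(n)-size membership proofs. The pk_i strings are stand-ins for real keys.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _pad(level: list[bytes]) -> list[bytes]:
    # duplicate the last node on odd-sized levels
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], idx: int) -> list[bytes]:
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        level = _pad(level)
        proof.append(level[idx ^ 1])          # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify_membership(root: bytes, leaf: bytes, idx: int, proof) -> bool:
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

pks = [f"pk_{i}".encode() for i in range(5)]   # hypothetical public keys
root = merkle_root(pks)
proof = merkle_proof(pks, 3)
assert verify_membership(root, pks[3], 3, proof)
```

A verifier holding only the root can then check individual participants' keys without reading the whole list; what remains open is tying this to the DKG transcript itself.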

### Potential "ad hoc" verification solution

After the DKG, the successful participants all have a share and know the "qualified set" QUAL of participants (the participants that hold a share; this set is defined in the Pedersen DKG).

- A naive solution would be for participants in this set to threshold-sign the set (a list of indices or public keys) after the DKG has run, as a proof. However, a third-party verifier still does not know whether the threshold signature really comes from the QUAL members; there is no proof that "participant $i$ in QUAL is the one with this long-term public key $PK_i$". For all we know, someone could even submit a simple BLS signature signing an arbitrary QUAL set, and no third-party verifier could tell the difference.
- Alternatively, we could require each QUAL member to sign, with its long-term key, a partial signature produced with its partial share (a signature of a signature). That really creates the link "public key $PK_i$ → partial share $i$". This might work, but it
- (1) requires an extra round after the DKG and an extra delay before knowing whether the DKG is successful enough (e.g. whether the number of nodes is high enough w.r.t. the threshold), and
- (2) requires a linear verification cost in total ($2n$ verifications), because the messages are all different, so we cannot aggregate the signatures nicely.
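The "signature of a signature" linkage can be sketched as follows. Everything here is hypothetical and toy-sized: Schnorr signatures over a tiny prime-order group stand in for real long-term-key signatures, and the `partial_sig_i` strings are placeholders for actual partial threshold signatures.

```python
# Toy sketch of the linkage "PK_i -> partial share i": each QUAL member i
# signs (QUAL, partial_sig_i) with its long-term key. Schnorr over a tiny
# group (q = 1019) is a stand-in for a real signature scheme.
import hashlib, secrets

p, q, g = 2039, 1019, 4          # p = 2q + 1; g = 2^2 has order q

def H(*parts) -> int:
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)                       # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    return r, (k + H(r, msg) * x) % q

def verify(y, msg, sig):
    r, s = sig
    return pow(g, s, p) == (r * pow(y, H(r, msg), p)) % p

qual = [1, 2, 3]                                  # indices of qualified nodes
keys = {i: keygen() for i in qual}                # long-term key pairs
partial = {i: f"partial_sig_{i}" for i in qual}   # placeholder partial sigs

# Each member signs (QUAL, its partial signature) with its long-term key.
link = {i: sign(keys[i][0], (tuple(qual), partial[i])) for i in qual}

# A third party checks one linkage signature per member (plus, in the real
# scheme, each partial signature): ~2n verifications, none aggregatable
# because every message is different.
assert all(verify(keys[i][1], (tuple(qual), partial[i]), link[i]) for i in qual)
```

The sketch makes the cost of point (2) visible: every message differs per member, so the verifier is stuck with one verification per signature.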

Can we do better?

## Forward Security

As defined in the Non-Interactive DKG by Groth, if we are using the distributed key for encryption, we would like clients to have only one public key to encrypt messages to, while DKG nodes are able to roll over to different decryption keys according to the epoch.

(TODO: can we argue that frequent resharing can act as this?)

## Curve Compatibility

Being able to run the DKG over regular curves helps with compatibility with other systems (think the secp Bitcoin curves).

Being able to run the DKG over pairing-friendly curves enables pairing-based schemes, which are often considered more efficient (like BLS signatures).

So in the end, we want the **verification** to be compatible with either

- BN254 or BLS12-381 (**preference**)
- Sec{k,p} Bitcoin curves

# Relaxation

## Prover Time

It is OK to assume that producing a proof of correct DKG computation (from a dealer's perspective) takes time, on the order of minutes to an hour.

## Network

We can assume an aggregator node that performs some "aggregation job" off-chain / off-bulletin-board to enable efficient verification. This node should only be trusted for liveness.

## Size of network

It might be beneficial to assume a specific structure, like a power-of-two network size (for FFTs with $2^n$-th primitive roots of unity, inner product proofs, etc.).
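To illustrate why power-of-two sizes help: over an "FFT-friendly" scalar field, a radix-2 NTT evaluates and interpolates polynomials in $O(n \log n)$. The sketch below uses the common NTT prime $p = 119 \cdot 2^{23} + 1$ (with generator 3) purely for illustration; a DKG field would need similar 2-adicity (e.g. $2^{32} \mid r - 1$ for the BLS12-381 scalar field).

```python
# Radix-2 NTT sketch over an FFT-friendly prime field: this is the kind of
# transform that motivates power-of-two network sizes.
P = 998244353          # 119 * 2^23 + 1, a standard NTT-friendly prime

def ntt(a: list[int], invert: bool = False) -> list[int]:
    """Recursive radix-2 NTT over F_P; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    w = pow(3, (P - 1) // n, P)            # primitive n-th root of unity
    if invert:
        w = pow(w, -1, P)
    even, odd = ntt(a[0::2], invert), ntt(a[1::2], invert)
    out, wk = [0] * n, 1
    for k in range(n // 2):                # butterfly: combine halves
        t = wk * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        wk = wk * w % P
    return out

def intt(a: list[int]) -> list[int]:
    """Inverse NTT: inverse roots plus a final scaling by n^{-1}."""
    n_inv = pow(len(a), -1, P)
    return [x * n_inv % P for x in ntt(a, invert=True)]

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]          # degree-7 polynomial, n = 8
assert intt(ntt(coeffs)) == coeffs          # round trip recovers coefficients
```

With $n$ a power of two, sharing polynomials can be evaluated at all $n$ points at once this way instead of point by point.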