More efficient SNARKs over large committed data, where proofs involve only a small part of the whole.
Verifiable Small Computation on Big Data
Many applications involve a (vector) commitment to a large chunk of data D, e.g. a whole blockchain or a database. If the properties we intend to prove involve only a small fraction of D, we would like the proving time to scale only with the portion of D the proof actually touches, not with the size of D as a whole.
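To make the cost goal concrete, here is a minimal interface sketch in Rust. Everything in it (the trait name, `commit`, `prove_partial`, `verify`, and the concrete types) is a hypothetical illustration of the desired shape, not the API of our construction: the prover commits once to all of D, and each subsequent proof should cost time proportional only to the positions it touches.

```rust
// Hypothetical interface sketch: all names and types below are illustrative,
// not the API of the construction described in this project.

/// Short commitment to the full data vector D (e.g., a vector commitment).
pub struct Commitment(pub Vec<u8>);

/// Succinct proof about the values of D at a few positions.
pub struct PartialProof(pub Vec<u8>);

pub trait PartialDataSnark {
    /// One-time commitment to all of D; this step may cost time linear in |D|.
    fn commit(data: &[u64]) -> Commitment;

    /// Prove a predicate over D restricted to `indices`.
    /// Target of the project: proving time proportional to |indices|
    /// (the touched sub-vector), not to |D|.
    fn prove_partial(
        data: &[u64],
        indices: &[usize],
        predicate: &dyn Fn(&[u64]) -> bool,
    ) -> PartialProof;

    /// Verification uses only the short commitment, the claimed indices,
    /// and the proof.
    fn verify(com: &Commitment, indices: &[usize], proof: &PartialProof) -> bool;
}

fn main() {
    // No concrete instantiation here; this file only pins down the shape of
    // the interface and the intended cost profile.
}
```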
We are designing a system with these properties, one that can be instantiated with existing trusted setups. How? By adapting techniques from Caulk and from the commit-and-prove universal SNARKs in ECLIPSE.
While schemes with similar properties already exist (e.g., Harisa), our new construction should be better suited to data committed as vectors (rather than sets), be concretely more efficient in some scenarios of interest, and be fully compatible with current setups.
NB: For a technically related project with a different scope, see also Lookup SNARK (for SHA).
People
Matteo Campanelli (DRI), Anca Nitulescu