The objective of this topic is to discuss the particulars and affordances of “prize-based public goods funding” as a potential allocation mechanism. Let’s first define prize-based public goods funding. By prize-based public goods funding, I mean:
funds (tokens) committed up front to a particular, intersubjectively measurable objective, to be paid out in some way to whichever parties first, or best, accomplish that objective, and
protocol and social structures to facilitate evaluation of whether or not the previously specified objective has been met (and to what degree), and distribute tokens as previously committed.
This definition is a reasonably broad umbrella:
Any mechanism which first commits to a measurable objective and then distributes funds to whoever has helped with that objective (using some pre-agreed methodology to evaluate) would fall within this definition.
Any mechanism which commits to specific recipients would not fall within this definition.
For example, to classify specific existing mechanisms,
Advance market commitments (e.g. Frontier Climate) would fall within this definition.
Winner-takes-all prizes (e.g. Ansari X-Prize) would fall within this definition.
Namada’s PGF system would not fall within this definition, because specific recipients are specified (there’s no way to commit funds to an objective but not a specific recipient).
Similarly, Gitcoin would not fall within this definition. In general, any form of public goods funding which commits to specific recipients and not objectives would not fall within this definition.
One could also imagine other rough shapes of mechanism which would fall within this scope:
Committing X tokens of RPGF per year for Y years to be distributed to whichever parties most help with objective O based on evaluation methodology M. An example objective could be “carbon removal”, and an example M could be “tons of carbon removed”. This is similar in desired effect to an advance market commitment, but structured as RPGF instead of a purchase agreement, which might be more flexible in some cases.
Committing X tokens as a prize for accomplishing objective O, to be distributed by evaluation methodology M to whoever contributed to accomplishing O. For example, O could be the development of a material with a specific tensile strength and production practices scalable enough to use it for space elevator manufacturing, and M could be a procedure for analyzing which research, companies, etc. contributed to this effort in which proportion(s).
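The two shapes above share a common skeleton: tokens X, objective O, and evaluation methodology M. A minimal sketch of that skeleton in Python, with all names and the example payout illustrative assumptions (a winner-takes-all prize like the X-Prize is just the special case where M assigns the full share to one party):

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: a prize commitment binds tokens to an objective
# and an evaluation methodology M mapping contributors to shares.
@dataclass
class PrizeCommitment:
    tokens: int                               # X: funds committed up front
    objective: str                            # O: intersubjectively measurable goal
    evaluate: Callable[[], Dict[str, float]]  # M: contributor -> share

    def distribute(self) -> Dict[str, int]:
        """Pay out the committed tokens in proportion to evaluated shares."""
        shares = self.evaluate()
        total = sum(shares.values())
        if total == 0:
            return {}  # objective not (yet) met; nothing is paid out
        return {who: int(self.tokens * s / total) for who, s in shares.items()}

# Winner-takes-all as a degenerate M: one party receives the full share.
prize = PrizeCommitment(
    tokens=10_000_000,
    objective="reusable crewed spacecraft launched twice within two weeks",
    evaluate=lambda: {"first_team_to_finish": 1.0},
)
print(prize.distribute())  # {'first_team_to_finish': 10000000}
```

A proportional-attribution M (e.g. “tons of carbon removed” per party) would return multiple nonzero shares, and the same `distribute` logic splits the pot accordingly.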
In general, prize-based public goods funding as defined here has a few strengths:
Committing tokens to a prize ahead of time creates an incentive for various agents in the broader ecosystem/market to develop solutions aimed at the prize objective, without the governance mechanism needing to know up front how precisely these solutions will be developed.
Assuming the evaluation methodology is reasonably clear, distributing tokens according to the methodology to whoever won/contributed/etc. is more credibly neutral and less subject to politicking than a PGF distribution mechanism where governance picks specific parties.
It also has a few weaknesses, or requirements:
Assuming that the goal is actually to accomplish the objective, ecosystem participants must take the prize seriously, and be willing to take on some risk themselves (given that they might not win the prize), especially as compared to a system where PGF distributions are up-front.
Evaluation methodologies could become quite intricate, and determining “who contributed what” to a major research or product advance is a very non-trivial problem.
Credit for parts of this idea to a discussion between myself and Dev Ojha.
I think refining the evaluation model has a lot in common with other oracle selection problems we’re facing, e.g. for wall clock or local weather oracles.
One first pass might look like this:
Any party which can verify the validity of a specific claim (e.g. the tensile strength of some material) can offer to perform this verification and produce a signed message attesting to it. Let’s call this an evaluation oracle.
Any agent can choose which oracles to believe.
Any agent can contest claims of oracles that they believe to be wrongful in any sense.
Disputes will need to be resolved by other oracles. This would probably lead to oracle hierarchies, similar to controller hierarchies. Disclosure of these hierarchies would provide a transparent commitment to the evaluation process, non-disclosure might make collusion harder in specific cases. (Hiding commitments are always an option.)
Ideally we can structure governance to select/rate oracles at all layers by competence and neutrality, including checks and balances that reinforce these traits.
For complex problems, networks of many oracles that can verify the necessary claims could be used.
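One way to picture dispute resolution in such an oracle hierarchy: a contested attestation escalates to the nearest oracle above both the attester and the contester, analogous to finding a common ancestor in a controller hierarchy. A minimal sketch, with all oracle names illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of dispute escalation in an oracle hierarchy:
# a contested attestation is arbitrated by the nearest common superior
# of the attesting and contesting oracles.
@dataclass
class Oracle:
    name: str
    parent: Optional["Oracle"] = None

    def ancestors(self) -> List["Oracle"]:
        """This oracle followed by its chain of superiors, root last."""
        node, chain = self, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

def resolve_dispute(attester: Oracle, contester: Oracle) -> Optional[Oracle]:
    """Return the nearest oracle above both parties, which arbitrates."""
    contester_chain = contester.ancestors()
    for candidate in attester.ancestors():
        if candidate in contester_chain:
            return candidate
    return None  # no common superior: escalate to governance

root = Oracle("root_evaluator")
materials = Oracle("materials_lab", parent=root)
rival = Oracle("rival_lab", parent=root)
print(resolve_dispute(materials, rival).name)  # root_evaluator
```

Disclosing the hierarchy amounts to publishing this tree; a hiding commitment to it would keep the arbitration path secret until a dispute actually occurs.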
Some ways to mitigate the difficulty of attribution would be to require open-notebook research, or mechanisms similar to preregistering studies.
We could use some commitment scheme, where participants going back on a promise would incur a (reputation) penalty. If open notebooks are required already, this could be structured in intervals, where even partial solutions could be rewarded, if the evaluation mechanism permits it.
This is a hard problem. Does any of the Schelling test thinking overlap here, using attestations for determining “who contributed to what”?
Is this thinking in line, or am I off base here:
It seems you would need to be able to keep the identities of institutions and individuals who receive funding private to make this work. Otherwise, you run into the problem of funding a prize where there is only a fixed set of individuals / orgs who can credibly complete the objective. This would mean you know in advance (within reason) who you are funding. ZPrize would be an example.
However, in an environment where identities are only cryptographic keys, there is reasonable doubt about whether the agent who wins the prize is even in the perceived set of qualified individuals.
Maybe – what defines a bounty system? Do you have examples in mind?
I think different objectives and different (attempted) attribution schemes have different difficulties here. For example, the original XPrize offered $10 million to “the first non-government organization to launch a reusable crewed spacecraft into space twice within two weeks”. Whether or not that objective has been achieved is not particularly difficult to evaluate, and in their scheme all of the funds went to the organization (who could internally distribute it however they wish).
Whether Schelling-test-like methods can support more sophisticated evaluation and/or prize distribution schemes is an interesting question which deserves further research. Your intuition makes sense to me – I think the problem spaces overlap a lot. We should investigate this.
I’m not quite sure I follow. If you announce an objective and prize (like XPrize did), in principle anyone can start a new organization, raise funds, and attempt to complete the objective and win the prize. In practice, that may be more or less difficult depending on the nature of resource allocation in the “real world” outside the protocol, but there’s no specific barrier to a previously new or unknown entrant winning a prize. Do you just mean that some expertise (e.g. spaceship design, ZKP cryptography) is not widely distributed? Can you further explicate (and also the example)?
In many cases (e.g. Zprize, Xprize), a winner would be expected to publish enough of their real-world identity information (and associate it with the public keys) to allow for verification. Whether or not we could privately award prizes is also an interesting question to explore (what would be required there in terms of making the choice of recipients credible).
Section 2.1 of this survey introduces a model of Bayesian persuasion in which a principal (evaluation oracle) tries to convince an agent (prize giver) to take a specific action, desirable to the principal.
Section 2.3, Theorem 2.4 states that, in the case of independent, non-identical actions, no efficient convex optimization problem can be formulated, but efficient sampling to optimize signaling schemes might be possible.
Incentive draft
The incentive structure in which the persuasion game is embedded must make correct certification the principal’s preferred action:
The utility of an evaluation oracle must be proportional to the (estimated) cost it incurs to validate the claim.
The disutility of validating a false claim must be proportional to the misallocated prize and (estimated) cost of falsifying the wrongful claim.
Depending on the hardness and uncertainty of determining whether a claim is correct, the penalty needs to be adjusted to disincentivize oracles which are insufficiently competent or colluding, while still keeping honesty profitable for actors above a (potentially very hard to determine) threshold of competence and effort.
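A worked toy version of this payoff structure, where every parameter and scaling factor is an illustrative assumption rather than a calibrated design. Reward is proportional to validation cost; the penalty for a wrongful certification is scaled by the misallocated prize and falsification cost, and inflated by the inverse detection probability so the *expected* disutility still bites when detection is uncertain:

```python
# Hypothetical payoff sketch for an evaluation oracle. All parameters
# (margin, detection_prob, etc.) are illustrative assumptions.
def oracle_payoff(
    validation_cost: float,     # estimated cost to validate the claim
    claim_correct: bool,        # ground truth, as later established
    certified: bool,            # what the oracle attested
    prize_at_stake: float,      # tokens misallocated if the claim is wrongful
    falsification_cost: float,  # estimated cost of falsifying a wrongful claim
    detection_prob: float,      # chance a wrongful certification is caught
    margin: float = 0.25,       # profit margin over validation cost
) -> float:
    reward = (1 + margin) * validation_cost
    if certified == claim_correct:
        return reward
    # Penalty scaled by 1/detection_prob so that the expected penalty
    # still exceeds the gain from collusion under uncertain detection.
    penalty = (prize_at_stake + falsification_cost) / detection_prob
    return reward - penalty

# An honest, competent oracle profits; a wrongful certification expects a loss.
print(oracle_payoff(100, True, True, 10_000, 500, 0.5))   # 125.0
print(oracle_payoff(100, True, False, 10_000, 500, 0.5))  # -20875.0
```

The hard open problem the paragraph above points at is hidden in `detection_prob` and `validation_cost`: both must themselves be estimated, and mis-estimating them is exactly how an incompetent or colluding oracle could remain profitable.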
Yes, this is precisely what I mean. I used ZPrize as an example because there are only so many people in the world working on the intersection of zero-knowledge proofs and hardware acceleration. In principle, anyone can form a team and compete. But in practice, the competitive teams / individuals could be known in advance. Which means that you are designing the prize for only a select group of people, in a sense, which means there is a sense in which the recipients are specified in advance. But maybe this doesn’t matter, and I’m fixating on the wrong thing here.
Yes, I see. I think that’s true, but I also think that it doesn’t pose too much of a problem for the kind of credible neutrality I’m talking about here:
While I suppose someone could develop specialized skills specifically because they think there might be a prize related to those specialized skills in the future, this doesn’t seem like it violates the neutrality of the network itself – it’s just smart planning. It actually might be a benefit, if these kinds of prizes become common.
A potentially more concerning case is if someone lobbies the network to create a prize which is specific to skills so niche that only the person or organization in question has them. This is possible, but I’d say that preventing it is rather the job of the overall network governance mechanism (whatever is selecting prizes), and doesn’t have much to do with whether recipients’ identities are public or not – it should be reasonably obvious whether a particular objective is so specific that only one person or organization could possibly win a prize associated with it.
There is some broader question of neutrality with regards to groups (e.g. people with applied cryptography experience) versus individual teams or organizations, and in a sense the network does favor the group of applied cryptographers by creating a prize related to applied cryptography (relative to another potential prize requiring different skills). I think this is good to note, but I don’t think it’s a problem for credible neutrality, because it should be obvious to the network governance function (so the choice to affiliate/favor such a group is taken consciously).
@degregat This is helpful, thanks – I will read the linked references later – I just wanted to note that I think the aspect of this question you’re investigating now is basically the class of questions relevant to generalized “Schelling games” (as in the other thread), which is the same correspondence that @apriori mentioned above.