The objective of this topic is to discuss the possible affordances of proof-of-work as a distribution allocation mechanism. Let’s first define proof-of-work. By proof-of-work, I mean:
computational work done on a problem that is hard to solve, but where a valid solution is easy to verify (broadly, the complexity class NP), where
the problem can be iterated by changing the inputs slightly (e.g. an incrementing nonce in a preimage to a hash function), and the difficulty can be easily altered (e.g. a target number of 0s in the hash output), such that the problem can measure (in expectation) a certain amount of work done (e.g. a certain number of expected hashes).
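As a minimal sketch of this definition (the hash function, nonce scheme, and difficulty encoding here are just illustrative choices, not a proposal):

```python
import hashlib
import os

def solve(puzzle_data: bytes, difficulty_bits: int) -> int:
    """Iterate a nonce until SHA-256(puzzle_data || nonce) falls below a target.
    Expected work is roughly 2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)  # raising difficulty_bits by one doubles expected work
    nonce = 0
    while True:
        digest = hashlib.sha256(puzzle_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(puzzle_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, regardless of how much work solving took."""
    digest = hashlib.sha256(puzzle_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: ~2**16 expected hashes.
data = os.urandom(16)
n = solve(data, difficulty_bits=16)
assert verify(data, n, difficulty_bits=16)
```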
Proof-of-work in the case of Bitcoin is also used for Nakamoto consensus, but this use does not concern us here – we are considering only proof-of-work as a potential mechanism for:
Token distribution, in conjunction with
Creating an incentive to optimize a particular function (e.g. a hash function), and
Creating a certain baseline energy demand.
These three effects are coupled – by distributing tokens to whoever solves a proof-of-work puzzle (assuming the tokens have some value), all three will necessarily result. They may be desirable or undesirable for different reasons, for example:
Proof-of-work is a nice mechanism to distribute tokens to random parties (whoever was computing the proof-of-work puzzle) who do not need to have any prior connection to anyone else using the protocol.
Development of optimized software and hardware for some functions – for example, a hash function used in the ZK proof scheme – may be a public good, which might therefore make sense to incentivize.
Creating a baseline energy demand may change the economic calculus of certain electricity suppliers, and encourage development of supply that could be quickly repurposed for higher-value demand when necessary – but on the other hand, it may also create more incentives for methods of electricity generation that externalize pollution and other negative side effects.
Luckily, the advantage of considering proof-of-work strictly as a distribution mechanism is that the network could have pretty precise control over these effects, because the amount or nature of proof-of-work distribution is not related to network security. For example, the network could have a set of proof-of-work puzzles (e.g. different hash functions) and current distribution amounts per unit time. As the benefits and/or drawbacks of these effects change over time, the puzzles and distribution amounts could be adjusted accordingly.
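To make "a set of proof-of-work puzzles and current distribution amounts per unit time" concrete, here is a hypothetical sketch of what such a schedule could look like as ordinary protocol parameters (the names and numbers are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PuzzleSchedule:
    puzzle_id: str           # which puzzle / hash function this entry refers to
    difficulty_bits: int     # expected work per valid solution (~2**difficulty_bits attempts)
    tokens_per_epoch: float  # distribution budget allocated to this puzzle per unit time

# A hypothetical schedule: because none of this affects consensus security,
# the set of puzzles and their per-epoch budgets can be adjusted by whatever
# governance or control process the network uses.
schedule = [
    PuzzleSchedule("sha256-preimage", difficulty_bits=40, tokens_per_epoch=1_000.0),
    PuzzleSchedule("zk-friendly-hash", difficulty_bits=32, tokens_per_epoch=5_000.0),
]

def adjust_budget(schedule, puzzle_id, new_tokens_per_epoch):
    """Retargeting distribution toward or away from a puzzle is just a parameter change."""
    for entry in schedule:
        if entry.puzzle_id == puzzle_id:
            entry.tokens_per_epoch = new_tokens_per_epoch
```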
Rapidly changing the puzzles and distribution amounts may have other side effects, some of which may be desirable and some of which may not – for example, rapid change would give a relative advantage to general-purpose hardware (for which new software for a particular new puzzle can quickly be written, and which can quickly switch between puzzles), as opposed to e.g. the specialized ASICs which have been developed for Bitcoin. On the other hand, rapid change is also less likely to create the conditions where the capital expenditure required for developing new hardware and energy supply makes sense (and those might sometimes be desired).
I think, roughly, we can treat these kinds of proof-of-work distribution options as “control inputs” in our cybernetic system (a rough sketch of one such control loop follows the list below), where the targets should be things like:
a certain algorithmic speed on a particular puzzle (e.g. hash function)
a certain diversity of distribution (measuring this could be complex)
a certain energy supply infrastructure (which also depends on a lot of other inputs)
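Here is a minimal sketch of the control-loop idea, using an invented proportional rule and an observed hash rate as the only measured signal (real targets such as distribution diversity or energy supply infrastructure would need their own estimators; all constants are placeholders):

```python
def update_reward_rate(current_rate: float,
                       observed_hashrate: float,
                       target_hashrate: float,
                       gain: float = 0.1,
                       max_step: float = 0.25) -> float:
    """One step of a simple proportional controller: nudge the per-epoch
    distribution toward the level that (we hope) elicits the target hash rate."""
    error = (target_hashrate - observed_hashrate) / target_hashrate
    step = max(-max_step, min(max_step, gain * error))  # clamp to avoid rapid swings
    return current_rate * (1.0 + step)

# e.g. observed hash rate below target -> slightly increase the distribution rate
new_rate = update_reward_rate(current_rate=1_000.0,
                              observed_hashrate=8e12,
                              target_hashrate=1e13)
```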
awesome proposal for a potential distribution allocation mechanism!
for distribution purposes, is it beneficial to target gpu mining as more people have access to gpus?
what are the risks of mining pools forming where miners amass large amounts of the token over time? if there is governance minimization then it doesn’t really matter.
Potentially. I think if the primary goal is distribution to users with a specific type of hardware, you could probably craft a proof-of-work puzzle which is at least (for a while) most efficient to run on that type of hardware.
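As one illustration (not an endorsement of any particular construction), memory-hardness is a commonly used lever for narrowing the gap between specialized and commodity hardware, at least for a while, since it shifts the bottleneck from raw hashing circuits toward memory. A sketch using the standard library's scrypt as a stand-in for whatever memory-hard function one might actually pick:

```python
import hashlib

def memory_hard_attempt(puzzle_data: bytes, nonce: int) -> bytes:
    """One puzzle attempt via scrypt. With n=2**14, r=8 each attempt needs
    ~16 MiB of memory, shifting cost toward memory bandwidth rather than
    pure hashing throughput -- one common (if imperfect) ASIC-resistance lever."""
    return hashlib.scrypt(nonce.to_bytes(8, "big"),
                          salt=puzzle_data, n=2**14, r=8, p=1, dklen=32)

def solve(puzzle_data: bytes, difficulty_bits: int) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(memory_hard_attempt(puzzle_data, nonce), "big") >= target:
        nonce += 1
    return nonce
```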
I think that depends not only on the particulars of proof-of-work distribution but also on what other distribution is happening simultaneously. A plurality of distribution mechanisms in combination with the reality that professional miners who invest in hardware and electricity have to recoup some of their costs (i.e. sell some tokens) should be sufficient to minimize this concern, I think, although it’s certainly something to keep in mind.
awesome papers. i have a question about self-stabilizing systems as i’m not too familiar with the concept.
for example, systems that stabilize themselves – are you thinking of something like:
bitcoin’s difficulty adjustment to keep block times at 10 minutes
ethereum’s fork choice rule which eventually picks the “heaviest” chain
the paper used an example from federated learning, where it suggested using “wasted” bitcoin mining energy for federated learning. i would assume this is also an example of a self-reinforcing system.
is this the right way to think about it? @graphomath
I am not too familiar with it either. Speaking from half-knowledge, I was associating it with Niklas Luhmann’s Systems Theory, and I now find this passage from a random source[1]:
The difficulty adjustment in bitcoin strikes me as a good example of a mechanism that keeps the system in its present state.
The heaviest chain rule (as followed by the Ethereum community) is maybe a little bit of a different thing, but it could be considered part of the mechanism that keeps the chain running (cf. self-preservation).
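For concreteness, the feedback rule behind that stabilization is roughly (retargeting every 2016 blocks, with the adjustment factor clamped to $[\tfrac{1}{4}, 4]$):

$$
T_{\text{new}} \;=\; T_{\text{old}} \cdot \operatorname{clamp}\!\left(\frac{t_{\text{actual}}}{t_{\text{expected}}},\; \tfrac{1}{4},\; 4\right),
\qquad t_{\text{expected}} = 2016 \times 10\ \text{minutes},
$$

where $T$ is the target threshold under which a block hash must fall (a larger $T$ means an easier puzzle). If the last 2016 blocks arrived faster than expected, the target shrinks and the puzzle gets harder, pushing block times back toward 10 minutes; if they arrived slower, the puzzle gets easier.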
Right, indeed, I should have spelled out the self-stabilizing mechanism; so, let us take the airdrop allocation example from the other thread. If we suppose, for the sake of the thought experiment, that it makes sense to think of “the statistical estimate of the correct token allocation given the data of extant semantic attestations” as a hard problem that is amenable to useful proof of work, e.g. involving some hard inference problems (beware, semi-random link), then proof-of-useful-work and the “attestation game” may interact with each other in the following way:
investing computational power to contribute to the attestation game should increase the chances of getting airdrops
whoever obtains more airdrops can invest more computational resources in the attestation game
… and so on, ad infinitum
Now, the question is whether this is possible in a sustainable way: we would need to be sure beyond doubt that we are not just re-inventing a fancy version of PoW as used in bitcoin. One way to avoid this is by making proof of work actually also involve proof of human work, but without introducing surveillance …
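A toy model of that loop (all of the dynamics here are made up, purely to show when the compounding concern bites): each round, a participant’s airdrop is proportional to their share of computation, and some fraction of it is reinvested as additional compute.

```python
def simulate(compute, reinvest, rounds=20, airdrop_per_round=100.0):
    """Toy model of the compute -> airdrop -> compute loop.
    compute:  initial computational power per participant
    reinvest: fraction of each participant's airdrop converted back into compute
              (assumption: tokens convert linearly into compute)"""
    compute = list(compute)
    for _ in range(rounds):
        total = sum(compute)
        compute = [c + r * airdrop_per_round * c / total
                   for c, r in zip(compute, reinvest)]
    return compute

# If everyone reinvests the same fraction, relative shares stay constant; shares
# concentrate only when some participants can reinvest a larger fraction than
# others (e.g. they do not need to sell tokens to cover costs) -- which is the
# "fancy version of bitcoin PoW" dynamic one would want to check for.
print(simulate([1.0, 1.0, 1.0], reinvest=[0.9, 0.5, 0.5]))
```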
Emphasis is mine (no emphasis in the original). ↩︎
thank you for explaining your thinking. i will take a closer look at some of the literature you referenced.
in principle you could require attestations as a gate-keeping mechanism for contributing to proof-of-work, ensuring that all p-o-w is operated by individual humans, if that property is desired.
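a rough sketch of what such gate-keeping could look like, assuming some attestation credential exists (the credential format and validity check here are entirely made up): the credential is bound into the puzzle preimage, so work only counts when submitted together with a valid attestation.

```python
import hashlib

def gated_attempt(attestation: bytes, puzzle_data: bytes, nonce: int) -> bytes:
    """Bind an attestation credential into the preimage, so a solution is only
    meaningful for the attested identity that produced it. How attestations are
    issued and verified is left entirely open here."""
    return hashlib.sha256(attestation + puzzle_data + nonce.to_bytes(8, "big")).digest()

def accept(attestation: bytes, is_valid_attestation, puzzle_data: bytes,
           nonce: int, difficulty_bits: int) -> bool:
    # Reject work outright unless the submitter holds a valid attestation.
    if not is_valid_attestation(attestation):
        return False
    digest = gated_attempt(attestation, puzzle_data, nonce)
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```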