Simple median stream voting as a distribution governance mechanism

One simple distribution governance mechanism (remember the taxonomy here) I’ve been thinking about lately is “median stream voting”, and I think this mechanism might be worth a little more analysis.

The basic setup would be as follows (a minimal code sketch follows the list):

  1. Assume some voting weight function, which we will leave out of scope (e.g. staked tokenholders can vote, as a simple example, with weight proportional to their tokens staked).
  2. Assume a dynamic set of funding streams, which we will just identify as F_0, F_1, etc. Anyone can create a new funding stream at any time (it becomes F_n). Funding streams are identified by a target destination (which might in practice be another contract, application, etc. governing further distribution of any funds sent to that stream).
  3. For each funding stream, voters simply vote as to what they think the issuance to that stream should be (we can take no vote to be equivalent to voting “0”).
  4. We set the issuance of each stream to the median of all votes for it.
    • If more than half of the voting weight voted “0”, there will be no issuance.
    • Otherwise, the issuance will be whatever the median voted issuance was.
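
To make the rule concrete, here is a minimal sketch in Python (my own illustration, not a specification - the function name, the vote representation, and the lower-median tie-breaking convention are all assumptions):

```python
def stream_issuance(votes, total_weight):
    """Weighted median issuance for a single stream.

    votes: list of (weight, amount) pairs from voters who voted.
    total_weight: total voting weight in the system; any weight
    that did not vote is treated as voting 0.
    """
    # Non-voters count as an implicit vote of 0.
    abstaining = total_weight - sum(w for w, _ in votes)
    entries = [(abstaining, 0)] + sorted(votes, key=lambda wv: wv[1])

    # Lower weighted median: the smallest amount such that more than
    # half of the total weight voted for that amount or less. If more
    # than half the weight voted 0 (or abstained), this returns 0.
    cumulative = 0
    for weight, amount in entries:
        cumulative += weight
        if cumulative > total_weight / 2:
            return amount
    return 0

# Total weight 100: three voters (weights 30, 25, 20) vote; the
# remaining weight of 25 abstains and so counts as voting 0.
assert stream_issuance([(30, 500), (25, 200), (20, 0)], 100) == 200
```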

This scheme could work continuously or in periods - it doesn’t really change the mechanism, but fixed periods might make it easier to coordinate attention and provide predictability.

This has the advantage of not really making any decisions about funding semantics at all – there are just streams, which can be further subdivided using different mechanisms.

A possible disadvantage is that it may not elicit true preferences, if voters vote for an issuance higher (or lower) than they actually want in order to try to affect the median. Periods and temporary privacy of votes would probably help with this concern. On the other hand, there might be an advantage to votes being public, as participants could suffer a reputational penalty for voting in a way which they aren’t prepared to publicly defend.
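
As a toy illustration of how limited this kind of manipulation is (my own example, with equal weights for simplicity): a voter whose vote is already above the median cannot move the median at all by exaggerating further, only by crossing to the other side of it.

```python
from statistics import median

honest = [0, 40, 60, 80, 100]
print(median(honest))        # 60

# The top voter exaggerates wildly; the median does not move,
# because their vote was already above it.
exaggerated = [0, 40, 60, 80, 10**9]
print(median(exaggerated))   # still 60
```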

This could serve as a meta-mechanism for more specific distribution allocation mechanisms such as generalized proof-of-stake, proof-of-work, mutual network credit, Schelling tests, etc. It has the great advantage of being very simple and easy to understand (as well as implement).

Let’s discuss. /cc @degregat @apriori in particular.


Since this is a candidate for a meta-mechanism, I think we first need to figure out what type of social welfare function we want to implement: do we want to choose between funding streams, rank them, or just weight them by preference?
Reading your example, it seems that we want to weight them?

Then, we should think about whether it should be, e.g., utilitarian (maximizing the total sum of utilities), max-min (maximizing the minimum utility), or something else.
The choice here probably depends on the use case and constituent choice/preferences.
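
For reference, the two candidates named above, written as social welfare functions over individual utilities $u_i$ (standard textbook definitions, not specific to this thread):

$$W_{\text{util}}(u_1, \dots, u_n) = \sum_{i=1}^{n} u_i \qquad\qquad W_{\text{maxmin}}(u_1, \dots, u_n) = \min_{i} u_i$$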

Then we can make a wishlist of properties and see whether the required assumptions for them are justifiable, and don’t imply any impossibility results.

Note: In general, we probably want to come up with characterizations like this for the whole hierarchy of mechanisms, with more fundamental ones ideally requiring fewer assumptions, s.t. they can fix parameters/assumptions for downstream mechanisms, but this work can be done iteratively.

If things turn out well, we might even have some parameters to choose from a well-characterized family of mechanisms with different tradeoffs, s.t. every entity that wants to issue stake only needs to make this initial choice out of band and could then bootstrap from there.


[Image: a visual representation of my thoughts, summarizing this thread from our conversation.]


Yes (in the context of distributing a token).

I think one implicit assumption in this model is that funding streams are relatively independent, in the sense that they fund different things and choices about them can be made relatively independently (I also think this is an assumption we can make, at least for this investigation, but it’s good to make it explicit).

The basic underlying goal assumed here is for stakeholders in a network (tokenholders) who want to redistribute ownership of that network as a means to encourage desired actions (whatever is actually funded) to be able to do so. How much to redistribute (and encourage a particular action) is clearly a cardinal preference (in the social welfare sense), not an ordinal one. I guess the other choice we’re making here is to use the median function as a way to combine votes. That seems to me to correspond to an assumption that the underlying space of rewarded actions is relatively continuous and always-increasing or always-decreasing w.r.t. preferences, such that my vote for a higher value than actually selected pushes the median up in a way that makes sense given my preferences, and your vote for a lower value pushes the median down in a way that makes sense given yours.
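
One standard way to make that assumption precise (my framing, not from the thread): each voter $i$ has single-peaked preferences over the issuance level, with some ideal point $p_i$:

$$x < y \le p_i \;\Rightarrow\; u_i(x) < u_i(y), \qquad p_i \le y < x \;\Rightarrow\; u_i(x) < u_i(y)$$

Under single-peakedness, reporting one’s true peak is a weakly dominant strategy under the median rule (a classical result), which also bears on the preference-elicitation concern raised earlier in the thread.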

I think we want to think about the network as decomposing into institutions with distinct governance, which could be any service provider, be it consensus, routing, solving/compute, storage. Then funding streams could be seen as voting on allocation distributions of the output of an institution, e.g. how bandwidth of a consensus provider is used, or how storage is allocated to current and potential users.

Then stake of a composed institution (or the whole network) would act as votes on distribution for the aggregate output, subject to internal commitments being upheld.

The value of stake should then in expectation not exceed the utility agents could derive from the voting power it confers. For this to hold, switching costs and barriers to entry need to stay sufficiently low, s.t. users could always choose to use a service that better approximates their target distribution.
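
As a rough inequality (my paraphrase of the claim above, with $c$ a hypothetical bound on switching costs and barriers to entry):

$$\mathbb{E}[\text{value}(\text{stake})] \;\le\; \mathbb{E}[u(\text{voting power})] + c$$

The statement in the thread is then the limit where competition keeps $c$ small.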

Using and incrementally staking in a service become equivalent if we do mutual network credit based on service consumption.

The utility from stake will depend on:

  • The preferences of the constituents for what the target distribution should look like (although this is not stationary, and the stake value derives from the value of potential future outputs).
  • The mechanism(s) aggregating preferences within institutions.
  • How well the self-regulation of the institutions can bound their aggregate behavior to realize the aggregate target distributions.

Interesting. I think this frame makes sense when the institution being governed is a collective provider of a service (such as the examples which you specify). In this case, the institution being governed is more abstract – let’s say the Anoma network as a whole, where the funding streams fund public goods such as protocol development, research, open-source hardware, etc. – so I’m not quite sure what the “output” would be, exactly. I see it as more like “Anoma token governance will fund X, Y, Z” – which can of course change over time – but where users would choose to hold (or acquire) the token if they support funding X, Y, and Z (or more so than not in aggregate, compared to the other options available). So in a sense (albeit indirectly) the good being allocated is “surplus resources” which holders / supporters of the token have decided to allocate to these longer-term public goods projects (supposing that issuance funds those).

In this case, I see agents as deriving utility from voting power insofar as that voting power is an effective use of their attention (or that of another to whom they delegate their vote), in combination with the rest of the network participants, to support recipients of funding which they care about - i.e. they derive utility from the outputs of the funding. I think the argument about low switching costs also applies here, but in the sense of being able to switch to a different token whose funding distribution better matches the agent’s preferences (this is starting to look a lot like scale-free money from a different angle).


Notes from our discussion (cc: @cwgoes):

  • We should analyze the strategyproofness of different aggregation functions. Taking the median seems like a good start, as it should at least be robust against subgaussian misreport distributions, i.e. when the agents who over- and underreport balance each other out in expectation (see the simulation sketch after this list).
  • Parallelizable aggregation functions would be preferable for scalability.
  • Each stream should support separate voting intervals, to enable making commitments of different lengths. There can be multiple streams with different intervals for a single issue.
  • If staking weight depends on lockup duration, an interest in the sustained existence of the network might counterbalance increased bribe attempts. We should analyze different weighting curves.
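
A toy simulation of the first point (entirely my own sketch; the value distribution, fraction of strategic agents, and exaggeration factor are arbitrary assumptions), comparing how far the mean and the median drift under one-sided exaggeration:

```python
import random
from statistics import mean, median

random.seed(0)

def drift(aggregate, n=1001, frac_strategic=0.2, factor=10.0):
    """How far an aggregate moves when a fraction of agents exaggerate."""
    truth = [random.gauss(100, 15) for _ in range(n)]
    reports = [t * factor if random.random() < frac_strategic else t
               for t in truth]
    return abs(aggregate(reports) - aggregate(truth))

print("mean drift:  ", drift(mean))    # large: scales with the exaggeration
print("median drift:", drift(median))  # small: a bounded quantile shift
```

Under one-sided exaggeration the mean can be dragged arbitrarily far, while the median only moves by a bounded quantile shift; in the balanced over/under-reporting case described above, the median stays approximately put.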