Protocols as composed games

Some of the recent discussions about integrating the different components of anoma made me think of approaches that enable us to do incremental, but (coarsely) unified, analysis of the whole stack.

Layers of a protocol embedding

When designing a protocol, we need to take into account how it is embedded into its context.

Physics

The physics a protocol rests upon describes the relevant constraints a model of reality imposes on the protocol.
In the case of anoma the constraints can be split into (at least) two classes:

  1. Complexity constraints. This includes, e.g., intractability assumptions for certain problems, which are relevant to the cryptographic primitives we use.

  2. Network constraints. This includes, e.g., the CAP theorem, consensus assumptions and models, the best approximation of time being a vector clock, and messages being the only observable objects.
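To make the vector-clock point concrete, here is a minimal sketch (illustrative only, not anoma code) of how agents can approximate time using nothing but the messages they observe:

```python
# Sketch: a minimal vector clock, illustrating the claim that the best
# approximation of time available to a protocol is a vector clock over
# observable messages. All names here are illustrative.

def increment(clock, node):
    """Advance `node`'s component before it sends a message."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def merge(local, received, node):
    """On receipt: take the component-wise maximum, then tick locally."""
    merged = {n: max(local.get(n, 0), received.get(n, 0))
              for n in set(local) | set(received)}
    return increment(merged, node)

def happened_before(a, b):
    """a -> b iff a <= b component-wise and a != b."""
    keys = set(a) | set(b)
    return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
            and any(a.get(k, 0) < b.get(k, 0) for k in keys))

# Two nodes exchange one message:
alice = increment({}, "alice")   # alice sends with clock {"alice": 1}
bob = merge({}, alice, "bob")    # bob receives: {"alice": 1, "bob": 1}
assert happened_before(alice, bob)
assert not happened_before(bob, alice)
```

Note that `happened_before` is only a partial order: two clocks can be incomparable, which is exactly the sense in which messages, not wall time, are the observable objects.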

Protocol

The protocol we define is a set of behavioral commitments, i.e. how agents that implement the protocol react to messages of specific form and content, and when to send them. By extension, agents expect other agents that implement the protocol to behave the same way.

For these expectations to be fulfilled, individual commitments need to be verifiable within the protocol, usually by being bound to its assumed physics in reliable ways.

For example: cryptographic primitives like signatures, hashes and ZK proofs enable us to verify specific classes of statements, which we can use to construct higher-order semantic commitments.
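As a toy sketch of that idea: a tag over a statement lets anyone holding the verification key check the commitment later. HMAC stands in here for a real signature scheme (a production protocol would use asymmetric signatures); all names are illustrative.

```python
# Sketch: binding a behavioral commitment to physics via a cryptographic
# primitive. HMAC-SHA256 is a stand-in for a signature scheme; the
# shared key is a simplification of a public/private key pair.
import hashlib
import hmac

def commit(key: bytes, statement: bytes) -> bytes:
    """The committing agent tags a statement it promises to honor."""
    return hmac.new(key, statement, hashlib.sha256).digest()

def verify(key: bytes, statement: bytes, tag: bytes) -> bool:
    """Any holder of the key can check that the statement was committed to."""
    return hmac.compare_digest(commit(key, statement), tag)

key = b"shared-verification-key"   # illustrative
statement = b"I will relay every valid message within one round"
tag = commit(key, statement)
assert verify(key, statement, tag)
assert not verify(key, b"a different statement", tag)
```

The higher-order semantic commitment is then built on top: the protocol interprets a valid tag as a binding promise about future behavior, not just as a string equality check.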

It is important to note that behaving according to the protocol should be (as close as possible to) incentive compatible for agents, if we want to be able to expect adherence.

Out-of-band

Any behavior of agents that falls outside of the behavioral commitments specified in the protocol.

There might be commitments that describe behaviors at the boundary, e.g. how an information oracle ought to function: to use information from outside of the protocol, an oracle vouches for the validity of some information by wrapping it in a signed message that also contains the oracle's identity. This way the oracle stakes its reputation on the validity of its claim.

The protocol could also define (or leave implicit) a fallback to out-of-band governance, e.g. to recover from certain failure modes, or to choose which mechanisms to protocolize.

Protocols as composed games

One approach to model the above could be compositional game theory.
It uses an open game formulation (see Section III of the paper) which admits composition of games, and more importantly the analysis of their properties under parallel and sequential composition.

The framework should also enable us to decompose the larger games we have identified as we learn more.

This might sound similar to universal composability, with weaker guarantees and more flexibility. This seems helpful for guiding our exploration of the design space and for proposing candidates for an ideal functionality of anoma. Once we have a well-defined idealized functionality for anoma, trying to come up with a full UC proof for our construction would still be desirable.

Binding between games and across layers

Looking back at the whole stack (physics, protocol, out-of-band) with the new perspective of decomposing it into smaller games, let us revisit a concept which we are going to call binding.

Binding determines the strength of coupling between games, essentially modelling constraints for their composition operator.

In the composed game setting, we should be able to model the composition operator between two games as a binding game, or move it into the games being composed.

Games that are strictly bound from out-of-band to the protocol should be treated as implicitly in-band. (For physics this is technically true as well, but as long as there is negligible feedback from the protocol to its physics, we can ignore this.)

Examples:

Cryptography

A cryptographic primitive can be modeled as a game. Assuming its underlying assumptions hold and its implementation is correct, it strongly binds any game composed with it to the physics.

Distributed systems

TODO

Credible commitment devices

Let us first briefly import the definition of a credible mechanism. From the abstract:
“Suppose the auctioneer” (or mechanism designer) “can deviate from the rules provided that no single bidder detects the deviation. A mechanism is credible if it is incentive-compatible for the auctioneer to follow the rules.”

We might be able to introduce a relaxation of this notion of credibility: assume two mechanisms A and B, composed as A \circ B. A is a pre-existing mechanism, and a designer wants to establish mechanism B. Assume all potential participants for B are already participating in A.
Then an approximate directional credibility cred(A, B) of B with respect to A can be recovered if the designer of mechanism B must account for the participants' ability to detect deviation, and use this information for enforcement in mechanism A (see slow games). I don’t expect this to be symmetric, i.e. cred(B, A) probably has to be analysed separately.

A credible commitment device (a protocol can be one) can be used to bind some types of games strongly to each other. An approximate commitment device will fall somewhere on the weak-strong scale, proportional to the accuracy of approximation.

Commitment devices can be used to structure the incentives between in-band and out-of-band games.

Correspondence of representations

Let us introduce proofs-of-translation: They prove correspondence between different representations of the same object, e.g. source code and target code, in-memory structs and wire formats.

For any language pair with deterministic translation (e.g. deterministic compilation, provably correct parsing), everyone who has access to the source and target representations can verify their correspondence.
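A minimal sketch of this verification step, assuming determinism of the translation function (the toy "compiler" below is invented purely for illustration):

```python
# Sketch: verifying correspondence between two representations of the
# same object when the translation between them is deterministic.
# `translate` is any deterministic function (compiler, serializer, ...).
import hashlib

def fingerprint(representation: bytes) -> str:
    """Content-address a representation."""
    return hashlib.sha256(representation).hexdigest()

def verify_translation(source: bytes, target: bytes, translate) -> bool:
    """Re-run the translation and compare fingerprints of the results."""
    return fingerprint(translate(source)) == fingerprint(target)

# Toy deterministic "compiler" for illustration: uppercase the source.
compile_ = lambda src: src.upper()

assert verify_translation(b"add r1 r2", b"ADD R1 R2", compile_)
assert not verify_translation(b"add r1 r2", b"SUB R1 R2", compile_)
```

Determinism is the load-bearing assumption: if the translation is nondeterministic, re-running it no longer yields a fingerprint that must match, and a proof-of-translation has to carry more information.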

This can bind games requiring different representations.

What does that mean for us?

Let us derive some (draft) principles for protocol design and implementation:

  • The specification needs to construct a hierarchy of games that bind to each other and fundamentally to the physics.

  • Compatible implementations are isomorphic up to implementing the behavioral commitments the protocol specifies.

  • Behavioral commitments should be defined at the lowest layer where the relevant information to enforce the commitment is available, to maximize generality.

Failure modes of binding and band separation

As designers we try to come up with abstractions that ideally are not leaky, but when dealing with complex, composed systems, mistakes might happen. Mistakes that could happen with the binding between games might look like this:

Behavior that is supposed to be out-of-band and not bound, but which is strongly bound

  • The definition of the protocol did not assume the possibility of some external game that, after its discovery, makes it incentive-compatible for agents to defect out-of-band by participating in it.

Behavior that is supposed to be in-band, but which is not bound

  • The credibility of a mechanism relied on some observations being possible, but a new side channel was discovered which enables effective out-of-band coordination, reducing observability.
  • Some cryptographic primitive, or its implementation, is found to be insecure or broken, removing the binding to physics and invalidating any mechanism relying on the primitive.

Open questions

  • Given a specific physics, is there a unique protocol (up to which isomorphism?) which enables us to utilize all optional expressivity induced by the physics, but not more? This would be a protocol that does not introduce artificial constraints or rely on “unnatural” assumptions which cannot be derived from the physics.

  • Do separation games exist, or can there only be the absence of specific binding games? Could information flow control be related?

  • All of the technical details.


I would amend this slightly to something like “individual commitments need to be assessable within the protocol”, where assessment (more general than verification) may include:

  • direct verification, where possible
  • statistical assessment based on historical behavior (e.g. a commitment which has been kept so far may be treated as likely to be kept in the future)
  • assessment-by-expectation (e.g. evaluating whether this commitment would lead to an equilibrium which plausibly makes sense for the agent making the commitment)
  • possibly additional methods
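The second method in this list can be made concrete with a small sketch (the estimator and smoothing choice are illustrative, not a proposal):

```python
# Sketch of "statistical assessment": estimate how likely a commitment is
# to be kept from the history of past outcomes, using Laplace smoothing
# (the rule of succession) so an empty history yields 0.5 rather than
# a division by zero or unwarranted certainty.

def keep_probability(history):
    """history: list of booleans, True = commitment was kept."""
    kept = sum(history)
    return (kept + 1) / (len(history) + 2)

# An agent that kept 8 of 10 commitments so far:
assert keep_probability([True] * 8 + [False] * 2) == 9 / 12
# No track record yet: maximally uncertain.
assert keep_probability([]) == 0.5
```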

Following a principle of “least-assumptions most-binding”, or something like that, we can impose the further requirement that the ideal protocol should allow for a given commitment to be assessed in the way requiring the fewest assumptions, given the constraints imposed by physics and possible additional practical computational limitations. I think that this requirement – properly mathematically articulated – should say quite a bit about what form the ideal protocol must take, e.g. it implies something like the message logic / service commitment system.

To clarify, is the behavior you’re talking about here:

  1. Literally any behavior (any messages sent) that isn’t specifically committed to in protocol-legible commitments, or
  2. Any behavior that isn’t specifically committed to in protocol-legible commitments, and is relevant to future protocol state.

These are pretty different classes. My first guess is that for the most part we will just care about the latter – or at least we should start the analysis there (and the latter definitely includes e.g. oracles and external governance mechanisms which you mention).

I think I get some general intuition, but it would help my understanding here a lot if we could be a bit more mathematically specific as to what we mean by “game”, “composition”, “coupling”, etc. A few questions which might illuminate this:

  1. If the equilibria of two games are unaffected by composition, would you say that those games have “neutral binding”?
  2. If the equilibria of two games are both strengthened but unchanged by composition (e.g. same equilibrium but fewer assumptions required), would you say that those games have “positive binding”?
  3. If the equilibria of two games are both weakened but unchanged by composition (e.g. same equilibrium but more assumptions required), would you say that those games have “negative binding”?
  4. If the equilibria of two games are both changed by composition, would you say that those games “cannot be bound”, are “altered by binding”, or something like that?

(or am I totally off base here?)

What do you mean by this? Is “separation games” a known concept?

Note: For a draft definition of binding, including strength, see the end of this post.

Yes, assessment is a better term!

This will be very useful for characterizations, but I think we need to distinguish between the assumptions required for a specific binding mechanism and the ordering of mechanisms from “least to most” binding. For example, a mechanism that is incentive compatible potentially needs more assumptions than a cryptographic primitive, but the quality of the binding might be the same.

I would say that they are not bound, but neutral binding works too.

These are good examples for why I think we want to distinguish between necessary assumptions and strength of binding. I don’t know if we can get a total order for the assumption sets, but we can probably work out some way to compare them. I’d start to work out the details from relevant examples here.

I think binding is a necessary precondition for equilibria of composed games to change under composition.

This seems like a good refinement to me!
Note: the initial protocol spec and implementation will be the initial in-protocol commitments.

Not that I know of, I chose this as a placeholder name since I was wondering if there is something like “negative” binding.
After replying to your post, I believe there is only directional binding, or no binding: A game can either bind another game, be bound to it, or not affect it at all. This seems to be equivalent to computing the approximate directional credibility cred(A, B) from above for both directions and extending it by accepting assumptions as an argument.

Draft definition for Binding

First, let us briefly introduce an open game \mathcal{G} as defined here:

\mathcal{G} = (X, S) \rightarrow (Y, R), with X being observations, Y choices, and R utility received from the target of the choices. S is coutility, i.e. utility sent back to the calling environment of the game.

Then we define the strength of binding from game \mathcal{G} to game \mathcal{F} as the degree to which we have to take observations from \mathcal{G} into account when making decisions in \mathcal{F}, if we want to optimize received utility R (and potentially minimize coutility S).

To quantify binding strength, we could try the approach from estimating approximate incentive compatibility: instead of estimating the utility loss an agent incurs when misreporting their type, we try to estimate the utility loss an agent in \mathcal{F} incurs when ignoring observations from \mathcal{G}.

Using the above definition, \mathcal{G} does not bind \mathcal{F} if agents in \mathcal{F} can ignore observations from \mathcal{G} without utility loss.

We further define a set of assumptions \mathcal{A}, b_g the strength of the binding of \mathcal{G} to \mathcal{F}, and b_f the strength of the binding of \mathcal{F} to \mathcal{G}, with b_g, b_f \in \mathbb{R}_{\ge 0}.

Assuming no symmetry, and binding strength being scalar, we can then express binding as bind(\mathcal{G}, \mathcal{F}, \mathcal{A}) = (b_g, b_f).
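A tiny numerical sketch of this definition, with games and payoffs invented purely for illustration: binding strength is the expected utility an agent in \mathcal{F} loses by playing a best fixed action instead of best-responding to \mathcal{G}'s observations.

```python
# Sketch: binding strength of G to F as the expected utility loss in F
# from ignoring G's observations. The observation distribution and the
# payoff matrix below are invented for illustration.

# G emits an observation; F's payoff depends on it.
observations = [0, 1]
p_obs = {0: 0.5, 1: 0.5}
actions = ["a", "b"]

def payoff_F(action, obs):
    # Toy payoff matrix: action "a" matches obs 0, action "b" matches
    # obs 1; a match pays 10, a mismatch pays 0.
    return 10 if (action == "a") == (obs == 0) else 0

# Informed play: best-respond to each observation from G.
informed = sum(p_obs[o] * max(payoff_F(a, o) for a in actions)
               for o in observations)

# Ignorant play: best single action chosen without seeing G at all.
ignorant = max(sum(p_obs[o] * payoff_F(a, o) for o in observations)
               for a in actions)

b_g = informed - ignorant   # strength of the binding of G to F
assert b_g == 5.0           # here, ignoring G costs 5 expected utility
```

If F's payoffs did not depend on the observation at all, `informed` and `ignorant` would coincide and `b_g` would be 0, matching the statement above that \mathcal{G} does not bind \mathcal{F} when its observations can be ignored without utility loss.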

Ahhhh, so binding is more like some kind of covariance between incentives or equilibria, where e.g. the moves I may want to take in a game A will depend on the moves I see you take in a game B, where A and B have some binding. Is that closer to what you have in mind?

Where did F come into play here? Are you talking about the observations and decisions of G, of F, or of both in some way? I think you want to say something about the dependence of decisions taken in F on observations in G, or vice-versa, but that’s just a guess.

This is a point of evidence in favor of my above guess :slight_smile:

Another point of evidence.

Speculating a bit here, I now understand binding in this “open game” framing as roughly:

  • Define a game \mathcal{G_n} = (X_n, S_n) \rightarrow (Y_n, R_n), with X being observations, Y choices, R utility received from these choices, and S is coutility sent back to the calling environment of the game [I don’t quite understand “coutility” but it seems inessential to clarify the binding concept for now]
  • Suppose that we have two games \mathcal{G_0} and \mathcal{G_1}.
  • Define the composed game \mathcal{G_{0,1}} = (X_0 \cup X_1, S_{0,1}) \rightarrow (Y_0 \cup Y_1, R_{0,1}).
  • Define the binding of G_0 to G_1 given assumptions A as binding(G_0, G_1, A) = R_{0,1} - R_0 - R_1, i.e. the difference in utility between the composed game and the two games taken individually (which should be 0 if the games are totally unrelated and positive otherwise, if I understand this frame correctly).

I have some qualms about the types here, and I think we could make this much more explicit in the physics-style model of agents, objects, and messages, but let me first check if this proposal is a closer match to what you have in mind.


Something like this might be derivable further down the line. As a first step, what I’m trying to figure out is: when looking at two games, A and B, can we quantify to which degree things happening in game A are cheap talk with respect to game B, and vice versa?

This is what I meant! I’ll edit the above post to clarify.

Composition is a bit more complex, since we need to distinguish between sequential and parallel composition. Section V and VI introduce the definitions and graphical notation for it. I will work out how exactly binding integrates here and write that up.

For now I’m just trying to define binding for the individual games that get composed with each other, such that we can use it as an input to compute the equilibrium of the composed game.

Example: bind(G_0, G_1, A) = (0, 10) means that agents in G_0 do not need to care about anything in G_1, but agents in G_1 need to care about the other direction, or they could lose up to 10 utility (in whatever terms we define this; in the long run we probably want to generalize this to bundles). How that affects the composition I still need to work out.

I agree, I think this can be solved by spending more time working through the paper and translating into its framework.

That would be the next step after clarifying the composition. Right now it seems to me that we are mostly going to define observations in terms of messages and utility in terms of objects (mainly resources?). The work of @Jamie on idealized anoma is also very helpful here, so we can work on these in parallel and then integrate.
