Learner Graph: state of the art and open questions

The learner graph is introduced as a mathematical structure that makes assumptions about validators (aka acceptors in the context of consensus) explicit. A quick recap of the main points is given in the following diagram.

[diagram based on §3 of the (ready for review) PDF spec]

We have the following questions:

  1. how to represent & store the learner graph (two closely related issues)
  2. how to match the observations against the learner graph (to detect failures)
  3. how to update the learner graph, e.g., in case of stake change
  4. how to deal with forking (in particular in the case of chimera chains)

Questions 1 and 2 have clear answers for the consensus and mempool protocols; question 3 is a work in progress, and the last question is still open research.

This might be related to the slow game.

How to represent & store the learner graph

The naive fully general representation would be to store a list of quorums (each stored as a list of Identities) for each learner, and perhaps a list of safe sets (each stored as a list of Identities) for each pair of learners. This is wildly inefficient, since there can easily be exponentially many quorums.
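
For concreteness, here is a minimal sketch of that naive representation (the type names are hypothetical, purely for illustration):

```rust
use std::collections::HashMap;

// Hypothetical placeholder types, for illustration only.
type Identity = String;
type Learner = String;

// Naive, fully general representation: every quorum and every safe set
// is materialized as an explicit list of identities.
struct NaiveLearnerGraph {
    // For each learner, the full list of its quorums.
    quorums: HashMap<Learner, Vec<Vec<Identity>>>,
    // For each pair of learners, the full list of their safe sets.
    safe_sets: HashMap<(Learner, Learner), Vec<Vec<Identity>>>,
}
```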

An SML Signature for Learner Graphs, or at least sets of quorums

We should establish an SML Signature (analogous to a Rust Trait) for representations of learner graphs, or at least representations of sets of sets. Any given Typhon implementation would, of course, only be able to support certain implementations of this signature, so a given blockchain will have to choose (to some extent) what implementations it will use in advance.
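
As a very rough sketch of what such a signature could look like, expressed here as Rust traits (all names are hypothetical, not part of the spec):

```rust
use std::collections::HashSet;

// Hypothetical identity type, for illustration only.
type Identity = u64;

/// A representation of a set of sets of identities (e.g. one learner's quorums).
/// Analogous to an SML signature: concrete representations implement this trait.
trait SetOfSets {
    /// Does the given set of identities belong to the represented set of sets?
    /// For quorums, this asks: "is this a quorum?"
    fn contains(&self, ids: &HashSet<Identity>) -> bool;
}

/// A learner graph built on top of some set-of-sets representation.
trait LearnerGraph {
    type Learner;
    type Quorums: SetOfSets;

    /// The quorums of a single learner.
    fn quorums(&self, learner: &Self::Learner) -> &Self::Quorums;

    /// Whether a set of identities is a safe set for a pair of learners.
    fn is_safe_set(&self, a: &Self::Learner, b: &Self::Learner, ids: &HashSet<Identity>) -> bool;
}
```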

One instantiation for sets of sets, which we should provide, is based on weighted sets with a threshold:

  • there is a set of identities, each paired with an (integer) weight
  • there is an (integer) threshold
  • the set of sets represented is the set of sets of identities whose weights sum to at least the threshold

This would make it very easy to represent, for instance, proof of stake.
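
A minimal sketch of that weighted-threshold instantiation, reusing the hypothetical `SetOfSets` trait from the sketch above (restated here so the snippet stands alone):

```rust
use std::collections::{HashMap, HashSet};

type Identity = u64;

// Restated from the sketch above.
trait SetOfSets {
    fn contains(&self, ids: &HashSet<Identity>) -> bool;
}

/// Weighted identities with a threshold: a set is "in" iff its total weight
/// reaches the threshold. Proof of stake falls out by using stake as weight.
struct WeightedThreshold {
    weights: HashMap<Identity, u64>,
    threshold: u64,
}

impl SetOfSets for WeightedThreshold {
    fn contains(&self, ids: &HashSet<Identity>) -> bool {
        let total: u64 = ids
            .iter()
            .filter_map(|id| self.weights.get(id))
            .sum();
        total >= self.threshold
    }
}

fn main() {
    // Three validators with stake 10, 20, 30; quorums are sets with more
    // than 2/3 of the total stake (i.e. weight at least 41).
    let quorums = WeightedThreshold {
        weights: HashMap::from([(1, 10), (2, 20), (3, 30)]),
        threshold: 41,
    };
    assert!(quorums.contains(&HashSet::from([2, 3])));  // 50 >= 41
    assert!(!quorums.contains(&HashSet::from([1, 2]))); // 30 < 41
}
```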

This is very similar to threshold identity composition, and indeed we may want to make an instance of one implement the other.

How to match the observations against the learner graph

Each protocol has its own invariants, and corresponding forms of evidence that someone has violated those invariants (fraud). While we might imagine general fraud proofs, we probably don’t need them. Instead, each component protocol can establish fraud proof types it is interested in hearing, and fraud proof types it can produce. Each component protocol may be able to do different things given a fraud proof.

Example Fraud Proofs

Consensus

Consensus can produce a fraud proof of the form “two consensus messages for the same height signed by the same validator, neither of which is in the other’s transitive history.” For our optimized consensus, there is a simpler fraud proof: “two consensus messages for the same height, signed by the same validator, with the same (possibly null) predecessor.”
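
As an illustration of how the simpler fraud proof could be checked, here is a sketch; the message fields are hypothetical placeholders, and signature verification is omitted:

```rust
// Hypothetical, simplified consensus message for illustration.
#[derive(Clone, PartialEq, Eq)]
struct ConsensusMessage {
    signer: u64,              // validator identity
    height: u64,              // consensus height
    predecessor: Option<u64>, // hash of predecessor message, if any
}

/// Fraud predicate for the optimized consensus: two distinct messages for the
/// same height, signed by the same validator, with the same (possibly null)
/// predecessor.
fn is_equivocation(a: &ConsensusMessage, b: &ConsensusMessage) -> bool {
    a != b
        && a.signer == b.signer
        && a.height == b.height
        && a.predecessor == b.predecessor
}
```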

Given a fraud proof, consensus can (implicitly or possibly explicitly, although we haven’t written an explicit option yet) include it in future consensus messages, letting consensus engines know that certain learners may not need to agree. This is in fact required for consensus liveness.

Mempool

There are a few types of fraud proof the mempool can produce:

  • two different (signed) proposed headers for the same sequence number by the same primary
  • two different (signed) worker batch hashes for the same sequence number by the same worker
  • two conflicting (signed) votes for the same primary / sequence number, signed by the same primary

There isn’t much that Mempool is currently programmed to do in the event that it detects fraud. However, we might be able to stop work on some chains or learners that we know to be forked. We can certainly forward fraud proofs to consensus.
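
A sketch of the first mempool case (two different signed headers for the same sequence number from the same primary), again with hypothetical placeholder types and signature checks omitted:

```rust
// Hypothetical, simplified mempool header for illustration.
#[derive(Clone, PartialEq, Eq)]
struct SignedHeader {
    primary: u64,      // identity of the primary that signed it
    sequence: u64,     // sequence number of the header
    payload_hash: u64, // hash of the proposed contents
}

/// Fraud predicate: the same primary signed two different headers
/// for the same sequence number.
fn header_equivocation(a: &SignedHeader, b: &SignedHeader) -> bool {
    a.primary == b.primary && a.sequence == b.sequence && a.payload_hash != b.payload_hash
}
```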

How to update the learner graph, e.g., in case of stake change

There are a few different ways we can do this. The overarching assumption is that the state machine being maintained (in the execution engine group) defines the learner graph we’re using, so any updates have to come from there.

Pre-defined epochs

One solution is to have the state machine periodically issue statements of the form “we will definitely be using this learner graph between timestamps X and Y.” These should be irrefutable: no correct state machine should issue different learner graphs for overlapping time periods. Obviously, this requires a notion of timestamp for everything involving the learner graph.
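
A sketch of what such an irrefutable declaration could carry (names are hypothetical, and the learner graph itself is abstracted as an opaque value):

```rust
// Hypothetical epoch declaration issued by the state machine.
struct EpochDeclaration<G> {
    start: u64,       // inclusive start timestamp
    end: u64,         // exclusive end timestamp
    learner_graph: G, // the learner graph in force for [start, end)
}

/// "Irrefutable" here means: a correct state machine never issues two
/// declarations whose time periods overlap but whose learner graphs differ.
fn periods_overlap<G>(a: &EpochDeclaration<G>, b: &EpochDeclaration<G>) -> bool {
    a.start < b.end && b.start < a.end
}
```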

The requirement here is that the state machine must issue these statements before we try to use a learner graph at a timestamp, because otherwise we’ll have no way to know which learner graph to use. This does present a potential liveness issue: we have to decide on what learner graph to use for the next epoch before the next epoch arrives. Presumably we could make this a requirement, something like: “any consensus at timestamp Z or later must include an epoch declaration for the next epoch after Z.”

For chimera chains, we essentially require that any epoch declarations on the chimera and base chains agree. Making sure we do this right is going to be tricky. In particular, we have to ensure that if we have a Red/Blue chimera chain, and Red doesn’t have the information to decide on its quorums for the next epoch yet, then Blue doesn’t get stuck waiting.

How to deal with forking (in particular in the case of chimera chains)

It depends on what we mean by “forking.”

In the event that a base chain forks, and we can prove it (i.e. we have fraud proofs from a sufficient set of validators), I suppose we can have honest validators give up. There’s not much point in maintaining a forked chain.

In the event that various learners on a chimera chain no longer agree, it has essentially split into multiple (possibly boring) chimera chains, each of which can keep operating. We can try to make it clear to the users which one they’re operating on, and what guarantees they get on there, but we don’t have to shut it down, or fundamentally change the available operations.

I think it might be easy and helpful to establish a standard for describing these fraud proof types: probably simply as predicates which take two messages and return whether or not fraud occurred. If we do this cleverly enough, we can simply reuse the existing solving abstractions and infrastructure for fraud detection (I expect these predicates will be much simpler than most of the intents that solvers will otherwise be handling).
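
Something like the following could serve as that standard (a sketch only; `Message` here is a stand-in for whatever signed message type a component exposes):

```rust
/// A fraud proof type, described uniformly as a binary predicate over messages:
/// given two (signed) messages, did fraud occur?
type FraudPredicate<Message> = fn(&Message, &Message) -> bool;

/// A component could then simply register the predicates it cares to recognize.
struct FraudRegistry<Message> {
    predicates: Vec<FraudPredicate<Message>>,
}

impl<Message> FraudRegistry<Message> {
    /// Check a pair of messages against every registered fraud predicate.
    fn detects_fraud(&self, a: &Message, b: &Message) -> bool {
        self.predicates.iter().any(|&p| p(a, b))
    }
}
```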

One interesting option we have would be to allow chains to define meta-quorums which can arbitrate in the case of forks (e.g. a quorum of the combined validator sets of several other chains). In a world where chains hold/custody tokens of other chains, there may be some natural/automatic way to calculate and update these meta-quorums (merely as a default which could be overridden).

We haven’t touched a game-theoretic analysis of the “learner-graph game” yet. If we have automatic mechanisms for dealing with forks, this will not get easier, since the automatic resolution mechanisms themselves might create incentives for forking. For the game-theoretic analysis, the first question would be the example of bribing (all) safe sets of a chimera chain, or whether at least the Cosmos-like situation with “strong” safe sets works or is patchable by suitable slashing mechanisms.

I’m not sure we can (always) re-use existing infrastructure: for example, solvers aren’t required to find a solution if it exists, but our consensus liveness does require that we find fraud proofs in some cases; in fact, it’s built into the consensus.

I’m also not sure we gain much by calling these “predicates.” We still have some specific list of fraud proof predicates each component cares to recognize, which isn’t meaningfully different from having some specific list of fraud proof types each component cares to recognize.

Is that different from having a controller (defined using these meta-quorums) upstream from this controller in the relevant resource’s controller graph?

I think so, because this recovery quorum would rather be part of the definition of this controller (and, say, periodically updated by it), not a decision made by resources on an individual basis. The controller system would allow resources to make a similar decision on an individual basis, but I’m not sure it would be as easy to update the recovery quorums automatically without user interaction. Perhaps this is unnecessary, though, and it’s not an important question to worry about right now. This may also be more a question of how the controller naming/specification works.

In fact, updating the usual learner graph (including quorums) based on stake distribution is a specific use case that might need “supervision” if the stake update provably breaks entanglement of the learner graph. Can we incorporate these points into the upcoming controller report?