I’ve been thinking about what a general answer to “What is a solver” might look like. Here’s a sketch of my thoughts.
I’m going to make a few simplifying assumptions. I’ll assume that intents are “floating in the void”, so to speak. So a solver may pull an intent from a global pool of intents in the network (this isn’t needed, but it helps to not have to think about batching and whatnot). Worst case, this can be bootstrapped into a “batched” version by adding “trusted/chosen solver” requirements into the intent’s satisfaction predicate and asserting that all intents have requirements like this.
I want to think in terms of “Transition Relations”. That is, predicates over pairs of states which characterize what transitions are possible. So, I’m not thinking of a specific transition function, just a relation that restricts transitions. This coincides with the way HPaxos is formulated in the formalization; the actors (learners and acceptors) don’t have transition functions, just relations between subsequent states.
Each agent (engine, actor, node, etc.) has control over some portion of the network, defined by a set of “permitted” transitions. These are, for each a, a relation P_a : \text{State} \times \text{State} \rightarrow \text{Bool}. These are the transitions that a has permission to initiate. As a basic example, if a controls a resource, then the transition that just burns this resource may be among the permitted transitions (so long as it’s allowed by the resource logic). Permitted transitions that only require one agent I’ll call “solitary” transition relations. I am assuming that these relations modify something while explicitly leaving everything else on the network unchanged, since that’s all a can do to the parts of the network it has no control over.
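To make this concrete, here’s a minimal Python sketch of a solitary permitted transition (the “burn an owned resource” example), under the assumption that states are finite maps from resource names to owners. All names here are illustrative placeholders, not part of any actual protocol.

```python
# A sketch of P_a for the "burn one owned resource" example, assuming
# states are maps from resource names to owners (all names hypothetical).

State = dict  # resource name -> owner name

def burns_own_resource(agent: str, x: State, y: State) -> bool:
    """Holds on (x, y) when exactly one resource owned by `agent`
    disappears and everything else on the network is unchanged."""
    removed = set(x) - set(y)
    created = set(y) - set(x)
    if created or len(removed) != 1:
        return False
    (r,) = removed
    # The burned resource must be controlled by the agent, and the
    # rest of the network must be left untouched.
    return x[r] == agent and all(y[k] == x[k] for k in y)

x = {"coin1": "alice", "coin2": "bob"}
y = {"coin2": "bob"}
print(burns_own_resource("alice", x, y))  # True: alice may burn coin1
print(burns_own_resource("bob", x, y))    # False: bob doesn't control it
```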
Actors will issue intents. These intents have two transition relations associated with them:
- S_i, defining which transitions satisfy intent i. (Note: not all satisfying transitions are permitted or respect resource logics; if you could satisfy the intent by casting magic, that transition would still satisfy this relation.)
- P_{a, i} ⊆ P_a, the transition relations permitted by the intent. Essentially, these are the transitions which a would allow to happen on their behalf if the intent is trying to be satisfied by some solver. These, by definition, only include things that a already has control over.
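As a data-level sketch, an intent can be modeled as just this pair of relations, with finite relations represented as sets of state pairs; the field names and concrete states below are hypothetical.

```python
# An intent as a pair of finite relations (sets of state pairs).
# Field names and states are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Intent:
    satisfies: set  # S_i: transitions that satisfy the intent
    grants: set     # P_{a,i}: transitions the issuer permits on its behalf

    def well_formed(self, P_a: set) -> bool:
        # P_{a,i} ⊆ P_a: the grants may only cover transitions
        # the issuer already has control over.
        return self.grants <= P_a

P_a = {("x0", "x1"), ("x0", "x2")}   # transitions a controls
i = Intent(satisfies={("x0", "x3")}, grants={("x0", "x1")})
print(i.well_formed(P_a))  # True: grants stay within a's control
```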
A reasonable basic assumption is that x S_i y \rightarrow \neg (x P_a y) (in other words, S_i ⊆ P_a^−, the complement of P_a). That is, an agent issuing an intent should not be able to solve their own intent just by manipulating things they have full control over. If they could do that, why would they issue the intent in the first place? One might imagine a scenario where a could satisfy an intent, but there’s a good reason not to, and they want to see if a solver could propose an alternative. In that case, they would add what they find undesirable to the requirements of the intent, the result would again satisfy S_i ⊆ P_a^−, and that’s what they’d give to solvers. I’m not sure this should be part of the definition of an intent anyway, but it’s a good assumption for intuition.
We can explicate what a solver can do by asserting that \left(\bigcup_i S_i ∩ P_{a, i}\right)^+ ⊆ P_s; that is, the transitions a solver is permitted to make include, at least, (the transitive closure of) those which satisfy an intent and are allowed by that intent. This does not completely define the solver’s permitted actions. The solver has control of its own resources, and it may do what it wishes with them even if it has no intents under consideration. Assuming the intents were made properly (i.e. the intent does actually grant permissions for the resources the intent cares about), this property should guarantee that the solver always has the resources necessary to solve the intents it’s given, if a solution is possible at all.
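Over finite relations, the transitive closure in this condition can be computed directly by iterating relational composition. The intents and states in this sketch are placeholders.

```python
# Finite-relation sketch of (⋃_i S_i ∩ P_{a,i})^+ : relations are sets
# of state pairs; the closure is reached by iterating composition.

def transitive_closure(rel: set) -> set:
    closure = set(rel)
    while True:
        # Compose every pair of chainable steps in the current closure.
        new = {(x, z) for (x, m1) in closure for (m2, z) in closure if m1 == m2}
        if new <= closure:
            return closure
        closure |= new

# Two intents, each granting a single permitted-and-satisfying step.
step_i = {("x0", "x1")}  # S_i ∩ P_{a,i} for intent i
step_j = {("x1", "x2")}  # S_j ∩ P_{b,j} for intent j

required = transitive_closure(step_i | step_j)
print(("x0", "x2") in required)  # True: chaining both intents is permitted
```

The chained pair ("x0", "x2") is exactly the kind of transition the closure adds: neither intent permits it alone, but a solver executing both steps must be allowed to make it.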
This does implicitly assume that, for any resource that might be handled by a solver, it’s always possible to simply consume a resource without producing anything (that is, without transferring it to a new owner, for instance). This allows the consumption of the resource (which is controllable by the owner) to be present in P_a while the creation of a successor resource (outside the control of the owner) is not. Of course, an intent should be structured so that the mere consumption of a resource is never sufficient to satisfy the intent. Or maybe this isn’t a necessary assumption? I’ll have to think about it.
I will also assume that reward is offloaded into the network, so every solver can assess preferences over transitions with a function u_s : \text{State} \times \text{State} \rightarrow \mathbb{R}. This means the rewards granted to a solver are on the network and are a consequence of the chosen transition. In that case, the reward a solver gets emerges from what the solver is permitted to take according to each P_{a, i}. Each solver then seeks to pick:
- \arg\max_{t \in P_s} u_s(t)
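A toy sketch of this choice over a finite P_s, with a made-up on-network reward table standing in for u_s:

```python
# The solver picks the permitted transition maximizing its utility.
# States and reward values are illustrative placeholders.

P_s = {("x0", "x1"), ("x0", "x2"), ("x0", "x3")}
reward = {"x1": 1.0, "x2": 5.0, "x3": 2.0}  # hypothetical on-network rewards

def u_s(t):
    # Utility is a function of the transition; here it only
    # depends on the reward sitting in the target state.
    _, target = t
    return reward[target]

best = max(P_s, key=u_s)
print(best)  # ('x0', 'x2')
```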
The permitted transitions may be entirely functions of the resource logics (I’m not sure, but it makes sense to me). A resource logic for a resource, r, can be characterized as a transition relation, L_r. This relation should pertain to the creation, destruction, and alteration of resource r while leaving everything not relevant on the network unchanged. In general, we may have a condition that, for any actual transition the network undergoes, T, T ⊆ \left(\bigcup_{r ∈ R} L_r\right)^+, where R is the set of all resources on the network. Most resource logics will not actually be relevant, but all resource logics should always hold anyway, and we can narrow down from there where needed.
This assumes that network states are only distinguishable up to resource content. That is, if two states have agents with different internal states but otherwise identical resources, then the two states are considered indistinguishable.
This may be used to completely characterize the permitted transitions. The solitary permitted transitions for an agent, a, are those transitions associated with the set of resources a controls, R_a (if that’s not too restrictive). Given that, we can simply define the permitted (solitary) transitions of a as
- P_a = \left(\bigcup_{r ∈ R_a} L_{r}\right)^+.
The solver permitted actions would then be
- P_s = \left(\bigcup_i (S_i ∩ P_{a, i}) \cup \bigcup_{r ∈ R_s} L_{r}\right)^+
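Putting the pieces together, here is a finite-relation sketch of P_s as the closure of the intent grants together with the logics of the solver’s own resources; every concrete step below is a placeholder.

```python
# P_s = (⋃_i (S_i ∩ P_{a,i}) ∪ ⋃_{r ∈ R_s} L_r)^+ over finite relations.
# The solver can chain intent-granted steps with steps its own
# resource logics already allow. All states are illustrative.

def transitive_closure(rel: set) -> set:
    closure = set(rel)
    while True:
        new = {(x, z) for (x, m1) in closure for (m2, z) in closure if m1 == m2}
        if new <= closure:
            return closure
        closure |= new

intent_grants = {("x0", "x1")}    # ⋃_i S_i ∩ P_{a,i}: granted by intents
own_logic_steps = {("x1", "x2")}  # ⋃_{r ∈ R_s} L_r: solver's own resources

P_s = transitive_closure(intent_grants | own_logic_steps)
print(sorted(P_s))  # [('x0', 'x1'), ('x0', 'x2'), ('x1', 'x2')]
```

Note the composite ("x0", "x2"): the solver can reach it only by combining a permission granted by an intent with an action on its own resources, which is exactly the extra freedom P_s adds over the intents alone.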
Some examples:
- A liquidity pool. This could satisfy a simple trade intent without matching it with another intent. It just takes an intent and uses the permissions granted by that intent to trade resources the solver owns for those made available by the permissions, satisfying the intent.
- “coincidence of wants”; a solver uses permissions for many intents simultaneously to trade resources.
- Individuals can simply pull existing intents to make individual trades; in this case, the individual is acting as a solver.
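The “coincidence of wants” case can be sketched directly: each intent is satisfied by receiving the other party’s resource and only grants giving up its issuer’s own, so no one-sided transition works, but the solver’s single composite transition satisfies both. All names here are illustrative.

```python
# Toy coincidence of wants: the composite swap lies in S_alice ∩ S_bob
# and uses only transitions the two intents grant. States are
# resource -> owner maps; all names are placeholders.

State = dict

def satisfies_alice(x: State, y: State) -> bool:
    return y.get("token") == "alice"  # alice's intent: end up with the token

def satisfies_bob(x: State, y: State) -> bool:
    return y.get("coin") == "bob"     # bob's intent: end up with the coin

def granted(x: State, y: State) -> bool:
    # Each issuer only permits giving up the resource they control;
    # together the two grants cover the full swap.
    return x.get("coin") == "alice" and x.get("token") == "bob"

x = {"coin": "alice", "token": "bob"}
y = {"coin": "bob", "token": "alice"}
print(satisfies_alice(x, y) and satisfies_bob(x, y) and granted(x, y))  # True
```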
Essentially all services may be viewed as solvers for some narrow kind of intent. One may imagine an agent acting like an investment brokerage satisfying intents asking for resources to be invested under certain requirements, for example. This should cover auctions and broader financial services as special cases of solvers.
I imagine a lot of the logic underlying these services being encoded into the resource logics; that’s where full trades vs. temporary investment would be enforced, for example. So maybe it’s the resource kinds which should be identified with particular services, and some solvers may specialize in dealing with certain resource kinds. This is a semantics issue, so I won’t dwell on it.
So, what is a solver? It’s an agent which uses the transitions permitted by intents as part of its own chosen transition. At least, that’s the answer suggested by the above commentary.