Intent Languages: CSP Formulation

That sounds like great progress, especially maximizing the overall flow!

Regarding free riders and pre-filtering intents: we (@cwgoes and I) figured out in the last few hours that there should be another field on each PTX containing a welfare function provided by the intent originator. These welfare functions should compose, such that the solver can combine them to determine the splits of resource bundles, their allocations, and how to handle potential free riders.

We assume that these functions should be scalar and probably restricted in some other ways to make them compose well. Researching their structure and composition, as well as the composition of the incentives they induce, will be one of the future tasks here.
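To make this concrete, here is a minimal sketch (in Python, purely illustrative) of a solver composing scalar welfare functions. The additive composition rule and all names are assumptions, not a settled design.

# Sketch: welfare functions are scalar functions of a candidate transaction
# and are assumed to compose additively; both assumptions are placeholders.
def compose_welfare(welfare_fns):
    def total(tx):
        return sum(w(tx) for w in welfare_fns)
    return total

# Hypothetical per-originator welfare functions over a candidate assignment.
w1 = lambda tx: -tx["A_given_by_agent1"]        # agent1 prefers to give away little A
w3 = lambda tx: -2.0 * tx["B_given_by_agent3"]  # agent3 weighs B given away more heavily

objective = compose_welfare([w1, w3])
print(objective({"A_given_by_agent1": 2, "B_given_by_agent3": 1}))  # -4.0

The solver would then search for the candidate transaction maximizing the composed objective, subject to the hard constraints from the PTXs.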

Multi Agent, Multi Resource Swaps with Intents

Condensing the above discussion, I propose the following, mostly written-out examples of generalized swap intents.

Preliminary: Authentication

Here we introduce a simple two-party, two-resource swap with consumption authentication via signatures in ephemeral resources. The authentication is not relevant at solving time, as it is validated before and/or after search has produced candidate matches. We introduce it here because it might be relevant context for some readers, as it defines required pre- and post-conditions, but you can skip it if you’re not interested in these mechanics.

fn can_be_consumed(self, tx: TX) -> bool {
    tx.input_resources.contains("an ephemeral resource with a valid signature of the owner")
}

fn auth_eph_can_be_created(self, tx: TX) -> bool {
    // TODO: is the first condition implied by the second?
    tx.output_resources.contains("a resource of the type whose consumption this auth resource authenticated")
        && tx.persistent_resources.is_balanced()
}

let have1 = Resource {
    res_type: A,
    quantity: 1,
    owner: agent1,
    predicate: can_be_consumed(),
    ephemeral: no,
    value: "", // The values of persistent Resources are not relevant for this example, but in general they will contain the actual state represented.
}


let auth_eph_in1 = Resource {
    res_type: consume auth for A,
    quantity: 1,
    owner: agent1,
    predicate: auth_eph_can_be_created(), // Always evaluates to true at consumption.
    ephemeral: yes,
    value: "signature over have1 by agent1"
}

let want1 = Resource {
    res_type: B,
    quantity: 1,
    owner: agent1,
    predicate: can_be_consumed(),
    ephemeral: no,
    value: "",
}   // can_be_consumed() will always evaluate to true on creation. Balance checking, which always happens, ensures that only the same amount of resources per type get created as were consumed.


let have2 = Resource {
    res_type: B,
    quantity: 1,
    owner: agent2,
    predicate: can_be_consumed(),
    ephemeral: no,
    value: "",
}

let auth_eph_in2 = Resource {
    res_type: consume auth for B,
    quantity: 1,
    owner: agent2,
    predicate: auth_eph_can_be_created(),
    ephemeral: yes,
    value: "signature over have2 by agent2"
}

let want2 = Resource {
    res_type: A,
    quantity: 1,
    owner: agent2,
    predicate: can_be_consumed(), 
    ephemeral: no,
    value: "",
}


// Now the following two PTXs get, amongst others, submitted to a solver.

let ptx1 = PTX {
    input_resources: [have1, auth_eph_in1],
    output_resources: [want1],
    term: "create auth_eph_out1 in this ptx after solving"
}
let ptx2 = PTX {
    input_resources: [have2, auth_eph_in2],
    output_resources: [want2],
    term: "create auth_eph_out2 in this ptx after solving"
}

// All predicates (here can_be_consumed()) of all Resources in a PTX are evaluated before solving. A PTX is only accepted into a batch if all of its Resource Predicates evaluate to 'true'.

// In this example, the solver matches up ptx1 and ptx2, since want1 + have2, and have1 + want2 produce a satisfying assignment. The only information relevant for search by the solver in this batch are these persistent Resources.

// Once a TX has been bundled from PTXs, the solver creates auth_eph_out1 and auth_eph_out2 as indicated by the terms.

let auth_eph_out1 = Resource {
    res_type: consume auth for A,
    quantity: 1,
    owner: agent1,
    predicate: auth_eph_can_be_created(),
    ephemeral: yes,
    value: "",
}

let auth_eph_out2 = Resource {
    res_type: consume auth for B,
    quantity: 1,
    owner: agent2,
    predicate: auth_eph_can_be_created(),
    ephemeral: yes,
    value: "",
}

// Once this is done, all Predicates (here auth_eph_can_be_created()) of all Resources in the TX get evaluated again, in the context of the TX, and balance is checked. If all of these evaluate to 'true', the TX is handed on to consensus for validation.
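For readers who want these checks spelled out, here is a minimal Python sketch of the post-solving validation pass. Field names follow the pseudocode above; reading "balanced" as "per resource type, consumed persistent quantity equals created persistent quantity" is my assumption.

# Sketch of post-solving validation: re-evaluate all predicates in the context
# of the full TX and check per-type balance of persistent resources.
from collections import defaultdict

def is_balanced(tx):
    delta = defaultdict(int)
    for ptx in tx["ptxs"]:
        for r in ptx["input_resources"]:
            if not r["ephemeral"]:
                delta[r["res_type"]] -= r["quantity"]
        for r in ptx["output_resources"]:
            if not r["ephemeral"]:
                delta[r["res_type"]] += r["quantity"]
    return all(v == 0 for v in delta.values())

def validate(tx):
    predicates_ok = all(
        r["predicate"](tx)
        for ptx in tx["ptxs"]
        for r in ptx["input_resources"] + ptx["output_resources"]
    )
    return predicates_ok and is_balanced(tx)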

Example: Multi Agent, Multi Resource swap with fixed Resources and one Linear Preference

Let us now move to a Multi Agent, Multi Resource swap. We leave out the details of the consumption authentication introduced above and focus only on the machinery relevant to intents and solving for them. We also omit data fields of Resources and PTXs when they are not required for the explanation. In the following example, all input and output resources are uniquely specified and persistent, with one party expressing a preference over the amounts they give up for consumption.

// agent1
let have1 = Resource {
    res_type: A,
    quantity: 4,
    owner: agent1,
}
let want1 = Resource {
    res_type: B,
    quantity: 1,
    owner: agent1,
}

// agent2
let have2 = Resource {
    res_type: C,
    quantity: 2,
    owner: agent2,
}
let want2 = Resource {
    res_type: A,
    quantity: 4,
    owner: agent2,
}

// agent3
let have3 = Resource {
    res_type: B,
    quantity: 4,
    owner: agent3,
}
let want3 = Resource {
    res_type: C,
    quantity: 2,
    owner: agent3,
}
fn linear_preference3(tx: TX) -> scalar {
    "prefer minimizing C given away, with linear utility, by splitting up want3 and sending the remainder back"
}

let ptx1 = PTX { 
    input_resources: [have1],
    output_resources: [want1],
}

let ptx2 = PTX { 
    input_resources: [have2],
    output_resources: [want2],
}

let ptx3 = PTX { 
    input_resources: [have3],
    output_resources: [want3],
    welfare_function: linear_preference3(),
}


// The above PTXs get submitted to the solver, which produces the following TX fulfilling their intents after updating the PTXs as follows.
// Since ptx3 contains a linear preference that does not conflict with other constraints or preferences, the solver can locally optimize the assignment towards a better outcome for agent3.
// Should conflicts arise, they can be resolved by weighting whole PTXs, either by querying a trust database or by offering rewards to the solver; these weights then apply to the clauses of the CSP instantiated by any given PTX.

let have3' = Resource {
    res_type: B,
    quantity: 3,
    owner: agent3,
}
let ptx3' = PTX {
    input_resources: [have3],
    output_resources: [have3', want3],
    term: term3,
}

let tx = TX {
    ptxs: [ptx1, ptx2, ptx3']
}
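The local optimization the solver performs for ptx3 amounts to splitting have3 so that only as much B as the matched PTXs demand is given away, with the remainder recreated for agent3. A tiny Python sketch (quantities mirror the example; the function name is made up):

# Sketch: give away only what the matched PTXs demand, send the rest back.
def split_remainder(have_qty, demanded_qty):
    given = min(have_qty, demanded_qty)
    return given, have_qty - given

given_B, remainder_B = split_remainder(have_qty=4, demanded_qty=1)
print(given_B, remainder_B)  # 1 3  ->  have3' is recreated with quantity 3 for agent3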

Example: Multi Agent, Multi Resource Swap with non-fixed Resources and Linear Preferences

To further generalize the above, we introduce a preference over a partially determined resource collection.

// agent1
let have1A = Resource {
    res_type: A,
    quantity: 4,
    owner: agent1,
}
let have1B = Resource {
    res_type: B,
    quantity: 2,
    owner: agent1,
}
fn linear_preference1(tx: TX) -> scalar {
    "prefer giving away 2.5A over 1B and send the remainder back"  
}
let want1 = Resource {
    res_type: E,
    quantity: 2,
    owner: agent1,
}
let ptx1 = PTX { 
    input_resources: [have1A, have1B],
    output_resources: [want1],
    term: term1,
    welfare_function: linear_preference1()
}

// agent2
let have2 = Resource {
    res_type: C,
    quantity: 4,
    owner: agent2,
}
let want2 = Resource {
    res_type: E,
    quantity: 2,
    owner: agent2,
}
let ptx2 = PTX { 
    input_resources: [have2],
    output_resources: [want2],
}

// agent3
let have3 = Resource {
    res_type: E,
    quantity: 4,
    owner: agent3,
}
fn resource_preference3(tx: TX) -> bool {
    tx.output_resources.contains("(2A || 1B) && (1C || 1D) each with owner == agent3")
}
let want_pref_eph_out3 = Resource {
    res_type: "preference for resource",
    quantity: 1,
    owner: agent3,
    predicate: resource_preference3(),
    ephemeral: yes,
}
let term3 = Term {
    term: "create want_pref_eph_in3 for balance check after matching",
}
let ptx3 = PTX { 
    input_resources: [have3],
    output_resources: [want_pref_eph_out3],
    term: term3,
}


// The solver searches according to the PTX structure, with additional constraints given by linear_preference1 and the preference predicate on want_pref_eph_out3, and updates ptx1 as follows:
let have1A' = Resource {
    res_type: A,
    quantity: 2,
    owner: agent1,
}
let ptx1' = PTX {
    input_resources: [have1A, have1B], // TODO: When do the ephemeral resources for preferences get consumed?
    output_resources: [want1, have1B, have1A'], // have1B is technically a different resource after consumption and re-creation, but it is of the same type and amount
    term: term1,
}

// And ptx3 as follows, as indicated by the preference predicate and the term:
let want3A = Resource {
    res_type: A,
    quantity: 2,
    owner: agent3,
}
let want3C = Resource {
    res_type: C,
    quantity: 1,
    owner: agent3,
}
let ptx3' = PTX {
    input_resources: [have3, want_pref_eph_in3],
    output_resources: [want3A, want3C, want_pref_eph_out3],
    term: term3,
}

let tx = TX {
    ptxs: [ptx1', ptx2, ptx3']
}

Distinguishing between Predicates for Validation and Solving

As we have seen, some predicates are only relevant for validation, which happens before and after solving, while others are relevant during solving, as they encode constraints. We probably want to implement them as different subsets of the predicate language that are recognizable at parsing time.
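One possible way to make the two subsets recognizable at parsing time is to tag every predicate with the phase it belongs to, so the parser can route solving-time constraints to the solver and keep validation-only predicates for the pre-/post-solving checks. The tag names below are assumptions, sketched in Python.

# Sketch: phase-tagged predicates, distinguishable at parse time.
from enum import Enum

class Phase(Enum):
    VALIDATION = "validation"  # evaluated only before/after solving
    SOLVING = "solving"        # encodes a constraint the solver must respect

def predicate(phase):
    def wrap(fn):
        fn.phase = phase
        return fn
    return wrap

@predicate(Phase.VALIDATION)
def can_be_consumed(resource, tx):
    return True  # placeholder body

@predicate(Phase.SOLVING)
def resource_preference(resource, tx):
    return True  # placeholder body

def solving_constraints(predicates):
    return [p for p in predicates if p.phase is Phase.SOLVING]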

This post will summarize the broad research on composable CSP solving I’ve done so far. There are a lot of specific and quite technical topics to cover, so I can’t be thorough, but my goal is to give an overall impression of the relevant subject matter and point to some decent sources that could be used for future, more detailed research. I don’t feel like this is finished, so I’ll continue updating this over the next week or more.

Setup

This is the way @cwgoes described the decomposition framework to me:

  • An agent would get a segment of the CSP, which it would then “simplify” into a smaller CSP. This smaller CSP would then be sent (along with other small CSPs from other sources) to another solver. This leads to a tree of solvers which aggregate solutions to create a solution to the original problem.

I don’t know how much we can vary from this because I do not have the context necessary to know where this setup comes from. If it can’t be varied, then some of the things I will talk about are less relevant or even useless to us. If this can be varied, then some things are more applicable.

Preprocessing Techniques

The most directly relevant methods are so-called “preprocessing” techniques. These are methods that can be used to reduce the size of a problem. These fall into semantics-preserving transformations and satisfiability-preserving transformations. The former can be applied at any time without limitation, though they are unlikely to be able to reduce a typical problem to any significant degree. Satisfiability-preserving transformations are more likely to be applicable but have some limitations.

To illustrate the nature and limitations of these, I’ll use blocked clause elimination as an example. The technical details aren’t so important; just know that it’s a method to detect when a clause can be removed from a SAT problem in conjunctive normal form while preserving satisfiability.
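For the curious, blocked clauses can be detected in a few lines; this Python sketch uses the textbook definition (a clause is blocked by one of its literals if every resolvent on that literal is a tautology) on CNF given as sets of integer literals.

# Sketch: blocked clause detection on CNF as sets of integer literals
# (positive = variable, negative = its negation).
def resolvent_is_tautology(c, d, lit):
    resolvent = (c - {lit}) | (d - {-lit})
    return any(-x in resolvent for x in resolvent)

def is_blocked(clause, formula):
    # Removing a blocked clause preserves satisfiability, not the set of models.
    for lit in clause:
        partners = [d for d in formula if -lit in d]
        if all(resolvent_is_tautology(clause, d, lit) for d in partners):
            return True
    return False

formula = [{1, 2}, {-1, -2}]
print(is_blocked({1, 2}, formula))  # True: {x, y} is blocked by x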

There are two important things to note when it comes to applying these sorts of techniques to a problem split up between different agents.

  1. What does the agent know? In general, one cannot detect blocked clauses without access to the full problem. However, clauses are always blocked by a specific variable, so if an agent has control over all the clauses that a variable appears in (common in the distributed CSPs I’ll talk about in a later section) then the agent can at least detect if the variable it has control over blocks any of its clauses. If a clause is blocked, this information needs to be communicated with any other agent that shares those clauses. How we break up the problem has a substantial effect on what valid transformations are detectable.

  2. Generally, formula preprocessing only preserves satisfiability (or optimizability), not semantics. This means a solution to the simplified problem can be used to construct a solution to the original problem, but they do not share solutions. If a blocked clause is removed, this information needs to be kept around. It’s not helpful to other solvers; rather, it would be sent to an aggregator that takes the solution the solvers found along with a trace of preprocessing steps in order to calculate a solution to the original problem.

Since we are specifically interested in optimization, there are additional concerns related to cost-preservation after transformation. Naively applying transformations will not generally preserve costs, but there are relatively simple steps one can take to ensure these are preserved.

Preprocessing is a broad field, and methods can vary substantially depending on the domain. The preprocessing chapter of “Handbook of Satisfiability” is a good source for ordinary SAT solving.

Our problem most closely resembles mixed integer linear programming. Methods in this domain are substantially less developed, but some exist, such as those in

Proper simplifications are rare in general. In order for the scheme described to me to be practical, the simplified constraint problem should always be substantially smaller than the input. This really can’t be done without simply committing to a specific choice, thus limiting optimization potential. This would likely imply that earlier solvers would lock in values to optimize over an expectation of the variables those agents don’t control. This matches what some incomplete algorithms for distributed CSP do.

CSP Decomposition

Decomposing CSPs is a highly nontrivial task. Not because splitting the CSPs themselves is hard; what’s hard is splitting them up in a way that solutions to the smaller problems can be recombined. For SAT solving, the most common method to split up a problem picks a variable and substitutes it as true and false in two copies of the original problem. These two copies are then simplified and sent off to two separate solvers. This eliminates a variable, but, generally, the two subproblems are approximately the same size as the original problem. As such, the amount of total work done to solve the two subproblems will be ~2x the work needed to solve the original problem. The reason it’s most popular is that it’s the only method that works for any problem and always provides some benefit in the presence of parallelism. A similar procedure can be done for any (set of) variables within a finite domain CSP. Other methods can offer more advantages but come with some limitations.
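A minimal Python sketch of the variable-splitting step on CNF represented as sets of integer literals (illustrative only):

# Sketch: substitute a variable as True and as False, simplifying each copy.
def split_on(clauses, var):
    def assign(value):
        lit = var if value else -var
        simplified = []
        for c in clauses:
            if lit in c:
                continue                   # clause satisfied by the assignment
            simplified.append(c - {-lit})  # drop the falsified literal
        return simplified
    return assign(True), assign(False)

pos, neg = split_on([{1, 2}, {-1, 3}, {2, 3}], var=1)
print(pos)  # [{3}, {2, 3}]
print(neg)  # [{2}, {2, 3}]

A model of either subproblem, extended with the corresponding value for the split variable, is a model of the original problem.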

Here’s an example decomposition technique from SAT solving.

One can identify a vertex separator of the problem graph which splits the problem into two problems only connected by the fact that they share the variables in the separator. We then fix the values of all the variables in the separator. Assuming there were n variables in the separator, we now have 2^n sub-problems, each, roughly, half the size of the original problem. This means the new (total) work will be ~2^n * 2^(m/2), where m was the size of the original problem. Compared to the original work of ~2^m, if n is small, this could be a considerable asymptotic efficiency boost. However, random SAT problems can’t generally be efficiently decomposed like this since their minimal vertex separator will have roughly the same number of variables as m/2, meaning we’d be back to 2^m work and wouldn’t gain any advantage. Also, finding minimum vertex separators is, itself, an NP-hard problem, but decent approximation algorithms do exist. We’d have to do a survey of real-world problems we’re interested in to see if this will generally be effective in practice.
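A sketch of the separator-based split: for each of the 2^n assignments to the separator variables, the two halves can be simplified and solved independently (Python, illustrative):

# Sketch: enumerate assignments to the separator and simplify both halves.
from itertools import product

def simplify(clauses, assignment):
    out = []
    for c in clauses:
        if any(abs(lit) in assignment and (lit > 0) == assignment[abs(lit)] for lit in c):
            continue  # clause satisfied by the separator assignment
        out.append({lit for lit in c if abs(lit) not in assignment})
    return out

def separator_split(left, right, separator):
    # left/right are the two halves, sharing only the variables in `separator`.
    for values in product([False, True], repeat=len(separator)):
        assignment = dict(zip(separator, values))
        yield assignment, simplify(left, assignment), simplify(right, assignment)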

It’s also worth noting that this transformation makes the problem much larger, but, potentially, easier to solve. If we want the problem to become smaller, this generally won’t do. Alternatively, we could just split the problem in 2 and coordinate a search for values between two agents. This would involve trying to find solutions to the sub-problems, then, if one agent succeeds, the valid variable assignment it found will be sent to the other agent to constrain its activity. If this subsequent search fails, both agents will negate that variable and continue searching. This approach is less parallel and isn’t very friendly to incomplete algorithms, but it will actually decompose the problem. However, the number of messages exchanged between the agents will be exponential with respect to the size of the vertex separator. In the limit, this will be something like synchronized search in distributed CSP.

These sorts of tradeoffs tend to be fairly generic unless the problem has some kind of special structure.

The problem is that decomposition is itself an NP-hard problem, meaning finding simpler subproblems whose solutions can be efficiently combined will generally be as difficult as simply solving the original problem in the first place. However, if a problem has sufficient structure, then smarter decompositions are possible.

There is a generic notion of “CSP decomposition”. The method involves transforming the graph of a problem into a tree. Generally, this is done by clustering variables in the original problem into nodes. If there are variables X, Y, and Z with domains D, E, and F, then their clustered variable would have domain D x E x F. There would be further constraints stating that these cluster variables must match on components that came from shared variables. CSP decomposition specifically focuses on incomplete decomposition methods which are efficient to compute. There are lots of ways to do this, but it seems that “hypertree decomposition” is the dominating method.
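The clustering step can be sketched directly: a cluster variable ranges over the product of its members' domains, and clusters sharing a variable are linked by an agreement constraint (Python, illustrative):

# Sketch: cluster variables into a product-domain variable plus an agreement
# constraint between clusters that share original variables.
from itertools import product

def cluster(domains):
    names = sorted(domains)
    return names, list(product(*(domains[v] for v in names)))

def agree(names_a, val_a, names_b, val_b):
    a, b = dict(zip(names_a, val_a)), dict(zip(names_b, val_b))
    return all(a[v] == b[v] for v in set(a) & set(b))

names1, dom1 = cluster({"X": [0, 1], "Y": [0, 1]})
names2, dom2 = cluster({"Y": [0, 1], "Z": [0, 1]})
print(agree(names1, (0, 1), names2, (1, 0)))  # True: both assign Y = 1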

This is done because acyclic CSP problems are always tractable as you can find a satisfying assignment without ever needing to backtrack. This section of this lecture gives a good description of the procedure;

We may apply something like this to our problem; it would allow us to place node sub-problems into agents, and we’d need a more sophisticated method to solve the tree-shaped meta-problem to get a full solution. It’s also worth noting that we have no guarantee for the maximum size of the node sub-problems.

If we move to a more structured domain, namely ILP, there are more, and more useful, decomposition methods.

There are a few common methods. Generally, these focus on finding essentially independent dimensions of the problem that can be used to split it into smaller subproblems. There will be “linking” variables, whose constraints are shared over essentially all the subproblems, and there are “block” variables that can be solved in relative isolation. Due to the linking variables, these techniques don’t actually make the problem asymptotically smaller; rather, the problem is split into easier-to-solve subproblems which are, in the worst case, about the same size as the original problem. They also generally only apply to problems with the right structure. However, if a problem has only a small number of linking variables, the sub-problems could be substantially smaller than the original problem. We’ll have to survey examples that we’re interested in to see if we’d get an advantage.

Distributed CSP solving

The idea is that we have numerous agents working in parallel, each responsible for a finite segment of the CSP problem (generally, a small set of variables or a single variable). They repeatedly use knowledge of their local environment to search for solutions and continuously send messages to locally relevant agents in order to ensure a cooperative maximum.

The setup I described before, where there is only one-way communication through a single message that hands off problems from one agent to another in a hierarchy, is manifestly incompatible with almost all approaches to DisCSP. The relevance of DisCSP depends on whether the paradigm of solving, as described to me, can be abandoned essentially in its entirety for a more connected network with more communication overhead. Some incomplete approaches, such as Communication-Free Learning, are compatible with this setup, but such algorithms are rare and considerably less studied than algorithms with two-way message passing.

It’s also worth noting that there’s not generally going to be an algorithmic advantage to distribution. However, for certain kinds of problems, there are heuristic reasons to think distributed solving might work well. From chapter 20 of “Handbook of Constraint Programming”:

In general, distributed techniques work well only when the problem is sparse and loose, i.e. each variable has constraints with only a few other variables, and for each constraint, there are many value combinations that satisfy it. When problems are dense and tight, usually the distributed solution requires so much information exchange among the agents that it would be much more efficient to communicate the problem to a central solver.

This does seem to describe our problem.

Generally, we will have to trade-off between completeness, message size, and message number. All complete algorithms (ones guaranteeing an optimum) are either exponential in the number of messages, the size of messages, or both. This would seem to limit us to incomplete algorithms, some of which have linear or constant message size and number.

Relevant sources for details are

For incomplete algorithms, we could use something as simple as hill-climbing (essentially gradient descent), possibly augmented with a stochastic search of some kind. In practice, more complex algorithms are used; see section 4.3.5 in that first source. If we choose to go down this route, I’ll write a separate follow-up article comparing our options with respect to how compatible they are with our domain.
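As a baseline for the hill-climbing family, here is a single min-conflicts step for an agent that owns one variable; this is the generic local-search move, not an algorithm from the cited sources, and all names are placeholders.

# Sketch: pick the value minimizing the number of violated constraints, given
# the neighbours' current values. Incomplete: it can get stuck in local optima.
import random

def min_conflicts_step(var, domain, constraints, assignment):
    def conflicts(value):
        trial = dict(assignment, **{var: value})
        return sum(not c(trial) for c in constraints)
    best = min(conflicts(v) for v in domain)
    return random.choice([v for v in domain if conflicts(v) == best])

constraints = [lambda a: a["x"] != a["y"]]       # toy not-equal constraint
assignment = {"x": 0, "y": 0}
assignment["x"] = min_conflicts_step("x", [0, 1], constraints, assignment)
print(assignment)  # {'x': 1, 'y': 0}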

Promise CSP

Based on the setup described to me by @cwgoes, we need a way to “simplify” a CSP so that it can be passed onto another solver. One question that should be asked is if the solution to a different CSP can even help with the current CSP. If so, passing the simplified CSP will accelerate later solving. The question of leveraging solutions to produce faster solutions within CSP is called the promise CSP problem. Unfortunately, the field is simply too primitive to give concrete results. Only a small number of families of PCSPs have been classified, none quite matching our domain.

Propagators

Propagators are a model for designing distributed algorithms. They are similar to Petri nets, except that instead of tokens, lattice elements are transferred. Each node stores a lattice element, and when a new element is received, the meet of the two elements is taken and stored. The idea is that each meet is an update on new information. The lattices could be subsets of some collection of elements (the powerset lattice), or intervals that could be made more specific with updates, etc. According to some sources, common CSP and SAT-solving algorithms are special cases of this, but I can’t find a clear account of how.
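A minimal propagator cell over an interval lattice (the meet is interval intersection; neighbours are notified only when the stored element actually becomes more precise). Purely illustrative Python:

# Sketch: a cell stores a lattice element; `tell` takes the meet with incoming
# information and notifies listeners only on change.
class IntervalCell:
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")  # top of the lattice
        self.listeners = []

    def tell(self, lo, hi):
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)  # meet
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            for listener in self.listeners:
                listener(self)

x = IntervalCell()
x.listeners.append(lambda cell: print("x is now in", (cell.lo, cell.hi)))
x.tell(0, 10)   # x is now in (0, 10)
x.tell(3, 20)   # x is now in (3, 10)
x.tell(3, 20)   # no change, no notification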

Section “6.3 Dependencies for Implicit Search” may offer a basic description of a propagator network for resolution-based SAT solving which might be adapted with other methods.

is an attempt at creating a modern propagator library.

As for what lattices we could use, the field of abstract interpretation offers a lot of work on which lattices are useful for tracking the flow of information through programs.


Communication Games in Solver Composition

Setting

Each Agent has a view of their intents. Many different views exist in parallel.

The more of them are composed, the more comprehensive the characterization of the desired future state frontier of the participating agents becomes, but composition is not a fully cooperative game in general.

Let’s assume solvers perform local hill climbing over their intent view, but can share information about it. They do that by using one-way messaging as a primitive.

For an agent or a solver to be willing to share an intent, we need some way to ensure there is utility in doing so.

Example 1: One-way Intent Sharing

Agents A1, A2, and A3 submit their intents to Solver A.

Agents B1, B2, and B3 submit their intents to Solver B.

Solver A shares all received intents with Solver B, but B does not reciprocate.

Solver B could now inject intents over several rounds to extract value from potential TXs which match intents from A with intents from B.

Example 2: Symmetric Intent Sharing

If A and B share symmetrically during the same round of solving, there might still be timing games to be played on the consensus layer over the state space, if the solvers crafting TXs and Controllers for Resources contained in them are not synced/batch encrypted.

Mechanism Design (Really Early Notes)

VCG Mechanism for Intent fees

MDPOP explains in detail a distributed VCG mechanism for DCOP.

This implements incentive compatibility and individual rationality, but requires a bank (i.e. a party that is disinterested in the outcomes) for taxation/burn to achieve it.
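For orientation, the core Clarke-pivot payment rule of VCG fits in a few lines (this is textbook VCG over a finite outcome set, not the distributed MDPOP variant; names are placeholders). The payments are what would flow to the disinterested bank or be burned.

# Sketch: VCG with Clarke pivot payments. values[i][o] is agent i's reported
# value for outcome o; each agent pays the externality it imposes on the rest.
def vcg(values, outcomes):
    def best(agents):
        return max(outcomes, key=lambda o: sum(values[i][o] for i in agents))
    agents = list(values)
    chosen = best(agents)
    payments = {}
    for i in agents:
        others = [j for j in agents if j != i]
        without_i = best(others)
        payments[i] = (sum(values[j][without_i] for j in others)
                       - sum(values[j][chosen] for j in others))
    return chosen, payments

values = {"a": {"x": 3, "y": 0}, "b": {"x": 0, "y": 2}}
print(vcg(values, ["x", "y"]))  # ('x', {'a': 2, 'b': 0})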

Trust Aware Intent Sharing

As sketched here, one way towards iterated games with better equilibria could be Trust Aware Intent Sharing (TAIS):

Amendment for Example 1

Solver A shares low-risk intents (e.g. low utility, or meant for public solving) with Solver B and measures (or queries the Agents who let it solve their Intents) how outcomes change depending on the Intents it receives from Solver B, compared to their absence.

If outcomes improve, more Intents can be shared in the next round; if they deteriorate, fewer.

If Agents annotate their Intents with trust thresholds that must be met by any Solver they are forwarded to, they can help the solvers decide which Intents to share first.

If we implement progressive Intent sharing like this, the Agents most exposed to a particular Solver's defection (extracting value via Intent injection) would be the ones closest to it in the trust graph, which moves the problem to the place where it is most effectively regulated.
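A Python sketch of the progressive sharing loop described above (the thresholds, the share fraction, and the adjustment rule are all assumptions):

# Sketch: share the lowest-risk eligible intents first, then widen or narrow
# the shared set depending on the measured change in outcomes.
def select_intents_to_share(intents, counterparty_trust, share_fraction):
    eligible = [i for i in intents if i["trust_threshold"] <= counterparty_trust]
    eligible.sort(key=lambda i: i["trust_threshold"])  # lowest risk first
    return eligible[:int(len(eligible) * share_fraction)]

def update_share_fraction(share_fraction, outcome_delta, step=0.1):
    if outcome_delta > 0:
        return min(1.0, share_fraction + step)   # outcomes improved: share more
    if outcome_delta < 0:
        return max(0.0, share_fraction - step)   # outcomes deteriorated: share less
    return share_fraction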

Trust Aware VCG

Since reputation metrics between Agents and Solvers exist, they could be reused to determine what the tax is levied for, i.e. which bank or which public good receives it.

Agents could vote on where they want each solver's tax to go, and solvers could agree on higher-order tax sinks.

The natural choices would be public goods that the creators of the views being composed by collaborative solving want to see funded.

Misc Protocol Notes

Consensus over Intents

Some guarantees for a single round could also be provided by running consensus over intents, for Intents which are implemented as Resources.

Questions

  • Do we want to model Intents as Resources sometimes?

Separation pre-solving and solving communication

By default, we assume solvers compute independently.
They can, however, mutually agree on interactive solving schemes where they pass messages during solving.

Ideally, we structure the protocol such that communication before solving a batch looks the same as communication during interactive solving.

I.e. we want scale-invariance along the dimension of participating nodes, as well as along the dimension of time/rounds of solving. Incentives will still be up to trust semantics, but the syntax should support that.

Also, solver fee structures etc. should be homogeneously encodable as regular intents. They can always be out of band, but if they are embeddable, solvers can publish intents about their internal structure.

In-band is anything the parser of this protocol layer recognizes.

Task

We want to figure out what kind of solving we can do with unidirectional message passing,
and which more efficient protocols built on top of it solvers can agree on.

CSP / DCOP Formulation

Some more notes on flavors of CSP which might be useful for implementing the above setting.

Interactive CSP

In Interactive Constraint Satisfaction Problems, the acquisition of domain values happens on demand.

If we find a way to tie a variable in the CSP to a namespace for the Resources that implement its domain values, we could query them on demand.

For example, we could use directories where owners or controllers of Resources of a certain type could announce their current owners, or solvers who hold intents over them.

These directories could serve requests in a trust aware manner, or against payment, if no prior trust was established.

I don’t know yet if we want to structure this as a “single-shot, but interactive” or “multi round keepalive + commitments for queries”. In non-cooperative settings (or as we call it: the general case), I assume they would collapse into the same game.

In the worst case, we can “just broadcast” queries for intents, but that would likely not give us reduced complexity.
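A small Python sketch of on-demand domain acquisition: a variable is tied to a resource namespace and only queries the corresponding directory when the solver actually needs its values. The directory interface is a placeholder.

# Sketch: lazily populate a variable's domain from a directory keyed by
# resource namespace; the directory API is assumed.
class InMemoryDirectory:
    def __init__(self, announcements):
        self.announcements = announcements
    def query(self, namespace):
        return self.announcements.get(namespace, [])

class LazyDomainVariable:
    def __init__(self, name, namespace, directory):
        self.name, self.namespace, self.directory = name, namespace, directory
        self._domain = None
    def domain(self):
        if self._domain is None:  # fetched only when first needed by the solver
            self._domain = self.directory.query(self.namespace)
        return self._domain

directory = InMemoryDirectory({"resource/B": ["have3", "have1B"]})
var = LazyDomainVariable("b_supply", "resource/B", directory)
print(var.domain())  # ['have3', 'have1B']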

Misc CSP Notes

Some sections from the “Handbook of Constraint Programming” we might want to give a closer read:

  • 20.2.3 Async DCOP, the lower bound propagation approach might be interesting in async timing games
  • 20.4 Distributed Local Search, which is compatible with our setting
  • 20.5 OpenCSPs, regarding problems where not all constraints are set at t_0
  • 21.3.1 Fuzzy Problems, might be useful for constraint languages UX/mapping with LLMs
  • 21.4 Problems that Change, regarding local repair

As it seems, there is a lot of relevant research. Our task will be to come up with an architecture that does not close the doors on implementing refinements as we proceed from a “simple” starting point.

Mechanism Design for Resource Swaps and Distributed Solving

Some more notes and literature pointers on different topics related to solving and intents.

Barter Exchange

Robust Kidney Exchange: [1811.03532] Scalable Robust Kidney Exchange

This model seems to be very close to the Multi-Agent, Multi-Resource swaps we have modelled above and provides robust and efficient solution algorithms.

One difference: instead of having one-for-one swaps, we need to enforce simultaneous fulfilment of potentially multiple HAVE and WANT constraints, so I don’t assume we can just reuse the incentive structures, but will “only” have to research their composition (which will likely lead to combinatorial complications, but alas).

Thesis on Kidney Exchange Mechanisms (+ background on Robust Barter Exchange)

This has some more background material

Towards Barter Double Auction as Model for Bilateral Social Cooperations

Modularity and greed in double auctions

Useful techniques to (de)compose double auctions

Mechanism Design which might be helpful for distributed solving

The following papers contain results on information elicitation and propagation in distributed information settings and networked auctions and might contain approaches on how to propagate partial solutions during solving.

Collusion-proof And Sybil-proof Reward Mechanisms For Query Incentive Networks

A Truthful Referral Auction Over Networks

Incentive-Compatible Diffusion Auctions

Information Diffusion and Revenue Optimization in Distribution Market

Trust Aware Mechanism Design

These are interesting with regard to trust-aware mechanism design:

Mechanism Design Powered by Social Interactions: A Call to Arms

http://dengji-zhao.net/publications/IJCAI/ijcai22-ec9.pdf

Strategy-Proof and Non-Wasteful Multi-Unit Auction via Social Network

Translating Natural to Formal Intent Languages

Notes

For distributed solving, we should investigate the intersection of composition of auctions and trust aware auctions.
