The hard problem of fuzzy matching & requirement synthesis

In many intent-centric applications - especially those focused on demand-side aggregation - the “hard problem” (so to speak) is that of fuzzy matching and requirement synthesis:

  • Fuzzy matching, i.e. how to recognize that user intents - whose requirements for a product to be produced or an outcome to be sought are written in natural language - are similar to each other, and
  • Requirement synthesis, i.e. how to combine two “similar enough” user intents in a way that preserves the essential requirements of each intent being combined.

These problems are not trivial - in full generality, they are impossible, since solving them would require a perfect model of the world - so we will not attempt to “solve” them, per se, but rather clarify the interfaces and point out possible decompositions into sub-problems, along with ways to introduce humans (or LLMs, or other entities with world-models) into the loop to help with synthesis.

I will use Public Signal as a prototypical example to explain what I mean concretely here. In Public Signal, users submit intents which describe a willingness to pay for a particular product, if only it existed. For example (using some pseudo-syntax), imagine that we have intents as follows:

  • Bob is willing to pay up to 100 USD for a simple smartphone with a web browser, hardware kill switches, at least one SIM slot, and open-source hardware/software stack. It must weigh under 500 grams and have at least a day’s worth of battery life in regular use.
  • Sally is willing to pay up to 150 USD for a simple smartphone with a web browser, two SIM slots, and an open-source hardware/software stack. It must weigh under 750 grams and have at least a day’s worth of battery life in regular use.
  • Charlie is willing to pay up to 75 USD for a simple smartphone with a web browser and two SIM slots. It must have at least two days’ worth of battery life.
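
For concreteness, here is one hypothetical structured encoding of these three intents (Python chosen arbitrarily; the field names and units are assumptions, not a proposed schema). Note that even this naive encoding presupposes agreement on which attributes exist and what they mean - which is exactly the difficulty discussed next:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """One hypothetical structured encoding of a Public Signal intent.

    The field names, units, and attribute vocabulary are assumptions
    made purely for illustration - choosing them is itself the hard part.
    """
    buyer: str
    max_price_usd: int
    at_least: dict = field(default_factory=dict)  # attribute -> minimum acceptable value
    at_most: dict = field(default_factory=dict)   # attribute -> maximum acceptable value
    must_have: frozenset = frozenset()            # boolean features the product must offer

bob = Intent(
    buyer="Bob", max_price_usd=100,
    at_least={"sim_slots": 1, "battery_days": 1},
    at_most={"weight_grams": 500},
    must_have=frozenset({"web_browser", "hw_kill_switches", "open_hw_sw_stack"}),
)
sally = Intent(
    buyer="Sally", max_price_usd=150,
    at_least={"sim_slots": 2, "battery_days": 1},
    at_most={"weight_grams": 750},
    must_have=frozenset({"web_browser", "open_hw_sw_stack"}),
)
charlie = Intent(
    buyer="Charlie", max_price_usd=75,
    at_least={"sim_slots": 2, "battery_days": 2},
    must_have=frozenset({"web_browser"}),
)
```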

Reading these descriptions, one can straightforwardly infer classes of potential products which would satisfy different combinations of these intents. What is not clear is how best to represent these kinds of requirements in a way that can be deterministically processed - since in reading this text and synthesizing possible combinations, one uses an understanding of the world - specifically of smartphones - to identify which entities in the requirement descriptions are distinct and how they match up. I can imagine a few ways the application design might go:

  1. Users of the application might construct and share specific product ontologies which define standardized features, attributes, and synthesis rules for a particular category of products. For example, if “smartphone” is a category, “weight” and “battery life” might be standardized attributes. Intents can then either be written using these specific ontologies directly, or perhaps converted from natural-language descriptions using LLMs (and then displayed to the user for confirmation). Solvers (and potential producers of the product(s) in question) can use the synthesis rules to determine how requirements can be combined; a toy synthesis rule is sketched after this list.
    This option is simple, straightforward, and safe, but it puts a lot of onus on the authors of these product ontologies to capture the salient features and attributes. We will also need to think more about how these ontologies are developed, shared, and updated - and how disputes are handled after the products are actually built (as to whether or not they matched the requirements specified in the ontology).
  2. We might focus exclusively on the problem of composing natural-language descriptions, and aim for Public Signal to ultimately produce some composite intent with a natural-language description of the product to be produced. The question then is how these natural-language descriptions can be combined. Users of Public Signal could choose a designated agent - perhaps a sufficiently neutral party or an LLM - who is granted special permission to combine intents and produce composed natural-language descriptions which they assert will satisfy the originally articulated requirements. In the limit case, there can be many such parties at different stages of composition, and users could delegate this “right of composition” along specific lines of trust and expertise (e.g. I might delegate mine to a smartphone hardware expert friend); a rough sketch of this flow also follows the list.
    This approach is much more general and can adopt a topology which requires far less agreement on natural-language ontologies, but it requires a lot of online user interaction - perhaps workable for a slower-paced application such as Public Signal, though we need to think through the specifics. There is also some “compositional MEV” here that will require analysis.
  3. Some combination of (1) and (2), possibly with additional incentives such as staking on the accuracy of compositions, where future post-production disputes could lead to slashing.
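
To illustrate what a synthesis rule in option (1) might look like, here is a minimal sketch that combines two intents by taking the tightest bound per attribute and the union of required features, reusing the hypothetical Intent encoding above. Summing the willingness to pay into a pooled budget is likewise just an assumption for illustration:

```python
from typing import Optional

def synthesize(a: Intent, b: Intent) -> Optional[Intent]:
    """Combine two intents by intersecting their requirements.

    Returns None if the intents are incompatible under this (very
    simplistic) rule; real synthesis rules would live in the ontology.
    """
    at_least = dict(a.at_least)
    for attr, lo in b.at_least.items():
        at_least[attr] = max(at_least.get(attr, lo), lo)
    at_most = dict(a.at_most)
    for attr, hi in b.at_most.items():
        at_most[attr] = min(at_most.get(attr, hi), hi)
    # Incompatible if any lower bound now exceeds the matching upper bound.
    if any(at_least[k] > at_most[k] for k in at_least.keys() & at_most.keys()):
        return None
    return Intent(
        buyer=f"{a.buyer}+{b.buyer}",
        max_price_usd=a.max_price_usd + b.max_price_usd,  # pooled demand (an assumption)
        at_least=at_least,
        at_most=at_most,
        must_have=a.must_have | b.must_have,
    )

# synthesize(bob, sally) yields: >= 2 SIM slots, <= 500 g, >= 1 day of
# battery, all listed features, with up to 250 USD of pooled demand.
```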
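
And a rough sketch of the delegated composition flow in option (2) - names and structure invented for illustration; in a real system the assertion would be signed by a composer to whom users have delegated their “right of composition”:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NLIntent:
    author: str
    description: str   # free-form natural-language requirements
    max_price_usd: int

@dataclass(frozen=True)
class Composition:
    """An assertion by a designated composer (neutral party, expert,
    or LLM) that the composed description satisfies every input's
    originally articulated requirements."""
    composer: str
    inputs: tuple             # NLIntents (or prior Compositions) combined
    composed_description: str

def compose(composer: str, intents: tuple) -> Composition:
    # Placeholder: the real combination is produced by the composer's
    # world-model (a human or an LLM), not by string concatenation.
    combined = " AND ".join(i.description for i in intents)
    return Composition(composer, intents, combined)
```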

I’ll leave my thoughts here for now. This problem is quite general and also very relevant to topics such as governance. Inviting opinions from @apriori @degregat @nikete especially.


Since we want to enable a plural approach here, there will be many ontologies, and users who want to do similarity matching will need to agree on a common one - or at least on different ones whose elements are “comparable” across ontologies. Ideally, users would pick the ontologies that improve their outcomes the most, but this does not sound like a trivial problem for individual users to solve; there should be overlap with the slow game research to come.

It sounds to me like the governance issues around curating ontologies will be quite similar to those around curating community models (e.g. LLMs), so any progress on these mechanisms can be shared across the building blocks needed for approaches 1 and 2.

Also, once ontologies are available, the governance and selection techniques described above should be useful for linking (sub-)ontologies to each other (including similarity scores?), as well as for linking decomposed elements of natural-language descriptions to elements of ontologies.

This way, approaches 1, 2, and 3 could be used in parallel, while approximate similarity scores might still be recovered across approaches, ontologies, or agents.

The most straightforward way to start seems to be having application developers curate (sub-)ontologies, or maybe having solvers recommend some to users, since trust assumptions could then be reused.
In the long run, we certainly want a refinement of this, or a transition to learning-based approaches, but if we want to get things off the ground quickly to observe the system in operation, I think we should start with approach 1.


Some initial thoughts on nomenclature: I would speak of the compatibility of intents rather than their similarity. This seems more general in a way that is correct, and avoids confusion between symbol and signifier. In particular, in demand-aggregation games you care about matching a large set of similar demands, but in the task/provider case you care about matching a task with a suitable provider, even though the two may be very distant in the similarity space.

If you take that semantic step, then I think it suggests an approach that unifies fuzzy matching and requirement synthesis into a single game, where you are effectively trying to be a good zero-shot agent for tasks that take in reports and select allocations.
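
If I read this correctly (my paraphrase, interface names invented), compatibility checking and synthesis then collapse into a single mapping from reports to an allocation, and an agent is good insofar as it performs this mapping well zero-shot across task types:

```python
from typing import Mapping, Protocol, Sequence

class AllocationAgent(Protocol):
    """Hypothetical interface for the unified game: take in reports
    (demands, tasks, provider offers) and select an allocation."""
    def allocate(self, reports: Sequence[str]) -> Mapping[str, Sequence[str]]:
        """Map reported requirements to an allocation, e.g. which
        reports are bundled together into which composite outcome."""
        ...
```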


This book on evolutionary learning in signaling games (including convention games) seems to provide relevant context on the iterated game of agreeing on an ontology:
Signals: Evolution, Learning, and Information (Oxford Academic)


Agreed, “compatibility” is a better word here. In a sense we can probably generalize the theory if we want: discrete intents (as they currently exist in the resource machine) have objectively verifiable (if not necessarily decidable) compatibility with 100% certainty, while natural-language intent compatibility would have to be measured directly (method 2) or mapped to a discrete representation (method 1).
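
A sketch of the two regimes this suggests (interface names are mine): discrete compatibility as a verifiable predicate, natural-language compatibility as a measured score:

```python
from typing import Protocol

class DiscreteIntent(Protocol):
    """Resource-machine-style intent: compatibility is an objectively
    verifiable yes/no answer, even if not always efficiently decidable."""
    def compatible_with(self, other: "DiscreteIntent") -> bool: ...

class FuzzyIntent(Protocol):
    """Natural-language intent: compatibility can only be measured
    directly, e.g. scored in [0, 1] by an LLM or a designated composer
    (method 2), or recovered by first mapping to a DiscreteIntent
    (method 1)."""
    def compatibility_score(self, other: "FuzzyIntent") -> float: ...
```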


In the smartphone example above, could you require for such a product that the potential buyers construct an on-demand DAO and vote on their preferences? You run into plenty of edge cases where many preferences might be missed, but if you think about the smartphone market today, there really isn’t all that much choice anyway. If Public Signal is a 10x improvement in user satisfaction, that might be good enough; in that case, an aggregate of users’ preferences would be sufficient.

One challenge with natural language is handling disputes as they arise. Prediction markets run into this problem, and the resolution is intersubjective. Up front, buyers and creators of the product would need to agree on a dispute resolution mechanism that is binding. A further challenge is that physical items, unlike software, are sourced from raw materials in the physical world. The extraction of these materials and their assembly into a product typically requires an understanding of the local laws governing these activities - consumer protection laws, labor regulations, and ESG requirements, to name a few.

To make this work, we probably need some type of programmable dispute resolution which, in the event a dispute is triggered, executes based on oracle updates and attestations. A credible agreement on semantics would be required as well.
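
To make “programmable dispute resolution” slightly more concrete, here is a toy sketch (all names invented; the oracle stands in for whatever intersubjective resolution mechanism the parties agreed to up front):

```python
from dataclasses import dataclass
from enum import Enum, auto

class DisputeState(Enum):
    OPEN = auto()
    RESOLVED_FOR_PRODUCER = auto()
    RESOLVED_FOR_BUYERS = auto()

@dataclass
class Dispute:
    """Toy dispute over whether a delivered product meets the agreed spec."""
    spec_hash: str        # hash of the spec both sides committed to up front
    producer_stake: int   # amount the producer staked on conformance
    state: DisputeState = DisputeState.OPEN

    def resolve(self, oracle_attests_conformance: bool) -> int:
        """Resolve from an oracle attestation; returns the amount slashed."""
        if oracle_attests_conformance:
            self.state = DisputeState.RESOLVED_FOR_PRODUCER
            return 0
        self.state = DisputeState.RESOLVED_FOR_BUYERS
        return self.producer_stake  # slashed, e.g. refunded pro rata to buyers
```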
