Forall / negative checks in resource logics

One question that has come up in our resource machine specification efforts is whether resource logics should be able to see all resources which are present (consumed or created) in a partial transaction, or whether they should only be able to explicitly check for inclusion of specific resources or resources with particular properties. In particular, the former would allow a resource logic to perform a kind of “forall” check, of the form:

logic (...) newResources =
  forall resource <- newResources:
    if resource.kind == B then return false
  ...

This kind of check is not possible if a resource logic sees only a subset of the resources consumed and created, since it cannot exclude the possibility that a resource of kind B (for example) exists outside the subset it sees.

A resource logic can already enforce arbitrary positive checks (statements about the consumption or creation of such-and-such a resource with such-and-such a property) by requiring the existence of a temporary resource which must be balanced, and whose logic encodes the check in question. A forall check would add the ability to enforce negative checks (statements about the non-creation or non-consumption of such-and-such a resource with such-and-such a property). In a certain sense, this is more powerful. However, it comes with a number of potential downsides:
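
To make the distinction concrete, here is a minimal sketch (in Python, with a toy `Resource` type and hypothetical check names not taken from the spec) of why a positive check survives composition while a negative check does not:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    kind: str
    quantity: int

# Positive check: passes as soon as a matching resource is present in
# the subset of the partial transaction this logic can see.
def positive_check(seen):
    return any(r.kind == "receipt" for r in seen)

# Negative (forall) check: must inspect every created resource, so it
# can only be evaluated against the full partial transaction.
def negative_check(all_created):
    return all(r.kind != "B" for r in all_created)

subset = [Resource("receipt", 1)]
full = subset + [Resource("B", 1)]
print(positive_check(subset))  # True, and stays true under composition
print(negative_check(full))    # False: composing in a B resource broke it
```

Once `positive_check` holds over the subset, adding more resources can never falsify it; `negative_check` can be falsified by any later addition.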

Validity under compositionality

With forall checks, resource logics no longer obey validity under compositionality. If resource logics only check a subset, then once a logic returns true a proof can be created, and that proof remains valid no matter what other partial transactions the original partial transaction is composed with. In a naive implementation where resource logics can iterate over all consumed and created resources, proofs can only be made at the end, which means we would give up both the added privacy that otherwise comes from partial solving and the possibility of hybrid public-private transactions (which would typically entail adding consumed/created resources in the post-ordering execution stage). A more sophisticated implementation would split each resource logic into two parts: the regular logic, containing only positive checks, for which a proof can be made as soon as those checks are satisfied, and a set of negative checks, for which proofs must individually be made about each resource subsequently added to the partial transaction (in the created or consumed sets). This would recover the privacy benefits (sans, of course, the information revealed by the negative checks) and a form of validity under compositionality, but it adds some implementation complexity.
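
A rough sketch of the split-logic idea, with made-up names (this is not the spec's API): the positive part is proven once, while the negative predicate must be re-proven against each resource later added to the partial transaction:

```python
# A logic split into a positive part and a negative predicate.  The
# positive part is proven once against the subset visible at proving
# time; the negative predicate must be (re)proven for every resource
# subsequently added to the partial transaction.
def make_logic(positive, negative):
    return {"positive": positive, "negative": negative}

def extend_ptx(ptx, logics, new_resource):
    # Adding a resource requires a fresh proof of every outstanding
    # negative predicate against just the new resource; the existing
    # positive proofs remain valid (validity under compositionality).
    if not all(l["negative"](new_resource) for l in logics):
        return False
    ptx.append(new_resource)
    return True

# Resources modeled as bare kind strings for brevity.
no_b = make_logic(positive=lambda seen: True,
                  negative=lambda r: r != "B")
ptx = ["A"]
print(extend_ptx(ptx, [no_b], "C"))  # True: ptx grows to ["A", "C"]
print(extend_ptx(ptx, [no_b], "B"))  # False: negative check blocks B
```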

Relatedly, negative checks create a counterintuitive disequivalence between combined balanced partial transactions and separate balanced partial transactions. Consider two partial transactions A and B, both balanced. If negative checks are allowed, A may be a valid balanced transaction, and B may be a valid balanced transaction, but A + B may be an invalid transaction due to some negative condition in a resource logic in A which is violated by some resource in B or vice-versa.
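
A toy illustration of this disequivalence (resources modeled as bare kind strings, all names hypothetical):

```python
# A negative check forbidding any resource of kind k in the whole ptx.
def no_kind(k):
    return lambda resources: all(r != k for r in resources)

def valid(ptx):
    return all(check(ptx["resources"]) for check in ptx["checks"])

def compose(a, b):
    # Composition merges resource sets and carries over all checks.
    return {"resources": a["resources"] + b["resources"],
            "checks": a["checks"] + b["checks"]}

ptx_a = {"resources": ["A"], "checks": [no_kind("B")]}  # forbids kind B
ptx_b = {"resources": ["B"], "checks": []}

print(valid(ptx_a), valid(ptx_b))    # True True: each valid alone
print(valid(compose(ptx_a, ptx_b)))  # False: A's check now sees B's resource
```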

Resource wrapping

Plausibly, the intention of negative checks may be to prevent interaction with some undesirable resource, perhaps a user or asset one wishes to sanction for whatever reason. Regardless of the motivation, this approach is not necessarily as effective as one might imagine, because any resource can always be wrapped in another resource which directly controls it but does not trigger the negative check. Suppose the negative check requires the non-existence of any resource of kind B: I can simply create a new resource kind B’, where anyone can create one B’ from one B, or convert a B’ back into a B. Of course, additional negative checks could be added, but this game of identifier cat-and-mouse does not seem to provide the original property we thought we might be able to obtain.
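
A minimal sketch of the escape, assuming resources are plain records with a `kind` field:

```python
# The negative check can only test the kind identifier it knows about.
def no_b(created):
    return all(r["kind"] != "B" for r in created)

def wrap(resource):
    # Anyone can mint one B' from one B (and unwrap it back later);
    # the B' resource directly controls the underlying B.
    return {"kind": "B'", "wraps": resource}

b = {"kind": "B"}
print(no_b([b]))        # False: a bare B is caught
print(no_b([wrap(b)]))  # True: the wrapped B slips past the check
```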

My current conclusion is that this kind of negative check is worth continuing to investigate, but not yet worth implementing. Discussion welcome!


In this case, I would opt to apply the general principle of “architect for the option, but do not implement yet” based on complexity alone.
We will be able to build the prototype applications we’re planning now without it, and their development will likely give us information about the desirability of properties like this, or other more complex ones, such that we can decide later.

It raises an interesting question though: How do we enforce non-inclusion of terms on the verifier level? Do they just agree on a canonical language and if any non-canon constructs are used, verification fails?
Would this language be an IR that we provide, or a low level one, e.g. halo2? In the halo2 case, would malicious users be able to implement non-canon constructs directly in halo2, to bypass the agreement?

I don’t know if these are solved problems, but it seems we need a way to enforce canonicity via the proof as well.

Ideally different predicate languages can exist, but it needs to be verifiable which one is being used.

Which leads to yet another question: can resources using different languages be used in the same ptx? It would be fine for a ptx to fail because, e.g., one language permits terms that are too restrictive for any other to verify, but it would be too restrictive for a ptx to fail by default on heterogeneous predicates.


I’m not sure I understand the problem, could you elaborate?

Reading this made me realise that the current version of taiga also doesn’t handle forall checks well, because post-ordering execution resources are essentially out of scope if the transaction submitted for ordering is balanced.

The idea of separating forall checks from other checks sounds interesting, but considering that forall checks seem too easy to escape, I agree it makes sense to assume only “positive” checks for now. I don’t like calling them positive, though (or maybe I’m just thinking about a different kind of positivity): a check like “all notes are of type X” seems to be both positive and forall, so it is a bit confusing to call them that.


If e.g. halo2 is the language being used directly, all halo2 terms are considered valid.

If we want to assume a canonical higher-level language that compiles down to halo2, we need some way to verify that the halo2 circuit was compiled from a valid term in the higher-level language.

If someone could find a way to just pass a halo2 representation directly, halo2 terms that cannot result from valid higher-level terms could be introduced, leading to behavior that the canonical form does not permit.

So if we ever want to assume that some higher-level language restricts the operations of the lower layers, we need to be able to verify the correspondence.


I’m not clear on how we would prevent, say, double-spending without a negative resource check. Presumably, if I have a resource representing some amount of money, part of the resource logic checks that the total money of resources created is not more than the total money of resources destroyed. This is absolutely not compositional: if pTX A consumes money resource r1 and produces money resource r2, and pTX B consumes money resource r1 and produces money resource r3, that doesn’t mean we can have pTX A+B which consumes money resource r1 and produces money resources r2 and r3.

I understand that we have a notion of “balance checks” as an application-specific workaround for this example. They are a form of negative resource check. However, balance checks do not seem very general: presumably not everything expresses neatly as a pseudo-monetary “balance”, though I can’t think of a counterexample off the top of my head at the moment. Lack of creativity is not evidence of absence.


These partial transactions cannot be composed anyway, because the composition of two consumptions of r1 would conflict (probably this needs to be clarified in the definition of composition).
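
A sketch of how composition could reject such a conflict, with a toy ptx representation (names are illustrative, not the spec's):

```python
def compose(a, b):
    # Composition is undefined when both ptxs consume the same resource:
    # the same nullifier would be revealed twice.
    conflict = set(a["consumed"]) & set(b["consumed"])
    if conflict:
        raise ValueError(f"conflicting consumption of {sorted(conflict)}")
    return {"consumed": a["consumed"] + b["consumed"],
            "created": a["created"] + b["created"]}

ptx_a = {"consumed": ["r1"], "created": ["r2"]}
ptx_b = {"consumed": ["r1"], "created": ["r3"]}
try:
    compose(ptx_a, ptx_b)
except ValueError as e:
    print(e)  # conflicting consumption of ['r1']
```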

I think we can always prevent double-spending with solely balance checks and the other commitment, nullifier etc. integrity checks. If this is a point of differing understandings we should probably clarify it first, since this is the point of balance checks in the first place. I think one could understand balance checks as encoding a special subclass of “balance constraint” which could otherwise be encoded as a check over all resources in a partial transaction - but doing so would require the ability to read those resources, which we can avoid in the balance check with homomorphic delta commitments and integrity checking.
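
As a rough illustration of the homomorphic-delta idea, here is a sketch using plain integer sums over hashed kind tags. This is a stand-in for real hiding commitments (e.g. Pedersen commitments), so it has none of their privacy properties, but it shows the additivity that makes balance checks composable:

```python
import hashlib

def kind_tag(kind):
    # Stand-in for a per-kind group generator; a real system would use
    # homomorphic commitments so deltas can be added and checked
    # without revealing the underlying resources.
    return int.from_bytes(hashlib.sha256(kind.encode()).digest(), "big")

def delta(consumed, created):
    # Homomorphically additive: the delta of a composed ptx is the sum
    # of the component deltas.
    return (sum(q * kind_tag(k) for k, q in consumed)
            - sum(q * kind_tag(k) for k, q in created))

# Balanced: every kind's consumed and created quantities cancel.
print(delta([("X", 5), ("Y", 3)], [("X", 5), ("Y", 3)]) == 0)  # True
# Creating quantity from nothing yields a nonzero delta.
print(delta([("X", 5)], [("X", 8)]) == 0)                      # False
# Composing two balanced ptxs stays balanced.
print(delta([("X", 5)], [("X", 5)]) + delta([("Y", 3)], [("Y", 3)]))  # 0
```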

Hmm - what do you mean by “application-specific workaround”? I understand balance checks as implementing a linear logic over resources with variable quantities. That doesn’t seem particularly application-specific as it doesn’t have anything to do with a particular application (only the concept of linearity checks).

I think it depends on what “everything” is, right? I agree that balance checks cannot encode, for example, an arbitrary constraint that “no resource of kind K” is created which a resource not of kind K wants to enforce, and some other kinds of negative checks as described above. I think that they encode a “self-linearity” check for all resource kinds, which checks that the sum of quantities of resources of my kind consumed and created in a balanced partial transaction is exactly zero.
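
This self-linearity reading could be sketched as a plaintext check over all resources in a balanced ptx (which is exactly the kind of read the delta commitment lets us avoid performing in the clear):

```python
from collections import Counter

def self_linear(consumed, created):
    # For every kind, consumed quantity minus created quantity must be
    # exactly zero in a balanced partial transaction.
    totals = Counter()
    for kind, q in consumed:
        totals[kind] += q
    for kind, q in created:
        totals[kind] -= q
    return all(v == 0 for v in totals.values())

print(self_linear([("X", 2), ("X", 3)], [("X", 5)]))  # True: 2 + 3 == 5
print(self_linear([("X", 5)], [("X", 5), ("K", 1)]))  # False: K from nowhere
```

Note that this only relates each kind's quantities to themselves; it cannot express a constraint one kind wants to impose on another, which matches the limitation described above.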

We could consider trying to generalize balance checks to the largest subclass of these “whole partial transaction” checks / negative checks which is structured in such a way as to admit the use of homomorphically additive commitments (and thus improved privacy). I wonder if that would clarify the model.


I feel like it makes sense to differentiate between global forall constraints, application-specific forall constraints, and cross-application forall constraints. Global invariants (like resource uniqueness) should probably be enforced at the RM level (e.g., the action circuit or delta checks). Application-specific invariants about the application’s own resources are enforced by the application logic, and the remaining question is how one application can check invariants about other applications. I can’t see how the first two types of checks could help with enforcing cross-application checks (e.g., even though the delta check checks multiple assets’ deltas at the same time, they are essentially parallel checks AND-ed together).

I’m also wondering how many (some? all?) of those potential forall checks could be easily escaped even in the presence of an appropriate checking mechanism. Because if there is a way to escape every check in a valid way (e.g. by wrapping a resource), then there is no point in providing the checking mechanism.
