Do we have one compliance proof per resource logic? I thought we had a compliance proof for the whole transaction (but I may be out-of-date here). If the compliance proofs are per-resource, they should be bundled as part of the proofs in the action, yes.
Yes. Also, in principle, roots relate to the ordering of the transaction, they have no relation to any particular action (except insofar as proofs of inclusion in CMtree must reference one of the roots in this set).
Each compliance proof involves at most one created and one consumed resource, so it is neither one per resource nor one per transaction. It might be more convenient to have compliance proofs per transaction (because that way we need fewer compliance proofs), but it might also be simpler to have them per action, because then all proofs that require knowledge of the full resource plaintext live in one context.
I see. If compliance proofs require knowledge of the resource plaintext, we would want to make them separately, right? Since we don't necessarily want to reveal the plaintext of a resource to the solver or further parties, ideally we make a compliance proof as soon as possible.
Do we actually expect the proofs of actions and the proofs of the transactions that contain these actions to be created at different times? If not, it might make no difference whether these proofs are associated with actions or transactions; otherwise, it might not be ideal to have multiple points of proof creation that require plaintext knowledge of the same resources.
Yes, in our models so far they were sets too, but some mechanism needs to ensure that they are order invariant in practice, as @vveiln pointed out here.
No inputs to the logics have any order at all, since they are sets. You cannot write a function that depends on ordering information if the ordering information is not available to the function. What variables are you referring to? Maybe there is some terminological confusion here.
That means we require order invariance of evaluation from the predicate and its validation mechanism, which is fine, but then we need to make the implementation behave like this:
For example, if a predicate looks like this: (*1 && *2) || (*3 && *4), with *1 to *4 being variables and {A, B, C, D} being a set of linear resources that can fill them, then "(A && B) || (C && D)" would need to be equivalent to all other orders, like "(B && A) || (D && C)" etc.
There should be a way to make this transparent and not put the labor on logic authors, but we should make it explicit (a sketch of the required invariance check follows below).
Note: This problem exists whether we introduce actions to the model or not.
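To make the invariance requirement concrete, here is a minimal Haskell sketch (the predicate and the Bool-valued resources are illustrative placeholders, not actual resource logic types): it enumerates every assignment order and checks that the predicate's result does not depend on which resource fills which slot.

```haskell
import Data.List (permutations)

-- Placeholder predicate over four variable slots, mirroring
-- (*1 && *2) || (*3 && *4) from the example above.
predicate :: Bool -> Bool -> Bool -> Bool -> Bool
predicate v1 v2 v3 v4 = (v1 && v2) || (v3 && v4)

-- Order invariance: the predicate must evaluate to the same result
-- for every way of assigning the same set of resources to its slots.
orderInvariant :: [Bool] -> Bool
orderInvariant resources =
  case [predicate a b c d | [a, b, c, d] <- permutations resources] of
    []       -> True
    (r : rs) -> all (== r) rs
```

For instance, `orderInvariant [True, True, False, False]` is False, since (A && B) || (C && D) holds for one assignment but not for another - exactly the situation a validation mechanism (or a compiler pass) would have to rule out or normalize.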
Thanks (also synced verbally) - I understand what you mean now, and I agree that we need to think about this and make sure resource logic authors don't have to take on a lot of extra mental load.
I just realized that there is still a problem in the context of authorization via signed messages in the extra data (a type alias in the Juvix code hid it for me until now).
If we require the nullifier nf instead of the commitment cm, we must be able to reproduce it as part of the authorization check (i.e., isAuthorizedByOwner), which is part of the resource logic (see the implementation code at the end of this post).
However, resource logic functions as currently defined only receive the resource plaintext self and the transaction tx as inputs, which is not enough to compute the nullifier nf = h_{nf}(nk, r). We need the nullifier key nk as well, to which the resource logic function does not have access.
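To spell out the dependency, a minimal Haskell sketch (toyHash and all types here are placeholders, not the actual RM primitives): the nullifier is a function of both the resource and the nullifier key, so a logic that only sees self cannot produce it.

```haskell
import Data.Bits (xor)
import Data.Word (Word8)

newtype NullifierKey = NullifierKey [Word8]
newtype Resource     = Resource [Word8]
newtype Nullifier    = Nullifier [Word8]
  deriving Eq

-- Toy stand-in for the hash h_nf; a real instantiation would use a
-- collision-resistant hash. It only serves to show the data flow.
toyHash :: [Word8] -> [Word8]
toyHash = scanl1 xor

-- nf = h_nf(nk, r): without nk, the resource logic cannot compute nf.
nullifierOf :: NullifierKey -> Resource -> Nullifier
nullifierOf (NullifierKey nk) (Resource r) = Nullifier (toyHash (nk ++ r))
```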
I am also aware that we most likely do not want to share the nullifier key, although it can be a randomly generated secret used only once for this specific resource - @vveiln explained it here:
If we are OK with sharing it, we could put nk (encrypted?) into the extra data map, but then the question is under which key to find it. The only way seems to be to use the commitment h_{cm}(self) of the self resource as the lookup key, which we don't want to use, as it reveals information about the creation date and brings us back to the beginning.
Any ideas on how to solve this more simply and elegantly, @vveiln @cwgoes @degregat?
I am not sure if this changes with an action stack. It could work if an action contains exactly one consumed resource, because then there is only one nullifier to search for.
Implementation Code
```
tokenLogic (self : Resource) (tx : Transaction) : Bool :=
  if
    | isConsumed self tx := isAuthorizedByOwner self tx -- <-- NOTE: `nk` cannot be passed here
    | isCreated self tx := true
    | else := false;

isAuthorizedByOwner
  (self : Resource)
  (tx : Transaction)
  (nk : NullifierKey) -- <-- NOTE: This argument is not available in `tokenLogic`.
  : Bool :=
  let
    owner : PublicKey := getOwner self;
    selfNf : Helper.Nullifier := nullifier self nk;
    lookupResult : Maybe (Pair AuthMessage Signature) :=
      lookupExtraData (natToBytes32 selfNf) tx;
  in case lookupResult of
    | nothing := false
    | just (msg, sig) :=
        AuthMessage.consumed msg == selfNf
          && anomaVerifyDetached sig msg owner
          && isSubset
            (AuthMessage.mustBeCreated msg)
            (commitmentSet tx);

type AuthMessage :=
  mkAuthMessage {
    consumed : Nullifier;
    mustBeCreated : Set Commitment
  };
```
A transaction includes commitments of created resources and nullifiers of consumed resources. If we have access to the transaction data, we know the nullifier. More than that, the party that creates logic proofs also creates compliance proofs. One of the checks included in the compliance proof is the proof of correct nullifier computation, which requires knowledge of the nullifier key. So from the perspective of knowledge, it is sorted.
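For orientation, the compliance-side check being referenced, as a tiny Haskell sketch (hNf is an assumed stand-in hash and the types are placeholders): the prover, who knows nk, shows that the published nullifier was computed correctly.

```haskell
-- Sketch only: the prover supplies nk and the resource encoding as
-- private witnesses; the check recomputes nf and compares.
checkNullifierComputation ::
  ([Int] -> Int) ->  -- hNf: stand-in for the nullifier hash
  [Int] ->           -- nk: nullifier key (private witness)
  [Int] ->           -- r: resource encoding (private witness)
  Int ->             -- nf: published nullifier
  Bool
checkNullifierComputation hNf nk r nf = hNf (nk ++ r) == nf
```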
Another question is the function signature. From the specification perspective it isn't a problem, as the spec currently defines the resource logic inputs as:
Resource Logic Public Inputs:
- nfs \subseteq nfs_{tx}
- cms \subseteq cms_{tx}
- tag : \mathbb{F}_{tag} - identifies the resources being checked
- extra \subseteq tx.extra

Resource Logic Private Inputs:
- input resources corresponding to the elements of nfs
- output resources corresponding to the elements of cms
- custom inputs
which certainly allows passing some private data securely, if we are talking about the shielded case. And whatever works in the shielded case must work in the transparent case as well.
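Restating the input shape from the quoted spec as a type, purely for orientation (a Haskell sketch; all names here are illustrative placeholders, not the spec's identifiers):

```haskell
import Data.Map (Map)
import Data.Set (Set)

-- Placeholder scalar types standing in for the spec's field elements.
type Nullifier  = Integer
type Commitment = Integer
type Tag        = Integer

-- Public inputs: subsets of the transaction's nullifiers, commitments,
-- and extra data, plus the tag identifying the resources being checked.
data PublicInputs = PublicInputs
  { nfs   :: Set Nullifier
  , cms   :: Set Commitment
  , tag   :: Tag
  , extra :: Map Integer [Integer]  -- subset of tx.extra (shape assumed)
  }

-- Private inputs: the resource plaintexts behind nfs and cms, plus any
-- custom witness data the logic needs.
data PrivateInputs r = PrivateInputs
  { inputResources  :: [r]
  , outputResources :: [r]
  , customInputs    :: [Integer]
  }
```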
Given a consumed resource r^* with an associated tag^* and the input resources r_i corresponding to the elements of nfs, how can I find the nullifier nf^* associated with r^* in nfs if I want to?
Since nf^* = h_{nf}(nk^*, r^*), I must know nk^* as well to compute the associated nf^*.
As explained above, I want to compute nf^* in the r^* resource logic to look up a signed message in the extra data without looping over all nullifiers in nfs and trying every single one, which would be computationally expensive. If I used the commitment instead, it would work, but it would leak the resource creation date.
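The cost difference being described, as a small Haskell sketch (the Map for the extra data and the Integer-valued nullifiers are assumptions for illustration): knowing nk^* allows one direct lookup under nf^*, while without it the logic would have to scan and test every nullifier in the transaction.

```haskell
import qualified Data.Map as Map
import Data.Maybe (mapMaybe)

type Nullifier = Integer   -- placeholder
type AuthEntry = String    -- placeholder for (AuthMessage, Signature)

-- With nk*: compute nf* and do a single keyed lookup in the extra data.
lookupDirect :: Nullifier -> Map.Map Nullifier AuthEntry -> Maybe AuthEntry
lookupDirect = Map.lookup

-- Without nk*: try every nullifier in the transaction, since we cannot
-- tell which one belongs to r*.
lookupByScan :: [Nullifier] -> Map.Map Nullifier AuthEntry -> [AuthEntry]
lookupByScan nfs extraData = mapMaybe (`Map.lookup` extraData) nfs
```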
Depends on the resource machine instantiation. In Taiga it is a commitment for a created resource and a nullifier for a consumed resource, which also answers your second question: given the tag of a consumed resource, you know its nullifier.
I just talked to @AHart about how to best bridge the RM and IM to resolve the linearity problem here:
Intent → CSP → TX pipeline
We have a solver that matches intents and a state machine that only needs to check whether a TX it gets from the solver is valid.
The CSP survey provides an example encoding for intents in section 6.2 (we add an extension to memorize resource names, for implementation reasons).
An intent like this could be realized as an unbalanced TX carrying an intent predicate which encodes extra constraints over the resources. This predicate would likely be nockma code.
For each solving backend (e.g. integer linear programs) there would be one transformation that takes the nockma code of the intent predicate and all the resource logics from the resources in the intent, and translates them into a set of constraints in the constraint solving language. This represents our intent in constraint form.
The solver then tries to match intents in constraint form to each other, to produce assignments of resources to variables which satisfy multiple intents at once. If a matching is found, it gets translated back by accumulating all the resources and intent predicates from the original intents into a TX.
To solve ordering problems the state machine might have when re-validating a TX, we can provide annotations about the explicit structure of the assignment: During the translation, we memorize which resource was assigned to which variable in the constraint form and translate this to which resource needs to be assigned to which variable in the different pieces of nockma code in the TX.
The state machine then only needs to run the nockma code using this annotation and observe whether it fails or passes the evaluation. (Does this make sense, @mariari @cwgoes?)
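One possible shape for such an annotation, sketched in Haskell (all names here are mine for illustration, not a spec): a map from predicate variables to the concrete resources the solver assigned, which the state machine can replay deterministically.

```haskell
import qualified Data.Map as Map

type VarName     = String   -- a variable slot in an intent predicate
type ResourceRef = Integer  -- placeholder reference to a resource

-- The annotation memorized during translation: which resource the
-- solver assigned to which predicate variable.
type Assignment = Map.Map VarName ResourceRef

-- Re-validation sketch: run each predicate against the solver's
-- assignment and require that all of them pass.
revalidate :: [Assignment -> Bool] -> Assignment -> Bool
revalidate predicates assignment = all ($ assignment) predicates
```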
Takeaways
This implies that nockma will be the stack machine used in the evaluation context for TXs, which makes sense to me.
The application team (@Michael @paulcadman, and I'm happy to help) should check out the predicate language in section 6.2 of the CSP survey and decide whether this is a good starting point for formulating constraints. We should choose a predicate language that makes it intuitive to think about constraint implementation for applications. Translating something like logical conjunction into mixed integer linear programs is pretty straightforward (a standard encoding is sketched below), and by choosing an IR here, we also make the backends interchangeable.
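For reference, the standard trick for boolean connectives over 0/1 variables, as a small Haskell sketch (the Lin constraint type is an illustrative placeholder, not any particular solver's API):

```haskell
-- A linear constraint: sum of (coefficient, variable) <= bound.
data Lin = Lin [(Int, String)] Int
  deriving Show

-- z = x && y over 0/1 variables: z <= x, z <= y, z >= x + y - 1.
encodeAnd :: String -> String -> String -> [Lin]
encodeAnd z x y =
  [ Lin [(1, z), (-1, x)] 0          -- z - x <= 0
  , Lin [(1, z), (-1, y)] 0          -- z - y <= 0
  , Lin [(-1, z), (1, x), (1, y)] 1  -- x + y - z <= 1
  ]

-- z = x || y over 0/1 variables: z >= x, z >= y, z <= x + y.
encodeOr :: String -> String -> String -> [Lin]
encodeOr z x y =
  [ Lin [(-1, z), (1, x)] 0           -- x - z <= 0
  , Lin [(-1, z), (1, y)] 0           -- y - z <= 0
  , Lin [(1, z), (-1, x), (-1, y)] 0  -- z - x - y <= 0
  ]
```

Negation over 0/1 variables is just z = 1 - x, so predicate trees can be lowered in the usual Tseitin style by introducing one auxiliary variable per connective.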
The resource machine doesn't have anything called (specifically) an "intent predicate". So-called intent resources are just ephemeral resources (scoped to the lifetime of a single transaction), which have regular resource logics (which, in the transparent case, could indeed be Nockma). Is that what you have in mind here (or sufficient to implement it)? Or do you have in mind something else?
Here you seem to imply that the intent predicate is something other than a resource logic, which is confusing to me. Why can't we just try to implement a bidirectional transformation between CSPs and (at least a subset of) resource logics? Since resource logics encode the constraints which must be satisfied in order to create a valid transaction, that seems like the most natural option to me. Now, it may be difficult to translate arbitrary resource logics (e.g. in Nockma) to CSPs, so we might want to think about what subset we can easily translate, and what kind of structure might be necessary in a resource logic VM in order for it to be easily translatable (at least in the mid-term, adding a second VM, especially a second transparent one, is not at all out of the question, if doing so would allow us an easier translation here).
This part makes sense to me, although we need to spell out exactly what this translation looks like; I expect there will be some non-trivial parts.
Yes, this (what resources to pass in which order to resource logics) can be encoded in the transaction function - that probably makes most sense.
I don't think we need any fancy annotations here; just the transaction function and a transaction data structure which allows the order of the resource logic arguments to be clearly specified should be sufficient. The transaction data structure should not require running Nockma code to evaluate, though, apart from any resource logics written in Nockma (not sure if you meant to imply that other Nockma code is required or not).
I don't follow this part. Using Nockma (1) as a language for resource logics, (2) as a language for transaction functions, and (3) to evaluate transaction validity are three different choices, and choosing (1) or (2) does not imply choosing (3). We have a deterministic procedure to evaluate transaction validity; I see no reason to implement that procedure in Nockma or any VM (other than perhaps a zkVM to prove evaluations later on...), as that would just be extra work and lower efficiency, and doesn't bring us any benefits. I'm fine with the "stack machine evaluation" as a mental model for how transactions are evaluated, or even as a formal model if we need one for some reason, but I don't think there's much benefit in actually implementing it, apart from as a tool for understanding.
> The application team (@Michael @paulcadman, and I'm happy to help) should check out the predicate language in section 6.2 of the CSP survey and decide whether this is a good starting point for formulating constraints.
Nice, thanks for the reference. @paulcadman already wrote a Domain Specific Language (DSL) for intents that allows expressing clauses such as
```
simpleSwap (A B : Asset) : Transaction :=
  intent@{
    clauses := [give (exactly A) for exactly B]
  };

complexSwap (A B C D E F : Asset) : Transaction :=
  intent@{
    clauses :=
      [ want (exactly A) for any [B; C]
      ; give (any [A; D]) for all [E; F]
      ]
  };
```
That is a good starting point - it misses the intent and resource weights, and the non-trivial part that creates the actual (partial) transaction (consumption of the "have" resources and creation of an ephemeral intent resource expressing the constraints).