The objective of this thread is to follow up on the discussions between myself, @Michael, and @degregat concerning resource logic composability.
The simplest model for a resource logic evaluation context that I came up with during the initial intent machine research was a stack-based one, as follows:
The evaluation context consists of the following:

- Two pairs of stacks, for `consumed_resources` and `created_resources`.
- A set of constraints, taken from the `resource_logics` and `extra_data` field(s). The constraints specify slots for `consumed` and `created` resources, as well as invariants that need to hold over them. This defines a clause we need to find a satisfying assignment for.
The evaluation process works as follows:
- We receive a TX. This could be a TX composed from other TXs.
- We populate the first stacks in the pairs with `consumed` or `created` resources, respectively.
- We populate the constraints with all `resource_logics`; if they refer to side constraints from `extra_data`, we add these too.
During the search for a satisfying assignment of the constraints, we do the following.

For each slot, we pop elements off the respective first stack and check: is the element valid in this slot (i.e., are its properties the required ones)?

- Yes → put the element in the slot.
- No → push the element onto the other stack.

Should the first stack in any pair be empty, we switch stack positions, with the second stack becoming the first and vice versa. If both stacks are empty and slots still need to be filled, the logic has failed to validate.
Note: This can lead to satisfying assignments for subclauses, but the whole process might still fail. We then have to start again. This is where the search comes in.
If a satisfying assignment for the clause is found, all constraints are satisfied and the TX is valid.
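The pop/push/swap loop above can be sketched in Python as follows. This is a minimal model, not the specified mechanism: `fill_slots`, the predicate-per-slot representation of constraints, and the tuple encoding of resources are all hypothetical simplifications.

```python
def fill_slots(slot_predicates, resources):
    """Greedy slot filling with a pair of stacks: pop candidates off the
    first stack, push mismatches onto the second, and switch stack
    positions when the first runs empty. Returns the slot assignment,
    or None if validation fails."""
    first, second = list(resources), []
    assignment = []
    for pred in slot_predicates:
        placed = False
        tried, total = 0, len(first) + len(second)
        while not placed and tried < total:
            if not first:  # first stack empty: switch stack positions
                first, second = second, first
            candidate = first.pop()
            tried += 1
            if pred(candidate):
                assignment.append(candidate)  # element is valid in this slot
                placed = True
            else:
                second.append(candidate)  # push onto the other stack
        if not placed:  # both stacks exhausted with slots left to fill
            return None
    return assignment
```

A full evaluation would run this once for the consumed pair and once for the created pair, with the slot predicates derived from the resource logics and extra data.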
(Edit: Amended this model to tie the local view of a resource logic to the global view of the whole context by loading all constraints first and doing CSP style solving. @AHart @cwgoes @vveiln it seems to me that this could/should be the connection between the resource machine and the intent machine, so we should figure out how we can make the model of the evaluation context compatible with both machines.)
This could be implemented using sets or even smarter approaches, and that's fine; I'm just using this stack-based approach to have some implementable model that introduces only the constraints we want the resource machine to have: everything is order-invariant, and it only cares about amounts, i.e. balances, and predicates being satisfied.
@cwgoes mentioned this, or equivalent behavior, is not (yet) included in the specification of the resource machine, but I think the resource machine would be the correct place to implement it.
What do you think, @vveiln? @mariari, is anything like this implemented already, and if so, are the approaches compatible/equivalent?
I think we still have a problem in the context of authorization and resource logic composability (at least with the current ARM).
In the meeting yesterday, we defined the signature data to contain the following information:
{
"consumed": resource commitment
"must_be_created": (set resource commitments)
}
Let's look at the transfer case, where Alice partially transfers Kudos to Bob:

`10 A_Alice > 5 A_Alice + 5 A_Bob`
Alice, calling the `transfer(...)` transaction function, has information about all resources and commitments. She signs the message
transferMessage =
{
"consumed": h_cm(10 A_Alice)
"must_be_created": (h_cm(5 A_Alice), h_cm(5 A_Bob))
}
Let's call her view the global view of the (partial) transaction.
However, taking the local view of the consumed resource logic (i.e., of the `10 A_Alice`), we don't have knowledge about which resources should be in the `"must_be_created"` set. Thus, the `10 A_Alice` resource logic cannot reproduce the `transferMessage` and check if it is present in the extra data of the tx.
This doesn't change if we iterate over all elements of the extra data (which was proposed yesterday as a solution, although it is computationally expensive). We still don't have the `"must_be_created"` information in the local view.
Could a solution be to simply put signed messages (the signature concatenated with the message) in the extra data and to verify those against the consumed resource's `owner` public key?
No.
The logic function of the consumed `10 A_Alice` resource can iterate over the signed messages present in the extra data until it finds one that verifies against the `owner` public key of the `10 A_Alice` resource, here Alice. However, the problem is that this resource logic check would be valid for every other signed message present in the extra data that Alice has signed. In particular, a malicious actor could just put an old signed message from Alice in the extra data and steal her tokens. Preventing this replay attack by putting a nonce in the message doesn't work because, again, the nonce isn't known in the local view of the consumed resource.
In a nutshell: we are trying to verify information from the global view in the local view, where this information is only partially present.
I think there are several distinct questions here. I will start with the first, which I think we can call linearity of constraint satisfaction.
Suppose that I authorize the consumption of my 5 ChrisKudos resource contingent on the creation of a 5 ChrisKudos resource owned by Michael, and also authorize the consumption of another 5 ChrisKudos resource contingent on the creation of a 5 ChrisKudos resource owned by Michael. I do not want it to be the case that a transaction which consumes both 5 ChrisKudos resources and creates only one 5 ChrisKudos resource owned by Michael (perhaps creating some other ChrisKudos resource owned by someone else to balance) is valid; i.e., I want to authorize the consumption of my resources contingent on the creation of new resources, where the new resources created are not used to satisfy any other constraints.
We can alter the resource machine to provide this property by default by simply requiring that the commitments and nullifiers passed as inputs to resource logics are disjoint subsets of the commitments and nullifiers present in the transaction (instead of just subsets, as they are currently). However, I think that this might be too restrictive. In particular, it seems to me like we might want to allow cases like:
| Inputs | Outputs |
|---|---|
| Counter1 { n = 3 } | CounterCombined { n = 7 } |
| Counter2 { n = 4 } | Counter1 { n = 0 } |
| CounterCombined { n = 0 } | Counter2 { n = 0 } |
In this example, for whatever reason, we want to allow the "count" to be transferred between different counters (perhaps some kind of statistical aggregation), and we want to check a balance-like property (the "net count" being preserved) in resource logics (we could technically use the RM balance check for this case, but we couldn't use it in a case with a more complex aggregation function). On the consumption side, we essentially want the resource logics for Counter1 and Counter2 to check that their count is included in the CounterCombined output resource, and in order to check this, they will each need to read all three input and all three output resources (since the relation being checked is that the sum of `n` for all outputs is equal to the sum of `n` for all inputs, plus perhaps some additional restrictions).
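As a sketch, the aggregation invariant these logics would check could look like the following. The dictionary resource shape is hypothetical, and a real logic would run inside a proof system rather than as plain Python.

```python
def counter_logic(consumed, created):
    """Check the 'net count' invariant from the example above: the sum of
    n over all created resources must equal the sum of n over all
    consumed resources. Resource representation is hypothetical."""
    return sum(r["n"] for r in consumed) == sum(r["n"] for r in created)
```

For the table above, the consumed side sums to 3 + 4 + 0 and the created side to 7 + 0 + 0, so the invariant holds.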
Disjoint subsets would not allow for this kind of usage, since each input and output resource could only be used as the input to a single logic. As this example gestures towards, I think the restriction we want is more like:
- a transaction includes an unordered set of actions
- each action includes a set of consumed and created resources
- the set of consumed and created resources in an action (or subsets thereof) is passed to the resource logics for all the consumed and created resources within that action, but is not visible to any resource logics in other actions; i.e., actions must be individually valid (but they do not need to be independently balanced)
I note the additional property that if in fact we have a transaction with actions A_0 ... A_n, such that A_0 ... A_i and A_{i+1} ... A_n are independently balanced, we can split the transaction into two transactions, the first with actions A_0 ... A_i and the second with actions A_{i+1} ... A_n, such that both transactions will be valid and balanced (i.e., the transaction is decomposable).
To return to our original example: in this case, we want the additional guarantee that the 5 ChrisKudos are consumed and the new 5 ChrisKudos created in a single action (one containing no other changes), which ensures that the newly created resource cannot be used to satisfy any constraints in other actions which the transaction might possibly contain. We can enforce this by checking in the consumption-side resource logic that exactly one resource is consumed and one is created in this action, and that they have the right properties. I don't think we need to give the actions identifiers known to the resource logics or anything like that (although perhaps we can compute action identifiers for usage elsewhere); resource logics just need this action separation to be enforced.
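A sketch of that consumption-side check, assuming an action exposes its nullifier and commitment sets (the field names `nfs` and `cms` follow the data structures discussed in this thread; the function name is hypothetical):

```python
def authorizes_exact_swap(action, my_nullifier, expected_cm):
    """Consumption-side resource logic check: this action must consume
    exactly this resource and create exactly the expected one, with no
    other changes, so the newly created resource cannot be used to
    satisfy constraints in any other action of the transaction."""
    return action["nfs"] == {my_nullifier} and action["cms"] == {expected_cm}
```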
There are a few open questions here, namely:
- Is extra data included in this action segmentation? I think it should be (and that's required for the "decomposability" as above).
- Does this rule out any transactions we want to be possible? I think not; transactions can always just be one big action.
Hmm, but shouldn't the resource logic be able to iterate over the extra data and check for an authorization where the `consumed` value in the signed message is equal to the commitment of the resource being consumed? That's what binds the authorization/signature to this specific resource, and also what prevents replay (since this specific resource can never be consumed again). Then the resource logic can see whatever was authorized (in the `must_be_created` field of that authorization) and check for the presence of those resources.
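A sketch of that scan, assuming the extra data holds (signature, message) pairs and `verify` is the signature verification function of whatever scheme is in use. All names and the message representation are hypothetical.

```python
def find_authorization(extra_data, my_commitment, owner_pk, created_cms, verify):
    """Scan the extra data for a signed message whose 'consumed' field
    equals this resource's own commitment. The binding to the commitment
    is what prevents replay, since this resource can only be consumed
    once. If a valid authorization is found, require that everything in
    its 'must_be_created' set is actually created."""
    for signature, message in extra_data:
        if message.get("consumed") != my_commitment:
            continue  # authorization is bound to some other resource
        if not verify(owner_pk, message, signature):
            continue  # not signed by this resource's owner
        return message["must_be_created"] <= set(created_cms)
    return False  # no authorization bound to this resource
```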
Does it make sense to see what you describe here as atomicity from the perspective of a resource logic?
As in: a TX should not be smaller than an action, but could contain multiple? In the latter case, it shouldn't matter if we just load them into the same evaluation context but do not preserve information about how they were segmented initially.
Yes, that sounds right. In particular, a resource logic should be indifferent to whether or not other actions are present in the transaction (which is enforced by not allowing the resource logic to see whether or not other actions are present).
I think we do not want to identify/order/specifically tie resource logics or authorizations to specific actions, so if this is what you mean by "how they were segmented initially", I would agree.
shouldn't the resource logic be able to iterate over the extra data and check for an authorization where the `consumed` value in the signed message is equal to the commitment of the resource being consumed? That's what binds the authorization/signature to this specific resource and also what prevents replay (since this specific resource can never be consumed again).
Ah, I think you are right! Thank you for pointing this out to me.
One approach we could take to tie the local and global views together would be to require the resource logic to load the constraints specified in `extra_data` first. Then, for the TX to be valid, all logics and all constraints from `extra_data` must be validated. Now that I think about it, we should use the global view until all of that is done. Basically, this is the CSP-solving part, and if we do it like this, it becomes compatible with the descriptions in the intent machine report. I will amend the stack model above.
How are resources represented? Does this model consider the shielded case as well?
You mention slots, but I don't understand where they come from. They don't belong to the stacks, from what I understand.
Is this model defined for a single logic or all logics from the tx?
So instead of checking that a certain resource satisfies a certain logic, we check that there is at least one resource satisfying the logic being processed now? If this model is defined for all logics from the tx being checked, it seems to me that the order in which the resources are loaded affects the results. E.g., suppose there are two resources R1 and R2 that satisfy logic L1, but only R1 also satisfies logic L2. If logic L1 is checked first and resource R1 is on top of the stack, then R1 would satisfy the constraints from L1 and go in the slot, and logic L2 would fail because only R2 is left. And if R2 was on top of R1, it would work well: R2 would satisfy L1 and R1 would satisfy L2. What am I missing here? What happens then? Is it where the search comes in? What is search in this context? How is the order in which resources and logics are loaded into the corresponding structures determined?
Yeah I think it would be a good idea to specify this
Should be a nullifier, I believe
So basically actions are initial transactions that preserve some context even when composed? Who is responsible for action separation? Are actions composable?
As I define them above, actions do not need to be preserved in transaction composition (a solver, for example, could move commitments/nullifiers around to different actions), but ultimately, in order for the final transaction to be valid, all actions within it must be individually valid. Which action a resource's commitment or nullifier is in also determines which data is available as public inputs to the proof, so proofs, once created, will also be bound to a specific action in this sense.
They are something like variables in the constraints, e.g. underspecified resources, but if a resource is already known, it should be tied to the slot.
Reading this just made me understand: all search should precede this, in the sense that a solver can come up with a valid TX in an arbitrary way, e.g. by using Z3 to find satisfying matches. Then this stack-based evaluation context should only need to verify the validity, or fail in case the TX is invalid.

As far as I understand now, `resource_logics` and additional constraints should be written in such a way that they are order-of-satisfaction invariant, or order information is provided.
That probably requires that constraints can only be satisfied uniquely; e.g., if there is a resource `R1` that could satisfy constraint `C1` or `C2` when going into specific slots of them, and there is a resource `R2` that could only satisfy constraint `C2`, then we lose order invariance.
We could introduce something like annotations in the sense that any TX coming from a solver would already provide positioning information of the resources for the slots of the constraints and the evaluation context would just apply the predicates implementing constraints and logics to the context in the order provided. Should one of them fail, the TX gets dropped as invalid.
If we don't want to do this, then we need to require constraints and logics to only ever be uniquely satisfiable, but I don't know if that's even possible if the parties defining them only have a view of their own data and not the whole TX it would be part of in the end.
The evaluation context as described above should still be useful to model something like brute-force search, where one could decide to keep going after failure to validate for a certain amount of time, but randomizing the stack each time.
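A minimal sketch of that brute-force variant, where each attempt re-randomizes the stack and runs a strictly order-sensitive pass. The names, the predicate-per-slot encoding, and the attempt budget are all hypothetical.

```python
import random

def strict_pass(slot_predicates, ordered_resources):
    """One order-sensitive pass: the i-th slot must be satisfied by the
    i-th resource in the current stack order."""
    return (len(slot_predicates) == len(ordered_resources)
            and all(p(r) for p, r in zip(slot_predicates, ordered_resources)))

def randomized_search(slot_predicates, resources, attempts=1000, seed=0):
    """Brute-force search: keep going after a failure to validate,
    randomizing the stack each time, until a satisfying position is
    found or the attempt budget runs out."""
    rng = random.Random(seed)
    pool = list(resources)
    for _ in range(attempts):
        rng.shuffle(pool)
        if strict_pass(slot_predicates, pool):
            return list(pool)
    return None  # failed to validate within the budget
```

This directly models the R1/R2 situation discussed above: an order-sensitive pass fails for one stack order but succeeds for the other, and randomization eventually finds the satisfying position.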
To make this concrete, I propose the following changes:
Current RM (from v2 report)
Transaction data structure:
- rts \subseteq \mathbb{F}_{rt} (roots of CMtree)
- cms \subseteq \mathbb{F}_{cm} (commitments for created resources)
- nfs \subseteq \mathbb{F}_{nf} (nullifiers for consumed resources)
- \Pi: \{\pi : ProofRecord\}, a set of proof records
- \Delta_{tx}: \mathbb{F}_{\Delta}
- extra: \{(k, (d, deletion\_criterion)) : k \in \mathbb{F}_{key}, d \subseteq \mathbb{F}_{d}\}
- \Phi: PREF, where PREF = TX \rightarrow [0, 1] (preference function)
Transaction validity:
A transaction is considered valid if the following statements hold:
- rts contains valid CMtree roots that are correct inputs for the membership proofs
- input resources have valid resource logic proofs and the compliance proofs associated with them
- output resources have valid resource logic proofs and the compliance proofs associated with them
- \Delta is computed correctly, and its opening is equal to the balancing value for that transaction
(I omit the information flow control predicate for now; let's think about that separately.)
New RM with actions (proposed)
Action data structure Action:
- cms \subseteq \mathbb{F}_{cm} (commitments for created resources)
- nfs \subseteq \mathbb{F}_{nf} (nullifiers for consumed resources)
- \Pi: \{\pi : ProofRecord\}, a set of proof records for the resources in this action
- extra: \{(k, (d, deletion\_criterion)) : k \in \mathbb{F}_{key}, d \subseteq \mathbb{F}_{d}\}
Action validity:
An action is considered valid (with respect to a transaction) if the following statements hold:
- rts (in the transaction) contains valid CMtree roots that are correct inputs for the membership proofs
- input resources (in the action) have valid resource logic proofs, where the inputs to the proofs are only the other commitments, nullifiers, and extra data in this action
- output resources (in the action) have valid resource logic proofs, where the inputs to the proofs are only the other commitments, nullifiers, and extra data in this action
Transaction data structure:
- rts \subseteq \mathbb{F}_{rt} (roots of CMtree)
- actions: \{Action\} (the set of actions in the transaction)
- \Pi: \{\pi : ProofRecord\}, a set of proof records
- \Delta_{tx}: \mathbb{F}_{\Delta}
- \Phi: PREF, where PREF = TX \rightarrow [0, 1] (preference function)
Transaction validity:
A transaction is considered valid if the following statements hold:
- All actions within the transaction are valid (as defined above)
- All required compliance proofs are included and valid
- \Delta is computed correctly, and its opening is equal to the balancing value for that transaction
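To make the proposed validity conditions concrete, here is a minimal sketch. The field names and the `logic_for` lookup are hypothetical, and compliance and membership proofs are elided; the point is only the action-local view passed to each resource logic.

```python
def action_valid(action, logic_for):
    """An action is valid iff the resource logic of every consumed and
    created resource accepts, seeing only this action's commitments,
    nullifiers, and extra data (the action-local view)."""
    view = {"cms": action["cms"], "nfs": action["nfs"], "extra": action["extra"]}
    return all(logic_for(tag)(view) for tag in action["nfs"] | action["cms"])

def transaction_valid(tx, logic_for, balancing_value):
    """A transaction is valid iff all actions are individually valid and
    the delta opens to the balancing value. Compliance and membership
    proof checks are elided in this sketch."""
    return (all(action_valid(a, logic_for) for a in tx["actions"])
            and tx["delta"] == balancing_value)
```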
Questions from my end:
- @vveiln Does this make sense to you? Do my new structures look correct? Is it clear to you how to adjust the public/private inputs and compliance proofs?
- @degregat Does this match your mental model, considering each action as an independent evaluation context / independent "stack frame" (which must be atomically satisfied)? If not, what's different?
I think this model should be good, as long as we require evaluation/validation of individual actions to either:

1. be invariant with respect to resource position in logics/constraints,
2. ship information about resource position in logics/constraints, or
3. accept the cost of potentially running evaluation over and over again until a satisfying position is found.
(See this post above for context.)
I think 2. would give us the most flexibility, but I donâ€™t know how much we would actually need it in practice and it increases the complexity of the RM a bit.
Variant 1. seems like it would place the work on guaranteeing invariance on logic authors, unless we can supply tooling/patterns that make this easy.
What do you mean by "resource position", exactly? In the data structures I propose, resources created and consumed are each part of a particular action, and it is with respect to that action that their resource logic is checked. It's up to the solvers / transaction function / etc. to form a transaction object with a valid set of actions; whatever search procedures they might engage in to do that are out of scope.
Yes, in the data structure. The question is: does an action consist of resource logics and potentially additional constraints which already have the resources positioned in ways that, e.g., a greedy evaluation strategy would always successfully validate? Or is evaluation order-invariant, i.e., if I change the order of the fields in the data structure, without changing any other property of the fields, would the logics and constraints still validate without changing the evaluation strategy?
Ah, I see; in my formulation, all fields in the data structure available to resource logics are sets, so yes, they are order-invariant.
I assume this set in the tx structure contains compliance and delta proofs? Can we move the compliance proofs to the actions and consequently check the compliance proof validity as a part of action validity?
Why do we want to keep roots in a transaction but check them in actions? Just to avoid duplication?