# ART Report: Resource Machine

The report contains the specification of the Anoma resource machine abstraction. It should contain all the necessary information needed to create a resource machine implementation (e.g., Taiga) with the desired properties (which should also be clearly defined), without being unnecessarily restrictive.


I would prefer to limit Galois fields to prime order (which is trivial) or 2^n, unless we have some reason for other prime powers? Arbitrary Galois field multiplication is not actually easy; even for 2^n we need to parametrize on a particular polynomial.


Here are my preliminary notes. Note that I'm a non-expert when it comes to crypto stuff, so things that someone familiar with the field might take for granted, I cannot. I'm mostly coming at this from a mathematical perspective. If I wanted to formalize a resource machine as a concrete mathematical structure, what would that look like? To that end, I'm fixating mostly on ambiguities and things that I can't make sense of, and giving some recommendations for clearer exposition along the way. I also just finished reading it, so my understanding of what a resource machine is supposed to be is undeveloped. Some of these might be obvious or pointless to others, but I hope this is helpful anyway.

### Page 1:

We say a field to refer to a component of some composite structure

You never define what a composite structure is. It seems to be a record with named fields.

We say a field to refer to a component of some composite structure S

Youβre not consistent about using the word βfieldβ instead of βcomponentβ, and I donβt see a good reason to use this terminology. Why not just always say βcomponentβ? You could then say βcomputable componentβ instead of βcomputed fieldβ, etc.

A computed field of S is a value that is not a field of S but that can be derived solely from the composite structure S fields.

I think what this is trying to say is that the computed fields are trivially derivable from other fields. This sentence isn't correct, as later computed fields are computed using hash functions being applied to other fields, so, not "solely"; you also need the hash function. If a hash is always involved, maybe you should move the mention of computed fields until after hashes are mentioned. You can then define computed fields as hashes of other fields. Also make sure to mention that computed fields can be created from other computed fields (e.g. r.Δ), as the default interpretation of just "field" on its own would be the non-computed fields.

The grammar of this sentence suggests that "fields" is a verb, an action that S is performing, which is obviously wrong. So this sentence should be rephrased even if you don't take my previous advice.
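For concreteness, the stored-vs-computed distinction could be illustrated with a toy sketch like the following (Python; the field names and the hash are entirely hypothetical, chosen only to mirror the report's r.Δ example):

```python
from dataclasses import dataclass

def toy_hash(a: int, b: int) -> int:
    """Stand-in for some hash function h in H (illustration only)."""
    return (31 * a + b) % 97

@dataclass
class Resource:
    kind: int      # a stored field of the composite structure
    quantity: int  # a stored field of the composite structure

def delta(r: Resource) -> int:
    """A computed field: not stored in the record, derived from the
    stored fields *and* a hash function (so not "solely" from fields)."""
    return toy_hash(r.kind, r.quantity)
```

The point is that `delta` is a function of the record, not a component of it, and that it needs `toy_hash` in addition to the stored fields.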

Let H be a class of hash function

Where does H live? Is it part of the data within the machine itself, or is this just defining a type of hash function?

such that the output of any function in H forms a finite field.

Is the range a field, or is the image a field? For example, can I have the outputs be, say, graphs, and then define a field over the finite set of graphs defined by the image of H? Are there restrictions on the input?

Iβm also not sure about having π» at all. Itβs never really used. Itβs mentioned as containing the hash functions in the βResource Computable Fieldsβ section, and thatβs it. I donβt see why we need a symbol when all it refers to is the hash functions listed in that section, and we donβt ever need to talk about the set of those functions.

and the corresponding to h finite field is denoted as F_h

Rephrase to "and the finite field corresponding to h is denoted F_h".

We use xs ("x-set") to refer to a set of objects of the same type as x.

Is x a type? Or is x a component, like in the previous paragraph? Taken literally, this says that x should be an element of some type, and "xs" is referring to that type itself, but that's awful. It would imply that, for example, "25s" means the natural numbers since 25 is a natural number.

I also don't think this line is that helpful, as this is just a naming convention, not a notation that needs to be explained. Where it appears, "xs" is just a name, it's declared to be a set, and it doesn't need further explanation, IMO. I spent too long trying to figure out what this line was saying when what it's saying is already obvious from the context where it's used.

### Page 2:

label : F_label

Many of the "F"s in the resource definition should be in blackboard bold (i.e. using \mathbb) but aren't.

Resource Computable Fields

Are these hash functions the same across different resources? Or are they global, part of the definition of the resource machine? Or are they universal, the same across different resource machines? This is essentially the same question as before: where does H live?

Address r.addr = cm = h_cm(r)

Shouldn't it be "Address r.addr = r.cm = h_cm(r)"? Also, why have separate bullets? Can't one just say cm is also called addr since it's used as an address? Also, the "r.addr" notation is never used, so this bullet can be dropped entirely.

When a resource is created, its commitment is added to an append-only global
commitment tree.

What is an "append-only global commitment tree"? Is it a Merkle tree? Where does it live? Is it part of the resource machine, or is it an external thing the machine has access to?

Delta R.Δ = h_Δ(R.kind, R.q)

Why is "R" capitalized here when it's lowercase on the other bullets?

Proving system

Where do proving systems live? Are they part of the data within the resource machine, so every resource logic uses the same proving system? Or does every resource logic have its own proving system?

Remark 1. The hash function used to derive Δ must have additive homomorphic properties […]

The algebra should be spelled out explicitly through equations. Something like

• h_Δ(K, q1) + h_Δ(K, q2) = h_Δ(K, q1 + q2)

It should also be made clear what it's a homomorphism of. If "h_Δ(K, 0) = 0" doesn't hold then it's not a monoid homomorphism.
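As a toy illustration (not a secure construction, and not necessarily the scheme the report intends), a map of the shape h_Δ(K, q) = q · g_K mod p satisfies both the addition law and h_Δ(K, 0) = 0:

```python
P = 101  # small prime modulus, illustration only

def gen(kind: int) -> int:
    """Hypothetical kind-dependent base; a real scheme would use a
    group element derived from the kind, not a bare integer."""
    return (7 * kind + 3) % P

def h_delta(kind: int, q: int) -> int:
    """Toy additively homomorphic map (NOT cryptographically secure):
    h_delta(K, q1) + h_delta(K, q2) == h_delta(K, q1 + q2)  (mod P),
    and h_delta(K, 0) == 0, so it is a monoid homomorphism in q."""
    return (q * gen(kind)) % P
```

Spelling the equations out this way would also make the "quantities stay independent" remark checkable: sums over distinct kinds use distinct bases.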

when deltas of two resources of different kinds are added, their quantities stay independent of each other

I don't know how to interpret "their quantities stay independent of each other"; it's not clear what that could mean.

Verify(vk, x, π) : PS.VerifyingKey × PS.Instance → F_B

Should this have "PS.Proof" as one of its input types? Also, what is B? That's not mentioned before this. Did you mean F_2, for the boolean field?

A proof record carries the components required to verify a proof.

This is the same data that's taken as input for Verify, but in a different order. That's quite messy. Why not make Verify : ProofRecord → F_b? That way, you don't need to permute a proof record to verify it.
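A sketch of that suggestion (Python; using integers to stand in for the PS.VerifyingKey / PS.Instance / PS.Proof types is purely an assumption for illustration):

```python
from dataclasses import dataclass

@dataclass
class ProofRecord:
    """Bundles exactly the components verification needs, in one order."""
    verifying_key: int  # stand-in for PS.VerifyingKey
    instance: int       # stand-in for PS.Instance
    proof: int          # stand-in for PS.Proof

def verify(pr: ProofRecord) -> bool:
    """Consumes the record directly, so no field permutation is needed.
    The body is a toy parity check standing in for real verification."""
    return (pr.verifying_key + pr.instance) % 2 == pr.proof % 2
```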

### Page 3:

A partial transaction is a mutable composite structure

What does "mutable" mean? I know what it means in the context of programming, but I think this is trying to handwave something that really needs to be described explicitly. I'll write about this more at the end of my note, but what actually is a resource machine, and how does its data evolve during execution? The answer to this question is given in disparate pieces implicitly throughout the paper but is never described explicitly. "Mutability" can't be made sense of without a more explicit description of the architecture. Where are the partial transactions stored so that they can be mutated, and what update function is being applied to perform the mutation? What does the global state transition look like; specifically, where/when does it call the mutation update function? I suppose all that doesn't need to be explained at this part of the paper, but it should be explained somewhere.

A partial transaction is a composite structure ptx = […]

Resources are defined in terms of a large product, but partial transactions are not, and a name for the type of partial transactions is never given. This is an inconsistency that should be easy to fix.

ans : {an : F_an}

This notation is very confusing and suggests that "an" is an element of the set "F_an", so every "an" is part of a different set indexed by itself. Very strange. I think this is trying to say that "ans" is a set of "F_an"s, and that any particular element of this set is called "an". Same thing with "cms : {cm : F_cm}" and "nfs : {nf : F_nf}". In these lines, "an", "cm", and "nf" don't seem to actually refer to a specific element, but are just part of the naming schemes for example elements, so I think their mention can be dropped entirely without causing confusion. It may be helpful to explain where the names come from. "cm" standing for "commitment" is obvious, but where does "an" come from?
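One way to remove the ambiguity is to declare the element types once and give the components plain set types; a hypothetical sketch (names are mine, not the report's):

```python
from dataclasses import dataclass

# Element type aliases, declared once: an anchor, a commitment, and a
# nullifier are elements of F_an, F_cm, F_nf respectively (modeled as
# ints here purely for illustration).
An = int
Cm = int
Nf = int

@dataclass
class Ptx:
    """Set-valued components; no per-element naming convention needs
    to be explained inside the type itself."""
    ans: set[An]
    cms: set[Cm]
    nfs: set[Nf]
```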

ans : {an : F_an} is a set of roots (from different epochs) of the global resource commitment tree.

Is the "global resource commitment tree" the same as the "append-only global commitment tree" mentioned earlier? It would probably be a good idea to make symbols for all the global data structures and use those symbols when referencing them.

Δ_ptx : F_Δ is computed from Δ of input and output resources (signed addition).

Isn't the Δ computed using h_Δ? Where does signed addition come in? And when and where are these requirements enforced? This is another place where the architecture is hinted at without being described explicitly. If Δ_ptx is a computed Δ, how is that? If F_Δ is just a field, then the partial transaction can't seemingly know where it came from. Δ_ptx is just a field element, and its being some kind of difference must be enforced by something outside the partial transaction, and it's unclear what's doing that and how.

Alternatively, is this actually a computed field? If so, then it should be separated from the other fields, as it looks like data of the ptx that has to be checked for correctness rather than something computed from the ptx.

is a post-ordering execution function

What does "post-ordering" mean?

q = 5 […] q = q_max = |F_q| − 1

F is not in blackboard bold. Also, where does this field come from? It doesn't look like it's used by any hash function. And why does F_5 have 6 elements, according to this?

Each partial transaction contains a set of resources to be consumed and a set of resources to be created.

This is not true according to the definition just given. Is this referring to the cms and nfs? Those are field elements, not resources. If I understand this properly, they are essentially names of resources that can be looked up to get the actual resources. However, the resources themselves are not part of the partial transaction.

For each resource consumed or created in a partial transaction, it is required to provide a proof that the logic of the resource evaluates to 1 given the partial transaction parameters

The "resource logic" terminology is really confusing. It should probably be renamed to "resource constraint", or something like that. It doesn't make much sense for a logic to evaluate to 1.

Δ proof (balance proof) […]

It would be good to write this out explicitly in more mathematical terms. It's not clear what Δ composition is, how they are extracted from resources, or how the check actually occurs. Is composition just a linear combination? The way it's written, it sounds like they are separate things (one is the value, while composition is something else that's not specified).

RM compliance proof […]

Being explicit in more mathematical terms about what this looks like would be good. There's a lot of moving parts here, and a detailed list of descriptions is necessary, I think. Also, what is "RM"? Is this supposed to be "ARM", or whatever other acronym we settle on?

It also requires explicit check of the presence of all other required proofs.

Where do the other required proofs come from? Are they part of the data of the resource machine? Are they the same thing as the "mandatory inputs and checks" mentioned later in the Resource Logic section?

### Page 4:

exec_ptx = F(exec_1, exec_2) […] Φ_ptx = G(Φ_1, Φ_2)

It's fine to have F and G as parameters for composition, but it would be good to have some guidance for what sort of things they would be in practice.

F is used elsewhere to represent fields. While I don't think this will cause confusion, it may be wise to change these variable names so there's no clash.

Remark 4. For parameters composed with the disjoint union operator, it has to be checked that those sets didn't have any elements in common, otherwise the partial transactions cannot be composed.

Why? Disjoint unions generally label the elements of the two sets so there isn't an overlap; why can't that be done here, and why does that not apply to ans? Why do the disjoint unions need to be disjoint to begin with? There's a lot of unexplained intuition packed into this remark.

Executable. The system used to represent and interpret executables

Are these related to earlier EXECs in the ptxs? If so, the earlier declaration that EXEC = PTX → PTX is wrong, as what's being described in this new section is a first-order representation that can be interpreted as a function. Instead, there should be something like a function of type eval : EXEC × PTX → PTX, and EXEC should actually be defined to be a kind of Nockma program. If I'm wrong, then a clarification that we're talking about different executables should be made.
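To make the distinction concrete, here is a toy sketch (Python, with an entirely hypothetical one-instruction language standing in for Nockma) of a first-order executable plus an explicit interpreter of type eval : EXEC × PTX → PTX:

```python
from dataclasses import dataclass

# A first-order executable representation: a list of instructions,
# NOT a native Python function of type Ptx -> Ptx.
BUMP = "bump"  # toy instruction: do one unit of work

@dataclass
class Ptx:
    steps_done: int = 0  # toy partial-transaction state

def eval_exec(exec_code: list[str], ptx: Ptx) -> Ptx:
    """The explicit interpreter: eval : EXEC x PTX -> PTX."""
    for instr in exec_code:
        if instr == BUMP:
            ptx = Ptx(ptx.steps_done + 1)
        else:
            raise ValueError(f"unknown instruction: {instr!r}")
    return ptx
```

The data (`exec_code`) and the semantics (`eval_exec`) are separated, which is exactly what a function-typed EXEC cannot express.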

READ_STORAGE(address : F_cm)

What does this actually return? An integer? An element of some other field? An actual resource? If that last one, is there a read function for nfs? If not, how does one get the resource associated with the hash generated by h_nf? That's needed for the private input resources of a Resource Logic.

If the value is not found, the operation should return an error.

What does this mean? Again, the resource machine is being described implicitly, in parts, but the whole is not clear.

RESOURCES_BY_INDEX(index_function)

What are index functions and where do they come from? What's an example of an index function? The only example of an index function mentioned in the paper appears on page 5, where it's stated that "an index lookup uses the value stored at the address as an index function". So, is an index function a function at all, or are you using a lookup table that contains addresses and calling that a function?

The executable might update partial transaction data fields,

What is an update? Again, this is implicitly describing the machine architecture without describing it explicitly. How are updates done? Presumably, this relates to the earlier comment on mutability. This is really a global architecture question. This section describes an update function, but where it lives (it's part of the machine, right?) and how/when it's used is less clear.

gcount […] glimit

Where are these variables stored? Are they part of the machine?

If gcount > glimit, the execution halts with an error indicating the gas limit was exceeded.

What are errors? What does it mean to halt? Error handling would be part of the machine definition, but, again, this is mentioned offhand without ever being described explicitly.

Recursive calls first increment the counter gcount, then recurse

Are executables even capable of recursion? It doesn't seem like it, based on the Nockma description later on. If a pattern reduction step is executed, but it's not recursion, just a step of evaluation, does that still increase gcount?
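One possible reading, sketched as a toy gas-metered evaluator (Python, with a hypothetical two-form expression language) in which every evaluation step, recursive or not, charges against the limit:

```python
class OutOfGas(Exception):
    """Raised when gcount would exceed glimit; execution halts."""
    pass

def eval_gas(expr, gas: int):
    """Evaluate expr = ('lit', n) or ('add', a, b).
    Every reduction step costs one unit of gas, whether or not it is
    a recursive call. Returns (value, gas_remaining)."""
    if gas <= 0:
        raise OutOfGas("gas limit exceeded")
    tag = expr[0]
    if tag == "lit":
        return expr[1], gas - 1
    if tag == "add":
        a, gas = eval_gas(expr[1], gas - 1)  # charge for this step, then recurse
        b, gas = eval_gas(expr[2], gas)
        return a + b, gas
    raise ValueError(f"unknown expression: {expr!r}")
```

Under this reading the answer to the question would be "yes, plain evaluation steps also increase gcount", but the report should say so explicitly.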

### Page 5:

6.2. Resource Logic. A resource logic is a predicate […]

A Resource Logic represents a predicate, but it's not literally a predicate (that is, a function from a set onto the booleans). We can extract a predicate from a resource logic, but it clearly has more structure.

The evaluation context of a Resource logic is a partial transaction.

This is contradicted by the later

• input resources corresponding to the elements of nfs
• output resource corresponding to the elements of cms

The partial transactions do not contain the resources themselves, only their names, so clearly the context must be more than just the partial transactions. Presumably, the context also includes the "global content-addressed storage" and "global resource nullifier set". These might be made accessible by functions in the programming language itself, but in section 6.1, READ_STORAGE is described as an IO function, and the beginning of 6.2 states that resource logics do not require IO, so I don't know.

Resource logics are customizable on both implementation of a resource machine and resource logic creation level. A concrete implementation of a resource machine can add more mandatory inputs and checks

This should really be spelled out explicitly by describing how information flows between/through the resource machine and the logic.

Here's my best guess for how this should work: I think that a resource logic is a Nockma program with access to variables representing the cms, nfs, and actual resource data associated with them from the relevant partial transaction. A PS.Proof is a ZKP of the trace of the program attesting to it outputting "true". And the resource machine has, as part of its data, a Nockma program that's run prior to the resource logic program, and the ZKP should also verify its trace ends in "true" as well.

That's my best guess for how this is supposed to work, but I'm not confident at all. Whatever the case is, it should be stated explicitly.

Verifiability. It must be possible to produce and verify a proof of type PS.Proof that given a certain set of inputs the resource logic output is true

"True" is not a possible output according to the Nockma specification given later on. Should this be "1", representing true?

nfs ⊂ nfs_ptx

Why can't the resource logic mention all the nullifiers or commitments? That is to say, why are these proper subsets instead of improper subsets?

custom inputs

How are custom inputs stored and read?

for each output resource, check that the corresponding cm value is derived according to the rules specified by the resource machine from its fields

What are the fields of a resource machine? That's never described. Or is this the cm resource fields? If the latter, do fields need to be mentioned here?

### Page 6:

Pattern

An explicit syntax for patterns would be helpful. It's clear that the syntax for patterns is different from the syntax of nouns, and some patterns don't have evaluation rules (for example, "?[a b c]"), and it never says what to do in such a case.

finite field F_p of order p

This contradicts the previous notation where non-blackboard bold F represents fields that are outputs of hash functions. I would recommend making this F, and all subsequent Fs in this section, blackboard bold.

division operation (floor division)

What happens when dividing by 0?

addition operation F_p × F_p → F_p × F_2

F should be blackboard bold. Same for the remaining operations mentioning F_2.

the conversion function would return a flag (of type F_2)

F should be blackboard bold.

### Examples section

The examples provided are generally understandable, particularly in their depiction of transactions. However, they fall short in offering deeper insights beyond this basic case. It mentions green arrows indicating a state change. What state is being changed? The resource machine's? Where is this state being stored so that it can be changed? Presumably, there should be global something-or-others involved, but they go unmentioned. The composition is also unclear: it is just handwaved away with unspecified Fs and Gs, leading to no additional insight. It's still unclear what the execs are, for instance, since none are described at all. None of the examples describe specific resource logics, meaning they fail to give examples of things like custom inputs, which are still unclear.

### Discussion: What even is a resource machine?

Nowhere in the paper is a resource machine actually defined. Frequently, "global" data structures are mentioned, but it's never made clear if they are part of the state of the machine or are some external data that the machine can read from. Because of this, what's in it is ambiguous. Here's a reasonable interpretation:

A resource machine contains the following data:

• A finite set of hash functions, with at least the specific h_cm, h_nf, h_kind, and h_Δ
• A Merkle tree for commitments
• A table with F_cm entries containing resources
• A table with F_nf entries containing resources
• An integer gas counter
• A set of partial transactions
• A Nockma program representing mandatory constraints

such that, at each step, it takes an input from an external clock, and then executes each of its partial transactions. This step is the least clear. There should be some state transition function of some concrete form that does this. It seems like most of the paper is trying to describe the details of this, but the connective tissue that ties this all together isn't really there. What I'd like to know is, given a collection of resources and a machine state, how do we get to the next machine state? A timeline describing the order in which every transformation to the state is done would be helpful.

I suspect that some of the things in the previous bullet list aren't part of the machine, but external things the machine can read from (and I may have described some of these incorrectly), but this hopefully demonstrates ambiguity in the current paper. A more clearly articulated architecture described in terms of an explicit data structure would be extremely helpful.
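To make the request concrete, the sort of artifact I have in mind is an explicit state record plus an explicit transition function, even a toy one (Python; every name and choice here is hypothetical, just one possible concretization of the bullet list):

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    """A hypothetical concretization of the machine state."""
    commitments: set = field(default_factory=set)  # stands in for the Merkle tree
    nullifiers: set = field(default_factory=set)   # global nullifier set
    storage: dict = field(default_factory=dict)    # content-addressed storage

@dataclass
class Tx:
    """A toy (already-validated) transaction."""
    new_cms: set
    new_nfs: set

def step(m: Machine, tx: Tx) -> Machine:
    """An explicit state-transition function: given a machine state and
    a transaction, produce the next state, rejecting double spends."""
    if tx.new_nfs & m.nullifiers:
        raise ValueError("nullifier already present (double spend)")
    return Machine(
        commitments=m.commitments | tx.new_cms,
        nullifiers=m.nullifiers | tx.new_nfs,
        storage=dict(m.storage),
    )
```

Even if the real machine looks nothing like this, writing down *some* such record and transition function would resolve most of the "where does X live?" questions above.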


The connective tissue is sort of out of scope of the paper. I'd go as far as to say it prescribes too much flow which isn't necessarily there. For example, I believe the `EXEC` part of the partial transaction is likely to be removed, as it's subsumed by `transaction candidates`. This is because the document is describing an abstract Taiga (the resource machine). The more connective flow of the system can be seen in

https://research.anoma.net/t/graphing-anoma-agents-v3/341/2

This is a rough graph of how the system flows as a diagram. Transaction candidates are created either by the user or by solvers and submitted to the mempool, which ends up spawning processes that start up the execution engine. From here the transaction candidates (Nockma code) evaluate and can read data stored in shards. At the end of execution they will have new state to be written to the shards. Ordering is decided by consensus, which can be read about here:

https://specs.anoma.net/alpha/main/operational-architecture/typhon.html

Thanks for the reviews so far (particularly @AHart - this is fantastic feedback). A few comments from my end (cc @vveiln):

Execution flow

As @mariari mentions, I think we can remove `EXEC` and stipulate that the Nockma code must simply return a full partial transaction structure - and all we need to define is the verification function (no need to define mutation functions) over `PTX`.

In that case, we might also want to clarify execution flow. Perhaps we can have a parameter which is passed to the Nockma code - something like:

```haskell
data ExecTime = PreOrdering | PostOrdering

data ExecInfo = ExecInfo {
  execTime :: ExecTime,
  execWho  :: ExternalIdentity
}
```

In both pre-ordering and post-ordering execution, the Nockma code must return a partial transaction, but the partial transaction returned in pre-ordering execution need not be balanced - this is what the solver would then examine in an attempt to solve (and compose the involved partial transactions, then forward their composition).

The question naturally emerges of whether and how to compose the Nockma code segments of two different partial transactions (or whatever we call the message that contains the Nockma code which returns a partial transaction). For now, let's not worry about this, and say that the naive solver simply takes partial transaction codes A and B, calculates A, calculates B, and if they can be matched (and balanced) as C, returns new Nockma code that simply returns C. In other words, we can worry about hybrid post-ordering matching later.
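The naive solver above can be sketched in a few lines (Python; collapsing "code" to the ptx it returns, and reducing "matched and balanced" to a zero delta, are both simplifying assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class Ptx:
    """Toy stand-in for a partial transaction: only the delta matters."""
    delta: int

def run_code(code):
    """'Calculate A': in this sketch the code just is the ptx it returns."""
    return code

def naive_solve(code_a: Ptx, code_b: Ptx):
    """Naive solver: calculate A and B, compose them, and only if the
    composition balances (delta == 0) return (code returning) C."""
    a, b = run_code(code_a), run_code(code_b)
    c = Ptx(a.delta + b.delta)
    return c if c.delta == 0 else None
```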

One problem with passing parameters to the Nockma code like this is that the parameter values aren't verifiable ex post, so they cannot be checked by resource logics. We might also want to have "sentinel resources" which contain these values if we want resource logics to be able to check them.

Distributed partial evaluation

Relatedly, I think it may be helpful to visualize and articulate how this resource machine, when run in a distributed network, implements what I think can be understood as a form of distributed partial evaluation.

• resources <> memory states
• resource kinds <> memory cells
• partial transactions <> evaluation states

For example (diagram could be greatly expanded), understanding `#A`, `#B`, `#D`, and `#E` as kinds of resources previously created, `#C` and `#F` as kinds of ephemeral resources, and `res` as some result, the following three partial transactions perform a distributed reduce operation:

Further input here from the PLT folks (cc @jonathan) would probably be helpful - I'm sure I'm butchering the language, but I think the analogy holds, and it's worth making explicit.

Resource machine

I agree with @AHart that it would be helpful to more explicitly define a resource machine. In this case, the machine is in fact deterministic and need only process one transaction at a time, executing the Nockma code, validating the returned partial transaction, and either applying the updates (should the partial transaction be valid and balanced) or terminating without any changes (should the Nockma code run out of gas or the partial transaction be either invalid or unbalanced). Anything about sharding should be out of scope, though - that is a detail (feature) of Typhon's execution engine, which preserves serializability - meaning that we need not care here.

Might have more thoughts tomorrow.


@AHart seems to have covered almost all my comments (and much more!), so I have just three:

Ptx Arity
Does the ARM spec put any restrictions on the numbers of input and output resources? Is it theoretically allowed for ARM to have a different number of inputs and outputs? E.g., 2 inputs and 3 outputs? Would this break anything?

Page 3
Section 5.1:

A RM compliance proof is required to…

This is the first time the abbreviation "RM" occurs in the document. I think it stands for Resource Machine, but since this abbreviation is not encountered anywhere except here, it's probably better to just write that explicitly.

Page 5
Section 6.2:

Resource logics take as input a subset of resources created and consumed in the partial transaction

A subset of what? Resource logic takes all resources comprising a partial transaction, I think?

Resource Logic Public Inputs:

• nfs ⊂ nfs_ptx
• cms ⊂ cms_ptx
• Custom inputs

Resource logic also needs an indication of which resource it is the logic of. This can be assumed to live under custom inputs, but we may want to mention it explicitly (we don't have to define the exact mechanism of such indication in the spec).


Generally ptxs don't have size bounds anymore, and right now I can't see how having a different number of input and output resources would break anything, but it is definitely a good idea to think it through.

Not anymore

Why do you think it would be helpful? What is the context around it?

Not anymore

How does it work then? This is not immediately clear from the spec, as far as I can tell, but maybe I missed something. What determines which resources are passed as inputs?

What determines which resources are passed as inputs?

I have the same confusion. What's the context that the resource logic can access in a ptx?
I tried to let the resource logic access only the owned and concerned resources but failed. Maybe you have a good solution now?

The one we had in mind is to use a similar mechanism to how the "owner" resource is currently determined in Taiga. Would there be a problem with that (forgetting about efficiency)?

The efficiency is not a problem.
If the resource logic wants to access a potential target resource in circuits, the circuit must load all the resources in the ptx and search for the target.
I didn't figure out a good solution if the circuit only loads a subset of the resources.

Ah, true. So, what if we just pass only some resources from the ptx in and apply the checks to them only? It won't support all possible checks (namely, we can't check that the partial transaction doesn't include something, or that some statement holds for all resources in a ptx), but it seems that it is a good enough subset, at least for now (discussed in more detail here: Forall / negative checks in resource logics). If logics contain statements like "there must be a resource in the ptx that satisfies x", whoever creates the proof will have to provide all the sufficient resources from the ptx to have the logic return true.

If the resource logic doesn't treat the resources equally, I'm unsure if it would leak some relation information among resources.
Besides, for now, function privacy is under the assumption that all the resource logics/circuits have the same layout.

For example, we could pass a hash of all of the included resource commitments (in a mini Merkle tree), and then the resource logic could check inclusion of (and properties of) specific ones, without leaking any info, right?

The point is that the resource logic should also publicize the nullifiers or note commitments in the subset (the specific ones).

Not sure that I follow. We do want function privacy to hide which other resource commitments were checked by a specific resource logic, but the public inputs should give the resource logic the required information to check included commitments (e.g. a Merkle tree of the commitments in the transaction).
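A toy sketch of that idea (Python; `toy_hash` is a stand-in, not a real circuit-friendly hash, and the tree shape is one arbitrary choice): the single public input is the root of a mini Merkle tree over the ptx's commitments, and the logic privately checks inclusion of specific commitments via an authentication path, without revealing which ones it checked.

```python
def toy_hash(a: int, b: int) -> int:
    """Illustrative hash; NOT cryptographically secure."""
    return (31 * a + b + 7) % 1000003

def merkle_root(leaves: list[int]) -> int:
    """Root of a mini Merkle tree over the ptx's commitments."""
    assert leaves, "need at least one leaf"
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]  # duplicate the last node
        level = [toy_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_path(leaf: int, path: list[tuple[bool, int]], root: int) -> bool:
    """Check an authentication path: entries are (is_left, sibling),
    where is_left means the running hash is the left input."""
    h = leaf
    for is_left, sibling in path:
        h = toy_hash(h, sibling) if is_left else toy_hash(sibling, h)
    return h == root
```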

namely, we can't check that the partial transaction doesn't include something or some statement holds for all resources in a ptx

That makes sense if we are under this condition.
So we won't support the whitelist/blacklist examples, in which we need to check all resources? I remember it was a classic example we wanted to achieve. And it might be reasonable for a user to refuse to receive a specified token, or tokens from some senders.

The resource logic needs to publicize some information to help check that the resources passed to the logic are actually from the ptx.

I agree we can use hash/merkle tree to encode all the resources in the ptx, and the resource logic can decode the hash/merkle tree and use the specific ones. In this way, only the hash/merkle_root goes to the public inputs.

However, circuits cannot decode/accept a variable number of resources. If the number of resources passed to the logic changes, the circuit and resource kind change. In general, we have the resource and its logic before the number of resources in the ptx is determined.

We won't support this for now. In order to support this and preserve privacy, we need a slightly different implementation (see the discussions here) - curious how complicated you think this would be. If it's easy we can scope it in to the next version of the resource machine report.