Anoma Resource Machine Explainer in Long Form

Ideally, one would not have to be a rocket computer scientist to understand the most important aspects of Anoma’s resource machine instantiations.[1] In this spirit, the post starts from a fairly simple but general notion of state, namely just a bunch of X s (a finite set) whose nature will be “unveiled” over the course of the post; without knowing the precise nature of the X s, we have already covered a first aspect of how state is organized in any resource machine: it is a bunch of X s.[2]

As it turns out, the term “resource” will only surface relatively late, roughly as an especially rich sort of X s that allows for arbitrary state transitions with maximal information flow control. The main goal of the post is to develop a list of concepts that conveys the main ideas of the Resource Machine; the list is arranged so that its initial items are fairly simple while complexity and expressiveness grow as we progress towards the end.

TL;DR

There is a fairly natural progression of resource machine levels, adding expressivity and complexity as we go. We can proceed one step at a time.

Note

The author has invented some new terminology for this post, mainly to avoid confusion with pre-existing terminology.

A PDF version is here (422.0 KB).

Aside on ADTs 1/2

Recap: Abstract Datatypes, in particular FinSet

Throughout this post, we use ideas related to abstract datatypes. The full details concerning abstract datatypes (or ADTs, for short) can be skipped on a first reading; just in case, here is the gist of the ADT of (finite) sets.[3]


  • new is an operation that creates a new set, namely some representation of the empty set {}

  • add is an operation that

    • takes
      • (some representation of) a set S paired with
      • (some representation of) a value v and
      • returns (some representation of) the set S ∪ {v}
  • remove is an operation that

    • takes
      • (some representation of) a set S and
      • (some representation of) a value v and
      • returns (some representation of) the set S \ {v}
  • contains is an operation that

    • takes
      • (some representation of) a set S and
      • (some representation of) a value v and
      • returns true iff v ∈ S

These operations are subject to axioms, so that all is correct, e.g., that no element is contained in the empty set and adding an element is idempotent.
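As a minimal sketch (in Python, with representation and names chosen here for illustration), the FinSet ADT could look like this:

```python
# A minimal sketch of the FinSet ADT, using a frozenset as the
# representation; the names new, add, remove, contains mirror the
# operations listed above.

def new():
    """Create (a representation of) the empty set {}."""
    return frozenset()

def add(s, v):
    """Return (a representation of) S ∪ {v}."""
    return s | {v}

def remove(s, v):
    """Return (a representation of) S \\ {v}."""
    return s - {v}

def contains(s, v):
    """Return True iff v ∈ S."""
    return v in s

# Two of the axioms mentioned above, checked on examples:
assert not contains(new(), 42)                  # nothing is in the empty set
assert add(add(new(), 1), 1) == add(new(), 1)   # adding is idempotent
```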


Synopsis

In first approximation, state is a bunch of X s (a finite set) and changing state amounts to removing a bunch of X s and adding another bunch of X s. The main question is how we want to organize and describe state change efficiently. In particular, we want to avoid an explicit enumeration of all state pairs that are considered to be permissible state transitions.

Covering the main ideas of how state change is described conceptually is the main goal of this post; at the end we give an outlook on possible continuations for writing about more aspects of Anoma Resource Machine instantiations, in particular on the topic of information flow control. The main focus of this post is to give a primer to understand the underpinnings of applications, leaving aside matters of shielding for the sake of brevity.

Simple state: “just” a finite set

The first definition—a pretty boring/uncontroversial one (hopefully)—is that

a simple state is a finite set of persistables

… where a persistable[4] can be taken to be any of the following:

  • if you like programming, a data value of a fixed datatype
  • if you like type theory, a term of a fixed type
  • if you like sets, an element of a fixed set
  • if you like it simple and concrete, a natural number.

We only make use of the following basic operations on finite sets of persistables:

  • creating the empty set (which is finite),
  • adding a persistable to a finite set,
  • removing a persistable from a finite set, and, last, but not least,
  • checking whether a persistable is an element of a finite set.

Having fixed what state can be (at any given point in time), let us now turn to how a state x may evolve in a (possibly infinite) sequence of states.

x_0 \to x_1 \to x_2 \to \cdots \to x_n \to \ldots

Creation and consumption of persistables in simple state sequences

We first define formally what it means to create and consume a persistable. In a sequence of simple states,

x_0 \to x_1 \to x_2 \to \cdots \to x_n \to \ldots

a persistable v is

  • created at step n>0 if it belongs to x_n and not to x_{n-1} —in symbols v \in x_n \setminus x_{n-1}
  • consumed at step n>0 if it belongs to x_{n-1} and not to x_{n} —in symbols v \in x_{n-1} \setminus x_{n}

We are only interested in sequences of simple states with the following three properties, for every persistable:

  1. there is at most one step at which it is created, called the creation step of the persistable,
  2. there is at most one step at which it is consumed, called the consumption step of the persistable,
  3. either there is no consumption step for the persistable, or the consumption step is (strictly) larger than the creation step.

We may choose to give state sequences that heed the above rules a fancy name, but let us pretend that these are the only regular sequences of simple states—so, let us call them regular for the purposes of this post (until we come up with an objectively better name to give the overworked “regular” a break). The above properties 1.–3. are the three laws of regularity.
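The three laws of regularity can be checked mechanically for a finite prefix of a state sequence. The following sketch assumes the sequence starts from the empty state, so every consumed persistable was created at an earlier step (the function name and shape are illustrative):

```python
# Check the three laws of regularity for a finite prefix of a
# state sequence; each state is a set of persistables.
# Assumption: the sequence starts from the empty state.

def is_regular(states):
    created, consumed = {}, {}
    for n in range(1, len(states)):
        prev, curr = states[n - 1], states[n]
        for v in curr - prev:          # v is created at step n
            if v in created:           # law 1: at most one creation step
                return False
            created[v] = n
        for v in prev - curr:          # v is consumed at step n
            if v in consumed:          # law 2: at most one consumption step
                return False
            consumed[v] = n
    # law 3: the consumption step (if any) is strictly after the creation step
    return all(v in created and created[v] < consumed[v] for v in consumed)

assert is_regular([set(), {1}, {1, 2}, {2}])      # 1 created, then consumed
assert not is_regular([set(), {1}, set(), {1}])   # 1 created twice
```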

Now, let us turn to a related important question:

How do we describe a state transition, without talking about all the persistables?

This question will be addressed repeatedly in between pairs of double lines.



What could a transaction description look like?

So far a transaction description can be given by

  • the set of consumed persistables and
  • the set of created persistables.


Accordingly, for any state transition x_{n-1} \to x_n, we define the Sets of Change (SoC) to be the pair \Bigl\langle (x_{n-1}\setminus x_n), (x_n \setminus x_{n-1})\Bigr\rangle; thus, the first component of the SoC contains the consumed persistables, and the second component contains the created persistables.
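The SoC of a transition can be computed directly from the two states, as in this small sketch (the function name is chosen here for illustration):

```python
# Compute the Sets of Change (SoC) of a state transition
# x_prev -> x_next, as defined above.

def soc(x_prev, x_next):
    consumed = x_prev - x_next   # first component: consumed persistables
    created = x_next - x_prev    # second component: created persistables
    return (consumed, created)

# Consuming 1 and creating 3, while 2 stays untouched:
assert soc({1, 2}, {2, 3}) == ({1}, {3})
```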



What does the transaction description look like?

It is just a SoC.



As a summary of what we have covered so far, we have

  • a notion of state
  • a notion of state transition
  • a way to describe state change—namely a SoC.

In fact, most pages of the resource machine specs concern at least one aspect of the above three, directly or indirectly.

Aside on ADTs 2/2

An ADT for simple state and regular sequences

There is a natural variation of the ADT of finite sets such that we can only create regular simple state sequences from an empty set by repeatedly applying the addition and removal operations. Let us quickly sketch the main points, calling the operations create and consume instead of add and remove, because they behave differently.

Empty

Create the empty simple state. This is the “same” operation as for FinSet.

Create

Add the persistable to the simple state if it was never added before; otherwise, return the state unchanged.

Consume

Remove the persistable from the simple state if it was created before and has not yet been consumed; otherwise, return the simple state unchanged.

Implementing the simple state ADT using commitments and nullifiers

We can use a pair of finite sets to implement the simple state ADT; the two sets are accumulator and nullifier set, respectively.[5]

Empty

Create the pair of an empty accumulator and an empty nullifier set.

Create

If the given persistable is not a member of the accumulator, add it to the accumulator; otherwise do nothing, i.e., return the pair unchanged.

Consume

If the persistable is not a member of the accumulator or is already an element of the nullifier set, do nothing, i.e., return the pair unchanged; otherwise, add the persistable to the nullifier set.
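A minimal sketch of this implementation in Python (all names are illustrative): the accumulator is append-only, consumption is recorded by adding to the nullifier set, and the represented simple state is the accumulator minus the nullifier set.

```python
# The simple state ADT implemented as a pair of sets: an (append-only)
# accumulator and a nullifier set, as described above.

def empty():
    return (frozenset(), frozenset())   # (accumulator, nullifier set)

def create(state, v):
    acc, nul = state
    if v in acc:                        # was added before: do nothing
        return state
    return (acc | {v}, nul)

def consume(state, v):
    acc, nul = state
    if v not in acc or v in nul:        # never created, or already consumed
        return state
    return (acc, nul | {v})

def members(state):
    """The simple state represented by the pair."""
    acc, nul = state
    return acc - nul

s = create(empty(), 1)
s = consume(s, 1)
s = create(s, 1)          # re-creation has no effect: 1 was added before
assert members(s) == frozenset()
```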

Additional state transition constraints 1/3: SoC-based

We may want to complement the three regularity laws with additional conditions that restrict the possible state transitions—be it to encode business logic, to ensure information flow control, or to enforce any other functional or non-functional property. Note that these are additional constraints besides the three laws of regularity.

Now, how do we best bring these additional constraints on state transitions into the picture? One fundamental principle is that they should be expressible in terms of the SoC (as defined above). In other words, given two state transitions x\to y and x'\to y' that have the same SoC, a transition constraint will either allow both or forbid both transitions; such a transition constraint cannot be used to distinguish between two state transitions with the same SoC. We call such transition constraints SoC-based.



What could a transaction description look like? (i/iii)

It could be

  • a SoC and
  • a SoC-based predicate

Alas, this is not how it is done, but it is a good start.



So, it is high time to explain why we do not “just” add a SoC-based predicate to a transaction description! The first reason is that if we have several transaction descriptions (that are not directly interacting with each other), we want to be able to batch them together.



What could a transaction description look like? (ii/iii)

It could be

  • a finite set of SoCs where
  • each member SoC comes with a predicate.

This is “almost” how it is done; conceptually it is about right.[6]



At this point, we come to a highly non-trivial design decision: each persistable comes equipped with its own logic, which imposes transition conditions for every transition that “touches” it. The precursors[7] to these logics were already described in Anoma’s first white paper.

This is important to remember.

So, we have reached the part of this post that is no longer bedtime reading material.



What could a transaction description look like? (iii/iii)

It could be just

  • a SoC and
  • a finite set of SoCs

because persistables carry their logics, which impose transition constraints.

This is extremely close to how it is actually defined. However, this would force us to “invent” persistables to express transition conditions…



… so, we now come to the final piece of the puzzle of how state transition constraints are organized, which dispenses with the need to “invent” persistables to carry constraints: we complement persistables with a set of constraint carriers that have their own logics. Moreover, the sets of change are augmented with such constraint carriers. We shall spell out the details in two steps: first we cover logics of persistables; then we add constraint carriers for ephemeral constraints and reach the next stepping stone in our list of concepts—completing the first level!

Additional state transition constraints 2/3: logics of persistables

For each persistable v, we associate a predicate P_v, called the logic of \boldsymbol v; note that the word ‘logic’ here is used as in Wikipedia’s article on business logic (and not in the sense of mathematical logic). Given a pair of disjoint finite sets of persistables \langle U, W \rangle that contain the persistable v, its logic P_v will assign a truth value to the pair \langle U, W \rangle. More formally, we consider the logic P_v to be a Boolean valued partial function that expresses the logic of the persistable v; it is defined for every disjoint pair of sets of persistables whose union contains the persistable v, i.e., any \langle U, W \rangle such that v \in U \cup W and U \cap W = \varnothing hold.

The rough idea of how these logics operate is that each logic represents a check on (some part of) the SoC. The precise definition of allowed transition is somewhat technical, but we give it as it may help to understand the next “upgrade” of transition descriptions.

A state transition X \to Y is allowed if

  • there are k pairs of disjoint simple states \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that
    • the first components are a partition of all consumed persistables, i.e., \bigcup_{i=1}^k U_i = X \setminus Y and U_i \cap U_j = \varnothing if i\neq j;
    • the second components are a partition of the created persistables, i.e., \bigcup_{i=1}^k W_i = Y \setminus X and W_i \cap W_j = \varnothing if i\neq j;
    • the predicate P_v of every persistable v involved in some pair \langle U_i, W_i\rangle of the partition is satisfied, i.e., for each i and each v \in U_i \cup W_i, it is the case that P_v(\langle U_i, W_i\rangle) is true.

In one phrase, a transition X \to Y is allowed if its SoC can be partitioned into pairs of disjoint sets of persistables such that the logic of every involved persistable is satisfied for the pair of the partition in which it is contained. One important consequence of this definition is that we can batch together several consecutive but independent allowed state transitions into a single “parallel” transition such that the logics of the persistables are satisfied for the relevant portion of the SoC.
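A sketch of this check in Python: rather than searching over all partitions, the function below verifies that a given list of pairs witnesses an allowed transition X \to Y. Logics are passed in as a mapping from persistables to predicates; all names are illustrative.

```python
from itertools import combinations

def witnesses_allowed(X, Y, pairs, logic):
    """Check that pairs [(U_1, W_1), ...] witnesses X -> Y as allowed.

    logic[v] is the predicate P_v of persistable v, applied to a pair (U, W).
    """
    Us = [U for U, _ in pairs]
    Ws = [W for _, W in pairs]
    # the U_i must partition the consumed persistables X \ Y ...
    if set().union(*Us) != X - Y:
        return False
    # ... and the W_i must partition the created persistables Y \ X
    if set().union(*Ws) != Y - X:
        return False
    # pairwise disjointness of the partition components
    for a, b in combinations(range(len(pairs)), 2):
        if Us[a] & Us[b] or Ws[a] & Ws[b]:
            return False
    # the logic of every involved persistable holds on its pair
    return all(logic[v]((U, W)) for U, W in pairs for v in U | W)

# Example: a single pair, where every logic accepts anything.
always = lambda pair: True
assert witnesses_allowed({1}, {2}, [({1}, {2})], {1: always, 2: always})
```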



What does the transaction description look like?

It consists of

  • a list of pairs of disjoint simple states \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that (for all list indices i,j \in \{1,\dotsc,k\}),
    • for each persistable v \in U_i \cup W_i, it is the case that its logic is satisfied for \langle U_i, W_i\rangle, i.e., we must have the equation P_v(\langle U_i, W_i\rangle) = \mathtt{true}
    • U_i \cap U_j = \varnothing and W_i \cap W_j = \varnothing if i\neq j
    • U_i \cap W_j = \varnothing

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.



Additional transaction constraints 3/3: ephemeral constraints

Finally, we come to ephemeral constraints that users may want to impose on transactions. One may expect that we simply add these constraints somewhere, e.g., as additional predicates attached to each logic scope. However, we instead re-use the formalism of logics, in three steps:

  1. We introduce a set of ephemerals, disjoint from persistables.
  2. We extend logics to be able to have ephemerals as part of their input.
  3. We also equip ephemerals with logics.

Elements of the union of ephemerals and persistables are called pieces.[8]



What does the transaction description look like?

It consists of

  • a list of pairs of disjoint sets of pieces \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that (for all list indices i,j \in \{1,\dotsc,k\}),
    • for each piece v \in U_i \cup W_i, it is the case that its logic is satisfied for \langle U_i, W_i\rangle, i.e., we must have the equation P_v(\langle U_i, W_i\rangle) = \mathtt{true}
    • U_i \cap U_j = \varnothing and W_i \cap W_j = \varnothing if i\neq j
    • U_i \cap W_j = \varnothing

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.



In short, a transaction description is a list of logic scopes. This concludes the first level of the resource machine concepts. The next level comprises quantities and balancing, different kinds of pieces, and application data.

Quantities and balancing of transaction descriptions

  • As a design decision,[9] each piece is assigned a quantity, e.g., a natural number (less than a fixed prime); quantity could—in full generality—be an element of an arbitrary finite commutative (semi-)group, written additively.

  • The quantity preservation principle says that quantity must be preserved, i.e., in a transaction, the sum of quantities of all consumed pieces must equal the sum of quantities of the created pieces.

  • Balancing concerns the whole of the transaction description (and thus is not local to any of the scopes).

We write q_v for the quantity of a piece v.



What does the transaction description look like?

It consists of

  • a list of logic scopes \langle U_i, W_i\rangle_{i=1}^k
  • we have the equation \sum_{u \in U} q_u = \sum_{w \in W} q_w
    where U = \bigcup_{i=1}^k U_i and W = \bigcup_{i=1}^k W_i.

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.
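As a sketch, the balancing check ranges over the whole list of logic scopes: the total consumed quantity must equal the total created quantity. Here q maps each piece to its quantity (the names are illustrative):

```python
# Quantity preservation over a whole transaction description:
# the sum of consumed quantities equals the sum of created quantities.

def balanced(scopes, q):
    U = set().union(*(Ui for Ui, _ in scopes))   # all consumed pieces
    W = set().union(*(Wi for _, Wi in scopes))   # all created pieces
    return sum(q[u] for u in U) == sum(q[w] for w in W)

q = {"a": 2, "b": 1, "c": 1}
assert balanced([({"a"}, {"b", "c"})], q)        # 2 == 1 + 1
assert not balanced([({"a"}, {"b"})], q)         # 2 != 1
```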



Grouping pieces of the same kind together

We now assume that each piece moreover comes equipped with a kind; the set of pieces of the same kind forms a club (for the purposes of this post), and all members of a club must have the same logic. Now, balancing is adapted such that the sums of the quantities of consumed and created pieces of every club must be equal. The concept of kind is essential to define and write applications. Two applications either work with disjoint clubs or communicate through pieces of a shared sub-club of “VIPs”.



What does the transaction description look like?

It consists of

  • a list of logic scopes \langle U_i, W_i\rangle_{i=1}^k
  • for each club C, we have the equation \sum_{u \in U \cap C} q_u = \sum_{w \in W \cap C} q_w
    where U = \bigcup_{i=1}^k U_i and W = \bigcup_{i=1}^k W_i.
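The kind-aware balancing check refines the previous one: quantities must balance within each club separately. In this sketch, kind and q map pieces to their kind and quantity (all names are illustrative):

```python
# Per-club balancing: for each kind, the consumed and created
# quantities must sum up equally.

def balanced_per_club(scopes, q, kind):
    U = set().union(*(Ui for Ui, _ in scopes))   # all consumed pieces
    W = set().union(*(Wi for _, Wi in scopes))   # all created pieces
    clubs = {kind[v] for v in U | W}
    return all(
        sum(q[u] for u in U if kind[u] == c) == sum(q[w] for w in W if kind[w] == c)
        for c in clubs
    )

q = {"a": 1, "b": 1, "x": 5, "y": 5}
kind = {"a": "apple", "b": "apple", "x": "gold", "y": "gold"}
# apples balance (1 == 1) and gold balances (5 == 5):
assert balanced_per_club([({"a", "x"}, {"b", "y"})], q, kind)
# swapping an apple for gold breaks both club balances:
assert not balanced_per_club([({"a"}, {"y"})], q, kind)
```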


Last but not least: application data in transaction descriptions

Finally, we come to application data. Roughly, application data is any data that

  1. logics can also take as part of their inputs and
  2. is provided as part of the transaction description.

The most common example of application data are signatures. The topic of application data deserves a description of its own, but strictly concerns applications and not the general concepts of the resource machine.

What is missing to get to the “actual” resources

So far, this post has sidestepped the topic of information flow control. Most notably, state curation for any piece is typically performed by several parties; in particular, we split matters of data integrity and data availability. As an example, for matters of data integrity, it suffices to keep a hash of the data; for personal information, one does not even want a plain hash of the data, but wants to add some salt to ensure that the personal information cannot be linked to the hash without knowing the salt. The actual data, in particular personal data, is kept elsewhere.

Now, changing gears one last time, let us see whether we can better understand each of the fields of a resource as defined in the specification (even though we have not yet covered everything in detail).

logicRef

The logicRef is a hash that identifies the logic of the resource. Including the actual code/circuit would not only be expensive, but we may actually prefer not to leak this information.

labelRef

The labelRef is a hash of a datum of arbitrary type (called label), which together with the logicRef determines the kind of the resource.

valueRef

The valueRef is a hash of another arbitrary datum (called value); however, different resources of the same kind may have different values, i.e., the value is not determined by the kind (while the logic and label together determine the kind, barring hash collisions).
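As an illustrative sketch of how a kind could be derived from the logicRef and labelRef: the concrete hash function and encoding below are assumptions for the sake of the example, not taken from the specification.

```python
import hashlib

def kind(logic_ref: bytes, label_ref: bytes) -> bytes:
    # The kind is determined by logic and label together; hashing the
    # two references is one natural way to derive a single identifier.
    # (SHA-256 and plain concatenation are illustrative choices.)
    return hashlib.sha256(logic_ref + label_ref).digest()

k1 = kind(b"logic-A", b"label-1")
k2 = kind(b"logic-A", b"label-2")
assert k1 != k2                            # different labels, different kinds
assert k1 == kind(b"logic-A", b"label-1")  # deterministic
```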

quantity

This is an element of a commutative finite group.

isEphemeral

This is a Boolean value: if it is true, the resource is ephemeral; otherwise it is persistable.

nonce

Sometimes we just want a different copy of essentially the same resource; if we want this, we just use a different nonce.

Let us at least mention the final two fields very quickly:

  • nullifierKeyCommitment, the counterpart to the resource commitment hash (in direct analogy to note commitments and nullifiers in zCash);
  • randSeed, a seed for pseudo-random number generation, which can also be used by applications.

Summary

This post has developed a proposal for a list of core concepts of the resource machine. The first level of this list is overviewed in Figure 1, which should already help understand the underpinnings of the object model that we are developing for applications.[10] Details of (shielded) state curation and proving systems that enable giving (zero-knowledge) proofs for transition constraints are left for future posts. The next topics to cover in more detail are applications and how we can implement the object model for applications.



  1. Admittedly, the shielded case is a tough challenge, because it involves zero-knowledge proving systems. ↩︎

  2. This state organization principle happens to be at the very heart of Petri Nets, also arising in the guise of Vector Addition Systems. ↩︎

  3. This is just a quick recap of the main points. The main idea should be clear without all fine-print that we sweep under the carpet. ↩︎

  4. If you are already a resource machine aficionado, you can think of persistables as non-ephemeral resources plus all their hash-referenced data. ↩︎

  5. This terminology may seem wrong at first reading, and one might rather want to write accumulatoroid and nullifier-ish set, or find better terminology; in other words, here we overload the terms ‘accumulator’ and ‘nullifier set’. ↩︎

  6. A transaction object as defined in the specification describes each of the above two points: the action has lists of consumed and created resources, each of which have an associated logic. ↩︎

  7. The specific term from the white paper is ‘validity predicate’. ↩︎

  8. These are pieces like in a game of chess; not pieces, like in a piece of cake. ↩︎

  9. Without going into detail, the idea of quantity balance goes back to a mechanism in zCash, which ensures that there are never more than 21 million zCash tokens. ↩︎

  10. Actually, the object model is closely related to the model presented in the paper Dynamic Effective Timed Communication Systems. ↩︎
