Anoma Resource Machine Explainer in Long Form

Ideally, one would not have to be a rocket computer scientist to understand the most important aspects of Anoma’s resource machine instantiations.[1] In this spirit, the post starts from a fairly simple but general notion of state, namely just a finite bunch of Xs, whose nature will be “unveiled” over the course of the post; without knowing the precise nature of the Xs, we have already covered a first aspect of how state is organized in any resource machine: it is a bunch of Xs.[2]

As it turns out, the term “resource” will only surface relatively late, roughly, as an especially rich sort of Xs that allows for arbitrary state transitions with maximal information flow control. The main goal of the post is the development of a list of concepts that conveys the main ideas of the Resource Machine; the list should be such that its initial items are fairly simple while complexity and expressiveness grow as we progress towards the end.

TL;DR

There is a fairly natural progression of resource machine levels, adding expressivity and complexity as we go. We can proceed one step at a time.

Note

The author has invented some new terminology, mainly to avoid clashes and confusion with pre-existing terminology.

A PDF version is here (422.0 KB).

Aside on ADTs 1/2

Recap: Abstract Datatypes, in particular FinSet

Throughout this post, we use ideas related to abstract datatypes. The full details concerning abstract datatypes (or ADTs, for short) can be skipped on a first reading; just in case, here is the gist of the ADT of (finite) sets.[3]


  • new is an operation that creates a new set, namely some representation of the empty set {}

  • add is an operation that

    • takes
      • (some representation of) a set S paired with
      • (some representation of) a value v and
      • returns (some representation of) the set S ∪ {v}
  • remove is an operation that

    • takes
      • (some representation of) a set S and
      • (some representation of) a value v and
      • returns (some representation of) the set S \ {v}
  • contains is an operation that

    • takes
      • (some representation of) a set S and
      • (some representation of) a value v and
      • returns true iff v ∈ S

These operations are subject to axioms, so that all is correct, e.g., that no element is contained in the empty set and adding an element is idempotent.
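For concreteness, here is a minimal sketch of the FinSet ADT in Python (the class and method names are invented for this post; no particular implementation is prescribed):

```python
class FinSet:
    """A minimal implementation of the ADT of finite sets."""

    def __init__(self) -> None:
        # new: some representation of the empty set {}
        self._elems: set = set()

    def add(self, v) -> "FinSet":
        # returns (a representation of) S ∪ {v}
        self._elems.add(v)
        return self

    def remove(self, v) -> "FinSet":
        # returns (a representation of) S \ {v}
        self._elems.discard(v)
        return self

    def contains(self, v) -> bool:
        # true iff v ∈ S
        return v in self._elems


# Spot-checking two axioms: nothing is in the empty set, adding is idempotent.
assert not FinSet().contains(1)
assert FinSet().add(1).add(1).contains(1)
```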


Synopsis

To a first approximation, state is a finite bunch of Xs, and changing state amounts to removing a finite bunch of Xs and adding another finite bunch of Xs. The main question is how we want to organize and describe state change efficiently. In particular, we want to avoid an explicit enumeration of all state pairs that are considered to be permissible state transitions.

Covering the main ideas of how state change is described conceptually is the main goal of this post; at the end we give an outlook on possible continuations for writing about more aspects of Anoma Resource Machine instantiations, in particular on the topic of information flow control. The main focus of this post is to give a primer to understand the underpinnings of applications, leaving aside matters of shielding for the sake of brevity.

Simple state: “just” a finite set

The first definition—a pretty boring/uncontroversial one (hopefully)—is that

a simple state is a finite set of persistables

… where a persistable[4] can be taken to be any of the following:

  • if you like programming, a data value of a fixed datatype
  • if you like type theory, a term of a fixed type
  • if you like sets, an element of a fixed set
  • if you like it simple and concrete, a natural number.

We only make use of the following basic operations on finite sets of persistables:

  • creating the empty set (which is finite),
  • adding a persistable to a finite set,
  • removing a persistable from a finite set, and, last, but not least,
  • checking whether a persistable is an element of a finite set.

Having fixed what state can be (at any given point in time), let us now turn to how state may evolve in a (possibly infinite) sequence of states.

x_0 \to x_1 \to x_2 \to \cdots \to x_n \to \ldots

Creation and consumption of persistables in simple state sequences

We first define formally what it means to create and consume a persistable. In a sequence of simple states,

x_0 \to x_1 \to x_2 \to \cdots \to x_n \to \ldots

a persistable v is

  • created at step n>0 if it belongs to x_n and not to x_{n-1} —in symbols v \in x_n \setminus x_{n-1}
  • consumed at step n>0 if it belongs to x_{n-1} and not to x_{n} —in symbols v \in x_{n-1} \setminus x_{n}

We are only interested in sequences of simple states that have the following three properties, for every persistable:

  1. there is at most one step at which it is created, called the creation step of the persistable,
  2. there is at most one step at which it is consumed, called the consumption step of the persistable,
  3. either there is no consumption step for the persistable, or the consumption step is (strictly) larger than the creation step.

We may choose to give state sequences that heed the above rules a fancy name, but let us pretend that these are the only regular sequences of simple states—so, let us call them regular for the purposes of this post (until we come up with an objectively better name to give the overworked “regular” a break). The above properties 1.–3. are the three laws of regularity.
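To make the three laws concrete, here is a small Python sketch (names invented for this post) that checks whether a finite sequence of simple states is regular; persistables present in the initial state are treated as created at step 0:

```python
def is_regular(states: list) -> bool:
    """Check the three laws of regularity for a finite sequence of simple states."""
    created = {v: 0 for v in states[0]}  # persistable -> unique creation step
    consumed = {}                        # persistable -> unique consumption step
    for n in range(1, len(states)):
        for v in states[n] - states[n - 1]:   # created at step n
            if v in created:                  # law 1: at most one creation step
                return False
            created[v] = n
        for v in states[n - 1] - states[n]:   # consumed at step n
            if v in consumed:                 # law 2: at most one consumption step
                return False
            consumed[v] = n
    # law 3: consumption, if any, happens strictly after creation
    return all(v in created and created[v] < consumed[v] for v in consumed)


assert is_regular([set(), {1}, {1, 2}, {2}])
assert not is_regular([set(), {1}, set(), {1}])  # 1 would be created twice
```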

Now, let us turn to a related important question:

How do we describe a state transition, without talking about all the persistables?

This question will be addressed repeatedly in between pairs of double lines.



What could a transaction description look like?

So far a transaction description can be given by

  • the set of consumed persistables and
  • the set of created persistables.


Accordingly, for any state transition x_{n-1} \to x_n, we define the Sets of Change (SoC) to be the pair \Bigl\langle (x_{n-1}\setminus x_n), (x_n \setminus x_{n-1})\Bigr\rangle; thus, the first component of the SoC contains the consumed persistables, and the second component contains the created persistables.
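In code, the SoC is just a pair of set differences (a throwaway sketch; the name soc is ours):

```python
def soc(x_prev: set, x_next: set) -> tuple:
    # (consumed persistables, created persistables)
    return (x_prev - x_next, x_next - x_prev)


assert soc({1, 2}, {2, 3}) == ({1}, {3})
```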



What does the transaction description look like?

It is just a SoC.



As a summary of what we have covered so far, we have

  • a notion of state
  • a notion of state transition
  • a way to describe state change—namely a SoC.

In fact, most pages of the resource machine specs concern at least one aspect of the above three, directly or indirectly.

Aside on ADTs 2/2

An ADT for simple state and regular sequences

There is a natural variation of the ADT of finite sets such that we can only create regular simple state sequences from an empty set by repeatedly applying the addition and removal operations. Let us quickly sketch the main points, calling the operations create and consume instead of add and remove, because they behave differently.

Empty

Create the empty simple state. This is the “same” operation as for FinSet.

Create

Add the persistable to the simple state if it was never added before; otherwise, return the state unchanged.

Consume

Remove the persistable from the simple state if it was created before and has not yet been consumed; otherwise, return the simple state unchanged.

Implementing the simple state ADT using commitments and nullifiers

We can use a pair of finite sets to implement the simple state ADT; the two sets are the accumulator and the nullifier set, respectively.[5]

Empty

Create the pair of an empty accumulator and an empty nullifier set.

Create

If the given persistable is not a member of the accumulator, add it to the accumulator; otherwise do nothing, i.e., return the pair unchanged.

Consume

If the persistable is not a member of the accumulator or is already an element of the nullifier set, do nothing, i.e., return the pair unchanged; otherwise, add the persistable to the nullifier set.
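A Python sketch of this pair-of-sets implementation (names invented for this post; actual instantiations store commitments and nullifiers, i.e., hashes, rather than the persistables themselves):

```python
class SimpleState:
    """Simple state implemented as a pair: accumulator + nullifier set."""

    def __init__(self) -> None:
        self.accumulator: set = set()  # everything ever created
        self.nullifiers: set = set()   # everything ever consumed

    def create(self, v) -> None:
        # add v only if it was never added before (first law of regularity)
        if v not in self.accumulator:
            self.accumulator.add(v)

    def consume(self, v) -> None:
        # remove v only if created and not yet consumed (second and third laws)
        if v in self.accumulator and v not in self.nullifiers:
            self.nullifiers.add(v)

    def contains(self, v) -> bool:
        # the current simple state is accumulator \ nullifier set
        return v in self.accumulator and v not in self.nullifiers


s = SimpleState()
s.create(42); s.consume(42); s.create(42)  # the re-creation is ignored
assert not s.contains(42)
```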

Additional state transition constraints 1/3: SoC-based

We may want to complement the three regularity laws with additional conditions that restrict the possible state transitions—be it to encode business logic, to ensure information flow control, or to enforce any other functional or non-functional property. Note that these are additional constraints besides the three laws of regularity.

Now, how do we best bring these additional constraints on state transitions into the picture? One fundamental principle is that they should be expressible in terms of the SoC (as defined above): given two state transitions x\to y and x'\to y' that have the same SoC, a transition constraint will either allow both or forbid both transitions; in other words, a SoC-based transition constraint cannot be used to distinguish between two state transitions with the same SoC. We call such transition constraints SoC-based.



What could a transaction description look like? (i/iii)

It could be

  • a SoC and
  • a SoC-based predicate

Alas, this is not how it is done, but it is a good start.



So, it is high time to explain why we do not “just” add a SoC-based predicate to a transaction description! The first reason is that if we have several transaction descriptions (that are not directly interacting with each other), we want to be able to batch them together.



What could a transaction description look like? (ii/iii)

It could be

  • a finite set of SoCs where
  • each member SoC comes with a predicate.

This is “almost” how it is done; conceptually it is about right.[6]



At this point, we come to a highly non-trivial design decision: each persistable comes equipped with its own logic, which imposes transition conditions for every transition that “touches” it. The precursors[7] to these logics were already described in Anoma’s first white paper.

This is important to remember.

So, we have reached the part of this post that is not bedtime reading material any more.



What could a transaction description look like? (iii/iii)

It could be just

  • a finite set of SoCs

because persistables carry their logic, which imposes transition constraints.

This is extremely close to how it is actually defined. However, this would force us to “invent” persistables to express transition conditions…



… so, we now come to the final piece of the puzzle of how state transition constraints are organized, which dispenses with the need to “invent” persistables to carry constraints: we complement persistables with a set of constraint carriers that have their own logics. Moreover, the sets of change are augmented with such constraint carriers. We shall spell out the details in two steps: first we cover logics of persistables; then we add constraint carriers for ephemeral constraints and reach the next stepping stone in our list of concepts—completing the first level!

Additional state transition constraints 2/3: logics of persistables

For each persistable v, we associate a predicate P_v, called the logic of \boldsymbol v; note that the word ‘logic’ here is used as in Wikipedia’s article on business logic (and not in the sense of mathematical logic). Given a pair of disjoint finite sets of persistables \langle U, W \rangle that contain the persistable v, its logic P_v will assign a truth value to the pair \langle U, W \rangle. More formally, we consider the logic P_v to be a Boolean valued partial function that expresses the logic of the persistable v; it is defined for every disjoint pair of sets of persistables whose union contains the persistable v, i.e., any \langle U, W \rangle such that v \in U \cup W and U \cap W = \varnothing hold.

The rough idea of how these logics operate is that each logic represents a check on (some part of) the SoC. The precise definition of allowed transition is somewhat technical, but we give it as it may help to understand the next “upgrade” of transition descriptions.

A state transition X \to Y is allowed if

  • there are k pairs of disjoint simple states \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that
    • the first components are a partition of all consumed persistables, i.e., \bigcup_{i=1}^k U_i = X \setminus Y and U_i \cap U_j = \varnothing if i\neq j;
    • the second components are a partition of the created persistables, i.e., \bigcup_{i=1}^k W_i = Y \setminus X and W_i \cap W_j = \varnothing if i\neq j;
    • the predicate P_v of every persistable v involved in some pair \langle U_i, W_i\rangle of the partitions is satisfied, i.e., for each i and for each v \in U_i \cup W_i, it is the case that P_v(\langle U_i, W_i\rangle) is true.

In one phrase, a transition x \to y is allowed if its SoC can be partitioned into pairs of disjoint sets of persistables such that the logic of every involved persistable is satisfied for the pair in which it is contained. One important consequence of this definition is that we are able to batch together several consecutive but independent allowed state transitions into a single “parallel” transition such that the logics of the persistables are satisfied for the relevant portion of the SoC.
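Transcribed into code, the definition reads roughly as follows (a Python sketch with invented names; logics stands for a hypothetical map from persistables to their predicates):

```python
from itertools import combinations


def is_allowed(x: set, y: set, scopes: list, logics: dict) -> bool:
    """Check whether the transition x -> y is allowed by the given pairs."""
    us = [u for u, _ in scopes]
    ws = [w for _, w in scopes]
    # the first components must partition the consumed persistables ...
    if set().union(*us) != x - y or any(a & b for a, b in combinations(us, 2)):
        return False
    # ... and the second components must partition the created persistables
    if set().union(*ws) != y - x or any(a & b for a, b in combinations(ws, 2)):
        return False
    # each pair is disjoint, and every involved logic holds for its own pair
    return all(
        not (u & w) and all(logics[v]((u, w)) for v in u | w)
        for u, w in scopes
    )


# Example: every logic insists that its pair consumes exactly one persistable.
logics = {v: (lambda uw: len(uw[0]) == 1) for v in ("a", "b", "c")}
assert is_allowed({"a"}, {"b", "c"}, [({"a"}, {"b", "c"})], logics)
```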



What does the transaction description look like?

It consists of

  • a list of pairs of disjoint simple states \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that (for all list indices i,j \in \{1,\dotsc,k\}),
    • for each persistable v \in U_i \cup W_i, it is the case that its logic is satisfied for \langle U_i, W_i\rangle, i.e., we must have the equation P_v(\langle U_i, W_i\rangle) = \mathtt{true}
    • U_i \cap U_j = \varnothing and W_i \cap W_j = \varnothing if i\neq j
    • U_i \cap W_j = \varnothing

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.



Additional state transition constraints 3/3: ephemeral constraints

Finally, we come to ephemeral constraints that users may want to impose on transactions. One might expect that we simply add these constraints somewhere, e.g., as predicates attached to each logic scope. However, we re-use the formalism of logics in three steps:

  1. We introduce a set of ephemerals, disjoint from persistables.
  2. We extend logics to be able to have ephemerals as part of their input.
  3. We also equip ephemerals with logics.

Elements of the union of ephemerals and persistables are called pieces.[8]



What does the transaction description look like?

It consists of

  • a list of pairs of sets of pieces \langle U_1, W_1\rangle, \ldots, \langle U_k, W_k\rangle such that (for all list indices i,j \in \{1,\dotsc,k\}),
    • for each piece v \in U_i \cup W_i, it is the case that its logic is satisfied for \langle U_i, W_i\rangle, i.e., we must have the equation P_v(\langle U_i, W_i\rangle) = \mathtt{true}
    • U_i \cap U_j = \varnothing and W_i \cap W_j = \varnothing if i\neq j
    • U_i \cap W_j contains no persistable, i.e., only ephemerals may be both consumed and created[1:1]

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.



In short, a transaction description is a list of logic scopes. This concludes the first level of the resource machine concepts. The next level comprises quantities and balancing, different kinds of pieces, and application data.

Quantities and balancing of transaction descriptions

  • As a design decision,[9] each piece is assigned a quantity, e.g., a natural number (less than a fixed prime); quantity could—in full generality—be an element of an arbitrary finite commutative (semi-)group, written additively.

  • The quantity preservation principle says that quantity must be preserved, i.e., in a transaction, the sum of quantities of all consumed pieces must equal the sum of quantities of the created pieces.

  • Balancing concerns the whole of the transaction description (and thus is not local to any of the scopes).

We write q_v for the quantity of a piece v.



What does the transaction description look like?

It consists of

  • a list of logic scopes \langle U_i, W_i\rangle_{i=1}^k
  • we have the equation \sum_{u \in U} q_u = \sum_{w \in W} q_w
    where U = \bigcup_{i=1}^k U_i and W = \bigcup_{i=1}^k W_i.

Each pair \langle U_i, W_i\rangle is called a logic scope of the transaction description.



Grouping pieces of the same kind together

We now assume that each piece moreover comes equipped with a kind; the pieces of the same kind form a club (for the purposes of this post), and all members of a club must have the same logic. Now, balancing is adapted such that, for every club, the sum of the quantities of consumed pieces must equal the sum of the quantities of created pieces. The concept of kind is essential to define and write applications. Two applications either work with disjoint clubs or communicate through pieces of a shared sub-club of “VIPs”.



What does the transaction description look like?

It consists of

  • a list of logic scopes \langle U_i, W_i\rangle_{i=1}^k
  • for each club C, we have the equation \sum_{u \in U \cap C} q_u = \sum_{w \in W \cap C} q_w
    where U = \bigcup_{i=1}^k U_i and W = \bigcup_{i=1}^k W_i.
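A sketch of the per-club balancing check (in Python; kind_of and quantity_of are hypothetical accessors, and plain integers stand in for elements of a finite commutative group):

```python
from collections import defaultdict


def is_balanced(scopes, kind_of, quantity_of) -> bool:
    """Per-club balancing: consumed and created quantities agree for every kind."""
    totals = defaultdict(int)  # kind -> consumed-minus-created quantity
    for u_i, w_i in scopes:
        for v in u_i:
            totals[kind_of(v)] += quantity_of(v)
        for v in w_i:
            totals[kind_of(v)] -= quantity_of(v)
    return all(t == 0 for t in totals.values())


# Example: pieces as (kind, quantity, nonce) triples; a 5 is split into 2 + 3.
kind_of, quantity_of = (lambda p: p[0]), (lambda p: p[1])
swap = [({("coin", 5, 0)}, {("coin", 2, 1), ("coin", 3, 2)})]
assert is_balanced(swap, kind_of, quantity_of)
```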


Last but not least: application data in transaction descriptions

Finally, we come to application data. Roughly, application data is any data that

  1. logics can also take as part of their inputs and
  2. is provided as part of the transaction description.

The most common example of application data is a signature. The topic of application data deserves a description of its own, but it strictly concerns applications and not the general concepts of the resource machine.

What is missing to get to the “actual” resources

So far, this post has sidestepped the topic of information flow control. Most notably, state curation for any piece is typically performed by several parties; in particular, we separate matters of data integrity from matters of data availability. As an example, for data integrity, it suffices to keep a hash of the data; for personal information, one does not even want a plain hash, but adds some salt to ensure that the personal information cannot be linked to the hash without knowing the salt. The actual data, in particular personal data, is kept elsewhere.

Now, changing gears one last time, let us see whether we can better understand each of the fields of a resource as defined in the specification (even though we have not yet covered everything in detail).

logicRef

The logicRef is a hash that identifies the logic of the resource. Including the actual code/circuit would not only be expensive; we may actually prefer not to leak this information.

labelRef

The labelRef is a hash of a datum of arbitrary type (called label), which together with logicRef determines the kind of the resource.

valueRef

The valueRef is a hash of another arbitrary datum (called value); however, different resources of the same kind may have different values, i.e., the value is not determined by the kind (while the logic and label together determine the kind, barring hash collisions).

quantity

This is an element of a commutative finite group.

isEphemeral

This is a Boolean value: if it is true, the resource is ephemeral; otherwise it is persistable.

nonce

Sometimes we just want a different copy of essentially the same resource; if we want this, we just use a different nonce.

Let us at least mention the final two fields very quickly:

  • nullifierKeyCommitment, the counterpart to the resource commitment hash (in direct analogy to note commitments and nullifiers in zCash);
  • randSeed, a seed for pseudo-random number generation, which can also be used by applications.
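Putting the fields together, a resource can be pictured as a record along the following lines (a Python paraphrase for this post, not the normative definition; the field types and the kind derivation are placeholders):

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Resource:
    logic_ref: bytes       # hash identifying the resource logic
    label_ref: bytes       # hash of the label; with logic_ref determines the kind
    value_ref: bytes       # hash of the value; may vary within one kind
    quantity: int          # element of a finite commutative group (simplified)
    is_ephemeral: bool     # True for ephemeral resources, False for persistent
    nonce: bytes           # distinguishes otherwise identical resources
    nullifier_key_commitment: bytes  # cf. note commitments/nullifiers in zCash
    rand_seed: bytes       # seed for pseudo-random number generation

    @property
    def kind(self) -> bytes:
        # sketch only: the actual kind derivation is instantiation-specific
        return hashlib.sha256(self.logic_ref + self.label_ref).digest()
```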

Summary

This post has developed a proposal for a list of core concepts of the resource machine. The first level of this list is overviewed in Figure 1, which should already help in understanding the underpinnings of the object model that we are developing for applications.[10] Details of (shielded) state curation and of proving systems that enable giving (zero-knowledge) proofs for transition constraints are left for future posts. The next topic to cover in more detail is applications and how we can implement the object model for applications.



  1. Thank you @vveiln for catching this one! ↩︎ ↩︎

  2. This state organization principle happens to be at the very heart of Petri Nets, also arising in the guise of Vector Addition Systems. ↩︎

  3. This is just a quick recap of the main points. The main idea should be clear without all the fine print that we sweep under the carpet. ↩︎

  4. If you are already a resource machine aficionado, you can think of persistables as non-ephemeral resources plus all their hash-referenced data. ↩︎

  5. This terminology may seem wrong at first reading, and one would want to write accumulatoroid and nullifier-ish set, or find better terminology; in other words, here we overload the terms ‘accumulator’ and ‘nullifier set’. ↩︎

  6. A transaction object as defined in the specification describes each of the above two points: the action has lists of consumed and created resources, each of which have an associated logic. ↩︎

  7. The specific term from the white paper is ‘validity predicate’. ↩︎

  8. These are pieces as in a game of chess, not pieces as in a piece of cake. ↩︎

  9. Without going into detail, the idea of quantity balance goes back to a mechanism in zCash, which ensures that there are never more than 21 million zCash tokens. ↩︎

  10. Actually, the object model is closely related to the model presented in the paper Dynamic Effective Timed Communication Systems. ↩︎


Thank you for this lovely explainer! I really like the general style of building up step by step from very simple abstractions, adding just one component of complexity at a time.

A few notes and questions:

I think it is worth some exposition as to why this is done. The most basic reason I can think of at the moment is that we are interested in state and state sequences which can be interacted with by anyone (anyone can submit a transaction), and therefore the only way for users to be able to reason about how particular parts of state (persistables) can change is if conditions are attached to persistables themselves. Does that make sense?

I didn’t follow this step, why do we need both a SoC and a finite set of SoCs, instead of just a finite set of SoCs?

I think we can further note that in deciding upon this structure of transitions, we have two goals:

  1. Allow persistables to impose arbitrary requirements on what other persistables must be consumed and/or created atomically with themselves (and indeed, since there is no state other than persistables, there is nothing else they could conceivably impose requirements on), and
  2. Allow valid transitions to be composed.

I believe that these two requirements should be sufficient to necessitate this structure.

This is the essential point, and I wonder whether we can establish an even more basic mechanism for cross-scope checks, of which balance is then perhaps “just” an optimized, shielded-compatible implementation.

For example, we could say that persistable logic can check:

  1. Persistables consumed and created in its logic scope (as you have already), and
  2. Persistables consumed and created elsewhere (but we don’t know or care where).

Why is this interesting? If I want to enforce that another persistable is created or consumed along with myself, why not simply check whether it is in the logic scope? The key difference is that the logic scope checks are exclusive: checking a persistable in my logic scope ensures not only that it satisfies my requirements but also that it does not satisfy other persistables’ requirements, other than the ones which I allow to be included in the logic scope. This means, furthermore, that (without balance), different sets of persistables within an SoC can also be split into different transactions which would be independently valid. Checking cross-logic-scope persistables, on the other hand, is a way to intentionally forgo this exclusivity, and to ensure that a persistable exists somewhere which satisfies my requirements while also allowing it to satisfy other persistables’ requirements, of which I need have no knowledge.

See also this discussion, which is relevant. I am not yet sure exactly how to formalize all of these aspects and design constraints but hopefully this minor exposition is something to work with.

I wonder if we should include application “datums” as “pieces” as well, in this model.


Absolutely!

Yes, clearly, this is a good way to start explaining this design decision, once one has introduced the idea that there are several actors that may in fact issue different transaction descriptions to a state curator, let alone a decentralized one. It is a great design decision for a setting of distributed state, where each piece of state comes with different rules of change.

For example, in the case of a business process collaboration, each possible task can be performed only by a certain party (and nobody else). In fact, just today I talked with @Michael over lunch about how it would be great to write up how one could use the example of business process monitoring and execution using Anoma in a shielded manner ([edit]: see also https://research.anoma.net/t/the-case-for-short-term-specialization/1506/5).


Thanks for the write-up! I really like the abstraction you’ve put forward. It (almost) all makes sense to me. There are some subtle points that I’d like to bring to focus.

Above, a transition x \to y is defined as two partitions, one of x\backslash y and another of y\backslash x. That is, two sequences of pairwise disjoint sets \{U_i\}_{i\leq k},\{W_i\}_{i\leq k}.

What if instead we use covers (i.e., not necessarily disjoint sequences) of x\backslash y and y\backslash x? I think they will also do the job. I mean, we can still pack independent state transitions into a “parallel” transition.

The reason for requiring partitions and not covers might be what @cwgoes said:

In my view, this design might be too limiting, with respect to two goals:

  • Parallelization
  • Tasks with preprocessing

Parallelization. The goal is to parallelize an algorithm between different nodes or threads (different logic scopes), and then pack their results into a single transaction.

There are tasks that cannot be distributed if we do not allow persistables to participate in different logic scopes. In general, the design won’t work for any problem in NC whose input is not partitioned.

For example, computing a matrix multiplication \mathbf C = \mathbf A\mathbf B can be parallelized by computing the inner product of every pair of row and column. Each row \mathbf a_i in \mathbf A participates in as many inner products as there are columns \mathbf b_j in \mathbf B. If you see rows and columns as persistables of kind ‘vector’, then there is no way to split n-dimensional matrix multiplication into disjoint logic scopes, each in charge of a subset of the inner products. The only possible partition we have is \langle U, W\rangle, where the consumed persistables are U = \{\mathbf a_i, \mathbf b_i\}_{i\leq n}, and the created persistables are W = \{\mathbf c_i\}_{i\leq n}. In other words, we are forced to deal with persistables of kind ‘matrix’ as a whole.

This brings me to the second point I’d like to draw attention to.

Tasks with preprocessing. What if we want to do reactive computations? Namely, algorithms that re-use (part of) the state without modifying it. Tasks that need some sort of pre-processing would fall into this category. To capture this type of task, I think the domain of the logic P_v should be X \cup Y, so we also account for X \cap Y, the unchanged portion of the state for this transition.

For example, following with linear algebra. For a given set of vectors \mathcal B = \{\mathbf v_1, \ldots, \mathbf v_n\} we’d like to check linear combinations of them, but only if the elements in \mathcal B are independent. The algorithm P is then

P(\mathcal B, \mathbf v)=\mathsf{true} \Leftrightarrow \begin{cases} \operatorname{rank}(\mathcal B) = n & \text{(task 1)}\\ \mathbf v \in \operatorname{span}(\mathcal B) & \text{(task 2)} \end{cases}

where both conditions must hold.

We could first prove the n vectors in \mathcal B are independent, and add them to the state as persistables of kind ‘linearly independent vectors’ in transition X \to Y. Later on, we add vector \mathbf v to the state as a persistable of kind ‘linearly dependent’, in transition Y \to Z. With the current design, we cannot re-use \mathcal B; it must be re-created, or copied, every time (and its linear independence proved in each copy).

In a related post I discussed these two notions (with different terminology). Therein, the conclusion (as I got it, @vveiln) was that we can always capture parallelization and preprocessing by passing resources as application data. Fair enough. However, it feels to me that application data is some sort of ‘miscellaneous’ category that comes in to fix anything the abstraction may miss. I really do like your abstraction, and feel it should be able to capture more.


Thanks for this post. I’m most likely not the addressee of this explanation, so all I’ll comment on is the following minor issue:

I believe the second mention of labelRef is meant to be logicRef as in “[…] together with logicRef determines the kind of the resource.”


I’m not sure U \cap W = \emptyset is true. If I understand correctly, here U is a set of created resources and W is a set of consumed resources (or the other way around, doesn’t matter). Ephemeral resources can be created and consumed in the same transaction. I just want to make sure it is an intended simplification and not a misinterpretation.

Later in the post you introduce ephemerals and this condition is still present.

Also, as I see introducing ephemerals didn’t conceptually affect transaction description. Did I miss something?

Strictly speaking it isn’t enough. It must be balanced per kind. But I’ll assume this is also an intended simplification. I see that you later introduced kinds.

The correctness of this depends on your definition of communication. Two independent applications might use help of a third independent application without intending to communicate to each other.

Thanks for the question!

In short: Ephemeral resources are ɴᴏᴛ intended to relate to persistables, except for sharing the kind.

In longer form:

persistable = persistent resource + additional data/information

There is an intended correspondence of persistable to persistent resource + additional data. In more detail, the intended interpretation of persistable in the RM is that for each persistable a there is a corresponding resource a' and data \hat a such that a' and \hat a determine a.

ephemeral = ephemeral resource + additional information/data

In fact, later I introduce ephemerals, following a similar pattern: the intended interpretation (to be spelled out) is that an ephemeral e can be interpreted as an ephemeral resource e' plus additional information/data \hat e.

Ephemerals only make sense in a transaction description. They are not “visible” in the regular sequences.

At some point in the post, there is only one kind, and thus balancing does not need to make reference to this kind; roughly, I first introduce the idea of an RM explainer for a single kind and later we add the new feature to have several kinds. I thought I had written that the balancing is per kind. In symbols, the per-kind balancing is expressed as \sum_{u \in U \cap C} q_u = \sum_{w \in W \cap C} q_w for each club C.

This indeed should be elaborated in a future post; I had intended to leave most aspects of applications aside. The question to answer is: do we want applications to share resource kinds? In particular, suppose there are two applications that work with completely disjoint resource kinds, and a third application that encompasses the union of these (and other resources). Now, knowing about the super-application, does this change how I should think about the other applications? In particular, does the super-application “connect” its sub-applications? What about any other Venn diagrams of resource kinds?
… Just to have a list of questions that I would want to have answered (or find answers to myself).

Thanks for the responses! In fact, I had already received other feedback that the post is sorely missing examples (which fell victim to the goal that it should be short).

Right on! I have fixed it now.

I agree with the general line of thought to explore read-only access. I would present it as follows—probably due to having spent too many years on graph transformation:[1]

  • In the logic scopes of transaction descriptions, we could
    • relax the requirement that consumed and created pieces are disjoint,
    • call the intersection the read-only pieces
  • if a piece is read-only in one logic scope, it is neither consumed nor created in any other logic scope.

I think this would cover the examples, @kike, but maybe I have missed something …

Finally, let me just mention, it would also be nice if one could have a general counterpart of rule-composition[2]. Roughly, the idea is to “batch together” several transactions that are not independent of each other, reducing the number of persistables that need to be created and consumed.


  1. Just in case, on page 10, there’s a lengthy explanation of the matter of read-only access, which specializes to what I am writing here. ↩︎

  2. This work gives both a sufficiently clear definition of the concepts at work, but also illustrates with examples from chemistry. ↩︎


Right on! Thanks for catching this nasty copy and paste error :folded_hands: I have fixed it now …


Yeah, read-only access is needed in some way or another. I think currently it can be done (via appData field), but my feeling is that it is more a hack than an explicit design feature.

Just to be on the same page: do you mean U \cap W = \emptyset or, e.g., a non-disjoint sequence \{U_i\}_{i\leq k} for U = \cup_{i\leq k} U_i? I was thinking of the latter. Thus, a cover for either the consumed or created resources (U or W). Namely, for state transition X\to Y, covers of either portion, X \backslash Y or Y \backslash X.

I don’t think parallelization will be solved with read-only access, but perhaps you mean read-only in the logic scope. Let me explain myself better with the example I gave (I apologize in advance if it is obvious to you, or if it feels as if I’m repeating myself):

Task: Compute 4 linear combinations of 3 linearly independent vectors. The vectors are given in the columns of matrix \mathbf A = (\mathbf a_1,\mathbf a_2, \mathbf a_3). The scalars in the columns of matrix \mathbf B = (\mathbf b_1, \mathbf b_2, \mathbf b_3, \mathbf b_4).

Matrices \mathbf A, \mathbf B already exist as persistables (resources) of kind ‘vectors’. They will be consumed. The 4 linear combinations \mathbf C = (\mathbf c_1,\mathbf c_2,\mathbf c_3,\mathbf c_4) will be created (also as resources of kind ‘vector’). If we can have a non-disjoint cover of the consumed resources, then we can split the logic scopes in two:

  • LS_1 contains U_1 = \{\mathbf a_1,\mathbf a_2,\mathbf a_3,\mathbf b_1,\mathbf b_2\} and W_1 = \{\mathbf c_1,\mathbf c_2\}
  • LS_2 contains U_2 = \{\mathbf a_1,\mathbf a_2,\mathbf a_3,\mathbf b_3,\mathbf b_4\} and W_2 = \{\mathbf c_3,\mathbf c_4\}

So U =\{\mathbf A, \mathbf B\} = U_1\cup U_2, but U_1\cap U_2 = \{\mathbf A\}. The predicate P in each scope is the same: enforce \mathbf c_i = \mathbf A \mathbf b_i.

Matrix multiplication is used in ML, and the matrices involved are huge (I think). In the shielded case, proving correct matrix multiplication is not trivial at all (see, e.g., this paper), so having a ‘divide and conquer’ strategy will help a lot here, I think.

The ‘read-only’ access to the state (not to logic scopes) will be useful to enforce, in an early state transition, that vectors in \mathbf A are linearly independent. Then, re-use vectors in \mathbf A without consuming them. Here, I think the predicates of each logic scope in transition X\to Y should account for this extra input (resources in X\cap Y).

Interesting. Would this be related to ephemeral resources?


Indeed, this is an interesting thought. For example, we could simply introduce appData as a distinguished kind.

I think that the key question with relaxing this requirement (as you infer) is how logics would enforce exclusive constraint satisfaction. For example, suppose that I want to create two persistables I_0 and I_1, each with a logic which allows an A_0 and A_1 (other persistables, owned by me) to be consumed if and only if a B (another persistable) is created satisfying certain conditions (intuitively, I want to swap an A for a B, twice). I do not want it to be possible for someone to create a single B and claim both my A_0 and A_1 – these were intended to be independent, exclusive swaps.

If we use a cover, this seems like it would only be possible to enforce if we expose a way for logics to check that a specific other resource is not included elsewhere. Does this make sense / am I understanding your proposal correctly?

I’m coming around to this perspective – at least, parallelization and preprocessing, even if not strictly necessary for verification of state transition correctness, seem like very common patterns which developers will want to employ, and it would be nice to have a model which captures them. I think that this is also necessary for consistent levels of abstraction: at the moment, in order to compute over historical resources, “read” resources, etc. in resource logics, you’d need to process the underlying structure (e.g. commitments, nullifiers), which breaks the abstraction boundary that we want resource logics to have in the basic case of state transition verification.

If we want to go in this direction, the question [to me] is how to best synthesize these design requirements (the ability to check exclusive constraint satisfaction, on the one hand, and the ability to support read-only persistables / parallelization / pre-processing, on the other). The best idea I have at the moment is to put matters in the hands of the logics: allow logics to check (for persistables in their scopes) whether those persistables are also used elsewhere in the transaction. If we do that, however, it’s not clear to me what a “scope” is anymore, since logics can check something outside of their “scope” – then maybe we’d better just give up the concept of multiple persistables being “in” a scope altogether and instead have “one scope per persistable”, where the logic associated with that persistable can check whatever it wants about other persistables in the transaction (including exclusivity). Thoughts?


Yes, that’s a really good point that I didn’t think through thoroughly. See more below.

Perhaps it helps to understand what happens currently. In your example, to ensure that two B's, and not just one, are created, two things are done (afaik, perhaps I’m wrong):

  • predicate of I_b prevents inclusion of another resource of the same kind in the same action
  • resources belonging to different compliance units do not intersect. Namely, their commitments are different. (Here I would add, compliance units across all actions don’t intersect.)

The second check is global. It can’t be enforced by any predicate whose domain is restricted to actions. Without this global check, one can never be sure that the same B is not used in a different action. So, if I’m not wrong, ‘exclusivity’ is currently guaranteed by a combination of local and global checks.

The question is then if we can generalize it. I think of the current situation as if all resources are flagged as ‘read-once’; these are resources that cannot appear twice across the transaction. This decision is ‘hard-coded’ in the design. Can we make it a feature? I’m not sure how exactly; what comes to mind is an always-public field in the resource such that, depending on its value, the global check is enforced or not. This way, applications requiring ‘exclusivity’, like token swaps, and others that don’t require it can coexist.

If we go down this path, two natural questions come to mind: does the hierarchy ‘transaction → action → compliance unit’ still make sense? Maybe no hierarchy is needed any more. And what’s the impact on function privacy?

The above tries to capture parallelization. For preprocessing, predicates need to accept ‘foreign’ resources as inputs as well, i.e., resources created in a different transaction (in the abstraction, for state transition X \to Y, persistables in X \cap Y). What’s the impact? At the very least it must be ensured that the resource has not yet been consumed (that it really is in X\cap Y). This is more tricky and seems to call for a proof of non-membership (in the public list of nullifiers).


Yes, that’s right: the predicates of I_0 and I_1 would not allow multiple A's to be included in a single action, and commitments/nullifiers cannot be duplicated across actions (to duplicate them would make the transaction invalid).

Yes. This global check is “always present” (not controlled by resource logics).

This is similar to the lines I’m thinking along, but I think we may need to expose a somewhat more sophisticated interface, because it isn’t necessarily clear how to combine constraints: if a resource is used once with the flag set to “exclusive” and once with the flag set to “read-only”, should the transaction succeed or fail? Maybe what we want to prevent is multiple creations/consumptions with the “exclusive” flag. I think that it’s hard to make this decision without a clearer high-level semantics that we want to be able to use this low-level check to implement.
