Models of computation: selection criteria and candidates

OK; after some consideration, I propose - for now - “NockFF+”. NockFF+ (name TBD) is, roughly:

  • Nock, with
  • Primitive operations for arbitrary finite fields, some subset of which may be supported by any particular interpreter, and
  • Additional opcodes for storage and resource I/O

Finite field operations and indexing

Take the set of all finite fields F. Each finite field F_n, with order n (necessarily a prime power), has the following operations (a code sketch of the prime-field case follows this list):

  • Additive identity: 0_{F_n}, with type F_n
  • Addition: +_{F_n}, with type F_n \to F_n \to F_n
  • Additive inverse: -_{F_n}, with type F_n \to F_n
  • Multiplicative identity: 1_{F_n}, with type F_n
  • Multiplication: *_{F_n}, with type F_n \to F_n \to F_n
  • Equality: =_{F_n}, with type F_n \to F_n \to F_2
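
To make the operation set concrete, here is a minimal sketch of these six operations for the prime-field case (k = 1), assuming elements are encoded as integers in [0, p); extension fields F_{p^k} with k > 1 would additionally need polynomial arithmetic over F_p, omitted here. All names are illustrative, not part of the proposal.

    # Minimal prime-field (k = 1) sketch of the six per-field operations.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PrimeField:
        p: int  # field order; assumed prime

        def zero(self) -> int:                 # 0_{F_p}
            return 0

        def add(self, a: int, b: int) -> int:  # +_{F_p}
            return (a + b) % self.p

        def neg(self, a: int) -> int:          # -_{F_p}
            return (-a) % self.p

        def one(self) -> int:                  # 1_{F_p}
            return 1

        def mul(self, a: int, b: int) -> int:  # *_{F_p}
            return (a * b) % self.p

        def eq(self, a: int, b: int) -> int:   # =_{F_p}: result is an F_2 element
            return 1 if a == b else 0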

We also assume a family of conversion functions conv_{i, j} (sketched after this list):

  • conv_{i, j} with type F_i \to F_j if i \le j
  • conv_{i, j} with type F_i \to (F_j, F_2) if i \gt j, where the bit in the return value indicates whether or not the conversion overflowed
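
The proposal does not pin down the conversion semantics; the sketch below reads conv as acting on the canonical integer encoding of elements (a genuine field homomorphism only exists between fields of equal characteristic, so an encoding-level conversion seems to be the intent). The function name and the [0, order) encoding are assumptions.

    # conv_{i,j} on integer-encoded field elements in [0, order).
    def conv(value: int, order_i: int, order_j: int):
        if order_i <= order_j:
            # Widening: the encoding carries over unchanged.
            return value
        # Narrowing: reduce into the target range and return an F_2
        # overflow bit (1 if the value did not fit, 0 otherwise).
        overflow = 1 if value >= order_j else 0
        return value % order_j, overflow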

In general, F_n is the finite field of order n. For example:

  • F_2 is the smallest finite field, with only two elements (the binary field; one bit).
  • F_{2^{32}} is the finite field with 2^{32} elements; its elements fit in a uint32, though field addition is carry-less (XOR) rather than wrap-around machine addition.
  • F_{2^{256}} is the finite field with 2^{256} elements; its elements fit in a uint256 as used in Ethereum, with the same caveat.

As actually indexing finite fields by their orders would not be space-efficient, we instead index by prime, followed by power (e.g. (2, 1), (2, 32), (2, 256) for the examples above).
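
For illustration, a small (hypothetical) helper recovering this (p, k) index from a field order; in practice the pair would be supplied directly rather than recovered by factoring.

    def order_to_index(n: int) -> tuple[int, int]:
        # Return (p, k) with n == p ** k, for prime-power n >= 2.
        for p in range(2, n + 1):
            if n % p == 0:            # the smallest divisor of n is prime
                k = 0
                while n % p == 0:
                    n //= p
                    k += 1
                if n != 1:
                    raise ValueError("order is not a prime power")
                return p, k
        raise ValueError("order must be at least 2")

    assert order_to_index(2) == (2, 1)
    assert order_to_index(2 ** 32) == (2, 32)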

We then index the six operations above from 0 through 5, resulting in a final table as follows:

slot 1   slot 2   slot 3   slot 4   op
p        k        0        0        0_{F_{p^k}}
p        k        0        1        +_{F_{p^k}}
p        k        0        2        -_{F_{p^k}}
p        k        0        3        1_{F_{p^k}}
p        k        0        4        *_{F_{p^k}}
p        k        0        5        =_{F_{p^k}}
p_1      k_1      p_2      k_2      conv_{{p_1}^{k_1}, {p_2}^{k_2}}

As the space of potential primes p and powers k is infinite, we then map these four-slot indices into a known finite field F_h with a hash function hash, and use e.g. hash(p, k, 0, 0) as the opcode (here, for 0_{F_{p^k}}).
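
A sketch of this derivation, using SHA-256 and a fixed-width byte encoding as stand-ins; the proposal only requires some agreed hash into F_h, so both choices here are assumptions.

    import hashlib

    def field_opcode(p: int, k: int, a: int, b: int) -> int:
        # Encode each slot as 32 big-endian bytes (assumes each < 2^256);
        # fixed-width encoding keeps the tuple -> bytes mapping injective.
        data = b"".join(x.to_bytes(32, "big") for x in (p, k, a, b))
        # Interpret the digest as an element of F_h (here: a 256-bit integer).
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    # e.g. the opcode for +_{F_{2^32}} would be field_opcode(2, 32, 0, 1).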

Storage and resource I/O operations

  • RESOURCES_BY_INDEX { index_function }
    Reads resources in the history at execution time by the specified index function. Typically, the index functions allowed will be very restricted, e.g. current unspent resources of a particular kind. (This is used e.g. to increment counters post-ordering without conflicts.)
  • READ_STORAGE { content_address of type F_h }
    Reads the global content-addressed storage at the specified address. Storage not accessible to the machine performing the read is treated as non-existent.
  • WRITE_STORAGE { data } { deletion_criterion }
    Writes to content-addressed storage locally, on whatever machine is executing. The deletion criterion is a predicate that describes when the data can be deleted - e.g. after a certain period of time (local clock), after a signature by another party, etc. Typically, the deletion criteria allowed will be very restricted, e.g. within a certain time bound only.
  • CONSUME_RESOURCE { resource_address }
    Consumes the resource at the specified address, adding it to the input resource nullifier set.
  • CREATE_RESOURCE { resource_data }
    Creates a new resource with the provided data, adding it to the output resource commitment set. (All five operations are sketched as a host interface below.)
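
A minimal sketch of these operations as a host interface. Every name and type here is illustrative, since the proposal leaves the concrete formats of index functions, deletion criteria, and addresses open.

    from typing import Any, Callable, List, Optional, Protocol

    Noun = Any        # stand-in for a Nock noun
    FieldElem = int   # stand-in for an F_h element (e.g. a content address)

    class HostIO(Protocol):
        def resources_by_index(self, index_function: Callable) -> List[Noun]:
            """RESOURCES_BY_INDEX: read resources in history via a (restricted) index function."""

        def read_storage(self, content_address: FieldElem) -> Optional[Noun]:
            """READ_STORAGE: fetch from content-addressed storage; None if inaccessible."""

        def write_storage(self, data: Noun, deletion_criterion: Callable) -> FieldElem:
            """WRITE_STORAGE: store locally; the criterion says when data may be deleted."""

        def consume_resource(self, resource_address: FieldElem) -> None:
            """CONSUME_RESOURCE: add the consumed resource to the input nullifier set."""

        def create_resource(self, resource_data: Noun) -> FieldElem:
            """CREATE_RESOURCE: add a new resource to the output commitment set."""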

Cost functions

  • Finite field operations have fixed costs, set per field
  • Simple Nock opcodes have fixed costs
  • Tree indexing and modification have costs proportional to the number of branch traversals in the tree access
  • Memory copying (including evaluation) - ? (discussed below; the settled categories above are sketched after this list)
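
Here is a rough sketch of how the settled categories might look as a cost schedule; every number below is a placeholder, not a proposed value.

    # Illustrative cost schedule; keys and numbers are placeholders.
    FIELD_OP_COST = {
        (2, 1): 1,     # F_2 operations
        (2, 32): 1,    # F_{2^32} operations
        (2, 256): 8,   # F_{2^256} operations
    }

    NOCK_OPCODE_COST = {op: 1 for op in range(12)}  # Nock 4K instructions 0-11

    def tree_access_cost(axis: int) -> int:
        # One unit per branch traversal: the depth of the axis, i.e. its
        # bit length minus one (axis 1 is the root, at depth 0).
        return max(axis.bit_length() - 1, 0)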

My primary question right now is around the cost semantics for memory copying, which happens exclusively (?) with 2, Nock’s evaluation instruction. It would certainly be safe to charge proportional to the size of the noun copied. I wonder whether this would be wildly impractical, though, since it seems like using Nock this way would require e.g. copying the standard library into everything. @ray @mariari curious for your thoughts here - this is the main concern I have remaining about using Nock in this way.
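
For concreteness, here is the "safe" option as code, with nouns modeled as nested 2-tuples (atoms as ints). Whether charging per copied cell like this is practical, versus an interpreter avoiding the copy (and hence the charge) via structural sharing, is exactly the open question above.

    def noun_size(noun) -> int:
        # Total count of atoms and cells in a noun (nested 2-tuples).
        if isinstance(noun, tuple):
            head, tail = noun
            return 1 + noun_size(head) + noun_size(tail)
        return 1  # atom

    def eval_cost(subject, formula, cost_per_unit: int = 1) -> int:
        # Charge a Nock 2 in proportion to the size of what it copies.
        return cost_per_unit * (noun_size(subject) + noun_size(formula))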