Hello everyone, I want to post Engineering's thoughts on what a client is and what client side proving ought to be for both the v0.1 devnet and the v0.2 devnet.
A Bit About Clients
I don't want to use the word client to describe "lightweight nodes"; if that's what we mean, we should just call them nodes, since a lightweight node is simply one kind of node.
Instead, I want to use client as a term we can use to distinguish parts of an Anoma implementation. If we don't do this, I believe we will mix up the transaction life cycle across many components, which I believe has already led to parts like the Identity aspect of Anoma going neglected in the life cycle of the Anoma system.
For this, I’d like to define the following terms:
The Anoma node is the replicated transaction subsystem of Anoma.
- This includes things like: consensus, transaction candidates, ZKVM verifiers.
The Anoma client is a full environment for operating around the Anoma Node.
- This includes things like: Anockma, the ZKVM, local storage, ZKVM proving, etc.
These definitions should be iterated upon, but they should give a feeling for how these components interoperate.
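Purely as an illustration of this split (the module names below are hypothetical, not the actual codebase layout), the division of responsibilities could be pictured as:

```elixir
# Hypothetical module layout; illustrative of the node/client split only.
defmodule Anoma.Node do
  @moduledoc """
  Replicated transaction subsystem: consensus, transaction candidates,
  and ZKVM verification of submitted proofs.
  """
end

defmodule Anoma.Client do
  @moduledoc """
  Full environment for operating around the node: Nock evaluation,
  local storage, ZKVM proving, and identity management.
  """
end
```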
A good diagram can be found in our client vs node architecture meeting.
Here we lay out which component belongs where and how they interact with the system. Something important to note is that under this paradigm it is very obvious where we can add Identities and where they belong in the transaction life cycle: the client workflow mostly comes in the form of trying to create transactions to send to the node, with a different set of concerns and data available.
A Real Plan for References in Anoma
References show up quite a bit in Anoma (in the label, logic proofs, and value, and in an Action's app-data).
However, what work would it take to actually support references?
@cwgoes points out various requirements in the OP; however, I don't believe they accurately account for the work that is required to support this feature.
In Engineering we made a Wardley map of what is required to get a referenced resource machine (RRM).
Values to the left are less specified and more experimental, with most of the features here being in their genesis phase of research and development.
So if we traverse this map, we notice there are a few key areas of development that need to happen:
- We need the construction of a client side vm.
- We need to specify how and when dereferencing happens (hence the dereferencing protocol).
For now let us focus on the first chunk of nodes. These nodes deal with the Client Side VM.
The Client Side VM is simply an execution environment in which the nock function runs. The main thing this environment has to provide is Client Side Scry to any given Nock code.
This part is important: how do we even get Client Side Scrying? Well, we first have to develop Client Storage. This is smart storage that can hold a few different things (a minimal sketch follows the list below):
- Hashed Blobs
- Identities (these are the user's identities; we don't want to gossip them around on the node side)
- Random user data (the client can act as a sort of "wallet"; further, private local code can be stored here if so desired).
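To make this concrete, here is a minimal sketch of what such client storage, and the scry hook it enables, might look like. The module and function names (`Client.Storage`, `Client.Scry`, etc.) are assumptions for illustration, not the actual API:

```elixir
# Sketch of client storage, assuming content is addressed by sha_256(data).
defmodule Client.Storage do
  use Agent

  def start_link(_opts \\ []) do
    Agent.start_link(fn -> %{blobs: %{}, identities: %{}, user_data: %{}} end, name: __MODULE__)
  end

  # Hashed blobs: stored under sha_256(data), mirroring the node-side key scheme.
  def put_blob(data) do
    key = :crypto.hash(:sha256, data)
    Agent.update(__MODULE__, &put_in(&1, [:blobs, key], data))
    key
  end

  def get_blob(key), do: Agent.get(__MODULE__, &get_in(&1, [:blobs, key]))

  # Identities stay local; they are never gossiped to the node side.
  def put_identity(name, identity),
    do: Agent.update(__MODULE__, &put_in(&1, [:identities, name], identity))
end

# The Client Side VM could expose scrying to Nock code as a lookup into this
# storage, failing (or falling back to the indexer) when the data is missing.
defmodule Client.Scry do
  def scry(hash) do
    case Client.Storage.get_blob(hash) do
      nil -> {:error, :not_found_locally}
      data -> {:ok, data}
    end
  end
end
```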
Further, we can imagine a scenario where the user holds references to data they do not have locally. To handle these scenarios properly, we must have read-only transactions, that is to say transactions that bypass ordering and do not mutate state on the node, so that currently unknown data can be properly synced.
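A sketch of how a client might use such a read-only transaction to sync missing referenced data, assuming the `Client.Storage` sketch above and a hypothetical `Anoma.Node.read_only_tx/1` entry point (not a real call):

```elixir
# Sketch only: read_only_tx/1 stands in for a transaction that bypasses
# ordering and mutates no node state, returning the blob at a given key.
defmodule Client.Sync do
  def fetch_missing(hashes) do
    hashes
    # Skip anything we already hold locally.
    |> Enum.reject(&Client.Storage.get_blob/1)
    |> Enum.map(fn hash ->
      case Anoma.Node.read_only_tx({:read, hash}) do
        {:ok, data} -> {:ok, Client.Storage.put_blob(data)}
        {:error, reason} -> {:error, {hash, reason}}
      end
    end)
  end
end
```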
However, the Client Side VM is only half of the equation; we also need to think about when references get dereferenced, from both the code side and the VM side (client and node).
Namely, should data be Manually Dereferenced or Automatically Dereferenced? Most GC'd languages just dereference automatically, with some ability to peek under the hood at references. However, I think what is wanted here depends on the programming language model we want, and I suspect the answers for Juvix and AL may diverge. What won't change is the question of how and when we handle accessing the data, and whether the data we send to the node should still contain references.
An aside: further, it's customary for referenced types to keep their types; thus, if the label of a resource is of type `X`, a reference to it is a `Reference X`, not a `Field` element.
References cannot exist by the time one calls the prove function in the ZK case, as you need all the data upfront, and thus the system should do zero fetching by submission time. For the "transparent" case we have a bit more flexibility; however, we should not let this fact blind us to the amount of work it takes to properly deal with these questions.
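One way to make the ZK constraint explicit: the client can check that no references remain before calling prove, since fetching at submission time is not an option. This is only a sketch; `Tx.remaining_references/1` and `ZKVM.prove/2` are assumed names:

```elixir
defmodule Client.Prover do
  # Sketch: guard the ZK proving path so it fails loudly if any reference
  # survives to prove time, rather than failing inside the prover.
  def prove_zk(tx, witness) do
    case Tx.remaining_references(tx) do
      [] -> ZKVM.prove(tx, witness)
      refs -> {:error, {:unresolved_references, refs}}
    end
  end
end
```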
On what not having references gets us.
Now let us imagine a world where we did not have references in the resource machine. In this case, what changes?
I’d argue not much.
The interface to the resource machine may not have to change much at all! Namely, we can fake references in Juvix, and the user would form transactions as they do now, just with fake references! We can already fetch data via the indexer, which is what we'd have to do anyway if no client-side storage is available.
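For example (purely illustrative; not a proposed API), a "fake reference" can simply carry the data inline while presenting a reference-shaped interface:

```elixir
# Sketch of a "fake reference": it keeps the referenced value inline, so
# dereferencing is a no-op and no client-side storage is needed.
defmodule FakeReference do
  defstruct [:value]

  def new(value), do: %__MODULE__{value: value}

  # Dereferencing just returns the inline value; a real reference would
  # instead look a hash up in client storage or via the indexer.
  def deref(%__MODULE__{value: value}), do: value
end
```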
On the Modest Proposal for Preventing the References of Ill Conceived Plans from Being a Bother to The Applications or Anoma, and for Making them Beneficial to the Public.
Now, since we are talking about what we can do for the devnet, let us consider a proposal put forward by @ArtemG and accepted as somewhat uncontroversial (a small sketch of the resulting data layout follows the list):
- References are used for/in `label`, `logic proofs`, `value`, and (`app_data` (?)) [let's get a full list of things to be referenced]
- The values of these references are kept in two possible places:
  - `app_data`, in the format `{data, :keep}` or `{data, :discard}`
  - storage, under the key `sha_256(data)`
- When a reference is to be accessed, it can hence be accessed either:
  a. from `app_data`
  b. scried from storage (this can only happen inside of a TX during execution, or through indexing)
- If a transaction passes verification checks, etc., then we take all elements of the form `{data, :keep}` in the `app_data` field and write them at the timestamp of the transaction under the key `sha_256(data)`
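To spell this out in code form, here is a sketch of the proposed data layout and write-back rule. The module and function names are assumptions for illustration only:

```elixir
# Sketch of the proposed app_data entries and the storage write-back rule.
defmodule ReferenceProposal do
  # Referenced values live in app_data as either {data, :keep} (persist after
  # execution) or {data, :discard} (drop after execution).
  def storage_key(data), do: :crypto.hash(:sha256, data)

  # After a transaction passes verification, every {data, :keep} entry is
  # written at the transaction's timestamp under the key sha_256(data).
  def writes_for(app_data, timestamp) do
    for {data, :keep} <- app_data do
      {storage_key(data), timestamp, data}
    end
  end
end
```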
On Refusing the Modest Proposal
Let us reflect on how this works in practice, given that none of the work in the Wardley map is currently done and we want proper error messages for the user.
Imagine that the code below (written in Elixir) is Juvix code.
```elixir
# First we construct our transaction, using data we already indexed.
# Note: this requires separate compilation
def main1(resources, transactions, etc...) do
  createMyTransaction(resources, transactions, etc...)
end

# After we create our initial transaction, called myTransaction from now on,
# we have to get all the references from the indexer...
# (Defeats the point of refs anyway, as we have to grab it all, always)
# Meaning we call out to JS or Elixir to then scry/index it all.
def main2() do
  grabAllReferences(myTransaction)
end

# After we ask the indexer for all the hashes,
# we now put together our transaction to see if it proves.
# Now we can submit this online.
# We could also submit `myTransaction` as is, since
# transparent proofs aren't real, and the node will have to deref itself,
# but we do this so we know our proof is true.
# Note: this requires separate compilation
def main3(references, public, private) do
  construct_fullyderefed_tx(myTransaction, references)
  |> prove_transaction(public, private)
end

@spec withLookedupData([any()]) :: %{binary() => any()}
def withLookedupData(answers) do
  myTransaction
  |> grabAllReferences()
  # This also takes myTransaction, as it could itself hold answers to refs
  |> correlateDataWithHashes(answers, myTransaction)
end

# Note that in our prove context we either need to work on a transaction
# with a context (i.e. the answers, always), or replace the refs.
# We can't actually replace the refs, because Juvix is typed and
# it will complain.....
# YOU WILL NEED MONADS TO PASS THIS AROUND TO CHECK oOoOoOoO
# State Monad, plus Random if you want
# This all has to be cast in Juvix
@spec construct_fullyderefed_tx(Tx.t(), %{binary() => any()}) :: {Tx.t(), %{binary() => any()}}
def construct_fullyderefed_tx(trans, answers) do
  ...
end

def grabAllReferences(transaction) do
  []
  |> addReferencedLabels(transaction)
  |> addReferencedData(transaction)
  |> addReferencedLogicProof(transaction)
  |> addReferencedAppData(transaction)
end
```
This computation is done in stages:
- We compute our transaction; the queried data may contain refs. This is one compilation in Juvix.
- We then get all our refs in Juvix, to send to the indexer. This requires some JS or Elixir in between to smooth it over. Note that the data we get from the indexer may itself contain references, so we'd have to query again and again until no references are left.
- Now we have to compile the Juvix a second time, to correlate the transaction's references with their data. We can do this via a monad that all user code must run in, to ensure correct computation (a sketch of such a context follows this list).
- At this point we need to submit the transaction along with its environment, as the Juvix code does not resolve references while running, meaning it must submit back all the data it gathered either way.
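In Elixir terms, the "monad" here amounts to threading an environment of hash-to-data answers through every step that might need to dereference. A state-monad-style sketch, with assumed names:

```elixir
# Sketch: a context mapping reference hashes to their resolved data, threaded
# through user code so that lookups never escape the gathered environment.
defmodule DerefContext do
  defstruct answers: %{}

  def new(answers), do: %__MODULE__{answers: answers}

  # Look a reference up in the context; the caller must handle a miss,
  # since no further fetching is allowed at this stage.
  def lookup(%__MODULE__{answers: answers} = ctx, hash) do
    case Map.fetch(answers, hash) do
      {:ok, data} -> {:ok, data, ctx}
      :error -> {:error, {:missing_reference, hash}}
    end
  end
end
```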
Note that we have to completely dereference the transaction code during proving if we want to be able to test the transaction offline (ensuring verification will pass, etc.).
These problems arise on the Juvix side, since it needs to do some work to prove offline.
An Engineering view of user flow
Further, while having these discussions, we made a Wardley map covering what we think the user flow requires:
It's by no means perfect, but it's a starting point toward getting a good user flow working.