Questions I want to see researched

Preamble

There are many interesting questions in Anoma that have gone unanswered. Below is a small collection of them, mainly focusing on the long-term burning questions I need answered in order to better outline the long-term system of Anoma.

Terminology

  • Message Send :: To invoke a method on an object.
  • Method :: An operation on an object. Methods are themselves objects.
  • Chambers :: Private controller domain
  • Court :: Shared controller domain between many parties
  • Domain :: Either the court or chambers
  • Have Control :: To be the dictator of a domain, able to force changes at will
  • CLOS :: Common Lisp Object System
  • Erlang :: The Erlang programming language™
  • Upgrade :: To change code from one state to another
  • Engine :: An Actor
  • Overload :: The ability to change the behaviour of a method in some
    manner

Problems we need to think through / specify

  1. What times of computation are there?
  2. What is the object structure of courts and chambers?
  3. How do structures get upgraded within a controlled domain? Do we steal what CLOS does or what Erlang does? How do we initiate updates for domains we don’t control?
    1. If we have a way of initiating code updates on, say, other courts, does this work via message send?
    2. Can we initiate consensus upgrades this way?
  4. How do consensus upgrades work, and when does the new code take effect? Is this an overloadable method on the court?
  5. What is the boundary between court and chambers?
  6. What is the interface for engines in the system?
    • What properties do we wish to gain out of them?
    • How do we synthesize event-based programming with them? Do we steal Urbit’s design a bit?
  7. Can we realize our long lived actors as objects? What about them do
    we wish to prioritize?
  8. How do we realize an OO protocol on top of the RM model?
    • My first stab at it was here:
      On the Will to Adapt: A Resource story
    • If everything is a resource computation, then =(+ 1 1)= is a transaction, where we get back the resource/object =2=. In this, do we just return all new committed objects!?
    • Do we make the model declarative? If so, then do we get solvers for free by unification!?!? Further, should we go all in on the declarative OO model?
  9. Can we realize Identities in system? What is the interface like?
  10. Do we do RM transaction processing in the chambers? How does it
    relate to our current work? How does it relate to our Object story?
  11. What is the boundary between Objects as resources, and the
    immutable data in storage?
    • I got a partial answer from Ray before: when we scry the data, since we give the type, we inherently wrap it in an object/resource interface.
    • Another method could be we interpret raw data as an object in a
      wrapper.
  12. How do we ensure the code that each court runs is the same!? How
    can we trust the other parties to run exactly the same code?
    • I’d imagine it’d be a pain to maintain forks if there are frequent upgrades, but if there is money on the line I’d imagine this would be an issue.
  13. What are the major flows of the versions?
    • Is every engine encompassed in some kind of flow?
    • Namely, is the user flow and all components thought of for a particular release?
  14. What is interop like within the various resource machines?
    • This is a very broad question
  15. How does indexing work for shielded state?
    • I believe Xuyang did research on this; it would be good to get it written up
  16. Can we define what an intent is?
    • I like the idea of coalgebras, but it should probably be defined in the specs. I think this may have interesting synergy if we go with a declarative model

Thanks for writing this up. I’m going to start with three clarifying questions:

  1. For the purpose of this topic, how would you define “an object”? I need a definition which does not simply refer to what Erlang or Common Lisp does - what are the essential features of “an object” which you want Anoma to be able to support?
  2. What is the best existing example of the “declarative OO model”?
  3. What do you mean by “Urbit’s event-based programming design”?

Are these intentionally circular references?

Regarding 14: you might be interested in reading this: RM interoperability questions and concerns
Regarding 15: you might be interested in reading this: Shielded State Sync - Anoma | Research & Development Forum

Question 1

Glad you asked. I will take my definition mostly from the paper above along with minor clarification; the paper takes a denotational approach.

Here is a quote from the summary on the features of Objects:

“An object is a value exporting a procedural interface to data or behavior. Objects use procedural abstraction for information hiding, not type abstraction. Objects and their types are often recursive. Objects provide a simple and powerful form of data abstraction. They can be understood as closures, first-class modules, records of functions, or processes. Objects can also be used for procedural abstraction.” (3.10)

I can thus summarize it as follows:

  1. Objects only know of themselves (they are autognostic)
  2. Objects are higher order values (it is the same as passing around closures!). This is often referred to as late binding or dynamic binding.
  3. Objects use interface abstraction for their interface (not type abstraction, i.e. if we have a set interface, we can have set objects containing many different kinds of sets. See the paper’s definition of sets for an example; I can also post my Prolog code if so desired). That is to say, object interfaces do not prescribe a specific representation for values. A small sketch follows this list.
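
To make point 3 concrete, here is a minimal sketch in Common Lisp (my own toy names, not code from the paper): set objects as closures sharing one procedural interface, including a set given purely by its characteristic function, which stores no elements yet interoperates with the others.

(defun make-insert (s n)
  "Adjoin N to the set object S; the result is itself a set object."
  (if (funcall s :contains-p n)
      s
      (labels ((self (msg &rest args)
                 (ecase msg
                   (:contains-p (or (eql (first args) n)
                                    (funcall s :contains-p (first args))))
                   (:insert (make-insert #'self (first args))))))
        #'self)))

(defun make-empty ()
  "The empty set as an object: a closure answering the set interface."
  (labels ((self (msg &rest args)
             (ecase msg
               (:contains-p nil)
               (:insert (make-insert #'self (first args))))))
    #'self))

(defun make-evens ()
  "A set given by its characteristic function: no stored elements."
  (labels ((self (msg &rest args)
             (ecase msg
               (:contains-p (evenp (first args)))
               (:insert (make-insert #'self (first args))))))
    #'self))

;; (funcall (funcall (make-empty) :insert 3) :contains-p 3) => T
;; (funcall (make-evens) :contains-p 4)                     => T

Note that make-insert works on either representation without ever seeing its internals, which is exactly the autognostic property from point 1.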

I believe this should be the “core features” though this isn’t everything I wish to support or go for.

As for what I want us to support: I want the full Meta Object Protocol (MOP), as the reflective features and the control of allocation and creation are highly important. We will see use of it when we wish to query (blockchain word: index) for certain details about data. We should not really pick and choose features here, as that leaves us with a more broken system and a harder time realizing the entire system of Anoma.

Thus here are some extra features of this:

  1. We should support inheritance, but it ought to be customizable by the user
  2. We should be able to reflect on every part of the object (see the sketch after this list)
  3. Objects should be redefinable within a domain (reflection will help with this)
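
As a small illustration of point 2, and assuming the closer-mop portability library, reflecting on the structure of an arbitrary object is short:

;; Minimal reflection sketch, assuming the closer-mop library.
(defclass account ()
  ((balance :initarg :balance :accessor balance)))

;; List the slot names of any object's class, with no advance
;; knowledge of that class.
(defun slot-names (object)
  (mapcar #'closer-mop:slot-definition-name
          (closer-mop:class-slots (class-of object))))

;; (slot-names (make-instance 'account :balance 10)) => (BALANCE)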

Thankfully there are books and design documents on this subject and it can be studied.

I can go on for a while about nuances and what “message sends” are, but I’ll stop myself while I’m ahead.

Some choice quotes from the paper can be seen below, please do read the paper in earnest:

  1. “An essential observation is that object interfaces do not use type abstractions: there is no type whose name is known but representation is hidden” (3.1)
  2. “Object interfaces are essentially higher-order types, in the same sense that passing functions as values is higher-order.” (3.1)
  3. “Objects are almost always self-referential values, so every object definition uses µ” (3.2)
  4. “However, inheritance will not be used in this section because it is neither necessary for, nor specific to, object-oriented programming” (3.2)
  5. “Just as the ADT version of integer sets had two levels (set implementations and set values), the object-oriented version has two levels as well: interfaces and classes. A class is a procedure that returns a value satisfying an interface.” (3.2)
  6. “This means that the union method in a set object cannot know the representation of the other set being unioned.” (3.3)
  7. Objects are autognostic: “An object can only access other objects through their public interfaces.” (3.3)
  8. “Object interfaces do not prescribe a specific representation for values, but instead accept any value that implements the required methods. As a result, objects are flexible and extensible with new representations” (3.4)
  9. “abstract data types have a private, protected representation type that prohibits tampering or extension. Objects have behavioral interfaces which allow definition of new implementations at any time.” (3.4)

Question 2

There are probably others, but I was experimenting with this last weekend: it is an OO Prolog library that adds object-oriented data abstractions to Prolog.

Question 3

https://docs.urbit.org/system/kernel/arvo

I am talking about Urbit’s Arvo system; they independently recreated genservers but based on a different set of primitives. @ray is the best person to explain it exactly, but it seems directionally correct from what I do understand.


No, when I wrote the Chambers and Court definitions I was thinking in terms of the controllers report, as researchers like to use the word domain IIRC. And when I wrote Domain I was using it as a shorthand for the definition I gave above. All the writing in the paper outside of the Court and Chambers definitions refers to the one I posted.

These are good threads, thanks for bringing them to my attention.


Thank you for the paper, I found it a clear, helpful, and entertaining read. I have never seen that definition of objects before, but it makes a lot of sense.

Just as a note, the resource machine – as it stands – does not enforce this; resources are able to inspect the representation of other resources involved in the same action. They do not, however, have to – so an object system written on top of the resource machine could enforce this restriction.

I do not think that this will be a problem, but I will note that the resource machine does not allow us to escape the fundamental efficiency tradeoffs of late binding.

I think that this is the main part that we need to figure out (with respect to how to implement objects on the resource model). This question is related to the question of “standardized resource semantics” discussed here, which I think we could alternatively call “methods with properties”.

For example, we want to be able to define an interface (class?) with a view function:

class OwnedResource r where
  owner : r -> ExternalIdentity

and the property that the resource cannot be changed in any way that the owner has not explicitly authorized (which the resource logic has to “prove” in some sense, see the linked thread for further context). Having interfaces with provable properties also seems to me like it solves a major challenge for object-oriented programming which is mentioned in that paper:

This is basically the same problem as the one I describe in that thread, and the ideal future solution which I propose there is no more and no less than a distributed-operating-system-compatible form of a behavioral specification: in our case, a specification of how the state of the system itself can evolve over time, which is a little different than the cases considered in the paper. One might say that our objects are persistent.

In terms of the question of mathematical representation discussed in the paper, the resource machine should be perfectly capable of representing structures by their characteristic functions, although it doesn’t change the tradeoffs of that representation (e.g. the impossibility of iterating over sets represented by characteristic functions). This will perhaps make the work of solvers more complex (as compared to an algebraic representation), but it will also perhaps allow different (and perhaps at times more efficient) formulations of CSPs; I’m not sure yet.


As for the MOP:

I think this is purely a “programming system choice” and is fully compatible with (but not implemented by) the resource machine.

This sounds like more of a question of practical engineering and tooling. Seems possible to me.

I do not understand this requirement; can you further define or link to what you mean here?


One further thought: I would consider objects as they are defined in this paper to be purely a form of data abstraction, not a computational model. Contrast this to our concept of engines (as defined in the specs), which is a computational model and is not opinionated about the form of data abstraction. It is indeed a computational model that features sending messages around, which may sound superficially similar to “sending messages to objects”, but to me the choice of computational model and the choice of a form of data abstraction are independent. It may be the case that particular forms of data abstraction are more or less efficient to implement on particular computational models, but I don’t think this will matter, because the principal constraint in our choice of computational model is correspondence to physical reality, and that constraint is sufficient to fully determine the contours of the engine system.


I’m going to start addressing some of these questions in individual responses. Many may merit their own topics, but we can split those out gradually.

This question can be asked from many different perspectives. From the perspective of an intent and subsequent transaction, I’d say:

  1. Chambers (client) computation to translate the user intent into an Anoma network intent.
  2. Many different times of potential computation on solvers trying to match the intent as it flows around the network.
  3. Assuming that the intent is eventually matched, execution of the resulting transaction function on the appropriate controller, and verification of the transaction.

So, roughly, for a transaction, we might class the times of computation as:

  1. “Chambers-time computation”
  2. “Solver-time computation” (composed of many sub-times)
  3. “Court-time computation”

and they always happen in that sequence. Is this the flavor of question you wanted to ask?

n.b. I think we should pick terms other than “court” and “chambers”; they strike me as too… semantically loaded, at least for our own communications. I would prefer something like “local domain” and “foreign domain”, which also conveys the relativity that I think we want here. Thoughts?

What structures are you referring to here? Are you talking about resources (or objects implemented using resources) which are controlled by a particular domain? Are you talking about the rules for state updates in that domain itself (e.g. the RM version, or something related)? The answers will be different for different structures.

In general, I’d say that we don’t initiate upgrades for domains we don’t control, unless I misunderstand the question somehow (and in that case, what did you have in mind)?

I believe we can satisfy this for a given type system. I want AL to support pluggable types in time. Something important is that the type checker must be a function in system, so if you have different type checkers with different properties that you can attach to methods, then you could in practice query for all implementers of the interface that satisfy the specific type check you care about:

Namely the images under section 5: Standardization of Application-related Data - #19 by mariari

I posted some example queries from GT there; just imagine another filter for satisfying some property.

I mention these as they are required to have a live system of Anoma without having to define extra out-of-system constructs. Just a note on practical requirements that it is important we try to realize. A local sketch of such a query follows.
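
As a hedged illustration of what such a query could look like (a purely local sketch using closer-mop; an in-system version would run against the databroker instead): enumerate the classes that implement a generic function, keeping only those that pass some checker predicate.

;; Sketch: all classes specializing the generic function named GF-NAME
;; whose class object satisfies CHECK (a stand-in for "passes the
;; attached type check").
(defun implementers (gf-name check)
  (loop for method in (closer-mop:generic-function-methods
                       (fdefinition gf-name))
        for spec = (first (closer-mop:method-specializers method))
        when (and (typep spec 'class) (funcall check spec))
          collect (class-name spec)))

;; Reusing the toy account class from earlier in the thread:
;; (implementers 'balance (constantly t)) => (ACCOUNT)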

These protocols are specified exactly, and we will want all the features and maybe a few extra protocols for our own specific needs. It’s good to get in the habit of understanding how things like the MOP compose, as we’ll want a similar design ethos for our own extra needs.

I mean: we should be able to add new methods to an existing object; we should be able to add new slots and have all objects in the domain be upgraded with the new values. Etc.

Literature which talks about upgrades:

  1. The CLOS/MOP spec (a small sketch of its upgrade mechanism follows this list)
  2. Erlang’s upgrading mechanism (the code_change callback)
  3. Urbit’s +on-load
  4. I believe some Smalltalk book (I don’t know which book of theirs off hand, maybe the blue book?)
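
For flavor, a minimal sketch of the CLOS mechanism from item 1: redefining a class upgrades existing instances, and update-instance-for-redefined-class is the user-customizable hook, roughly analogous to Erlang’s code_change.

;; Standard CLOS: instances are upgraded when their class is redefined.
(defclass point () ((x :initform 0)))
(defvar *p* (make-instance 'point))

;; Redefine the class with a new slot; live instances update lazily.
(defclass point () ((x :initform 0) (y :initform 0)))

;; The hook lets users customize how old instances are migrated.
(defmethod update-instance-for-redefined-class :after
    ((instance point) added deleted plist &rest initargs)
  (declare (ignore deleted plist initargs))
  (when (member 'y added)
    (setf (slot-value instance 'y) 0)))

;; (slot-value *p* 'y) => 0 (the upgrade runs when *p* is next touched)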

I agree with pluggable type(system)s and having the ability to call the typechecker in-system; that is part-and-parcel of the interface-adherence-proofs I described in the other topic. Many type systems are undecidable, though, so I do not think that we could guarantee in general that an indexer will return all valid implementations of an interface – it could only return all known implementations of an interface for which we have a valid proof.

What do you mean by this? Is there a clean definition of the MOP in mathematically precise language (in Lisp itself is fine actually if there’s a simple one)?

Adding new methods in the sense of imperative computation doesn’t even require an explicit state change with objects-as-resources, since objects-as-resources need not fix a set of methods. Changing the state transition invariant associated with an object-as-resource does require a state change, but is certainly possible. We might want to think about how objects might want (or not want) to limit how their state transition invariant (resource logic) can be changed.

One thing to keep in mind here is that we are dealing with permissioning in a distributed, multi-user shared-state system where the users do not, in general, trust one another – it is very, very different from running a local REPL on your personal machine. Whatever kinds of updates we allow will always entail trust assumptions, and these must be explicit.

Well, that depends on the query you sent to the databroker; you can always start OR-ing and AND-ing clauses for primitives you know are fine or that someone has decided are valid. Many queries are fine just knowing who responds to some kind of messages; stricter guarantees about behaviour would be nice if possible, but in practice there are many ways around it.

Btw, why do we keep using the word indexer? We aren’t indexing into a table; does it offer any semantic advantage over the word “query”, which I’d argue is much more accurate to what is actually happening here? What is the role you imagine for these people, and what are the requirements? Is it compute power? Is it storage power?

Namely, I think a service could be that some courts or some people’s chambers will offer a databrokerage service, where you can send in queries to them; they can tell you what queries they have already precomputed, but you can send in a lambda to search particular datasets you may not have locally for whatever reason. This would look the same as querying for information locally, just on a non-local entity (much like how I search Google).

A lot of what is important to copy over is how much consideration is given to designs that are flexible and allow work at multiple layers. In essence it is a system, and a good system at that.

I’m not sure of a small mathematical definition; as it’s a system, it’s many components coming together to make a whole. However, here might be some leads:

https://franz.com/support/documentation/mop/concepts.html

If you are wanting to define your own, the book “The Art of the Metaobject Protocol” shows you how to define it precisely in a bunch of code.

Well, new methods mean the dispatch function of the object-as-resource needs to have an updated reference so it can accept the new message handed to it; i.e., it now accepts a new message. So yes, the state transition invariant does change, as it needs to contend with a new kind of message called on it. This might be handled by a mutable pointer (i.e., we look up methods where the pointer tells us, so the data there changes), so it doesn’t require the object itself to be changed, but something must be changing there to have the object react to a message properly. A toy sketch of that indirection follows.
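
Here is a toy of the mutable-pointer idea (illustrative names only): method lookup goes through a table that the object implicitly points at, so installing a new method changes which messages old objects react to without rewriting the objects themselves.

;; Dispatch through indirection: mutating the shared method table
;; teaches already-existing objects new messages.
(defvar *counter-methods* (make-hash-table :test #'eq))

(defun send (object msg &rest args)
  (let ((method (gethash msg *counter-methods*)))
    (if method
        (apply method object args)
        (error "~S does not understand ~S" object msg))))

(setf (gethash :get *counter-methods*)
      (lambda (object) (car object)))

;; Later: install a brand-new method; existing objects now accept it.
(setf (gethash :add *counter-methods*)
      (lambda (object n) (incf (car object) n)))

;; (let ((c (list 0))) (send c :add 5) (send c :get)) => 5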

Me loading stuff on domains under my control is fine; if you want to update something outside, then send the changes – there is no guarantee that the changes will be accepted, and different courts will decide how they wish to proceed. We should work on permissioning systems in general, with many levels of control that are overridable. It’s why I mentioned the ethos of the MOP design: if we take the same style of design, we can bake in many layers that can be tweaked by users for niche use cases, allowing ultimate control over how the objects we define update.


Is solver time any different from chamber time?

Rather, to put it differently: can we describe the time without making reference to a particular actor in the system (solver computation likely happens in a court or chambers, making the entity not unique to its time)? Further, I can argue that everyone in their own Chambers is a solver; let me post a solution I had my chambers solve.

I want an X and an N such that X > 5 and X and N are used in factorial(X,N), where N is the solution; give me the first 5 solutions!

X = 6,
N = 720 ;
X = 7,
N = 5040 ;
X = 8,
N = 40320 ;
X = 9,
N = 362880 ;
X = 10,
N = 3628800 .
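
(For comparison, the same enumeration written by hand in Common Lisp; the Prolog version above gets the search for free.)

;; Brute-force the stated query: the first 5 pairs (X, N) with X > 5
;; and N = factorial(X).
(defun factorial (x)
  (if (zerop x) 1 (* x (factorial (1- x)))))

;; (loop for x from 6 repeat 5 collect (list x (factorial x)))
;; => ((6 720) (7 5040) (8 40320) (9 362880) (10 3628800))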

Further, I can make another counterexample… Let us think in terms of big data. Let’s say we are getting so much data over the network that the user can’t process it all, so instead of sending the subscription to their local chambers, they send it to a local court they control. Now the data is being fed into 20 of their machines in a cluster, using a consensus algorithm to pick the machine with the lowest load for the data processing, and finally given to the user. Here we have unfinished computation happening across the network, across many machines, but no intent is used; in fact we used many machines to fill some data, but it certainly isn’t a solver!

The problem is: is a cluster of 5 of my computers a foreign domain? I own it, so it’d be a local domain to me! With the court and chambers terms, we isolate the computing power and environment and can use adjectives to be more specific (this is a court owned by identity X; this is a court where identity X is not the judge!). I’m fine moving away from these terms for discussions here, but I’d like some better terms first.


I think we’re roughly aligned here. I’m happy to replace “indexer” with “distributed query system” or something. I’d note however that “indexer” is not an artifact of weird crypto terminological shenanigans, the term just comes from “indexing” in databases, which is indeed what we’re talking about here (the difference between “indexing into a table” and “indexing into resources / historical state” is not important, “a table” isn’t anything specific). Maybe “indexer” as a distinct role is slightly crypto-specific, but the concept itself is not.

Thanks, maybe we can add selections here to a reading group session if you think it’d be useful.

My point is that we can separate – not only in the resource model but also in our object system – what state transitions an object accepts (governed by the resource logic of the associated resource) and how to compute a particular state transition (which doesn’t need to be part of the object’s definition itself at all). We may want the ability to both change the state transition invariant and the ability to define new methods of computing new states, but these are distinct (or at least they can be in our model), which is not the case for an imperative object-system (which interleaves state transition invariants and computation of new states in “message-handlers” without a clear delineation).
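
A sketch of that separation (illustrative, not the actual RM interfaces): the object carries only a predicate over transitions, while any number of external functions may compute candidate new states.

;; The object fixes WHAT transitions it accepts...
(defstruct obj state accepts-p)   ; accepts-p : old-state new-state -> boolean

;; ...while HOW a new state is computed lives entirely outside it.
(defun propose (object compute)
  (let ((new (funcall compute (obj-state object))))
    (if (funcall (obj-accepts-p object) (obj-state object) new)
        (make-obj :state new :accepts-p (obj-accepts-p object))
        (error "transition rejected by the invariant"))))

;; (propose (make-obj :state 0 :accepts-p (lambda (old new) (> new old)))
;;          #'1+)
;; => a new object with state 1; any other COMPUTE function works too.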

I agree with this, I think we might want something like “speculative future histories” of logical state domains (courts, in your lingo), where you can run different transactions locally, then perhaps choose whether or not to actually “apply” them. Note that the results of applying them may not always be the results which you obtained locally, since this is a multi-user system.

From the perspective of a particular intent, which originates from a source chambers (source user/machine), I’d say that it is, yes. There’s no objective categorization of “times” as far as I can see, there are only different perspectives, and how to classify events depends on what perspective you are interested in.

We can describe the time this way but I’m not sure why we would, because that wouldn’t describe any actual interaction pattern that we care about. If this is the kind of time you want to talk about, can you clarify what you’re looking for this set of “times of the system” to do?

Yes, that’s fine, “solver” is just whoever solves, it’s not a specifically permissioned role.

What is this a counterexample to? As mentioned above, I’m not arguing that a transaction-centric perspective is the only way to talk about time in the system, I just used it as example to help elicit what you were looking for. From this response I garner that you’re looking for something … more general? That sounds potentially useful, but I’m still not sure what you have in mind exactly. Do you want to talk about an arbitrary causal DAG of causally related messages or something like that?

As far as I understood per our discussion before, “chambers” meant literally the domain of a physical machine, so (if we replace the word “chambers” with “local domain”) anything else would be a “foreign domain”, including 5 of my computers, yes. If we want terms to describe zones of trust or control that’s a different separation (these are related to but not the same as logical state domains).

Hello all, I am pretty excited about this forum post.

I have been thinking and building around the idea of objects canonically serialized over resources, and the associated system to support this model for a few months now.

Because Anoma is in testnet and items like the data blobs are still in progress, I have just created some working assumptions on the resource model based on the specs, alongside some occasional feedback from the Anoma BD team.

My assumptions on the resource model:

  1. There will be a blob reference to the resource ‘value’ that allows for fungible data. When I send a transaction to consume a resource and create a new resource, I will somehow be able to replace that fungible data with the contents in the ‘extra’ section of the transaction.
  2. Builders run their own Anoma client, and that client can intake service requests. (I made the image below, feel free to correct it)

With these assumptions, I started to reason about objects.

My main goals for an object:

  1. Allow developers to define
    a.) More robust data schema(s) associated with a resource.
    b.) State changes that are dependent on off-chain actions like user input or API calls.
  2. Give developers a consistent way to reason about state in their application, regardless of if the state is occurring on-chain or off-chain.
    a.) Meaning in development, engineers can reference a resource and the change of the resource directly in the application code, making the application feel less segmented.
    b.) Improve end-to-end observability.

My current architecture for an object:

  1. Developers can define object schemas.
  2. Developers can tie universal state changes to resource logic (This is WIP).
  3. The object contains parameters that allow for mapping the object’s relationship to other objects, whether those objects exist as another resource on-chain, or in some arbitrary off-chain source.

Example:
A resource represents an on-chain leaderboard for a game.
That resource has a canonically serialized object, where a developer defines the leaderboard schema as a rank of the top 5 players in the current game session, and the leaderboard updates based on changes to the players’ scores.
Players’ scores are derived from client actions in the game such as hitting a target, healing a team member, winning a round, etc.

Because the resource model cannot intake the player’s actions and reason about how they change state, the object does interpolation, determining how these actions change state, and proxies that new state to the blockchain.

There will exist some model by which the state commitment is binding, so that the blockchain’s role is to validate the state change and update the resource.
There will also exist some model by which the claimed user actions are validated against the semantics of the app, to ensure the actions are valid for the given state of the application.

But overall, the goal for this system is to unify all parts of the application components, and allow developers to treat application state with deference. Builders should encode what they want to build, not worry about how it should execute.

I am still figuring out how to best bind the object with the resource and embed it in the transaction structure of Anoma. Is it best to use the outer and inner layer transaction structure? Extra data in a transaction? Create transaction actions?

I have learned a lot about Anoma from the ART reports, but my ultimate preference is for the Anoma team to tell me where this system fits best. :)


Welcome.

I will answer 2 first, and in the process I hope it also answers 1.

So the Anoma client is an interesting piece of software in this equation. Currently the client is not up to the standards that I hold for it; however, it’s rapidly improving. The client essentially acts like the local Anoma operating system, giving you access to the rest of the system. It’s a Nock-based system which has a lot of interesting properties:

  1. It’s a multi-threaded environment that lets you run long-running processes (not too dissimilar from Erlang’s genservers)
  2. It’s a Turing-complete system, meaning you can run any computation you would like
  3. We imagine it as an event-driven system that gets events from controllers (we can envision 1 controller as being 1 blockchain), or other clients
  4. The environment lets you referentially transparently read data across the system; this is known as scrying. For example, the blob storage you referenced can be directly read via this mechanism, and if you perchance don’t have the data, your client can fetch it from many different nodes and cache the results until you expunge them or some policy you set takes effect.

Thus for your diagram: we do offer these services via protobuf currently; however, we wish to remove that part of the codebase (soon) in favor of offering proper actions via submitting Nock, doing it all from client code that you can run interactively. In fact, a lot of the IndexerService box in your diagram has already been replaced by our read-only transactions that we expose via Nock and users can run via Juvix.

Interesting, what do you mean exactly by a)? I’d assume that an object declaration with its various slots would be the schema for a particular resource.

b) sounds to me like a mixture of computing on one’s own machine and also subscribing to some events over the network (I have a demo that maybe does the former?).

These are good goals, although I’m not sure what on-chain or off-chain means here. At least as I envision things, I imagine a lot of computation will happen in a mixture of the following places:

  1. The client is a good place for multi-threaded computation on one machine
  2. Controllers (think blockchain) with user hackable consensus (the entire controller codebase will be in the same environment, meaning it’s all user code) will be useful for the ability to create consensus that lets multiple machines share computation (sending erlang like genservers over the network to other machines you control based on proof of resource availability)
  3. And the outside world. I believe a lot of computation will continue to happen there and we can marshal data in and out using the tried and true methods of using FFI or putting formatted data out to a port that existing programs can listen to.

Meaning that if we have some system on Unix/Windows, we can have an erlang like genserver that users can spawn and have it subscribe to happenings on specific resources or their changes and then send data out via some unix socket or tcp socket to an existing application that wishes to monitor it.

Yeah, I want this as well. In the ideal future, there will be an editor that is hooked up to the image of Anoma. Ideally it’s written in the system of Anoma such that any proper bootstrapping of Anoma itself will lead to having the image editor, not too dissimilar to a Smalltalk system, where this occurs in practice. I believe this sorta bleeds into your point b).

Observability is a very important part of any system, and this is a wonderful goal to go towards. For Anoma, I have plans for GUI frameworks to allow custom views on objects, not too dissimilar to Glamorous Toolkit. I’d be curious what kind of observability you want out of the system and how maybe we can help support that in the medium to long run. In the shorter run I think we’ll try for both parts a) and b) by hooking the Anoma system into existing tools that allow one to view data in Anoma and even write their own views onto data.

Your example makes sense, I’d imagine this can be done via a mixture of different kinds of resources and some processes that subscribe to state changes on a particular controller where this is being run.

Have you read the story I wrote about the dominion? It kinda reminds me of the pickup games in how I’d imagine the flow to work:

So I have been working on this model recently. It’s not specs-compliant in any fashion, but I have a data mapping schema that mostly works with the current RMV2.

I will be doing a writeup of the code soon but if you wish to read it now it is linked below:

I basically abuse the Common Lisp Object System to give me a decent mapping to resources from any object within Common Lisp. So something interesting is that since this is a mapping, you can choose not to exercise it, and so for local computations you can just run it all down to x86 (I guess Nock in our case), and then when we want to submit some data to a controller we can serialize both the operations (which post constraints) and data (which also post constraints) into resources.
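
(Not the linked code, just a toy of the general shape, assuming closer-mop: reflect over any instance’s slots and flatten them into data that could sit in a resource’s value field.)

;; Toy serializer: any CLOS instance's bound slots -> an alist that
;; could stand in for a resource's value.
(defun object->resource-value (object)
  (loop for slot in (closer-mop:class-slots (class-of object))
        for name = (closer-mop:slot-definition-name slot)
        when (slot-boundp object name)
          collect (cons name (slot-value object name))))

;; (object->resource-value (make-instance 'account :balance 10))
;; => ((BALANCE . 10))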

A limitation of my current model is that I have not done it in Prolog, so the meta-model I have cannot properly have intents that let one pose automatic constraints. However, this is just the first stab at the experiment, with more to follow.

Again thank you for your first post!


Thanks for your response, it has created so many great rabbit holes for me! I will first zoom out and provide my broader motivation for the object structure to anchor the conversation, then continue with our thread.

Broader motivation for why I want an object abstraction: The abstraction allows for state changes on the blockchain to be triggered by actions that happen in an outside system, in an interconnected way.

It seems we both see that this object abstraction can allow resources to be more connected to other portions of an application, and allows for state change to be dependent on other systems.

I believe our perspectives split at the architectural level, and I want to make sure I don’t miss this and accidentally speak past you on implementation.

Architecturally, it sounds like you describe a client-server architecture, where developers run an Anoma client node, and a Unix/Windows machine with FFI mechanisms and sockets for relays. (Please correct my understanding; I am not sure if you are stating that the Anoma client can act as a genserver using Erlang.)

I do think because of Anoma’s structure, there is an alternative; a client-blockchain architecture.

There are many ways this can be implemented, but my current imagining is a verifiable client-side system that proxies directly to the Anoma client. (I am building the client-side isolate, almost like a runtime environment.)

A logical question: Why?

  1. Predictability.
    a.) Removes system segmentation and its associated complexity that comes with federated server-side systems, like ‘Which model updates what component?’, ‘Does this component have the right data or does that data need to be refreshed?’, ‘When do we implement caching?’, ‘How do we do testing?’
    b.) It’s hard to keep the plumbing between the database and other systems in sync, and account for latency, data inconsistencies due to clock cycles, database rollbacks, especially when some of that data exists on the blockchain. Data management becomes a bottleneck.

  2. Data ownership/control - More user data stays local, verified on the user’s computer, and the data’s existence is attested to but hidden via the fungible data within transaction and resource.

  3. Full transparency - This model creates a verifiable compute link between the client and the network. All compute is verifiably accounted for.

  4. *(This is opinion) As an engineer, this end-to-end system feels natural to reason about as the application increases in complexity. I can think of the entire system in terms of actions at a given state. (Because of the workflow orchestration capabilities within the tools I am building.) Developers don’t think about the federated system syncing or interdependencies.

With this motivation, I see the application of the object structure from a different angle.
I don’t believe the purpose of this thread is to decide on one or the other of these architectures; rather, the purpose is to first correct my understanding of the Anoma architecture, and second, make clear that I would love to participate in making both possible within the system, allowing developers to choose based on the application needs.

This dual model exists in cloud offerings. You can run your own server instance in the cloud, or you can run serverless functions, which execute without you having to create a server instance. It feels like you describe the former, and I describe the latter.

Continuing the thread:

I have been looking into the erlang genserver and nock, as I am not familiar with this tech.

Does this mean developers define the ports and connection between the Anoma client and other ‘server-side’ systems so that their application code can call the Anoma client and change the resource state? And the Anoma client can act ‘asynchronously’ because of the multi-threaded long-running processes?

Yes, the object declaration itself contains the schema.

Yes! I see these machines together (user machine and Anoma client) essentially emulating a web2 server environment. I would love to see the demo.

In this context I use off-chain to describe a computer that does not connect to the Anoma environment or any of the Anoma connected networks. Most commonly this is a user’s computer that is interacting with an app built on Anoma, or an API endpoint.

On-chain meaning an Anoma client, any Anoma node, or connected network nodes.

I am still having trouble bifurcating the controllers vs the engines vs machines in Anoma… Any references for this would be much appreciated!

This goes into my motivation and the system I want to build.
I will continue to look into FFI, but I imagine making the threshold for outside systems meld with Anoma compute, so that although computation is happening outside of the Anoma ecosystem, the traits and benefits of blockchain-based compute extend into that environment.

Yes, yes, I imagine a very similar interface for developers to view the object structure.
I want to understand more about the short term plan to hook Anoma into existing tools, as I was planning to create this platform.

Maurice shared this previously, but the context of this thread helps me grasp the concept much more!
I used this to guide my understanding of the system I put in the motivation section above. It looks like Eric and the CPP program are trusted operator(s) in this dominion? As in, members of the dominion trust Eric’s Unix system to participate in the process, and trust that Eric’s changes were computed and performed honestly?

This is great, and one of the many rabbit holes I have gotten from this thread. I have been trying to find the best structures to serialize the object so this is helpful.


Thank you for your response! There is so much rich information here that I will have to continue to look into.


The first diagram isn’t quite right. Genservers are abstractions over a lightweight process that can send and receive messages (the actor model); if you are familiar with the Java world, Akka took this idea and applied it to the JVM. So for the Anoma case, you’d be able to spin up your own genservers (we call them Engines) inside of the Anoma client. The Anoma client also has storage, so it is a database of sorts as well.

So I’d imagine it is something like this

This model can work however you see fit, so for example the client-blockchain architecture would fall out of this; since this model doesn’t prescribe any forced way of working, you can, with some code inside of the client, set up your client-blockchain architecture and then work outside the system.

However, I suspect that if one is building tools up from the ground at that point, it would be simpler to just build inside of the Anoma OS itself, since it should be trivial to spin up various chains and just do general operations. Much like how we get a lot of benefit from working within Erlang with genservers and their supervision architecture. This becomes harder when we have to bring a lot of outside things into the mix, at which point we’d typically reach for other tools.

I’d be curious about the particular benefits of the runtime that you imagine, as the Anoma client is basically a runtime of the operating system; I explain it in a bit more detail in this presentation I gave:

Looking at your four points, I believe this should all be easily possible from within the Anoma client (from here on out I’m going to call it the local domain), since it is the local Anoma operating system, meaning you have 1 unified language to work through everything.

I believe we are describing the same thing, a lot of the power of Anoma comes from the local domain I believe.

This means that inside your local domain you can have certain engines (genservers/actors/processes that accept messages) that subscribe to certain controllers you are interested in. If you are interested in talking to a system outside of Anoma (some legacy codebase, or any other system), you’d then have that engine send a message out via port or FFI to the application outside of Anoma. Just the same as how you’d have any programming language talk to another programming language inside of Unix.

So engines live inside of the local domain; I think they can potentially live inside of controllers as well (no consensus here), and you can have millions of them in one local domain (much like genservers). The controller is the distributed state machine. The local domain is your personal Anoma operating system where data is private to itself (however, you can send this data over controllers, even ones you fully control).

Yeah, I like this line of thinking. One huge selling point of the project, as I view it, is that we are offering a computing platform with easy access to custom computing between untrusted parties (much like how the innovation of Erlang was to offer a simple model for doing parallel computation among parties that trust each other).

The fun part is, nothing about the CPP program has to be trusted. The truth condition of the objects we are making insists that everyone’s time is satisfied for them to be considered valid. Meaning that if Eric’s program had a bug and computed the wrong result, the pickup game intents would not be solved and a different solution must be given!

I believe this was me referring to mapping objects down to resources; the way I do operations lets most computation vanish, with only the results and important state changes being posted.

We have 2 plans for the short term:

The first being an operator dashboard where one can run Juvix, Anoma Level, and Elixir and operate the entire system from within.

  • This would be done with Phoenix and be a web application.
  • I don’t think we can get any good presentation systems here… we can, but it’d take work (Konrad Hinsen’s HyperDoc demo comes to mind https://hyperdoc.khinsen.net/)

The second being that in my free time I’ve been experimenting with hooking up Elixir to Glamorous Toolkit. GT is a pretty decent environment in that it allows other languages to get graphical views.

This can be seen for Python here:

And my incomplete experimentation can be seen here:

Above I have messages being sent from my Elixir image locally to the GT environment. I haven’t finished it yet, but if it goes well and we get some nice GUIs for Elixir, I plan on implementing an Anoma<->GT bridge by implementing a bridge for the Anoma Level programming language/environment, which should be quite simple given that Anoma Level is a Lisp and that Anoma Level is live. Namely, we can take advantage of the fact that the local domain allows ports to be opened, to open up a TCP port and hook it up with GT.

This means that our dashboard from 1. can have a desktop version with enhanced visuals for objects while people program and debug and a web version which is a less moldable/observable experience. The main benefit of this approach is that we can reuse a rich environment that makes it easy for people to write tools for any object they care about, and offer the first image based editor for Anoma.

With this connection, I plan to have the GT environment be able to browse the code storage of the Anoma OS and visualize all the code that is there, without having to go to a website and find out what libraries are stored where; all of this can be done from within the user’s operating environment, where they were already writing code.

Thanks for clarifying. This and your talk together have solidified the system at a lower level.
My updated understanding:

  • The Anoma client has a runtime where objects, which include engines and resources, can be referenced.
  • At this runtime level, the engine is the system performing ‘indexing’ of data from controllers, and long-standing processes.
  • At the client level, Anoma has a GUI to allow users to modify engines and things, and this is where the Glamorous toolkit comes in.

I have been talking and building around everything on the other side of the Anoma client, in the ‘outer world’ at the application level.

In other words, I am answering the question “What happens when data comes and goes from the FFI into the rest of the world?” and “How do we make these things work together better?”

Annotating over your image, I see it more like a threshold than a bubble.

A developer may want to define a process where some outer world event, like I/O in the app, triggers a change to the controller in their domain.

The easiest thing to do is to write some logic that runs on a Windows/Unix system that just calls the FFI as needed, but this goes into my previous responses and broader motivation. This gets hard to reason about, build, and get visibility into as complexity increases.

This is getting abstract so I have added an example below. I understand the great breakthroughs of Anoma, and that there is a way to write this as a resource with transactions that alter the state.
As the application increases in complexity, I believe this pain point becomes glaring and the tools I am building really shine.

Vanilla: Resources and Resource Logic
With resources, you can write resource logic to define the state of the system and the conditions in which state changes.
If I had a counter resource, I could create resource logic that increments or decrements this counter.
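
(A hedged sketch of that counter, treating a resource logic as a simple predicate over the consumed and created resources; toy types, not the real RM interfaces.)

;; Toy resource logic: a counter may only change by +1 or -1.
(defstruct toy-resource label value)

(defun counter-logic-valid-p (consumed created)
  (and (string= (toy-resource-label consumed) "counter")
       (string= (toy-resource-label created) "counter")
       (member (- (toy-resource-value created)
                  (toy-resource-value consumed))
               '(1 -1))))

;; (counter-logic-valid-p (make-toy-resource :label "counter" :value 3)
;;                        (make-toy-resource :label "counter" :value 4))
;; => true (a non-nil tail)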

The pain point
What happens when some state isn’t known to the system at the time of creation (and is entered by the user, or accessed via API)?
Ex: I am building a food ordering application where the state and state changes are dependent on customized input from the user placing the order and the actions of the restaurant receiving the order.

Yes, I could create resources that reflect all of the menu items, including the permutations a person could change, and I could solve for all of the possible state changes through a really tight set of resource logic, but that gets hard really fast.

What happens when it becomes computationally expensive to write transactions for every state change?
Ex: In this food ordering service, I would make every food option on the menu a resource, and have a parent resource or a projection function that groups all of the resources in a given order. But this is a lot of compute and complexity for many orders in a day.

This does not include questions like: what happens when the restaurant rejects an order? How do we group the food objects of an order? And all the non-deterministic things that can occur in this system.

Solution: Concordance - Workflow Orchestration
Developers need the ability to build workflows that are reliable and interoperable with the blockchain, even when state is segmented or received from sources outside the local domain.

I have been building Concordance, a workflow orchestration platform (similar to Temporal) but for Web3.

In our example, developers can define the entire order state within one order object, and allow the user to edit that order object with the food and customizations.
When the order is confirmed by the user, the object and its metadata is serialized and sent through as a transaction to Anoma. Once in the local domain, proof validation and underlying resource updates happen as designed within the Anoma system.

Developers write logic based on the object, and the possible states of that object. In this example, we would have an object schema:

// Illustrative Concordance pseudocode: declare an order object schema
// backed by a protobuf-serialized resource.
const order = new object.resource.protobuf({
    order_num: 'int',
    date: 'date',
    items: 'array',
});

and state mappings

// ORDER_STATES is assumed here as plain string constants, inferred
// from the transitions below.
const ORDER_STATES = {
  INITIATED: 'INITIATED', CONFIRMED: 'CONFIRMED', TIMED_OUT: 'TIMED_OUT',
  COOKING: 'COOKING', CANCELED: 'CANCELED', READY: 'READY',
  RECEIVED: 'RECEIVED', ABANDONED: 'ABANDONED', COMPLETE: 'COMPLETE'
};

const ORDER_TRANSITIONS = {
  [ORDER_STATES.INITIATED]: [ORDER_STATES.CONFIRMED, ORDER_STATES.TIMED_OUT],
  [ORDER_STATES.CONFIRMED]: [ORDER_STATES.COOKING, ORDER_STATES.CANCELED],
  [ORDER_STATES.COOKING]: [ORDER_STATES.READY, ORDER_STATES.CANCELED],
  [ORDER_STATES.READY]: [ORDER_STATES.RECEIVED, ORDER_STATES.ABANDONED],
  [ORDER_STATES.RECEIVED]: [ORDER_STATES.COMPLETE],
  [ORDER_STATES.TIMED_OUT]: [], // Terminal state
  [ORDER_STATES.CANCELED]: [],  // Terminal state
  [ORDER_STATES.ABANDONED]: [], // Terminal state
  [ORDER_STATES.COMPLETE]: []   // Terminal state
};

Developers write code that feels like Redux, and Concordance does the orchestration for the system, breaking down actions to state and triaging what compute is performed where, accounting for verification at each step of the process. (Concordance does other cool orchestration stuff like idempotency and caching, etc)
Developers can write validation for each discrete state, and other ‘business logic’ that may only apply at a discrete state/after an action occurs. e.g. When a user clicks a button, an API is called, and the results of the API can determine how state changes.

The entire application is structured by state. Under the hood, the compute segmentation still occurs. Some compute is done on a Windows machine, be it in the browser or server-side compute, and some compute on Anoma, but this segmentation is abstracted away, allowing the developer to focus purely on objects and their respective state.
Because Anoma manages so much of the orchestration that occurs in blockchain-related compute, it made the most sense to just drop this on top of Anoma architecture.

This allows for predictability at the application level because to the developer, there is one source of truth for state, even if some of that state exists within the controller. Concordance ‘guarantees’ the application executes in these discrete steps.

I hope this adds clarity to the questions I have been asking, and my meaning of ‘object abstraction’ in the context of our conversations.

Object abstraction as you describe works in the local domain to allow developers to modify the Anoma client.
Object abstraction as I describe works at the application level, and allows developers to define application workflows. (It appears I need to find better language for this; I do have a hard time explaining this tech to folks in a succinct way.)