Questions I want to see researched

Welcome.

I will answer 2. first, and in the process I hope it also answers 1.

So the Anoma client is an interesting piece of software in this equation. Currently the client is not up to the standards that I hold for it; however, it's rapidly improving. The client essentially acts like the local Anoma operating system, giving you access to the rest of the system. It's a nock-based system, which has a lot of interesting properties:

  1. It's a multi-threaded environment that lets you run long-running processes (not too dissimilar from Erlang's genservers)
  2. It's a Turing-complete system, meaning you can run any computation you would like
  3. We imagine it as an event-driven system that gets events from controllers (envision one controller as one blockchain) or from other clients
  4. The environment lets you read data across the system in a referentially transparent way; this is known as scrying. For example, the blob storage you referenced can be read directly via this mechanism, and if you perchance don't have the data, your client can fetch it from many different nodes and cache the results until you expunge them or some policy you set evicts them (see the sketch after this list).
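To make point 4 concrete, here is a rough sketch of that read-or-fetch-and-cache shape in Common Lisp. The names (`scry`, `fetch-from-peers`, `*local-cache*`) are purely illustrative assumptions on my part, not the actual client API:

```lisp
;; Illustrative sketch only: none of these names are real Anoma client APIs.
;; FETCH-FROM-PEERS stands in for asking other nodes for the missing data.
(defvar *local-cache* (make-hash-table :test #'equal)
  "Local cache of scried data, kept until expunged or evicted by policy.")

(defun scry (path)
  "Referentially transparent read: the same PATH always yields the same data."
  (or (gethash path *local-cache*)          ; already held locally?
      (setf (gethash path *local-cache*)    ; otherwise fetch and cache
            (fetch-from-peers path))))
```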

Thus, for your diagram: we do currently offer these services via protobuf; however, we wish to remove that part of the codebase (soon) in favor of offering proper actions by submitting nock, so it can all be done from client code that you run interactively. In fact, a lot of the IndexerService box in your diagram has already been replaced by our read-only transactions, which we expose via nock and which users can run via Juvix.

Interesting, what do you mean exactly by a)? I'd assume that an object declaration with its various slots would be the schema for a particular resource.

b) sounds to me like a mixture of computing on one's own machine and also subscribing to some events over the network (I have a demo that maybe does the former?).

These are good goals, although I'm not sure what on-chain or off-chain means here. The way I envision things, a lot of computation will happen in a mixture of the following places:

  1. The client is a good place for multi-threaded computation on one machine
  2. Controllers (think blockchains) with user-hackable consensus (the entire controller codebase will be in the same environment, meaning it's all user code) will be useful for creating consensus that lets multiple machines share computation (sending Erlang-like genservers over the network to other machines you control, based on proof of resource availability)
  3. The outside world. I believe a lot of computation will continue to happen there, and we can marshal data in and out using the tried-and-true methods of FFI or writing formatted data to a port that existing programs can listen to.

This means that if we have some system on Unix/Windows, users can spawn an Erlang-like genserver, have it subscribe to happenings on specific resources or their changes, and then send data out via a Unix or TCP socket to an existing application that wishes to monitor it.
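As a rough sketch of that forwarding loop, staying in Common Lisp and assuming the usocket library is loaded (e.g. via Quicklisp): `next-resource-event` and `serialize-event` are hypothetical stand-ins for whatever subscription and encoding mechanism the client ends up offering, not existing functions.

```lisp
;; Illustrative sketch only. NEXT-RESOURCE-EVENT and SERIALIZE-EVENT are
;; hypothetical placeholders for the resource-subscription mechanism; the
;; socket handling uses the real usocket library.
(defun forward-resource-changes (resource-kind host port)
  "Long-running loop: watch RESOURCE-KIND and stream each change to HOST:PORT."
  (usocket:with-client-socket (socket stream host port)
    (loop for event = (next-resource-event resource-kind)  ; block until a change
          do (write-line (serialize-event event) stream)   ; one line per event
             (force-output stream))))
```

An existing application on the other end only has to listen on that port and parse one line per event, which is exactly the "formatted data out to a port" idea from point 3 above.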

Yeah, I want this as well. In the ideal future, there will be an editor that is hooked up to the image of Anoma. Ideally it's written in the system of Anoma itself, such that any proper bootstrapping of Anoma will lead to having the image editor, not too dissimilar to a Smalltalk system, where this occurs in practice. I believe this sort of bleeds into your point b).

Observability is a very important part of any system, and this is a wonderful goal to work towards. For Anoma, I have plans for GUI frameworks that allow custom views on objects, not too dissimilar to Glamorous Toolkit. I'd be curious what kind of observability you want out of the system and how we might help support that in the medium to long run. In the shorter run, I think we'll try for both parts a) and b) by hooking the Anoma system into existing tools that let one view data in Anoma and even write one's own views onto that data.

Your example makes sense. I'd imagine this can be done via a mixture of different kinds of resources and some processes that subscribe to state changes on the particular controller where this is being run.

Have you read the story I wrote about the dominion? It kinda reminds me of the pickup games in how I'd imagine the flow working:

So I have been working on this model recently. It's not specs-compliant in any fashion, but I have a data-mapping schema that mostly works against the current RMV2.

I will be doing a writeup of the code soon, but if you wish to read it now, it is linked below:

I basically abuse the Common Lisp Object System to give me a decent mapping to resources from any object within Common Lisp. Something interesting is that, since this is a mapping, you can choose not to exercise it: for local computations you can just run it all down to x86 (I guess nock in our case), and then, when we want to submit some data to a controller, we can serialize both the operations (which post constraints) and the data (which also posts constraints) into resources.
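The rough shape of the idea looks something like the following; the class and generic function here are illustrative assumptions of mine, not the code from the linked repository:

```lisp
;; Illustrative sketch only, not the actual code in the linked repository.
(defclass counter ()
  ((owner :initarg :owner :reader counter-owner)
   (value :initarg :value :accessor counter-value :initform 0))
  (:documentation
   "Locally this is an ordinary CLOS object; its slot layout doubles as the
    schema of the corresponding resource."))

(defgeneric to-resource (object)
  (:documentation
   "Serialize OBJECT's operations and data into resource form, only when we
    actually want to submit to a controller; local computation never calls this."))
```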

A limitation of my current model is that I have not done it in Prolog, so my meta-model cannot properly have intents that pose automatic constraints. However, this is just the first stab at the experiment, with more to follow.

Again, thank you for your first post!
