I want to create this thread to discuss issues with the current program development process in Anoma.
I believe both day-to-day annoyances (uncertainty about where in the stack errors occur, lack of feedback, etc.) and bigger architectural issues (having to use many different systems, writing resource machine transactions being hard, etc.) would be good to document.
I will post my own personal issues later in the thread.
This is a bunch of notes on how I tried to figure out how to write a Juvix application in Anoma.
I have a bit of dusty Haskell knowledge from my university days, but that's it.
I know that to “run” an application, I compile the Juvix code to nockma, and then I submit this to the node with a bunch of parameters that get passed into the main function. I assume that most people who come into contact with Anoma applications for the first time don’t know this. Anoma does not have a traditional computational model.
What could be done to improve: have a high level interaction overview of how an Anoma application actually works: how its executed, what the results are, and so on.
At this point, I understand it as follows.
An Anoma application is a bunch of Juvix files that define the structure of a resource, transaction functions that transfer resources, and “projection functions”, which are basically Juvix applications that can pick apart information from a resource that’s passed as a parameter.
Any Anoma application will probably have to provide the resource definitions and transaction functions. To get data out of the Anoma application into the application user's world (e.g., a website for the application), the designers will rely on projection functions to extract the relevant data and represent it in common formats such as strings, integers, JSON, whatever.
Any user of an Anoma application will never have to touch Juvix. Juvix is only relevant to the designer of the application. The designer of the application can offer functionality (e.g., “give out 10 spacebucks”) to the user, which is implemented as a series of calls to the Anoma client.
When I want to see output of the execution of a Juvix program (or nockma program, to be correct) for debugging, I put in traces everywhere that print to console.
What could be done to improve: This is probably not something that can be done in a short time frame, but the holy grail of debugging is obviously a stepping debugger, or even an interactive inspecting debugger, or, praise all that is interactive, a reversible debugger. This is not something we can trivially build, because the nockma code executes on a node, not on the client side. Compounding this, execution happens at the level of nock code, not Juvix code.
That being said, it would be nice to have a way for application designers to write a piece of Juvix code, submit it to the node, and in case of error, get back error messages at the level of the juvix code. So not “nock noun cell 123232324 tried 2323232 and failed with jet jam bir bor boo”, but rather “execution in function foo failed at line X”, at least. Maybe even a dump of the environment (similar to what binding() does in elixir), to further inspect things in a juvix repl.
I also don’t understand really well at this point (this is on me) how I can take some values I get back from traces, and use those in the Juvix REPL.
Versioning in the Juvix compiler is a finicky beast, but this is probably due to it being so actively developed, so I'm fine with it.
What could be done to improve: When you compile any Juvix file, it could print out "warning: you are using version XYZ of lib ABC, which is outdated." Or something like the compatibility tables Elixir has for Erlang: Compatibility and deprecations — Elixir v1.18.3
The compiler warnings/errors in VS Code sometimes make no sense.
I had a weird error and posted about it in the Slack channel, but then I couldn't reproduce it anymore. So I have no concrete improvement ideas here.
After creating my very own spacebuck, I thought I had done everything right. I had created the Spacebuck.juvix resource and submitted a transaction, but nothing happened. I expected to see at least one resource when I queried for resources on the node, but I didn't see any.
So now I’m left figuring out why the transaction is submitted, but not creating my resource. I want my spacebuck so bad.
This is tricky, because there is nothing to go on. The Anoma node doesn’t print anything useful (and Juvix app developers will probably not run their own node locally), the Anoma Client request doesn’t return any useful information either.
I am assuming here that all Anoma applications revolve around resources; creating, consuming, transferring them.
What could be done to improve:
The developer documentation could add a section that shows a simple way to list all resources. But, assuming that this would also have returned an empty list, it would not have helped me figure out the problem.
A convenience API to test whether a transaction has been executed, or, when it failed, a way to get back the reason for the failure.
This ties in with my explanation of debugging Juvix. A potential API could be, and I know I bring shame to the family, Jeremy forgive me, the futures API in JS.
Submitting a transaction returns a handle that can subsequently be used to poll the node: "what happened to this transaction?", or "has this transaction been put in a block already?".
Not sure if the notion of “in a block” is the right terminology, but you get the idea.
When going through the dev docs for Anoma, I could not figure out how to use the mkResource function. The developer docs import a lot of things as "open", so it's hard to figure out which library contains which data structures and functions.
What could be done to improve: Do not import things as open in the examples. This gives (at least me) a better feel for where in the stdlib things live. I do have to say that the IDE integration for Juvix is pretty good, so I could click and go to definition.
The files you write while reading the developer docs do not compile. I understand why this is so, but I wanted to try and compile them anyway. I have very little patience when it comes to reading documentation when I want to build something.
What could be done to improve: During the examples, give the reader some instructions to "do something" with the code they've written so far. Maybe load it into the repl and call some functions with dummy data. This could make it a bit more interactive. As it stands, you have to wait until the end of the tutorial to actually try and run it.
Traces
Traces are useful for debugging, but they are also pretty obscure. I can, for example, add a trace in my Juvix code that prints out a value:

```
Debug.trace "execution here" >-> ...
```
This is somewhat useful, as it allows you to pinpoint where exactly in the code your program fails. But besides that, you're on your own. You can go deeper and deeper to figure out where exactly your program fails, but each new trace means:

1. recompile the Juvix file
2. resubmit it to the node
3. copy-paste the hints the node outputs
4. for each hint, in the Elixir repl, run `Base.decode64!(hint) |> Noun.Jam.cue!()`
The issue with this in particular is that I, as a node engineer, know that Jam.cue exists and have access to an Elixir repl. An application developer might not, and so all they have is a bunch of binaries.
Once we have stack traces returned, it should in principle be possible to convert the nockma stack traces to Juvix stack traces, provided enough info is included in them (maybe some additional info needs to be stored as hints in compiled functions for this). That would be a first step.
Basically you just need to, when emitting a debug-mode build, wrap each function body as `[11 ["trace" [1 "whatever"]] body]` to push "whatever" onto the trace stack (and pop it when we exit that 11); though you can get more precise than just "a function body" if you want. We can come up with a standard together for line and column numbers and other information like that.