Just in case, I made an alternative proposal here.
The logic is part of the kind, and the kind is part of the balance: you check that the logics are the same by checking whether the kinds balance.
You can duplicate the values in the value field and verify in the logic that they are the same.
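To illustrate, here is a minimal sketch assuming a simplified resource model (the names and the kind derivation are mine, not the actual RM API): the kind binds the logic, the delta is computed per kind, and a balancing resource's logic can compare duplicated value fields.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Resource:
    logic_hash: bytes   # hash of the resource logic
    label: bytes
    quantity: int
    value: bytes        # free-form data field

def kind(r: Resource) -> bytes:
    # hypothetical kind derivation: a hash binding logic and label together
    return hashlib.sha256(r.logic_hash + r.label).digest()

def delta(created: list, consumed: list) -> dict:
    # per-kind balance: +quantity for created resources, -quantity for consumed ones,
    # so resources can only cancel out if they share the same logic (via the kind)
    d = {}
    for r in created:
        d[kind(r)] = d.get(kind(r), 0) + r.quantity
    for r in consumed:
        d[kind(r)] = d.get(kind(r), 0) - r.quantity
    return d

def balanced(created: list, consumed: list) -> bool:
    return all(q == 0 for q in delta(created, consumed).values())

def same_value_logic(a: Resource, b: Resource) -> bool:
    # "duplicate the values in the value field and verify in the logic that they are the same"
    return a.value == b.value
```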
Imo, they can map clearly to different primitives you will implement in your standard library since all types represent different kinds of actions, but I didn't think too much about it.
I'll answer for each type I specified:
- ephemeral, zero quantity: trigger a logic
- ephemeral, non-zero quantity: carry constraints across actions
- persistent, non-zero quantity: regular state carriers
- persistent, zero quantity: not sure about this one actually, maybe it is useless. Added it for completeness.
- ephemeral, non-zero quantity within the same action: issue, burn, and other ways of balancing persistent resources
I agree with your points, but I think some of them are only possible to a point. For example, I think whether resources balance across actions or not is heavily related to the application model. The only way I see to avoid having to think about it explicitly is to cover all the other, simpler cases so that this one becomes the 'else', but this is not something we can choose.
But I want to make it clear that I am very interested in building an object model on top of resources with as much simplification as possible.
I'm not sure you can express all of the constraints' complexity by reducing it to data vs code. I think there are some base assumptions that might be preventing that; namely, you cannot claim to offer users programmability and abstract it away from them at the same time. I think it is a good idea for some applications, but it should be a choice, and not the choice between using a convenient but limited model vs using a programmable model without any tools available. I would be very interested in making more tools without compromising on the intent programmability properties. We can do both. Separately, if we have to. I just don't want the programmable case to be left raw and hardly usable.
I'm not sure this is true. For example, when we issue resources, the resource is persistent, but it wasn't created before. It works because there is a balancing ephemeral resource, of course, so if you mean that this choice is programmed not for the resource carrying the flag, then I agree. But to be honest I didn't understand the choices you mentioned.
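As a toy numeric sketch of that issuance case (made-up quantities and names, not the actual RM code): the newly issued persistent resource is balanced by an ephemeral resource of the same kind consumed in the same transaction, so nothing needs to have existed beforehand.

```python
# made-up example: issue 100 units of kind "K"
issued_persistent = {"kind": "K", "quantity": 100, "ephemeral": False}  # created, never existed before
balancing = {"kind": "K", "quantity": 100, "ephemeral": True}           # consumed, ephemeral
delta_K = issued_persistent["quantity"] - balancing["quantity"]
assert delta_K == 0  # the transaction balances for kind "K" despite no prior creation
```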
To be honest, I don't feel convinced that this is the best solution for the case as opposed to using logics, for example. Do you maybe have ideas on when else it could be useful?
Is it specific to consuming resources? Because you can just as well create resources for constraint-carrying.
Is it local to a transaction? Note that these sets will have to be public transaction fields, so essentially revealing which resources are transient and which are not, and since these resources are practically carrying constraints across actions, revealing how many intent carriers there are in transactions. I think there will also be more checks required to ensure that the prover treats these data structures correctly, but I will need to think about them more with more brain energy.
I would also suggest renaming spontaneous resources to action-local ephemeral resources and transient resources to cross-action ephemeral resources. It is a bit wordier, but if we don't specify ephemerality, they might as well be persistent. To make it less wordy, I would also suggest adopting the notions of e-resources and p-resources to refer to ephemeral and persistent resources.
Generally I think my main difficulty with this proposal is that I don't see how carrying constraints in values is better than doing so in logics. Saying that the resulting abstraction is just better is not objective enough. I think the proposed change is quite substantial, so there must be a very good and explicitly articulated reason for that.
With the right style of design you can offer both; they are not in contention. This is a key point in most of the designs I offer, so I wish to address it.
Pictured is the kind of design that I'm going for; namely, the point is to:
- Abstract over some lower base level in a principled way. This may not be the easiest to program over, but it should reflect the model somewhat faithfully.
- Add more layers on top of that, each progressively making it easier and easier to program against.
This style of design was used all throughout the Genera operating system, where you program at a high level, but if you want full control you can get it.
Another example is the Meta Object Protocol in CL: in order to satisfy vendor demands, they created the Common Lisp Object System (abbreviated as CLOS), which was backed by the Meta Object Protocol (abbreviated as MOP). CLOS is a decent point in the OO spectrum that makes certain assumptions about how objects are allocated, how inheritance works, etc.
However, the user can peek below this surface and change how everything works, since it is all implemented via a metaobject. That means if you want 10,000,000+ slots/fields you can have them, and you can have them backed by allocate-on-write memory, or have a metaobject that backs the data with a database; all of this is possible, even though for most end users memory is nicely abstracted away.
I don't think we disagree here, based on the rest of your comment, so that's good. The issue is: how can we present the mapping such that it works for the RM without much fuss? Namely, I want to avoid the complexity of having to know that certain classes exist to inherit from, classes that in this particular case offer nothing other than a slightly different compilation path down. Semantically, this `intent-bearing-resource` class offers not a single semantic difference, and not inheriting from it is a security risk for anyone who forgets or makes a small oversight. There aren't really any logical differences in the programmability; it's just a matter of where data gets stored in the slots of the RM. If there were logical differences in intent-bearing resources then I'd be more on board with it, as that would be a decent justification, would be harder to forget, and would let us offer GUIs that specialize harder for intent-bearing resources.
However, with slightly changed semantics, the particular reasons why this difference shows up in this case can be resolved, and no programmability is lost.
Thinking about resources balancing across actions does require some thought about the RM; however, I think abstraction wouldn't get rid of it but would shift it to a more intuitive and easier-to-program-for problem. Something I noticed when I created my fixed-supply code: I didn't actually think about "balancing" the Transaction, which is much too low level for me to care about. Instead I cared about:
- Locally, I, as the `fixed-supply-mixin`, may only see myself being created/burned, so we have to ensure that somewhere else it gets burned/created.
- I can do this by having a `fixed-supply-intent` that relates to me, so in my logic I just ensure that the `fixed-supply-intent` is there.
- (Not in the RM) I need to write a minor amount of scaffolding that causes this intent to be there when I, the `fixed-supply-mixin`, am made and when I'm used up.
At the RM level this is properly enforced by balancing, and this `fixed-supply-intent` must be fulfilled. But in most cases I can just think in terms of "I don't know everything about where my properties must hold, so make this other object check the properties elsewhere". This doesn't remove any programmability; it just shifts the thought process toward what the object can know locally vs what we wish to hold globally.
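A rough sketch of that local check (hypothetical Python names standing in for the fixed-supply code; the real logic and field layout may differ): the fixed-supply resource's logic only asserts, locally, that a matching `fixed-supply-intent` appears in the same action, and leaves the rest to the scaffolding and to balancing.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    logic_name: str   # stand-in for the logic reference
    label: bytes
    quantity: int

def fixed_supply_logic(me: Resource, action_resources: list) -> bool:
    # "I don't know everything about where my properties must hold, so make this
    # other object check the properties elsewhere": require a fixed-supply-intent
    # with my label somewhere in the same action.
    return any(
        r.logic_name == "fixed-supply-intent" and r.label == me.label
        for r in action_resources
    )
```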
Even if I made them primitives, the ideas would leak, as semantically they'll be exposed to users. Something being primitive only makes things more opaque and complicated, not less so. I believe that in whatever we implement we will reify the resource machine itself in the language, so users can control how certain things compile if they really wish to. This amounts to the same "primitive API", just in the user's hands. My argument is that making all these distinctions is needlessly complex and will wear on the developer's mind if not done right, hence trying to find a cleaner mapping.
The issue is that if you have many `fixed-supply-spacebucks-intents`, all their kinds will be different, because you put the amount in the logic. That means I have no way to confirm whether there are other `fixed-supply-spacebucks-intents`: I can't search for others by kind or by their logic, as they will all be different. This limits how well this can mimic the global delta check, making it subpar. Thus, even if we put the fixed values into the `resource-logic`, it's much less effective, as we can't see other intents of the same kind to check whether the amount to create/burn changes at all.
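A small sketch of that search problem (hypothetical hashing and a simplified kind derivation): baking the fixed amount into the logic changes the logic hash and therefore the kind, so two such intents with different amounts can no longer be found under one kind.

```python
import hashlib

def kind(logic_source: bytes, label: bytes) -> bytes:
    logic_hash = hashlib.sha256(logic_source).digest()
    return hashlib.sha256(logic_hash + label).digest()

label = b"spacebucks"
kind_a = kind(b"assert amount == 100", label)  # amount hard-coded in the logic
kind_b = kind(b"assert amount == 250", label)  # different amount => different logic
assert kind_a != kind_b  # intents are no longer comparable or searchable by kind
```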
Yes, Iām just talking about the rules for whether a resource consumption is valid or not. Right now we have:
- Ephemeral resources, which need not ever have been created in order to be consumed, and
- Persistent resources, which must have been created before the transaction in order to be consumed.
My proposal aims to add a third option (which I call "transient") for resources which must have been created within the transaction in order to be consumed.
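A hedged sketch of the three consumption rules (my own simplified check against commitment sets, not the RM implementation; "transient" is the proposed third category):

```python
def consumption_valid(category: str,
                      commitment: bytes,
                      pre_tx_commitments: set,
                      in_tx_commitments: set) -> bool:
    if category == "ephemeral":
        return True                               # need never have been created
    if category == "persistent":
        return commitment in pre_tx_commitments   # created before the transaction
    if category == "transient":
        return commitment in in_tx_commitments    # created within this transaction
    raise ValueError(f"unknown resource category: {category}")
```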
Strictly speaking, the two systems (my proposal versus using logics) are equivalent in expressive capacity, so it's just a difference in the ergonomics of representation and the efficiency of the implementation (we're picking either more delta computations or more Merkle path existence checks).
Yes, I've come around to this position as well after re-reading all of these threads. I think a good next step would actually be to formalize a bit our "resource linear logic" at the RM layer of abstraction (as it stands right now, and perhaps potential alternatives), formalize a bit the desired higher-level object model semantics (or any others) and how the translation would work, and obtain a very clear picture of the tradeoffs there before making any changes (certainly substantial ones).
I agree with that, which is also my point of confusion. So, to make sure we see this statement the same way: from my perspective, to capture the desired diversity of intents, we would have to eventually switch to code from data. In resources, `logic` is the code field, while all other fields, including `value`, are data fields. I can see how for simple intents storing them in data is indeed nicer (or, alternatively, we can reuse the abstraction mechanics from the shielded kudos design).
Of course we can store data in code and vice versa, but the motivation is still not clear.
I'm specifically concerned about the more sophisticated intents. I think it isn't good enough that it is possible to express such intents in principle. If it is too difficult to implement in practice, it is likely to be equivalent to not having them at all.
That sounds nice, but I don't see how. I think an explicit description would help here.
The difference between balancing and what you describe is the quantity: I don't see how "I need to create X" is much simpler than "I need to create X with quantity Q". I think we can actually go further and abstract this away fully in many cases where balancing is ephemeral or trivial, since these are pretty standard mechanics. The difference is that locking the mechanics into the language prescribes the desired behaviour, whereas libraries offer a possible behaviour.
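As a speculative sketch of what "abstracting this away fully" could look like for the trivial cases (a hypothetical helper, not an existing API): the developer says what to create, and the library derives the ephemeral balancing resources.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    kind: str
    quantity: int
    ephemeral: bool = False

def auto_balance(created: list) -> list:
    # for every kind with a non-zero total, emit an ephemeral resource on the
    # consumed side with the same quantity, so the developer never thinks about Q
    totals = {}
    for r in created:
        totals[r.kind] = totals.get(r.kind, 0) + r.quantity
    return [Resource(kind=k, quantity=q, ephemeral=True) for k, q in totals.items() if q != 0]

consumed = auto_balance([Resource("spacebucks", 100)])
assert consumed == [Resource("spacebucks", 100, ephemeral=True)]
```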
I imagine it not only being a primitive but also having a higher-level meaning. We don't need to put a "cross-action ephemeral resource" primitive in there; it just becomes an "intent" primitive.
I feel like somewhere here we are forgetting that we should prioritise the shielded case, for which, I think, many of these assumptions don't hold. Specifically, how the search works for such intents, since they are either not floating around in the intent pool or are encrypted. Shielded intents cannot be communicated in the form of unbalanced transactions only, and they are communicated privately. Even if we consider the context of a single solver's intent pool, an intent must contain more data (otherwise the solver won't be able to create proofs).
This is another case of prioritising "simple" over "desired". We keep postponing solving the problems for the shielded case and developing a bunch of optimisations for the transparent case only, emphasizing the convenience gap, when we should be trying to reduce it by thinking about the shielded case first.
Okay, this is much clearer to me.