I burn/issue my token if my intent is satisfied

Until now we have only been talking about transfer intents, but it might be interesting to consider burn/issue intents: I burn/issue my token if my intent is satisfied. In the context of the kudos application, which inspired me to write this post, I see two possible directions:

  1. The KL doesn’t acknowledge the existence of the intent carrier in the Burn function. From the KL perspective, this is just a burn (it ignores the other resources in the action, following the rules we described in pt.1 and pt.3). So the job of enforcing this condition falls on the DL. The question is: how can we reliably distinguish a burn from a burn-for-reward, in a way that doesn’t let a malicious creator submit a burn-for-reward as a simple burn?

  2. A simpler, but less explicit, way to get this mechanic is to use a plain swap and delegate the burn to the issuer: I swap my tokens for the reward, and the issuer burns my tokens. Note that the transfer to the issuer is assumed to invalidate the tokens in that case. A burn-for-reward mechanism is an intuitive way to invalidate the points (see the sketch just below).
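
To make direction 2 concrete, here is a minimal Python sketch under a deliberately simplified resource model; the `Resource` shape, the field names, and `swap_then_burn` are hypothetical illustrations, not the actual RM encoding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    kind: str      # e.g. "kudo" or "reward"
    owner: str
    quantity: int

def swap_then_burn(user_kudos: Resource, reward: Resource, issuer: str):
    """User swaps kudos to the issuer for the reward; the issuer burns the kudos."""
    # Step 1: the swap. The kudos change owner to the issuer,
    # the reward changes owner to the user.
    kudos_at_issuer = Resource(user_kudos.kind, issuer, user_kudos.quantity)
    reward_at_user = Resource(reward.kind, user_kudos.owner, reward.quantity)
    # Step 2: the issuer burns the kudos they now own, invalidating the points.
    burned = kudos_at_issuer  # consumed with no created counterpart
    return reward_at_user, burned

reward_at_user, burned = swap_then_burn(
    Resource("kudo", "alice", 5),
    Resource("reward", "issuer", 1),
    issuer="issuer",
)
print(reward_at_user)  # Resource(kind='reward', owner='alice', quantity=1)
print(burned)          # Resource(kind='kudo', owner='issuer', quantity=5)
```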

I do not consider the option of modifying the KL and adding an intent-carrier check to each burn/issue. I think it is too particular for a higher-level logic, and too limiting.

This highlights two interesting questions (interesting in the broader context, not just relevant to the kudos application):

  • how to specify an elegant design for one-time token mechanics
  • how to specify an elegant design for burn/issue intent mechanics

While in the kudos app context we have some pre-defined goals, developing a general design for these mechanics will perhaps be reflected in our future application design patterns.

I also raised this question in pt.3: why must an intent (an intent-carrying resource) be triggered by other logic? In my view, intents should be independent of applications, so that we can compose any intents with various applications to achieve flexible requirements/transaction functions. The integrity of compliance proofs ensures intents cannot be tampered with, and delta proofs guarantee the intent’s satisfaction. Does that make sense?

For the issue in this thread, we can create a complete burn action as defined by the application and add the intent-carrying resource to that action. The intent logic can access the other resources and add constraints that represent the requirements. The transaction can only balance if the intent is satisfied (consumed) in another action. This seems like a typical example of intent resources, if I didn’t misunderstand your requirements.
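
As a toy illustration of that balance condition (this is not the actual RM delta/balance encoding; the resource kinds, the ephemeral counterpart, and the `balanced` check are all simplifications for the example):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Resource:
    kind: str
    owner: str
    quantity: int

@dataclass
class Action:
    consumed: list
    created: list

def balanced(actions) -> bool:
    """Toy balance check: every resource kind nets to zero across all actions."""
    net = Counter()
    for a in actions:
        for r in a.created:
            net[r.kind] += r.quantity
        for r in a.consumed:
            net[r.kind] -= r.quantity
    return all(v == 0 for v in net.values())

# The user's burn action: the kudo is consumed, an ephemeral counterpart
# balances the kudo kind, and the intent-carrying resource is created.
burn = Action(
    consumed=[Resource("kudo", "alice", 5)],
    created=[Resource("kudo", "eph", 5), Resource("intent", "alice", 1)],
)
# A solver's action satisfies (consumes) the intent and moves the reward.
satisfy = Action(
    consumed=[Resource("intent", "alice", 1), Resource("reward", "issuer", 1)],
    created=[Resource("reward", "alice", 1)],
)

print(balanced([burn]))           # False: the intent is still outstanding
print(balanced([burn, satisfy]))  # True: the intent was consumed in another action
```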

I agree with you: intents don’t have to be triggered by other logics, and in fact they are not. What we do there is move the authorship verification of the intent into the logic, instead of relying on trust. This is not meant to be the only way to achieve it, but it is one way to do it.

Intents do not exist separately from the resources they are about, at the very least because they are about these resources.

The intent application also doesn’t depend on the kudos application. In fact, there is no intent application as a separate thing. What makes an application? It is a collection of logics. What is an intent? It is a logic. So every intent, if expressed as a logic, is an application. We can also have more complex intent applications that work for the same intent form and only differ in values, but this is not the general form. The good thing is that applications also don’t exist as data structures: we don’t need to deploy them or anything. Being an application is, in some sense, a property.
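
A loose sketch of this framing (the types and names are hypothetical): a logic is a predicate, an application is a collection of logics, and an intent expressed as a logic is therefore itself a tiny application:

```python
from typing import Callable, Dict

# A "logic" is modelled as a predicate over some view of the transaction.
Logic = Callable[[dict], bool]
# An application is nothing more than a collection of logics.
Application = Dict[str, Logic]

def kudo_transfer_logic(tx: dict) -> bool:
    return tx.get("kind") == "kudo"          # placeholder constraint

kudos_app: Application = {"transfer": kudo_transfer_logic}

def my_intent(tx: dict) -> bool:             # an intent is itself a logic...
    return tx.get("reward", 0) >= 1

intent_app: Application = {"intent": my_intent}   # ...and hence a (tiny) application

# Nothing here is deployed or registered anywhere: "being an application"
# is a property of a set of logics, not a data structure.
print(kudos_app["transfer"]({"kind": "kudo"}))   # True
print(intent_app["intent"]({"reward": 1}))       # True
```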

It works, just as for transfer intents, if the user computes all of the proofs for the action themselves. If we delegate, it would be nice to verify that the associated intent is authored by the owner of the consumed kudo.

So yes, everything about your example is correct, and it is indeed the example we usually use. But it does imply that the user computes their own action. By adding the signature verification, we can delegate proof creation to the solver and be sure they won’t change the intent. Maybe it is overkill, but I feel like if there is a relatively easy way to avoid trusting the solver, it isn’t a bad idea to explore it. The proof itself becomes more expensive to compute, but now the user doesn’t have to compute it. Or am I missing something here?
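
A sketch of that authorisation idea (assuming the `cryptography` package for Ed25519; in the real system this check would live inside the intent’s logic, and the message encoding here is an arbitrary stand-in): the owner signs the intent’s content, and the logic rejects any modified intent:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The kudo owner authors and signs the intent.
owner_key = Ed25519PrivateKey.generate()
owner_pub = owner_key.public_key()

intent = {"burn": "kudo", "quantity": 5, "want": "reward"}
message = json.dumps(intent, sort_keys=True).encode()
signature = owner_key.sign(message)

def intent_logic_check(intent: dict, signature: bytes) -> bool:
    """Stand-in for the signature check the intent logic would perform."""
    msg = json.dumps(intent, sort_keys=True).encode()
    try:
        owner_pub.verify(signature, msg)
        return True
    except InvalidSignature:
        return False

# The solver can compute the proof but cannot alter the intent:
print(intent_logic_check(intent, signature))                       # True
print(intent_logic_check({**intent, "quantity": 500}, signature))  # False
```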

Could you kindly explain how proof delegation works? More specifically, could you clarify which types of proofs would be delegated to solvers, such as compliance proofs, logic proofs, or intent/ephemeral logic proofs? Additionally, what information needs to be exposed to the solvers?

My impression is that it is flexible: users can choose which and how many proofs to delegate. The solvers would need to know all the data the prover needs in order to create a proof. This doesn’t allow the solver to cheat per se, because the user’s constraints are enforced by logics. The solver cannot modify the logics of the persistent resources they consume/create (e.g., they can’t modify the logic of a “euro” resource). So the solver can see what is going on, but cannot modify the transaction in a way that works against the user’s constraints. This assumption doesn’t apply to ephemeral resources that carry extra constraints (since those logics are not connected to any “recognised” application), which is why we need an authorisation mechanism there.
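
To illustrate the “choose what to delegate” point (the proof kinds and the split function are hypothetical; they only encode that delegation is per-proof, not all-or-nothing):

```python
from enum import Enum, auto

class ProofKind(Enum):
    COMPLIANCE = auto()
    RESOURCE_LOGIC = auto()
    INTENT_LOGIC = auto()      # logic of an ephemeral intent-carrying resource

def split_proving(required, delegated):
    """Partition the required proofs between the user and the solver.

    Delegating a proof means handing the solver all the witness data the
    prover needs; it does not let the solver change the logics themselves.
    """
    user_side = [p for p in required if p not in delegated]
    solver_side = [p for p in required if p in delegated]
    return user_side, solver_side

required = [ProofKind.COMPLIANCE, ProofKind.RESOURCE_LOGIC, ProofKind.INTENT_LOGIC]
user, solver = split_proving(
    required, delegated={ProofKind.COMPLIANCE, ProofKind.RESOURCE_LOGIC}
)
print("user proves:  ", user)    # [ProofKind.INTENT_LOGIC]
print("solver proves:", solver)  # [ProofKind.COMPLIANCE, ProofKind.RESOURCE_LOGIC]
```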

I think in the previous examples we imagined users proving some “initial” action (logic + compliance proofs) that they share with solvers, and then solvers create more actions to complete the transaction (logic + compliance proofs, ephemeral resources included). Now I’m wondering how realistic this “initial action” assumption is, since proofs are quite costly. This shouldn’t influence our designs much, but it might be that users want to delegate all proving in the early stages.
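
For concreteness, the two-phase flow described above might look like this (purely schematic; the action contents and proof labels are placeholders):

```python
from dataclasses import dataclass, field

@dataclass
class ProvenAction:
    description: str
    proofs: list = field(default_factory=list)

def user_phase() -> ProvenAction:
    # The user proves only their own "initial" action: logic + compliance proofs.
    return ProvenAction("consume kudo, create intent carrier",
                        proofs=["logic", "compliance"])

def solver_phase(initial: ProvenAction) -> list:
    # The solver creates and proves the remaining actions (ephemeral
    # resources included) until the transaction balances.
    completing = ProvenAction("consume intent carrier, create reward",
                              proofs=["logic", "compliance"])
    return [initial, completing]

tx = solver_phase(user_phase())
print([a.description for a in tx])
```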

These are just my somewhat educated assumptions. We have only discussed it briefly before, and I don’t think anyone is working on shielded solving mechanics specifically.