They are something like variables in the constraints, e.g. underspecified resources, but if a resource is already known, it should be tied to the slot.
Reading this just made me understand:
All search should precede this, in the sense that a solver can come up with a valid TX in an arbitrary way, e.g. by using Z3 to find satisfying matches.
Then this stack-based evaluation context should only need to verify validity, or fail in case the TX is invalid.
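To illustrate the split, here is a minimal sketch (all names and the encoding are hypothetical, not the actual resource machine types): resources are just indices, each constraint exposes one slot as a Z3 integer variable, and the solver's only job is to find some assignment of resources to slots; the evaluation context never repeats this search.

```python
# Hypothetical sketch: Z3 searches for a resource-to-slot assignment.
# Resources are indices 1 and 2; slot_a / slot_b are the slots of two constraints.
from z3 import Int, Solver, Or, Distinct, sat

slot_a, slot_b = Int("slot_a"), Int("slot_b")
s = Solver()
s.add(Or(slot_a == 1, slot_a == 2))   # constraint A accepts resource 1 or 2
s.add(slot_b == 2)                    # constraint B accepts only resource 2
s.add(Distinct(slot_a, slot_b))       # each resource fills at most one slot

if s.check() == sat:
    m = s.model()
    print(m[slot_a], m[slot_b])       # e.g. "1 2": resource 1 -> A, resource 2 -> B
```

The candidate assignment found this way would then be handed to the evaluation context for verification only.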
As far as I understand now, resource_logics and additional constraints should be written in such a way that they are order-of-satisfaction invariant, or order information is provided.
That probably requires that constraints can only be satisfied uniquely. For example, if there is a resource R1 that could satisfy constraint C1 or C2 (when going into specific slots of them), and there is a resource R2 that could only satisfy constraint C2, then we lose order invariance: if C2 is checked first and takes R1, then C1 can no longer be satisfied, even though a valid assignment exists.
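A small, self-contained illustration of why this breaks order invariance (the matcher below is hypothetical, just a greedy strategy that binds the first acceptable resource to each constraint):

```python
# Greedy, order-dependent matcher: binds the first acceptable resource to each constraint.
def greedy_assign(constraints, resources):
    remaining = list(resources)
    assignment = {}
    for name, accepts in constraints:
        match = next((r for r in remaining if accepts(r)), None)
        if match is None:
            return None               # this constraint cannot be satisfied in this order
        assignment[name] = match
        remaining.remove(match)
    return assignment

R1, R2 = "R1", "R2"
C1 = ("C1", lambda r: r == R1)        # C1 is only satisfied by R1
C2 = ("C2", lambda r: r in (R1, R2))  # C2 is satisfied by R1 or R2

print(greedy_assign([C1, C2], [R1, R2]))  # {'C1': 'R1', 'C2': 'R2'} -- succeeds
print(greedy_assign([C2, C1], [R1, R2]))  # None -- C2 grabbed R1, so C1 fails
```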
We could introduce something like annotations: any TX coming from a solver would already provide positioning information for the resources in the slots of the constraints, and the evaluation context would just apply the predicates implementing constraints and logics in the order provided. Should one of them fail, the TX gets dropped as invalid.
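A minimal sketch of that evaluation context, assuming a hypothetical TX representation in which each step already carries the solver-chosen resources for its slots:

```python
# Hypothetical annotated TX step: a predicate plus the resources the solver
# positioned into its slots. The evaluation context replays them in order.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AnnotatedStep:
    predicate: Callable[..., bool]   # constraint or resource logic
    slots: tuple                     # resources in slot order, chosen by the solver

def verify(steps: list[AnnotatedStep]) -> bool:
    stack = list(reversed(steps))    # evaluation context as a stack, first step on top
    while stack:
        step = stack.pop()
        if not step.predicate(*step.slots):
            return False             # TX gets dropped as invalid
    return True
```

Note that there is no search here at all: the predicates are applied exactly once, in the provided order.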
If we don’t want to do this, then we need to require constraints and logics to only ever be uniquely satisfiable, but I don’t know if that’s even possible if the parties defining them only have a view of their own data and not the whole TX it would be part of in the end.
The evaluation context as described above should still be useful to model something like brute-force search, where one could decide to keep retrying after a failed validation for a certain amount of time, randomizing the stack each time.
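A sketch of that brute-force variant, assuming some order-sensitive validate(items) -> bool (e.g. a wrapper around the greedy matcher above):

```python
# Keep retrying with a randomized stack until the time budget runs out.
import random
import time

def brute_force_validate(validate, items, budget_seconds=1.0):
    deadline = time.monotonic() + budget_seconds
    stack = list(items)
    while time.monotonic() < deadline:
        if validate(stack):
            return True
        random.shuffle(stack)        # randomize the stack and try again
    return False
```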