I’m curating a list of open research questions related to cybernetics in Anoma. Feel free to respond with anything relevant; I’ll edit the top post to keep it up-to-date.
How can we model physical constraints?
The physical world - say, of energy and material flows - is complex and non-linear. If we want to model it accurately with digital abstractions such as tokens, we need to be able to capture the physical relations involved, at least to some degree of granularity. In particular, physical quantities are typically not freely exchangeable in the way (abstract) tokens are: conversion carries a cost (energy loss), many conversions are irreversible, and conversions affect other parameters of the system beyond just the atomic units being converted. How can we model these complex relations (somewhere in between “soulbound” tokens and typical freely transferable tokens)? And in particular, how can we craft a system that can learn the relations between the physical variables over time, and enforce those constraints on the evolution of the digital representation?
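As a toy illustration of that middle ground between soulbound and freely transferable tokens, here is a minimal Python sketch (all names and numbers are hypothetical, not any existing Anoma API) of a ledger in which conversion is lossy, one-way, and couples to other state variables:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalLedger:
    """Toy ledger where conversions are lossy, irreversible, and
    affect system state beyond the units converted (names hypothetical)."""
    balances: dict = field(default_factory=lambda: {"fuel": 100.0, "heat": 0.0})
    waste: float = 0.0  # side effect accumulated by every conversion

    def convert(self, amount: float, efficiency: float = 0.8) -> float:
        """Irreversibly convert `fuel` into `heat` at < 100% efficiency.
        There is no inverse operation: heat cannot be turned back into fuel."""
        if amount > self.balances["fuel"]:
            raise ValueError("insufficient fuel")
        self.balances["fuel"] -= amount
        out = amount * efficiency          # part of the input is lost
        self.balances["heat"] += out
        self.waste += amount - out         # the loss shows up elsewhere
        return out

ledger = PhysicalLedger()
ledger.convert(10.0)   # yields 8.0 heat, burns 10.0 fuel, adds 2.0 waste
```

The open question is how such relations (the efficiency curve, the coupling to `waste`) could be learned from observed physical data rather than hard-coded, and then enforced as validity constraints on state transitions.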
How can we capture tail risks?
Mark-to-market accounting fails to capture the possible evolutions of a system away from its current equilibrium, partly because it assumes observer-independence (in order to actually convert, one must buy or sell, which may induce other participants to act as well, etc.) and partly because it neglects non-equilibrium events (such as an external stimulus inducing many coordinated actions). Can we come up with better standards for risk accounting (e.g. as might be used in something like Compound) which don’t make these simplifying assumptions, and which better capture the interdependent nature of a complex system?
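To make the observer-dependence point concrete, here is a hedged sketch (pool reserves and position size are hypothetical) comparing mark-to-market valuation with an execution-aware valuation that simulates selling the position into a constant-product (x · y = k) pool, so the act of converting moves the price:

```python
def mark_to_market(position: float, spot_price: float) -> float:
    """Naive valuation: position size times the last observed price."""
    return position * spot_price

def liquidation_value(position: float, x_reserve: float, y_reserve: float) -> float:
    """Value a position by simulating its sale through a constant-product
    (x * y = k) pool: depositing the position shifts the reserves, so
    the realised price is worse than the quoted spot price."""
    k = x_reserve * y_reserve
    new_x = x_reserve + position       # sell the position into the pool
    new_y = k / new_x
    return y_reserve - new_y           # what the sale actually returns

# The pool quotes a spot price of y/x = 10.0, but actually selling
# 100 tokens into a 1,000-token pool returns noticeably less than 1,000.
position = 100.0
naive = mark_to_market(position, 10.0)               # 1000.0
realised = liquidation_value(position, 1_000.0, 10_000.0)
```

Even this sketch only captures first-order price impact of a single seller; it still ignores the second-order effects the paragraph above points at, such as other participants reacting to the sale.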
How can we avoid overfitting?
When crafting a model of a world, and optimising actions taken in the world based on that model, it is crucial to avoid overfitting the model to the world, and consequently taking actions driven by randomness or anomalies in the model rather than real structure. How can we craft multi-agent systems (such as financial systems) which account for the possibility of overfitting, and limit themselves accordingly? One promising direction is limited-granularity accounting, where the system deliberately limits precision such that discrepancies below some “noise level” don’t result in any systemic incentive change.
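A minimal sketch of the limited-granularity idea (the noise level and sample values are hypothetical): round every accounted value to the nearest multiple of a noise floor, so that sub-noise discrepancies are invisible to the incentive layer and cannot be profitably chased:

```python
def quantize(value: float, noise_level: float) -> float:
    """Round `value` to the nearest multiple of `noise_level`, so
    differences smaller than the noise floor collapse to zero."""
    return round(value / noise_level) * noise_level

# Two measurements that differ only by sub-noise jitter map to the
# same accounting value, so no agent gains by exploiting the gap;
# a difference larger than the noise floor still registers.
noise = 0.5
assert quantize(10.2, noise) == quantize(10.04, noise) == 10.0
assert quantize(10.6, noise) == 10.5
```

The open design question is how to choose the noise level itself: too coarse and real signals are discarded, too fine and the system is back to optimising against noise.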