Time Constraints in Resource Logics

Time-dependent logic is ubiquitous in decentralized applications. This thread aims to clarify how time constraints can be realized in resource logics.

A simple example would be a coupon resource, i.e., a resource that you can redeem for something but that becomes invalid on a certain date, e.g., January 1st 2025 00:00. Another example would be a voting application, where voting is possible only within a voting period.
The coupon resource logic could look like this:

// the coupon is valid only strictly before its expiration time
return currentTime < expirationTime;

where currentTime and expirationTime could be Unix timestamps.
expirationTime would simply be a constant, hardcoded value stored in the resource plaintext.
In contrast, currentTime must be the timestamp of the block that the transaction is part of at the moment it gets executed.

Two sets of questions come to mind:

  1. Who provides this currentTime timestamp? The controller of this resource? An oracle?
  2. How can it be accessed in the resource logic and ensured to be up-to-date at the moment of execution? Must there be a built-in in the RM, or can the timestamp be provided simply through a resource being part of the transaction object?

Providing time information sounds like a natural responsibility of the controller.
In Ethereum smart contracts, for example, block.number and block.timestamp are special variables existing in the global namespace. The latter can be derived from the current slot number and the genesis timestamp (since slots take exactly 12 seconds in PoS Ethereum).
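Concretely, this derivation is simple slot arithmetic; a minimal sketch, where genesis_time denotes the beacon chain genesis timestamp:

// Post-merge Ethereum: slots last exactly 12 seconds, so a block's
// timestamp is fully determined by its slot number and the genesis time.
const SECONDS_PER_SLOT: u64 = 12;

fn slot_timestamp(genesis_time: u64, slot: u64) -> u64 {
    genesis_time + slot * SECONDS_PER_SLOT
}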
Could this work similarly in the case of the ARM? Could we provide this information as special variables built into the RM, or would this be non-linear data provided by the controller?
Moreover, this would require resource logic proofs of time-dependent resources to be computed at execution time (which requires the block proposer to know about resource time dependencies). This sounds undesirable because you effectively have two resource types, and it leaks information in the shielded case. Wdyt @vveiln?
Furthermore, it is unclear how this can be generalized for resources moving between different controllers. If a resource moves between controllers, block.timestamp will still be approximately the same value, whereas block.number will certainly be different. Any thoughts on this @isheff?

In contrast to a controller, an oracle is an application providing off-chain data. This application would provide time information in the form of resources being added to the transaction.
There are several problems with this approach.
The oracle application cannot know the time at which the transaction will be executed and would require a mechanism to add the resource right before transaction execution. This sounds infeasible since there can be latency issues (e.g., because of packet routing, proof computation times, etc.).
Therefore, time provided by an oracle can only be seen as a lower bound for the actual time, which is insufficient in many cases. Similar questions arise for price oracles, which also must be up-to-date at execution time.
Moreover, trust assumptions can differ between the controller and the oracle, and oracle manipulation attacks could happen. Overall, attempting to provide time through oracle services/applications doesn’t sound like a good approach to me.

I would be happy to hear your thoughts on this topic @cwgoes @degregat @isheff @vveiln.


It seems to me like it would be useful if there were some mechanism to add a wallclock timestamp as non-linear data at a predictable address to the block at execution time, if we don’t want this to be part of the resource machine. Controllers could announce this as part of their service commitment.

This way, one could use controllers as a time oracle and get guarantees up to existing trust assumptions. Controllers that internally run consensus could, e.g., compute the mean of the times reported by each node.
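To illustrate, here is a minimal sketch in Rust of such a mechanism, assuming a simple key-value view of controller state; the key and all types are made up for illustration.

use std::collections::BTreeMap;

// Hypothetical well-known key under which a controller publishes the
// wallclock timestamp of the block currently being executed.
const BLOCK_TIME_KEY: &str = "/controller/block_timestamp";

/// Sketch of per-block controller state as a plain key-value store.
struct ControllerState {
    data: BTreeMap<String, u64>,
}

impl ControllerState {
    /// Called once per block at execution time, e.g., with the mean of the
    /// times reported by the consensus nodes.
    fn publish_block_time(&mut self, wallclock: u64) {
        self.data.insert(BLOCK_TIME_KEY.to_string(), wallclock);
    }

    /// What a time-dependent resource logic would read at a predictable
    /// address to evaluate its constraint.
    fn block_time(&self) -> Option<u64> {
        self.data.get(BLOCK_TIME_KEY).copied()
    }
}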


That could work. However, this would still mean that proofs for time-dependent resource logics would need to be computed at execution time.

I can see that block-proposing validators would prefer transactions that are already valid.
In particular, if a time-dependent resource logic turns out to be invalid, the validator discovering this would need to remove the transaction object from the set of transactions to be executed.


Side note: there are no non-linear resources supported by the resource machine. The non-linear data you are talking about are not resources; let’s not make things even more confusing.


Thanks for the shout-out. I fixed it in the text.


A few thoughts:

  • For some applications, time is provided like any other oracle value: some trusted party can certify statements of the form “real time is at least _,” and predicates can require such statements in order to enable stuff.
    • In these applications, there is still no notion of “universal time.” Any time requirement is simply a matter of “what oracle certifications will I accept.” Different oracles may provide totally different times.
  • For statements of the form “real time is currently less than _,” things are harder.
    • We could treat this as a kind of oracle
      • Start with a large (infinite?) set of resources representing times that have not yet passed, and continually consume them (in order). Resources with predicates that depend on these have to prove the resource in question has not yet been consumed, which I guess makes these “resources” non-linear, which @vveiln pointed out is not a thing.
      • We could have oracles certify “real time is at most _” statements to controller storage, and treat the non-existence of such a statement above a given value as “evidence” from that oracle about the maximum current time.
      • Both of these solutions rely on a liveness guarantee: the oracle is able to constantly update the controller about current time.
    • Another solution would be to use the controller’s time. (These are not exclusive: you can do both). The controller can input a certification about what time it is to all predicates, or at least make it a readable part of controller state. This seems like a generally good idea.
      • Different controllers have different clocks, which can be arbitrarily skewed. The only clock actually “readable” at resource creation time is the creating controller’s, and the only clock “readable” at resource consumption time is the consuming controller’s. Resource logics will have to be programmed with this in mind.
      • We can make statements about (minimum values of) other controllers’ clocks: if we imagine that each controller includes light clients of other controllers, we know the other controller’s “current” clock is greater than (or equal to) the light client’s clock.
        • If the other controller has forked, this controller’s light client will presumably only track one fork, but other people may see forks of the controller with clocks that are earlier than this.
      • We can do Lamport clocks: if we imagine controllers communicating with IBC, they timestamp every message sent and delay the “receipt” of any IBC message until their own internal clock is greater than the message’s timestamp, so “causal” order is “consistent” with timestamps (see the sketch after this list).
        • This deliberately introduces delays that may be undesirable.
  • Outside of predicates that have to be evaluated at commit-time, oracles can “timestamp” hashes with statements of the form “I saw hash _ before time _,” and you can quickly accrue exponentially-many such attestations from different time-oracles. This may not be enough to do what you want.
  • Finally, remember that there is no universal time, real physical events happen in different orders for different observers, and despite this, programmers invariably mess up when they aren’t provided with global serializability.
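As an illustration of the Lamport-clock delivery rule from the list above, here is a minimal sketch in Rust; the message and controller types are hypothetical, not an actual IBC implementation.

/// Hypothetical cross-controller message, stamped with the sender's clock.
struct IbcMessage {
    sent_at: u64,     // sender controller's clock when the message was sent
    payload: Vec<u8>, // opaque message contents
}

/// Hypothetical receiving controller with its own internal clock.
struct Controller {
    clock: u64,             // this controller's internal clock
    inbox: Vec<IbcMessage>, // messages received but not yet "delivered"
}

impl Controller {
    /// Deliver every queued message whose timestamp the local clock has
    /// already passed; everything else stays delayed, which is exactly the
    /// (possibly undesirable) delay mentioned above.
    fn deliverable(&mut self) -> Vec<IbcMessage> {
        let clock = self.clock;
        let (ready, delayed): (Vec<IbcMessage>, Vec<IbcMessage>) =
            self.inbox.drain(..).partition(|m| m.sent_at < clock);
        self.inbox = delayed;
        ready
    }
}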

Just as an operational note to add to @isheff’s comprehensive description of what’s possible - the transaction function (which runs post-ordering) could request time attestations (e.g., from the controller) and use those as inputs to compute the final transaction (including, potentially, resource data). Proofs for resources so computed cannot be created before ordering, yes, but resources could be split out so that this isn’t such a big deal - e.g., suppose that I have a shielded resource that can only be transferred (or, more generally, acted upon) before time t:

Pre-ordering

  • I consume my resource, create whatever corresponding new resource, and create a carrier resource for the timestamp check
  • I create a shielded proof that the transfer is correct

Post-ordering

  • The transaction function runs and computes the latest controller-attested timestamp, which is used as input for the timestamp check carrier resource logic proof
  • As long as the controller-attested timestamp is less than the timestamp in the carrier resource, the transaction is valid and we’re done
  • No information needed to be revealed other than this timestamp check carrier resource

Now, there’s a little bit of an annoyance here - the timestamp check carrier resource must predict a timestamp which will be after the controller’s attested timestamp at the time the transaction is ordered, and this prediction will have some error, so that in effect our transaction must be crafted before t - e for some epsilon e. In practice for many applications this seems unlikely to be an issue, since controller timestamps are unpredictable anyways.
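To make this concrete, here is a minimal sketch in Rust of the post-ordering check; the carrier resource layout and function names are hypothetical.

/// Hypothetical carrier resource for the timestamp check; it only stores
/// the deadline before which the transfer must be ordered.
struct TimestampCarrier {
    deadline: u64, // the (predicted) time t - e from above, as a Unix timestamp
}

/// Sketch of the carrier resource logic evaluated post-ordering by the
/// transaction function: the transaction is valid iff the controller-attested
/// timestamp is still below the carrier's deadline. Nothing else about the
/// shielded transfer needs to be revealed.
fn carrier_logic(carrier: &TimestampCarrier, attested_time: u64) -> bool {
    attested_time < carrier.deadline
}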


I looked at how different controllers handle time.

Ethereum

In PoW ETH, block timestamps were set by miners and varied; on average, blocks occurred every ~13 seconds.

After the merge, PoS ETH now has a more reliable clock. Slots occur every 12 seconds, and each slot has the potential to contain a block. Honest nodes are assumed to have clocks synchronized to within 12 seconds of each other.
However, validator committees (randomly selected via RANDAO) are incentivized to synchronize their clocks much more precisely than that, since introducing delay reduces validator rewards (see Anatomy of a Slot: The tumultuous 12 seconds during Ethereum slots # Defining Attestation Effectiveness).

Tendermint BFT

The Tendermint BFT specs require time to increase monotonically. Time is calculated from the timestamps of validator pre-commits during the consensus process.
Each timestamp in the validators’ vote messages is weighted (conceptually, repeated) proportionally to the validator’s voting power, and the median of the resulting distribution is picked to prevent skewing.
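A minimal sketch of that weighted median in Rust (the validator set is simplified to (timestamp, voting power) pairs):

/// Sketch of Tendermint-style block time: each pre-commit timestamp is
/// weighted by its validator's voting power, and the weighted median is
/// chosen so that skewing the result requires more than half of the total
/// voting power.
fn weighted_median_time(mut votes: Vec<(u64, u64)>) -> u64 {
    // votes: (timestamp, voting power) pairs from validator pre-commits
    votes.sort_by_key(|&(timestamp, _)| timestamp);
    let total_power: u64 = votes.iter().map(|&(_, power)| power).sum();
    let mut accumulated = 0;
    for (timestamp, power) in votes {
        accumulated += power;
        if 2 * accumulated >= total_power {
            return timestamp;
        }
    }
    unreachable!("a non-empty validator set always reaches the median")
}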


Solana

Solana combines Proof of Stake with Proof of History. Similar to Ethereum, Solana uses slots.
The Solana specs are not super helpful, but I’ve found an article explaining how time is handled.

A slot in Solana is a fixed duration of time, currently set at 400 milliseconds, during which a validator has the opportunity to produce a block. Slots are sequential, meaning that they occur one after another in a linear fashion. This predictable progression of slots ensures a consistent and orderly block production process, which contributes to Solana’s overall efficiency.

If a validator fails to produce a block during its assigned slot, the network does not stall or wait for the validator to catch up. Instead, it moves on to the next slot, giving the subsequent validator an opportunity to propose a new block. […].

In Solana, the Verifiable Delay Function (VDF) implemented through Proof of History (PoH) aids the process of leader rotation among validators. […]

Bitcoin

Bitcoin specs define valid timestamps as follows:

A timestamp is accepted as valid if it is greater than the median timestamp of previous 11 blocks, and less than the network-adjusted time + 2 hours. “Network-adjusted time” is the median of the timestamps returned by all nodes connected to you. As a result block timestamps are not exactly accurate, and they do not need to be. Block times are accurate only to within an hour or two.

Bitcoin also defines an nLockTime parameter, which encodes the earliest time at which a transaction can be included in a block. It is compared against the median timestamp of the 11 blocks preceding the block in which the transaction is mined (see here).
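A minimal sketch of that comparison in Rust (simplified to time-encoded lock times only):

/// Median timestamp of the previous 11 blocks ("median time past").
fn median_time_past(prev_timestamps: [u64; 11]) -> u64 {
    let mut sorted = prev_timestamps;
    sorted.sort();
    sorted[5] // the middle element of 11
}

/// Sketch of the nLockTime rule: a time-encoded lock time is satisfied
/// once the median time past of the preceding 11 blocks exceeds it.
fn locktime_satisfied(n_lock_time: u64, mtp: u64) -> bool {
    n_lock_time < mtp
}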

This loose definition of time gives miners the opportunity to manipulate timestamps and execute time warp attacks.

Conclusion

Unsurprisingly, controller time is heterogeneous in the same way as other controller properties are (such as finality, re-org probability, etc.).

Realizing Resource-Logic Time Constraints

Knowing this, how can we realize time constraints? A simple way is the following.
Give the resource logic of a resource (referred to as self below) access to the global variables

controller(self).blocknumber
controller(self).timestamp

thus allowing it to express constraints such as

controller(self).blocknumber < expirationBlocknumber
controller(self).timestamp < expirationTimestamp
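For illustration, here is the coupon logic from the top of the thread restated against these proposed built-ins, sketched in Rust; the ControllerView struct is a made-up stand-in for the controller(self) globals.

/// Made-up stand-in for the proposed controller(self) globals.
struct ControllerView {
    blocknumber: u64,
    timestamp: u64,
}

/// The coupon logic from the top of the thread, restated against the
/// proposed built-ins: the resource is valid only strictly before its
/// expiration timestamp.
fn coupon_logic(ctrl: &ControllerView, expiration_timestamp: u64) -> bool {
    ctrl.timestamp < expiration_timestamp
}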

In practice, block number constraints might be less relevant than timestamp constraints for three reasons:

  1. Block numbers and block frequencies vary widely between controllers. Block number constraints would therefore behave unexpectedly when resources move between controllers.
  2. Controller times are consistent with each other and with world time within some Δ.
  3. Timestamps are more natural and easier for users to reason about.

In particular, time stays consistent as long as a resource remains on one controller (e.g., if it is repeatedly transferred but stays on the same controller). Time inconsistency can happen when a resource moves from controller A to controller B.
We can remove the inconsistency by adding an intent to the transaction enforcing

controller(resA).timestamp < controller(resB).timestamp

where resA and resB are the resources consumed and created on controllers A and B, respectively.
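Sketched as a predicate in Rust (assuming both controller-attested timestamps are available as inputs to the intent):

/// Sketch of the cross-controller monotonicity intent: the transaction is
/// only valid if controller B's attested time is ahead of controller A's,
/// so the resource never travels back in time when it moves.
fn monotonic_transfer(controller_a_time: u64, controller_b_time: u64) -> bool {
    controller_a_time < controller_b_time
}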
