I think the purpose of timestamps in the Merkle tree is to optimize storage. If each node carries a timestamp and an expiration date is set, storage can delete the nodes of a full subtree (one that will never be modified again) once the expiration date passes. When that happens, clients become responsible for storing the subtree's Merkle path themselves if they ever want to create proofs for commitments stored in that subtree.
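To make the pruning idea concrete, here is a minimal sketch (my own illustration, not the actual storage implementation): each node carries the timestamp of its latest update, and an expired full subtree is collapsed down to its root digest, so the overall tree hash is unchanged while the subtree body is forgotten.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    """A Merkle tree node carrying the timestamp of its latest update."""
    def __init__(self, digest, timestamp, left=None, right=None):
        self.digest = digest
        self.timestamp = timestamp
        self.left = left
        self.right = right

def prune_expired(node, now, ttl):
    """Drop the children of any subtree whose latest update is older than
    the expiration window (so it will not be modified again); the root
    digest is kept, so the tree hash is unchanged."""
    if node is None:
        return None
    if now - node.timestamp > ttl:
        node.left = node.right = None  # storage forgets the subtree body
        return node
    node.left = prune_expired(node.left, now, ttl)
    node.right = prune_expired(node.right, now, ttl)
    return node

# A tiny tree: an old, settled subtree next to a fresh leaf.
old1 = Node(h(b"cm1"), timestamp=0)
old2 = Node(h(b"cm2"), timestamp=0)
settled = Node(h(old1.digest + old2.digest), timestamp=0, left=old1, right=old2)
fresh = Node(h(b"cm3"), timestamp=100)
root = Node(h(settled.digest + fresh.digest), timestamp=100, left=settled, right=fresh)

prune_expired(root, now=150, ttl=60)
# The settled subtree's children are gone, but its digest (and the root) survive.
```

A client that kept `old1`/`old2` around can still rebuild inclusion proofs against the surviving root digest after this pruning.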
Here is the thread where it was initially discussed: Stored data format: resource machine <> storage
I’m not sure how storage policies work in the shielded case, though; perhaps @isheff has some idea.
The reason we have an explicit Merkle tree structure for resources in the transparent case (and anything else that seems inconvenient from a transparent perspective) is that the transparent resource machine implements the resource machine specification: a shared interface that has to apply to both transparent and shielded systems (and anything in between) without leaving the shielded system underspecified, so that resource machine implementations can interoperate regardless of their privacy properties.
The spec requires that every implementation store resource commitments in a cryptographic accumulator for which proofs of inclusion can be produced. It currently describes Merkle trees as the accumulator of choice, but they can be abstracted away if needed. If you don’t want to use Merkle trees in the transparent system, you can either:
- find another way to satisfy the spec, potentially using another kind of accumulator (it doesn’t have to be exactly like in taiga as long as it still satisfies the spec), or
- suggest another design for accounting for and storing commitments that better suits the transparent system and doesn’t break (or leave underspecified) a shielded implementation of the proposed design
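For illustration, here is a hedged sketch of what the Merkle-tree flavor of the accumulator interface looks like (the function names are mine, not from the spec): commitments go in as leaves, and an inclusion proof is just the sibling path, which a client can store and verify against the root on its own.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over hashed leaves (power-of-two count assumed)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Inclusion proof for leaves[index]: a list of (sibling_digest, sibling_is_right)."""
    level = [h(l) for l in leaves]
    path, i = [], index
    while len(level) > 1:
        sib = i ^ 1  # sibling index at this level
        path.append((level[sib], sib > i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(root, leaf, path):
    """Recompute the root from the leaf and its sibling path."""
    d = h(leaf)
    for sib, sib_is_right in path:
        d = h(d + sib) if sib_is_right else h(sib + d)
    return d == root

commitments = [b"cm0", b"cm1", b"cm2", b"cm3"]
root = merkle_root(commitments)
proof = merkle_proof(commitments, 2)
```

Any alternative accumulator proposed for the transparent system would need to offer an equivalent add/prove/verify interface to keep the shielded side fully specified.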