Circles Entropy: short-term outlook

This post summarizes short-term goals for Circles Entropy. We provide some background on CirclesUBI in its current form, giving enough context to reflect on the general problem statement as well as on problems with the current implementation. We then outline a short-term approach to the problem, centred on an architecture of solvers that run trusted execution environments (TEEs).


We start by stating some points that are not entirely clear in the current CirclesUBI whitepaper (Whitepaper | Circles UBI | Handbook).

CirclesUBI acts as a payment system that relies on a web of trust, where every user has their own personal Circles (CRC) token minted at a rate of 1 CRC per hour. Trust has a clear definition in the context of CirclesUBI and is explicitly stated by users. Creating a trust connection between A and B (B trusts A) means that:

  • B now accepts A-tokens as a part of a payment
  • B now accepts that, during a payment made by any user in the system, A-tokens (together with any number of other token types B accepts) can be swapped automatically with any token B holds (including B-tokens) and sent to any users that accept the corresponding tokens, as long as the sum of incoming tokens equals the sum of outgoing tokens. This is known as a transitive transfer.
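The mechanics above can be sketched in a few lines of Python (an illustrative toy, not the CirclesUBI implementation): a transfer of 10 CRC from A to C via intermediary B, where C trusts B but not A. B accepts A's tokens and forwards the same amount of B-tokens to C, so B's total balance is conserved.

```python
# Toy balances: each user maps token issuer -> amount held.
balances = {
    "A": {"A": 50},  # A holds 50 of her own tokens
    "B": {"B": 30},  # B holds 30 of his own tokens
    "C": {},
}

def transitive_step(sender, hop, receiver, token, amount, balances):
    """Move `amount` of sender's `token` to hop, and the same amount of
    hop's own token on to receiver. Hop's total balance is unchanged."""
    balances[sender][token] -= amount
    balances[hop][token] = balances[hop].get(token, 0) + amount
    balances[hop][hop] -= amount
    balances[receiver][hop] = balances[receiver].get(hop, 0) + amount

transitive_step("A", "B", "C", "A", 10, balances)
# A ends with 40 A-tokens; B holds 10 A-tokens and 20 B-tokens (total
# unchanged at 30); C holds 10 B-tokens, which she accepts.
```

Note how the intermediary's incoming tokens exactly equal its outgoing tokens, which is the conservation condition stated in the second bullet.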

Trust limits

One implementation detail of CirclesUBI is the “trust limit”, a user-configurable threshold (50% by default) that caps the maximum amount of other users' tokens present in an account at any time. It was implemented as a quick safeguard for users but has proven problematic when dealing with large transactions. In future iterations of CirclesUBI, fully programmable trust limits that let users express their preferences would make more sense.
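A simplified illustration of the idea (the actual CirclesUBI contract logic is more involved): an account refuses an incoming token if accepting it would push that token's share of the total balance above the limit.

```python
def within_trust_limit(holdings, incoming, amount, limit=0.5):
    """Would accepting `amount` of `incoming` tokens keep that token's
    share of the total balance at or below `limit`? (Toy model.)"""
    new_total = sum(holdings.values()) + amount
    new_incoming = holdings.get(incoming, 0) + amount
    return new_incoming / new_total <= limit

b_holdings = {"B": 30}
within_trust_limit(b_holdings, "A", 10)  # True: 10/40 = 25%
within_trust_limit(b_holdings, "A", 40)  # False: 40/70 ≈ 57%
```

The second call shows why large transactions are problematic: a single big inbound transfer of a foreign token trips the threshold even when the recipient fully trusts the sender.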

Max-flow and capacity graph

In order to maximize spendable Circles between users, a max-flow algorithm is used to calculate the maximum payment capacity between two users. Over time, as CirclesUBI users and their trustees make and receive payments, we arrive at a situation where every user holds an array of different tokens from users they trust. This further increases the dependency on highly interconnected trust connections to maximize the tokens a user is able to spend in the network. Thus max-flow is not executed on the web of trust itself but rather on a structure we call the “capacity graph”, derived from the WoT graph plus the state of all users' asset allocations.

The payment capacity of an edge is the number of tokens possessed by the source node that are accepted by the target node. A node's token never leaves the star centred at that node. Since a token is free to move around within its star, it can be freely swapped to increase the capacity of a given path in the graph. In practice, there is an upper limit to how many tokens can be swapped in this manner.
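The derivation of the capacity graph from trust edges and asset holdings can be sketched as follows (names and data shapes are illustrative, not the production schema):

```python
def capacity_graph(trust, holdings):
    """trust[v]  = set of issuers whose tokens v accepts;
    holdings[u][t] = amount of t-tokens held by u.
    Edge (u, v) gets capacity = tokens u holds that v accepts."""
    edges = {}
    for u, tokens in holdings.items():
        for v, accepted in trust.items():
            if u == v:
                continue
            cap = sum(amt for t, amt in tokens.items() if t in accepted)
            if cap > 0:
                edges[(u, v)] = cap
    return edges

trust = {"A": {"A"}, "B": {"A", "B"}, "C": {"B", "C"}}
holdings = {"A": {"A": 40}, "B": {"B": 20, "A": 10}, "C": {"C": 5}}
capacity_graph(trust, holdings)
# {('A', 'B'): 40, ('B', 'A'): 10, ('B', 'C'): 20}
```

Note that B's A-tokens give capacity back towards A as well as towards C, which is exactly the “token moving within its star” effect described above.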

Abstract problem statement

Compute on a graph, minimizing for

  1. trust assumptions between solvers and users
  2. single points of failure
  3. statistical disclosure to observers (including solvers)
  4. disclosure of the graph structure itself
  5. computational resources

Concrete problem statement

Compute max-flow over distributedly held graph data. The graph in this case is a directed graph where nodes are users and an edge between two nodes corresponds to a possible flow path for tokens. The capacity of an edge is the number of tokens in the possession of the source that are trusted by the target.
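For concreteness, here is a minimal Edmonds-Karp max-flow over an edge-capacity dict of the shape described above (a sketch only; the production algorithm and data layout will differ, and here everything runs on locally held data rather than distributed data):

```python
from collections import deque

def max_flow(capacities, s, t):
    cap = dict(capacities)                 # residual capacities
    nodes = {u for e in cap for u in e}
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and cap.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # collect the path, find its bottleneck, update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] = cap.get((v, u), 0) + bottleneck
        flow += bottleneck

max_flow({("A", "B"): 40, ("B", "C"): 20}, "A", "C")  # 20
```

The interesting part of the problem statement is not this textbook computation but running it when the capacity data is held by multiple parties and must not leak.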

Addressing the problem

In the current implementation, transaction data is published on-chain as a series of pairwise token transfers. Since there is no account hiding, it reveals data about the complete trust path of a transaction.

The key challenge is preventing data from being discerned by solvers.

  • Solver/user set intersection: We can’t prohibit solvers from also being users who make payment requests, which means that without rate limiting of requests there is a high potential for statistical disclosure attacks by solvers.

  • Graph data leakage: Path-finding algorithms should not leak information about the graph. This implies not only hiding the graph data itself but also hiding the execution: memory access should be data-independent, i.e. oblivious.
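A toy illustration of what “data-independent memory access” means: instead of indexing directly (which leaks the index through the memory trace), scan the whole table and select with arithmetic, so the access pattern is identical for every query. Real enclave code would use ORAM or constant-time primitives rather than Python, so this is purely illustrative.

```python
def oblivious_read(table, secret_index):
    """Read table[secret_index] while touching every slot, so an
    observer of the access pattern learns nothing about the index."""
    result = 0
    for i, value in enumerate(table):
        match = int(i == secret_index)  # 0 or 1; no data-dependent branch
        result += match * value         # every slot is read every time
    return result

oblivious_read([7, 13, 42, 5], 2)  # 42
```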

An MPC-based approach where parts of the data are held by several agents and invoked when necessary opens up other research questions about when-to-invoke-whom and fine-grained trust assumptions (e.g. a solver who has access to a user’s trust data has different privileges than a solver who doesn’t).

FHE does not solve the problem of statistical disclosure attacks by solvers. Furthermore, current implementations of FHE are too slow for practical use.

Current approach

We describe our current approach, which leverages TEEs to hide the computation from the solver. A sketch of the high-level architecture can be found here.

A solver runs a CometBFT node, which has the ability to offload (parts of) the application logic to a secure enclave (the TEE). In particular, the path-finding happens in the enclave. Once a path with sufficient flow capacity with respect to the transfer request is found, the request is considered solved. The transaction is then settled on some asset chain (e.g. Namada).

The enclave has access to an encrypted database that it may query for graph data during the computation. It generates a zero-knowledge proof of having found a path with maximal flow subject to restrictions described in the first section. Flow capacities are calculated based on assets that have been bridged to the app-chain from the asset chain.

This gives a protocol that needs consensus twice (once while solving and once while settling). It may be argued that it would be better to have consensus once rather than twice, but separating solving from settlement provides modularity. Using a settlement layer like Namada means users can benefit from the multi-asset shielded pool.

Current status

We are looking at various TEE solutions (SGX/TrustZone), and efforts are ongoing to set up testing infrastructure that can be used to incrementally prototype zk-proof generation on a TEE.

In parallel, we are looking at approaches to max-flow that may be best suited for writing efficient circuits. The current implementation uses a variant of the Ford-Fulkerson algorithm, but a linear constraint-solving approach, apart from being more general, may have merits in the circuit setting. While proving that a computation has taken place subject to constraints requires a proof of every step of the computation, we currently hope to do at least some of the computation outside of a circuit for efficiency gains.
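The constraint-solving view mentioned above can be sketched in plain Python: a flow assignment is feasible iff it respects edge capacities and flow conservation at every non-terminal node, and max-flow maximizes net flow out of the source subject to these linear constraints. (This is an illustrative encoding; a circuit-friendly version would look different.)

```python
def feasible(flow, cap, source, sink):
    """Check the linear constraints defining a valid flow:
    capacity bounds on every edge, conservation at interior nodes."""
    nodes = {u for e in cap for u in e}
    # capacity constraints: 0 <= f(e) <= cap(e)
    if any(not (0 <= flow.get(e, 0) <= c) for e, c in cap.items()):
        return False
    # conservation: inflow == outflow at every intermediate node
    for n in nodes - {source, sink}:
        inflow = sum(f for (u, v), f in flow.items() if v == n)
        outflow = sum(f for (u, v), f in flow.items() if u == n)
        if inflow != outflow:
            return False
    return True

cap = {("A", "B"): 40, ("B", "C"): 20}
feasible({("A", "B"): 20, ("B", "C"): 20}, cap, "A", "C")  # True
feasible({("A", "B"): 30, ("B", "C"): 20}, cap, "A", "C")  # False
```

A proof that a candidate flow satisfies these checks, plus an optimality argument, is plausibly cheaper to express in a circuit than a step-by-step trace of an augmenting-path algorithm, which is the motivation for exploring this direction.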

We will do another post on a more long-term approach to this problem, which involves Taiga integration, an arguably better network topology, and, most importantly, no implicit trust assumptions involving large chip-manufacturing corporations.

In the meantime, questions on or further discussion of any topic touched upon above are appreciated.


Great summary!

I have a few questions on the proposed architecture, primarily the description in the diagram:

  • Can you explain a bit more about the arrow between Namada on the left and the CometBFT node on the right? I would have thought that this was an IBC connection, but your arrow says ABCI++ - is that intentional? If so, what sort of ABCI++ interactions do you have in mind?
  • Where do the account balances get updated after solving? I’m guessing that - in order to keep these changes private - you plan to update these in the TEE, then users could potentially withdraw to another chain such as Namada over IBC? It would be helpful for my understanding at least to see a writeup of what state lives where, and what the permissible sequences of updates (state transitions) are.
  • Have you thought about periodicity of solving? I don’t know how it works with current Circles either, actually - does the solver node just try to solve for each intent individually, based on the capacity graph, or is there some kind of batching? If you’re doing solving in TEEs in consensus, there’s almost certainly going to be some kind of batching (since the TEE is replicated and the clock speed is limited by block times), and I wonder if that has any implications for your design / UX.
  • I’m not quite sure I understand exactly what you mean by “run consensus twice”. The way I understood the system initially, users could hold part of their assets on EntropyChain and part of their assets on another chain such as Namada (but not exclusively), and they could just move between them using IBC as normal (perhaps the interface would automate some of this). If the TEE is run in a replicated CometBFT chain, users should be able to get at least some kind of confirmation from just the consensus of that chain, without necessarily waiting to settle anywhere else. Or did you change part of the plan and my understanding is out of date?

Thank you for the commentary!

The architecture diagram is really rough and was rushed; my apologies for the confusing parts. But we also overlooked some IBC capabilities in the intermediate TEE-based design.

You’re right - the ABCI++ arrow was there because the initial idea was to implement ABCI++ methods such that the validation of a state change in Entropy would depend on the state of Namada. So in a way, when I drew it I had in mind an ad hoc, informally specified, slow implementation of half of IBC. Funds locking was supposed to be done via some escrow account on Namada controlled by Entropy, or something similar. I hadn’t realized that we could just use IBC.

Exactly. After tokens are moved from Namada to Entropy, all state lives there.

In this design proposal the mechanism is as follows:
A note: my knowledge of IBC is limited, so please point out any wrong assumptions or uses of terms : )

  1. A set of Namada users bridge part of their tokens to Entropy.
  2. Bridged tokens can be used in WoT (web of trust) based routing, in multi-lateral clearing (Entropy+CoFi), or in con-currency payments. For that, users make payment requests inside Entropy to other bridged accounts or issue IOUs.
  3. Once in a while (how frequently is to be determined) users can apply the state change to Namada or, as you rightfully pointed out, move to any other IBC-enabled chain.

An example: users A and B bridged some of their tokens from accounts Aⁿ and Bⁿ to Entropy accounts Aⁱ and Bⁱ. If there was a payment P from Aⁱ to Bⁱ through the WoT within Entropy and B wants to withdraw the gained amount back to Bⁿ, then A can only withdraw (Aⁱ - P) back to Aⁿ and B can withdraw (Bⁱ + P).
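The example above in code form (illustrative names only): withdrawable amounts on the asset chain are the bridged amounts shifted by the payments settled inside Entropy.

```python
def withdrawable(bridged, payments):
    """bridged[u]  = tokens user u moved to Entropy;
    payments = list of (sender, receiver, amount) settled inside Entropy.
    Returns how much each user can withdraw back to the asset chain."""
    out = dict(bridged)
    for sender, receiver, amount in payments:
        out[sender] -= amount
        out[receiver] += amount
    return out

bridged = {"A": 100, "B": 50}
withdrawable(bridged, [("A", "B", 30)])
# {'A': 70, 'B': 80}: A can withdraw Aⁱ - P, B can withdraw Bⁱ + P.
```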

There is also a concern about the privacy guarantees: does this link to state changes in Entropy break MASP privacy in any way? What happens if the TEEs are compromised? Would that lead to a partial disclosure of Namada balance information (statements like “A had at least Aⁱ tokens on Namada”)? We will look into this.

This is not clear at the moment. For a WoT-enabled payment network it makes sense to solve per payment request (which could be an intent): people expect payments to go through immediately. So if we keep the same mechanics as CirclesUBI, where a “solved” payment invokes a cascade of pairwise transitive exchanges along the trust paths, then we need to run max-flow for each payment. This is not the case for obligation-clearing applications, where invoices are simply placed in a pile for periodic clearing (solving) that cancels big batches of payments at once.

The frequency of confidential max-flow instant payments could be too high for future Taiga-based approaches. In that scenario, I think we could explore ways to reduce the frequency by grouping nodes into dense trust clusters whose participants are expected to balance out to zero over time. We could then do less frequent MTCS-style clearing and only do per-request solving when payments cross the boundaries of those clusters. This makes more sense with Kudos, because the density of trust clusters doesn’t necessarily reflect the frequency of transactions. But I’m just throwing out ideas here.
Or did you mean that because we run CometBFT consensus we’ll need to validate a bunch of transactions that will be committed to a block, which might have an effect on the overall behavior of the system? I’m not sure I got what you referred to by batching.

I think this formulation is again the result of failing to take into account that IBC already provides what we need. The way you understood the system initially is in sync with ours : )


Welcome to the forums! I’ll leave just a quick response here for now; there are also interesting questions related to “multi-frequency-level clearing”, as you mention, but they deserve a separate exposition :grin:

Gotcha - glad to hear that - the IBC interface boundary is very well defined, so it should be easy to further define all of the interactions (and test them).

Yes, that’s right.

This should be independent, I think. The users know which of their accounts are linked, but neither Namada nor Entropy should need to. A TEE compromise would reveal information about what folks were doing on Entropy, and that could provide additional timing information to correlate with observed shielded transactions, but in the worst case (TEE compromise) Entropy is just like another transparent chain.

What exactly is “immediately”, though? If you’re waiting for a block to be confirmed anyways (~5 seconds), you could do once-per-block batches with presumably minimal impact on user-facing latency.