Idea: Impermanent-loss-hedged LP strategies

Hi there, this is my first time posting here.
I'm sakuro, building TETRICS (formerly GOEMON) and joining the Anoma cohort to shape an intent-centric future.

We are collecting more trading data to attract solvers and building strategies that use the power of intents. Here is the latest financial strategy we're working on.
We'd appreciate any feedback and comments.

Strategy: Impermanent‑Loss‑Hedged LP on AMM

  • Problem Statement
    Impermanent loss (IL) has been a major obstacle to AMM TVL growth. LST/LRT, stablecoin, and lending applications attract more TVL because they don't suffer any IL.

  • Why not hedge IL?
    Because AMM LPs are natively short gamma, they have to buy options to hedge IL (a minimal sketch of this follows below).
    However, DeFi has virtually no on-chain options liquidity: roughly 90% of crypto options liquidity trades OTC among institutions, about 10% sits on CEXs, and less than 0.1% is in DeFi.
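For intuition on the short-gamma point, here is a minimal sketch using the classic divergence-loss formula for a plain 50/50 constant-product pool (CLMM ranges concentrate the same effect). The payoff is concave, losing on moves in either direction, which is exactly what being short gamma means and why the hedge has to be bought options:

```python
import math

def divergence_loss(k: float) -> float:
    """Divergence (impermanent) loss of a 50/50 constant-product LP
    versus simply holding, where k = final price / initial price."""
    return 2 * math.sqrt(k) / (1 + k) - 1

for k in (0.5, 0.75, 1.0, 1.5, 2.0):
    print(f"price x{k:<4}: IL = {divergence_loss(k):+.2%}")
```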

There have been several solutions to date, listed here for reference:

  • Gammaswap
    Pros: You can hedge your AMM LP position on-chain.
    Cons: The hedging cost is expensive compared with CEX option pricing, and liquidity is small and not scalable.

  • MEV capital
    Pros: Managed by a top hedge fund, with access to large liquidity.
    Cons: Not on-chain; you have to rely on one very specific market maker's option pricing.

Solution

At the beginning we act as the solver ourselves, collateralizing the AMM LP position to buy options and build the hedge positions.

We aggregate options liquidity from multiple CEXs and private market makers and execute the multi-leg position as a single atomic on-chain transaction. By aggregating multiple option sources, we can execute much larger IL-hedge positions entirely on-chain (a sketch of the aggregation step follows below).
As the solver, we expect to take a few percent of spread per month, and ROI improves with larger volume.
For strategy performance, we target -5% ~ +50% APY on a USD basis.
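As an illustration of the aggregation step only (the venue names, quote structure, and sizes below are hypothetical; in practice the resulting fills would become the legs of one atomic transaction), a minimal cheapest-fill sketch:

```python
from dataclasses import dataclass

@dataclass
class PutQuote:
    venue: str     # hypothetical venue label (CEX or private market maker)
    strike: float  # USDC per ETH
    ask: float     # premium per contract in USDC
    size: float    # contracts offered at this ask

def fill_cheapest(quotes: list[PutQuote], target: float) -> list[tuple[PutQuote, float]]:
    """Greedily fill `target` contracts from the cheapest asks across venues."""
    fills, remaining = [], target
    for q in sorted(quotes, key=lambda q: q.ask):
        take = min(q.size, remaining)
        fills.append((q, take))
        remaining -= take
        if remaining <= 0:
            break
    return fills

book = [  # hypothetical quotes for the same 1500-strike ETH put
    PutQuote("cex_a", 1500.0, 42.0, 30.0),
    PutQuote("mm_b",  1500.0, 40.5, 10.0),
    PutQuote("cex_c", 1500.0, 44.0, 50.0),
]
for q, n in fill_cheapest(book, 25.0):
    print(f"{q.venue}: buy {n} x {q.strike} put @ {q.ask}")
```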

We will test the strategies for a while, and we're looking for feedback and marketing support once we go live. Any comments are appreciated. I'm also heading to TOKEN2049 in Dubai next week and would be happy to connect.

Best,

Info
X: https://x.com/tetrics_io
Document: Welcome | TETRICS
Telegram ID: @sakuro_tetrics


Thank you @sakuro! The write-up and the links to the documentation are really helpful!
One question that came to mind was how you estimated the 'strategy performance'. I'm unsure whether I'm interpreting the provided numbers correctly; could you give a breakdown of how the "-5% ~ +50% APY on USD base" figure is composed? Thank you!


Yes, absolutely.
This impermanent loss hedging strategy consists of three components that affect PNL:

  1. Acquired LP Rewards
    Trading fees and farming rewards earned by providing liquidity to an AMM (CLMM).
  2. Net Impermanent Loss
    If liquidity is provided to an ETH-USDC pair within a $1,500–$2,000 range and ETH falls to $1,500, the position is fully converted into ETH. Valued in USD, the LP incurs a net loss on the acquired ETH (see the worked example after this list).
  3. Hedge Cost
    Part of the LP rewards is used to purchase options, aiming to offset impermanent loss.
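To make component 2 concrete, here is a minimal worked example of the range math, assuming a Uniswap-v3-style CLMM and a deposit made at the top of the range (the liquidity value L = 1 is just a scale factor):

```python
import math

def clmm_value_usd(L: float, p: float, pa: float, pb: float) -> float:
    """USD value of a Uniswap-v3-style range position with liquidity L
    at ETH price p (USDC per ETH), for the range [pa, pb]."""
    sa, sb = math.sqrt(pa), math.sqrt(pb)
    sp = math.sqrt(min(max(p, pa), pb))   # outside the range the token mix is fixed
    eth  = L * (sb - sp) / (sp * sb)      # token0 (ETH) held by the position
    usdc = L * (sp - sa)                  # token1 (USDC) held by the position
    return eth * p + usdc

pa, pb, L = 1500.0, 2000.0, 1.0
entry = clmm_value_usd(L, 2000.0, pa, pb)   # deposit at $2,000: all USDC
exit_ = clmm_value_usd(L, 1500.0, pa, pb)   # after the drop: all ETH, marked at $1,500
print(f"net loss in USD terms: {exit_ / entry - 1:+.1%}")   # about -13.4%
```

That roughly -13.4% (it equals sqrt(1500/2000) - 1) is the loss that components 1 and 3 are meant to offset.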

Assuming a gross LP reward rate of 70% APY, of which 30% is allocated to hedging costs and 70% is retained, the maximum potential APY would be:

70% × (100% - 30%) = 49% APY


Maximum Loss Estimate
Hedging cost depends on option implied volatility and market liquidity at the time.
Assuming that each major ETH price drop results in an impermanent loss capped at 6% through option hedging, and noting that the highest yearly count of weekly ETH drops of more than -10% over the past five years was 9 times (in 2021),
the maximum cumulative loss would be:

-6% × 9 = -54%

-54% + 49% = -5%

This is how the estimated performance range of "-5% to +50%" was calculated; a minimal recap of the arithmetic follows below.
However, this simulation is based on assumptions and should not be relied on to evaluate the strategy's performance.
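A short recap of the arithmetic above, under the same stated assumptions:

```python
gross_reward_apy = 0.70   # assumed gross LP reward rate (APY)
hedge_share      = 0.30   # share of rewards spent on option hedges
il_cap_per_drop  = 0.06   # assumed max loss per hedged major ETH drop
worst_year_drops = 9      # most >10% weekly ETH drops in one year (2021)

best_case  = gross_reward_apy * (1 - hedge_share)             # +49%
worst_case = best_case - il_cap_per_drop * worst_year_drops   # -5%
print(f"estimated range: {worst_case:+.0%} to {best_case:+.0%}")
```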


Forward Test Results
We have conducted a forward test using real capital:

  • Start date: April 29, 22:36 UTC+9
  • Testing Period: 22 days
  • Latest ROI: +9.04%
  • Max Drawdown: -1.50%
  • Current APY: +150.47%
    TETRICS Calc - Google Sheets

This forward test reflects the user’s PNL returns and is separate from solver returns.
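For what it's worth, the quoted APY looks consistent with simple (non-compounded) annualization of the 22-day ROI; the small gap to +150.47% presumably comes from the exact elapsed hours. A quick check:

```python
roi = 0.0904                 # +9.04% over the 22-day test window
apy = roi * 365 / 22         # simple annualization, no compounding
print(f"annualized: {apy:+.2%}")   # about +150%
```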
