This article is a technical/educational essay on AMM mathematics. It is for informational purposes only and does not constitute investment, legal, or financial advice, nor a recommendation to provide liquidity, trade, or interact with any specific decentralized exchange, token, or protocol. Named protocols (Uniswap, Orca/Whirlpool, Meteora, etc.) are referenced for technical illustration; the publisher holds no positions in and has received no compensation from any party named herein.

Concentrated Liquidity Math — Ticks and Bins

I know how constant product AMMs work. I can derive the swap output formula in my sleep. Given reserves x and y and an input of delta_x, the output is y * delta_x / (x + delta_x). Clean, deterministic, elegant. The curve is a hyperbola, the product stays constant, and every trade slides along that smooth, predictable arc. I've built my screener around this math. It works.

And now I'm staring at a Whirlpool pool and a Meteora DLMM pool, and none of that math applies.

These pools don't use x * y = k. They use something more complex — concentrated liquidity — and the math underneath is dense enough that I've spent the last three days reading whitepapers, tracing through on-chain program logic, and filling a notebook with diagrams that look like the work of someone who's lost their mind. Ticks. Bins. Square root prices encoded as 128-bit fixed-point numbers. Liquidity that appears and disappears as the price moves. It's a different world, and if my bot is going to route through Whirlpool and Meteora pools — which it has to, because that's where a huge portion of Solana's volume lives — I need to understand this world completely.

Not approximately. Completely.

The x * y = k World — Clean But Wasteful

Before I dive into what concentrated liquidity is, I need to understand what problem it solves. And the problem is waste.

In a traditional constant product AMM, liquidity is spread uniformly across the entire price range — from zero to infinity. Every possible price, from SOL-is-worthless to SOL-costs-a-billion-dollars, has the same density of capital allocated to it. The curve doesn't care whether a price is realistic or absurd. It provisions liquidity for all of them equally.

Think about this the way a city planner thinks about zoning. Imagine a city that zones every single block identically — residential, commercial, industrial, parks, hospitals — all in equal proportion, on every block, from the dense downtown core all the way out to the empty desert at the city limits. Sure, every block is "ready for anything." But the hospital on the block where nobody lives? The shopping mall in the middle of the desert? Those are wasted resources. Capital is sitting in zoning allocations that will never be used, while the downtown blocks where everyone actually lives are underserved.

That's x * y = k. The liquidity providers deposit capital into a pool, and that capital gets spread across the entire price curve. But the current price of SOL/USDC is around 200. Nobody is trading SOL at $0.01 or at $50,000. The liquidity sitting at those extreme prices is doing nothing. It's not earning fees. It's not facilitating trades. It's just... there. Allocated but idle.

The implication is stark. For a typical constant product pool, the vast majority of deposited capital sits at prices far from the current market. Only a small fraction actively participates in trades near the current price. Liquidity providers are deploying a hundred dollars to earn fees on a few dollars' worth of work.

This is the inefficiency that concentrated liquidity is designed to eliminate.

"Concentrate the Liquidity" — The Idea

The concept, when you strip away the math, is simple: what if liquidity providers could choose which price range their capital covers?

Instead of spreading $10,000 across the entire curve from zero to infinity, an LP says: "I want my $10,000 to provide liquidity only between $180 and $220." All of their capital is compressed into that narrow band. Within that range, the effective liquidity — the capital actually available for trades — is massively amplified. The same $10,000 that would have provided a thin layer of liquidity across an infinite range now provides a thick, concentrated wall of liquidity across a $40 range.

The efficiency gain is enormous. An LP providing concentrated liquidity in a narrow range — say, a few percent of the full price spectrum — can achieve capital efficiency orders of magnitude higher than the same deposit in a constant product pool. Same deposit. Dramatically more trading capacity. Dramatically more fees earned per dollar deployed.
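To put a rough number on that claim, here's a back-of-the-envelope Python sketch. The efficiency formula is my own derivation from the standard range-position value expressions, evaluated with the current price at the geometric midpoint of the range; it's not code from any protocol.

def capital_efficiency(price_lower: float, price_upper: float) -> float:
    # Efficiency multiple of a range position versus a full-range x*y=k
    # position holding the same liquidity, with the current price sitting
    # at the geometric midpoint of the range.
    return 1.0 / (1.0 - (price_lower / price_upper) ** 0.25)

print(round(capital_efficiency(180, 220)))   # ~20x for the $180-$220 band
print(round(capital_efficiency(198, 202)))   # ~200x for a roughly +/-1% band

By this estimate, the $180-$220 band is about a 20x improvement over spreading the same deposit across the whole curve, and a band only a few dollars wide pushes past 200x.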

It's the difference between Walmart and a specialty store. Walmart spreads its square footage across every product category — groceries, electronics, clothing, automotive, pharmacy, garden furniture. A specialty store takes the same floor space and fills it entirely with one category. If you're shopping for running shoes, the specialty store has fifty options on the shelf while Walmart has three. The total investment might be similar, but the depth in any one category is radically different.

There's a catch, of course. There's always a catch.

If the price moves outside the LP's chosen range, their liquidity becomes inactive. They stop earning fees entirely. Worse, they're now holding 100% of whichever token depreciated — the wrong side of the trade. They need to actively manage their position: monitor prices, adjust ranges, rebalance. Passive buy-and-hold LP? That's the old world. Concentrated liquidity demands attention.

But the efficiency gain is why every major DEX has moved to some form of this model. Uniswap V3 pioneered it on Ethereum. Orca's Whirlpools and Meteora's DLMM brought it to Solana. And now I need to compute swap outputs through these pools, which means I need to understand the math that makes this work.

Square Root Price — Why Not Just Price?

The first thing that throws me is that Whirlpool doesn't store the price directly. It stores the square root of the price, and every calculation works with sqrt_price, not price. (Meteora encodes price differently, through bin IDs, which I'll get to later.) My first instinct is that this is needless complexity — why add a square root when you could just store the number directly?

It takes me a while to see why, and when I see it, it's genuinely clever.

In a concentrated liquidity pool, there are two types of swaps: selling Token X for Token Y, and selling Token Y for Token X. These two directions move different things. When you sell Token X, the amount of Token X in the pool increases, and the price (expressed as Y per X) decreases. When you sell Token Y, the amount of Token Y increases, and the price increases.

If you store the price directly, the formula for computing how much the price changes during a swap differs depending on the direction. One direction involves multiplication, the other involves division. The formulas are asymmetric, and the code needs separate logic for each case.

But if you store the square root of the price, something elegant happens. The relationship between liquidity, the change in reserves, and the change in sqrt_price becomes linear in both directions: swapping Token X changes 1/sqrt_price by exactly delta_x / L, and swapping Token Y changes sqrt_price by exactly delta_y / L, where L is the liquidity. Both directions reduce to simple additions and subtractions on sqrt_price (or its reciprocal) — symmetric, clean, and expressible with one unified code path.
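Here's a minimal sketch of those linear relationships, written in plain Python floats rather than the protocols' fixed-point integers; the variable and function names are mine.

# For liquidity L, moving sqrt_price from sp0 to sp1 changes reserves linearly:
#   delta_x = L * (1/sp1 - 1/sp0)    (token X reserves)
#   delta_y = L * (sp1 - sp0)        (token Y reserves)

def delta_x(liquidity: float, sp0: float, sp1: float) -> float:
    return liquidity * (1.0 / sp1 - 1.0 / sp0)

def delta_y(liquidity: float, sp0: float, sp1: float) -> float:
    return liquidity * (sp1 - sp0)

L, sp0, sp1 = 1_000_000.0, 200 ** 0.5, 201 ** 0.5
# Price rising from 200 to 201: X leaves the pool (negative), Y enters (positive)
print(delta_x(L, sp0, sp1), delta_y(L, sp0, sp1))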

It's like converting between Fahrenheit and Celsius. The conversion formula is annoying — multiply by 9/5, add 32. But if you work in Kelvin, certain physics equations simplify dramatically because you've eliminated the offset. Working in sqrt_price instead of price eliminates a class of asymmetries that would otherwise complicate every swap calculation.

There's a second, more practical reason: overflow prevention. Prices in DeFi can span an enormous range. A token worth $0.000001 and a token worth $100,000 differ by 11 orders of magnitude. Their prices, expressed as one token per another, can be astronomically large or microscopically small. Square roots compress this range — the square root of a billion is only about 31,600. Keeping numbers smaller means less risk of blowing past the maximum value that a 128-bit integer can hold.

Q64.64 — How Blockchains Handle Decimals

Now I need to understand how this square root is actually stored on-chain, because blockchains don't have floating-point numbers. No decimals. No fractions. Everything is integers.

The answer is fixed-point arithmetic, and specifically a format called Q64.64. The idea is straightforward: take a 128-bit unsigned integer (a u128), and treat the top 64 bits as the integer part and the bottom 64 bits as the fractional part. It's like counting in dollars and cents, except instead of 2 decimal places, you have 64 binary places of fractional precision.

When Whirlpool stores a sqrt_price, it stores it as a Q64.64 value. If the actual mathematical square root of the price is 14.142135... (which would be the case for SOL/USDC at $200), that number gets multiplied by 2^64 — roughly 18.4 quintillion — and stored as a u128 integer. The enormous multiplier gives you 64 bits of fractional precision without ever touching a decimal point.

Think of it like a carpenter measuring in thirty-seconds of an inch. Instead of saying "three and seven-sixteenths inches," they say "110 thirty-seconds." Same measurement, different representation, but the thirty-seconds version is an integer — no fractions required. Q64.64 does the same thing with 2^64 subdivisions instead of 32.

To convert back: take the u128, divide by 2^64, and you get the actual sqrt_price as a human-readable number. To get the actual price from that, square it. All of the on-chain math — every addition, every multiplication during a swap calculation — operates on these Q64.64 values using integer arithmetic. The precision is extraordinary: 64 bits of fractional data gives you about 19 decimal digits of accuracy. More than enough for any financial calculation.

But it means every formula I implement has to account for this encoding. When I multiply two Q64.64 numbers, the result has 128 fractional bits — I need to shift right by 64 bits to get back to Q64.64 format. When I divide, I need to shift the numerator left by 64 bits first to preserve precision. The math is correct but the bookkeeping is unforgiving. One missed shift and the number is off by a factor of 2^64.
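A rough sketch of what that bookkeeping looks like, using Python integers as a stand-in for u128. A real implementation also has to guard against overflow and choose rounding directions deliberately; this only shows the shifts.

SHIFT = 64
ONE_Q64 = 1 << SHIFT

def to_q64(x: float) -> int:
    return int(x * ONE_Q64)

def from_q64(q: int) -> float:
    return q / ONE_Q64

def mul_q64(a: int, b: int) -> int:
    # the raw product has 128 fractional bits; shift right by 64 to return to Q64.64
    return (a * b) >> SHIFT

def div_q64(a: int, b: int) -> int:
    # pre-shift the numerator left by 64 so the quotient keeps its fractional bits
    return (a << SHIFT) // b

sqrt_price = to_q64(14.142135623730951)             # ~sqrt(200)
print(from_q64(mul_q64(sqrt_price, sqrt_price)))    # back to ~200.0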

Whirlpool Ticks — The 0.01% Price Grid

With sqrt_price and Q64.64 understood, I can now tackle how Whirlpool organizes its price space. The answer is ticks.

A tick is a discrete point on a price grid. Every tick has an integer index — tick_index — and corresponds to a specific price via the formula:

price = 1.0001 ^ tick_index

Or equivalently, in terms of the sqrt_price that the protocol actually uses:

sqrt_price = 1.0001 ^ (tick_index / 2)

The base 1.0001 means each tick represents a 0.01% — one basis point — change in price from its neighbor. Tick 0 corresponds to a price of 1.0. Tick 100 corresponds to a price of 1.0001^100, which is roughly 1.01 — a 1% increase. Tick -100 corresponds to roughly 0.99 — a 1% decrease.
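In float form, and ignoring token decimal scaling, those relationships look roughly like this (illustrative only; the on-chain program does all of this in fixed point):

import math

def tick_to_price(tick_index: int) -> float:
    return 1.0001 ** tick_index

def tick_to_sqrt_price(tick_index: int) -> float:
    return 1.0001 ** (tick_index / 2)

def price_to_tick(price: float) -> int:
    # the nearest tick at or below the given price
    return math.floor(math.log(price) / math.log(1.0001))

print(tick_to_price(100))     # ~1.01005
print(price_to_tick(200.0))   # 52985 -- the tick just below a price of 200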

It's structured like the old-school stock ticker price increments before decimalization. When stocks on the NYSE traded in fractions — eighths and sixteenths of a dollar — there was a fixed grid of valid prices. You could quote a stock at 50 and 1/8 or 50 and 1/4, but not at 50.13. The grid was discrete. Ticks are the same idea, except the grid is multiplicative (each step multiplies by 1.0001) rather than additive (each step adds a fixed fraction).

Not every tick is usable, though. Whirlpool introduces tick_spacing, which determines how many ticks apart the valid liquidity boundaries are. The spacings correspond to pool types:

  • tick_spacing = 1: stable pairs (0.01% per usable tick)
  • tick_spacing = 64: general pairs (each usable tick spans 0.64% price change)
  • tick_spacing = 128: broader pairs (1.28% per usable tick)
  • tick_spacing = 256: volatile pairs (2.56% per usable tick)

Think of tick_spacing like yard lines on a football field. The field has markings every yard, but the game's major reference points are every 10 yards. You can be at the 37-yard line, but first downs and touchdowns happen at the 10-yard boundaries. Similarly, ticks exist at every 0.01% increment, but liquidity positions can only start and end at multiples of the tick_spacing. This keeps the system manageable — checking liquidity changes at every single tick would be computationally prohibitive.
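Snapping a raw tick index onto that coarser grid is simple integer math. A sketch, assuming floor-style snapping toward lower ticks; these helpers are mine, not the Whirlpool program's:

def snap_to_spacing(tick_index: int, tick_spacing: int) -> int:
    # floor toward the lower boundary so negative ticks snap correctly
    return (tick_index // tick_spacing) * tick_spacing

print(snap_to_spacing(52985, 64))   # 52928
print(snap_to_spacing(-75, 64))     # -128, not -64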

When a swap happens and the price crosses a tick boundary where someone's liquidity position starts or ends, the pool applies a value called liquidity_net. This is the net change in available liquidity at that tick — the sum of all liquidity being added by positions starting here minus all liquidity being removed by positions ending here. Cross a tick going up, add liquidity_net. Cross it going down, subtract it.

This is where the computation gets intense. A single swap can cross multiple tick boundaries, and at each boundary, the available liquidity changes. The swap doesn't just slide along a curve — it slides along a curve, hits a boundary, adjusts the curve's shape, slides along the new curve, hits another boundary, adjusts again. Each segment between boundaries behaves like a tiny constant-product pool with its own liquidity value.

Whirlpool enforces a hard limit: a single swap transaction can cross at most 20 tick boundaries (MAX_SWAP_TICK_CROSSES = 20). If the swap would require crossing more than 20 ticks, it's truncated. This protects the network from computationally explosive transactions, but it means large swaps through thin liquidity might not fully execute in a single transaction.

The 19 Constants — Binary Decomposition of Tick-to-Price

One detail that I find fascinating from a pure engineering perspective: converting a tick_index to a sqrt_price requires computing 1.0001^(tick_index/2). Exponentiation with an arbitrary exponent is expensive. So Whirlpool uses a trick borrowed from fast exponentiation algorithms — binary decomposition.

The tick_index is broken into its binary representation, and the program stores 19 precomputed constants, each corresponding to a power of 2 in the exponent. To compute the final sqrt_price, it multiplies together only the constants whose corresponding bit is set. Instead of an expensive iterative exponentiation, it's at most 19 multiplications. Fixed cost, predictable compute, efficient.

It's like making change with power-of-two denominations. If someone owes you $347, you don't count out 347 one-dollar bills. You hand over a $256 bill (if such a thing existed), a $64, a $16, an $8, a $2, and a $1, because 347 in binary is 101011011. Each denomination is used at most once. The precomputed constants are the denominations, and the tick_index's binary representation tells you which ones to multiply together.
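Here's an illustrative float version of that control flow. I precompute sqrt(1.0001) raised to powers of two on the fly instead of using the program's actual fixed-point constants, so this shows the shape of the trick rather than the real implementation.

SQRT_BASE = 1.0001 ** 0.5
POWERS = [SQRT_BASE ** (1 << i) for i in range(19)]   # one constant per bit, bits 0..18

def tick_index_to_sqrt_price(tick_index: int) -> float:
    n = abs(tick_index)
    result = 1.0
    for bit in range(19):
        if n & (1 << bit):
            result *= POWERS[bit]   # multiply in only the constants for set bits
    # negative ticks: the result is inverted at the end
    return 1.0 / result if tick_index < 0 else result

print(tick_index_to_sqrt_price(100))   # ~1.005012, i.e. sqrt(1.0001^100)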

Meteora Bins — The Constant-Sum World

Now I turn to Meteora's DLMM (Dynamic Liquidity Market Maker), and immediately the math diverges. Where Whirlpool builds on the constant-product foundation — concentrating x * y = k into narrow ranges — Meteora takes a fundamentally different approach within each price unit.

Meteora organizes its price space into bins. Each bin has an integer ID, and the price for a bin is determined by:

price(bin_id) = (1 + bin_step / 10,000) ^ (bin_id - 8,388,608)

The constant 8,388,608 (2^23) is the "zero point" — the bin_id where the price equals exactly 1. bin_step controls the granularity: a bin_step of 1 means each bin is 0.01% apart, while a bin_step of 100 means each bin spans a 1% price change. Common values include 1, 50, and 100.
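In code, the bin price formula quoted above is a one-liner (float sketch, names mine):

BIN_ID_OFFSET = 8_388_608   # 2**23, the bin_id where the price equals 1

def bin_price(bin_id: int, bin_step: int) -> float:
    return (1 + bin_step / 10_000) ** (bin_id - BIN_ID_OFFSET)

print(bin_price(8_388_608, 100))   # 1.0  -- the zero point
print(bin_price(8_388_609, 100))   # 1.01 -- one bin up at bin_step = 100 (1% per bin)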

So far, this looks similar to Whirlpool's tick system — a geometric grid of prices, each step a fixed percentage apart. But the crucial difference is what happens inside each bin.

In Whirlpool, within a tick range, the swap follows a constant-product curve. The relationship between reserves is still x * y = k (scaled by the concentrated liquidity), and the price changes continuously as you trade within the range.

In Meteora, within a single bin, the swap follows a constant-sum invariant: price * X + Y = constant. This means the exchange rate inside a bin is flat. Fixed. A trader swapping within a single bin pays the bin's price for every unit, with no slippage at all, until the bin is drained.

This is a profound difference. It's the difference between a gas station and a stock exchange. At a gas station, the price is posted: $3.49 per gallon. Whether you buy 1 gallon or 15 gallons, the price per gallon is the same — until the station runs out, at which point you drive to the next one. That's Meteora's bins. Each bin is a gas station with a fixed price and a finite supply. On a stock exchange, every share you buy pushes the price slightly higher because you're consuming the lowest available offers. That's Whirlpool's ticks — continuous price impact within each range.

For small swaps that fit within a single bin, Meteora's math is simpler and more predictable. There's no slippage curve to traverse. You just multiply: amount_in * price = amount_out (minus fees). Done.
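A minimal sketch of that single-bin case, with the fee collapsed into one flat rate for illustration (real DLMM fees add a variable component, discussed below):

def swap_within_bin(amount_in: float, bin_price: float, fee_rate: float) -> float:
    # constant-sum within the bin: every unit trades at the same price
    amount_in_after_fee = amount_in * (1 - fee_rate)
    return amount_in_after_fee * bin_price   # X -> Y; the Y -> X direction divides instead

print(swap_within_bin(10.0, 200.0, 0.002))   # 10 SOL into a flat $200 bin, 0.2% fee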

For larger swaps that exhaust a bin's reserves, the swap crosses into the next bin. The direction matters: selling Token X pushes the price down, so the swap traverses from the active bin into bins with lower prices (lower bin_ids, by the formula above); selling Token Y pushes the price up, so it traverses into higher-priced bins. When a bin is fully drained, the pool updates its active_bin_id to the next bin and continues filling the order.

In practice, Meteora swaps tend to cross fewer bins than Whirlpool swaps cross ticks. A typical swap might touch 2 or 3 bins, compared to potentially 20 tick crossings in Whirlpool. This isn't necessarily because the price ranges are wider — it's because the constant-sum formula within each bin means you can absorb more volume at a flat price before needing to move to the next bin.

But each bin crossing in Meteora involves a separate fee calculation. Meteora charges fees per bin traversal, and the fee structure can include a variable component that adjusts based on market volatility. This means computing the total fee for a multi-bin swap requires iterating through each bin individually — no shortcut formula that gives you the total fee for the whole swap in one calculation.

Ticks vs Bins — Same Goal, Different Math

Both systems are trying to solve the same problem: allow LPs to concentrate capital in narrow price ranges, improving capital efficiency by orders of magnitude over x * y = k. But their approaches diverge in ways that matter enormously for implementation.

Aspect                 | Whirlpool (Tick)                       | Meteora (Bin)
Unit formula           | Constant-product within range          | Constant-sum within bin
Price within unit      | Changes continuously                   | Flat (fixed)
Boundary crossing      | Apply liquidity_net                    | Update active_bin_id
Math complexity        | High — Q64.64 sqrt_price, multi-step   | Medium — simpler multiply/divide
Max crossings per TX   | Up to 20                               | Typically 2-3
Fee calculation        | Once per swap                          | Per bin traversed

The constant-product vs constant-sum distinction has real consequences for my screener's accuracy. When I'm estimating the output of a swap through a Whirlpool, I need to integrate along the curve between the start and end sqrt_price within each tick range — and get the curve right, because the output depends on the shape of the curve, not just the endpoints. When I'm estimating a Meteora swap, the within-bin calculation is trivial (flat price, simple arithmetic), but I need to know exactly how much liquidity sits in each bin along the path.

The data requirements differ too. For Whirlpool, I need tick arrays — on-chain accounts that store the tick data (liquidity_net values) for ranges of ticks. Each tick array covers a fixed range of tick indices, and a pool might need multiple tick arrays to cover the relevant price space. For Meteora, I need bin arrays — similar on-chain accounts that store the reserves and composition of each bin. Both require separate account fetches beyond the main pool account.

If I only fetch the pool's current state and try to compute a swap output without the tick or bin data, I'm limited to a single-unit approximation. And single-unit approximation is where the errors live.

The Developer's Nightmare — Multi-Tick/Bin Traversal Reality

Here's where it gets personal. I need to compute swap outputs accurately enough to identify profitable arbitrage opportunities. "Approximately right" doesn't cut it. If my estimate is 5% higher than reality, I'll submit transactions that look profitable but actually lose money. I've been down that road already — the previous episodes of chasing phantom opportunities taught me that lesson in lamports.

For a simple constant product pool, the swap output formula is one line of code. Plug in reserves, plug in input amount, get output. Done. I can screen thousands of cycles per second because each computation is trivial.

For concentrated liquidity, the computation is a loop. Here's what the pseudocode looks like for a Whirlpool swap:

remaining = input_amount
output = 0
current_sqrt_price = pool.sqrt_price
current_liquidity = pool.liquidity
current_tick = pool.tick_current_index

while remaining > 0:
    next_tick = find_next_initialized_tick(current_tick, direction)
    next_sqrt_price = tick_to_sqrt_price(next_tick)
    
    // How much input can this tick range absorb?
    max_input_in_range = compute_max_input(
        current_sqrt_price, next_sqrt_price, current_liquidity
    )
    
    if remaining <= max_input_in_range:
        // Swap completes within this range
        new_sqrt_price = compute_new_sqrt_price(
            current_sqrt_price, current_liquidity, remaining
        )
        output += compute_output(
            current_sqrt_price, new_sqrt_price, current_liquidity
        )
        remaining = 0
    else:
        // Exhaust this range, cross into next
        output += compute_output(
            current_sqrt_price, next_sqrt_price, current_liquidity
        )
        remaining -= max_input_in_range
        current_sqrt_price = next_sqrt_price
        // crossing up adds liquidity_net; crossing down subtracts it
        if direction == up:
            current_liquidity += get_liquidity_net(next_tick)
        else:
            current_liquidity -= get_liquidity_net(next_tick)
        current_tick = next_tick

Each function call in that loop involves Q64.64 arithmetic — 128-bit multiplications, careful shift operations, ceiling/floor rounding in specific directions. The find_next_initialized_tick function has to scan through the tick array to find the next tick that has a non-zero liquidity_net. The compute_max_input and compute_output functions use the concentrated liquidity formulas that relate changes in sqrt_price to changes in reserves, with liquidity as a scaling factor.

For Meteora, the structure is similar but the inner math is different:

remaining = input_amount
output = 0
current_bin = pool.active_bin_id

while remaining > 0:
    bin_reserves = get_bin_reserves(current_bin)
    bin_price = compute_bin_price(current_bin, pool.bin_step)
    
    // How much can this bin absorb?
    max_input_in_bin = compute_max_bin_input(bin_reserves, bin_price, direction)
    
    fee = compute_bin_fee(min(remaining, max_input_in_bin), pool.base_fee, ...)
    input_after_fee = min(remaining, max_input_in_bin) - fee
    
    if remaining <= max_input_in_bin:
        output += input_after_fee * bin_price  // constant-sum: flat price
        remaining = 0
    else:
        output += input_after_fee * bin_price   // the whole bin's capacity, net of fee
        remaining -= max_input_in_bin
        current_bin = next_bin(current_bin, direction)

The within-bin math is simpler — multiplication by a flat price rather than integration along a curve. But the per-bin fee calculation adds complexity, and I need the reserve composition of each bin along the traversal path.

Here's the kicker: the single-tick or single-bin approximation — where you pretend the swap happens entirely within the current tick range or active bin, ignoring all boundary crossings — introduces errors of 2-5% for small swaps and 10-15% for larger ones. That might sound tolerable. It's not. My arbitrage cycles typically show raw spreads in the 0.3-0.8% range. An estimation error of 5% on one of the three hops can flip a profitable opportunity to a losing one, or worse, make a losing opportunity look profitable.
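The arithmetic behind that worry is short enough to write down:

# A cycle that looks 0.5% profitable, where one hop's estimated output
# turns out to be 5% optimistic, is actually a losing trade.
estimated_multiplier = 1.005   # cycle return as estimated off-chain
actual_hop_ratio = 0.95        # one hop delivers 5% less than estimated
print(estimated_multiplier * actual_hop_ratio)   # ~0.955, a ~4.5% loss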

So I have to implement the full multi-tick and multi-bin traversal. There's no shortcut. No approximation that's good enough. I need to fetch the tick arrays and bin arrays, parse them, detect every boundary along the swap path, and compute the output step by step through each segment.

This is why concentrated liquidity is harder for developers. Not because the concept is harder to understand — "concentrate capital in narrow ranges" is a simple idea. It's because the implementation requires tracking state that changes at every boundary crossing, performing high-precision fixed-point arithmetic at every step, and fetching additional on-chain data that doesn't exist in the main pool account. The constant product formula is one equation. Concentrated liquidity is a loop of equations with state transitions.

The Weight of Precision

I'm sitting with two notebooks now — one for Whirlpool math, one for Meteora math — and both are filling up with edge cases. What happens when the swap exactly lands on a tick boundary? (You still apply liquidity_net.) What happens when a Meteora bin has zero reserves on one side? (Skip it, move to the next.) How does Whirlpool's binary decomposition handle negative tick indices? (Invert the result at the end.) What's the rounding direction for fee deduction vs output computation? (Fees round up against the trader; outputs round down against the trader. Always.)
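In integer arithmetic those two rounding conventions are just ceiling and floor division. A sketch with hypothetical helper names:

def ceil_div(a: int, b: int) -> int:
    return -(-a // b)   # round up: the convention for fees charged to the trader

def floor_div(a: int, b: int) -> int:
    return a // b       # round down: the convention for output owed to the trader

amount_in = 1_000_003
print(ceil_div(amount_in * 30, 10_000))    # 3001 -- a 0.30% fee, rounded against the trader
print(floor_div(amount_in * 30, 10_000))   # 3000 -- the same quantity rounded the other way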

Every one of these edge cases is a potential discrepancy between my off-chain estimate and the on-chain execution. Every discrepancy is either a missed opportunity or a submitted transaction that fails or loses money. The margin for error in arbitrage is the spread itself, and the spread is measured in fractions of a percent.

x * y = k was just the beginning. It was the introductory course — here's the concept, here's the formula, here are the reserves, here's your output. Concentrated liquidity is the graduate seminar. The concept is the same — pools of tokens, swaps, deterministic outputs — but the math has sprouted layers. Square roots. Fixed-point encodings. Geometric price grids. Piecewise integration across boundary-delimited segments. State that mutates at every tick crossing. Per-bin fee schedules.

And I haven't even started thinking about what happens when the on-chain state changes between the moment I estimate the output and the moment my transaction executes. The tick arrays I fetched a second ago? Someone else's swap might have crossed a tick boundary since then, changing the liquidity distribution. The active bin I based my Meteora estimate on? It might have shifted. The data freshness problem that plagues constant product pools is amplified here, because the state isn't just "reserves changed" — it's "reserves changed AND the liquidity distribution might have restructured."

The x * y = k formula fits on a Post-it note. The concentrated liquidity implementation fills a file. Both compute the same thing — how many tokens come out when tokens go in. But the gap between them is the gap between arithmetic and calculus, between a flat tax and the full IRS tax code with brackets and deductions and phase-outs and alternative minimum tax. Same purpose. Different universe of complexity.

I close the notebook and start writing code.

Disclaimer

This article is for informational and educational purposes only and does not constitute financial, investment, legal, or professional advice. Content is produced independently and supported by advertising revenue. While we strive for accuracy, this article may contain unintentional errors or outdated information. Readers should independently verify all facts and data before making decisions. Company names and trademarks are referenced for analysis purposes under fair use principles. Always consult qualified professionals before making financial or legal decisions.