Concentrated Liquidity DEX — Tick-Based Architecture
The constant product AMM I just integrated has one formula. One. I parse the reserves, plug them into x * y = k, apply the fee, and out comes my expected swap output. The math is done in a single pass. The accounts are static — always the same set of addresses for a given pool, every time, regardless of price.
I'm now integrating Orca Whirlpool, and that world is gone.
Same concept — an automated market maker that prices tokens according to a mathematical rule. Same goal — simulate the swap output, build the instruction, submit the transaction. But the implementation is so fundamentally different that sharing the label "AMM" feels almost misleading. It's like saying a bicycle and a helicopter are both "vehicles." Technically accurate. Practically useless for predicting what you're about to deal with.
Concentrated liquidity is the reason I'm here. Constant product pools spread their liquidity across the entire price range from zero to infinity. That's safe, simple, and massively inefficient. Most of that liquidity sits at price points that the market will never reach. A SOL/USDC pool doesn't need liquidity provisioned at a price of $0.0001 per SOL or $50,000 per SOL. But with x * y = k, it's there anyway, doing nothing, earning nothing.
Concentrated liquidity fixes this by letting liquidity providers choose a specific price range for their capital. Instead of spreading $10,000 across the entire number line, an LP can concentrate it between $120 and $180 — the range where trading actually happens. Within that range, the effective liquidity is dramatically higher than a constant product pool with the same total value. More liquidity means less price impact per trade, which means tighter spreads, which means more volume, which means more fees for the LP.
Capital efficiency. The principle is clear. The implementation is where everything gets complicated.
The Highway Mile Marker System
To understand ticks, think about the American interstate highway system.
A highway doesn't have infinite resolution for location. You don't describe your position as "47.382917 miles from the state border." You use mile markers — discrete, evenly spaced reference points. Mile marker 47. Mile marker 48. Between them, you're "somewhere near mile 47." The system sacrifices precision for manageability. Nobody needs to know your position to the millionth of a mile. Mile-level resolution is sufficient for navigation, emergency response, and exit numbering.
Ticks work the same way. The continuous price space — which mathematically contains infinite possible prices — gets divided into discrete points. Each tick corresponds to a specific price. Between two adjacent ticks, the price is treated as constant and the liquidity is treated as uniform. A swap that starts at tick 100 and moves toward tick 101 uses the same liquidity value for the entire segment between those two ticks.
This discretization is what makes concentrated liquidity computationally tractable. Without it, the program would need to handle truly continuous positions — an infinite number of possible price boundaries — which is impossible on a blockchain where every computation costs compute units and every byte of storage costs rent.
The spacing between ticks is called tick_spacing, and it determines the resolution of the price grid. Orca Whirlpool uses different tick spacings for different pool types — common values include 1, 64, 128, and 256. A tick spacing of 1 means every single tick is a valid boundary for LP positions. A tick spacing of 64 means LPs can only place boundaries at every 64th tick. Smaller tick spacing gives higher precision — tighter ranges, more granular liquidity placement — but it also means more ticks to traverse during a swap, which means more computation.
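The tick-to-price relationship can be sketched in a couple of lines. This is my own illustration, not Orca's code — it assumes the standard tick convention where each tick corresponds to a one-basis-point price step (price = 1.0001^tick):

```python
def tick_to_price(tick: int) -> float:
    # Each tick is one basis point of price: adjacent ticks differ by 0.01%.
    return 1.0001 ** tick

def is_initializable(tick: int, tick_spacing: int) -> bool:
    # LP position boundaries may only sit on multiples of tick_spacing.
    return tick % tick_spacing == 0
```

So with a tick spacing of 64, tick 128 is a valid position boundary but tick 100 is not — exactly the "Section 100 or Section 200, never Section 147" constraint from the stadium analogy.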
Think of it like stadium seating. A high-end concert venue might number every individual seat — row A, seat 1 through seat 40. That's tick spacing of 1. A football stadium numbers by section — Section 100, Section 200, Section 300. That's a wider tick spacing. The football stadium can't seat you at "Section 147" — you're in Section 100 or Section 200. Less precision, but far fewer reference points to manage. The concert venue knows exactly where you are but needs a much more detailed seating chart.
For an arbitrage bot, tick spacing matters because it affects how many tick boundaries a swap can cross and therefore how many discrete calculations I need to perform. A pool with tick spacing 1 might cross dozens of ticks on a moderately sized swap. A pool with tick spacing 256 might cross only one or two for the same swap. More crossings means more math, more accounts to load, and more opportunities for something to go wrong.
The Parking Garage Problem
Here's where Solana's architecture forces a design choice that doesn't exist on other chains.
On Ethereum, a smart contract can store essentially unlimited data in its own storage slots. A Uniswap V3 contract keeps a mapping of every initialized tick in its contract storage. When a swap needs to check a tick's liquidity data, it reads from the mapping. Simple. The contract has access to all its own data.
Solana doesn't work this way. Solana accounts have a maximum size — around 10 megabytes in theory, but practically much smaller for cost reasons. A Whirlpool pool with tick spacing of 1 could have tens of thousands of initializable ticks. Storing all of them in a single account is impractical. The account would be enormous, and every transaction that touches the pool would need to load the entire thing, paying rent for data it doesn't need.
The solution is tick arrays. Instead of one massive account holding all ticks, the tick data gets split into chunks of 88 ticks each. Each chunk lives in its own separate account — a tick array.
Imagine a parking garage. Instead of one colossal structure with 10,000 spaces, the garage is split into levels, each holding 88 parking spots. Each level has its own entrance, its own numbering scheme, and its own structural support. To find a specific spot, you first determine which level it's on, then navigate within that level.
Each tick array is a PDA — a Program Derived Address — derived from the pool's public key and the starting tick index of that array. For a pool with tick spacing of 1 (the finest granularity), the first array covers ticks 0 through 87, the second covers 88 through 175, and so on — and since ticks can be negative (prices below 1), arrays extend downward too, covering -88 through -1, -176 through -89, and so on. For larger tick spacings, each array spans a proportionally wider range — with tick spacing of 64, a single array covers 88 × 64 = 5,632 ticks. The derivation is deterministic: given the pool address and the starting tick index, anyone can calculate the tick array's address without looking it up.
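The start-index calculation is small enough to sketch. This is my own sketch (the 88-ticks-per-array figure comes from the description above); floor division makes it handle negative ticks correctly:

```python
TICKS_PER_ARRAY = 88  # each tick array account holds 88 ticks

def tick_array_start_index(tick: int, tick_spacing: int) -> int:
    """Start index of the tick array containing `tick`.

    Python's // floors toward negative infinity, so negative ticks
    land in the correct array without a special case."""
    span = TICKS_PER_ARRAY * tick_spacing  # ticks covered by one array
    return (tick // span) * span
```

Given this start index and the pool's address, the PDA derivation is deterministic — which is why anyone can compute the tick array address without an on-chain lookup.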
This is elegant in principle. In practice, it means that a swap instruction can't just reference "the pool." It needs to reference the specific tick arrays that the swap will traverse. And figuring out which tick arrays those are — before executing the swap — is the central challenge of integrating a tick-based DEX.
Pre-Calculating the Unpredictable
With a constant product AMM, I know exactly which accounts a swap needs before I build the instruction. The pool account, the vaults, the token programs, the user accounts — all static. The same swap instruction works at any price, at any time, for any input amount.
With a tick-based AMM, the accounts change depending on the current price and the expected price movement. A small swap might stay within a single tick array. A larger swap might cross into the next tick array. A very large swap on a thin pool might cross multiple tick arrays. Each tick array the swap might touch must be included in the transaction's account list.
This is the parking garage analogy extended: I need to tell the garage attendant not just "I want to park" but "I'll be parking on levels 3, 4, and possibly 5." I have to predict which levels I'll use before I enter the garage. If I say levels 3 and 4 but end up needing level 5, the attendant turns me away at the entrance. "Insufficient accounts provided."
So I need to simulate the swap to determine which tick arrays it'll cross — but to simulate the swap accurately, I need the data from those tick arrays. It's chicken-and-egg. My approach: start from the current price (which tells me the current tick array), then estimate the price impact of the swap to determine how many additional tick arrays I might need. I include a buffer — typically one or two extra tick arrays beyond what the math suggests — because the liquidity distribution can make the price move farther than expected if there's a gap with no active liquidity.
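The "start from the current array, then buffer in the direction of price movement" heuristic can be sketched like this. The function name and the `a_to_b` direction flag are my own naming, not Orca's API:

```python
TICKS_PER_ARRAY = 88

def candidate_tick_arrays(current_tick: int, tick_spacing: int,
                          a_to_b: bool, count: int = 3) -> list[int]:
    """Start indices of up to `count` tick arrays a swap may traverse.

    Begins with the array containing current_tick, then steps in the
    direction of expected price movement (a_to_b = price decreasing).
    The extra arrays beyond the first are the buffer."""
    span = TICKS_PER_ARRAY * tick_spacing
    start = (current_tick // span) * span
    step = -span if a_to_b else span
    return [start + i * step for i in range(count)]
```

Each start index then becomes a PDA seed, and the resulting addresses go into the transaction's account list — the "levels 3, 4, and possibly 5" declaration to the garage attendant.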
Orca Whirlpool allows up to three tick arrays in a single swap instruction. Three. That's the constraint. If the swap needs to traverse more than three tick arrays' worth of ticks, the transaction will fail — not because the math is wrong, not because the accounts are wrong, but because the program enforces a hard limit on tick array crossings per instruction.
For an arbitrage bot, this is a critical constraint. Large price dislocations — exactly the situations that create the most profitable arbitrage opportunities — are the ones most likely to require crossing many ticks and therefore many tick arrays. The three-array limit caps how much price movement a single swap can capture. If the opportunity requires moving the price across four tick arrays, I can't capture it in one instruction. Splitting across multiple instructions introduces complexity and race conditions.
Tick Traversal: The Meter-by-Meter Walk
Here's the most fundamental difference in swap simulation between constant product and tick-based AMMs.
Constant product: one formula. Input amount, reserves, fee rate. One calculation. Done.
Tick-based: I walk through the ticks one by one.
The swap starts at the current tick. Between the current tick and the next initialized tick, the liquidity is constant. I calculate how much of my input amount gets consumed crossing to that next tick. If there's input remaining, I move to the next tick — but now the liquidity might be different, because an LP's position boundary is at this tick, so liquidity either increases (entering a new position) or decreases (exiting one). I recalculate with the new liquidity. Repeat until my input is fully consumed or I run out of tick arrays.
It's like driving on a highway where the speed limit changes at every mile marker. Between mile 47 and mile 48, the limit is 65. Between mile 48 and mile 49, it drops to 55 because there's a construction zone. Between mile 49 and mile 50, it's back to 65. To calculate my total travel time, I can't just use one speed — I need to calculate the time for each segment separately, using each segment's speed limit, then add them up. The total travel time is the sum of many small calculations, not one big one.
Each tick crossing is a segment calculation. The formula within each segment is similar to constant product — it's a liquidity-based calculation using a quantity called sqrt_price, the square root of the price — but the liquidity value can change at each tick boundary. This means the simulation is a loop, not a formula. And the loop's iteration count depends on the liquidity distribution, which I don't fully know until I load the tick arrays.
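The loop structure — not Whirlpool's real sqrt_price math — can be shown with a toy model. Here I deliberately replace the real per-segment formula with a fake "capacity = liquidity × width" rule, purely to make the segment-by-segment shape of the computation visible:

```python
def traverse(amount_in: int, start_tick: int, liquidity: int,
             ticks: list[tuple[int, int]]) -> tuple[int, int]:
    """Toy tick traversal. `ticks` is a sorted list of
    (tick_index, liquidity_delta) for each initialized tick ahead.

    Returns (boundaries_crossed, input_left_over). The real per-segment
    math uses sqrt_price; the capacity rule here is a stand-in."""
    crossed = 0
    prev = start_tick
    for tick_index, delta in ticks:
        capacity = liquidity * (tick_index - prev)  # toy segment capacity
        if amount_in <= capacity:
            return crossed, 0        # input exhausted inside this segment
        amount_in -= capacity        # consume the whole segment...
        liquidity += delta           # ...and apply the boundary's delta
        crossed += 1
        prev = tick_index
    return crossed, amount_in        # ran out of known ticks
```

The two essential features survive the simplification: the output is a sum over segments, and the liquidity value mutates at every boundary — which is exactly why the simulation is a loop, not a formula.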
Here's where precision becomes excruciating. Whirlpool uses sqrt_price encoded as a Q64.64 fixed-point number — a 128-bit integer where the upper 64 bits represent the integer part and the lower 64 bits represent the fractional part. This is not floating point. There's no rounding mode to select, no IEEE 754 behavior to worry about. But there are overflow risks — multiplying two 128-bit numbers can produce a 256-bit result — and there are division precision issues that compound across tick crossings.
If I'm simulating a swap that crosses 12 ticks, any precision error in tick crossing 1 propagates through crossings 2 through 12. A rounding difference of 1 unit in the intermediate sqrt_price calculation at tick 3 might produce a final output that's off by dozens of lamports. Being off by even 1 lamport can cause the transaction to fail on-chain validation.
The precision requirements aren't theoretically interesting. They're practically punishing. I've spent hours tracking down simulation discrepancies that come down to the order of multiplication and division in a single line of code. Multiply first, then divide? Or divide first, then multiply? In floating-point math, these are equivalent. In 128-bit integer math, they can produce different results because of truncation at different magnitudes. The on-chain program does it in a specific order. My simulation must do it in the same order.
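The multiply-first-versus-divide-first trap is easy to demonstrate with plain integers — no Whirlpool code required:

```python
Q64 = 1 << 64  # Q64.64: a value v is stored as the integer v * 2**64

def mul_div_floor(x: int, y: int, d: int) -> int:
    # Multiply first, then divide: truncation happens once, at the end.
    return (x * y) // d

def div_mul_floor(x: int, y: int, d: int) -> int:
    # Divide first, then multiply: truncation happens early and is
    # then amplified by the multiplication.
    return (x // d) * y
```

With x = 7, y = 3, d = 2, the first form yields 10 and the second yields 9. In floating point these orderings agree; in integer math they don't, and across a dozen tick crossings the discrepancy compounds — which is why the off-chain simulation must mirror the on-chain operation order exactly.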
The Uninitialized Tick Array Trap
There's a failure mode with tick arrays that has no equivalent in constant product AMMs, and it caught me off guard the first time.
Tick arrays are only created — initialized on-chain — when someone needs them. If no LP has placed a position with a boundary in a particular tick range, the tick array for that range doesn't exist. It's not an empty account. It's not a zeroed-out account. It literally doesn't exist as an account on the chain.
Now imagine a swap that's large enough to push the price into a tick range where no LP has ever placed a position. The tick array covering that range has never been initialized. My simulation, running off-chain, determines that the swap will cross into this range. I calculate the PDA for the required tick array. I include it in the transaction's account list. The transaction submits. And it fails — because the account at that PDA address doesn't exist.
This is like looking up a book in the library's Dewey Decimal catalog, finding the call number, walking to the shelf, and discovering that shelf section hasn't been built yet. The catalog says the book should be at 535.84. The library has shelves through 535.79. The rest of the 535.8x section is an empty wall. The catalog entry is valid — the classification is correct — but the physical infrastructure to hold that book doesn't exist.
In a constant product pool, this can't happen. The pool has two token vaults, period. The price can move to any point on the curve and the pool handles it within the same account structure. There's no concept of "the pool doesn't have infrastructure at this price."
For tick-based pools, I need to handle this case in my simulation. Before including a tick array in my transaction, I check whether it exists on-chain. If it doesn't, the swap can only proceed to the boundary of the last initialized tick array. Beyond that boundary, there's no liquidity and no infrastructure, and the swap effectively hits a wall.
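The existence check itself is a filter that stops at the first gap. This sketch uses a hypothetical `account_exists` callback (in a real bot it would wrap an RPC call such as getMultipleAccounts); the names are my own:

```python
def usable_tick_arrays(candidates: list, account_exists) -> list:
    """Keep candidate tick-array addresses only up to the first one
    that is not initialized on-chain.

    Order matters: a swap cannot jump over an uninitialized array,
    so everything past the first gap is unreachable."""
    usable = []
    for addr in candidates:
        if not account_exists(addr):
            break  # the price hits a wall at this boundary
        usable.append(addr)
    return usable
```

The `break` (rather than a simple filter) encodes the "empty wall in the library" behavior: an initialized array beyond the gap doesn't help, because the swap can't get there.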
This means the effective price range of a tick-based pool isn't determined by the math — it's determined by which tick arrays LPs have populated. A pool might theoretically support prices from 0 to infinity, but practically, it only works within the ranges where tick arrays have been initialized. For an arbitrage bot, this defines the maximum price movement I can execute in a single swap, which directly constrains which opportunities I can capture.
Accounts: Static vs. Dynamic
Let me make the account comparison concrete, because this is where the integration complexity difference is most visible.
A constant product swap: roughly 15 accounts. All deterministic. Given a pool address, I can derive every account I need without knowing the current price or the trade size. I build the account list once per pool and reuse it forever.
A Whirlpool swap: over 20 accounts. The base set — pool state, token vaults, user accounts, oracle — is deterministic, similar to constant product. But then there are the tick arrays. Up to three of them. And which three depends on the current price, which changes with every trade.
This means my instruction builder for Whirlpool can't pre-compute the full account list. Every time I want to execute a swap, I need to:
- Read the current pool state to get the current tick index
- Determine which tick array contains that tick
- Estimate which adjacent tick arrays the swap might reach
- Verify that those tick arrays exist on-chain
- Build the account list with the correct tick array addresses
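The five steps above can be strung together as a small pipeline. Everything here is hypothetical scaffolding — `fetch_pool`, `derive_tick_array`, and `account_exists` are placeholders for real RPC and PDA logic, and the field names are my own:

```python
TICKS_PER_ARRAY = 88

def build_swap_tick_arrays(fetch_pool, derive_tick_array, account_exists,
                           pool_addr, a_to_b: bool) -> list:
    """Hypothetical orchestration of the tick-array portion of the
    account list. Callbacks stand in for RPC reads and PDA derivation."""
    state = fetch_pool(pool_addr)                       # step 1: fresh state
    span = TICKS_PER_ARRAY * state["tick_spacing"]
    start = (state["tick_current"] // span) * span      # step 2: current array
    step = -span if a_to_b else span
    starts = [start + i * step for i in range(3)]       # step 3: buffer ahead
    addrs = [derive_tick_array(pool_addr, s) for s in starts]
    usable = []
    for a in addrs:                                     # step 4: existence
        if not account_exists(a):
            break
        usable.append(a)
    return usable                                       # step 5: into the ix
```

The key property this makes visible: nothing here can be precomputed and cached per pool, because step 1's read feeds every subsequent step.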
Steps 1 through 4 require fresh on-chain data. In a competitive MEV environment, this data can be milliseconds stale. A swap that another bot executes between the time I read the pool state and the time my transaction lands might move the price into a different tick array — making my pre-calculated tick array addresses wrong. My transaction fails. Not because my math was wrong at the time I calculated it, but because the world moved between calculation and execution.
Constant product pools have this race condition too — the reserves change between my simulation and my execution. But the account list doesn't change. The transaction might produce a different output than expected, and the slippage check might reject it, but the instruction is structurally valid. With tick-based pools, the race condition can make the instruction structurally invalid — the tick arrays I included might not be the ones the program needs at execution time.
This is the difference between a GPS that occasionally recalculates your route (same road network, different recommended path) and a GPS where the roads themselves move between when you plan your route and when you start driving. The constant product pool has a stable road network with variable traffic. The tick-based pool has roads that rearrange based on where other drivers have been.
Debugging in Tick Space
When a constant product swap fails, my debugging surface is relatively contained. Is the pool state account correct? Are the vault addresses right? Is the fee calculation matching? The formula is one equation. The accounts are a fixed list. The possible failure points are enumerable.
When a tick-based swap fails, the debugging surface expands dramatically. Every question I'd ask for a constant product swap still applies. But now I also need to ask:
- Is the current tick index I used still accurate?
- Did I calculate the correct starting tick array?
- Are the adjacent tick arrays the right ones?
- Are any of the tick arrays uninitialized?
- Did I compute the tick array start indices correctly given the tick spacing?
- Is my sqrt_price calculation using the right fixed-point precision?
- Am I handling tick boundary crossings in the correct direction?
- Does my liquidity delta have the right sign at each tick crossing?
And the error messages are, of course, no more helpful than they are for constant product swaps. "Program error. Custom error 0x1771." Looking up the error code: "Invalid tick array." Which tick array? The first, second, or third? Wrong address, wrong tick range, or uninitialized? The error doesn't say.
I develop a systematic debugging approach: I take a successful swap from on-chain transaction history, extract the accounts it used, compare them to the accounts my builder generates for the same pool and price point, and diff. The differences tell me which accounts I'm getting wrong. Then I trace back through my tick array calculation to find the logic error.
This approach works, but it's slow. Finding a successful swap on the right pool, at a comparable price point, with accessible transaction details — that's its own research task. And sometimes the issue isn't the accounts at all. Sometimes the accounts are correct and the problem is in the instruction data — the sqrt_price_limit I'm passing, or the amount parameters, or a flag I'm setting wrong.
Token-2022: Another Layer
As if the tick architecture weren't enough complexity, there's another dimension I need to handle that doesn't exist in older constant product pools.
Solana has two token programs: the original SPL Token program and the newer Token-2022 program (also called Token Extensions). Different tokens are managed by different programs. SOL and most established tokens use the original SPL Token program. Some newer tokens — including those created by certain token launchpads — use Token-2022.
A Whirlpool pool can contain one token managed by SPL Token and another managed by Token-2022. The swap instruction needs to reference the correct token program for each side of the swap. This isn't a static property I can hardcode — the same pool type (Whirlpool) can have different token program combinations depending on which tokens it contains.
My instruction builder dynamically determines which token program to use for each token in the pool. This adds another lookup to the instruction building process and another potential failure point. Including the wrong token program is a silent structural error — the instruction looks valid, the accounts are all real addresses, but the program rejects the swap because the token operations are directed at the wrong program.
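The lookup itself is simple once you know where to look: on Solana, the owner field of a mint account is the token program that manages it. A minimal sketch — the program IDs below are the commonly published ones for SPL Token and Token-2022, included here as assumptions to verify rather than hardcode blindly:

```python
# Commonly published program IDs (verify before relying on them).
SPL_TOKEN = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"
TOKEN_2022 = "TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb"

def token_program_for(mint_owner: str) -> str:
    """Select the token program for one side of the swap, given the
    owner field read from that side's mint account."""
    if mint_owner in (SPL_TOKEN, TOKEN_2022):
        return mint_owner
    # Anything else is not a token mint at all — fail loudly rather
    # than build a structurally invalid instruction.
    raise ValueError(f"unexpected mint owner: {mint_owner}")
```

Failing loudly on an unexpected owner is the point: the alternative — defaulting to SPL Token — produces exactly the silent structural error described above.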
Older constant product AMMs were built before Token-2022 existed. They only support the original SPL Token program, so the token program account is always the same address. Hardcoded. Simple. Whirlpool's flexibility is more powerful but demands more from integrators.
The Cost of Generality
I'm now several days into the Whirlpool integration, and I keep comparing the experience to the constant product integration that preceded it.
The constant product AMM took days because of account ordering, byte-level parsing, CPI nuances, and legacy dependencies. But the core pattern was linear: parse the state, compute the output, build the instruction. Once I got it working, it stayed working. The accounts don't change. The math doesn't change. The complexity is front-loaded.
Whirlpool's complexity is ongoing. Every swap requires fresh computation. The tick arrays I need change with the price. The simulation requires iterative calculation through an unknown number of ticks. The precision requirements compound across iterations. The failure modes include structural issues (wrong tick arrays) in addition to mathematical issues (wrong output calculation). The debugging surface is multidimensional.
And this is the tradeoff of concentrated liquidity. The capital efficiency gains for LPs — real, significant, well-documented gains — come at the cost of implementation complexity for everyone who interacts with the protocol programmatically. Users interacting through a frontend never see this. The UI fetches the tick arrays, simulates the swap, builds the transaction, and submits it. The user sees a swap button. Behind that button is a tick traversal engine, a PDA calculator, an account validator, and a fixed-point math library.
For an arbitrage bot, every millisecond of this computation matters. My constant product swap simulation runs in microseconds — one multiplication, one division, done. My tick-based swap simulation runs through a loop whose iteration count varies per swap. In a competitive environment where bots race to capture the same opportunity, the simulation time directly affects whether I can identify and execute the opportunity before someone else does.
I'm building the same thing I built for the constant product AMM — a swap simulator and an instruction builder. The input is the same: a pool address and a trade amount. The output is the same: an expected return and a ready-to-submit instruction. But the internal machinery connecting input to output is an order of magnitude more complex.
Same concept. Same goal. Completely different engineering reality. The phrase "concentrated liquidity" sounds like a feature description. After integrating it, it sounds like a warning label.
What does a DEX's architecture choice mean for the broader arbitrage ecosystem? If concentrated liquidity pools dominate trading volume but are harder for bots to integrate, does that create a barrier that concentrates MEV extraction among the few teams capable of handling the complexity? Or does the difficulty of tick-based simulation create more pricing inefficiencies — more opportunities that go uncaptured because fewer bots can exploit them?
Disclaimer
This article is for informational and educational purposes only and does not constitute financial, investment, legal, or professional advice. Content is produced independently and supported by advertising revenue. While we strive for accuracy, this article may contain unintentional errors or outdated information. Readers should independently verify all facts and data before making decisions. Company names and trademarks are referenced for analysis purposes under fair use principles. Always consult qualified professionals before making financial or legal decisions.