The First Live Run — And 0% Landing Rate
It's 2:33 in the morning and I'm about to flip the switch.
Not metaphorically. There's a literal command waiting in my terminal. One press of the Enter key and my bot goes live on Solana mainnet. Real tokens. Real pools. Real money — not much of it, but enough to make my palms sweat. The kind of money where losing it all wouldn't ruin me, but it would sting. It would sting enough that I'd feel stupid about it for months.
I've been building toward this moment for weeks. I've written the pool scanner, the cycle detector, the AMM math engine, the on-chain router program, the transaction builder, the Jito bundle submission layer. I've run unit tests. I've done simulations. I've dry-run the whole pipeline end to end and watched it identify opportunities that looked real. The numbers made sense. The math checked out. The architecture felt solid.
And now it's time to find out if any of that matters.
This is the part of building a race car where you stop tuning the engine on jack stands and actually put it on the track. You've dyno-tested it. You've checked the tire pressure. You've topped off the fluids. But none of that tells you what happens at 180 mph in Turn 3 with thirty other cars around you. The only way to know is to drive it.
I hit Enter.
The Log Flood
The terminal erupts.
Lines of text cascade down the screen so fast they blur together. My bot is alive and it's noisy — I made it noisy on purpose, because I want to see everything that's happening. Every pool update, every cycle evaluation, every decision it makes. Right now, in these first seconds, I need full transparency. I need to know what it's thinking.
[02:33:01] Pool monitor started. Subscribing to 247 pools...
[02:33:02] WebSocket connection established.
[02:33:02] Evaluating cycle: SOL → USDC → TDCCP → SOL
[02:33:02] Opportunity detected! Estimated profit: +0.3%
[02:33:03] Evaluating cycle: SOL → USDT → USD1 → SOL
My heart rate spikes. "Opportunity detected." Those two words hit different when it's real money. During testing, I saw them a thousand times and felt nothing. Now, each one sends a jolt of adrenaline through me. The bot is doing what I built it to do. It's scanning the market. It's finding price discrepancies. It's evaluating whether the spread is wide enough to cover fees and still turn a profit.
More lines scroll past. The numbers are small — fractions of a percent, pennies of potential profit — but they're there. The bot is finding real opportunities on a real blockchain in real time.
[02:33:14] Bundle submitted! Tip: [REDACTED] lamports
[02:33:15] Awaiting confirmation...
[02:33:17] Bundle submitted!
[02:33:19] Bundle submitted!
"Bundle submitted." There it is. My bot has constructed a transaction, packaged it into a Jito bundle, attached a tip, and sent it off to compete in the block space market. Somewhere out there, a Jito validator is looking at my bundle, comparing my tip to everyone else's tips, and deciding whether my transaction deserves to be included in the next block.
I'm watching the logs with the intensity of a poker player staring at a river card. Every line could be the one that says "Bundle landed. Profit captured." Every second that passes without that confirmation is another second of uncertainty.
The logs keep scrolling. More opportunities. More bundles submitted. The bot is humming along, doing exactly what it's supposed to do. Four minutes of operation. The evaluator has processed over five thousand audit events. Over two thousand cycles evaluated. Six executions attempted.
I'm grinning. This is working. The architecture holds. The pipeline flows. The math produces numbers that look reasonable. This is actually, genuinely working.
Now I just need to check the results.
The Gut Punch
I pull up the bundle status dashboard. I'm expecting to see at least a few landed transactions. Maybe not all six — I know the competition is fierce and landing rates for new bots are low. But some of them. Even one. One successful arbitrage would mean the entire pipeline works end to end. One landed bundle would validate weeks of work.
I scan the results.
Bundle 1: on_chain_not_found
Bundle 2: on_chain_not_found
Bundle 3: on_chain_not_found
Bundle 4: on_chain_not_found
Bundle 5: on_chain_not_found
Bundle 6: on_chain_not_found
Zero. Out of six.
I stare at the screen. I refresh. Same result. I check the status API directly — getBundleStatuses — and the response comes back: "unknown after 5 polls." The bundles weren't rejected. They weren't invalid. They just... vanished. Jito accepted them, acknowledged receipt, and then silently dropped them into the void.
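For what it's worth, the pattern behind that "unknown after 5 polls" message is easy to sketch. Everything here is illustrative: `fetch_status` stands in for a real getBundleStatuses RPC call, and the poll count and interval are assumptions, not Jito's actual behavior.

```python
import time

def poll_bundle_status(bundle_id, fetch_status, max_polls=5, interval_s=2.0):
    """Poll a bundle's status until it resolves or we give up.

    fetch_status(bundle_id) is a stand-in for a real getBundleStatuses
    call; it should return a status string like "landed" or "failed",
    or None while the bundle is still unknown.
    """
    for _attempt in range(max_polls):
        status = fetch_status(bundle_id)
        if status is not None:
            return status
        time.sleep(interval_s)
    # the bundle never showed up on-chain and the API never said why
    return f"unknown after {max_polls} polls"
```

The frustrating part is the final branch: the API distinguishes "landed" from "failed", but a dropped bundle simply never resolves, so all you ever learn is that the polling gave up.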
Zero percent landing rate.
The grin is gone. In its place is a specific kind of confusion that I imagine is similar to what a baseball pitcher feels when he throws what he thinks is a perfect fastball, right down the middle, and the umpire calls it a ball. You saw it leave your hand. You felt the seam rotation. Everything was right. But the result says otherwise.
I sit back in my chair. It's 2:37 AM. My bot ran for exactly four minutes. Five thousand events processed, two thousand cycles evaluated, six bundles submitted, zero bundles landed. The world's most elaborate mechanism for accomplishing nothing.
"Is My Code Broken?"
The first instinct — the immediate, reflexive, 2:37-AM instinct — is to assume the code is broken. Something crashed. Something threw an exception. There's a typo somewhere, an off-by-one error, a null reference that my tests didn't catch. The code is wrong and I just need to find the bug and fix it.
I tear through the logs looking for errors. Red text. Stack traces. Exception messages. Anything that says "here's what went wrong."
Nothing. The logs are clean. No errors, no warnings, no crashes. The bot operated exactly as designed. It found opportunities, built transactions, submitted bundles, and... got nothing back. The code didn't break. The code worked perfectly.
This is more unsettling than a crash. When code crashes, you have a stack trace. You have a line number. You have a starting point for debugging. When code works perfectly and produces zero results, you have nothing. You're standing in an empty room looking for clues, and the room is spotless.
I dig deeper into the numbers. That first run: 5,139 audit events processed. 2,567 cycles evaluated. But of those 2,567 evaluations, 2,560 were skipped. Only 7 made it past the screening phase. And of those 7, only 6 were actually executed.
Why were 2,560 skipped? The logs tell me: 81% were killed by a consecutive_failures filter. The bot had tried to execute on a particular set of pools three times in a row and failed each time, so it automatically blacklisted them for a cooldown period. Smart design, in theory. But in practice, it means the vast majority of what the bot was finding wasn't real opportunity — it was phantom signal.
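The consecutive_failures idea can be sketched as a small cooldown filter. The threshold, cooldown window, and cycle keys below are illustrative values, not the bot's actual configuration:

```python
import time

class CooldownFilter:
    """Blacklist a cycle after N consecutive failures, for a cooldown window."""

    def __init__(self, max_failures=3, cooldown_s=60.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.clock = clock
        self.failures = {}           # cycle key -> consecutive failure count
        self.blacklisted_until = {}  # cycle key -> timestamp when cooldown ends

    def allow(self, cycle):
        until = self.blacklisted_until.get(cycle)
        if until is not None and self.clock() < until:
            return False  # still cooling down: skip this cycle
        return True

    def record_failure(self, cycle):
        n = self.failures.get(cycle, 0) + 1
        self.failures[cycle] = n
        if n >= self.max_failures:
            # three strikes: bench the cycle and reset the counter
            self.blacklisted_until[cycle] = self.clock() + self.cooldown_s
            self.failures[cycle] = 0

    def record_success(self, cycle):
        self.failures[cycle] = 0
        self.blacklisted_until.pop(cycle, None)
```

The design is sound against transient failures. The trouble comes when a cycle fails for a persistent reason the filter can't see, like stale data: the cooldown expires, the phantom opportunity reappears, and the cycle burns three more attempts.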
And the phantoms have a name. There's one cycle that dominates the evaluation count: SOL to USDC to TDCCP to SOL. Out of 2,567 evaluations, 2,510 — nearly all of them — are this single cycle. It keeps showing up as profitable. It keeps getting evaluated. And it keeps failing. Not because the math is wrong, but because the data feeding the math is wrong. The pool state for one of the hops is stale. My bot is computing profits based on prices that no longer exist.
It's like checking a stock price in yesterday's newspaper and trying to day-trade on it. The numbers are real — they were real, twelve hours ago. But the market has moved on, and you're trading against a ghost.
The Second Run
I don't give up. Of course I don't give up. One failed run proves nothing. Maybe it's a timing issue. Maybe I was unlucky. Maybe the market conditions were bad for those four minutes. I need more data.
But I'm also not stupid enough to run the exact same thing and expect different results. That's the definition of — well, you know the quote. Before the second attempt, I fix the things I can see. I set up Address Lookup Tables to compress my transactions. I fix a WSOL token account issue. I double-check the router program's account configuration. Real, tangible improvements based on problems I can identify.
The second run is longer. More confident. More bundles submitted: 65 executions. That's ten times more than the first run. The pipeline is flowing better. The fixes helped. The bot is finding opportunities, evaluating them, building transactions, submitting bundles at a healthy clip.
Results:
Executions: 65
Landings: 0
Landing rate: 0%
Sixty-five attempts. Zero successes. A perfect, unblemished record of failure.
It's like a basketball player taking 65 free throws and missing every single one. It defies probability. Even if you're bad — even if you're historically, legendarily bad — at some point, one should go in by accident. The rim is right there. The hoop is regulation size. Just one. Please.
But no. Zero.
I fix more things. I find issues with the token account setup, with the CPI call configuration, with how intermediate hop amounts are being calculated. Each fix feels meaningful. Each fix addresses a real problem. I rebuild. I redeploy. I run again.
Third run: 83 executions. Zero landings.
A few more fix-and-rerun loops later, the cumulative scoreboard reads: 220 executions attempted. Zero landings. Landing rate: 0.000%. Every single bundle I've sent to Jito has returned on_chain_not_found — a status message that, translated from blockchain-speak to English, essentially means: "We have no idea what happened to your transaction. It's gone. Don't ask us about it."
The Cascade
Here's what I'm starting to understand at 3 AM, running on caffeine and stubbornness: the problem isn't one thing. If it were one thing, I'd have found it by now. One bug, even a subtle one, leaves a trail. You follow the trail, you find the bug, you fix it, you move on. That's how debugging works. That's how it's supposed to work.
But this isn't one bug. This is multiple systems failing simultaneously, and their failures are masking each other. It's like taking your car to the mechanic because the engine won't start, and the mechanic discovers that the battery is dead, the spark plugs are fouled, the fuel pump is broken, the ignition coil is cracked, the starter motor is shot, the timing belt is off, the fuel injectors are clogged, and there's a squirrel living in the air filter. Fix any one of those things and the car still won't start. You have to fix all of them.
The first layer I peel back: my transactions are too big. Solana has a hard limit on transaction size — 1,232 bytes. My transactions are blowing past that limit because I'm not using Address Lookup Tables to compress the account references. Without compression, a three-hop cyclic arbitrage transaction needs to list every account for every swap instruction — the pool, the token vaults, the authority, the token program, the mint, the associated token accounts — and all of those 32-byte public keys add up fast. Way too fast.
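A back-of-envelope calculation shows how fast those pubkeys eat the budget. The per-hop account counts and overhead figures below are rough assumptions, not exact numbers for any particular DEX:

```python
# Why a legacy (non-lookup-table) three-hop transaction blows the limit.
MAX_TX_SIZE = 1232  # Solana's hard cap on a serialized transaction
PUBKEY_SIZE = 32    # each account reference in a legacy message is a full pubkey

hops = 3
accounts_per_hop = 9   # pool, vaults, authority, user token accounts, etc. (illustrative)
shared_accounts = 6    # payer, router program, token program, and friends (illustrative)
sig_and_header = 64 + 3 + 32  # one signature + message header + recent blockhash

unique_accounts = hops * accounts_per_hop + shared_accounts
account_bytes = unique_accounts * PUBKEY_SIZE
other_overhead = 200   # instruction data, compact-array lengths, account indexes (rough)

legacy_total = sig_and_header + account_bytes + other_overhead

# With an Address Lookup Table, per-hop accounts shrink to 1-byte indexes,
# at the cost of one extra 32-byte table address in the message.
alt_total = (sig_and_header + shared_accounts * PUBKEY_SIZE
             + hops * accounts_per_hop * 1 + 32 + other_overhead)

print(f"legacy: {unique_accounts} accounts -> ~{legacy_total} bytes (limit {MAX_TX_SIZE})")
print(f"with lookup table: ~{alt_total} bytes")
```

Even with generous rounding, the legacy version lands comfortably over the 1,232-byte cap while the lookup-table version fits with room to spare, which is the whole argument for ALTs in one arithmetic exercise.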
So I've been splitting each arbitrage into multi-transaction bundles. Instead of one atomic transaction that does all three swaps, I'm sending two or three transactions that do the swaps sequentially. But this creates a new problem: the transactions aren't atomic. The first swap can succeed while the second one fails, leaving my funds in an intermediate token that I didn't want to hold. And even if they all succeed, the splitting adds overhead and complexity that eats into the already-thin margins.
Fix one. Six more remain.
The second layer: intermediate hop amounts. When my cycle goes SOL to USDC to BONK to SOL, the output of the first swap becomes the input of the second swap. But the precision of that handoff is off. The estimated output and the actual on-chain output don't match, because my AMM math is slightly wrong — not hugely wrong, but enough that the second instruction receives an amount that doesn't match what's actually available. The transaction fails with a cryptic error code that tells me nothing useful.
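To make the handoff problem concrete, here's the standard constant-product swap formula in the integer floor math an on-chain program would use. The reserves and fee below are invented for illustration; the point is that if an off-chain estimate rounds even slightly differently from this, the chained amounts diverge by the second hop:

```python
def swap_out(amount_in, reserve_in, reserve_out, fee_bps=25):
    """Constant-product (x*y=k) output after a fee, in integer floor math.

    On-chain programs compute this in integers; an off-chain estimator
    using floats (or rounding in a different place) will drift from the
    real output, and the error compounds across hops.
    """
    amount_in_after_fee = amount_in * (10_000 - fee_bps)
    return (amount_in_after_fee * reserve_out) // (reserve_in * 10_000 + amount_in_after_fee)

# Chain three hops: each hop's actual output becomes the next hop's input.
amt = 1_000_000_000  # 1 SOL in lamports
pools = [
    (5_000_000_000_000, 800_000_000_000),          # SOL -> USDC reserves (made up)
    (800_000_000_000, 9_000_000_000_000_000),      # USDC -> BONK (made up)
    (9_000_000_000_000_000, 5_000_000_000_000),    # BONK -> SOL (made up)
]
for reserve_in, reserve_out in pools:
    amt = swap_out(amt, reserve_in, reserve_out)
print(f"1 SOL in -> {amt} lamports out")
```

With these balanced reserves there's no arbitrage to capture, so the round trip loses exactly the fees plus price impact. An estimator has to reproduce the floor division at every hop, not just the ratio, or the second instruction asks for an amount the chain never produced.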
The third layer: phantom profits. I'm computing opportunity sizes based on pool data from my local cache, but some of that data is stale. Not all pools are subscribed via WebSocket — some are fetched periodically, and between fetches, the on-chain state can shift dramatically. A pool that shows a 2% price discrepancy in my cache might have zero discrepancy on-chain. But my bot doesn't know that. It sees the stale data, computes a profit, and sends a bundle that has no chance of succeeding because the opportunity evaporated minutes ago.
This is the TDCCP cycle. The one that appeared 2,510 times in a four-minute window. It's not a real opportunity. It's a mirage — a trick of stale data that keeps regenerating because the underlying pool state in my cache never updates. My bot is chasing the same ghost over and over, wasting execution slots and tip money on transactions that are dead on arrival.
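One defensive sketch, assuming the cache stores a last-update timestamp per pool (the cache shape and the two-second freshness threshold are both assumptions): refuse to evaluate any cycle whose pools haven't been updated recently.

```python
import time

def fresh_pools(pool_cache, max_age_s=2.0, now=None):
    """Filter a pool cache down to entries updated within max_age_s.

    pool_cache maps pool address -> (last_update_timestamp, state).
    """
    now = time.time() if now is None else now
    return {addr: state for addr, (ts, state) in pool_cache.items()
            if now - ts <= max_age_s}

def cycle_is_evaluable(cycle_pools, pool_cache, max_age_s=2.0, now=None):
    """A cycle is only worth pricing if every pool it touches is fresh."""
    fresh = fresh_pools(pool_cache, max_age_s, now)
    return all(addr in fresh for addr in cycle_pools)
```

A gate like this would have silenced the TDCCP cycle entirely: the moment one hop's pool stops receiving WebSocket updates, every cycle through it drops out of evaluation instead of regenerating the same mirage 2,510 times.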
The fourth layer: WSOL token accounts. On Solana, you can't just swap SOL directly — you need Wrapped SOL, which lives in an Associated Token Account. My bot assumed that account exists. It doesn't. Every transaction that tries to use it fails with error code 6001 — InvalidHopData — because the program can't find the account it's been told to deposit into. It's like writing a check to someone who doesn't have a bank account. The check might be perfectly valid, but there's nowhere to put the money.
Layer five. Layer six. Layer seven. Each one is a distinct problem in a distinct part of the system. Missing program accounts in CPI calls. Stale bin array data in Meteora DLMM pools that aren't subscribed via WebSocket. A hardcoded token list that's artificially limiting which cycles the bot can evaluate. An issue with intermediate hops consuming existing token balances from previous failed attempts, corrupting the amount calculations for subsequent trades.
Eight problems. Eight distinct systems. Each one independently capable of causing a transaction to fail. And all eight are active simultaneously.
The Code Is Correct But the Answers Are Wrong
There's a specific kind of frustration that comes from code that works exactly as designed but produces wrong results. It's different from a bug. A bug is when the code does something other than what you intended. This is when the code does precisely what you intended, and what you intended is wrong.
My AMM math is technically correct — for the inputs it receives. But the inputs are stale. My transaction builder is technically correct — for the account layout it's given. But the account layout is incomplete. My bundle submission is technically correct — the tip is calculated, the bundle is formatted, the API call succeeds. But the transactions inside the bundle are DOA.
It reminds me of the story about the GPS that drives you into a lake. The GPS is working perfectly. It's calculating routes correctly. It's following its map data faithfully. But the map is wrong. There's a lake where the road used to be. The system is functioning exactly as designed, and the result is catastrophic.
My system is functioning exactly as designed. And the result is 220 straight failures.
What 0% Actually Means
At some point during the third failed run, I stop trying to fix things and start trying to understand things. There's a difference. Fixing is tactical — you find a problem, you patch it, you move on. Understanding is strategic — you step back and ask why the problems exist in the first place.
The answer I arrive at is humbling.
I built a bot that's good at math and bad at reality. The math engine is solid. Given accurate pool states, it computes swap outputs correctly. Given valid transaction layouts, the on-chain program executes swaps correctly. Given properly formed bundles, the submission layer delivers them to Jito correctly. Each component, in isolation, works.
But "in isolation" is the key phrase. In the real world, the components don't run in isolation. They run in a chain, and the chain is only as strong as its weakest link. And I have eight weak links.
This is the difference between building something in a workshop and deploying it to the field. In the workshop, conditions are controlled. Inputs are clean. The environment is predictable. In the field, data arrives stale. Accounts don't exist. Pool states change between when you read them and when your transaction executes. Other traders are competing for the same opportunities and getting there first. The network is adversarial in ways that your test suite never imagined.
I think about all those weeks of building and testing and simulating. I was so careful. I wrote tests for everything. I verified the AMM math against known values. I simulated transaction flows and checked the outputs. I thought I was being thorough. And I was — for the workshop. I built a beautiful car that runs perfectly on jack stands. I just didn't account for the fact that the racetrack has potholes, the other drivers are aggressive, the weather is unpredictable, and the fuel might be contaminated.
Zero percent. Not a single bundle landed. Not a single cent of profit captured. Weeks of work, hundreds of hours, and the scoreboard says zero.
Sitting in the Parking Lot
It's almost 4 AM. I've stopped the bot. The terminal is quiet. The cascade of log lines has been replaced by a blinking cursor.
I'm sitting here the way you sit in your car after a job interview that went badly. You're not ready to drive home yet. You're not ready to call anyone and talk about it. You're just... sitting. Processing. Running through the conversation in your head, identifying the moments where things went wrong, wondering if there's still a chance or if it's time to move on.
Here's the thing about a 0% landing rate: it doesn't mean the project is a failure. It means I haven't solved the problem yet. Those are different things. A failed project is one where the problem itself is unsolvable or the approach is fundamentally wrong. A 0% landing rate just means there are bugs to fix and systems to tune. The architecture works. The pipeline works. The math works. The components work. The integration doesn't.
This is like the first time you try to parallel park. You know the theory. You've watched videos. You understand the geometry — turn the wheel, back up at an angle, straighten out. But when you actually try it, you hit the curb. You're too far from the car in front. You're at the wrong angle. You end up three feet from the curb, blocking traffic, with someone honking behind you. The theory is correct. The execution needs work.
Two hundred and twenty failures tell me something important: the problem space is bigger than I thought. I wasn't fighting one enemy — I was fighting eight, and I didn't even know most of them existed until now. Each failure peeled back a layer and revealed something new.
The stale data problem alone is enormous. It's not enough to subscribe to pool updates — I need to subscribe to the right pool updates, at the right frequency, and I need to know which pools I can trust and which ones are feeding me ghosts. I need the bot to distinguish between a real opportunity and a phantom one, and right now, it can't.
The transaction size problem cascades into a bundle design problem, which cascades into an atomicity problem, which cascades into a risk management problem. Pull one thread and three more unravel.
The account existence problem reveals a gap in my understanding of Solana's runtime. I assumed things about the state of the chain that turned out to be false. Each false assumption became a failure mode.
Here's what I keep coming back to: the distance between "the code compiles" and "the code makes money" is vast. Incomprehensibly vast. Every programmer knows the gap between "it compiles" and "it works." But in MEV, there's a third gap — between "it works" and "it works fast enough, accurately enough, and reliably enough to compete against other bots that have been doing this for years." That third gap is where I'm standing right now, and I can't even see the other side.
The logs are saved. The data is recorded. The eight problems are identified, even if most of them aren't solved yet. I have a list of things that need to change, and each item on that list is a project in itself.
I close my laptop. Not because I'm giving up. Because it's 4 AM and I need to sleep. And because the problems I've found aren't the kind you solve by pushing through fatigue. They're the kind you solve by thinking clearly, and thinking clearly requires not being awake for twenty-two hours straight.
Zero percent. Two hundred and twenty straight failures. The bot works perfectly and produces nothing.
I'm not sure what the right analogy is. Maybe it's like building a fishing rod, testing the cast in your backyard, driving to the lake, and spending four hours casting perfectly — beautiful form, textbook technique, consistent distance — and catching absolutely nothing. The rod works. The cast works. But working gear and good technique don't guarantee fish. You also need to be in the right spot, with the right bait, at the right time, reading the water correctly. The rod is the easy part. Reading the water is the hard part.
I built a rod. Now I need to learn to read the water.
But that's a problem for tomorrow. Right now, it's 4 AM and the cursor is blinking and the log file is 23,000 lines of things that didn't work, and I'm going to bed with a 0% landing rate and a head full of questions I don't know how to answer yet.
The question that keeps circling as I drift off: is 0% the bottom, or is there a layer underneath this that I haven't found yet? The eight problems I've identified — are those all of them? Or are there more hiding behind the ones I can see, waiting to reveal themselves once I fix the obvious ones?
I genuinely don't know. And that uncertainty — that specific flavor of not-knowing-what-you-don't-know — is the most honest thing I've felt since I started this project.