Hardcoded Value Audit — Finding Hidden Time Bombs

I'm scanning through my codebase at two in the morning, tracing why a profit estimate is coming back wrong. Not catastrophically wrong — just subtly off, by maybe fifteen percent. The kind of discrepancy that doesn't crash anything, doesn't throw an error, doesn't trigger an alert. It just quietly bleeds money on every single trade, like a slow leak in a plumbing joint hidden behind drywall. You don't notice until the water bill arrives.

And then I find it. A single bare number sitting in the middle of a calculation. 9. Just the digit nine. No variable name, no comment, no constant definition, no hint of what it represents. It's used in a power-of-ten calculation — 10 ** 9 — which converts a human-readable amount into the raw integer format that the blockchain uses. Nine decimal places. That's the standard for SOL.

But I'm not working with SOL in this particular code path. I'm working with a different token. A token with six decimal places. And someone — past me, specifically — typed 9 here instead of pulling the correct decimal count from the token registry. The math runs. The code compiles. The result is off by a factor of a thousand. And because this feeds into a profitability filter, my bot has been quietly passing on legitimate opportunities and occasionally taking bad ones, for days.
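The bug above can be distilled into a few lines. This is a minimal sketch, not the original code; `TOKEN_DECIMALS` is a hypothetical stand-in for whatever token registry the codebase actually consults.

```python
# Hypothetical registry mapping token symbols to their on-chain decimals.
TOKEN_DECIMALS = {
    "SOL": 9,   # native SOL uses 9 decimal places (lamports)
    "USDC": 6,  # many SPL tokens, such as USDC, use 6
}

def to_raw_amount_buggy(ui_amount: float) -> int:
    """The bug: a bare 9 hardcoded regardless of which token is in play."""
    return int(ui_amount * 10 ** 9)

def to_raw_amount(ui_amount: float, token: str) -> int:
    """The fix: pull the decimal count from the registry."""
    return int(ui_amount * 10 ** TOKEN_DECIMALS[token])
```

For a six-decimal token, the buggy version produces a raw amount one thousand times too large, which is exactly the silent factor-of-a-thousand error described above.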

This is the moment I realize I need a systematic approach to every raw number in this codebase. Not just a quick grep. An audit.

The Danger of Magic Numbers

In software engineering, they're called "magic numbers" — literal numeric values embedded directly in code with no explanation of what they represent or where they came from. The term has been around for decades, and the advice against them is one of the oldest principles in the profession. Yet they persist, because typing a raw number is fast, and naming a constant feels like unnecessary ceremony when you "know what it means."

The problem is that you know what it means right now. You, today, in this exact context, with the surrounding code fresh in your working memory. Six months from now, that number is an enigma. A year from now, someone else reads this code — or you read it in a different context — and the meaning is gone. What was 328? A byte offset? An array length? A timeout in milliseconds? A fee in basis points? Without context, it could be anything.

It's like finding an unlabeled wire in a junction box during a home renovation. The contractor who wired the house twenty years ago knew exactly what that wire connected to. He didn't label it because he was standing right there, looking at both ends. Two decades and three homeowners later, someone needs to tap into that junction box, and now there's an unlabeled wire carrying current to... something. Could be the bathroom outlet. Could be the kitchen disposal. You don't know until you trace it, and if you guess wrong, you're calling the fire department.

In most software, magic numbers cause maintenance headaches. In financial software — in a system that executes trades with real money on the line — magic numbers cause financial loss. The difference between 9 and 6 in a decimal conversion is a factor of one thousand. The difference between a fee rate of 25 (basis points) and 250 (basis points, mistyped from a different DEX) is a tenfold increase in estimated costs. These aren't theoretical risks. They're the kinds of bugs that don't show up in testing because the tests use the same wrong number and therefore "pass."

Hardcoded credentials appearing in source code have led to documented security breaches across the industry — not because the code was complex, but because a single literal value was easier to type than to secure properly. A single hardcoded authentication key that should have been stored in a secure vault can become the weak point that compromises an entire security perimeter. The lesson applies far beyond credentials: any value baked directly into code is a value that can't be updated, rotated, validated, or audited without modifying the source code itself.

In my arbitrage bot, every raw number is a potential point of failure that directly translates to money. An incorrect decimal count means wrong position sizing. An incorrect fee rate means wrong profit calculations. An incorrect byte offset means reading garbage data from on-chain accounts. And unlike a web application where a bug means a user sees the wrong font size, here a bug means I'm sending real SOL into a trade that loses money.

I need a framework for checking every single one. Not just finding them — verifying them.

The 4-Step Verification Framework

After enough near-misses with bare numbers, I develop a systematic checklist. Four questions, asked of every hardcoded value I encounter in the codebase. I start calling it the 4-step verification, and it becomes part of my routine like checking mirrors before changing lanes. You do it every time, even when you think you know what's behind you.

Step 1: Context — What Does This Value Actually Represent?

The first question is deceptively simple: what is this number? Not what you think it is from glancing at the surrounding code. What does it actually, provably represent?

This means identifying:

  • What it measures: Is it a count? A rate? An offset? An index? A threshold?
  • What units it uses: Bytes? Basis points? Lamports? Percentage? Milliseconds?
  • Where it came from: Was it calculated? Copied from documentation? Measured empirically? Guessed?
  • What the calculation basis is: If derived, what formula or reasoning produced this specific number?

I'm looking at a value in my fee calculation logic. The number is 25. What is it? In the immediate context, it's being used in a fee computation. Is it 25 basis points (0.25%)? 25 percent? 25 parts per million? The code doesn't say. It just says 25.

I trace the origin. I find it matches a specific DEX's default fee tier — 25 basis points, which is 0.25%, which is the standard swap fee for one of the AMM pools I trade through. Good. Now I know what it represents. I can document it, name it, verify it.

But here's the thing that makes this step genuinely important rather than just pedantic: I find another 25 three files away. Same number. Different meaning. This one is a timeout retry count. Twenty-five retries before giving up on an RPC call. Same digit, completely unrelated meaning. If I'd changed one thinking it was the other, I'd have either broken my fee calculations or my connection retry logic. Context isn't just nice to have. It's the only thing that distinguishes two identical numbers that mean completely different things.
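Naming both values makes the distinction permanent instead of something a reader must reconstruct. A small sketch, with illustrative names rather than the original codebase's identifiers:

```python
# fees module: units in the name, source of truth in the comment.
POOL_SWAP_FEE_BPS = 25   # 25 basis points = 0.25%, per the pool's config

# rpc module: same digit, completely unrelated meaning.
RPC_MAX_RETRIES = 25     # attempts before giving up on an RPC call

def swap_fee(amount_in: int) -> int:
    """Fee in the same raw units as amount_in (1 bps = 1/10_000)."""
    return amount_in * POOL_SWAP_FEE_BPS // 10_000
```

Once named, changing the retry count can never silently change the fee math, and a grep for `FEE_BPS` finds every financial use of the number.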

Think of it like a prescription at a pharmacy. The number 50 on a label could mean 50 milligrams of active ingredient per tablet, or 50 tablets in the bottle, or refill before day 50. Same number, three different meanings, and confusing any two of them ranges from inconvenient to dangerous. The pharmacist checks every number against its context — what does this specific 50 refer to on this specific prescription? That's Step 1.

Step 2: Consistency — Does It Match Everywhere Else?

Once I know what a value represents, the next question: does the same conceptual value exist in other places? If so, do they all agree?

This is where things get uncomfortable. Because the answer, in my codebase, is frequently "no."

I have a fee rate for a specific DEX pool defined in three separate locations across different layers of the codebase. Three places, same concept, same pool. And one of them is different. Two say 25 basis points. The third says 30 basis points. Which is correct?

The answer requires going to the source of truth — the on-chain program's actual parameters. I fetch the pool's configuration account from the blockchain, decode it, and find the fee rate is 25 basis points. So the third location is wrong. It was probably copied from a different pool's configuration at some point, or the pool's fee was updated and that file wasn't.

This is the nightmare scenario with scattered constants: they start in agreement, and then one gets updated and the others don't. It's like having your mailing address listed in three different places — the DMV, your bank, and your insurance company. You move, you update the DMV, you update the bank, and you forget the insurance company. Everything works fine until you need to file a claim and the correspondence goes to your old address. The system didn't error out. It just silently used the wrong value.

In my codebase, this inconsistency manifested as a subtle discrepancy between my predicted profit and my actual profit. The screening logic used the correct fee rate. The execution logic used the incorrect one. The bot would identify a profitable opportunity, execute it, and then the actual cost would be slightly higher than expected because the execution path applied a different fee. On any single trade, the difference was small. Over hundreds of trades, it added up.

The fix isn't just correcting the wrong value. The fix is eliminating the duplication. Define the fee rate in one authoritative location and reference it everywhere else. If that's not architecturally possible — sometimes Rust and Python can't easily share a constant — then at minimum, the audit trail must exist. A comment in each location pointing to the source of truth. A test that loads both values and asserts they match. Something that will scream when they diverge.
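The "something that will scream" can be as small as a helper that compares the duplicated values and raises on divergence. A minimal sketch, assuming the values from each layer can be collected into a dict (the layer names here are hypothetical):

```python
def assert_values_agree(name: str, values: dict[str, int]) -> int:
    """Raise loudly if the same conceptual constant diverges across layers."""
    unique = set(values.values())
    if len(unique) != 1:
        raise AssertionError(f"{name} diverged across layers: {values}")
    return unique.pop()

# Example: one pool's fee rate, sampled from three layers of the codebase.
fee = assert_values_agree(
    "pool_fee_bps",
    {"screening": 25, "execution": 25, "config": 25},
)
```

Run in a test suite, this turns the silent screening-vs-execution drift described below into a failing build.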

Step 3: Dependencies — What Breaks If I Change This?

Every hardcoded value exists in a web of dependencies, and most of them are invisible until you pull on the thread.

I discover a fee rate parameter that's clearly wrong. My instinct is to fix it immediately. Change 30 to 25, commit, deploy. Problem solved.

Except it isn't. That fee rate feeds into a profit calculation. The profit calculation feeds into a threshold filter. The threshold filter determines the minimum profit required to execute a trade. If I change the fee rate, the calculated profit changes. If the calculated profit changes, opportunities that were previously above the threshold might now be below it, or vice versa. The threshold itself might have been calibrated assuming the old (wrong) fee rate. So fixing the fee rate without re-examining the threshold means I've replaced one wrong behavior with a different wrong behavior.

It's like changing the tire size on your car without recalibrating the speedometer. The old tires were the wrong size, sure. But the speedometer was calibrated to the old tires. Now you have the right tires and a speedometer that reads wrong. You fixed one thing and broke another, and the root problem was that these two values were coupled and you only changed one of them.

In my codebase, I find dependency chains that are three or four levels deep. A decimal count determines a conversion factor. The conversion factor determines a position size. The position size determines a fee calculation. The fee calculation determines a profit estimate. The profit estimate determines whether to trade. Change the decimal count, and everything downstream shifts. Every single intermediate value needs to be re-verified.

This is why I start drawing dependency maps for critical values. Not formal architecture diagrams — just quick notes listing what each constant feeds into. The decimal count for token X feeds into: conversion in the screening module, position sizing in the execution module, profit calculation in the evaluation module. Three downstream dependencies. If I change the decimal count, I check all three.
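Those quick notes can even live as data, so the "what do I re-check?" question is answerable mechanically. A sketch with illustrative names, not a formal tool:

```python
# Informal dependency map: constant/value -> what it feeds into directly.
DEPENDS_ON = {
    "TOKEN_X_DECIMALS": ["conversion_factor"],
    "conversion_factor": ["position_size"],
    "position_size": ["fee_estimate"],
    "fee_estimate": ["profit_estimate"],
    "profit_estimate": ["trade_decision"],
}

def downstream(constant: str) -> list[str]:
    """Everything to re-verify when `constant` changes (breadth-first)."""
    seen: list[str] = []
    queue = list(DEPENDS_ON.get(constant, []))
    while queue:
        node = queue.pop(0)
        if node not in seen:
            seen.append(node)
            queue.extend(DEPENDS_ON.get(node, []))
    return seen
```

Asking `downstream("TOKEN_X_DECIMALS")` returns the full four-level chain, which is exactly the re-verification list for a decimal-count change.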

Step 4: Mutation Order — Is It Written Before It's Read?

This is the subtlest and most dangerous of the four checks. The question isn't just "does this value exist?" — it's "does it have the correct value at the moment it's used?"

I discover this one the hard way. My profit calculation has a fee parameter. The fee parameter is set correctly — I've verified it through Steps 1 through 3. But the profit calculation is still wrong. The number is right. It's in the right variable. It matches everywhere. And the output is still garbage.

I add logging. I dump the fee parameter's value at every point where it's used. And I find it: the profit calculation reads the fee parameter early in the function. The fee parameter is updated with the correct value further down. The read happens before the write.

The variable was initialized with a default — zero — at the top of the function. The code that fetches the actual fee rate from the pool's configuration runs later in the function. But the profit calculation was placed above the fetch, not below it. So the profit calculation always uses the fee rate of zero. Zero fees. Every trade looks profitable when you assume there are no fees.
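Distilled to its essence, the bug looks like this. `fetch_pool_fee_bps` is a hypothetical stand-in for the on-chain fetch, stubbed here so the ordering is visible:

```python
def fetch_pool_fee_bps() -> int:
    # Stubbed; real code would decode the pool's on-chain config account.
    return 25

def estimate_profit_buggy(gross: int) -> int:
    fee_bps = 0                                  # default sentinel at the top
    profit = gross - gross * fee_bps // 10_000   # READ happens here...
    fee_bps = fetch_pool_fee_bps()               # ...WRITE happens too late
    return profit                                # always assumes zero fees

def estimate_profit_fixed(gross: int) -> int:
    fee_bps = fetch_pool_fee_bps()               # write first
    return gross - gross * fee_bps // 10_000     # then read
```

The buggy version type-checks, runs, and returns a plausible-looking number; only comparing it against the fixed ordering exposes the missing fee.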

This is a read-before-mutation bug: the value is read before the correct value is written to it. The code runs without errors. The variable exists and has a type-correct value. It's just the wrong value because the ordering is wrong.

Think of it like filing your tax return using last year's W-2. You have a W-2 form, it has numbers on it, it looks correct. But it's the W-2 from twelve months ago, not the current one. You fill out your 1040 using stale income data, submit it, and everything processes without an error — the IRS accepts the return. Until the audit, when they compare your reported income against your employer's reported wages and find a discrepancy. The form was valid. The data was real. It was just the wrong year's data because you used it before the current version arrived.

In concurrent or asynchronous code, mutation ordering gets even more treacherous. An opportunity object might have its fee parameters updated by one async task while another task is simultaneously reading those parameters to make a trading decision. Without proper synchronization, the reading task might get a half-updated state: the fee has been updated but the corresponding profit hasn't been recalculated yet. The object is internally inconsistent, and the trading decision is based on numbers that are momentarily nonsensical.

I restructure the code so that the fee parameter is fetched and assigned before any calculation that depends on it. I add assertions that verify critical parameters are non-default before they're used in financial calculations. If the fee is still zero when the profit calculation runs, the assertion fails loudly rather than silently producing a wrong answer.

The Sentinel Value Trap

Closely related to mutation ordering is a pattern I start calling the "sentinel value trap." A sentinel value is a default or initial value that serves as a placeholder — zero, negative one, empty string, null. The trap is when downstream code can't distinguish "this value is intentionally zero" from "this value hasn't been set yet."

I encounter this with fee rates. A fee rate of zero is a valid value for certain DEX configurations — some pools genuinely charge zero fees during promotional periods. But zero is also the default value when a fee rate hasn't been loaded from the on-chain configuration. If my code sees fee_rate = 0, it has two possible interpretations, and it can't tell which is correct without additional context.

The same trap shows up with a spacing parameter used in concentrated liquidity pool calculations. Zero spacing is meaningless — it would imply infinitely dense price points, which is impossible. But zero is the default for an uninitialized integer. If the spacing hasn't been loaded from the pool account and is still sitting at its default of zero, my math will divide by zero or produce infinite values.

It's like a car odometer reading zero. On a brand-new car, zero miles is accurate — the car is fresh off the production line. On a used car, zero miles means the odometer was tampered with or rolled over. Same number, completely different implications. The number itself can't tell you which situation you're in. You need external context — the car's title history, its physical condition, the seller's documentation — to determine whether that zero is honest or fraudulent.

My defense against sentinel values is explicit validation at the boundaries where these values are consumed. Before any fee rate enters a profit calculation, I check: is it within the plausible range for this type of DEX? Typical AMM fees range from 1 to 100 basis points. If the fee is zero, is this a pool type that supports zero fees? If the answer is no, I refuse to calculate — I throw an error or mark the opportunity as "data incomplete" and skip it. I'd rather miss a trade than execute one based on data I can't trust.
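One way to make "hasn't been set yet" unambiguous is to use `None` for the unloaded state and push the plausibility checks to the boundary. A sketch under the assumptions stated above (typical AMM fees of 1 to 100 basis points, zero allowed only for pool types that support it); the function name and flags are hypothetical:

```python
from typing import Optional

def validated_fee_bps(fee_bps: Optional[int], zero_fee_pool: bool) -> int:
    """Validate a fee rate at the boundary before it enters profit math."""
    if fee_bps is None:
        raise ValueError("fee rate was never loaded from on-chain config")
    if fee_bps == 0:
        if zero_fee_pool:
            return 0  # intentional zero: a pool type that supports zero fees
        raise ValueError("zero fee on a pool type that cannot have one")
    if not 1 <= fee_bps <= 100:
        raise ValueError(f"{fee_bps} bps is outside the plausible AMM range")
    return fee_bps
```

With `None` as the unloaded state, "intentionally zero" and "not set yet" are different values, and the type checker nags at every call site that forgets to handle the unloaded case.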

This validation layer catches an entire class of bugs that the type system can't. A fee rate of zero is a perfectly valid integer. It passes every type check. It doesn't cause a null pointer exception or an out-of-bounds error. It just quietly makes every trade look more profitable than it actually is. Only domain-specific validation — knowing what values are plausible in this business context — catches the problem.

Code Consistency Is Not External Correctness

The four-step framework catches a lot of issues. But there's a meta-problem that sits above the framework, one that took me longer to recognize because it's counterintuitive: all your code can be internally consistent and still be wrong.

I audit my constants for a specific DEX integration. The Python code has a fee rate. The Rust code has the same fee rate. The configuration file has the same fee rate. The test fixtures use the same fee rate. Everything matches. Step 2 — consistency — passes with flying colors. All my code agrees with itself.

But I haven't verified that my code agrees with reality. The fee rate in my code is 30 basis points. I check the on-chain program's actual configuration account. The fee rate on-chain is 25 basis points. Every file in my codebase is wrong — but they're all wrong in exactly the same way, so a consistency check that only compares my code against itself finds nothing amiss.

This is the "all the code agrees with itself, but disagrees with reality" bug. It's arguably the hardest type of error to catch, because the usual internal review processes — code review, cross-referencing, grep for inconsistencies — all pass. The error is at the boundary between my system and the external world.

I've seen this pattern in broader contexts. A team implements an API client based on documentation. They're meticulous: the request format matches across all their modules, the response parsing is consistent, the error handling follows the documented error codes. Everything agrees with everything else internally. But the documentation was for version 1 of the API, and the server is running version 2. The request format changed. The team's code is perfectly self-consistent and completely incompatible with the actual service.

In my world, the external source of truth is the blockchain itself. The on-chain program's IDL — its Interface Definition Language file — defines the canonical field names, types, fee structures, and account layouts. Successful transactions on a block explorer show the actual accounts, instructions, and parameters that work. These are the references that my constants must be verified against, not just each other.

I start adding a verification step to every new DEX integration: after implementing the integration in code, I fetch actual on-chain data and compare my parsed output against known values. If the documentation says the fee rate is 25 basis points, I don't just set my constant to 25. I fetch the pool's configuration account, decode the fee field using my code, and verify that the result is 25. If it's not, my parsing is wrong — maybe my byte offset is off, maybe I'm reading the wrong field, maybe the documentation is outdated. Whatever the cause, the on-chain data is the ground truth.

This practice catches errors that no amount of internal code review would find. Because the question isn't "does my code agree with my other code?" The question is "does my code agree with the blockchain?"

Auditing Never Ends

The hardest lesson about hardcoded value verification is that it's not a one-time activity. You don't audit your constants once, fix the issues, and move on. Every code change — every new DEX integration, every protocol update, every refactored module — potentially introduces new hardcoded values or invalidates old ones.

A DEX upgrades its program. The fee structure changes from a flat rate to a tiered system. My hardcoded fee rate was correct yesterday; it's wrong today. I don't get a notification. There's no deprecation warning, no email from the DEX team, no on-chain event that says "hey, your constants are stale." The pool just starts charging a different fee, and my bot's profit calculations silently drift from reality.

A new token launches with an unusual decimal count — not 6, not 9, but 8. My code has a fallback that assumes 9 decimals for unknown tokens. The fallback is wrong for this token, and because it's a fallback (not an explicit constant), it doesn't show up in a simple grep for hardcoded values. It's a hardcoded assumption rather than a hardcoded number, which makes it invisible to the most common audit techniques.

I update a module and accidentally duplicate a constant that was previously defined in only one place. Now it exists in two places, and the git diff shows a clean addition — no conflicts, no warnings. Six months later, someone (me) updates one copy and not the other, and I'm back to Step 2 inconsistencies.

The audit is a process, not a project. Every time I integrate a new DEX, I build a constants inventory: every literal value used in the integration, what it represents, where the authoritative source is, and what depends on it. When I update an existing integration, I re-run the four-step check on every constant that touches the changed code path. When a protocol announces changes, I add "re-verify constants" to my update checklist before I even look at the new code.

I also build automated checks where I can. Tests that fetch on-chain values and compare them against my hardcoded constants. If a fee rate changes on-chain, the test fails, and I know I need to update my code. This doesn't catch everything — I can't auto-verify every conceptual constant — but it catches the most dangerous class of errors: the ones where external reality changes and my code doesn't notice.
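The shape of such a drift check is simple. This sketch stubs the on-chain fetch; in real code `fetch_onchain_fee_bps` would decode the pool's live configuration account over RPC, and the pool address would be the real one:

```python
POOL_FEE_BPS = 25  # the hardcoded constant under audit

def fetch_onchain_fee_bps(pool_address: str) -> int:
    # Stubbed here; real code fetches and decodes the pool config account.
    return 25

def check_fee_constant(pool_address: str) -> None:
    """Fail loudly if external reality has drifted from the constant."""
    live = fetch_onchain_fee_bps(pool_address)
    assert live == POOL_FEE_BPS, (
        f"on-chain fee {live} bps diverged from hardcoded {POOL_FEE_BPS}"
    )
```

Run on a schedule or in CI, this converts "the pool quietly started charging a different fee" from silent drift into a red test.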

Is this tedious? Absolutely. It's the equivalent of a building inspector checking every electrical connection, every plumbing joint, every load-bearing beam against the blueprints, and then checking the blueprints against the building code, and then coming back next year and doing it all again because the code got updated. Nobody enjoys it. But the alternative is a house that looks fine from the outside while the wiring slowly corrodes behind the walls.

Every Number Is a Ticking Clock

I'm back at my terminal, two weeks after that late-night debugging session. I'm integrating a new pool type. The documentation lists a fee of 10 basis points. I could type 10 into my code and move on. It would take three seconds.

Instead, I open a new section in my constants file. I name the constant with the DEX name, the pool type, and the word "fee." I set it to 10. I add a comment citing the documentation URL. I fetch the pool's on-chain configuration account and verify the fee field decodes to 10. I check that no other file in my codebase defines a different value for this pool's fee. I trace every code path that uses this constant and verify the dependency chain. I confirm that the constant is read only after it's been properly initialized.

It takes fifteen minutes instead of three seconds. And it's fifteen minutes that might save me days of debugging later — or worse, days of silently losing money without knowing why.

Every hardcoded number in a codebase is a ticking clock. It might be correct right now. It was probably correct when it was written. But time passes, protocols change, code gets refactored, assumptions that were valid become invalid. The number doesn't update itself. It doesn't raise its hand and say "I'm stale." It just sits there, looking the same as it did the day it was typed, quietly diverging from the reality it's supposed to represent.

How many bare numbers are in your codebase right now — and when was the last time anyone checked if they're still right?

Disclaimer

This article is for informational and educational purposes only and does not constitute financial, investment, legal, or professional advice. Content is produced independently and supported by advertising revenue. While we strive for accuracy, this article may contain unintentional errors or outdated information. Readers should independently verify all facts and data before making decisions. Company names and trademarks are referenced for analysis purposes under fair use principles. Always consult qualified professionals before making financial or legal decisions.