This article is a personal essay describing the author's experience using AI coding tools to build trading software on a public blockchain. It is for informational and educational purposes only and does not constitute investment, legal, or financial advice, nor a recommendation to engage in MEV, on-chain arbitrage, or any specific trading strategy. The publisher and author hold no positions in and have received no compensation from any platform, protocol, or AI tool referenced. Conduct your own research and consult a licensed advisor before making any decisions.

Building with an AI Agent

I'm staring at a Rust compiler error that spans fourteen lines. Something about lifetime annotations and borrowed references. Six months ago, I would have closed the laptop and gone for a walk. Today, I paste the error into my AI coding tool, and thirty seconds later I have an explanation of what went wrong, why it went wrong, and three ways to fix it.

This is what building software looks like in 2026 — at least for me.

I'm not a developer by training. I don't have a computer science degree. I've never worked at a tech company. But I'm building an on-chain arbitrage system on Solana, one of the most technically demanding blockchains in existence, and I'm doing it with an AI agent as my primary collaborator. This isn't a story about AI replacing programmers. It's about what happens when someone with domain knowledge but no coding background picks up the most powerful tool that's ever existed for building software.

Why AI — or Why I'm Here at All

Let me be blunt: without AI coding tools, I would not have started this project. Period.

Building an MEV bot on Solana requires understanding smart contracts, writing Rust and Python, parsing binary data layouts, interfacing with multiple DEX protocols, managing transaction bundles, and competing against professional trading firms with teams of engineers. That's not exactly a weekend project for someone who learned their first programming language from YouTube tutorials.

But here's the thing about 2026: the barrier to entry for building software has fundamentally shifted. According to recent industry surveys, the vast majority of developers now use AI tools in their workflow, and AI generates a significant portion of all new code. Industry analysts project that the share will only keep growing. These aren't fringe statistics. This is the mainstream.

The question isn't whether to use AI. The question is whether you can use it well enough to build something real.

What AI Does Well

Think of an AI coding agent as hiring a contractor who has read every Stack Overflow answer ever posted, memorized the documentation for every programming language and framework, and can type at the speed of light — but has never actually built a house.

That's simultaneously incredibly useful and deeply insufficient.

Here's what my AI agent handles on a daily basis:

Code generation. I describe what I need — "parse this account data structure with these byte offsets" — and the AI produces working code in seconds. Not always correct code, but structurally sound code that's usually 80% of the way there. For someone who would spend an hour writing the same function from scratch, that's transformative.

Debugging assistance. When something breaks (and in blockchain development, things break constantly), the AI can analyze error messages, trace through logic, and suggest fixes. It's like having a colleague who's always available for rubber duck debugging, except this duck actually talks back with useful suggestions.

Documentation and API analysis. Blockchain protocols are notoriously under-documented. The AI can read through source code, SDK implementations, and scattered documentation to synthesize an understanding of how something works. It's a research assistant that never gets tired and never complains about reading through a thousand lines of uncommented Rust.

Repetitive task automation. Writing tests, refactoring patterns across multiple files, generating boilerplate — these are tasks where AI genuinely shines. The boring stuff that eats hours of a developer's day gets compressed into minutes.

Combined, it's like having a tireless research assistant, a junior developer, and a universal translator all rolled into one. The translator part matters more than you'd think. In blockchain development, you're constantly moving between Rust, Python, TypeScript, RPC protocols, binary data formats, and on-chain program interfaces. The AI doesn't just translate between human languages — it translates between programming paradigms and protocol specifications.

What AI Cannot Do

Now for the part that AI evangelists don't like to talk about.

My AI agent cannot tell me what to build. It cannot decide whether a particular arbitrage strategy is profitable. It cannot assess whether the risk-reward ratio of a trade makes sense. It cannot understand the current state of on-chain liquidity or predict how competing bots will behave. It cannot make judgment calls about when to be aggressive and when to be cautious.

In short: AI gives answers, but I have to ask the right questions.

This is the part that separates "vibe coding" from actual engineering. The AI is a powerful engine, but someone needs to be behind the wheel, reading the road, and deciding where to go. A GPS can calculate the fastest route, but it can't decide whether the destination is worth driving to.

Strategy, risk assessment, and deep domain context remain firmly human territory. When I'm deciding how to structure my transaction flow or evaluating whether a particular DEX pool is worth targeting, the AI is silent. Those decisions require understanding market dynamics, competitive landscapes, and risk tolerances that no language model can fully grasp — at least not yet.

84% Use It. 29% Trust It.

Here's a number that tells you everything about the current state of AI coding: according to Stack Overflow's developer survey, 84% of developers use AI coding tools, but only 29% trust the output. That trust number dropped 11 percentage points from the previous year. Let that sink in. The more people use AI for coding, the less they trust it.

And they're right not to trust it blindly.

Studies show that AI-generated code has 1.7 times as many "major issues" as human-written code, and — this one should make anyone building financial software nervous — 2.74 times as many security vulnerabilities. Ninety-six percent of developers say they don't trust AI code without manual verification. Nearly a quarter of working time is now spent verifying AI output.

I live this reality every day. The AI produces code that looks clean, compiles successfully, and seems logically sound. Then you run it against actual on-chain data and discover that a byte offset is wrong by one position, or a fee calculation uses the wrong decimal precision, or an account is passed in the wrong order to a cross-program invocation. These aren't hypothetical examples. These are bugs I've found — and they're the kind of bugs that, in a financial system, can drain your wallet in a single transaction.
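To make the decimal-precision bug class concrete, here's a minimal Python sketch. The mint decimals are real Solana conventions (SOL uses 9 decimals, USDC uses 6); the function name and structure are illustrative, not taken from my actual codebase:

```python
from decimal import Decimal

# On Solana, token amounts are stored on-chain as raw integers; each
# mint defines its own decimal precision (SOL has 9, USDC has 6).
DECIMALS = {"SOL": 9, "USDC": 6}

def to_ui_amount(raw: int, mint: str) -> Decimal:
    """Convert a raw on-chain integer amount to a human-readable value."""
    return Decimal(raw) / Decimal(10 ** DECIMALS[mint])

raw_usdc = 5_000_000  # 5 USDC in raw units
print(to_ui_amount(raw_usdc, "USDC"))  # 5

# The bug class: applying SOL's 9 decimals to a USDC amount silently
# shrinks the value by 1000x -- the code runs, the number is just wrong.
print(Decimal(raw_usdc) / Decimal(10 ** 9))  # 0.005
```

Nothing crashes in the buggy path; the wrong constant just flows straight into a fee or profit calculation.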

"Almost right but not quite" is the defining characteristic of AI-generated code. And in blockchain development, "almost right" can mean "catastrophically wrong."

This is why I treat AI output the way a good editor treats a first draft: useful raw material that requires careful review. Every function gets tested. Every data layout gets verified against actual on-chain data. Every transaction gets simulated before it touches real money. The AI accelerates the writing, but it doesn't eliminate the need for judgment.
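The "simulate before it touches real money" step can be sketched in a few lines. `simulateTransaction` is a standard Solana JSON-RPC method that dry-runs a transaction without submitting it; the transaction string below is a placeholder, and a real client would POST this body to an RPC endpoint and inspect the result for errors:

```python
import json

def build_simulate_request(tx_base64: str, request_id: int = 1) -> str:
    """Build a JSON-RPC request body for Solana's simulateTransaction
    method, which dry-runs a transaction without submitting it."""
    body = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "simulateTransaction",
        "params": [tx_base64, {"encoding": "base64", "commitment": "processed"}],
    }
    return json.dumps(body)

# Placeholder base64-encoded transaction; in practice, refuse to send
# the real transaction if the simulation response reports an error.
req = build_simulate_request("AQAB...")
print(json.loads(req)["method"])  # simulateTransaction
```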

AI's Special Value in Blockchain

Despite the trust issues, AI provides something uniquely valuable in blockchain development that I haven't seen discussed much: it reads the source code so you don't have to.

In traditional software development, you have documentation. You have tutorials. You have well-maintained API references with examples and edge case notes. In blockchain? The source code is the documentation. Want to know how a particular DEX calculates swap fees? Read the Rust source. Want to understand the account layout of a liquidity pool? Parse the struct definitions yourself. Want to figure out why your transaction is failing with error code 0x1773? Good luck — that's a custom error code defined somewhere in a program you didn't write.

This is where AI becomes genuinely indispensable. I can point my AI agent at a protocol's source code and say, "Explain this data layout. What's at byte offset 48? What does this fee calculation actually do?" And it will produce a coherent explanation, often catching subtleties I would have missed on my own.

Byte layout parsing is a perfect example. Solana programs store state in binary formats — raw bytes with specific fields at specific offsets. An 8-byte discriminator here, a 32-byte public key there, a u64 amount somewhere else. Getting a single offset wrong means you're reading garbage data. The AI can parse these layouts from source code, cross-reference them with on-chain data, and flag inconsistencies. It turns what would be hours of manual binary forensics into a conversation.
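As a rough illustration, here's what that kind of parsing looks like in Python for a hypothetical account with exactly the layout just described — an 8-byte discriminator, a 32-byte public key, then a little-endian u64 amount. Real protocols each define their own offsets; this layout is invented for the example:

```python
import struct

# Hypothetical account layout (Solana stores integers little-endian):
#   bytes 0..8   : 8-byte discriminator identifying the account type
#   bytes 8..40  : 32-byte public key
#   bytes 40..48 : u64 token amount
DISCRIMINATOR_LEN = 8
PUBKEY_LEN = 32

def parse_pool_account(data: bytes) -> dict:
    """Parse raw account bytes into named fields; raise if too short."""
    if len(data) < DISCRIMINATOR_LEN + PUBKEY_LEN + 8:
        raise ValueError(f"account data too short: {len(data)} bytes")
    discriminator = data[:DISCRIMINATOR_LEN]
    pubkey = data[DISCRIMINATOR_LEN:DISCRIMINATOR_LEN + PUBKEY_LEN]
    # "<Q" = little-endian unsigned 64-bit integer
    (amount,) = struct.unpack_from("<Q", data, DISCRIMINATOR_LEN + PUBKEY_LEN)
    return {"discriminator": discriminator, "pubkey": pubkey, "amount": amount}

# Round-trip a synthetic account buffer
raw = b"\x01" * 8 + b"\x02" * 32 + struct.pack("<Q", 1_000_000)
print(parse_pool_account(raw)["amount"])  # 1000000
```

Shift any one of those offsets by a single byte and the function still returns a number — just a meaningless one. That's exactly the failure mode the AI helps cross-check against real on-chain data.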

Error code decoding is another strength. When a Solana transaction fails, you often get a numeric error code with no human-readable description. The AI can trace that code back through program source, identify the exact condition that triggered it, and suggest what went wrong. For a non-developer navigating unfamiliar protocol code, this capability is close to magical.
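The first step of that decoding is often mechanical. Programs built with the Anchor framework number their custom errors starting at 6000 (0x1770), so a raw code maps directly to an index in the program's error enum — assuming, of course, the failing program follows that convention. A minimal decoder:

```python
ANCHOR_CUSTOM_ERROR_OFFSET = 6000  # Anchor custom errors start at 6000 (0x1770)

def decode_anchor_error(code: int) -> str:
    """Map a raw error code to an index in an Anchor program's error
    enum, if it falls in the custom-error range."""
    if code >= ANCHOR_CUSTOM_ERROR_OFFSET:
        idx = code - ANCHOR_CUSTOM_ERROR_OFFSET
        return f"custom error #{idx} in the program's error enum"
    return f"framework/builtin error {code}"

# 0x1773 -- the code mentioned earlier -- is 6003 decimal, i.e. the
# fourth variant (index 3) declared in that program's error enum.
print(decode_anchor_error(0x1773))  # custom error #3 in the program's error enum
```

Finding out what that variant actually *means* still requires reading the program source — which is where the AI earns its keep.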

In a world where reading dense, uncommented smart contract source is a prerequisite for building anything, having an AI that can consume and explain that source is not a luxury. It's a necessity.

The Division of Labor

After months of working this way — what the industry now calls AI pair programming — I've settled into a clear division of labor with my AI agent.

The AI writes code. I set direction.

More specifically:

The AI handles implementation — translating my intentions into working code, debugging issues, writing tests, and managing the mechanical aspects of software development. It's the hands on the keyboard.

I handle architecture — deciding what to build, which protocols to integrate, how to structure the system, what trade-offs to accept, and when something is good enough to ship. I'm the brain behind the blueprint.

Think of it like being a general contractor who doesn't personally lay bricks. I know what the building should look like, how the plumbing should run, and where the load-bearing walls go. The AI does the actual construction. But if I don't inspect the work, things go wrong fast. Walls end up in the wrong place. Pipes don't connect. The building passes a superficial inspection but fails under load.

The human role is orchestration. Managing project context through documentation. Keeping track of what's been built, what needs to be built, and what the current state of the system is. Making sure the AI has enough context to produce useful output, and enough constraints to avoid going off the rails.

This orchestration is itself a skill — one that's rapidly becoming as important as coding ability. The quality of what the AI produces is directly proportional to the quality of context and direction you provide. Garbage in, garbage out has never been more literally true.

Can Non-Developers Actually Do This?

The "vibe coding" market — tools and platforms that let non-technical people build software using AI — is growing rapidly. The pace of investment and adoption tells you that a lot of people believe non-developers can build real things with AI.

Can they? Yes — with heavy caveats.

Here's what I've learned: domain expertise matters more than coding ability. I don't know Rust the way a senior systems programmer does. But I understand arbitrage mechanics, market microstructure, and on-chain transaction flows at a level that most programmers don't. That domain knowledge is what lets me direct the AI effectively. I know what questions to ask because I understand the problem space, even if I can't always implement the solution myself.

But calling this "easy" would be dishonest.

Working with an AI coding agent requires a specific kind of discipline. You need to verify everything. You need to understand what the code does at a conceptual level, even if you didn't write it line by line. You need to develop an instinct for when the AI is confidently wrong — which happens more often than you'd like. You need patience, because the AI will sometimes lead you down a path that looks promising for thirty minutes before hitting a dead end.

You also need to be comfortable with a constant, low-grade anxiety that comes from building on a foundation you don't fully understand. I know enough Rust to read my codebase, debug most issues, and evaluate AI-generated solutions. But there are corners of the language — lifetime annotations, trait object dispatch, async runtime internals — where I'm operating on trust. The AI says this is correct, the tests pass, the code works in production. But do I deeply, fundamentally understand why? Not always.

That's an uncomfortable place to be when real money is involved.

So yes, non-developers can build real software with AI in 2026. But the operative word is "build," not "generate." Building implies understanding, testing, iterating, and taking responsibility for what you ship. The AI is a force multiplier, not a replacement for competence.

The Tool and the Skill

There's a quote I keep coming back to: "A fool with a tool is still a fool."

AI coding tools are the most powerful software development tools ever created. They compress months of learning into days. They make expertise accessible to anyone willing to put in the work of directing them well. They democratize building in a way that would have seemed like science fiction five years ago.

But they are tools. And using tools well is a skill in itself.

I'm not building this project because AI makes it easy. I'm building it because AI makes it possible. The difference matters. Every day is still a grind of debugging, testing, verifying, and iterating. The AI handles the syntax; I handle the strategy. The AI writes the code; I take responsibility for what that code does.

If you're thinking about starting a technical project that seems beyond your skill level — whether it's blockchain, machine learning, data engineering, or anything else — AI might be the thing that tips the scales from "impossible" to "hard but doable." Just don't mistake the tool for the craft.

The tool is sitting right there on your screen. The craft? That's on you.

Disclaimer

This article is for informational and educational purposes only and does not constitute financial, investment, legal, or professional advice. Content is produced independently and supported by advertising revenue. While we strive for accuracy, this article may contain unintentional errors or outdated information. Readers should independently verify all facts and data before making decisions. Company names and trademarks are referenced for analysis purposes under fair use principles. Always consult qualified professionals before making financial or legal decisions.