I thought I had a decent handle on how transformers work. They're giant pattern-matching machines, right? You feed them the internet, and they learn to predict the next word. But a recent dive into mechanistic interpretability completely broke my model of the world.
It turns out a transformer with fewer parameters than there are bytes in a small JPEG can learn to add 10-digit numbers with near-perfect accuracy. The shocking part isn't *that* it can do it. The shock is *how*. It's not memorizing a giant addition table. It's learning the actual, step-by-step carrying algorithm we all learned in grade school.
This isn't just a cool party trick. It's a fundamental insight into how these models "think."
What I Explored: The Guts of an Algorithmic AI
I’ve been spending my nights digging into the work of researchers like Neel Nanda and the field of mechanistic interpretability. The goal is to stop treating these models like black boxes and start understanding the actual circuits they form internally. The canonical example they use is a tiny, one-layer transformer trained on a simple task: addition.
The setup is straightforward. This isn't a multi-billion parameter beast like GPT-4; we're talking about a model with maybe 12,000 parameters. For context, that's laughably small. The task is to solve a + b = c, where a and b are multi-digit numbers.
The input isn't just the numbers. It's a sequence of tokens, like this for 123 + 456: [1, 2, 3, +, 4, 5, 6, =]
The model is then trained to predict the sequence of digits for the answer, 579.
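To make the setup concrete, here's a minimal sketch of that tokenization scheme. The exact vocabulary and token IDs are my own illustrative choices, not the setup from any specific paper:

```python
# Minimal sketch of the digit-level tokenization described above.
# The vocabulary and token IDs are illustrative assumptions.
VOCAB = {str(d): d for d in range(10)}
VOCAB.update({"+": 10, "=": 11})

def encode(a: int, b: int) -> list[int]:
    """Turn 'a+b=' into a flat token sequence."""
    return [VOCAB[ch] for ch in f"{a}+{b}="]

def target(a: int, b: int) -> list[int]:
    """The digit tokens the model is trained to predict."""
    return [VOCAB[ch] for ch in str(a + b)]

print(encode(123, 456))  # [1, 2, 3, 10, 4, 5, 6, 11]
print(target(123, 456))  # [5, 7, 9]
```

The model never sees "numbers," only this stream of digit tokens, which is exactly why learning to add requires discovering an algorithm rather than a lookup.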
When you start training, something fascinating happens, a phenomenon called "grokking."
- Phase 1: Memorization. The model quickly learns the answers for all the examples in the training set. The training loss drops like a stone. But if you give it a new problem it has never seen before, like 124 + 457, it fails miserably. The validation loss stays high. It has zero generalization ability. It's just a dumb lookup table.
- Phase 2: The long plateau. For thousands of training steps, nothing seems to happen. The model is still just memorizing. This is usually where I would have given up, killed the training job, and said, "Well, that didn't work."
- Phase 3: Grokking. Then, suddenly, seemingly out of nowhere, the validation loss plummets. The model has unlocked a new capability. It can now add numbers it has never seen before with high accuracy. It has stopped memorizing and has started *generalizing*. It has "grokked" the underlying algorithm of addition.
So, what changed inside the model?
This is where the interpretability work comes in. By dissecting the model's attention patterns and internal states, researchers found that the transformer had spontaneously learned the algorithm for column-by-column addition, complete with handling the carry-over bit.
Here’s how it works, simplified:
- Right-to-Left Processing: Just like us, the model learns to work from the least significant digit (the rightmost) to the most significant (the leftmost).
- Attention Heads for Summing: For each position i in the output, an attention head learns to look at the corresponding digits a_i and b_i from the input. It pulls this information together.
- The Carry-Bit Circuit: This is the magic. The model learns to use its internal state (the residual stream) to store a "carry" bit. When adding 8 + 5 = 13, it knows the output for that position is 3, and it needs to pass a 1 over to the next column on the left. It forms a circuit where the calculation at position i influences the calculation at position i-1.
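The procedure being described is just grade-school column addition. Spelled out in plain Python, this is the algorithm the model is believed to rediscover inside its weights:

```python
# Column-by-column addition with carry, right to left -- the
# grade-school procedure the tiny transformer appears to rediscover.
def column_add(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to equal width
    carry, digits = 0, []
    for i in range(width - 1, -1, -1):      # least significant first
        s = int(a[i]) + int(b[i]) + carry
        digits.append(str(s % 10))          # the digit for this column
        carry = s // 10                     # carry into column i-1
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(column_add("123", "456"))  # 579
print(column_add("8", "5"))      # 13
```

Seeing it this way makes the result feel less mystical: the circuit the researchers found is doing the `s % 10` and `s // 10` steps, just distributed across attention heads and the residual stream.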
It's literally building a small, virtual computer inside its weights, optimized for one task: addition. It's not "thinking" in any human sense, but it has discovered and implemented a computational procedure.
This isn't just pattern matching. It's algorithm replication.
What This Means: Small Models, Big Leverage
My first reaction to this was, "Okay, cool, a toy model can do math." But the more I sit with it, the more I realize the implications are massive, especially for someone building tech in India.
We are obsessed with scale. Every conversation in the Valley is about the next 1-trillion parameter model. We're told bigger is better. But we don't have the GPU farms of Google or Microsoft. Trying to compete on model size is a losing game for 99% of the startups in Bangalore or Gurgaon. We can't out-brute-force them.
This research shows us another way.
The future might be fleets of small, expert models, not one giant god-model.
Instead of paying a massive API tax to OpenAI for a model that can write a sonnet, analyze a P&L statement, *and* add numbers, what if we could train a tiny, cheap, and ridiculously fast model that does just *one* of those things perfectly?
For so many Indian businesses, the problems are specific and bounded.
- A logistics company doesn't need a model that understands Shakespeare. It needs a model that can find the most efficient delivery route, a task that has an underlying algorithmic structure.
- A fintech startup needs to classify UPI transaction descriptions into categories. This is a classification task with a learnable, logical structure, not a creative writing prompt.
- A D2C brand needs to predict inventory demand based on past sales data and seasonal trends. Again, an algorithm.
Training a 100-million parameter model for these tasks is like using a sledgehammer to crack a nut. What if we could train a 100-thousand parameter model that has "grokked" the specific algorithm for that one task? It would be cheaper to train, faster to run, and we could own it end-to-end. No reliance on a US-based API with opaque pricing and surprise deprecations. This is technological self-reliance.
This also changes how I think about fine-tuning. We're not just "teaching the model new facts." We are, in a very real sense, guiding it to discover a new algorithm. The structure of your fine-tuning data—the way you format your prompts and completions—is implicitly teaching the model a *procedure*. If your data is a mess, you're teaching it a sloppy, inefficient procedure. If your data is clean and structured, you're giving it the best chance to find the crisp, underlying algorithm.
The hype cycle is focused on AGI. But the real, immediate value isn't in creating a fake human. It's in creating hyper-efficient, algorithmic problem-solvers. And that game is wide open.
What I'm Doing With This
This isn't just a theoretical exercise for me. Learning this has prompted a few concrete changes in how I'm approaching my own projects.
First, I'm building one. I need to feel this in my bones. I'm setting up a small project using PyTorch to replicate a similar algorithmic learning task. Maybe not addition, but something like sorting a small list of numbers or checking for balanced parentheses. The goal isn't to build something useful, but to watch the "grokking" phenomenon happen with my own eyes. I want to see the validation loss curve plunge after a long plateau. I need to build the intuition.
Second, I'm rethinking a side project. I was building a tool to categorize my personal expenses by hitting the Claude API with a complex prompt. It works, but it's slow and costs money. My new plan: can I frame this as an algorithmic task? The input is a transaction string (e.g., "ZOMATO INTERNET P LTD GURGAON"), and the output is a category ("Food"). There's a logic to it. I'm going to try and train a tiny, specialized transformer or even a simpler model on a few thousand of my own labeled transactions. If I can get it to learn the "algorithm" of my spending habits, I can run it locally for free, and it will be instantaneous.
Finally, this is a new lens for evaluating AI products. When I see a new AI-powered feature, my question is no longer "How big is the model?" It's "What is the underlying algorithm this model has learned?" Is it a robust, generalizable procedure, or is it a brittle set of memorized patterns? This helps cut through the marketing fluff and get to the core of what the technology is actually doing.
The real alpha isn't in using the biggest model. It's in understanding how to make the smallest models do brilliant things.
References
- A Mechanistic Interpretability Explainer & Walkthrough - Neel Nanda's excellent and accessible starting point.
- Grokking: Generalization Beyond Overfitting in Small Algorithmic Datasets - The original paper from OpenAI that identified the phenomenon.
- Progress in Mechanistic Interpretability - A deeper dive into how models form circuits during training.