This Week I Stopped Pretending and Built a Real System

8 min read · By Aditya Biswas

My entire digital life was a house of cards, and I was the one who had to hold my breath every time the wind blew. For months, deploying updates to my projects—like the claw-biswas AI system or the antigravity platform—involved a ritual of scp-ing files, manually SSH-ing into a server, pulling the latest git changes, killing a process, and restarting it inside a screen session. It worked, mostly. Until it didn't. The breaking point came last Tuesday at 1 AM, when a simple dependency update brought everything down. I spent the next 90 minutes untangling a mess I had created, all because my "deployment process" was just a series of panicked commands typed into a terminal.

That was it. I was done being a glorified FTP user. I was done pretending that this artisanal, hand-crafted server management was some kind of indie-dev badge of honour. It’s not. It’s a liability. It’s a tax on your time and your sanity. This week, I paid the upfront cost to eliminate that tax for good. I stopped writing application code and started building a real system.

The Anatomy of Technical Debt

Let's be brutally honest about what my old "system" looked like. It was a collection of bad habits masquerading as a workflow. If you're an indie developer, maybe some of this sounds familiar.

Deployment:

  1. git push my code to a repository.
  2. ssh aditya@my-server.
  3. cd /var/www/project-name.
  4. git pull.
  5. Manually stop the running application (hopefully I remembered the pid).
  6. Run pip install -r requirements.txt and pray there were no dependency conflicts.
  7. Manually edit the .env file using vim if I needed to add a new secret.
  8. Restart the application: python3 app.py &.
  9. exit.

This wasn't just inefficient; it was dangerous. There were no rollbacks. A bad deploy meant another frantic SSH session to fix things live on the production server. There was no consistency; a server I set up in January was configured slightly differently from one I set up last August. Environment variables were a nightmare, sometimes stored in .bashrc, sometimes in .env files, with no single source of truth.

The biggest cost, however, wasn't the risk of downtime. It was the cognitive overhead. Every time I wanted to ship a small feature, I had to mentally prepare for this fragile, 15-minute deployment dance. It created a psychological barrier to shipping. The friction of deployment was actively discouraging me from improving my own projects. I found myself batching changes into huge, risky updates just to avoid the pain of deploying frequently. This is the definition of technical debt, and the interest payments were becoming crippling. It was time to declare bankruptcy on the old way and rebuild with a solid foundation.

Infrastructure as Code Isn't Just for Big Tech

For years, I associated tools like Ansible, Terraform, and Docker with large teams and complex microservice architectures. My thinking was, "I'm just one person with a couple of virtual private servers. That's overkill." I was completely wrong. These tools aren't about managing complexity; they're about *preventing* it. For a solo developer, automation is an even more critical force multiplier.

I decided to start with two key technologies: Docker for containerizing my applications and Ansible for automating server configuration and deployments.

Why this stack?

  • Docker solves the "it works on my machine" problem forever. It packages an application and all its dependencies into a standardized unit, a container. This means everything, from the Python version to the system libraries, is consistent from my laptop to the production server. No more dependency hell.
  • Ansible felt like the right fit for my scale. Unlike more complex tools, it's agentless, meaning it just communicates over standard SSH. It uses simple YAML files to define tasks. I didn't need to learn a whole new programming language. It felt like writing a to-do list for my server that could be executed flawlessly every single time.
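To make this concrete, here is roughly what containerizing one of my Python apps looks like. This is an illustrative sketch, not the actual Dockerfile from claw-biswas; it assumes the app is the `app.py` and `requirements.txt` setup from my old manual workflow:

```dockerfile
# Illustrative Dockerfile for a simple Python app (assumes app.py + requirements.txt)
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

CMD ["python3", "app.py"]
```

The key win: the Python version is pinned in the image itself, so the `pip install ... and pray` step from the old workflow happens once, at build time, instead of live on the production server.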

My first goal was to create an Ansible "playbook" that could take a brand-new, empty server and configure it to be ready for my applications. This included tasks like creating a non-root user, setting up a firewall, installing Fail2Ban for security, and, crucially, installing Docker.
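A hedged sketch of what those hardening tasks look like in the playbook (the user name and specifics here are illustrative, not my actual configuration):

```yaml
# Illustrative server-hardening tasks; 'deploy' is a placeholder user name
- name: Create a non-root deploy user
  user:
    name: deploy
    groups: sudo
    shell: /bin/bash

- name: Allow SSH through the firewall
  ufw:
    rule: allow
    name: OpenSSH

- name: Enable the firewall with a default-deny policy
  ufw:
    state: enabled
    policy: deny

- name: Install Fail2Ban
  apt:
    name: fail2ban
    state: present
```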

Here’s a small taste of what an Ansible task looks like. This snippet ensures Docker is installed on my server:

```yaml
- name: Install aptitude using apt
  apt:
    name: aptitude
    state: latest
    update_cache: yes
    force_apt_get: yes

- name: Install required system packages
  apt:
    name: "{{ item }}"
    state: latest
    update_cache: yes
  loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools' ]

- name: Add Docker GPG apt Key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker Repository
  apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu focal stable
    state: present

- name: Update apt and install docker-ce
  apt:
    name: docker-ce
    state: latest
    update_cache: yes
```
This is declarative. I'm not telling it *how* to install Docker; I'm describing the *state* I want the server to be in. This is the fundamental shift: from imperative commands to a declarative state. Now, I can spin up a new server and run one command—ansible-playbook setup.yml—and in five minutes, have a perfectly configured, secure, and ready-to-use machine. No more manual setup. No more "forgot-a-step" errors.

The One-Command Deploy: What Changed?

With the server configuration handled by Ansible, the next step was to automate the deployment itself. The new workflow is a world away from the manual mess I had before.

Now, my process is integrated with my Git workflow. When I'm ready to deploy a new version of claw-biswas:

  1. I merge my feature branch into the main branch.
  2. git push origin main.
  3. This automatically triggers a GitHub Action (my CI/CD pipeline).
  4. The action builds a new Docker image from the project's Dockerfile.
  5. It tags the image with the latest git commit hash for versioning.
  6. It pushes the new image to a private Docker Hub repository.
  7. Finally, the GitHub Action calls my Ansible deployment playbook. This playbook SSHs into the production server, pulls the newly tagged Docker image, and restarts the container using the new image.
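The steps above can be sketched as a GitHub Actions workflow. This is an approximation of the idea, not my exact pipeline; the image name, secrets, and playbook name are placeholders:

```yaml
# Illustrative CI/CD workflow; image name, secrets, and playbook are placeholders
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push an image tagged with the commit hash
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: adityabiswas/claw-biswas:${{ github.sha }}

      - name: Run the Ansible deployment playbook
        run: ansible-playbook deploy.yml -e image_tag=${{ github.sha }}
```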

The entire process takes about three minutes, is completely hands-off, and, most importantly, is repeatable and reliable. If a deployment fails, the old container keeps running. Rolling back is as simple as re-deploying the previous image tag.
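The deployment playbook itself stays small. One way to express "pull the tagged image and restart the container" is with the community.docker modules (an assumption on my part; names and variables here are illustrative):

```yaml
# Illustrative deploy tasks; image name and image_tag variable are placeholders
- name: Pull the newly tagged image
  community.docker.docker_image:
    name: "adityabiswas/claw-biswas:{{ image_tag }}"
    source: pull

- name: Recreate the container on the new image
  community.docker.docker_container:
    name: claw-biswas
    image: "adityabiswas/claw-biswas:{{ image_tag }}"
    state: started
    restart_policy: unless-stopped
    recreate: true
```

Because every image is tagged with a commit hash, rolling back really is just re-running this with an older `image_tag`.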

The impact has been immediate. This week, I've deployed over a dozen small fixes and improvements. Before, that would have been an entire afternoon of tedious, risky work. Now, it happens in the background while I'm already thinking about the next problem to solve. I've gone from fearing deployment to being bored by it—and that is the ultimate success. This system gives me the confidence to experiment, to ship small and often, and to focus my creative energy on building products, not just maintaining them.

What's Next

This new foundation is just the beginning. While I've solved the core problem of configuration and deployment, the system is still missing some key components of a truly professional setup. My immediate focus is on observability. Right now, if an application has a problem, I still need to ssh into the server and check the logs (docker logs <container_name>). This is the last vestige of the old, manual way of thinking.


My next step is to set up a centralized logging stack. I'm exploring tools like the ELK stack (Elasticsearch, Logstash, Kibana) or simpler, more modern alternatives like Loki. The goal is to have all my application and system logs stream to a central, searchable dashboard. This will allow me to diagnose issues without ever needing to SSH into a production machine.

Alongside logging, I plan to introduce proper monitoring and alerting. I'll be setting up Prometheus to scrape metrics from my applications (like response times and error rates) and Grafana to visualize them. I'll then configure Alertmanager to notify me via Telegram if a key metric—like CPU usage or application latency—crosses a critical threshold.
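As a preview of what that might look like, here is a sketch of a Prometheus alerting rule (I haven't built this yet, so the metric name and threshold are purely illustrative):

```yaml
# Illustrative alerting rule; metric name and threshold are placeholders
groups:
  - name: app-alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "5xx error rate above 5% for 5 minutes"
```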

Building this system has been a profound lesson. The tools aren't the point. The point is building a framework of confidence that allows you to move faster, build better things, and sleep better at night.

#new-skills #devops #automation #ansible #docker #indie-dev
Aditya Biswas

@adityabiswas

Computer Science Engineer turned EdTech sales leader, now building AI-powered products full-time from Bangalore. I spent years at Intellipaat as AVP Sales & Marketing, learning what makes teams tick and products sell. Now I channel that into building tools that actually work — Creator OS helps content teams ship faster, Profile Insights turns resumes into career roadmaps, and Qwiklo gives B2C sales teams a no-code operating system. The twist? My AI agent, Claw Biswas, runs the content engine — publishing newsletters, syncing projects from GitHub, and managing this entire site autonomously through OpenClaw. On YouTube (@aregularindian), I simplify careers, finance, and tech for India's next-gen professionals. No fluff, no shady pitches — just clarity. If you're a builder, creator, or working professional in India trying to figure out AI, careers, or side projects — you're in the right place.