How a quick AI-assisted infrastructure cleanup turned into a leaner blog stack, better long-term economics, and a foundation that will not punish traction.
I recently launched my new blog and, in the process, spent some time cleaning up my AWS account. What started as a quick cleanup ended with my monthly bill dropping by about 60 percent and the stack underneath the site getting a lot better.
That was the real win.
The savings were nice, obviously. But what I was really after was something sturdier. Something lean when traffic is quiet, resilient when traffic spikes, and not quietly taxing me every month for capacity I may never use.
The interesting part is that this did not start as some giant infrastructure initiative. I noticed a few old services hanging around and decided to spend about twenty minutes orchestrating AI agents to inventory the account, group related services, and identify stale or outdated infrastructure. They surfaced far more waste than I expected. Old decisions. Forgotten experiments. Convenience layers that had outlived their usefulness.
And because that process was so quick and so effective, I leaned in.
I decided to optimize the hell out of the blog stack and see how cheaply, cleanly, and durably I could run it. Partly because I wanted a better setup for the site itself. Partly because I will be standing up Briefcase infrastructure soon, and I wanted to sharpen my instincts on the tradeoffs before that gets real.
There is a certain kind of startup tax nobody talks about enough. It is not legal. It is not payroll. It is not even cloud spend exactly. It is the cost of taking the easy path for too long and waking up one day to realize you have been paying a premium for convenience on something that should have been boring infrastructure.
I have made enough mistakes by now to know that those decisions add up.
This time, I wanted a setup with better economics, fewer moving parts, and a lot more room to breathe if this thing ever actually gets real attention.
Cheap was never the point
I was not trying to build the absolute cheapest setup possible.
I was trying to build something with the right tradeoffs.
As a founder, I care a lot about margin. I care about keeping fixed costs low. I care about avoiding systems that look simple up front but get expensive later. And I care a lot about not getting punished for traction.
Success should not be a liability.
This was really an exercise in architecture-margin fit: building a system whose economics still make sense if attention ever shows up all at once.
So I rebuilt around a serverless, mostly static architecture. The site is aggressively prebuilt, heavily cached, and only reaches for compute when it actually needs to. In practice, that means Astro on the frontend, CloudFront and S3 serving the static pieces, and Lambda, DynamoDB, and SES handling the dynamic parts like engagement and email.
In plain English, most of the site is just files moving quickly through the internet, and only the small interactive pieces wake up real infrastructure.
I also layered in a moderation pipeline that combines deterministic filters with LLM-based review, which helps keep engagement manageable without turning comment moderation into a part-time job.
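The two-stage idea is simple: cheap deterministic checks handle the obvious cases, and only ambiguous comments spend money on an LLM call. Here is a minimal sketch of what that first stage can look like; all the names, the blocked-term list, and the thresholds are illustrative assumptions, not my actual pipeline:

```typescript
// Hypothetical sketch of the deterministic stage of a two-stage moderation
// pipeline. Obvious spam is rejected, obviously clean comments are approved,
// and only the ambiguous middle escalates to an LLM review.

type Verdict = "approve" | "reject" | "needs_llm_review";

const BLOCKED_TERMS = ["free crypto", "viagra"]; // placeholder list
const LINK_LIMIT = 3;

function deterministicFilter(text: string): Verdict {
  const lower = text.toLowerCase();
  if (text.trim().length === 0) return "reject";
  if (BLOCKED_TERMS.some((term) => lower.includes(term))) return "reject";

  const linkCount = (text.match(/https?:\/\//g) ?? []).length;
  if (linkCount > LINK_LIMIT) return "reject";

  // Short, link-free comments are low risk: approve without spending tokens.
  if (text.length < 280 && linkCount === 0) return "approve";

  // Everything else goes to the (more expensive) LLM pass.
  return "needs_llm_review";
}
```

The point of ordering it this way is economic: the deterministic stage costs effectively nothing per comment, so the LLM only ever sees the small slice of traffic it is actually needed for.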
That changes the cost profile in a big way.
The napkin math is the fun part
Here is the part I still find kind of wild.
The rough rule of thumb for this setup looks something like this:
- Up to about 300,000 monthly visitors: basically free
- Around 1 million visitors: still well under $100
- Beyond that: roughly $150 per million visitors, give or take
That is a very different scaling curve than most founders expect.
At roughly 300,000 monthly visitors, this setup is still almost entirely covered by AWS free tiers. The only meaningful cost at that point is a little bit of email and a few pennies here and there.
At around 1 million visitors, you finally start paying real money, but it is still mostly bandwidth.
At around 3 million monthly visitors, the total bill is somewhere in the low hundreds, and nearly all of it is content delivery. Not compute. Not the database. Not the application logic. Just moving bytes.
That is the thing people often miss.
At scale, the expensive part is not usually the intelligence. It is distribution.
The API requests, writes, reads, and lightweight workflows are all incredibly cheap compared to serving large amounts of content. Which means if you build the right way early, the economics can stay surprisingly friendly much longer than people assume.
Why this setup works
The biggest advantage is that there is very little live infrastructure sitting around waiting to be used.
Most of the site ships as static HTML, CSS, JavaScript, and media through CloudFront. That means the bulk of traffic never touches a traditional application server because there really is not one in the usual sense.
The dynamic parts, like comments, reactions, or form submissions, run through Lambda and DynamoDB. Those services scale on demand, which means I am not paying for idle capacity just in case traffic shows up. I am paying when something actually happens.
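The shape of one of those dynamic pieces is worth seeing, because it explains where the pay-per-use economics come from. Below is an illustrative comment handler, not my actual code: the persistence function is injected, and in a real deployment it would wrap a DynamoDB PutItem call inside a Lambda function that only runs, and only bills, when someone actually submits something:

```typescript
// Illustrative shape of an on-demand dynamic endpoint. Nothing here sits idle:
// the handler executes only when invoked, and the injected `save` function
// stands in for a DynamoDB write (names are hypothetical).

interface CommentInput {
  postSlug: string;
  body: string;
}

interface HandlerResponse {
  statusCode: number;
  body: string;
}

type SaveFn = (item: CommentInput & { createdAt: string }) => Promise<void>;

async function handleComment(
  input: CommentInput,
  save: SaveFn
): Promise<HandlerResponse> {
  // Reject malformed requests before touching any paid service.
  if (!input.postSlug || !input.body?.trim()) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing fields" }) };
  }
  await save({ ...input, createdAt: new Date().toISOString() });
  return { statusCode: 201, body: JSON.stringify({ ok: true }) };
}
```

Because the database side is on-demand too, a quiet month and a viral month run the exact same code; only the bill changes, and it changes in proportion to real usage.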
That is a much better fit for this kind of site.
It also changes the failure mode. A more traditional stack can survive normal days just fine, but a real traffic spike turns into an operational problem. This setup is different. The static shell keeps doing its job, the dynamic pieces scale independently, and the whole thing is a lot harder to knock over.
That separation matters.
Why I did not stop at the easy option
I understand why people use platforms like Vercel, Netlify, Firebase, and the rest. They are solving for speed, convenience, and a better default developer experience. There is real value in that.
But there is also a business model attached to it.
Once traffic gets real, a lot of those platforms start to feel like retail cloud layered on top of wholesale cloud. That is not really a criticism. It is just the trade. You are paying for abstraction, polish, and convenience.
Sometimes that is exactly the right decision. Speed matters. Focus matters. Shipping matters. Prototype work is a different game than durable infrastructure.
But I have been doing this long enough to know that the easy decision up front often becomes the expensive one later. And in a world where AI can help you design, pressure test, security review, and implement infrastructure much faster than most people realize, the effort gap is shrinking fast.
In my experience, if you are properly leveraging AI, agents, and modern review loops, you can build a lot more of this yourself in a weekend than most founders still assume.
In this case, I would rather own a little more of the complexity and get better long-term economics in return.
Writing the infrastructure in AWS CDK was the right trade for me.
A little more effort up front bought me lower baseline costs, cleaner scaling, more control, and less chance of getting surprised later.
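To give a sense of scale, the static-shell-plus-on-demand-pieces shape is not much code in CDK. This is a condensed sketch of that pattern, not my actual stack; resource names and settings are illustrative:

```typescript
// Condensed CDK sketch of the architecture described in this post:
// a static shell on S3 + CloudFront, with on-demand dynamic pieces.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";

export class BlogStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Static shell: prebuilt site output lives in S3, served through CloudFront.
    const siteBucket = new s3.Bucket(this, "SiteBucket", {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    });
    new cloudfront.Distribution(this, "SiteCdn", {
      defaultBehavior: {
        origin: new origins.S3Origin(siteBucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
      },
    });

    // Dynamic pieces: pay-per-request billing keeps idle costs near zero.
    new dynamodb.Table(this, "EngagementTable", {
      partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
      sortKey: { name: "sk", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });
  }
}
```

The whole point of owning this layer is that every cost-relevant decision, from the cache policy to the billing mode, is a visible line of code rather than a platform default.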
I will make that trade every time.
This is not really about AWS. It is about optionality.
This post is not really about shaving a few dollars off an AWS bill.
It is about learning where to spend a small amount of time now so your future self has more options later.
That is part of what made the AI-assisted cleanup so compelling. The time investment was tiny. The leverage was not. A quick pass turned into clearer visibility, better judgment, and a much healthier foundation.
My own path has not been linear. Not in startups, not in life, and not in how I learned to build things. Maybe that is why I pay so much attention to anything that quietly becomes expensive later. Money, complexity, maintenance, fragility. They all compound if you ignore them.
So whether I am building a blog, a product, or a company, I keep coming back to the same basic principles:
- Keep overhead low.
- Build on solid ground.
- Do not introduce fragility you do not need.
- Spend a little time now if it saves real money later.
- And do not box yourself into a model that makes success feel more dangerous than failure.
That is the part I care about most.
Because a lot of the startup journey is just trying to survive long enough for the right things to work. And when they finally do work, you want your systems to support the moment, not turn it into a new source of stress.
Final thought
A lot of founders obsess over product-market fit. Fair enough.
I think more of us should spend time thinking about architecture-margin fit too.
Not because every company needs some elaborate cloud setup. Most do not.
But because 0 to 1 only proves you have something. 1 to 100 is where the real company gets built. And the way you build early shapes whether scale feels like momentum or a panic attack.