There’s a moment in every engineer’s career when microservices feel like the answer to everything. Mine was around six years ago.
Every new project, every greenfield product, same playbook: separate services, message queues, API gateways.
It felt like doing things “the right way”.
It took shipping a few products to admit something uncomfortable:
I was optimizing for scale before I had anything worth scaling.
The seduction of “doing it right”
If you’ve worked on systems at scale, microservices make intuitive sense.
You’ve seen:
- monoliths that became unmanageable
- deploy bottlenecks across teams
- one failure cascading into everything
So when you start something new, the instinct is:
“Let’s not repeat those mistakes.”
You split things early:
- auth service
- notification service
- billing service
Each with its own repo, deploy pipeline, and database.
On paper, it looks clean.
In reality, you’ve just multiplied your complexity for a product with zero users.
What happens at day zero
A pattern I’ve seen again and again when teams go microservices-first:
Week 1–2: You’re not building product. You’re building infrastructure:
- Docker setups
- service discovery
- inter-service communication
You’re solving distributed systems problems before you’ve validated the core feature.
Week 3–4: A simple change becomes a coordination exercise:
- multiple services
- versioned APIs
- cross-service data consistency
What should’ve been one migration is now a system design problem.
Week 5–8: You realize:
- your “notification service” handles almost nothing
- your “auth service” is a thin wrapper around JWT
You built for scale you don’t have, and probably never will.
Meanwhile, someone else shipped a monolith in two weeks and is already talking to users.
The monolith that works
The last products I’ve built started as modular monoliths.
Not spaghetti code. Not a big ball of mud.
A system with clear internal boundaries, without the distributed complexity.
```
src/
  modules/
    auth/
    flags/
    billing/
```
Each module:
- owns its domain
- exposes explicit interfaces
- stays isolated internally
But everything:
- deploys together
- shares one database
- communicates through function calls, not HTTP
The result: fewer moving parts and a codebase I can actually reason about.
You don’t need distributed systems to have good architecture.
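In practice, "explicit interfaces" can be as simple as each module exposing a small public surface and keeping everything else private. A minimal sketch in Python — the `billing` module, `Invoice`, and function names are illustrative, not a prescription:

```python
# billing.py -- the module's only public surface.
# Other modules call these functions directly; nothing here is an HTTP endpoint.
from dataclasses import dataclass


@dataclass(frozen=True)
class Invoice:
    customer_id: str
    amount_cents: int


def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    """Explicit entry point: a plain function call, not a network request."""
    return _persist(Invoice(customer_id, amount_cents))


def _persist(invoice: Invoice) -> Invoice:
    # Internal detail (underscore-prefixed) -- in a real system this would
    # write to the shared database. Other modules never touch it directly.
    return invoice
```

Callers in other modules import `create_invoice` and nothing else; the underscore-prefixed helpers stay internal. The boundary is the same as a service boundary — minus the network.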
Where modular monoliths fail
A monolith doesn’t stay clean by default.
I’ve seen them break down when:
- modules start reaching into each other’s internals
- the database becomes a shared dumping ground
- “quick fixes” bypass boundaries
At that point, the problem isn’t the architecture. It’s the discipline.
A bad monolith hurts. A premature microservices system hurts more, with extra pipelines to maintain while it does.
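That discipline doesn't have to live in code review alone — it can be a test. A sketch of an automated boundary check, assuming the `src/modules/` layout from earlier and an `internal` naming convention for private submodules (both are assumptions, not requirements):

```python
# A boundary check you can run in CI: flag any import that reaches
# into another module's internals.
import ast


def forbidden_imports(source: str, this_module: str) -> list[str]:
    """Return dotted import paths in `source` that cross into some
    other module's `internal` package."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for name in targets:
            parts = name.split(".")
            # e.g. "modules.billing.internal.db" imported from "auth"
            if "internal" in parts and this_module not in parts:
                bad.append(name)
    return bad
```

Run it over each module's files in a unit test and a "quick fix" that bypasses a boundary fails the build instead of quietly becoming architecture.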
When microservices do make sense
Microservices are not wrong. They’re just often early.
They start to make sense when:
- You have multiple teams that need independent deploy cycles
- A specific part of the system has distinct scaling needs
- You’ve observed real boundaries over time, not imagined them upfront
Notice what’s missing:
- “best practices”
- “future scaling”
- “this is how big companies do it”
The hidden tax of microservices
Every service adds cost:
- deploy pipelines to maintain
- monitoring and alerting to configure
- network failures (timeouts, retries, circuit breakers)
- data consistency challenges
- local development complexity
- cognitive load for every engineer
For a small team, that’s a direct tax on velocity.
And at early stages, velocity is the only advantage you have.
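To make just one line of that tax concrete: the moment a function call becomes a network call, you own code like this. A generic retry-with-backoff sketch, not tied to any particular library:

```python
# Code an in-process function call never needs: retries with
# exponential backoff and jitter around a flaky network hop.
import random
import time


def call_with_retries(fn, attempts=3, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts -- let the caller deal with it
            # back off: base_delay, 2x, 4x... plus jitter to avoid
            # every client retrying in lockstep
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

And this is the easy part — timeouts, circuit breakers, and idempotency keys come next, per service, per call site.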
The rule I follow now
Don’t design for scale. Design for change.
Scale problems are rare. Change problems are constant.
A modular monolith optimizes for change. Microservices optimize for scale. Most early-stage products need the first.
My decision framework
When starting a product:
- 0 → early traction → Modular monolith
- Multiple teams / coordination pain → Evaluate splitting
- Clear scaling bottleneck → Extract that part only
Everything else is premature optimization.
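Extracting "that part only" hurts far less when the module already sits behind an interface. A sketch of the idea — the `Notifier` protocol and both implementations are illustrative names, not from any real codebase: start in-process, and swap in a network-backed implementation only when a real bottleneck shows up. Call sites never change.

```python
from typing import Protocol


class Notifier(Protocol):
    """The interface callers depend on -- transport-agnostic."""
    def send(self, user_id: str, message: str) -> bool: ...


class InProcessNotifier:
    """Day zero: a plain function call inside the monolith."""
    def send(self, user_id: str, message: str) -> bool:
        print(f"notify {user_id}: {message}")
        return True


class HttpNotifier:
    """If extraction is ever justified: same interface, network behind it."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def send(self, user_id: str, message: str) -> bool:
        # Would POST to the extracted service here.
        raise NotImplementedError


def signup(notifier: Notifier, user_id: str) -> bool:
    # Caller depends on the interface, not the transport.
    return notifier.send(user_id, "welcome!")
```

The day you extract, you change one constructor at the composition root — not every caller.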
What I optimize for now
My default setup:
- Modular monolith with clear boundaries
- Single PostgreSQL database
- Strong internal interfaces between modules
- Extraction only when the pain is real
Because it lets me ship fast, change direction without a migration plan, and learn from actual users.
And that’s what matters early on.
I’d rather have a solid monolith with real users than a perfect microservices setup with none.