The Modular Monolith: A Practical Middle Ground

Why jumping straight to microservices might be your most expensive architectural mistake — and how modular monoliths give you the best of both worlds.

#Architecture #JavaScript #Node.js

There's this weird cultural pressure in our industry where if you're not running microservices, you're somehow doing it wrong. I've watched teams break a perfectly functional application into 15 services, add a message broker, a service mesh, and distributed tracing — then spend the next year debugging issues that simply didn't exist before.

Here's the thing nobody wants to admit: most applications don't need microservices. What they need is structure. And a modular monolith gives you exactly that without the operational tax.

What Even Is a Modular Monolith?

A modular monolith is a single deployable application where the code is organized into well-defined, loosely coupled modules — each with clear boundaries, its own internal logic, and explicit interfaces for communication with other modules.

It's not a "big ball of mud" monolith where everything depends on everything. It's not microservices either. It's the middle ground that experienced architects have been quietly using for years while Twitter argues about service meshes.

The key properties:

  • Single deployment unit — one build, one deploy, one runtime
  • Strong module boundaries — modules can't reach into each other's internals
  • Explicit contracts — modules communicate through defined interfaces
  • Independent data ownership — each module owns its data, even if they share a database
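The last two properties can be sketched in a few lines of TypeScript. This is a minimal, illustrative example (the inventory names are mine, not from the example app below): the module exports a contract and an object satisfying it, while its state stays private.

```typescript
// A module boundary is an exported contract plus a hidden implementation.
export interface InventoryApi {
  // Try to reserve `qty` units of a SKU; returns false if stock is short.
  reserve(sku: string, qty: number): boolean;
}

// Internal state: never leaves the module.
const stock = new Map<string, number>([['sku-1', 10]]);

// The only thing other modules may touch.
export const inventoryModule: InventoryApi = {
  reserve(sku, qty) {
    const available = stock.get(sku) ?? 0;
    if (available < qty) return false;
    stock.set(sku, available - qty);
    return true;
  },
};
```

Callers see `reserve()`; they cannot see or mutate `stock`. That's the whole trick, scaled up.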

Why Not Just Go Straight to Microservices?

Because distributed systems are genuinely hard, and you're trading one set of problems for a much larger, more expensive set.

With microservices you inherit:

  • Network latency between every service call
  • Distributed transactions (good luck)
  • Service discovery and load balancing
  • Independent deployment pipelines for each service
  • Monitoring and tracing across service boundaries
  • Data consistency headaches that keep you up at night

A modular monolith gives you the organizational benefits — clear ownership, separated concerns, independent development — without any of that operational complexity.

And here's the part people miss: a well-structured modular monolith is far easier to split into microservices later than a tangled monolith. You're not closing any doors. You're buying yourself time to learn where the real boundaries are.

What This Looks Like in Practice

Let's say you're building an e-commerce platform. Instead of one flat directory of controllers, services, and models, you organize by domain:

src/
├── modules/
│   ├── catalog/
│   │   ├── catalog.module.ts
│   │   ├── catalog.service.ts
│   │   ├── catalog.repository.ts
│   │   └── catalog.types.ts
│   ├── orders/
│   │   ├── orders.module.ts
│   │   ├── orders.service.ts
│   │   ├── orders.repository.ts
│   │   └── orders.types.ts
│   ├── payments/
│   │   ├── payments.module.ts
│   │   ├── payments.service.ts
│   │   ├── payments.repository.ts
│   │   └── payments.types.ts
│   └── users/
│       ├── users.module.ts
│       ├── users.service.ts
│       ├── users.repository.ts
│       └── users.types.ts
├── shared/
│   ├── events.ts
│   └── types.ts
└── app.ts

Each module exposes a public API — a set of functions or a class — and nothing else leaks out.

Enforcing Module Boundaries

Boundaries that aren't enforced don't exist. Here's a simple pattern: each module exports exactly what it wants to expose.

// modules/catalog/catalog.module.ts

import { CatalogService } from './catalog.service';
import { CatalogRepository } from './catalog.repository';
import type { ProductFilters } from './catalog.types';

const repository = new CatalogRepository();
const service = new CatalogService(repository);

// This is the public API. Nothing else leaves this module.
export const catalogModule = {
  getProduct: (id: string) => service.getProductById(id),
  listProducts: (filters: ProductFilters) => service.listProducts(filters),
  onProductUpdated: service.productUpdatedEvent,
};

export type { Product, ProductFilters } from './catalog.types';

Now the orders module can use catalogModule.getProduct() but can never import CatalogRepository directly. The boundary is explicit.

You can take this further with tooling. In a TypeScript project, you can use ESLint rules to prevent cross-module internal imports:

// eslint.config.js
export default [
  {
    rules: {
      'no-restricted-imports': ['error', {
        patterns: [
          {
            // Catches `../<module>/<internal-file>` imports from sibling
            // modules; adjust these globs if you use path aliases like `@modules/...`.
            group: ['**/modules/*/!(*.module|*.types)', '../*/!(*.module|*.types)'],
            message: 'Import from the module file, not internal files.',
          },
          },
        ],
      }],
    },
  },
];

Now you'll get a linting error if anyone tries to bypass the module boundary.

Communication Between Modules

Modules need to talk to each other. You have two choices: direct calls or events. Use both — they serve different purposes.

Direct calls for synchronous queries where you need a response:

// modules/orders/orders.service.ts

import { catalogModule } from '../catalog/catalog.module';
import { OrdersRepository } from './orders.repository';

export class OrdersService {
  constructor(private readonly repository: OrdersRepository) {}

  async createOrder(userId: string, productId: string, quantity: number) {
    const product = await catalogModule.getProduct(productId);

    if (!product) {
      throw new Error(`Product ${productId} not found`);
    }

    return this.repository.create({
      userId,
      productId,
      quantity,
      totalPrice: product.price * quantity,
    });
  }
}

Events for when a module needs to notify others without caring who's listening:

// shared/events.ts

type EventHandler<T> = (payload: T) => void | Promise<void>;

export class EventBus {
  private handlers = new Map<string, EventHandler<any>[]>();

  on<T>(event: string, handler: EventHandler<T>) {
    const existing = this.handlers.get(event) || [];
    this.handlers.set(event, [...existing, handler]);
  }

  async emit<T>(event: string, payload: T) {
    const handlers = this.handlers.get(event) || [];
    await Promise.all(handlers.map((h) => h(payload)));
  }
}

export const eventBus = new EventBus();

// modules/orders/orders.service.ts
import { eventBus } from '../../shared/events';

// After creating an order:
await eventBus.emit('order.created', { orderId, userId, productId, quantity });

// modules/payments/payments.module.ts
import { eventBus } from '../../shared/events';
import { PaymentsService } from './payments.service';

const service = new PaymentsService();

eventBus.on('order.created', async (order) => {
  await service.initiatePayment(order);
});

The orders module doesn't know or care that payments is listening. That's clean separation. And when you eventually need to extract payments into its own service, you swap the in-process event bus for a real message broker — the module's internal code doesn't change.
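One way to keep that swap cheap is to have modules depend on a bus interface rather than the concrete class. A sketch under assumed names — `MessageBus`, `InProcessBus`, and `BrokerBus` are illustrative, and the broker client is a stub:

```typescript
type EventHandler<T> = (payload: T) => void | Promise<void>;

// Modules program against this, never against a concrete bus.
export interface MessageBus {
  on<T>(event: string, handler: EventHandler<T>): void;
  emit<T>(event: string, payload: T): Promise<void>;
}

// Today: the in-process implementation, equivalent to the EventBus above.
export class InProcessBus implements MessageBus {
  private handlers = new Map<string, EventHandler<any>[]>();

  on<T>(event: string, handler: EventHandler<T>) {
    this.handlers.set(event, [...(this.handlers.get(event) ?? []), handler]);
  }

  async emit<T>(event: string, payload: T) {
    await Promise.all((this.handlers.get(event) ?? []).map((h) => h(payload)));
  }
}

// Later: a broker-backed implementation satisfies the same interface, so
// module code importing MessageBus never changes when a module moves out
// of process. The publish function stands in for a real broker client.
export class BrokerBus implements MessageBus {
  constructor(private publish: (topic: string, msg: string) => Promise<void>) {}

  on<T>(_event: string, _handler: EventHandler<T>) {
    // Subscribe through the broker client here.
  }

  async emit<T>(event: string, payload: T) {
    await this.publish(event, JSON.stringify(payload));
  }
}
```

The orders and payments modules only ever see `MessageBus`, so the extraction decision stays a wiring change at the composition root.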

Data Ownership Without Separate Databases

You don't need separate databases to have data isolation. Use schema separation or table-level ownership rules:

// modules/catalog/catalog.repository.ts

// `db` is the app's shared query helper; its path and rows-array return
// shape are assumptions for this sketch.
import { db } from '../../shared/db';
import type { Product } from './catalog.types';

export class CatalogRepository {
  // This module owns these tables. No other module touches them.
  private readonly TABLES = {
    products: 'catalog_products',
    categories: 'catalog_categories',
  } as const;

  async getById(id: string): Promise<Product | null> {
    const [row] = await db.query(
      `SELECT * FROM ${this.TABLES.products} WHERE id = $1`,
      [id]
    );
    return row ? this.toProduct(row) : null;
  }

  private toProduct(row: Record<string, unknown>): Product {
    // Map the raw database row onto the module's Product type.
    return row as unknown as Product;
  }
}

The convention is simple: prefix tables with the module name. The catalog module owns catalog_* tables, orders owns orders_* tables. If one module needs data from another, it goes through the module's public API — never through a direct database query.

Is this enforceable through the database itself? Not really, unless you go the separate-schemas route. But conventions backed by code review and linting go a long way. Perfect enforcement isn't the goal — clear intent is.
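If you do want the database's help, the separate-schemas route in Postgres is mostly schema-qualified names plus per-module roles. A hedged sketch — the schema names, role names, and DDL below are illustrative, not a prescription:

```typescript
// One Postgres schema per module; each module's connection pool uses a role
// that can only touch its own schema, so cross-module queries fail outright.
export const setupSql = `
  CREATE SCHEMA IF NOT EXISTS catalog;
  CREATE SCHEMA IF NOT EXISTS orders;
  -- The orders role never gets access to catalog's tables:
  GRANT USAGE ON SCHEMA orders TO orders_role;
  GRANT ALL ON ALL TABLES IN SCHEMA orders TO orders_role;
`;

// Repositories then use schema-qualified names instead of prefixes:
export const PRODUCTS_TABLE = 'catalog.products';

export function getProductSql(table: string): string {
  return `SELECT * FROM ${table} WHERE id = $1`;
}
```

The repository code barely changes — `catalog_products` becomes `catalog.products` — but now the boundary is enforced by the database, not just the linter.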

When to Actually Move to Microservices

A modular monolith isn't a forever architecture for every team. Here are legitimate reasons to extract a module into a separate service:

  • Independent scaling — one module genuinely needs 10x the compute resources of the rest
  • Different runtime requirements — a module needs a GPU, a different language, or a wildly different deployment cadence
  • Team autonomy at scale — you have 50+ engineers and coordination overhead is the actual bottleneck
  • Fault isolation — one module's failures cascade and take down unrelated functionality

Notice what's not on this list: "because it's the modern way" or "because it looks good on our tech blog."

The nice thing is, if you've done the modular monolith work, extraction is straightforward. The module already has a defined interface. You put a network boundary where the function call boundary was, swap the event bus for a message queue, and you're most of the way there.
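Concretely, "putting a network boundary where the function call was" can be a single adapter that satisfies the module's existing contract. A sketch under assumed names — `CatalogApi`, the `/products` route, and the `FetchLike` shape are all illustrative:

```typescript
// A simplified version of the contract the rest of the app already depends on.
interface Product {
  id: string;
  price: number;
}

interface CatalogApi {
  getProduct(id: string): Promise<Product | null>;
}

// A minimal fetch-shaped type so the sketch doesn't depend on DOM typings;
// in real code you'd pass the global fetch here.
type FetchLike = (url: string) => Promise<{
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}>;

// After extraction, an HTTP client satisfies the same contract the in-process
// module object did, so callers like the orders module don't change.
class CatalogHttpClient implements CatalogApi {
  constructor(
    private readonly baseUrl: string,
    private readonly fetchFn: FetchLike,
  ) {}

  async getProduct(id: string): Promise<Product | null> {
    const res = await this.fetchFn(
      `${this.baseUrl}/products/${encodeURIComponent(id)}`,
    );
    if (res.status === 404) return null;
    if (!res.ok) throw new Error(`catalog service responded ${res.status}`);
    return (await res.json()) as Product;
  }
}
```

Swap this in where `catalogModule` was wired up, and the orders module keeps calling `getProduct()` exactly as before.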

What I'd Actually Recommend

Start with a modular monolith. Be disciplined about boundaries from day one. Use linting and code review to enforce them. Build an event system early — it's cheap and it pays off whether you stay monolithic or not.

Resist the urge to prematurely distribute. Every network hop you add is a new failure mode, a new latency source, and a new thing to monitor. Earn your complexity by proving you need it.

The best architecture is the simplest one that solves your actual problems. For most teams, most of the time, that's a well-structured monolith — not a constellation of services held together by YAML and hope.