The microservices hype has led many teams to split their monoliths prematurely. Before reaching for the saw, you need to understand when decomposition is worth the cost and how to do it without bringing your system to its knees.

When to Stay Monolithic

A well-structured monolith is not a failure. It’s often the right choice:

  • Small teams (< 10 engineers) — coordination overhead of microservices exceeds the benefits
  • Early-stage products — you don’t yet know where the boundaries should be
  • Low traffic — you don’t need independent scaling
  • Unclear domain — splitting too early locks you into wrong boundaries

Martin Fowler calls this the Monolith First approach: start with a monolith, understand your domain, then extract services when the pain justifies it.

Signs You Need to Split

Real signals that decomposition would help:

  1. Deploy coupling — a one-line change in billing requires deploying the entire system, including unrelated user management code.
  2. Team conflicts — multiple teams stepping on each other’s code daily, merge conflicts everywhere.
  3. Scaling mismatch — your search feature needs 20 instances but your admin panel needs 1.
  4. Technology lock-in — one module would benefit from Python ML libraries, but the monolith is all Java.
  5. Blast radius — a bug in reporting crashes the entire checkout flow.

Finding Service Boundaries

The hardest part isn’t the technical migration — it’s deciding where to cut. Domain-Driven Design (DDD) gives you the best tools for this.

Bounded Contexts

A bounded context is a boundary within which a particular domain model applies. Inside that boundary, terms have precise meaning. Across boundaries, the same word might mean different things.

┌──────────────────┐     ┌───────────────────┐
│   ORDERING       │     │   SHIPPING        │
│                  │     │                   │
│  Order = items   │     │  Order = packages │
│  + prices +      │     │  + addresses +    │
│  discounts       │     │  tracking numbers │
│                  │     │                   │
│  Customer =      │     │  Customer =       │
│  name + email +  │     │  address +        │
│  payment info    │     │  delivery prefs   │
└──────────────────┘     └───────────────────┘

Each bounded context is a candidate microservice.
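
To make this concrete, here's a minimal sketch (the field names are illustrative, not from any real codebase) of how the same word gets a different model in each context:

```typescript
// Ordering context: an "Order" is about money
namespace Ordering {
  export interface Order {
    items: { sku: string; quantity: number; unitPrice: number }[];
    discount: number;
  }

  export function total(order: Order): number {
    const subtotal = order.items.reduce(
      (sum, item) => sum + item.quantity * item.unitPrice, 0);
    return subtotal - order.discount;
  }
}

// Shipping context: an "Order" is about logistics —
// no prices anywhere, and that's the point
namespace Shipping {
  export interface Order {
    packages: { trackingNumber: string; weightKg: number }[];
    address: string;
  }
}
```

Because the two models share nothing, each can evolve independently — which is exactly the property you want from a service boundary.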

Event Storming

Event Storming is a workshop technique to discover bounded contexts:

  1. Gather domain experts and developers in a room with a long wall
  2. Write domain events on orange sticky notes (“Order Placed”, “Payment Received”, “Item Shipped”)
  3. Arrange chronologically left to right
  4. Identify commands that trigger events (blue stickies)
  5. Group events into clusters — each cluster often maps to a bounded context
  6. Draw boundaries around clusters

This gives you service candidates grounded in actual business processes, not technical layers.
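
One way to capture the workshop's output in code (event names follow the examples above; the payload fields are assumptions) is a discriminated union per cluster — each union is a candidate service's vocabulary:

```typescript
// Events that clustered around ordering
type OrderingEvent =
  | { type: 'OrderPlaced'; orderId: string }
  | { type: 'PaymentReceived'; orderId: string; amount: number };

// Events that clustered around shipping
type ShippingEvent =
  | { type: 'ItemShipped'; orderId: string; trackingNumber: string };

type DomainEvent = OrderingEvent | ShippingEvent;

// The chronological timeline from the wall, left to right
const timeline: DomainEvent[] = [
  { type: 'OrderPlaced', orderId: '42' },
  { type: 'PaymentReceived', orderId: '42', amount: 99.5 },
  { type: 'ItemShipped', orderId: '42', trackingNumber: 'TRK-1' },
];
```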

Migration Strategies

The Strangler Fig Pattern

The safest approach. Named after strangler fig trees that grow around a host tree, eventually replacing it.

// Step 1: Route traffic through a facade
class OrderFacade {
  constructor(
    private legacyOrderService: LegacyMonolith,
    private newOrderService: OrderMicroservice,
    private featureFlags: FeatureFlags,
  ) {}

  async createOrder(data: CreateOrderDTO): Promise<Order> {
    if (this.featureFlags.isEnabled('new-order-service')) {
      return this.newOrderService.create(data);
    }
    return this.legacyOrderService.createOrder(data);
  }
}
Phase 1: All traffic → Monolith
Phase 2: New feature → New service, existing → Monolith  
Phase 3: Migrated features → New service, remaining → Monolith
Phase 4: All traffic → New services, Monolith decommissioned
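
In practice the flag driving these phases is rarely all-or-nothing. A common sketch is a deterministic percentage rollout, so a given user always takes the same path (the hash and the `rolloutPercent` knob are illustrative, not a real feature-flag API):

```typescript
// Route a configurable percentage of traffic to the new service.
// Hashing the user ID makes the decision sticky per user.
function routeToNewService(userId: string, rolloutPercent: number): boolean {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit string hash
  }
  return hash % 100 < rolloutPercent;
}
```

At 0% you're in Phase 1; ramping to 100% walks you through to Phase 4 without a flag-day cutover.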

Branch by Abstraction

When you need to replace a component inside the monolith before extracting it:

// 1. Create an abstraction
interface NotificationSender {
  send(userId: string, message: string): Promise<void>;
}

// 2. Wrap the old implementation
class LegacyEmailSender implements NotificationSender {
  constructor(private legacyMailer: LegacyMailer) {}

  async send(userId: string, message: string): Promise<void> {
    // Delegates to the old monolith code
    await this.legacyMailer.sendEmail(userId, message);
  }
}

// 3. Build the new implementation
class NotificationServiceClient implements NotificationSender {
  constructor(private httpClient: HttpClient) {}

  async send(userId: string, message: string): Promise<void> {
    await this.httpClient.post('http://notification-service/send', {
      userId,
      message,
    });
  }
}

// 4. Switch via configuration
const sender: NotificationSender = config.useNewNotificationService
  ? new NotificationServiceClient(httpClient)
  : new LegacyEmailSender();

Database Decomposition

The database is usually the hardest part. Don’t try to split it all at once.

Phase 1: Logical separation — separate schemas within the same database:

-- Before: shared tables everywhere
SELECT * FROM users JOIN orders ON users.id = orders.user_id;

-- After: each service owns its schema
-- ordering_schema.orders
-- user_schema.users
-- No cross-schema joins allowed!
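
With joins banned, code that used to JOIN users now asks the owning service (or a local, event-synchronized copy) for the rows it needs. A sketch, with illustrative types and an injected fetcher standing in for the user service client:

```typescript
interface OrderRow { id: string; userId: string; total: number }
interface UserSummary { id: string; name: string }

// Replace `JOIN users` with one batched lookup against the user service,
// then stitch the results together in memory.
async function ordersWithUserNames(
  orders: OrderRow[],
  fetchUsers: (ids: string[]) => Promise<UserSummary[]>,
): Promise<Array<OrderRow & { userName: string }>> {
  const ids = [...new Set(orders.map(o => o.userId))]; // de-duplicate IDs
  const users = await fetchUsers(ids);
  const byId = new Map(users.map(u => [u.id, u.name]));
  return orders.map(o => ({ ...o, userName: byId.get(o.userId) ?? 'unknown' }));
}
```

Batching the lookup matters: one call per order row would turn every list page into an N+1 storm across the network.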

Phase 2: Read replicas — the new service reads from a replica of the monolith’s database while building its own data store.

Phase 3: Physical separation — each service gets its own database. Use events to synchronize necessary data.

// Order service publishes events
class OrderService {
  async createOrder(data: CreateOrderDTO): Promise<Order> {
    const order = await this.repository.save(new Order(data));

    // Publish event — other services react to this
    await this.eventBus.publish({
      type: 'OrderCreated',
      payload: {
        orderId: order.id,
        customerId: order.customerId,
        total: order.total,
        timestamp: new Date().toISOString(),
      },
    });

    return order;
  }
}

// Shipping service consumes events
class ShippingEventHandler {
  async handle(event: DomainEvent): Promise<void> {
    if (event.type === 'OrderCreated') {
      await this.shippingRepository.createShipment({
        orderId: event.payload.orderId,
        status: 'pending',
      });
    }
  }
}

Communication Patterns

Synchronous (HTTP/gRPC)

Good for: queries, operations that need immediate responses.

// Simple but creates coupling
const user = await fetch(`http://user-service/users/${userId}`).then(r => r.json());

Risk: If the user service is down, your order service fails too. Use circuit breakers:

import CircuitBreaker from 'opossum';

const fetchUser = (userId: string) =>
  fetch(`http://user-service/users/${userId}`).then(r => r.json());

const breaker = new CircuitBreaker(fetchUser, {
  timeout: 3000,                // give up on a call after 3s
  errorThresholdPercentage: 50, // open the circuit at 50% failures
  resetTimeout: 30000,          // probe the service again after 30s
});

// The fallback receives the same arguments as the wrapped function
breaker.fallback((userId: string) => ({ id: userId, name: 'Unknown User' }));

const user = await breaker.fire('user-123');

Asynchronous (Events/Messages)

Good for: commands, operations that can be eventually consistent.

// Producer
await rabbitMQ.publish('orders', 'order.created', {
  orderId: '123',
  items: [...],
  timestamp: Date.now(),
});

// Consumer
await rabbitMQ.subscribe('shipping', 'order.created', async (msg) => {
  await shippingService.prepareShipment(msg.orderId);
});

Prefer async communication when possible — it reduces coupling and improves resilience.

Common Mistakes

1. Distributed Monolith

If every service call requires synchronous calls to five other services, you’ve built a distributed monolith — all the complexity of microservices with none of the benefits.

2. Shared Database

Two services reading from the same table defeats the purpose. Each service owns its data.

3. Splitting Too Fine

A service that does one tiny thing and always needs to call another service isn’t a microservice — it’s a function that should live inside the other service.

4. Ignoring Data Consistency

In a monolith, you have transactions. In microservices, you need the Saga pattern for operations spanning multiple services:

CreateOrder → ReserveInventory → ProcessPayment → ConfirmOrder
     ↓              ↓                 ↓
  (compensate)  (compensate)     (compensate)
  CancelOrder   ReleaseStock     RefundPayment

Each step can fail, and you need compensating actions to undo previous steps.
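
A minimal orchestration sketch of the flow above — the step and compensation bodies are placeholders, and a production saga would also persist its progress so it can resume after a crash:

```typescript
interface SagaStep {
  name: string;
  run: () => Promise<void>;
  compensate: () => Promise<void>;
}

// Run steps in order; if one fails, run the compensations of the
// already-completed steps in reverse order.
async function runSaga(steps: SagaStep[]): Promise<boolean> {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.run();
      done.push(step);
    } catch {
      for (const completed of done.reverse()) {
        await completed.compensate();
      }
      return false; // saga rolled back
    }
  }
  return true; // saga committed
}
```

Note the reverse order: if ProcessPayment fails, you release stock before cancelling the order, mirroring the diagram's compensation arrows.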

A Pragmatic Approach

  1. Start with the monolith — understand your domain first
  2. Modularize the monolith — separate bounded contexts inside the monolith with clear interfaces
  3. Extract one service — pick the one with the clearest boundary and most independent data
  4. Learn from it — observe the operational overhead (logging, tracing, deployment)
  5. Decide if it’s worth it — then proceed or stop
  6. Repeat incrementally — never do a “big bang” migration
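
Step 2 can be as simple as enforcing an interface at each context boundary while everything still runs in one process. A sketch (the billing module and its methods are illustrative):

```typescript
// The only thing other modules may depend on: the billing interface.
interface BillingApi {
  charge(customerId: string, amountCents: number): Promise<string>;
}

// Internal implementation, free to change — or to be extracted later
// behind an HTTP client that implements the same interface.
class BillingModule implements BillingApi {
  async charge(customerId: string, amountCents: number): Promise<string> {
    // illustrative: would call the payment gateway here
    return `invoice-${customerId}-${amountCents}`;
  }
}

const billing: BillingApi = new BillingModule();
```

When extraction time comes, the callers don't change — only the object behind `billing` does, which is exactly the Branch by Abstraction move from earlier.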

The goal is not microservices. The goal is a system architecture that supports your team’s ability to deliver value. Sometimes that’s microservices. Often, it’s a well-structured monolith.