Rewriting a system from scratch is one of the most dangerous things you can do in software. It sounds appealing — “clean slate, no legacy baggage” — but it means freezing features for months or years, rebuilding things that already work, and shipping a big-bang release that breaks everything at once.

The Strangler Fig pattern, coined by Martin Fowler, offers a safer path. Named after the strangler fig tree that grows around a host tree, slowly replacing it over time, the pattern lets you incrementally migrate a legacy system by routing traffic to new services piece by piece.

The Idea

Instead of replacing the monolith in one shot, you:

  1. Put a facade (a proxy or API gateway) in front of the existing system
  2. Migrate one capability at a time to new services
  3. Route traffic for that capability to the new service
  4. Repeat until the monolith is empty and can be decommissioned

The old system stays live. Users see no disruption. Each migration is small and reversible.

Before:
[Client] ──→ [Monolith]

After introducing the facade:
[Client] ──→ [Facade/Proxy] ──→ [Monolith] (all routes)

Migrating the /orders route:
[Client] ──→ [Facade/Proxy] ──→ [Order Service] (/orders)
                             ──→ [Monolith] (everything else)

Eventually:
[Client] ──→ [Facade/Proxy] ──→ [Service A]
                             ──→ [Service B]
                             ──→ [Service C]
                             (Monolith decommissioned)

Step 1: The Facade

The facade is the most critical piece. It intercepts all traffic and routes to the right backend. In the beginning, it routes everything to the monolith.

import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// Feature flags control routing: a non-null URL means the
// capability has been migrated to a new service
const routeConfig: Record<string, string | null> = {
  '/api/orders': process.env.ORDERS_SERVICE_URL ?? null,
  '/api/payments': process.env.PAYMENTS_SERVICE_URL ?? null,
};

// Create each proxy once at startup — not per request, which would
// leak a new proxy instance on every call
const serviceProxies = Object.entries(routeConfig)
  .filter(([, url]) => url)
  .map(([path, url]) => ({
    path,
    proxy: createProxyMiddleware({ target: url as string, changeOrigin: true }),
  }));

const monolithProxy = createProxyMiddleware({
  target: process.env.MONOLITH_URL,
  changeOrigin: true,
});

app.use((req, res, next) => {
  // Match on path segments so /api/orders doesn't capture /api/orders-export
  for (const { path, proxy } of serviceProxies) {
    if (req.path === path || req.path.startsWith(path + '/')) {
      return proxy(req, res, next);
    }
  }

  // Default: proxy to the monolith
  return monolithProxy(req, res, next);
});

app.listen(3000);

Set ORDERS_SERVICE_URL when the orders service is ready. Until then, the monolith handles everything.

Step 2: Extract a Capability

Pick the smallest, most self-contained capability first. Don’t start with the most complex or most coupled part of the monolith.

Good first candidates:

  • Static content or reports
  • A standalone API endpoint with few dependencies
  • Authentication (if it’s already somewhat isolated)

Bad first candidates:

  • The core order processing flow (everything depends on it)
  • Anything that modifies shared database tables
Here’s a skeleton for the extracted orders service (orderRepository and orderService are the service’s own data layer):

// New orders service — TypeScript, clean architecture
// Talks to its own database, doesn't share tables with the monolith

import express from 'express';
import { orderRepository, orderService } from './orders'; // the service's own data layer

const app = express();
app.use(express.json()); // parse JSON bodies for the POST route

app.get('/api/orders/:id', async (req, res) => {
  const order = await orderRepository.findById(req.params.id);
  if (!order) return res.status(404).json({ error: 'Not found' });
  res.json(order);
});

app.post('/api/orders', async (req, res) => {
  const order = await orderService.create(req.body);
  res.status(201).json(order);
});

// Whatever port the facade's ORDERS_SERVICE_URL points at
app.listen(4001);

Step 3: Handle the Data Migration

The hardest part of strangling a monolith is usually the shared database. Monoliths often have one giant database with everything in it.

Strategy 1: Dual write
Write to both the monolith DB and the new service DB until migration is complete:

async function createOrder(data: OrderData): Promise<Order> {
  // Write to the new service DB first — it's the source of truth going forward
  const order = await newOrdersDb.create(data);

  // Also write to the legacy DB to keep the monolith in sync.
  // Note: if this second write fails, the two DBs diverge — dual
  // writes need retries or a reconciliation job to catch drift.
  await legacyDb.query(
    'INSERT INTO orders (id, customer_id, total) VALUES ($1, $2, $3)',
    [order.id, order.customerId, order.total]
  );

  return order;
}

Strategy 2: Change Data Capture (CDC)
Use Debezium or similar to stream changes from the monolith DB to the new service. Less code, more infrastructure.
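On the consuming side, the new service applies each change event to its own store. A minimal sketch — the event shape below follows Debezium’s convention (`op` of `c`/`u`/`d` with `before`/`after` row images, simplified here), and the in-memory Map is a stand-in for the service’s real database:

```typescript
// Simplified shape of a Debezium change event for the orders table
interface OrderRow { id: string; total: number; }
interface ChangeEvent {
  op: 'c' | 'u' | 'd';     // create, update, delete
  before: OrderRow | null; // row image before the change
  after: OrderRow | null;  // row image after the change
}

// Stand-in for the new service's own database
const ordersStore = new Map<string, OrderRow>();

// Apply one change event streamed from the monolith DB
function applyChange(event: ChangeEvent): void {
  switch (event.op) {
    case 'c':
    case 'u':
      if (event.after) ordersStore.set(event.after.id, event.after);
      break;
    case 'd':
      if (event.before) ordersStore.delete(event.before.id);
      break;
  }
}
```

The consumer stays dumb on purpose: all the capture complexity lives in the CDC pipeline, and the service only has to replay inserts, updates, and deletes in order.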

Strategy 3: Read from monolith, write to new service
During transition, new service reads from monolith DB (cross-service DB access, temporarily allowed) and writes to its own DB. Remove cross-access when migration is done.
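One way to implement this is a read-through repository: reads prefer the new store and fall back to the legacy DB on a miss, while all new writes land only in the new store. A minimal sketch with in-memory Maps standing in for the two databases (the function names here are illustrative, not from any library):

```typescript
interface Order { id: string; total: number; }

// Stand-ins for the two data stores during the transition
const newDb = new Map<string, Order>();
const legacyDb = new Map<string, Order>();

async function findOrder(id: string): Promise<Order | undefined> {
  // Prefer the new service's own store...
  const local = newDb.get(id);
  if (local) return local;
  // ...fall back to the monolith DB (temporary cross-service access)
  return legacyDb.get(id);
}

async function createOrder(order: Order): Promise<Order> {
  // All new writes land in the new service's store only
  newDb.set(order.id, order);
  return order;
}
```

Once historical rows are backfilled into the new store, the fallback branch is deleted and the cross-service DB access goes with it.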

Step 4: Canary Routing

Before fully switching over, route a small percentage of traffic to the new service:

// Create both proxies once at startup, not per request
const ordersProxy = process.env.ORDERS_SERVICE_URL
  ? createProxyMiddleware({ target: process.env.ORDERS_SERVICE_URL, changeOrigin: true })
  : null;
const monolithProxy = createProxyMiddleware({
  target: process.env.MONOLITH_URL,
  changeOrigin: true,
});

app.use('/api/orders', (req, res, next) => {
  const rollout = Number(process.env.ORDERS_SERVICE_ROLLOUT ?? '0');
  const useNewService = Math.random() * 100 < rollout;

  if (useNewService && ordersProxy) {
    return ordersProxy(req, res, next);
  }

  return monolithProxy(req, res, next);
});

Start at 5%, monitor errors and latency, ramp up to 100%. If anything goes wrong, set ORDERS_SERVICE_ROLLOUT=0.
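Purely random routing means the same user can bounce between backends on consecutive requests. A common variant makes the canary sticky by hashing a stable key (user ID, session ID) into a 0–99 bucket. A sketch using FNV-1a as an illustrative hash (any stable hash works; these function names are my own, not from a library):

```typescript
// FNV-1a hash of a string, reduced to a 0-99 bucket
function rolloutBucket(key: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash % 100;
}

// A user is routed to the new service iff their bucket is below the rollout %
function useNewService(userId: string, rolloutPercent: number): boolean {
  return rolloutBucket(userId) < rolloutPercent;
}
```

Because the bucket is deterministic, ramping from 5% to 20% only adds users — everyone already on the new service stays there, which makes error reports far easier to reason about.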

Common Pitfalls

Shared database coupling: if the new service still reads/writes monolith tables directly, you haven’t actually decoupled anything. Plan the data migration explicitly.

Migration stalls: teams migrate one service, celebrate, and stop. The monolith stays around forever with 80% of the original functionality. Define a migration completion date and enforce it.

Not having a facade from the start: if clients call the monolith directly, you can’t reroute without coordinating with every client. The facade is non-negotiable.

Migrating too many things at once: strangling works because each migration is small and reversible. Big-bang migrations inside the strangler pattern defeat the purpose.

Key Takeaways

  • The strangler fig pattern migrates legacy systems incrementally, without big-bang rewrites
  • A facade/proxy is the key infrastructure — put it in front of the monolith from day one
  • Migrate the smallest, most isolated capabilities first
  • Handle shared database coupling explicitly — it’s the hardest part
  • Use canary routing to validate each migration before full cutover
  • Set a deadline for decommissioning the monolith — stranglers that never finish are just extra complexity