“We require 80% test coverage” is one of the most repeated rules in software teams. It sounds rigorous. It’s measurable. It goes in the CI pipeline. And it provides almost no meaningful safety guarantee.

This isn’t an argument against testing. It’s an argument against treating coverage as a proxy for test quality.

What Coverage Actually Measures

Code coverage tells you which lines were executed during your tests. It does not tell you whether your tests actually verified anything useful.

function divide(a: number, b: number): number {
  if (b === 0) throw new Error('Division by zero');
  return a / b;
}

// This test achieves 100% line coverage
test('divide', () => {
  divide(10, 2); // Line executed — no assertion!
});

That test executes every line and asserts nothing. If divide returned the wrong number for every input, the test would still pass.

Coverage is a necessary but insufficient condition for good tests.

The Four Types of Coverage (and Their Limits)

  • Line coverage: was each line executed?
  • Branch coverage: was each if/else branch taken?
  • Function coverage: was each function called?
  • Statement coverage: was each statement executed?

Branch coverage is more useful than line coverage, but still doesn’t tell you if the right value was checked:

function getDiscount(user: User): number {
  if (user.isPremium) {
    return 0.2; // 20% discount
  }
  return 0;
}

test('discount', () => {
  const premium = { isPremium: true };
  const regular = { isPremium: false };
  
  getDiscount(premium);  // branch 1 covered
  getDiscount(regular);  // branch 2 covered
  // 100% branch coverage, zero assertions
});

What High Coverage Can Hide

Happy-path bias

Tests often cover the main success path and miss error cases. You can hit 80% coverage without testing a single error condition.

async function createUser(data: UserData): Promise<User> {
  const existing = await db.users.findByEmail(data.email);
  if (existing) {
    throw new DuplicateEmailError(); // rarely tested
  }
  return db.users.create(data); // always tested
}

Trivial code inflating the number

Getters, constructors, simple mappings — all easy to cover, all low-risk. If your coverage is 80% but 40% of it is trivial boilerplate, your 80% means much less than you think.

Missing integration

Coverage is typically collected from unit tests run in isolation. The wiring between a controller, a service, and a database can go completely untested even with 90% unit coverage.

What Actually Predicts Test Quality

Assertion density

Are your tests actually checking the right things?

// ❌ Covered but useless
test('creates order', async () => {
  await orderService.create({ items: [] });
});

// ✅ Covered and useful
test('creates order with correct total', async () => {
  const order = await orderService.create({
    items: [{ price: 10, quantity: 3 }],
  });

  expect(order.id).toBeDefined();
  expect(order.total).toBe(30);
  expect(order.status).toBe('PENDING');
});

Edge case coverage

Does your test suite include:

  • Empty inputs ([], '', null, undefined)
  • Boundary values (0, -1, maximum safe integers, off-by-one around limits)
  • Error conditions (network failure, DB timeout, invalid input)
  • Concurrent operations

Mutation testing

The gold standard for test quality. Mutation testing automatically introduces bugs into your code and checks whether your tests catch them.

# Using Stryker (mutation testing for JS/TS)
npx stryker run

Stryker might change > to >=, delete a return statement, or flip a boolean. For each mutation, it runs your tests. If your tests still pass with the mutation — the mutant "survives" — you have a weak test.

Mutation score is a better metric than coverage. 70% mutation score with 60% line coverage is better than 95% line coverage with 20% mutation score.

The Right Way to Use Coverage

Coverage is useful as a floor, not a ceiling. Some practical rules:

Use it to find untested code: coverage reports are excellent for spotting code paths you forgot to test. Use them as a map, not as a score.

Set thresholds per module: core business logic might warrant 90%. Infrastructure glue code might be fine at 50%. A single project-wide number hides this nuance.
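Jest, for example, supports exactly this via coverageThreshold, which accepts per-path entries alongside the global one. A sketch — the paths and numbers are illustrative:

```javascript
// jest.config.js — illustrative per-module thresholds
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 60, branches: 50 },      // floor for everything
    './src/billing/': { lines: 90, branches: 85 }, // core business logic
    './src/adapters/': { lines: 50 },         // infrastructure glue
  },
};
```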

Don’t game it: exclude generated code, config files, and framework boilerplate. Don’t write empty tests to hit the number.

Combine with other metrics. An illustrative quality gate — this JSON is a sketch, not any particular tool's config schema:

{
  "coverage": {
    "lines": 80,
    "branches": 70
  },
  "mutationScore": {
    "minimum": 60
  }
}

What a Good Test Suite Looks Like

Instead of asking “did we hit 80%?”, ask:

  • Does this test fail when the feature breaks?
  • Does it test edge cases and error paths?
  • Is it testing behavior, not implementation details?
  • Would a new team member trust this test as specification?

A test suite with 70% coverage and meaningful assertions is far safer than one with 95% coverage and hollow tests.

Key Takeaways

  • Coverage measures execution, not correctness — you can have 100% coverage with zero assertions
  • Branch coverage is better than line coverage, but still insufficient on its own
  • High coverage often masks happy-path bias and missing error tests
  • Mutation testing (Stryker) gives a more honest picture of test quality
  • Use coverage as a floor to find gaps, not as a quality metric to optimize
  • A smaller suite with strong assertions beats a large suite with weak ones