# Cyclomatic Complexity: Measuring Code Health
Learn what cyclomatic complexity is, why it matters, and practical techniques to reduce it. Includes real refactoring examples and tooling recommendations.
Every developer has seen that function. The one with nested if-else chains three levels deep, a switch statement with fifteen cases, and early returns sprinkled like landmines. You know it’s bad, but how bad? And how do you explain “this code is too complex” to someone who says “it works, ship it”?
Cyclomatic complexity gives you a number. A concrete, measurable number that correlates with bug density and how hard the code is to test. Let's understand what it means and how to keep it low.
## What Is Cyclomatic Complexity?
Cyclomatic complexity counts the number of linearly independent paths through a function. Every decision point (`if`, `else if`, `for`, `while`, `case`, `catch`, and in many tools each `&&`, `||`, and `?:`) adds a path. More paths mean more branches to reason about, more test cases to write, and more places for bugs to hide.

The per-function formula is simple: complexity = number of decision points + 1. (This is the reduced form of McCabe's original graph formula, E − N + 2P, applied to a single function with one entry point.)
```typescript
// Complexity: 1 (no decisions)
function add(a: number, b: number): number {
  return a + b;
}

// Complexity: 2 (one if)
function abs(n: number): number {
  if (n < 0) return -n;
  return n;
}

// Complexity: 5 (four decision points)
function categorize(score: number): string {
  if (score >= 90) return "A"; // +1
  else if (score >= 80) return "B"; // +1
  else if (score >= 70) return "C"; // +1
  else if (score >= 60) return "D"; // +1
  return "F";
}
```
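You can get a feel for the counting rule with a toy estimator. The sketch below is my own illustration, not a real tool: it counts branching keywords in raw source text with a regex, whereas a production linter walks the AST. It will miscount keywords inside strings or comments and skips ternaries, so treat it as an approximation only.

```typescript
// Rough cyclomatic-complexity estimate from raw source text.
// Counts branching keywords and logical operators, then adds 1
// for the function's single entry path.
function estimateComplexity(source: string): number {
  const matches = source.match(/\b(if|for|while|case|catch)\b|&&|\|\|/g);
  return (matches ?? []).length + 1;
}

const absSource = `
function abs(n) {
  if (n < 0) return -n;
  return n;
}`;

console.log(estimateComplexity(absSource)); // 2, matching the abs example above
```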
## Why the Number Matters

Commonly cited risk thresholds, rooted in McCabe's original recommendation of a limit of 10:
| Complexity | Risk Level | What It Means |
|---|---|---|
| 1-5 | Low | Simple, easy to test |
| 6-10 | Moderate | Reasonable, needs attention |
| 11-20 | High | Difficult to test, bug-prone |
| 21+ | Very High | Almost untestable, refactor immediately |
A function with complexity 15 has 15 independent paths. To fully test it, you need at least 15 test cases. In practice, the interactions between paths mean you need far more. Most developers write 3-4 tests and call it done, leaving 11+ paths untested.
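To make that concrete, here is what "one test per path" looks like for the `categorize` function from earlier. Complexity 5 means five branch outcomes, so five cases at minimum (the scores below are arbitrary picks from each band):

```typescript
function categorize(score: number): string {
  if (score >= 90) return "A";
  else if (score >= 80) return "B";
  else if (score >= 70) return "C";
  else if (score >= 60) return "D";
  return "F";
}

// Complexity 5: five independent paths, so at least five test
// cases, one landing in each band.
const cases: Array<[number, string]> = [
  [95, "A"],
  [85, "B"],
  [75, "C"],
  [65, "D"],
  [40, "F"],
];

for (const [score, expected] of cases) {
  if (categorize(score) !== expected) {
    throw new Error(`categorize(${score}) should be ${expected}`);
  }
}
```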
## A Real-World Example
Here’s a function you might find in a production codebase:
```typescript
// Complexity: 12 — dangerously high
function processOrder(order: Order, user: User): OrderResult {
  if (!order) { // +1
    throw new Error("Order is required");
  }
  if (!user) { // +1
    throw new Error("User is required");
  }
  if (order.items.length === 0) { // +1
    return { status: "empty", total: 0 };
  }

  let total = 0;
  for (const item of order.items) { // +1
    if (item.quantity <= 0) { // +1
      continue;
    }
    if (item.isOnSale) { // +1
      total += item.price * item.quantity * 0.8;
    } else {
      total += item.price * item.quantity;
    }
  }

  if (user.membershipLevel === "gold") { // +1
    total *= 0.9;
  } else if (user.membershipLevel === "platinum") { // +1
    total *= 0.85;
  }

  if (total > 1000) { // +1
    total -= 50; // Bulk discount
  }

  if (order.couponCode) { // +1
    const coupon = lookupCoupon(order.couponCode);
    if (coupon && coupon.isValid) { // +1
      total *= (1 - coupon.discount);
    }
  }

  return { status: "processed", total };
}
```
This function is doing too many things: validation, price calculation, discount logic, coupon application. Let’s break it apart.
## Reducing Complexity: Extract and Simplify
### Step 1: Extract Validation

```typescript
function validateOrderInput(order: Order, user: User): void {
  if (!order) throw new Error("Order is required");
  if (!user) throw new Error("User is required");
}
// Complexity: 3
```
### Step 2: Extract Price Calculation

```typescript
function calculateItemTotal(items: readonly OrderItem[]): number {
  return items
    .filter(item => item.quantity > 0)
    .reduce((total, item) => {
      const price = item.isOnSale ? item.price * 0.8 : item.price;
      return total + price * item.quantity;
    }, 0);
}
// Complexity: 3 (counting the filter predicate and the ternary)
```
### Step 3: Extract Discount Logic

```typescript
function applyMemberDiscount(total: number, level: string): number {
  const discounts: Record<string, number> = {
    gold: 0.9,
    platinum: 0.85,
  };
  return total * (discounts[level] ?? 1);
}
// Complexity: 1 — the lookup table eliminated all branches!

function applyBulkDiscount(total: number): number {
  return total > 1000 ? total - 50 : total;
}
// Complexity: 2

function applyCoupon(total: number, couponCode?: string): number {
  if (!couponCode) return total;
  const coupon = lookupCoupon(couponCode);
  if (!coupon?.isValid) return total;
  return total * (1 - coupon.discount);
}
// Complexity: 3
```
### Step 4: Compose

```typescript
function processOrder(order: Order, user: User): OrderResult {
  validateOrderInput(order, user);

  if (order.items.length === 0) {
    return { status: "empty", total: 0 };
  }

  let total = calculateItemTotal(order.items);
  total = applyMemberDiscount(total, user.membershipLevel);
  total = applyBulkDiscount(total);
  total = applyCoupon(total, order.couponCode);

  return { status: "processed", total };
}
// Complexity: 2
```
The original function had complexity 12. Now the most complex function has complexity 3, and the orchestrator is at 2. Every function is easy to test in isolation.
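For example, the two discount helpers can each be pinned down with a couple of direct assertions, with no Order or User objects to construct (the definitions are repeated here so the snippet stands alone):

```typescript
function applyMemberDiscount(total: number, level: string): number {
  const discounts: Record<string, number> = { gold: 0.9, platinum: 0.85 };
  return total * (discounts[level] ?? 1);
}

function applyBulkDiscount(total: number): number {
  return total > 1000 ? total - 50 : total;
}

// Pure functions: every path is one assertion away. A tolerance
// check sidesteps floating-point rounding.
const approx = (a: number, b: number) => Math.abs(a - b) < 1e-9;

if (!approx(applyMemberDiscount(100, "gold"), 90)) throw new Error("gold");
if (!approx(applyMemberDiscount(100, "silver"), 100)) throw new Error("unknown level");
if (applyBulkDiscount(1200) !== 1150) throw new Error("bulk applied");
if (applyBulkDiscount(500) !== 500) throw new Error("bulk skipped");
```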
## Techniques for Reducing Complexity

### Replace Conditionals with Lookup Tables

```typescript
// Before: complexity 5
function getStatusLabel(status: string): string {
  if (status === "pending") return "Pending Review";
  if (status === "approved") return "Approved";
  if (status === "rejected") return "Rejected";
  if (status === "archived") return "Archived";
  return "Unknown";
}
```

```typescript
// After: complexity 1
const STATUS_LABELS: Record<string, string> = {
  pending: "Pending Review",
  approved: "Approved",
  rejected: "Rejected",
  archived: "Archived",
};

function getStatusLabel(status: string): string {
  return STATUS_LABELS[status] ?? "Unknown";
}
```
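If the valid statuses are known at compile time, typing the table's keys tightens this further. A sketch (the `Status` union is my addition, not from the original):

```typescript
type Status = "pending" | "approved" | "rejected" | "archived";

// Record<Status, string> makes the compiler reject a missing or
// misspelled key, so the table cannot drift out of sync with the union.
const STATUS_LABELS: Record<Status, string> = {
  pending: "Pending Review",
  approved: "Approved",
  rejected: "Rejected",
  archived: "Archived",
};

function getStatusLabel(status: string): string {
  // Widen for the lookup; unknown strings fall through to the default.
  const labels: Record<string, string | undefined> = STATUS_LABELS;
  return labels[status] ?? "Unknown";
}

console.log(getStatusLabel("approved")); // "Approved"
```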
### Replace Nested Conditions with Guard Clauses

```typescript
// Before: complexity 4, deeply nested
function getDiscount(user: User): number {
  if (user.isActive) {
    if (user.yearsAsMember > 5) {
      if (user.totalSpent > 10000) {
        return 0.2;
      }
      return 0.1;
    }
    return 0.05;
  }
  return 0;
}
```

```typescript
// After: same complexity, but flat and readable
function getDiscount(user: User): number {
  if (!user.isActive) return 0;
  if (user.yearsAsMember <= 5) return 0.05;
  if (user.totalSpent <= 10000) return 0.1;
  return 0.2;
}
```
### Replace Type Checks with Polymorphism

```python
import math

# Before: complexity grows with every new shape
def calculate_area(shape: dict) -> float:
    if shape["type"] == "circle":
        return math.pi * shape["radius"] ** 2
    elif shape["type"] == "rectangle":
        return shape["width"] * shape["height"]
    elif shape["type"] == "triangle":
        return 0.5 * shape["base"] * shape["height"]
    raise ValueError(f"Unknown shape: {shape['type']}")
```

```python
import math

# After: complexity 1 per method (each class knows its own area)
class Circle:
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2


class Rectangle:
    def __init__(self, width: float, height: float):
        self.width = width
        self.height = height

    def area(self) -> float:
        return self.width * self.height


class Triangle:
    def __init__(self, base: float, height: float):
        self.base = base
        self.height = height

    def area(self) -> float:
        return 0.5 * self.base * self.height
```
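The same refactor reads naturally in TypeScript, the language used elsewhere in this article. A sketch with an interface (the names here are illustrative):

```typescript
interface Shape {
  area(): number;
}

class Circle implements Shape {
  radius: number;
  constructor(radius: number) {
    this.radius = radius;
  }
  area(): number {
    return Math.PI * this.radius ** 2;
  }
}

class Rectangle implements Shape {
  width: number;
  height: number;
  constructor(width: number, height: number) {
    this.width = width;
    this.height = height;
  }
  area(): number {
    return this.width * this.height;
  }
}

// Callers dispatch through the interface, so adding a new shape
// never touches existing code; each area() stays at complexity 1.
const shapes: Shape[] = [new Circle(1), new Rectangle(2, 3)];
const total = shapes.reduce((sum, s) => sum + s.area(), 0);
console.log(total.toFixed(2)); // "9.14" (pi + 6)
```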
## Tooling: Measure It Automatically

### ESLint (TypeScript/JavaScript)

```json
{
  "rules": {
    "complexity": ["warn", { "max": 10 }]
  }
}
```
ESLint will warn you when any function exceeds complexity 10.
### Radon (Python)

```bash
pip install radon
radon cc your_module.py -s -a

# Output:
# your_module.py
#     F 12:0 process_order - C (12)  ← "C" grade, complexity 12
#     F 45:0 validate_input - A (2)  ← "A" grade, complexity 2
# Average complexity: B (7.0)
```
### SonarQube / SonarCloud
Set quality gates that block PRs when cognitive complexity exceeds thresholds. This is the most effective approach — make it impossible to merge overly complex code.
## Cyclomatic vs. Cognitive Complexity
Cyclomatic complexity counts paths. Cognitive complexity measures how hard code is for a human to understand. They usually correlate, but not always:
```typescript
// Cyclomatic: 4, Cognitive: 3 (easy to read)
function classify(value: number): string {
  if (value < 0) return "negative";
  if (value === 0) return "zero";
  if (value < 100) return "small";
  return "large";
}
```

```typescript
// Cyclomatic: 4, Cognitive: 9 (hard to read — nesting drives the cost up)
function classify(value: number): string {
  let result = "unknown";
  if (value >= 0) {
    if (value === 0) {
      result = "zero";
    } else {
      if (value < 100) {
        result = "small";
      } else {
        result = "large";
      }
    }
  } else {
    result = "negative";
  }
  return result;
}
```
Both have the same cyclomatic complexity, but the second version is significantly harder to reason about because of nesting. Modern tools like SonarQube measure cognitive complexity too — use both metrics.
## A Pragmatic Target
- Keep most functions under complexity 5
- Allow up to 10 for orchestrator functions
- Treat anything over 15 as a bug (it’s almost certainly undertested)
- Use tooling to enforce limits in CI
Complexity isn’t just a number — it’s a proxy for how likely that function is to harbor bugs. Measure it, reduce it, and automate the enforcement. Your future self will thank you when the 3 AM production incident doesn’t happen because the function was simple enough to get right.