You know that feeling when you’re staring at a five-line getter function, and your linter is screaming at you because coverage is at 87% instead of 95%? Yeah. That’s the moment I want to talk about.

The testing community has done an incredible job evangelizing unit tests—and for good reason. Tests catch bugs, they provide confidence, they act as safety nets. But somewhere along the way, we’ve collectively developed test-writing religiosity. The idea that every single line of code deserves a test. That code without 100% coverage is somehow morally inferior. That testing is always, unequivocally, an unquestionable good.

I’m here to say: that’s not quite right. Or rather, it’s not the whole story.
The Cult of Complete Coverage
There’s a beautiful irony happening in modern software development. We’ve become so obsessed with reducing bugs through exhaustive testing that we’ve created an entirely new problem: wasted engineering effort. Think about what happens when you mandate 100% test coverage for everything:
- Your team spends 20% of their time writing business logic
- They spend 80% of their time writing tests that validate what the business logic obviously does
- They become resentful, burned out, and ironically, less careful with the tests they do write
- Those tests become brittle, tightly coupled to implementation details, and nightmares to maintain

The worst part? These tests don’t actually catch bugs. They’re not preventing anything. They’re just… there. Like architectural ornaments.
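For concreteness, the kind of mandate I mean usually lives in a config file. Here is roughly what it looks like in Jest (the threshold numbers are illustrative):

```javascript
// jest.config.js: a coverage mandate of the kind described above.
// With this in place, the test run fails whenever coverage dips below
// the bar, regardless of whether the uncovered code warranted a test.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 95,
      functions: 95,
      lines: 95,
      statements: 95,
    },
  },
};
```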
The Categories of Code That Don’t Deserve Tests
Let me be controversial. Some code simply doesn’t warrant the testing investment. Not because it’s okay to be sloppy—but because the cost-benefit analysis doesn’t make sense.
Trivial Code (The Five-Liner Problem)
Consider this:
```typescript
function formatCurrency(amount: number): string {
  return `$${amount.toFixed(2)}`;
}
```
A test for this function would look like:
```typescript
describe('formatCurrency', () => {
  it('should format a number as currency', () => {
    expect(formatCurrency(42)).toBe('$42.00');
  });

  it('should handle decimals', () => {
    expect(formatCurrency(42.5)).toBe('$42.50');
  });

  it('should handle edge cases', () => {
    expect(formatCurrency(0)).toBe('$0.00');
    expect(formatCurrency(1000000)).toBe('$1000000.00');
  });
});
```
You’ve now written three tests to validate five lines of code. The test file is longer than the source file. And if JavaScript’s native toFixed() ever behaved differently, the test would tell you nothing new: it would fail at exactly the same moment the code does. You’ve bought testing overhead without buying testing protection.
The rule of thumb? If the code is just wrapping a well-tested standard library function, consider skipping the test. Your integration tests will catch problems anyway.
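In fact, for the formatCurrency case you can often delete the wrapper entirely and lean on the platform’s own well-tested formatter. A sketch, assuming USD and an en-US locale:

```typescript
// Intl.NumberFormat is already exhaustively tested by the JavaScript
// engine vendors; thinly wrapping it and unit-testing the wrapper
// mostly re-tests someone else's code.
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
});

console.log(usd.format(42));     // "$42.00"
console.log(usd.format(1234.5)); // "$1,234.50"
```

As a bonus, you get grouping separators and locale handling for free, which the hand-rolled version above silently gets wrong.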
Exploratory Code
You’re prototyping a new algorithm. You’re not sure if it’ll even work. You’re iterating rapidly, trying different approaches, tweaking parameters. Writing unit tests while you’re in full exploration mode is like writing extensive documentation for a rough draft. It’s premature. It’s anchoring you to implementation details you’re about to throw away. The better move? Get to a proof of concept first. Then decide if it’s worth testing.
Legacy Code (The Ancient Ruins Problem)
You inherit a codebase that’s been around since the Obama administration. It’s held together with duct tape and prayers. There are no tests. Should you write comprehensive tests for everything? No. That’s a disaster waiting to happen. Instead:
- Write tests for the new features you’re adding
- Write tests for the bugs you fix
- Leave the rest alone unless you’re actively refactoring it

Retrofitting tests onto legacy code is like trying to write an instruction manual for a car that’s already been driven for 100,000 miles. It’s expensive, and the manual won’t teach the car how to work better—it’ll just document the current mess.
UI Rendering Code
Want to know the biggest waste of developer time? Writing tests that verify that your React component renders a button when a prop is true. That your Vue template displays text when data exists. That your Angular directive adds a CSS class. These tests are almost pure friction:
```tsx
// This is a test that nobody asked for
it('should render the button when isVisible is true', () => {
  const component = shallow(<MyButton isVisible={true} />);
  expect(component.find('button').exists()).toBe(true);
});
```
Your E2E tests will catch this. Your manual testing will catch this. Your browser will catch this if you actually click it. The unit test? It’s just creating a false checkpoint that doesn’t prevent real problems (like the button being positioned off-screen, or being disabled, or having bad accessibility attributes). There’s a reason the industry is slowly moving away from heavy unit testing of UI components toward integration tests and E2E tests. Turns out, verifying that UI looks right is more valuable than verifying that it renders.
When You Should Write Tests (The Nuanced Part)
I’m not arguing against testing. I’m arguing for strategic testing. Tests are investments, and like any investment, they need to have a return. Write tests for:
Business Logic (The Crown Jewels)
Your payment processing function? Your discount calculation algorithm? Your user authentication logic? These absolutely need tests. The cost of bugs here isn’t just embarrassment—it’s financial and legal liability.
```typescript
// This deserves comprehensive testing
function calculateDiscount(
  subtotal: number,
  customerTier: 'bronze' | 'silver' | 'gold',
  isHoliday: boolean
): number {
  let discountPercent = 0;

  if (customerTier === 'gold') discountPercent += 15;
  else if (customerTier === 'silver') discountPercent += 10;
  else if (customerTier === 'bronze') discountPercent += 5;

  if (isHoliday) discountPercent += 5;

  // Cap discount at 30%
  discountPercent = Math.min(discountPercent, 30);

  return subtotal * (1 - discountPercent / 100);
}
```
This function has multiple conditional paths, edge cases, and business rules. It needs tests:
```typescript
describe('calculateDiscount', () => {
  it('should apply correct tier discounts', () => {
    expect(calculateDiscount(100, 'gold', false)).toBe(85);
    expect(calculateDiscount(100, 'silver', false)).toBe(90);
    expect(calculateDiscount(100, 'bronze', false)).toBe(95);
  });

  it('should apply holiday bonus', () => {
    expect(calculateDiscount(100, 'bronze', true)).toBe(90);
  });

  it('should stack tier and holiday discounts', () => {
    // Note: even gold + holiday only reaches 20%, so the 30% cap is
    // purely defensive; no input can trigger it under these rules.
    expect(calculateDiscount(100, 'gold', true)).toBe(80);
  });
});
```
These tests aren’t overhead—they’re guards against expensive mistakes.
Integration Points
The boundaries between your code and external systems? Those are where most bugs live. Third-party APIs, database queries, file system operations—when your code talks to the outside world, mocking those dependencies and testing behavior is genuinely valuable. You’re not just checking that your code runs—you’re verifying it handles failures gracefully, retries appropriately, and fails safely.
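As a sketch of what that looks like in practice (fetchJson and the injected doFetch are illustrative names, not a real library API): pass the boundary in as a parameter, then drive it with a fake that fails on purpose, which is exactly the scenario a real network produces and a happy-path test never exercises.

```typescript
// A retrying wrapper around an injected fetch-like function.
type FetchLike = (url: string) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

async function fetchJson(
  url: string,
  doFetch: FetchLike,
  maxAttempts = 3
): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await doFetch(url);
      if (!res.ok) throw new Error(`request to ${url} failed`);
      return await res.json();
    } catch (err) {
      lastError = err; // remember the failure and retry
    }
  }
  throw lastError; // fail safely after exhausting retries
}

// The test double: fails twice, then succeeds.
let calls = 0;
const flakyFetch: FetchLike = async () => {
  calls += 1;
  if (calls < 3) throw new Error("connection reset");
  return { ok: true, json: async () => ({ status: "ok" }) };
};

fetchJson("https://example.test/data", flakyFetch).then((body) => {
  console.log(calls, body); // 3 { status: 'ok' }
});
```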
Regression-Prone Code
You’ve got a function that’s been the source of three separate production bugs. It’s complex. People keep misunderstanding it. Now? Write tests. Not because it’s policy, but because you’ve learned it’s dangerous.
The Practical Framework: Test Triage
Here’s how I decide whether a piece of code deserves a test:
┌─ Is this business logic?
│ └─ YES: Write tests
│
├─ Does this have complex conditional logic?
│ └─ YES: Write tests
│
├─ Does this integrate with external systems?
│ └─ YES: Write tests
│
├─ Has this caused bugs before?
│ └─ YES: Write tests
│
├─ Is this mission-critical infrastructure?
│ └─ YES: Write tests
│
├─ Is this just wrapping a standard library?
│ └─ Skip it
│
├─ Is this trivial enough that the code is its own documentation?
│ └─ Skip it
│
├─ Am I still iterating and exploring?
│ └─ Skip it (write tests later)
│
└─ Does this have obvious, linear logic flow?
└─ Skip it
Don’t just follow this blindly—use it as a starting point for thinking strategically about where tests provide actual value.
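If it helps to make the triage mechanical, the same checklist can be sketched as a function (the field names are mine, not any standard taxonomy):

```typescript
interface CodeProfile {
  isBusinessLogic: boolean;
  hasComplexConditionals: boolean;
  touchesExternalSystems: boolean;
  hasCausedBugsBefore: boolean;
  isMissionCritical: boolean;
}

// Any strong positive signal wins; everything else defaults to "skip",
// which covers stdlib wrappers, exploratory spikes, and linear glue code.
function shouldWriteTests(p: CodeProfile): boolean {
  return (
    p.isBusinessLogic ||
    p.hasComplexConditionals ||
    p.touchesExternalSystems ||
    p.hasCausedBugsBefore ||
    p.isMissionCritical
  );
}

console.log(shouldWriteTests({
  isBusinessLogic: true,
  hasComplexConditionals: false,
  touchesExternalSystems: false,
  hasCausedBugsBefore: false,
  isMissionCritical: false,
})); // true
```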
The Real Cost You Need to Worry About
Here’s what the testing purists won’t tell you: bad tests are worse than no tests. A false-passing test creates false confidence. A brittle test that breaks when you refactor creates friction. Tests that are tightly coupled to implementation details become maintenance nightmares. A test suite that takes 10 minutes to run becomes a development impediment.

If you’re going to write tests, write them well. Write them to validate behavior, not implementation. Write them to catch real bugs, not to hit coverage targets. If you can’t do that, skip the test and use that time for manual testing, E2E tests, or just thinking harder about the code you’re writing.

The goal isn’t 100% coverage. The goal is confidence that your code works. Those are different things.
A Small Confession
I used to be a coverage fundamentalist. I thought 95% coverage was a moral imperative. Then I worked on a codebase where we had 97% coverage and shipped a critical bug that affected 10% of our users. You know what caught it? A customer. Not the tests. The tests were all passing because they were testing implementation details. They were checking that variables got assigned, that functions got called, that the code did what it was supposed to do. What they didn’t catch was that what it was supposed to do was wrong. That’s when I learned the difference between validated code (tests pass) and correct code (it actually solves the problem).
The Bottom Line
Not every line of code deserves a test. Some code is so trivial that testing adds nothing but overhead. Some code is so exploratory that tests just create technical debt. Some code is in domains where integration tests and manual testing provide more value than unit tests ever could.

The mature approach to testing isn’t “test everything.” It’s “test strategically.” It’s understanding that testing is a tool—a powerful tool, absolutely—but still a tool. And like any tool, using it everywhere for every job doesn’t make you more productive. It makes you slower.

Write tests for business logic. Write tests for complex algorithms. Write tests for integration points. Write tests for code that’s caused problems before. Skip tests for trivial wrappers. Skip tests for exploratory code. Skip tests for simple accessors. Skip tests for code that’s already well-covered by E2E tests.

Your future self—and your team—will thank you for the pragmatism. And your test suite will actually be something people want to maintain instead of resent. The real craft isn’t in achieving 100% coverage. It’s in knowing which pieces of code actually deserve testing, and which ones don’t. That’s the expertise that separates the developers who slow teams down from the ones who actually get things done.
