We’ve been here before. Twenty years ago, managers thought they’d cracked the code: count the lines of code developers write, and boom—instant productivity measurement. It was simple, objective, and completely wrong. Lines of code became the programming equivalent of paying soldiers by the bullet fired—quantity over sense. Yet here we are in 2026, making the exact same mistake with a fresh coat of paint. We’ve just swapped “lines of code” for “tickets closed,” and everyone’s acting like we invented something revolutionary. Spoiler alert: we didn’t. We’re running the same flawed playbook with new terminology.

The Familiar Pattern Nobody Wants to Admit

Think about it. The problems are virtually identical. When you measure developers by lines of code, they write unnecessary code. When you measure them by tickets closed, they close unnecessary tickets. It’s the same incentive misalignment wearing different clothes.

I’ve watched this unfold at companies large and small. A developer wants to look productive for their performance review, so they break a single coherent feature into five separate tickets, each marked “closed” by end of day. A support teammate measured on tickets closed starts deflecting every legitimate problem with a dismissive “just open another ticket with the other department.” The system rewards throughput over outcomes, and everything becomes about the count rather than what actually happened.

The uncomfortable truth? Tickets closed tells you exactly nothing about whether the code is good, whether it solves the actual problem, or whether customers care about it. A developer can close fifty tickets while building nothing of value. Conversely, someone tackling a genuinely difficult problem might close only two tickets but ship something that makes customers’ lives meaningfully better.

Why We Keep Making This Mistake

The reason is almost banal: metrics are easy. They’re clean. They fit on dashboards. They survive PowerPoint presentations to executives who need to make decisions about headcount and bonuses. They give managers the illusion of objectivity in what is inherently a subjective process—evaluating human performance. But here’s the thing that trips everyone up: just because something is measurable doesn’t mean it’s meaningful. We can measure anything. We can measure how many times developers blink at their keyboards. That doesn’t tell us who’s productive.

The seduction of ticket metrics is particularly strong because they feel connected to real work. Unlike lines of code, which everyone now recognizes as absurd, tickets seem legitimately related to shipping software. A ticket represents actual work, right? A feature request, a bug fix, a task. It’s just a step away from meaningfulness, which makes it dangerous. You almost believe it. But almost isn’t good enough when you’re making decisions about people’s careers and compensation.

The Gaming Always Starts the Same Way

Once you tie incentives to tickets closed, gaming becomes inevitable and immediate. It’s not that developers are malicious—they’re just rational actors responding to perverse incentives. If I’m being measured on closed tickets, I will optimize for closing tickets. That’s not a character flaw; that’s physics. Here’s how it typically unfolds:

Weeks 1–2: Management announces the new productivity metric. Developers think, “Okay, I’ll close more tickets.”

Weeks 3–4: The ambitious ones start breaking complex features into smaller tickets. More closed tickets means better numbers.

Month 2: Now everyone’s doing it. Work that used to be one coherent ticket is now five separate tickets. The backlog explodes, but the numbers look great.

Month 3: Someone discovers that the system can’t tell a genuinely fixed ticket from one closed prematurely. Why spend time actually solving the problem when you can close the ticket and open a new one?

Month 4: Management sees ticket velocity way up and can’t understand why customer satisfaction is down, quality metrics are tanking, and good developers are leaving.

By then, the damage is done. The metric has poisoned the feedback loop between effort and outcome. You’ve incentivized the appearance of productivity instead of actual productivity.

What Actually Matters (Spoiler: It’s More Complex)

Real developer productivity isn’t a number. It’s a system. And measuring a system requires thinking about what actually creates value.

Start here: outcomes beat outputs. How many users actually use that feature you shipped? Did it reduce support tickets or increase them? Is revenue moving? Those are outcomes. Story points completed, velocity increases, tickets closed—those are outputs. If you optimize outputs instead of outcomes, the dashboard can look beautiful while everything falls apart.

But outcomes alone aren’t enough either, because they’re lagging indicators. You need the right system metrics to diagnose what’s happening in real time:

Lead time measures how long a change takes to go from first request to production. It captures the full journey, including all the waiting before development even starts. It’s your most honest delivery metric.

Cycle time tracks active work on a change, from first commit to live in production. Shorter cycle times mean your team ships value faster and gets feedback sooner.

Deployment frequency shows how often you successfully push changes to production. Organizations like Google and GitLab track it because it correlates with responsiveness and agility.

Time to first review matters because code sitting in review limbo creates invisible delays. Track PRs that go unreviewed for more than 24 hours, and you’ll often find a process bottleneck you didn’t know existed.

Failed deployments and rework rates tell you how much of your team’s time goes to fixing things instead of building new ones. When developers spend three hours wrestling flaky tests or configuration errors, that’s productivity lost to friction.

None of these is perfect. None should be used in isolation. But together they paint a picture of how your development system actually works—not just how busy people look.
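To ground these definitions, here is a minimal sketch in Python showing how lead time, cycle time, and deployment frequency fall out of simple timestamp arithmetic. The `Change` record and its field names are illustrative assumptions, not any particular tool’s API; real data would come from your issue tracker and CI system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Change:
    # Illustrative record; map these fields from your own tracker/VCS/CI.
    ticket_opened: datetime   # work first requested (pre-dev waiting included)
    first_commit: datetime    # active development starts
    deployed: datetime        # change reaches production

def lead_time(c: Change) -> timedelta:
    """Full journey: request to production, all waiting included."""
    return c.deployed - c.ticket_opened

def cycle_time(c: Change) -> timedelta:
    """Active span: first commit to production."""
    return c.deployed - c.first_commit

def deployment_frequency(changes: list[Change], days: int) -> float:
    """Successful deployments per day over the observed window."""
    return len(changes) / days

# Made-up sample data for illustration.
changes = [
    Change(datetime(2026, 1, 5), datetime(2026, 1, 12), datetime(2026, 1, 14)),
    Change(datetime(2026, 1, 6), datetime(2026, 1, 7), datetime(2026, 1, 20)),
]

for c in changes:
    print(f"lead={lead_time(c).days}d  cycle={cycle_time(c).days}d")
print(f"deploys/day={deployment_frequency(changes, days=14):.2f}")
```

Note how the first change has a long lead time but a short cycle time: the work itself was quick, and almost all the delay was waiting before development started. That gap between the two numbers is exactly the kind of signal tickets-closed counts can never show you.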

The Architecture of Better Metrics

Here’s how this actually looks when implemented properly:

```mermaid
graph TB
    A["Output Metrics<br/>Stories/Tickets Closed"] -->|GAMING RISK| B["Bloated Backlog<br/>False Productivity"]
    C["System Metrics<br/>Lead/Cycle Time<br/>Deployment Frequency"] -->|HONEST SIGNAL| D["Process Bottlenecks<br/>Real Constraints"]
    E["Outcome Metrics<br/>Feature Usage<br/>Customer Satisfaction"] -->|TRUE VALUE| F["Business Impact<br/>Revenue/Retention"]
    D -->|DIAGNOSTIC| G["Actionable Improvements<br/>Reduce Waiting<br/>Improve Reviews"]
    G -->|ENABLES| F
    B -->|HIDES| H["Actual Problems"]
    H -->|LEADS TO| I["Poor Decisions<br/>Wrong Priorities"]
```

The cascade matters. Process metrics help you identify where time actually goes. Once you know where developers are stuck (waiting for builds, waiting for code review, dealing with flaky tests), you can fix it. Then when you ship features, outcome metrics tell you if anyone cares. If they do, and customers are using it, then you’ve created genuine value. Tickets closed metrics sit completely outside this loop. They measure neither process nor outcomes. They measure activity.

What to Actually Do Starting Monday

If your organization is currently measuring developers by tickets closed, here’s the concrete path forward.

Step 1: Acknowledge the problem. Have an honest conversation with your team about why the current metric is broken. Show them examples of how it creates perverse incentives. This sounds obvious, but most organizations skip this step and then wonder why engineers resist the change. They’re resisting because they know the metric is wrong.

Step 2: Track what actually happens. Spend one full week measuring how developers actually spend their time. Use calendar audits, time logs, or a simple spreadsheet with 30-minute blocks. You’ll probably discover that developers spend far more time in meetings, waiting for deploys, and dealing with blocked work than anyone realized. Make this visible.

Step 3: Implement protected time. Block out 3-hour “no meeting” zones on every developer’s calendar and defend them as fiercely as client meetings. Set up interrupt budgets so developers can only be pulled away two or three times a day for non-emergencies. This improves cycle time immediately, without any complicated tooling.

Step 4: Measure the system, not the person. Set up tracking for lead time and cycle time using whatever tools you already have (GitHub, GitLab, Jira, etc.); most of these platforms can calculate them automatically when configured correctly. Make the numbers visible to everyone, not just management. When the team sees that code review takes three days on average, they’ll naturally want to improve it.

Step 5: Connect to outcomes. Once a quarter, pick a few features that shipped recently. Check actual usage metrics. Ask customers whether they use them. Did they reduce support tickets? This builds the cognitive link that actually matters: the work we do should create value that people care about.
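Step 4 doesn’t need fancy tooling to start. Here is a minimal sketch that flags pull requests still waiting on a first review after 24 hours. The `PullRequest` record and its fields are placeholders for illustration, not GitHub’s or GitLab’s actual API shape; map them from whatever export your platform provides.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class PullRequest:
    # Placeholder fields for illustration; fill from your platform's export.
    number: int
    opened: datetime
    first_review: Optional[datetime] = None  # None = still unreviewed

def stale_prs(prs, now, limit=timedelta(hours=24)):
    """Return PR numbers whose first review took (or is taking) > limit."""
    flagged = []
    for pr in prs:
        # Unreviewed PRs are measured against the current time.
        waited = (pr.first_review or now) - pr.opened
        if waited > limit:
            flagged.append(pr.number)
    return flagged

# Made-up sample data.
now = datetime(2026, 3, 2, 9, 0)
prs = [
    PullRequest(101, datetime(2026, 3, 1, 10, 0), datetime(2026, 3, 1, 14, 0)),  # 4h: fine
    PullRequest(102, datetime(2026, 2, 27, 9, 0)),                               # 3 days, unreviewed
    PullRequest(103, datetime(2026, 2, 28, 8, 0), datetime(2026, 3, 1, 12, 0)),  # ~28h: slow
]

print("needs attention:", stale_prs(prs, now))
```

Run daily in CI or a cron job, a report like this measures the system, not the person: it names stuck work, not slow workers, which is exactly the distinction Step 4 is about.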

The Real Measure of Productivity

Here’s what I’ve learned after watching this metric carousel spin for two decades: developer productivity isn’t about how many things someone closes or completes. It’s about how effectively the entire system converts effort into value. A single developer shipping one feature that 10,000 customers use daily is more productive than another closing 50 tickets that nobody touches. A team that takes 30 days to ship something customers rely on is more productive than a team shipping weekly features nobody uses.

The best teams I’ve worked with weren’t measured on tickets at all. They were measured on what mattered: Are customers happier? Is the product more reliable? Is the business growing? Everything else flowed from those outcomes.

But measurement serves a purpose beyond knowing what’s happening. It signals to the team what the organization actually values. When you measure tickets closed, you’re sending a very specific message: “We care about volume.” When you measure outcomes, you’re saying: “We care about impact.” The message you send shapes behavior. Choose wisely.

The Path Forward Isn’t Perfect, But It’s Better

I’m not going to pretend there’s a magic metric that perfectly captures productivity. There isn’t. Even the best measurement systems are incomplete. But incompleteness is better than incentive misalignment: a system that captures 80% of what matters will outperform a system that perfectly measures the wrong thing.

The key is moving from metrics that encourage gaming to metrics that encourage excellence. From outputs to outcomes. From individual measurements to system measurements. From dashboards that look good to dashboards that drive actual improvement.

This is uncomfortable work. It requires admitting that what you’ve been measuring was wrong. It requires trusting your team instead of relying on proxy metrics. It requires having harder conversations about what productivity actually means in your organization. But the alternative is keeping the ticket treadmill spinning, watching your best developers leave because they’re tired of perverse incentives, and wondering why everything looks productive on paper but feels broken in reality.

We’ve learned this lesson before with lines of code. Let’s not take another twenty years to learn it again with tickets closed.