Perhaps the worst carryover from the industrial age is the notion of time tracking. Managers feel a strong urge to measure something, and “time spent on a task” becomes a powerful shiny object that is hard to resist. Thus, teams are often required to track the amount of time it takes to fix each bug or complete individual tasks. These times are then compared between developers to provide some measure of productivity and a means of determining success. A lower average bug-fixing time is good, right? Or is it?
The worst metric of all
Time tracking of software developers is — in a word — awful. No two bugs are alike, and inexperienced developers might be able to quickly crank out fixes for many easy bugs while more experienced developers, who are generally given the more challenging issues, take longer. Or maybe the junior developer is assigned a bug that they can’t handle? Worse, time tracking encourages developers to game the system: worried about how long a task might take, they may avoid work that could run past the “estimate,” along with all manner of supposedly “non-productive” activities.
Don’t we have to admit that there is no way to determine how long any particular unit of software work should or will take? Having to account for every minute of the day merely creates bad incentives to cut corners. It can also make a smart, capable developer feel anxious about taking “too long” to solve a challenging problem that was supposed to be an “easy, one-line change.”