How Long Does It Take to Catch a Fish?
If you have recently looked at a job board for software engineering positions, you might wonder how we ever developed software before the advent of “agile.” Many job descriptions include agile jargon in the list of applicant requirements, but have agile principles truly improved software development? In this post, I will look at the most popular form of agile, namely SCRUM.
I think few developers would argue with most of the 12 principles of agile as described in the Manifesto for Agile Software Development, but two that I question are these:
Business people and developers must work together daily throughout the project
Face-to-face conversation is the best form of communication (co-location)
What's wrong with daily cooperation between business people and developers? Nothing, except the timing. And in SCRUM, timing is important, maybe too important. Everything in SCRUM is time-boxed: sprints are 2 weeks (for example), daily status meetings are 15 minutes, sprint retrospectives are an hour, and so on. (I'll be talking a lot about time-boxing in this article, so when I refer to "agile," I am primarily talking about the SCRUM implementation of agile.) This means that the feature you have worked on for 2 weeks, the one that is 95% complete on the last day of the sprint, goes back into the backlog, maybe to be completed in a future sprint. Just as software features don't always fit neatly into 2-week time slots, neither is it appropriate to force communication between team members into rigid (i.e., daily) time slots. You've probably been in those daily stand-ups where the product owner and half of the developers sit by quietly while someone strays off topic, talking about coding details that don't even belong in a 15-minute status meeting. If you make everyone talk about what they're doing every day, you're likely to hear about details you don't care about.
Regarding communication, how is face-to-face conversation better than the alternatives? Face-to-face conversation requires that all members of the team come to a common location. The last time I worked from a corporate office, that office was 16 miles from my home, and it took about an hour each day driving back and forth. Besides the extra time spent and the cost of driving ($5,145 per year for me, based on the current IRS mileage rate), the rigidity of a daily schedule based on rush hour traffic patterns and my teammates' schedules made face-to-face conversation quite inconvenient. Today, we have the technology to make working remotely as good as or better than co-location. In fact, even when I sat in an office, if I needed to talk to the person in the cube next door, I would often do it through my computer using email or a chat tool because I could attach a screenshot or a file, or include a link, and I could easily refer back to our conversation later.
I love elegant software and elegant processes. Elegant solutions are clever and simple. Agile sounds like it might be an elegant solution to the problems of planning and executing software development. It eliminates much of the upfront planning and design work in favor of a more ad hoc approach, and it supports delivering usable software incrementally as new features are developed. These are good things. You build a minimum viable application, release it, then enhance it every 2-4 weeks. This lets real users provide feedback before the development team has gone far down a path implementing features users don't care about. So, how does agile deliver these things? Perhaps the most important way is by breaking the deliverables into small pieces that have value to users. Those pieces must be small enough to be developed within one sprint, typically 2-4 weeks. And for one of these sprint-sized features to be valuable, it must be a complete solution to some user story.
One of the problems I have seen in agile projects is that the product owners, in an effort to make user stories small enough to fit into one sprint, write stories that provide nothing of value to the user until combined with other stories. For example, a new feature might require a significant database component plus a user interface component. Together, the two components are too large for a single developer to complete in one sprint, so the database component is written up as one story, and the user interface component as a second story. The problem is that neither of these stories has value to a user by itself; therefore, these are not "user stories." Maybe you're thinking that the two stories could just be assigned to two different developers, thus completing both in one sprint, but what if one story completes and the other does not for some reason? Then, no user value has been delivered from that sprint. One indicator that a story is not a proper "user story" is that to demo the story, the developer must use tools outside of the software's user interface. I have had to demonstrate many stories by executing an API call, then executing a database query that shows the effect of the call. These were not "user stories." A true user story should be demonstrable entirely within the user interface of the software product.
There is another problem that comes from sizing user stories to fit into the time slots we have chosen for our sprints: developers do not get to see the big picture. A product owner may write a self-contained user story with the intent of writing subsequent stories that extend the functionality of the initial story. In agile, the developer is encouraged by time constraints to write just enough code to satisfy the requirements of the current user story. When a related story comes along, it may be necessary to rework code written for the earlier story because the developer wasn't aware of the additional requirements now being placed on the earlier code. Sometimes this rework is easy; sometimes it requires a complete rewrite of the earlier code.
A third problem in sizing user stories is that we often misjudge how much we can accomplish in a sprint. If I get a story almost done, I cannot extend the sprint by a day to finish. Instead, the story goes back into the backlog, where it will be re-prioritized and re-estimated. By the time it gets picked up again, some relearning may be needed to get back up to speed on the feature. It may even be assigned to a different developer. In my experience with agile, I have found that corporate managers do not tolerate incomplete stories well. They adopted agile to solve the problem of software not being delivered "on time," and now they are using agile but still not delivering on time. You can chalk this up to the manager not truly understanding agile development, but it's a hard argument to make, because if stories are not delivered as planned, what did the manager get from moving to agile?
Estimating is hard. We developers tend to underestimate how long it will take to develop a feature because there is often hidden complexity that is discovered during the implementation. When asked to estimate a development task, we quickly try to think of the major components and how they might be built, and how we would integrate those components into a complete solution. We may overlook a component or make incorrect assumptions about how some underlying functionality works that we are intending to use to build our new feature. Occasionally, we get lucky and a feature is easier than we expected, but it usually goes the other way. Some estimating approaches have been devised to address this difficulty. A common technique calls for putting tasks into different-sized buckets. The idea is that if I am tasked with transporting a sofa from Minneapolis to Milwaukee, and I am given a choice of using a Volkswagen Beetle, a pickup truck, or a railroad boxcar, I can probably pick the right-sized vehicle to use. This is the thinking behind using Fibonacci numbers (1, 2, 3, 5, 8, 13, 21, etc.) or powers of two (1, 2, 4, 8, 16, etc.) for assigning story points. If a story feels like it is a bit bigger than 8 points, you don't have to decide if it is 9 or 10 or 11 or 12 points; you just jump up to 13 points (using the Fibonacci sequence) and you're done.
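The bucket-based rounding described above can be sketched in a few lines of Python. The scale values come straight from the article; the function name and the round-up rule are my own illustrative assumptions, not part of any particular agile tool:

```python
# Snap a raw effort guess to the next bucket on a story-point scale.
# If an estimate falls between buckets, we jump UP to the next one,
# mirroring the "don't decide between 9, 10, 11, or 12" idea above.

FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]
POWERS_OF_TWO_SCALE = [1, 2, 4, 8, 16]

def to_story_points(raw_estimate: float, scale=FIBONACCI_SCALE) -> int:
    """Round a raw estimate up to the smallest bucket that can hold it."""
    for bucket in scale:
        if raw_estimate <= bucket:
            return bucket
    # Bigger than the largest bucket: in practice the story should be split,
    # but for this sketch we just cap it at the top of the scale.
    return scale[-1]

print(to_story_points(9))                       # a bit bigger than 8 -> 13
print(to_story_points(9, POWERS_OF_TWO_SCALE))  # -> 16
```

The round-up behavior is the whole point of the coarse scale: the gaps between buckets absorb the false precision of a raw guess.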
So, what exactly does a story point estimate? Is it how long the story will take to complete? If so, then it depends on which developer it is assigned to. This means the stories cannot be estimated until after they are assigned, but generally, this is not what we want. If you do a web search for agile story points, you will find definitions such as "the difficulty of completing a story," or "the complexity of a story," or "the amount of work to do in a story." These are not very satisfying because, usually, when we measure something, we attach a unit to a number, e.g., 14 feet or 2 grams, but here our unit is story point, the very thing we are trying to define. I have also heard it doesn't matter what we are measuring as long as we always assign points in a consistent way. The reasoning goes that as we get more practice assigning story points, we’ll get better at assigning them, and therefore, we'll get better at predicting how much work we can get done in a sprint. To this, I say, "How long does it take to catch a fish?" If you have a lot of fishing experience, you should be very good at estimating this, but, in fact, there are factors you don't know about. How many fish are in the lake? What kind of fish are they? Are they hungry now? What kind of bait would they eat? Are they congregated in a particular part of the lake? What depth are they at? You might be able to tell me, on average, how long it takes to catch a fish in this lake, but no matter how much fishing experience you have, you cannot tell me how long it will take to catch the next fish.
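The fishing point can be made concrete with a toy simulation. Modeling catches with exponential waiting times is purely my assumption for illustration, but it captures the argument: with enough experience the *average* time per fish is pinned down precisely, while any *individual* catch remains wildly unpredictable:

```python
import random

random.seed(42)

# Assume (for illustration only) that catches arrive at random with
# exponentially distributed waiting times averaging 30 minutes.
mean_minutes = 30.0
catches = [random.expovariate(1 / mean_minutes) for _ in range(10_000)]

# A decade of "experience": the average is easy to estimate...
average = sum(catches) / len(catches)

# ...but how often does an individual catch land within 10% of that average?
near_average = sum(abs(c - average) <= 0.1 * average for c in catches)
fraction_near = near_average / len(catches)

print(f"estimated average wait: {average:.1f} minutes")
print(f"catches within 10% of average: {fraction_near:.0%}")
```

Under this model only a small fraction of catches fall anywhere near the average, which is exactly the gap between knowing a team's long-run velocity and predicting when the *next* story will be done.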
As I've already mentioned, managers don't like unpredictability. If we committed to 30 story points in a sprint, they want 30 points delivered. After all, we are assigning story points for the purpose of predicting how long it will take to achieve some product milestone. If we put stories back into the backlog, we fall behind on achieving the milestone. At one of my previous employers, when it became clear that our project was falling behind schedule (as determined by the CEO and some other top officers of the company), the developers began getting pressured to work longer hours. The argument went like this: "Your team committed to 30 story points this sprint, but based on your progress, it looks like you won't get all your stories done on time. So, how will your team address this? Do you want to work some evenings or work over the weekend to catch up?" As it became clear how far behind the project was, managers began referring to the team's story point estimates as "promises," so the team was not missing their estimates but breaking their promises. Managers told some developers to take on more stories, and the managers decided they would assign the story points instead of letting the scrum team assign them. Everyone came under intense scrutiny, having not only their team's story points monitored but their individual story points as well. Of course, this didn't promote a collaborative atmosphere, as everyone became more concerned with their individual story points than with the team's. One morning, every member of the engineering team received an email from the CTO stating that, effective immediately, all engineers were required to work from 7 a.m. to 10 p.m., 7 days per week (i.e., 105 hours/week total) to catch the project up. The CEO used some pitiful metaphor about using the pressure of our schedule to turn coal into diamonds.
The corporate leadership team had staked critical business decisions on a particular completion date for the project, and the project was desperately behind schedule in spite of using agile. You can argue that this company was not implementing agile correctly, but this same CTO had proclaimed at the start of this project that we would be doing agile by the book, and he literally meant a particular book written by Ken Schwaber, one of the co-creators of SCRUM.
So, how is it that with a fervent commitment to doing agile by the book, the project described above went so wrong? In part, it's because agile, while easy to understand, is hard to implement. Ken Schwaber writes, "Iterative, incremental development is much harder than waterfall development; everything that was hard in waterfall engineering practices now has to be done every iteration, and this is incredibly hard. It is not impossible, but has to be worked toward over time." In the previous paragraph, I described how our management team began to tinker with the agile process. As the project progressed, the managers imposed more rules and more adjustments to the scrum team's processes. Whether or not these managers truly understood agile, they thought they understood it well enough to tinker with it, and telling them they did not know what they were doing would have been a dead-end discussion.
The emphasis agile places on time-boxing creates conflicting incentives for developers. As I've already mentioned, developers tend to underestimate user stories, which leads to stories being returned to the backlog. The stigma of failed sprints leads to more conservative estimating. While this sounds like the right adjustment to make, it can also lead to avoiding risk by taking on only stories the team knows it can easily complete. The incentive for achieving much is replaced by an incentive to do just enough to get by, and nobody is happy with the result. The business gains only minimal value from each sprint, and developers are bored with their work. Further, the emphasis on delivering story points discourages developer growth, because there is less risk in assigning a story to someone who has done similar work before. Everyone continues to work in their own niche, and they don't learn new things. Learning new things doesn't deliver story points, so developer growth takes a hit.
Earlier I said that elegant solutions are clever and simple, so it's time to ask: "Is agile clever and simple?" I do think agile is clever in its approach to delivering value incrementally. It lets the software's users guide feature development, and it eliminates much of the upfront planning needed by more conventional project management. It also lets us try out new features before other features are built on top of them. It gets us thinking about software releases as delivery mechanisms for incremental progress, rather than unloading dozens of new features on our users in one go. Agile is also simple in concept, but as I said earlier, time-boxing gets messy. So much of agile's success depends on time estimates, and we aren't very good at making them. And there are subtle incentives for deliberately making them wrong, usually too conservatively. Just as a marathon is not run as 262 one-tenth-mile sprints, maybe software development is not best accomplished by many little spurts of effort. The glue to integrate all those little efforts is the overhead of agile processes, including backlog grooming, sprint planning, daily status meetings (aka stand-ups), demos, and retrospective meetings. (The latter four are often called the scrum ceremonies.)
A process that tempts us to tinker with it to get it to work is probably not all that simple. And if it entails more complexity and nuance than the development team or their managers are willing to learn, it is not simple enough to be useful. Simple things don't need much explanation and aren't very nuanced. Elegant processes are like well-designed cordless tools - they're easy to deploy and fun to use. So, if we don't like waterfall anymore, and SCRUM isn't working for us, what should we use? I'll talk about that in an upcoming post.
Written by
Brian Ness
I began my career as a software engineer with Cray Research, primarily focused on software tools and processes. There, I developed a passion for elegant design. If our tools and processes aren't fun to use, there's probably a better way to work. I like to streamline things that aren't efficient, and find better ways to develop software.