Just Deliver, Baby
“When you have great coaches, and you have great players, and you have a great organization, you tell ’em one thing– Just Win, Baby.”
In software, it is easy to get bogged down in process, in tooling, in the hunt for the perfect, best-optimized way to do something. Agile methodology is just as susceptible to navel-gazing as tech-stack choice and architecture. Worse, it affects HOW engineers get to work on the very stuff those other holy wars are about.
So what we can do is drop the ego, assume we’re probably wrong about stuff, and chase the only outcome that matters: delivering software. The best way to work is the one that will work. It won’t be perfect. It will have contradictions. Work will settle, undone, in the recesses of ambiguity. But if you’re candid and honest, you’ll be able to respond to problems in a way that requires neither over-engineering a process nor ditching methodology altogether.
For the record, this is exactly the same mentality that led to the Agile Manifesto. It’s just the kind of thing that needs to be rediscovered, over and over, by groups of engineers once orthodoxy overtakes outcomes-thinking. Don’t be the voice of orthodoxy, be the voice of What Works.
That’s easier said than done– many of us bear the trauma of huge failures and frustrations from previous jobs. Many of us want perfect, clean systems that would account for every possible permutation if only humans didn’t have to interact with each other. Many of us think of process as a way for The Man to keep us down.
The secret to getting past it is, again, remembering the outcome: delivering software.
So, borrow an idea from Kanban: “Start where you are.” Don’t upend your whole system. Along with the rest of your team, pick the pain point that most gets in the way of delivering software and construct an experiment to attack it.
If you don’t have a Where You Are to start from (say, you’re a new team), then you can use my preferred starting point.
How do you construct an experiment?
Experiments are the key to agility. No matter what tools you’re using and what process you adopt, you’re going to need to actually respond to stimulus and trauma. Let’s talk about the components of a well-constructed experiment, and then I’ll run through an example or two.
Define the problem. This is about what’s making it harder to deliver software. It could be “our release process is slow” or “the tests are so brittle we turn them off and then stuff breaks” or “we’re doing too much operational work” or anything, really, that is getting in the way of delivering software.
Define the preferred outcome. This is tightly coupled to the first item. What change do you want to try in order to solve the problem– a faster merge-to-deploy time, a lower percentage of failing tests, fewer commented-out tests?
Define the metric. Again, coupled to the above – what does a ‘faster’ merge-to-deploy cycle mean? Days, hours, minutes? What percentage of tests fail and what is a reasonable expectation of what can be fixed? Can you dig into metrics like time-in-status, number of deployments, or anything concrete to help guide what the experiment should be?
Define the timebox. Knowing the outcome and metric, set an aggressive but realistic timeframe (usually no more than a couple weeks, but sometimes as much as a month) to test out the change. You may scale your metric based on what you reasonably believe you can get done in the timebox.
Define the attempted solution. Now that you know the details, you can propose possible solutions and try one. What if we skipped manual testing after product acceptance? What if we allowed teams to release on their own instead of waiting for a batch on a schedule? What if we spent Fridays doing a test-jam to get our coverage up?
Solutions can be technical or process-related, but the idea is that you take a small swing at making a pain-point better. If the experiment succeeds, you reassess and build on the success or move to the next pain point. Importantly: if the experiment fails, that is also a great learning. It’s also where the ‘ego-free’ idea comes in – if your favorite pet idea doesn’t move the needle, believe the data. It’s easy to indict the experiment when you get a result you don’t like– resist that instinct. Run more experiments. You may want to try a similar, but better-constructed, version later (probably not right away). This is good! Refining how you implement a new process before you commit to it ensures that the team understands the value of the change.
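If it helps to keep experiments in a consistent shape, here is a minimal sketch of what one might look like written down, in Python only for concreteness. It simply mirrors the five ‘define’ steps above; the field names and example values are my own, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One small, timeboxed swing at a delivery pain point."""
    problem: str             # what's making it harder to deliver software
    preferred_outcome: str   # the change you want to see
    metric: str              # how you'll measure it, in concrete terms
    timebox_days: int        # aggressive but realistic
    attempted_solution: str  # the one thing you'll actually try

# A hypothetical example, shaped like the review bottleneck discussed below.
review_experiment = Experiment(
    problem="Tickets sit too long in 'In Review'",
    preferred_outcome="Reviews start sooner and finish faster",
    metric="Average business days spent in 'In Review'",
    timebox_days=5,
    attempted_solution="Do a code review every time you break flow",
)
```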
We all want to deliver software. We’re designing and adapting so that we can deliver software better.
An Example
A team I was working with was having an issue with ‘bursty’ velocity. Measured over a quarter, their delivery was roughly in line with what we would expect. But in any given week or two-week period (we didn’t do Sprints at this time, but tracked velocity on roughly that cadence), they’d sometimes deliver almost nothing, and other times deliver two or three times what their ‘average’ would come out to. This is not a problem in an absolute sense, but it made predictability nearly impossible and limited our ability to diagnose any other issues with the team’s flow and delivery.
Digging into some metrics in Jira, we found that the ‘In Review’ status was a surprising bottleneck. Tickets were averaging around 9 business days in that status. For this team, that could mean no one was looking at the ticket, but it could also mean that too much work was failing review and ‘bouncing back’ to the original engineer. Looking at comment histories (mostly spot-checking, not pulling detailed reports), both explanations seemed plausible, but the feeling on the team was that people weren’t prioritizing code reviews as much as they could.
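You can usually pull this number straight out of your tracker’s reports, but nothing fancy is required. Here is a minimal sketch of the calculation over exported status transitions; the data shape and field names are assumptions (not Jira’s actual API), and it ignores holidays.

```python
from datetime import datetime, timedelta

# Hypothetical export: one row per time a ticket entered and left 'In Review'.
transitions = [
    {"ticket": "TEAM-101", "entered": "2023-04-03T10:00", "exited": "2023-04-14T16:00"},
    {"ticket": "TEAM-102", "entered": "2023-04-05T09:00", "exited": "2023-04-12T11:00"},
]

def business_days(start: datetime, end: datetime) -> int:
    """Count weekdays between two timestamps (no holiday calendar)."""
    days, current = 0, start.date()
    while current < end.date():
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
        current += timedelta(days=1)
    return days

durations = [
    business_days(datetime.fromisoformat(t["entered"]), datetime.fromisoformat(t["exited"]))
    for t in transitions
]
print(f"Average business days in 'In Review': {sum(durations) / len(durations):.1f}")
```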
So we set a week-long experiment–
Every time you break flow (for a meeting, to go to the bathroom, to eat lunch), you come back and do a review. If we saw the average drop by a noticeable number of days, we’d consider it a success.
The end results were middling. We got the average down to 7.5 days, a decent improvement, but the team felt the experiment wasn’t sustainable: they dreaded coming back to big PRs that kept them from getting into good, deep flow time.
Notice that finding.
The team was dreading big PRs.
The next experiment we tried for a week?
Limit PRs to under 200 lines.
The result was a drop to less than 12 business hours in the ‘In Review’ status.
We made that limit a team norm rather than an enforced process. The team treated it as a point of pride to keep dropping that number until it was “around 100 lines.”
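If a team does want a quick way to eyeball the size of a change before asking for review (as a nudge, not a gate), something like this sketch works. The base branch name and the 200-line threshold are assumptions to adjust for your own setup.

```python
import subprocess
import sys

BASE_BRANCH = "main"  # assumption: your default branch
LIMIT = 200           # the norm from our experiment, not a hard rule

def changed_lines(base: str = BASE_BRANCH) -> int:
    """Sum added + deleted lines against the base branch via git's numstat output."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" instead of counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    total = changed_lines()
    print(f"{total} lines changed (norm: under {LIMIT})")
    sys.exit(0 if total <= LIMIT else 1)
```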
A ‘failed’ experiment led to one that actually worked, and it incentivized behavior that started chipping away at the ‘bursty’ nature of our work. We started thinking about shipping small pieces of work regularly.
Was this the ‘ideal’ size? Probably not.
Could a different experiment have led to a better outcome? Maybe!
But this one worked, accomplished something we wanted, and helped us deliver software better.
Many, many more experiments were necessary to get us to a ‘flow’ state, but this one solution set a pattern, and the team came to genuinely enjoy thinking about and experimenting with its process.
That’s the essence of agility – owning your process, defining outcomes, and trying something. Especially if it’s a bold departure from what you would expect. Especially if the experiment goes against long-held assumptions. You’ll either validate what you believe or learn something new. Either way, it gives you the real information you need to improve and deliver.
How Bold?
When I say ‘bold departure from what you would expect,’ I mean it. Nothing is sacred, everything is up for experimentation– but you need to construct the experiment in a way that doesn’t screw over your coworkers or customers.
Want to require pair programming on every ticket?
Cool! What’s your measurement for ensuring delivery is still on track? Maybe you’re worried that some executive will catch wind of this and break it up out of a desire for ‘efficiency.’ The solution to that is simple:
Deliver software.
Honestly measure whether the outcomes are better, and if challenged, provide that data. More likely, if you’re delivering software, you’ll give no one any reason to inspect the ‘how’ at all. And sometimes when they do? They’re looking to their highest performers to see what they can glean for the rest of the organization.
Do you want to try a No-Meeting-Thursday?
Do it! How will you measure success? How will you ensure responsiveness to emergent issues? What if someone on support complains that they can’t get you on “just a ten minute call on Thursday”?
Deliver software.
Show why that entire flow-time day improves your delivery. Make it more than just vibes. Demonstrate the through-line from ‘heads down on Thursday’ to ‘getting the last bugfix you needed out in less than a day.’
Want to eliminate story point estimates because refinement takes up too much time?
Okay! Do it! Just make sure the folks relying on your estimates for forecasting still get something that meets their needs. Product, marketing, exec– they may have to be part of your experiment, too.
You will be shocked at how on-board other departments are if you include them in determining the outcome, defining the measurement, and, you guessed it:
Delivering software.
A relentless pursuit of improvement in service to delivery will keep your team focused on the important parts of how they work. That pursuit will challenge your assumptions. It will likely piss you off. But when you come out the other side, your process will be custom-tailored to your team. And the things that start to pinch, the stuff that gets in your way? You’ll have the tools to get them out of the way too.