A “Where” to Start From

I would never presume to tell someone what their perfect agile structure will be. My entire philosophy of agile development is that teams own their process, and that you should start where you are.

All that said, sometimes you just need…something. If you want a heavyweight system, you can follow the Scrum Guide in its entirety. You can even use SAFe or LeSS or some other strangely capitalized system.

If you do try to implement something with a name and a structure, do me a favor.

Implement all of it.

If you’re relying on the research and years of experience of someone trying to coach you, don’t dismiss whole pieces of it because they look like something you didn’t like before. Especially with Scrum, if you’re just picking and choosing pieces, and it fails, you will have no way of knowing whether the problem was Scrum or a lack of follow-through. See Ron Jeffries’ excellent piece “We Tried Baseball and It Didn’t Work”.

For the rest of you— folks who are bought in on the way I’ve run teams, for instance—I’m going to lay out the basic structure I use when I am launching a team. It’s not perfect, but remember: it is a system designed to be changed by experimentation. This is a predictable starting point, not an installable system.


Teams

A team consists of:

  • 2-6 engineers (including SDETs/quality engineers)

  • A product owner

  • (Possibly) 1-2 designers

  • (Possibly) An agile coach

The team is the atomic unit of delivery responsibility. Heroics may be called out occasionally, but the responsibility for delivery falls on the team as a whole. Notice that SDETs/QA have engineering responsibilities and vice versa. The whole team is responsible for validation and automation. SDETs are subject matter experts the same way a front-end developer may be an SME for React or Angular.

A team has a Charter, which consists of:

  • Values - The beliefs the team holds to. Faced with pressure, and in the absence of any other guidance, team members are expected to follow these values when making decisions.

    • Example: “We are educators: it is as important that we can make others understand concepts as it is that we can implement them. We have to be constantly learning in order to better serve our teams. To this end, we do not do work where we cannot learn or improve experience.”

  • Norms - Work-specific items. Some are generalized, but most will be generated by experiments that get promoted to Norms.

    • Example: “When rolling out large changes, we bring the items to tech sync, dev jam, or an ad-hoc meeting for cross-team discussion so that our customers have a say in what we build and how."

  • Experiments - A list of the ongoing experiments that are trying to improve our process of Delivering Software.

    • Example: “We are enforcing a 100-line per PR limit for 1 week to see if it reduces our time-in-status for ‘In Review’, thus improving flow”
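An experiment like the PR-size limit above is easiest to keep honest when it’s checked automatically rather than by eyeball. Here is a minimal sketch in Python, assuming the diff stats come from the output of `git diff --numstat` against the target branch; the 100-line limit and both helper names are illustrative, not part of any particular team’s tooling:

```python
def pr_line_count(numstat_output: str) -> int:
    """Sum added and deleted lines across every file in a diff.

    Expects the tab-separated output of `git diff --numstat`; binary
    files report '-' in the count columns and are skipped.
    """
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":
            total += int(added)
        if deleted != "-":
            total += int(deleted)
    return total


def within_experiment_limit(numstat_output: str, limit: int = 100) -> bool:
    """Return True if the PR fits under the experimental line limit."""
    return pr_line_count(numstat_output) <= limit
```

A check like this could run in CI for the week the experiment lasts, failing the build or just posting a warning, so the experiment measures the practice rather than people’s memory of it.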


Basic SDLC

The basic software development lifecycle I prefer is roughly based on Kanban. I’ll write more articles over time on each phase, but the flow I start with is as follows. These phases can be roughly mapped to columns in a Kanban board on Jira or the like.

  • Backlog - Work is defined. Acceptance criteria are created and the “what are we gonna do” question is answered, generally in tickets.

  • Work Refined - Things that have been refined by the team. This means that the engineers and designers have talked with the Product Owner about the work, figured out how to break it into small segments, and ideally estimated how much effort / complexity / uncertainty / risk is involved.

  • Development - Work that is actively being developed. “Actively being developed” means not just writing code, but creating necessary technical design documentation and automated test code. Engineers validate that their code meets acceptance criteria by running it in a development environment before moving it on to review.

  • In Review - Work is being reviewed by peers (generally, for me, through PRs in GitHub), and the engineer(s) who did the coding respond to feedback. Code and automated tests are reviewed for accuracy, style, security, and performance. Team members also weigh whether a piece of review feedback should be a blocker or just a note for a follow-up ticket. In general, I prefer teams to have two approvals on any PR, but that is easily negotiable per team.

  • PO Acceptance - Code is stood up in a pre-production environment in order for the product owner to validate that code has passed their written acceptance criteria.

  • Done - Code is merged and pushed to production, live for use by customers (possibly behind a feature flag).

This system is very much a “Happy Path”.

Intermediate states may be necessary, both for primary work (a ‘UX Review’ or ‘Manual QA’ status, say) and for waiting states like ‘Ready for Acceptance’ or ‘Ready to Deploy’. Teams are encouraged to add statuses that make their implicit queues visible so that time-in-status metrics can be gathered; this helps identify where inefficiencies exist.

All that said, teams should assess whether they actually need those intermediate states. Teams are expected to optimize flow through the whole system: developers aren’t only responsible for “Development,” and SDETs aren’t only responsible for “In Review.” The whole team must focus on keeping work flowing through the system at a constant rate, even if that means working outside of their specialties.
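Once those statuses exist, time-in-status is straightforward to compute from a ticket’s transition history. A sketch in Python, assuming each ticket’s transitions are exported as chronologically ordered (timestamp, new-status) pairs, as a Jira changelog can provide; the data shape here is illustrative:

```python
from collections import defaultdict
from datetime import datetime


def time_in_status(transitions):
    """Total seconds spent in each status for one ticket.

    `transitions` is a chronologically sorted list of
    (entered_at, status) pairs. The final status is open-ended,
    so no time is attributed to it.
    """
    totals = defaultdict(float)
    # Pair each transition with the next one to get entry/exit times.
    for (entered, status), (left, _next_status) in zip(transitions, transitions[1:]):
        totals[status] += (left - entered).total_seconds()
    return dict(totals)
```

Aggregated across many tickets, the status with the largest totals is usually the implicit queue worth attacking first.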


Flow

The idea of ‘flow’ is taken from Kanban. In essence, it is about optimizing how work moves through a system. Consider a car factory: it does no good for the machine that stamps out doors to keep producing while a pile-up forms because the ‘construct chassis’ section of the factory is understaffed. The goal is to build a whole car, and passing blame via “well, I did MY part” is not good enough. It delivers no value.

In that way, software teams are responsible for the whole lifecycle of their product features. Design, development, review, and deployment are responsibilities of the team as a whole. Optimizing flow through every step in that process, not just the steps most natural to any one member’s role, is the heart of agile development. Faced with “if only X, we’d be in great shape,” the team should find a way to change X or, failing that, agitate with upper management for the ability to effect change in X.


CI/CD

If you’re using this as a basic guide, my hope is you’re not stuck maintaining legacy software with legacy build and deployment technology. As such, my recommendation is that you pursue true Continuous Integration / Continuous Deployment— namely, that code is regularly merged, and that each merge is automatically deployed to production.

This kind of system requires a comprehensive automated test suite that runs in the pipeline. It requires good feature-flagging so that half-finished software doesn’t break user experience. This is good! These are the kinds of things that are inexpensive to add early, and exponentially harder to bolt on over time.
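A feature flag doesn’t have to be elaborate to deliver that safety. A minimal sketch, assuming flags live in a simple in-memory mapping — a real system would back this with a config store or flag service, and every name below is illustrative:

```python
# Illustrative flag registry: half-finished work ships merged but dark.
FLAGS = {
    "new-checkout-flow": False,
    "bulk-export": True,
}


def is_enabled(flag: str, flags: dict = FLAGS) -> bool:
    """Unknown flags default to off, so a flag check can merge and
    deploy safely before the flag itself is registered."""
    return flags.get(flag, False)


def legacy_checkout(cart: list) -> str:
    return f"legacy checkout of {len(cart)} items"


def new_checkout(cart: list) -> str:
    return f"new checkout of {len(cart)} items"


def checkout(cart: list) -> str:
    # The branch point: merged-but-unfinished code stays behind the flag.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The payoff is that “merge” and “release” become separate decisions: the pipeline can deploy every merge to production while the flag controls who actually sees the change.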

I prioritize CI/CD early because the ability to deliver code in near-real-time is a difference-maker in claiming market share and building confidence with sales and support. Engineering’s ability to deliver is directly tied to how easy it is to deploy and release code. You don’t need to get ready (to deploy code) if you stay ready (to deploy code).


General Values

The above is intended as a starting point, but you’re going to run into problems that aren’t covered. For those, I base decisions on some very basic values that help guide decision-making organization-wide:

  • We prefer smaller changes, released more frequently.

  • We prefer short feedback cycles — both while developing (within the team) and after release (with customers).

  • We prefer team-based ownership of work, not individual siloing. Our job is building and maintaining software that leads to outcomes, not slamming out code.

  • We prefer automating any onerous task — both within product features and in how we test and ensure quality.

  • We prefer experimentation with clear goals and measurement over pretending that we can know “the right” answer at the beginning.

  • We prefer building only that which differentiates us from competitors— we buy services and software that provide commodity value.
