Architecture Review
Most teams I work with exhibit outward signs of health like collaboration, positive reinforcement amongst peers, and resilience to change. Where those teams consistently fall apart is trust. Positive reinforcement is not a substitute for trust. Trust encourages innovative thinking and outcomes greater than an individual’s own capabilities. When a group develops trust, they can go past positive reinforcement and deliver critique.
Architectural reviews are a form of collaborative critique. The author and reviewers work together to gather the required implementation information and contextualize it within the larger desired outcomes. The work of the reviewers is to answer three questions:
- Does the plan meet the criteria that have been independently laid out? Typically these are larger acceptance criteria the team has developed with other stakeholders.
- Does the plan provide a straightforward path to those outcomes or does it meander unnecessarily?
- Can the work succeed within its team or organizational context? The perfect plan for one organization or team may be of no use to another.
This critique helps authors better understand their problem and find flaws in the solution long before it falls apart in practice. Unfortunately, poorly executed architecture reviews haunt our industry. Deeply detailed documents filled with finger-wagging comments on petty changes seem to be the norm. Our fear of bad critique forms a vicious cycle with our lack of trust until we hit the low ceiling of what we, as individuals, are capable of.
What goes into an architectural design doc
A good architectural review can occur in a short document and involve only a few people. These reviews are hard to finish in one sitting, but can usually be completed in a week of mostly asynchronous communication. They are neither a checklist nor a detailed breakdown, and their artifact is just enough information for the reviewers to answer the questions outlined above. Keep this document short early on. Most authors provide too much ancillary information that doesn’t help the reviewers answer the questions and instead clouds the doc. Keep it lean, and add detail as the reviewers require.
Summary of Approach
We start with the basics of how we are going to build it. The easiest way to do this is to tie some high-level requirements, typically recorded more fully elsewhere, to implementations. So if our requirement is to invalidate signed tokens when a user is deleted, we might specify the emission of a `UserDeleted` event from a `UserService` and the addition of a listener and blacklist cache at the authentication layer. Most authors will make this section far too long and detailed early on. This information is here for context, and “add more details about X” is some of the easiest feedback you can get.
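To make that tie between requirement and implementation concrete, here is a minimal sketch of the design the summary implies. The `UserDeleted` event and `UserService` come from the example above; the listener and cache names are hypothetical.

```java
// Hypothetical sketch only: class and method names are invented to
// illustrate the plan, not taken from a real codebase.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Emitted by UserService when a user record is deleted.
record UserDeleted(String userId) {}

// Lives in the authentication layer. It listens for UserDeleted events
// and blacklists the user so previously signed tokens stop validating.
class TokenBlacklistListener {
    private final Set<String> blacklistedUserIds = ConcurrentHashMap.newKeySet();

    // Invoked by whatever event bus the team already runs.
    void onUserDeleted(UserDeleted event) {
        blacklistedUserIds.add(event.userId());
    }

    // Consulted during token validation, alongside the signature check.
    boolean isBlacklisted(String userId) {
        return blacklistedUserIds.contains(userId);
    }
}
```

In the doc itself, a sentence or two of this shape is usually enough; the review needs the path, not the code.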
The only other points to capture here are ancillary technical goals. If you are hoping to accomplish something not captured in the existing requirements, it’s important to clearly document it here.
Cross-Team Concerns
Ideally this section captures every place this work either relies on another team or could benefit one. In practice, this is where we list whatever the team can’t do alone. This section should be comprehensive but not detailed enough to function as full requirements for the other teams we are depending on.
That Which is Hard to Change
In most reviews, this is the section that merits the most detail. Software projects consistently make a lot of early decisions that are hard to change. Patterns are chosen, APIs are designed, and dependencies are set before the first deliverable actually lands. In the types of software I work on, the most urgent additions to this section are typically service APIs, particularly if they are public, and architectural tomfoolery.
Service APIs represent a surface area by which clients will couple themselves to the service. No scheme in existence removes the burden of supporting many versions of an API concurrently; unless we can also change every single existing client, running multiple versions side by side is our only choice. APIs, typically specified in some type of Interface Definition Language, also represent the clearest statement of what will be built.
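To see why that burden is unavoidable, here is a hedged sketch, in Java rather than an IDL, of what carrying two API versions concurrently looks like. Every name below is invented for illustration.

```java
// Hypothetical illustration of the versioning burden. Every name here
// is invented; a real service would define these contracts in an IDL.

// The v1 contract that existing clients are already coupled to.
interface TokenServiceV1 {
    String issueToken(String userId);
}

// A v2 contract with a richer response. Shipping it does not let us
// delete v1, because v1 clients still exist.
interface TokenServiceV2 {
    IssuedToken issueToken(String userId, String audience);
}

record IssuedToken(String token, long expiresAtEpochSeconds) {}

// The server ends up carrying both surfaces concurrently.
class TokenService implements TokenServiceV1, TokenServiceV2 {
    @Override
    public String issueToken(String userId) {
        // Bridge the old contract onto the new behavior.
        return issueToken(userId, "default-audience").token();
    }

    @Override
    public IssuedToken issueToken(String userId, String audience) {
        // Placeholder body; real token signing would live here.
        return new IssuedToken("signed-token-for-" + userId, 0L);
    }
}
```

The IDL version of this split is shorter, but the maintenance shape is the same: both contracts live until the last v1 client is gone.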
Architectural tomfoolery refers to patterns, often complex ones, that are deployed in the foundation of a system. Reactive streams. CQRS. Automated dependency injection. Plugin-based architecture. Monadic IO. In some cases, these approaches will be well-known in the organization and they deserve little to no mention here. When these approaches are new or not well understood, they merit great scrutiny. Why here and why now?
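As one hedged illustration of what “deployed in the foundation” means, consider a hypothetical plugin-based core: once every request routes through a registry like the one below, the whole codebase pays for the indirection, which is exactly why a pattern that is new to the organization merits the “why here and why now” question.

```java
// Hypothetical plugin-based core, invented for illustration. Note how
// even a trivial feature must be written and wired as a plugin.
import java.util.ArrayList;
import java.util.List;

interface Plugin {
    boolean handles(String requestType);
    String handle(String payload);
}

class PluginRegistry {
    private final List<Plugin> plugins = new ArrayList<>();

    void register(Plugin plugin) {
        plugins.add(plugin);
    }

    // Every request in the system is dispatched through this indirection.
    String dispatch(String requestType, String payload) {
        for (Plugin plugin : plugins) {
            if (plugin.handles(requestType)) {
                return plugin.handle(payload);
            }
        }
        throw new IllegalArgumentException("No plugin handles " + requestType);
    }
}
```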
That Which is Different
Finally, we cover any remaining substantial deviations from explicit or de facto standards. This information helps us define risk and additional maintenance cost. As with That Which is Hard to Change, we want to capture some of our reasoning here. If the goal is to experiment, define what that experiment is.
I think this information is harder for less experienced folks to gather, because everything is still new to them. In some cases it might make sense to do a little pre-review work with a more experienced peer. To give a sense of scope, here are decisions from my twelve years of experience that probably should have been recorded in a section like this:
- A custom Java web framework (on top of Netty) and accompanying large functional programming library. Used for building cross-marketing tool SaaS integrations.
- A language-independent logic rule system implemented in Java and Closure JavaScript. Used to validate a form.
- A regular-expression-based parser for XML, in particular for OpenOffice documents. Written in R. Used to add charts to documents.
- Adding Hystrix and Spring to a Java HTTP service. Used to implement two circuit breakers.
Two of those were mine, and I’ll leave the audience to guess which ones.
Roles during a review
As an author, your role is to lay out the initial information. You’ll need to select one or more reviewers and work with them to provide the information they need or to make the changes they require. The author doesn’t need to agree with every point made by a reviewer or to make every change exactly as it was requested. As an author, I often find it easier to take in a pool of feedback all at once rather than point by point. It helps me focus on the changes that need to be made as a whole rather than getting frustrated with the minor details of the critique.
As a reviewer, your goal is to provide feedback — primarily questions and suggestions — to the author until you can positively answer the review questions laid out above. Your goal is not to make it the best plan or the plan you would have written. That being said, offering personalized advice about how you would approach the problem is fine so long as it’s caveated.
How to get started with reviews
The best way for anyone, but particularly anyone in technical leadership, to get started with these reviews is to author an architectural design document and go through the review process yourself. This works better if you can source a few reviewers who have experience in a successful review process. There is great value for an organization in seeing this process operate successfully. There is even greater value for leadership in understanding what the process feels like on the ground.
It’s common to find that an author or team’s first few attempts at architecture reviews cover bodies of work that are too open-ended and/or large. Instead, review a smaller and more well-understood subset. There is no right size of work to review, but reviewers can’t work with unknown or uncertain requirements.