The Architect's Dilemma

January 20th 2008

Most architects and developers are familiar with this fork in the road:

Your system needs to do X now, but in the future it may need to do Y as well. Knowing that X and Y are closely related, do you spend a little extra time now and design X in such a way that adding Y in the future would be easy? Or do you keep things simple, focus on the more pressing X now, and ignore Y until the day when you really need it?

Advocates of the first path are in the Big Design Up Front (BDUF) camp, and argue that an investment in up-front design will pay dividends in maintainability, extensibility, and quality down the road.

Proponents of the latter path are in the Agilist camp, and argue for keeping things as simple as possible: you're probably not going to need Y anyway (YAGNI), and if you do need it, things will likely have changed by then such that Y is really more like Z, and your up-front design will have been for naught.

Most of us, however, realize the truth is somewhere in the middle. Some things obviously must be designed up-front since they're just too tough to refactor later, but other things can wait. Unfortunately though, in practice, this middle-of-the-road sagacity isn't very helpful, because it still doesn't tell us which things are which. When exactly should we design up-front? And when is it just better to wait?


Whether to design something up front of course depends on what that thing is. Cross-cutting concerns (e.g. transactions, exception handling, etc), database schemas, infrastructure choices, and core IP - all these things are good candidates for big up-front design.

But doesn't the context of the project also matter? A start-up with 6 months of funding left can't spend precious hours designing up front in the same way that a Fortune 500 company can, right? The start-up needs to release the product or it won't be around.

And what about the complexity of the system? How much up-front design is needed for a simple, static web site?

And while we're at it, what about the size or experience of the team? Or the maturity of the domain? Or the competitiveness of the market? And on and on and on...

Wow. What seemed like a simple this-path-or-that-path decision, is actually a very complex, multi-attribute, decision under uncertainty. And with such a complex decision, it's very easy to overlook some important factor, or get lured by a spurious goal. In other words, bad decisions are easy to make.

Decision Trees

I've found that trying to make sense of this type of problem without a way to frame it, one that takes into account these different factors and risks, is a recipe for a bad decision. To that end, whiteboarding a simple decision tree can go a long way: it makes the decision easier to think about, easier to talk about, and surfaces factors you might otherwise have overlooked. And it doesn't take a whole lot of time.

Let me be clear though: I'm not proposing this as some new silver-bullet methodology for making all architecture decisions, just a helpful technique for thinking about a non-trivial decision.

Ok, here's a simple decision tree for the YAGNI vs. BDUF decision.

The top node in the tree is the actual decision to be made: "design for change or don't design for change". This is the fork in the road: either you go one way or you go the other.

The next node is a possibility of the world; a state of reality that is out of your control. For example, in the YAGNI vs. BDUF scenario, either the business changes in such a way that feature Y is necessary, or the business doesn't change and Y is irrelevant, or the business changes but Y is now Z. On each of these branches, assign a probability to the possibility: your degree of belief that that possibility will become a reality. For instance, what do you think the chance is that the business will need Y (and exactly Y) in 6 months? 80%? 50%? The probabilities on the branches should sum to 100%, since it's a certainty that exactly one of those things will happen.

Lastly, the boxes at the leaves of the tree are the outcomes, i.e. what happens if that particular path in the tree becomes a reality. The better you can quantify these outcomes, the better decision you will make. In the world of software development, however, assigning cardinal values (e.g. time, $$, etc.) to the outcomes is a stretch: you'd have to know how much the feature would cost to implement, how much it would add to the bottom line, and so on. Giving each outcome an ordinal value, however, can be a good approximation. On a scale of 1-10, how well off would the organization be if you didn't consider Y up front, but it turned out that you did need it in 6 months? Would massive refactoring be required? On the other hand, how well off would the organization be if it spent time designing for Y, but it turned out that Y was unnecessary? Would you have missed a critical market window?

Once you have both the probabilities for the branches and ordinal values for the outcomes, you can calculate the expected value of the tree. For each outcome, multiply the utility by the branch probability. Then, for each decision, sum its leaves, and compare the totals.

For example, for the YAGNI vs. BDUF decision described above, I've assigned the following probabilities and utilities:

             Design up front or not?
               /              \
            YAGNI             BDUF
            /   \           /   |   \
         .50    .50      .50   .25  .25
          |      |        |     |    |
          8      5        3    10    2

YAGNI = (.50 * 8) + (.50 * 5) = 6.5
BDUF = (.50 * 3) + (.25 * 10) + (.25 * 2) = 4.5

Decision: Don't design up front!!
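The arithmetic above is easy to sanity-check in a few lines of code. Here's a minimal Python sketch that computes the expected value of each branch of the tree, using the same illustrative probabilities and utilities from the example (these numbers are estimates for the sake of the exercise, not measurements):

```python
def expected_value(branches):
    """Sum of probability * utility over a decision's branches.

    Each branch is a (probability, utility) pair; the probabilities
    must sum to 100%, since exactly one possibility will come true.
    """
    total_prob = sum(p for p, _ in branches)
    assert abs(total_prob - 1.0) < 1e-9, "branch probabilities must sum to 100%"
    return sum(p * u for p, u in branches)

# Probabilities and 1-10 utilities from the worked example above.
yagni = expected_value([(0.50, 8), (0.50, 5)])
bduf = expected_value([(0.50, 3), (0.25, 10), (0.25, 2)])

print(f"YAGNI: {yagni}")  # 6.5
print(f"BDUF:  {bduf}")   # 4.5
print("Decision:", "YAGNI" if yagni > bduf else "BDUF")
```

A nice side effect of scripting it is that sensitivity analysis becomes trivial: nudge a probability or a utility and re-run to see how robust the decision is to your estimates.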

Initially, without using this decision tree, I might have invested the time up-front to design for Y, so that if I needed it, it'd be easy to implement. However, taking a few minutes to sketch out the tree has shown me that even though this outcome is highly desirable (10 on a 1-10 scale, by my estimates), there's only a small chance (~25%) that I'll need exactly Y, and a larger chance (~75%) that some other, worse possibility will come true (i.e. either I'll never need Y and will have over-designed, or I'll need not Y but Z, and will have to refactor). Because of this, it'd be much wiser (i.e. higher expected value) to just focus on X now, and worry about Y later.

Like all good decision making techniques, the value isn't in the actual calculation as much as in the process of using the technique itself. In this case, making this tree forces you as the decision maker to think about the options open to you, the probabilities of certain events happening, and the relative utilities of all possible outcomes. Considering all of these things will hopefully lead to a better decision, but will definitely lead to a more defensible and more informed decision. Good luck!

Comments (5)
Geoffrey Clapp
January 21, 2008
I agree, it does get you to think. For that, it has merit. However, layers and layers of YAGNI tend to pile up without a design, and without a systematic refactoring approach they lead to a pile of items that are impossible to keep working together. Is YAGNI+Refactoring its own area (in the gray)? I do not disagree with your tree, it's just a question of where/how it is applied in the evolution of a system. You do make this point, I just wish it was stronger: at the application layer, you have a very good point (depending on business and market), but at the BI and data layers, it feels a little haphazard for an enterprise-class system.
January 22, 2008
Thanks for the comments, Geoffrey! Agreed, when you actually would use this in the course of building a system is a bit fuzzy. Obviously, developers and architects make scores of decisions every day, and sketching a decision tree for each would be crazy. Further, this tree is a big simplification of the actual decisions we make - modeling a more real-life example would be almost intractable.

In general, however, I think they could be helpful in the early inception/elaboration phases when thinking about large, important, strategic, architectural decisions. I think they could also be helpful for justifying/documenting these decisions for management for the purpose of getting buy-in.

BTW - the SEI has a concept of a Utility Tree in their ATAM methodology that's pretty cool: basically a way to weigh non-functional, "ility"-type tradeoffs when doing an architecture assessment.

January 23, 2008
I very much like the idea of using decision trees to adapt the process to the project. I have, however, a small remark: using the acronym BDUF for the design camp/process implies, in my opinion, a negative view of software design. This term is mainly used by agile people who are against the practice.
March 25, 2008
Nice use of a decision tree.

As a developer (and graduate student as well), this situation comes up often. Deciding where to draw the line with respect to project structure, the build process, unit tests, or developing new features is rarely a simple effort.

I have to agree with the wise words of the Dalai Lama when he says that there are no absolutes and we must judge according to the circumstances. The challenge then becomes understanding the circumstances. Awareness and information-seeking behavior are critical. As it is also said, "see things for yourself."

Very interesting blog, I look forward to reading more of it.
James Mauldin
August 31, 2011
Your math is off, but you're still correct. The total utilities of each design decision should be equal. Let's set the YAGNI utilities to 10 + 5 = 15 so that they match the BDUF utilities, 3 + 10 + 2 = 15, in which case YAGNI comes out even better (7.5).

Now let's test the sensitivity of your argument by changing the YAGNI probabilities to .25 and .75 (change is inevitable!) and you're still ahead (5.0).

But my experience is that YAGNI always wins - with one caveat. Too often there is a knee-jerk reflex to build in flexibility where the requirements don't ask for it, with all the resulting evil consequences: code bloat, bugs, greater testing and maintenance effort, higher configuration and deployment costs and so on.

The one caveat is that you have to define the major building blocks of BDUF, along with some drop-dead dates, otherwise the whole process just drags on forever.