Finding more precise and repeatable ways to make design decisions seems to be the holy grail of software development productivity. Compilers already perform the tedious task of converting high-level instructions into low-level assembly code or byte code. There is little debate that the transition from coding in assembler to coding in high-level programming languages has brought productivity gains. Similar gains are now being sought by a number of groups around the Model Driven Architecture (MDA) movement. The basic promise of MDA is that instead of writing solutions in a high-level programming language, developers / analysts will define a platform-independent model of the problem to be solved and leave the code generation to a model compiler. Of course, this compiler will have to make design decisions during code generation. This forces the developers of the model compiler to find precise guidelines on when one design is preferred over another, much like a novice developer expects clear-cut answers to questions like "should I use a factory?" or "is it worthwhile to extract a common base class?". Most of the time, the novice developer is frustrated that the master developer cannot give a clear dividing line for such decisions, but instead mumbles "it depends" while scratching his or her chin.
"Now what does it depend on?" is surely the burning question on the novice developer's mind. Every electrician can tell me that I need to use gauge 8 wire instead of gauge 10 if I have more than X amps of current going through the wire! Why can't I get such a book to make my software design decisions? The answer typically includes such things as how flexible the solution has to be, how much time is available, how fast it has to be, etc. These criteria are the forces that form an important part of documenting design patterns. One force that is often not spelled out explicitly, though, is: what has been done so far?
Design decisions are hardly ever made in a vacuum. Typically, something has already been created and the next decision builds on it. The system being created has an inherent memory of what has been done to it in the past. This notion reminded me of a physics concept called hysteresis. According to Webster's, hysteresis is "a retardation of an effect when the forces acting upon a body are changed". For non-physics PhDs, the most common textbook example is that of magnetism. Once a magnetic field is applied to a piece of metal, the metal will "remember" the orientation of the magnetic particles. When the field is reversed, it takes more strength to undo the existing magnetism before the piece changes polarity. This is the "retardation of an effect" the definition talks about. This process is typically depicted as a hysteresis curve: the X axis shows the force being applied (e.g., the magnetic field) and the Y axis shows the effect (e.g., the magnetic orientation of the piece of metal subjected to the field). The curve forms a loop because the path traced when the force increases differs from the path traced when it decreases.
Many control systems intentionally build in hysteresis. For example, a refrigerator may start the compressor when the temperature rises above 45 degrees. If the compressor stopped as soon as the temperature dropped below 45 degrees, it would run for a few seconds until the temperature reached 44.99999 degrees and stop, just to kick in again as soon as the temperature climbed one thousandth of a degree. No fridge would last very long at that rate. Therefore, thermostats are designed with a built-in hysteresis, so that the compressor keeps running until the temperature reaches 43 degrees.
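In code, such a two-threshold controller is almost trivial, which makes it a nice illustration of the stateful "it depends". Here is a minimal sketch (the class and method names, and the 45 / 43 degree thresholds, are simply taken from the fridge example above, not from any real appliance):

```java
// Minimal sketch of a hysteresis-based thermostat.
public class Thermostat {
    private static final double ON_THRESHOLD = 45.0;  // start cooling above this
    private static final double OFF_THRESHOLD = 43.0; // stop cooling below this

    private boolean compressorRunning = false; // the system's "memory"

    public boolean shouldRun(double temperature) {
        if (temperature > ON_THRESHOLD) {
            compressorRunning = true;   // too warm: start cooling
        } else if (temperature < OFF_THRESHOLD) {
            compressorRunning = false;  // cool enough: stop
        }
        // Between 43 and 45 degrees we change nothing: the answer
        // depends on the current state -- that is the hysteresis band.
        return compressorRunning;
    }
}
```

Called at 44 degrees on the way down, shouldRun returns true (we are in a cooling cycle); called at 44 degrees on the way up, it returns false. Inside the band, the answer depends entirely on the stored state.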
Thanks for the refresher in physics and home appliances, you might think, but what does this have to do with software design? The interesting result of hysteresis is that it exhibits a certain amount of "it depends". Should the fridge's compressor run at 44 degrees? It depends -- on whether the fridge is in a cooling or warming cycle. At what temperature should the compressor go on or off? Again, it depends -- it should go on at 45 degrees but turn off at 43 degrees. The middle region of the hysteresis curve is exactly the region where there is no clear-cut answer unless you know the history of the system you are observing.
The same could not be more true of software design. Most decisions are not made by a single parameter exceeding a specific threshold, such as "if you have more than X applications, you should use a message broker instead of point-to-point connections." These decisions depend very much on what you have done so far. Have you used a message broker everywhere else? Then it is probably reasonable to use one even for only 3 applications. Have you built a sophisticated framework that lets you create point-to-point connections very easily? In that case, using point-to-point connections for 4 applications might still be fine. You could imagine the same hysteresis curve with the X axis being the number of applications and the Y axis indicating broker or point-to-point. Just as with the magnetic material, the decision is subject to a retardation effect. So there is no precise answer to the question "how many applications do you need to have to justify the use of a message broker?", even if most external factors such as scalability or reliability are held constant.
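Just to make the analogy concrete, one could give the broker-vs.-point-to-point decision the same shape as the thermostat above. This is a toy sketch, not a real decision procedure; the class name and the thresholds of 3 and 10 applications are invented purely for illustration:

```java
// Toy illustration: the "right" integration style depends on the
// current state of the landscape, not just on the application count.
public class IntegrationStyleAdvisor {
    public enum Style { POINT_TO_POINT, MESSAGE_BROKER }

    private Style currentStyle = Style.POINT_TO_POINT; // what exists today

    public Style advise(int numApplications) {
        if (numApplications > 10) {
            currentStyle = Style.MESSAGE_BROKER;  // too many links to manage
        } else if (numApplications < 3) {
            currentStyle = Style.POINT_TO_POINT;  // a broker is overkill
        }
        // Between the thresholds, stick with what is already in place:
        // the system's "memory" decides.
        return currentStyle;
    }
}
```

For 3 to 10 applications the advice is simply "whatever you have been doing" -- the hysteresis band made explicit.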
I hope this little excursion into hysteresis illustrated one reason why software design decisions are not as clear-cut as some engineering decisions, which typically apply to new construction and do not have to deal with the "memory" of an existing system. I believe this is yet another example that underlines the usefulness of patterns in explaining design decisions. Patterns allow us to list the individual forces impacting a design decision (often including the history of prior design decisions) without having to present a magic number that forces the decision one way or another. And when someone does nag you for that magic decision-making number, just tell them it has something to do with hysteresis...