Part V, and the last, of the series "Architecture, Agile, and personal re-invention."
For my long-time readers who may be scratching their heads over the last few posts, here's where I bring it back home.
Evolving the Calavera simulation of enterprise IT
As I've stated elsewhere, the Calavera project is a microkernel. But since the intent is to understand the emergent dynamics, it needs to scale, in a couple of ways.
A simple one-server Tomcat instance needs to expand to a multi-tier, multi-node example on more modern technology (e.g. the MEAN stack). In keeping with 12-factor architecture, I will look to containers and embedded web servers such as Jetty. Then we need to find some way to simulate several of those multi-tier examples, then several hundred. The simulation thus moves from VMs, to containers, to some simplified abstraction of a pipeline as an object in a simulation engine.
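As a very rough sketch of that last stage, here is what a pipeline might look like as an object in a discrete-event simulation engine. I use SimPy purely as an illustrative choice; the class, stage names, and durations are hypothetical, not Calavera's actual design:

```python
import simpy

class Pipeline:
    """A delivery pipeline reduced to a simulation object:
    work items flow through ordered stages, each consuming time."""

    def __init__(self, env, name, stage_times):
        self.env = env                  # simpy.Environment
        self.name = name
        self.stage_times = stage_times  # ordered {stage: duration}

    def run_item(self, item):
        """Push one work item through every stage in order."""
        for stage, duration in self.stage_times.items():
            yield self.env.timeout(duration)  # simulate elapsed stage time
            print(f"t={self.env.now:>3} {self.name}: {item} cleared {stage}")

env = simpy.Environment()
pipe = Pipeline(env, "product-1", {"commit": 1, "build": 5, "test": 10, "deploy": 2})
env.process(pipe.run_item("feature-42"))
env.run(until=50)
```

Scaling the simulation to several hundred pipelines is then just a matter of instantiating more such objects in the same environment.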
Now things get interesting. We need to define a metamodel. The following is my interpretation of 12-factor principles combined with other lessons learned.
First, there is the overall concept of the pipeline. This represents the control and choreography architecture that manages the product lifecycle. See this Chef announcement for usage of the term.
(I'll be elaborating the other concepts further as I progress on the third edition of my book.)
Notice that the pipeline can bootstrap itself: because the pipeline is itself composed of particular deployed products, I can use the pipeline to build the pipeline. This will be the Calavera approach across the board.
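A toy sketch of that bootstrap idea, with invented names: if a pipeline is just another deployed product, then a seed pipeline can build the real one, which can thereafter rebuild itself:

```python
class Product:
    """Anything the pipeline can build and deploy."""
    def __init__(self, name):
        self.name = name

class Pipeline(Product):
    """A pipeline is itself a deployed product, so pipelines
    can build pipelines -- including themselves."""
    def build(self, product):
        print(f"{self.name} builds and deploys {product.name}")
        return product

# Bootstrap: a seed pipeline builds the main pipeline,
# which can thereafter rebuild itself.
seed = Pipeline("seed-pipeline")
main = seed.build(Pipeline("main-pipeline"))
main.build(main)
```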
We can scale a simulation of a product by simulating more and more nodes with associated complexity, à la Cockcroft. But thanks to modern automation, the number of people needed to build and run a product scales much, much more slowly. Nowadays a product can scale up an order of magnitude while the staff needed remains constant, or grows by one person or some small linear factor.
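As a back-of-the-envelope illustration only (the logarithmic staffing model is my assumption, not measured data), the divergence between node count and headcount looks like this:

```python
import math

def staff_needed(nodes, base=2):
    """Toy model: a small fixed team plus roughly one more
    person per order of magnitude of nodes. Illustrative only."""
    return base + math.ceil(math.log10(max(nodes, 1)))

for nodes in (1, 10, 100, 1000, 10000):
    print(f"{nodes:>6} nodes -> ~{staff_needed(nodes)} staff")
```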
Simultaneously, we need to understand the emergent dynamics as the associated numbers of people scale. This is not the same as scaling an individual simulated product.
This article is an excellent discussion of such emergence. I am developing a related theory that brings together the Agile, product-centric, unicorn world with the traditional enterprise world of ITIL and EA in a unified framework.
I am currently using this framework as the basis for my teaching; I will elaborate in an upcoming blog post, and it will also be the basis for the third edition of my book. The simulation will build up in a similar manner (sketched in code after the list):
- Simple collaboration capabilities (kanban, basic ticketing, etc.)
- Then more differentiated processes (ITSM, but with a skeptical, incremental, and queue-aware approach)
- Then finally the sensing mechanisms (including architecture & analytics) needed for adaptively coping with and directing complexity at scale
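A minimal sketch of how those capability layers might accumulate in the simulation, using the level names introduced below (all names are placeholders, not a finished design):

```python
# Hypothetical, cumulative layering of simulation capabilities
# by maturity level (level names anticipate the Cynefin mapping below).
SIMULATION_LEVELS = [
    ("inception_and_collaboration", ["kanban", "basic_ticketing"]),
    ("elaboration",                 ["itsm_processes", "queue_metrics"]),
    ("maturation",                  ["architecture_sensing", "analytics"]),
]

def capabilities_through(level):
    """Each level keeps everything the earlier levels already provided."""
    acquired = []
    for name, caps in SIMULATION_LEVELS:
        acquired.extend(caps)
        if name == level:
            break
    return acquired

print(capabilities_through("elaboration"))
# -> ['kanban', 'basic_ticketing', 'itsm_processes', 'queue_metrics']
```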
Each level can be roughly mapped onto the Cynefin framework:
- Inception and collaboration: Cynefin Obvious (but with a big asterisk, because we are dealing with computers)
- Elaboration: Cynefin Complicated
- Maturation: Cynefin Complex
Chaos as defined in Cynefin can emerge at any point...
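The same mapping, expressed as data; note that Chaos deliberately gets no entry of its own, since it can surface at any level:

```python
# Maturity levels mapped to Cynefin domains. Chaos is omitted
# on purpose: it can emerge at any level.
CYNEFIN_MAPPING = {
    "inception_and_collaboration": "Obvious",      # with the asterisk noted above
    "elaboration":                 "Complicated",
    "maturation":                  "Complex",
}
```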
A couple of thoughts in closing...
Architecture as code
As I was dabbling with Docker, the thought "architecture as code" occurred to me. What if Docker containers were mapped onto UML or ArchiMate components, in such a way that a containerized testbed system could be generated from the modeling exercise?
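To make the thought experiment concrete, here is a hypothetical sketch that treats a trivially simplified, ArchiMate-flavored component model as input and emits a docker-compose definition for a testbed. The model schema, the stereotype-to-image mapping, and the image choices are all inventions for illustration:

```python
# Hypothetical "architecture as code": a simplified component
# model (stand-in for a UML/ArchiMate export) rendered as a
# docker-compose file for a containerized testbed.
import yaml  # PyYAML

model = {
    "components": [
        {"name": "web", "stereotype": "webserver",   "depends_on": ["app"]},
        {"name": "app", "stereotype": "application", "depends_on": ["db"]},
        {"name": "db",  "stereotype": "database",    "depends_on": []},
    ]
}

# Illustrative mapping from component stereotype to container image.
IMAGES = {"webserver": "nginx", "application": "node", "database": "mongo"}

services = {
    c["name"]: {
        "image": IMAGES[c["stereotype"]],
        **({"depends_on": c["depends_on"]} if c["depends_on"] else {}),
    }
    for c in model["components"]
}

print(yaml.safe_dump({"services": services}, sort_keys=False))
```

Feed the output to `docker-compose up` and the modeling exercise becomes a running testbed; the interesting work would be in round-tripping changes back into the model.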
Advantage: Pre-packaged product access
Finally, I have my students installing Nagios for a reason: HP and IBM make it way too hard to stand up their stuff on similar terms! Commercial software vendors need to make it much simpler for developers, architects, and engineers to try out their software. If I can sudo apt-get install from a resident package manager, why am I going to try something that requires a sales-cycle interaction?
All for now... but much more to come.