Part III of the series, "Architecture, Agile, and personal re-invention"
In the previous post we covered lightweight virtualization and its usefulness for architects. On to the actual software environment. Vagrant gives you raw servers, but you still need to install things on them. At first I followed Vagrant's advice and used simple shell scripts to do things like install Java and Tomcat. That approach broke down when I got to configuring Jenkins, so I took a couple of weeks and worked through a Chef tutorial.
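The shell-script starting point looks something like the following Vagrantfile fragment. This is a sketch of the approach, not Calavera's actual provisioning; the base box and package names are illustrative assumptions.

```ruby
# Hypothetical Vagrantfile fragment: provision a node with a plain inline
# shell script, as the Vagrant docs suggest for getting started.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # assumed base box

  # Everything is imperative shell: fine for Java and Tomcat,
  # unworkable once Jenkins configuration enters the picture.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y openjdk-7-jdk ant tomcat7
  SHELL
end
```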
As this excellent two-parter puts it, "Vagrant is easy - Chef is hard." But Chef was the only tool with full support for automating the Jenkins configuration at an API level, so I went with it, via the Chef Development Kit (ChefDK). I also favored its imperative style over Puppet's declarative approach as better suited to an introductory class.
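For comparison with the shell script, the same install expressed as Chef resources might look like this recipe sketch. The recipe and package names are assumptions for illustration; Calavera's actual cookbooks differ.

```ruby
# Hypothetical Chef recipe (e.g. recipes/tomcat.rb): declare the desired
# packages and service state, and let Chef converge the node to match.
package 'openjdk-7-jdk'

package 'tomcat7'

service 'tomcat7' do
  action [:enable, :start]   # ensure Tomcat starts now and on boot
end
```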
Calavera wound up with six nodes:
Manos: Java/JUnit/Ant/Tomcat development workstation with remote git
Cerebro: central git server. The git instances in the cluster are not GitHub; this emulates an on-premise enterprise environment. A webhook kicks off the Jenkins job on check-in.
Hombros: Jenkins master server
Brazos: Jenkins slave server
Espina: Artifactory package repository (Jenkins archives the build here)
Cara: “production” node (Tomcat), installed via Chef, which pulls the build from Artifactory. I chose NOT to deploy using Jenkins, in the interest of a distinct release boundary.
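The check-in webhook on Cerebro can be sketched as a small post-receive hook that hits Jenkins' standard remote build trigger. Everything specific here (hostname, port, job name, token) is a hypothetical stand-in; the repository holds the real configuration.

```ruby
#!/usr/bin/env ruby
# Hypothetical post-receive hook on the Cerebro git node: on each push,
# trigger the build job on the Jenkins master (Hombros).
require 'net/http'
require 'uri'

JENKINS = 'http://hombros:8080'   # assumed Jenkins master address
JOB     = 'calavera-build'        # hypothetical job name
TOKEN   = 'calavera-token'        # the job's build-trigger token

# Jenkins' standard trigger endpoint: <jenkins>/job/<name>/build?token=<t>
def trigger_url(base, job, token)
  URI("#{base}/job/#{job}/build?token=#{token}")
end

# Git sets GIT_DIR when it invokes hooks; guard so the HTTP request
# only fires in that context.
if ENV['GIT_DIR']
  response = Net::HTTP.get_response(trigger_url(JENKINS, JOB, TOKEN))
  warn "Jenkins trigger returned HTTP #{response.code}"
end
```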
Nodes are provisioned with Chef Zero (part of ChefDK) running “out of band” as part of the foundational services. I had considered running Chef Server as one of the cluster, but its resource requirements were too steep. (When the MacBook Air gets to 16 or 32 GB of main memory...)
I had a chicken-and-egg choice between driving everything from a Vagrantfile or from a ChefDK .kitchen.yml file, and opted for the Vagrantfile, so I can replace Chef if I want to. I see no reason to replace Vagrant, nor any logical alternative to it; it's a great lightweight tool for pulling it all together.
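The Vagrantfile-driven arrangement might be sketched as follows: Vagrant defines the nodes, and Chef Zero is just a provisioner it invokes per node. Node names follow the cluster above, but the recipe names and paths are illustrative assumptions.

```ruby
# Hypothetical multi-node Vagrantfile sketch: Vagrant owns the topology,
# Chef Zero converges each node. Only three of the six nodes are shown.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"          # assumed base box

  {
    "cerebro" => "calavera::git_server",     # central git
    "hombros" => "calavera::jenkins_master", # Jenkins master
    "cara"    => "calavera::tomcat_prod"     # "production" Tomcat
  }.each do |name, recipe|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.provision "chef_zero" do |chef|
        chef.cookbooks_path = "cookbooks"    # assumed repo layout
        chef.nodes_path     = "nodes"
        chef.add_recipe recipe
      end
    end
  end
end
```

This keeps Chef at arm's length: swapping in another configuration tool means changing only the provisioner blocks, not the cluster definition.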
To set up a DevOps pipeline, one needs at least one programming language. I first learned test-driven development with Java, JUnit, and Ant, and I wanted a language requiring an actual build step, so I created a simple Tomcat servlet wrapped in a test harness that Ant could execute.
This is trivial and old school; I'm looking at Node.js and thinking about multi-tier architectures. But from an architectural perspective, the pipeline is the pipeline. The walking skeleton is the point: it need not have the flesh of multiple tiers or the latest sexy new stack.
I'm curious to see whether a MEAN-stack pipeline MUST differ from a Java pipeline, and why, or how data and app tiers complicate things. This is where reality challenges the convenient abstractions, and why I am doing this explicitly as an exercise in enterprise architecture.
After endless cycles of building and destroying, I released the Calavera alpha earlier this month. I've built it without error many times now, but it's an ephemeral experiment, not well hardened against version updates or even URL changes for the components it needs. That doesn't matter; it's not the point. It's been useful in giving me hands-on grounding in the current technology, ensuring that my mental model of a DevOps pipeline is well founded, and providing a testbed for my UST labs.
Finally, even though I purchased and scanned the book on test-driven Chef, I haven't gotten there yet. There are interesting nuances to test-driven infrastructure as code, but it didn't seem the best use of my time, and it would have made things harder for students to understand. I realize there are strongly held views on the topic…
All is documented in more detail at https://github.com/CharlesTBetz/Calavera.
Was it worth it? This effort generated several insights that I had otherwise been overlooking in the IT4IT work; a better understanding of the new role of package repositories like Artifactory was one benefit. More on this to come: the IT4IT Agile workstream snapshot should be released within a couple of months.