Moving on with the second post in my series on Enterprise DevOps.
As I write this, I am listening to this excellent presentation by John Allspaw and Paul Hammond of Flickr, which goes into great detail on the core DevOps problems of writing code and applying continuous integration principles.
However, my intent in this post has more to do with the problem of "infrastructure as code." As I noted previously, DevOps practices in the enterprise case would need to accommodate a wide variety of tools and middleware beyond Java, RoR, and the LAMP stack.
In general, I view this all as an exercise in element configuration management (see the discussion of the three flavors of configuration management, and of element vs. enterprise config). The question for the enterprise is:
to what degree is your current infrastructure technology portfolio suitable for DevOps practices?
This is a hard question when we consider the plethora of products operating in a modern enterprise - portfolios can run to tens of thousands of distinct products spanning multiple eras of major platform technologies.
But there are significant commonalities as well, which is why I believe that the concept of "Enterprise DevOps" should not be dismissed out of hand. Regardless of platform, scale, or vintage, I would propose that ANY configurable product that might figure into a DevOps pipeline needs to be "well behaved" in certain ways:
- Ideally, it needs to treat configuration directives as scriptable input and output. In general, these configuration files should be based on a human-readable, character-based syntax; XML, JSON, and key-value formats are common. Semantics are typically tool-specific, with richer semantics used for richer functionality (e.g. SQL DDL). Of course, general-purpose programming languages can be seen as the apex of this continuum, but in general the "infrastructure as code" discussion is about more bounded functionality. (The configuration vs. customization debate lurks just under the surface here.)
- These configuration files and their consuming utilities should in turn be amenable to management in a source repository; text differencing will be a key tool for detecting drift (a minimal sketch of this appears after this list).
- They also need to support scripted, choreographed control, invokable from a command line. Well-known examples would be using isql (isql.exe) for Microsoft SQL Server, or its Oracle analog SQL*Plus, to run data definition language (DDL) scripts (foo.sql) that create, alter, or drop tables and other database objects. Higher-order middleware and production services products (batch scheduling, ETL, EAI, etc.) often follow this pattern.
- Configuration changes should generate predictable output in the form of logs or other system events. This output should provide positive confirmation that the configuration change was enacted as expected. Ideally, the command line utility offers rich flag options so you can tune the log output to some degree (a second sketch after this list shows one way to check such output).
- Fully automating a pipeline that requires changes across multiple platforms will doubtless present architectural challenges. How do I execute DDL on a mainframe DB2 database and report the result back to a Linux-based orchestration/choreography engine in an ALM tool? The ability to communicate across even the highest platform walls is key (the third sketch below gestures at one approach).
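To make the drift-detection point concrete, here is a minimal Python sketch that diffs a deployed configuration file against the baseline copy held in source control. The file paths and names are hypothetical placeholders; in practice you would likely normalize the files first (sort keys, strip comments and volatile values) before comparing.

```python
# Minimal sketch: detect configuration drift by diffing the deployed
# config file against the copy held in source control.
# File paths below are hypothetical placeholders.
import difflib
from pathlib import Path

def config_drift(repo_copy: str, deployed_copy: str) -> str:
    """Return a unified diff between the repo baseline and the deployed file."""
    baseline = Path(repo_copy).read_text().splitlines(keepends=True)
    deployed = Path(deployed_copy).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        baseline, deployed, fromfile=repo_copy, tofile=deployed_copy))

if __name__ == "__main__":
    diff = config_drift("repo/app-config.json", "/etc/myapp/app-config.json")
    if diff:
        print("Drift detected:\n" + diff)
    else:
        print("Deployed configuration matches the repository baseline.")
```

Run on a schedule or as a pipeline gate, a non-empty diff becomes a drift alert rather than an unpleasant surprise at release time.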
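Next, a sketch of scripted, choreographed control with positive confirmation, using Oracle's SQL*Plus as the example CLI. The connection string, script name, and error-prefix checks are illustrative assumptions only; credentials belong in a secrets store, and your own tools will have their own flags and error conventions.

```python
# Minimal sketch: drive a DDL change from the command line and confirm it
# from the tool's output. Connection string and script name are placeholders.
# Note: SQL*Plus only returns a non-zero exit code on SQL errors if the
# script itself includes WHENEVER SQLERROR EXIT, hence the log scan as well.
import subprocess
import sys

def run_ddl(connect_string: str, script: str) -> bool:
    """Run a DDL script via SQL*Plus; return True if no errors are observed."""
    result = subprocess.run(
        ["sqlplus", "-S", connect_string, f"@{script}"],
        capture_output=True, text=True)
    output = result.stdout + result.stderr
    print(output)  # keep the raw log available to the pipeline
    errors = [line for line in output.splitlines()
              if line.startswith(("ORA-", "SP2-"))]
    return result.returncode == 0 and not errors

if __name__ == "__main__":
    ok = run_ddl("scott/tiger@devdb", "foo.sql")
    sys.exit(0 if ok else 1)
```

The same pattern - invoke the tool, capture its output, decide pass/fail from exit code plus log scanning - applies to batch schedulers, ETL engines, and most other CLI-capable middleware.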
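Finally, a sketch of reporting a remote change back across platform walls. Here the remote step runs over ssh and the result is posted as JSON to a hypothetical callback endpoint on the orchestration engine; a real mainframe integration would more likely go through a vendor agent, a job submission interface, or a messaging bridge, but the shape of the problem is the same.

```python
# Minimal sketch: run a change on a remote platform and report the outcome
# back to an orchestration engine in structured form. The ssh target, remote
# command, and callback URL are all hypothetical placeholders.
import json
import subprocess
import urllib.request

def run_remote_step(host: str, command: str, callback_url: str) -> None:
    result = subprocess.run(["ssh", host, command],
                            capture_output=True, text=True)
    report = {
        "host": host,
        "command": command,
        "returncode": result.returncode,
        "log_tail": result.stdout.splitlines()[-20:],  # last lines as evidence
    }
    req = urllib.request.Request(
        callback_url,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    run_remote_step("db2admin@zos-gateway", "db2 -tvf /u/deploy/foo.sql",
                    "https://alm.example.com/api/steps/1234/result")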
The GUI problem. All of this may seem obvious to anyone whose daily work is software engineering, and to many systems engineers as well. But some system administrators spend much of their time in graphical user interface (GUI) consoles, which become their primary means of interacting with the elements under their management - whether databases, messaging middleware, batch schedulers, or what have you.
Now, I have on occasion encountered platforms with such a strong GUI bias that lights-out, scripted processing was almost impossible, but, thank goodness, this has been relatively rare. (Anyone out there with stories to share?)
In the more common case, while command line capabilities exist, the sales cycle and training center on the executive-friendly GUI console, with command line use left as a matter of personal choice for administrators. I have on occasion spent much time as an architect researching the full CLI (command line interface) of a given product, because there was no other way to "get there from here" in terms of a solution.
Often, I would unearth functionality that was unknown or underappreciated by the designated tool administrator. (Tact may be called for in pointing this out.) I have always found surprises in that kind of research - usually good, sometimes disappointing - about a tool's true power.
Thus, the constraint for some of you may in fact be your operations teams' unfamiliarity with, or unwillingness to crack, that obscure "Command Line Reference" manual they received with the product documentation - or never downloaded from the vendor. Again, respect and tact may be called for in pursuing this particular line of inquiry.
Some of you may have started to manage the Technology Product Lifecycle. If you have enumerated the technology product portfolio of your enterprise, one attribute you might track is "DevOps suitable." See that link also for how to distinguish between the Infrastructure Service Lifecycle and the Application Service Lifecycle.
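If you do track such an attribute, even a very simple structure is enough to start surfacing the gaps. A hypothetical sketch (field names and products are illustrative only):

```python
# Minimal sketch: tracking a "DevOps suitable" attribute on portfolio entries.
# The fields and example products are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProductEntry:
    name: str
    platform: str
    devops_suitable: bool  # scriptable config, CLI control, usable log output

portfolio = [
    ProductEntry("Oracle Database", "Linux", True),
    ProductEntry("Legacy batch scheduler", "z/OS", False),
]

# Products that need remediation before they can join an automated pipeline
gaps = [p.name for p in portfolio if not p.devops_suitable]
print(gaps)
```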
As always, constructive dialog, criticism and correction much appreciated.
- Introducing DevOps in a Traditional Enterprise, Niek Bartholomeus
- Enterprise DevOps: Scaling Build, Deploy, Test, Release, Eric Minick, UrbanCode
- 10+ Deploys per Day, John Allspaw and Paul Hammond of Flickr
- The "Hosting Zone of Contention" - an oldie from the 1st edition but still relevant.