Even though my exploration into the questions I broached in my last post didn’t actually continue with what you will find in this post, I’m going to pretend it did because it makes for a better narrative. Please bear with me.
I’ve been working my way through the very new and excellent book Pro Puppet by James Turnbull and Jeffrey McCune. Given my last set of questions, I was excited to hit the third chapter in the book, which is all about workflow: how Puppet gets used with a VCS (git) and all that good stuff. And then I started to read…
[stextbox id="warning"]Let me say right up front that this is (so far) an excellent book. I am enjoying it thoroughly. It’s so good that I’m linking to Amazon and showing a picture of the cover so I can help their book sales. It has a ton of stuff in it at exactly the right level for the sophisticated user who wants to engage with Puppet.
I need to say this because in just a moment I’m going to be quoting from the book in an apparently critical fashion. This is because I want to level some criticism at the tools it describes and at how we are being compelled to use them by the lack of a better alternative — not at the book or the authors’ work. I have no doubt Turnbull and McCune are describing the state of the art. I just desperately want the state of the art to suck less.[/stextbox]
Chapter three tells the tale of a standard infrastructure (mail, web, and DB servers) managed using separate development, testing, and production environments, all of it handled by a single Puppet install. And in this happy little world we have three team members: the system administrator[1], the developer, and an operator, all attempting to play nice[2]. Sounds pretty much like your workplace, right?
Ok, so let me see if I can summarize just how the authors propose this should all work. First, the prep work:
- Within the /etc/puppet directory, we have a modules directory holding the production environment configs. This directory is made into a git repository[3].
- We clone that repository into newly made /etc/puppet/environments/development and /etc/puppet/environments/testing directories[4], which will serve the dev and testing environments respectively. Git “remote” references are then added between the repositories to make it easier to move changes between them as necessary.
- Next we create a new “bare” central repository[5] that will serve as a rendezvous point where the three team members exchange changes among themselves and with the Puppet server config directory (which will now be checked out from this central repository).
- Each team member is expected to check out a working copy of the central repository[6] into their home directory.
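To make the pile of repositories concrete, here is a rough sketch of that prep work as shell commands. The layout and names are my own stand-ins, not the book’s listings: a scratch directory takes the place of /etc/puppet, and “alice” is a hypothetical team member.

```shell
# Scratch directory standing in for the filesystem around /etc/puppet.
BASE=$(mktemp -d)

# Repo #1: the production modules directory becomes a git repository.
mkdir -p "$BASE/puppet/modules"
git init -q -b master "$BASE/puppet/modules"
git -C "$BASE/puppet/modules" -c user.email=p@x -c user.name=sysadmin \
    commit -q --allow-empty -m "initial production config"

# Repos #2 and #3: clone production into per-environment directories.
mkdir -p "$BASE/puppet/environments"
git clone -q "$BASE/puppet/modules" "$BASE/puppet/environments/development"
git clone -q "$BASE/puppet/modules" "$BASE/puppet/environments/testing"

# A "remote" reference between the repositories, so changes can move across.
git -C "$BASE/puppet/environments/development" remote add testing \
    "$BASE/puppet/environments/testing"

# Repo #4: a bare central repository as the shared rendezvous point.
git clone -q --bare "$BASE/puppet/modules" "$BASE/central.git"

# Repo #5 (really one per person): each team member clones central at home.
git clone -q "$BASE/central.git" "$BASE/home/alice/puppet-work"
```

That is five git-controlled directories before anyone has changed a single line of configuration.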
Now for the actual work. To make an edit, each person will:
- create a branch in their working copy, within which they will make their edits
- make the change to a file in that working copy
- commit that change[7]
- push that staged commit, with the new branch in it, up to the central repository
- on the Puppet server itself, logged in as the puppet user and sitting in the right config directory, use git to check out the right branch from the central repository into that directory[8]. This checkout operation switches the directory over to the branch.
- run the puppet agent command (maybe in --noop mode to make sure the change really makes sense)
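Strung together, the per-change ritual looks roughly like the following. This is my sketch, not the book’s listing: the scratch layout, the ntp_fix branch, and the ntp.conf file are all made up, and the final puppet run is left as a comment since it needs a real Puppet install.

```shell
# Minimal stand-ins for the central repo, a team member's working copy,
# and the development environment checkout.
BASE=$(mktemp -d)
git init -q -b master "$BASE/seed"
git -C "$BASE/seed" -c user.email=p@x -c user.name=sysadmin \
    commit -q --allow-empty -m "initial config"
git clone -q --bare "$BASE/seed" "$BASE/central.git"
git clone -q "$BASE/central.git" "$BASE/home/alice/puppet-work"
git clone -q "$BASE/central.git" "$BASE/puppet/environments/development"

# The per-change ritual:
cd "$BASE/home/alice/puppet-work"
git checkout -q -b ntp_fix                         # 1. create a topic branch
echo "server pool.ntp.org" > ntp.conf              # 2. make the change
git add ntp.conf
git -c user.email=a@x -c user.name=alice \
    commit -q -m "point ntp at the pool"           # 3. commit it locally
git push -q origin ntp_fix                         # 4. push branch to central

# 5. On the Puppet server, as the puppet user, switch the environment
#    directory over to the new branch.
cd "$BASE/puppet/environments/development"
git fetch -q origin
git checkout -q -t origin/ntp_fix

# 6. Finally, test the change (commented out: needs an actual Puppet server):
# puppet agent --noop --environment development
```

Six steps, two machines, and at least five git commands per change — and that is the happy path with no merge conflicts.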
Doesn’t that sound like fun? Does the following quote from the book make it sound any more fun?
(speaking of a second team member repeating the process we just described with his own change…)
This process will switch the current development environment away from whatever branch it was previously on. This could potentially interfere with the work of {the first team member}. If this becomes a common problem, it is possible to set up more environments to ensure each contributor has their own location to test their changes without interfering with others.
So we are at five separate git-controlled spaces, each with its own state (branch, remote references, etc.), and we’re still bound to bump into our colleagues. On top of that, we’ve got a lovely multi-step process after every change, one the book elsewhere describes more succinctly as:
The overall workflow {the second team member} follows is to push their topic branch to the central repository, fetch the changes in the development environment’s repository, check out the topic branch, then run the Puppet agent against the development environment.
I can’t tell whether to be more dismayed by the number of steps, the possibility for human error, the sheer quantity of git commands, the need to have everyone run something manually on the server as a separate shared user, or what. At the very least, it appears each person has to keep lots of different context (what branch, what environment, what remote repos, what change, what it is going to affect, and so on) in their head for every change to the environment.
Now, I’m sure that some of this can be ameliorated by writing a number of shell scripts[9], but boy does it give me the heebie-jeebies. I know it certainly doesn’t make me feel any better about the questions I raised in my last post.
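For instance, a hypothetical wrapper — my invention, not anything the book shows — could at least collapse the push/fetch/checkout dance into one command:

```shell
# Hypothetical helper (not from the book): push a topic branch and point an
# environment checkout at it in a single step.
deploy_branch() {
    branch=$1; envdir=$2
    git push -q origin "$branch" || return 1
    git -C "$envdir" fetch -q origin || return 1
    # Switch the environment; create a local tracking branch on first deploy.
    git -C "$envdir" checkout -q "$branch" 2>/dev/null \
        || git -C "$envdir" checkout -q -t "origin/$branch"
}

# Demo against a throwaway repo layout:
BASE=$(mktemp -d)
git init -q -b master "$BASE/seed"
git -C "$BASE/seed" -c user.email=p@x -c user.name=op \
    commit -q --allow-empty -m init
git clone -q --bare "$BASE/seed" "$BASE/central.git"
git clone -q "$BASE/central.git" "$BASE/work"
git clone -q "$BASE/central.git" "$BASE/dev-env"
cd "$BASE/work"
git checkout -q -b demo_change
git -c user.email=a@x -c user.name=op commit -q --allow-empty -m "demo change"
deploy_branch demo_change "$BASE/dev-env"
```

But notice that even the wrapper still has to know which environment directory, which branch, and which remote — the context-juggling doesn’t go away, it just moves into arguments.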
Summary: love the book, dislike this particular solution in it.
Luckily, I did find a better answer…
1. who is referred to using a female pronoun — kudos to the authors! [↩]
2. as opposed to some of the potential Lord of the Flies scenarios [↩]
3. Count along with me, boys and girls, as we create a number of git-controlled directories/repositories on our journey. This will be number one. [↩]
4. That would be git repos 2 and 3. [↩]
5. Yup, #4. [↩]
6. One copy each, but we will just follow one ball at a time, so call this git area #5. [↩]
7. since we are in git-land, a commit only records the change in the local repository, so it is perhaps better described as “staging” the change until it gets pushed. [↩]
8. yes, you heard all that correctly; I hope you got it all down [↩]
9. I don’t begrudge the authors for not demonstrating that; as an author myself, I understand how it sometimes doesn’t make sense to add another layer to an already complex explanation. [↩]