Understanding Hierarchical Data Lookups

An in-depth explanation of hierarchical data lookups and how they relate to infrastructure configuration

In this post I’m going to explain a concept called hierarchical data lookups. That’s a term that could mean different things in different contexts, so to be clear: although this is a pretty general concept, I’m going to discuss hierarchical lookups as they relate to Jerakia, an open source data lookup tool. I’ll start by looking at what we mean by hierarchical lookups and then move on to how that is relevant to infrastructure management and other uses.

Defining hierarchical lookups

In essence, Jerakia is a tool that can be used to look up a key-value pair, that is to say, given a key it will give back the appropriate value. This is certainly nothing special or new, but the crucial difference here is the way in which the data is looked up. Rather than just querying a flat data source and returning the value for a requested key, a hierarchical lookup performs multiple queries against a configured hierarchy, descending to the next layer in the hierarchy until we find an answer. The end result is that we can define key-value pairs on a global basis but then override them under certain conditions, based on the hierarchical resolution, by placing that key-value pair further up the hierarchy for a particular condition.

Let’s look at a fairly simple example: you need to determine from a data lookup what currency you need to bill a user coming into your site. You already have data that tells you which country and continent your user is based in. You determine that to start with you will bill everyone in USD regardless of where they come from. So you store the key currency with a value of USD in a data source somewhere, and whenever a user starts a transaction, you look up that key, and they get billed in USD.

Now comes the fun part.  You decide that you would like to now start billing customers from European countries in EUR.  Since you already know the continent your user is coming from you could add another key to your data store and then use conditional logic within your code to determine which key to look up, but now we’re adding complexity within the code implementing conditional logic to determine how to resolve the correct value.  This is the very thing that hierarchical lookups aim to solve, to structure the data and perform the lookups in such a way that is transparent to the program requesting the data.

Let’s add another layer of complexity: you’ve agreed to use EUR for all users based in Europe, but you must now account for the UK and Switzerland, which deal in GBP and CHF respectively, and potentially more. Now the demands for conditional logic on the program requesting the data are getting more complicated. To avoid lots of very convoluted conditional logic in your code you could simply map every country in the world to a currency and look up one key; that would be the cleanest method right now. But remember that we generally want to use USD for everyone and only care about changing this default under certain circumstances. If we think about this carefully, we have a hierarchy of importance: the country (in the case of the UK or Switzerland), the continent in the case of Europe, and then the rest of the world. This is where a hierarchical lookup simplifies the management of this data. The hierarchy we need to search here is quite simple;
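Something like this, from most specific to least specific:

    country      (e.g. UK -> GBP, Switzerland -> CHF)
    continent    (e.g. Europe -> EUR)
    worldwide    (default -> USD)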

When we’re dealing with storing data for hierarchical searches, we end up with a tiered hierarchy like the one above. A lookup request to Jerakia contains two things: the key that is being looked up, in this case “currency”, and some data about the context of the request, which Jerakia refers to as the scope. In this instance the scope contains the country and continent of the user. The scope and the key are used together, so when we are talking about hierarchical lookups, rather than just saying “return the value for this key” we are saying “return the value for this key in the context of this scope”. That’s the main distinction between a normal key value lookup and a hierarchical lookup. If you’re thinking of ways to do this in a structured query language (SQL) or some other database API, you might be able to solve this problem – but this is a stripped down example looking up one value; now imagine we throw in tax parameters, shipping costs, and other fun things into the mix – this becomes a complex beast – but not when we think of it as a simple hierarchy.

With a hierarchical data source we can declare a key value pair at the bottom level, in this case Worldwide.  We can set that to USD, at this point any lookup request for the currency will return USD.  But a hierarchical data source allows us to add a different value for the key “currency” at a different level of the hierarchy, for example we can add a value of EUR at the continent level that will only return that value if the continent is Europe.  We can then add separate entries right at the top of the hierarchy for the UK and Switzerland, for requests where the country meets that criteria.

From our program we are still making one lookup request for the data, but that data is looked up using a series of queries behind the scenes to resolve the right data.   Essentially the lookup will trigger up to three queries.  If one query doesn’t return an answer (because there is nothing specific configured in that level of the hierarchy) then it will fall back to the next level, and keep going until it hits an answer, eventually landing at the last level, Worldwide in our example.   So a typical lookup for the currency of a user would be handled as;

What is the value for currency for this specific country?
What is the value for currency for this specific continent?
What is the value for currency for everything worldwide?

Whichever level of the hierarchy responds first wins, meaning that a user from China will get a value of USD – because we haven’t specified anything for Asia or China at the continent or country levels of the hierarchy, the lookup will fall through to our default set at the “worldwide” level. However, at the continent level of the hierarchy we specified an override of EUR for requests where the continent of the requestor is Europe, so users from Germany, France and Spain would get EUR. This wouldn’t be the case for the UK or Switzerland though, because we’ve specifically overridden this at the country level, which is higher in the hierarchy and so wins over the continent that the country belongs to.

So hierarchical lookups are generally about defining a value at the widest possible catchment (eg: worldwide) and moving up the hierarchy overriding that value at the right level.

What is key here is that rather than implementing three levels of conditional logic in our code, or mapping the lowest common denominator (country) one to one with currencies for every country in the world (remember in some cases we may not be able to identify the lowest common denominator) we have found a way to express the data in a simple way and provide one simple path to looking up the data.  Our program still makes one request for the key currency, the logic involved in resolving the correct value is completely transparent.

In this case, we had a scope (the country and continent of the requestor) and a hierarchy to search against that uses both elements of the scope and then falls back to a common catch all.

Applying this to infrastructure management

Jerakia is standalone and can be used for any number of applications that can make use of a hierarchical type of data lookup, but it was originally built with configuration management in mind.  Infrastructure data lends itself incredibly well to this model of data lookup.  Infrastructure data tends to consist of any number of configurable attributes that are used to drive your infrastructure.  These could be DNS resolvers, server hostnames, IP addresses, ports, API endpoints…. there is a ton of stuff that we configure on our infrastructures, but most of it is hierarchical.  Generally speaking a lot of infrastructure data starts off with a baseline default, for example, what DNS resolver to use.  That could be a default value that’s used across the whole of your company, and you add that as a key value pair to a datastore.  Then you find yourself having to override that value for systems configured in your development environment because that environment can’t connect to the production resolvers on your network, and then you may deploy your production environment out to a second data centre and need that location to be different.  But we are still dealing with simple hierarchies, so rather than programming conditionals to determine the resolution path of a DNS resolver we could build a simple hierarchy that best represents our infrastructure, such as;
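Again from most specific to least specific, something along these lines:

    hostname     (overrides for one specific machine)
    environment  (e.g. development)
    location     (e.g. the second data centre)
    common       (company-wide default)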

When dealing with a hierarchy like this, a lookup request must give us a key to look up and contain a scope that tells us the hostname, environment and location of the request. Using the same principles as before, our lookup will make up to 4 queries;

What is the DNS resolver for my particular hostname?
What is the DNS resolver for machines that are in my environment?
What is the DNS resolver for machines that are in my location?
What is the DNS resolver for everyone else?

Again, this is a hierarchical search pattern that will stop at the first query to return an answer and return that value.  We can set our global parameters and then override them in the areas we care about.  We’ve even got a top level hierarchy entry for that one edge case special snowflake server that is different from everything else on the network, but the lookup method is identical and transparent to the application requesting the data.

Jerakia

I’ve tried to give a generic overview of hierarchical lookups, but in particular as they relate to Jerakia.  Jerakia has way more features that build on top of this principle, like cascading lookups, which don’t stop at the first result but build a combined data structure (Hash or Array) from all levels of the hierarchy and return a unified result based on the route taken through the hierarchy; I’ll cover those in a follow up post.  It’s also built to be extremely flexible and pluggable, allowing you to source your data from pretty much anywhere, and it ships with an HTTP API meaning you can integrate Jerakia with any tool regardless of the underlying language.

Our focus has been very much in the configuration management space, particularly integration with Puppet and Hiera, and more recently a lookup plugin for Ansible.  But Jerakia could be used for any number of applications where data needs to be organised hierarchically without introducing logic into the code.

It’s an open source project, so please feel free to contribute or give feedback on the GitHub site.

Managing Puppet Secrets with Jerakia and Vault

A new approach to managing encrypted secrets in Puppet using Vault and Jerakia

Introduction and History

Over the past couple of years I’ve talked a lot about a project called Jerakia. Jerakia is a data lookup system inspired by Hiera, but built to be a stand-alone solution that is decoupled from Puppet, or any particular configuration management system. Thanks to its REST API server architecture it offers opportunities to integrate with other tools aside from Puppet, and because it is configurable with a native Ruby DSL it also offers a solution to people with far-reaching edge cases around data complexity that are hard or impossible to solve in Hiera.  If you’ve never heard of Jerakia before, you can read my initial blog post that covers the basics or see the official website.

Being able to deal with secret data, such as passwords and other sensitive data that needs to be served by Puppet, has proved to be a very important requirement for Puppet users.  Shortly after Hiera was first released as a third party tool by R.I Pienaar in 2012, I developed one of the first pluggable backends for it, the now deprecated hiera-gpg.  hiera-gpg became hugely popular very quickly as people finally had a way to store sensitive production data alongside other non-production data (eg: in the same Git repo) without compromising the details, since anyone browsing the Git repo could only see the encrypted form of the key values.

As hiera-gpg grew in popularity as the first plugin of its kind to solve the problem, it also suffered from a few design limitations, and eventually hiera-eyaml was developed and became the next evolutionary step for handling sensitive data from Hiera.  hiera-eyaml had a better and more modern design than hiera-gpg and has served many users well over the years, but it re-implements a lot of what the yaml backend does with added capabilities to handle encryption.  Hiera has always had the ability to support pluggable backends so you can source your data from a variety of different systems, whether they be files, databases or REST API services, but to be able to support encryption within a Hiera lookup you are tied to using a file-based YAML backend.

Jerakia initially released with the ability to handle encrypted values from any data source, and up until now it’s done that using a mish-mash of the hiera-eyaml library to provide the decryption mechanism.  I’ve always felt this level of integration wasn’t ideal: hiera-eyaml was never designed to be a standalone solution to be used outside of Puppet and Hiera, and the role of providing reliable and secure encryption for your sensitive data is an important one.  So I started looking at platforms that were built specifically for encryption, and more importantly, a shared encryption solution that I could use throughout my toolchain while still maintaining the flexibility to store data where and how I want.  I’ve settled on Vault (but you don’t have to!).

Vault

Vault is an open source encryption platform by Hashicorp, the makers of many great software platforms such as Vagrant and Terraform.  Vault is a highly feature rich system for handling all of your encryption and cryptography needs, most of the features of Vault I won’t even touch on in this post, since there are so many.  To take a quote directly from the website; [source]

Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.

Many people use Vault as a place to store their secret data, like an encrypted database, and can use either the command line or the HTTP API to authenticate and retrieve the encrypted values.  Jerakia strives not to be a database but to offer users flexibility in where and how they want to store their data while performing hierarchical lookups in a uniform fashion regardless of the source of the data, so I was particularly interested in Vault when I read about the introduction of the Transit Backend that is now available.

In a nutshell, the transit backend turns Vault from an encrypted database into “cryptography as a service”.  You create an encryption key that authenticated clients can use to encrypt and decrypt data on-the-fly using Vault’s API, but Vault itself never stores the encrypted or decrypted data; as a dedicated encryption platform it offers an excellent level of protection around authentication and key storage.

This immediately seemed like a great idea for providing the encryption functions to support sensitive data in Puppet and other tools and for a tool like Jerakia.  So the 2.0.0 release of Jerakia now has native support for Vault integration using the Transit secret backend.

Jerakia Encryption

Jerakia has always had the concept of output filters. These are pluggable, data source agnostic and give you the ability to pass the results from any data lookup from any source to a filter that can perform modifications to the results before it’s sent back to the requestor.  In Jerakia 1.x there was an output filter called encryption which was a filter that tried to pick out hiera-eyaml style encoded strings and decrypt them using the slightly hacky integration I touched on earlier.

In Jerakia 2.0 the concept of encryption has become a bit more of a first class citizen, and it’s also pluggable.  In Jerakia 2.0 you can enable encryption and specify a provider for the encryption mechanism you want to use – the shipped provider for encryption in 2.0 is vault, but the API allows you to extend Jerakia with your own providers if you wish, even hiera-eyaml.

The output filter encryption will now use whichever encryption provider has been configured to provide decryption of data regardless of the source, based on a signature (a regular expression) that is advertised by the provider. So if a returned value matches the regular expression advertised by the provider’s signature, the encryption output filter will flag that as an encrypted value and attempt to decrypt it, otherwise the value will be returned unaltered.  We’re also not tied to a particular data source: Jerakia will detect and decrypt the data no matter which data source is used for the lookup.

Furthermore, an encryption provider can advertise an encrypt capability which allows you to encrypt and decrypt values right on the command line using the Jerakia CLI.   You can use the Jerakia CLI to encrypt a secret string, copy that into your data, whether that be YAML files, or a database or some other data source, and that’s it.

Jerakia + Vault

So I’ve covered Vault and the encryption capabilities provided by Jerakia – now let’s look at how to tie the two together, use Vault’s transit backend as an encryption provider for Jerakia, and therefore handle secrets in Puppet / Hiera.

To start with, you’ll need to install and configure Vault and unseal it. Once unsealed, there are a few steps we need to cover in Vault before we integrate Jerakia.  The following steps assume you have an unsealed Vault installation that you can run root commands against.

Configuring Vault

The first thing we need to do is enable the transit backend in Vault, this can be achieved by mounting it with the following command
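With the Vault CLI of the time, mounting the backend looks something like this (newer Vault releases use vault secrets enable transit instead):

    $ vault mount transit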

Once the backend is mounted, we need to create a key for Jerakia to use for encryption and decryption of values.  The name of the key is configurable, but by default Jerakia will try and use a key called jerakia.
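Creating the named transit key is a single write, with -f forcing the write with no other parameters:

    $ vault write -f transit/keys/jerakia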

Now we have a dedicated key that Jerakia will use for encrypting and decrypting data.  The second step is to create a policy to restrict activity to just the endpoints used for encrypting and decrypting, we’ll also call that policy jerakia. To create a policy, we’re going to create a new file called jerakia_policy.hcl and then import that policy into Vault.

The file should contain the following rules;
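Something along these lines should work; the policy only needs to allow updates to the encrypt and decrypt endpoints for the jerakia key:

    # jerakia_policy.hcl
    path "transit/encrypt/jerakia" {
      capabilities = ["update"]
    }

    path "transit/decrypt/jerakia" {
      capabilities = ["update"]
    }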

Once created and saved, we need to import the policy into vault.
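With the CLI syntax of the time that is (newer releases use vault policy write):

    $ vault policy-write jerakia jerakia_policy.hcl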

We can do a quick test now to make sure everything is ok by trying to encrypt a value on the command line using the Jerakia transit key and the policy that we’ve just created
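The transit API expects base64 encoded plaintext, so a quick test looks roughly like this; the ciphertext that comes back will have a vault:v1: prefix:

    $ vault write transit/encrypt/jerakia plaintext=$(echo -n "secret sauce" | base64)

    Key           Value
    ciphertext    vault:v1:<ciphertext>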

If you had a similar output, then we’re good to go! But before we can plug Jerakia into this mix we need to give Jerakia itself something to authenticate with against Vault.  We could use a simple Vault token, which is supported, but this raises issues of expiry and renewal which we probably don’t want to be dealing with every 30 days (or whatever you set your token TTL to).  The recommended way of authenticating to Vault is to use the AppRole authentication backend.  When using this method of authentication, we configure Jerakia with a role_id and a secret_id, and Jerakia uses these to obtain a limited lifetime token from the Vault server to use for interacting with the transit backend API.  When that token expires, Jerakia will automatically use its role_id and secret_id to request a new one.

First we need to create a new AppRole for Jerakia, we’ll give it a token TTL of 10 minutes (optional) but it’s important that we tie this role to the access policy that we created earlier;
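Assuming the AppRole auth backend is already enabled (vault auth-enable approle on older CLIs), creating the role looks something like:

    $ vault write auth/approle/role/jerakia token_ttl=10m policies=jerakia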

Now we can read the AppRole jerakia and determine the role_id
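Something like (output trimmed):

    $ vault read auth/approle/role/jerakia/role-id

    Key        Value
    role_id    <role_id>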

The role_id that we see here we are going to use later on.  But we’re not quite done yet, we need to create a secret_id along with our role_id and the combination of these two values will give Jerakia the authentication it needs to request tokens.  So let’s create that;
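Generating a secret_id is another forced write against the role (output trimmed):

    $ vault write -f auth/approle/role/jerakia/secret-id

    Key          Value
    secret_id    <secret_id>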

Now we have the two crucial pieces of information that we need to integrate Jerakia with Vault, the role_id and secret_id.  Now Vault is ready to be a cryptography provider to Jerakia, we just now need to add some simple configuration to Jerakia to glue all this together.

Configuring Jerakia

With an out-of-the-box installation of Jerakia, encryption is not configured by default; it must be enabled.  If we look at the options on the command line for the sub-command secret we’ll see that there are no sub-commands available.

So the first thing we need to do is enable an encryption provider and give the configuration that it needs. We can do that in jerakia.yaml.  In the configuration file we configure the encryption option with a provider of vault and the specific configuration that our provider requires.  In this example I’m using a Vault instance that is over HTTP, not HTTPS so I need to set vault_use_ssl to false, see the documentation for options to enable SSL.  Because I’m not using SSL I need to set the vault_addr option as well as the secret_id and role_id.
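My jerakia.yaml ends up looking something like this; the option names are the ones described above, but check the Jerakia documentation for the exact layout your version expects:

    # /etc/jerakia/jerakia.yaml
    encryption:
      provider: vault
      vault_addr: http://127.0.0.1:8200
      vault_use_ssl: false
      role_id: <role_id from earlier>
      secret_id: <secret_id from earlier>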

Now that I’ve configured an encryption provider in jerakia.yaml, that provider advertises its capabilities to Jerakia, and on the CLI I now see some new options available when running jerakia help secret….

Now we should be all set to test encrypting and decrypting data using the Vault encryption provider in Jerakia, we can use the CLI commands to encrypt and decrypt data… Let’s give that a spin;
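The exact invocation may differ between versions, but it’s along the lines of:

    $ jerakia secret encrypt 'bob is my best friend'
    vault:v1:<ciphertext>

    $ jerakia secret decrypt 'vault:v1:<ciphertext>'
    bob is my best friend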

Now Jerakia can use Vault as a provider of “cryptography as a service”, in a secure, authenticated way.  The only thing left is to make our Jerakia data lookups encryption aware, and we do that by calling the encryption output filter in the lookup defined in our policy.  We use the output_filter method to add a filter to our lookup, like this;
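A lookup in the default policy might look roughly like this; the datasource options are illustrative, and the relevant part is the output_filter line:

    policy :default do
      lookup :default do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => [
            scope[:certname],
            'common',
          ],
        }
        output_filter :encryption
      end
    end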

The inclusion of output_filter :encryption in this lookup tells Jerakia to pass all results to the encryption filter, which will match all returned data values against the signature provided by the encryption provider and, if a value matches, use the encryption provider to decrypt it before it is passed to the requestor.

Looking up secrets

So let’s add our encrypted value from earlier to this lookup…

This encrypted value can then be imported into any type of data source that you are using with Jerakia, here we’re using the default file data source so we’ll add it to test.yaml in our common path.
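For example, using a ciphertext produced by the CLI earlier:

    # test.yaml
    password: vault:v1:<ciphertext>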

Because this string matches the regular expression provided by the vault encryption provider’s signature, and we’ve enabled the encryption output filter, if we try to look up the key password from the namespace test (test::password in Hiera speak), Jerakia automatically decrypts the data using the Vault transit backend.
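From the command line that looks something like:

    $ jerakia lookup password -n test
    secret sauce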

Tying it up all into Puppet

Of course, this also means when Puppet / Hiera is integrated with Jerakia this now becomes transparent to Puppet and now we have our Puppet secrets, stored encrypted in any data source with decryption provided by Vault.

Summary

There’s a whole lot of really awesome functionality in Vault that I haven’t even touched on in this post; it’s extensive.  Having one tool for your cryptography needs across your infrastructure, rather than a variety of smaller, less dedicated tools doing their own thing, simplifies things a lot.  If you don’t want to use Vault, the encryption feature of Jerakia is entirely pluggable and could be stripped out and replaced with whatever platform you want to use.

The subject of handling sensitive data in Puppet, and other tools, is an ongoing challenge, and I’d certainly welcome any feedback on the approach used here.


Composite namevars in Puppet

An advanced look at everything you never wanted to know about composite namevars in Puppet resource types

In this post I’m going to look at some of the more advanced concepts around Puppet resource types known as composite namevars. I’m going to assume you have a reasonable understanding of types and providers at this point, if not, you should probably go and read Fun with providers part 1 and Seriously, what is this provider doing, two excellent blog posts from Gary Larizza.

A composite what-now?… let’s start with the namevar

Maybe I’m getting ahead of myself, before we delve in it might be worth a quick refresh of the basics, let’s start with a fairly primitive concept of any Puppet resource type, the namevar. Take this simple example;
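A one-attribute declaration will do:

    package { 'mysql':
      ensure => installed,
    }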

It’s important to get our terminology right here. The above piece of code is a Puppet resource declaration, the resource type is a package, there is a resource title of mysql and we have declared an attribute. When we run this code, Puppet will elect a provider to configure the managed entity. When we talk about the managed entity in this example, we are referring to the actual mysql package on the node (deb, rpm…etc). When we refer to the package resource we are talking about the resource within the Puppet catalog.

Puppet needs a way to map the resource declaration to the actual managed entity that it needs to configure, this is where the namevar comes in. Each resource type has a namevar that the provider will use to uniquely identify the managed entity. The namevar attribute is normally, and sensibly, called “name” – although this should never be assumed as we will find out later. What this means is I can use the namevar in a resource declaration like this…
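For example, something like this:

    package { 'mysql database':
      ensure => installed,
      name   => 'mysql',
    }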

In this example, we’ve changed our resource title to “mysql database” and added the name attribute with a value of “mysql”. The title of the resource in the world of Puppet is “mysql database”, and we would refer to the resource within Puppet as Package['mysql database']. But the provider will identify the managed entity to configure using the namevar, the name attribute. That is to say, whilst the resource is called “mysql database”, the actual thing that the provider will try and manage is a package called mysql.

So, you might be asking yourself, what’s the deal with the first example? We didn’t specify a namevar, so what happened there? The short answer is that in the absence of a namevar being specified in the resource declaration, Puppet will use the title of the resource as the namevar, or more correctly, the value of the name parameter will magically be set to the resource title.

Not all resources have name as their namevar. To find out which attribute is a namevar you can use the puppet describe command to view the documentation of resource attributes to see which attribute is namevar. For example, if we look at the file resource type;
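The output is long, but the interesting part is the path attribute (trimmed here, so the exact wording may differ between versions):

    $ puppet describe file
    ...
    - **path**
        The path to the file to manage.  Must be fully qualified.
    ...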

Note that puppet describe will only tell you which attribute is the namevar if it isn’t name, which is confusing.

So, for the file resource, both of the following examples do the same thing, they both manage a file called /etc/foo.conf
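For illustration:

    file { '/etc/foo.conf':
      ensure => file,
    }

    file { 'foo config':
      ensure => file,
      path   => '/etc/foo.conf',
    }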

Puppet is a stickler for uniqueness

You’ve probably already figured out that things in Puppet must be unique, that is to say, only one resource can define the desired state of a managed entity. We already know that you can’t declare multiple resources with the same title, so what happens with the above code, where the two resource declarations have different titles? Let’s see…
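Compiling both declarations together produces an error along these lines:

    Error: Cannot alias File[foo config] to ["/etc/foo.conf"]; resource
    ["File", "/etc/foo.conf"] already declared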

Puppet is smart enough to figure out that although the resource titles are different, the namevar is resolving to the same value, which means you are trying to configure the same managed entity twice, which is firmly against the law of Puppet.

So, that was a quick recap of namevars and what they are. Phew, that was easy, are we done? Not by a long shot!

When a name is not enough

Sometimes you can’t identify a managed entity from a single name alone. A classic example of this is actually the package example used earlier. Let’s revisit that, it sounded simple right? If we want to manage the mysql package we just declare:
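As before:

    package { 'mysql':
      ensure => installed,
    }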

This is all well and dandy if you’re only using one packaging system, but sadly modern day systems come with a whole host of packaging systems, and we can use other providers of the package resource type to install different types of packages. For example, if I want to install a rubygem I can declare something like
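Something like:

    package { 'mysql':
      ensure   => installed,
      provider => 'gem',
    }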

But now consider what happens if you want to manage the yum package “mysql” and the rubygem that is also called “mysql”. We clearly can’t do this;
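Two resources, one title:

    package { 'mysql':
      ensure   => installed,
      provider => 'yum',
    }

    package { 'mysql':
      ensure   => installed,
      provider => 'gem',
    }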

This will obviously fail to compile: we have two resources declared in Puppet with the same title, and that breaks one of the number one rules of Puppet, that resources must be unique.  So what if we change the titles to be different and use the namevar stuff we’ve just talked about;
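Roughly:

    package { 'mysql yum package':
      ensure   => installed,
      name     => 'mysql',
      provider => 'yum',
    }

    package { 'mysql gem':
      ensure   => installed,
      name     => 'mysql',
      provider => 'gem',
    }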

Of course, as we’ve already discussed, name is the package resource’s namevar and we’ve defined it twice, and as we saw with the file resource, Puppet is so smart that if we try and run this code it is obviously going to fail like this;
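(Output from an illustrative run; node name and timings are made up.)

    Notice: Compiled catalog for node.example.com in environment production in 0.45 seconds
    Notice: /Stage[main]/Main/Package[mysql yum package]/ensure: created
    Notice: /Stage[main]/Main/Package[mysql gem]/ensure: created
    Notice: Applied catalog in 4.32 seconds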

WTF now? Why did that work? Surely we’ve just broken one of Puppet’s golden rules? In general circumstances yes, but this brings us to the crux of this post: Puppet has the concept of composite namevars to solve this very issue. That is, a resource type can actually have more than one namevar, and rather than uniqueness being based solely on one parameter, Puppet can evaluate uniqueness based on a combination of different resource attributes.

Confused?? me too, let’s write some code then….

We’re going to craft a fairly basic type and provider to manage a fictional config file /etc/conf.ini. We want to be able to manage the value of configuration settings within a particular section of the INI file, something like:
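An INI file along these lines, where each setting lives in a named section:

    [main]
    hostname = foo.example.com
    port = 8080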

Sounds easy enough! We have three basic elements to our managed entity: the section, the setting and the value. So let’s create a module called “config” and start with a very primitive resource type.
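A minimal sketch of such a type; the attribute names match the description that follows, the rest is illustrative:

    # config/lib/puppet/type/config.rb
    Puppet::Type.newtype(:config) do
      ensurable

      newparam(:setting, :namevar => true) do
        desc 'The name of the setting to manage'
      end

      newparam(:section) do
        desc 'The section of the INI file containing the setting'
      end

      newproperty(:value) do
        desc 'The value of the setting'
      end
    end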

Now let’s add a provider so our resource type actually does something. For the sake of trying to keep the code to a minimum and focus on the relevant topics I’m going to re-use the ini_file library from the module puppetlabs/inifile… so if you want to follow along from home you’ll need to install that module so puppet/util/ini_file is in your ruby load path. Here’s the provider we’ll use;
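Something along these lines; the Puppet::Util::IniFile calls come from the puppetlabs/inifile module, so check that module for the exact method signatures:

    # config/lib/puppet/provider/config/ruby.rb
    require 'puppet/util/ini_file'

    Puppet::Type.type(:config).provide(:ruby) do
      desc 'Manage settings in /etc/conf.ini using the ini_file library'

      # Load (and cache) the INI file we are managing
      def file
        @file ||= Puppet::Util::IniFile.new('/etc/conf.ini')
      end

      def exists?
        !file.get_value(resource[:section], resource[:setting]).nil?
      end

      def create
        file.set_value(resource[:section], resource[:setting], resource[:value])
        file.save
      end

      def destroy
        file.remove_setting(resource[:section], resource[:setting])
        file.save
      end

      def value
        file.get_value(resource[:section], resource[:setting])
      end

      def value=(val)
        file.set_value(resource[:section], resource[:setting], val)
        file.save
      end
    end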

Whilst not the most comprehensive provider ever written, this should do the job. All we need to do now is write some Puppet code to implement it, so let’s do that now.
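Something like:

    config { 'hostname':
      ensure  => present,
      section => 'main',
      value   => 'foo.example.com',
    }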

Let’s just recap on that resource declaration statement, in our resource type we have said that the config resource type has three attributes, the section and setting parameters, and the value property. We don’t have an attribute called name, but we have made the parameter setting the namevar. Like in our file example at the beginning, this means that if we don’t explicitly give the setting parameter in the resource declaration, then the title of the resource, in this case hostname will automatically be used. Now to make sure everything is working…
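(Output from an illustrative run.)

    $ puppet apply manifest.pp
    Notice: Compiled catalog for node.example.com in environment production in 0.12 seconds
    Notice: /Stage[main]/Main/Config[hostname]/ensure: created
    Notice: Applied catalog in 0.08 seconds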

Now that it works, let’s break it.

So far so good, but now consider that we want to manage settings in other sections of the config file that may have the same name, for example;
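Two sections, each with a server setting:

    [master]
    server = foo.com

    [client]
    server = foo.com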

Both sections have a setting called server, so how can we express this in our Puppet manifest? We can’t declare both resources with the same title, and neither can we declare them with different titles but with the setting set to server in both, because, as we saw with the file example earlier, Puppet will fail to compile: namevars must be unique as well as resource titles, and setting is our namevar.

To solve this, we must tell Puppet that a config setting is not only uniquly identifiable by it’s setting name, but rather by the combination of the setting name and the section together. To do this, we need to make the section attribute of our type a namevar. That’s not rocket science, we just add a :namevar argument to our section parameter, like this;
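The section parameter simply gains the :namevar argument:

    newparam(:section, :namevar => true) do
      desc 'The section of the INI file containing the setting'
    end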

What about the title? Introducing title_patterns!

We’re not done yet! Remember in our earlier example using the package resource we discussed how, in the absence of the namevar, the resource title will be allocated as the namevar. This is still the case, but now we have two namevars we need to give Puppet a hand in deciding what to do with the resource title. We do this by creating a method within our type called self.title_patterns, and it goes something like this;
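In its simplest form:

    def self.title_patterns
      [
        [ /(.*)/m, [ [:setting] ] ]
      ]
    end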

The self.title_patterns method returns a simple array of arrays of arrays containing arrays, or something like that. This particular nugget of insanity is provided in the Puppet::Type class, with a comment saying # The entire construct is somewhat strange…. No shit. If we delve into the Puppet core code in lib/puppet/type.rb we see the output of this method should return this mad data structure;
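Paraphrasing, the shape of the return value is roughly:

    [
      [
        /some regexp/,       # matched against the resource title
        [
          [ :attr_name ],    # one entry per capture group; an optional proc can
                             # follow the symbol to munge the captured value
        ]
      ],
    ]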

What we are saying with our above method is that any resource title that matches /(.*)/, which matches anything, will be assigned to the setting attribute if we have not declared it, meaning that we can still run our original Puppet code and get the same behaviour;
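That is, this still works exactly as before:

    config { 'hostname':
      ensure  => present,
      section => 'main',
      value   => 'foo.example.com',
    }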

Now back to the problem at hand. Now that we have composite namevars, if we need to manage the setting hostname in both the [server] and [client] section of our INI file, this is possible by using our composite namevars with differing titles.
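For example:

    config { 'server hostname':
      section => 'server',
      setting => 'hostname',
      value   => 'server.example.com',
    }

    config { 'client hostname':
      section => 'client',
      setting => 'hostname',
      value   => 'client.example.com',
    }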

Let’s do more with title_patterns

You’re probably wondering why the title_patterns method is so complicated (even by Puppet internals standards) for something that does so little. Actually, it’s a rather powerful, albeit cryptic, beast. Our current method assigns any title to the setting attribute; we can make this smarter. We can enhance this method to also look for patterns matching section/setting and assign the relevant parts of the title to the right attributes. So let’s change our original regexp and add another element to the main array.
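Something along these lines, with the more specific pattern listed first:

    def self.title_patterns
      [
        [ /^([^\/]+)\/([^\/]+)$/, [ [:section], [:setting] ] ],
        [ /^([^\/]+)$/,           [ [:setting] ] ]
      ]
    end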

No, I haven’t broken my keyboard and you’re not going blind (yet!). We now have two title patterns. If the title doesn’t contain a / then the type will behave as before and the title will be assigned to the setting attribute; however, if it matches a string with a / in it then it will be parsed as section/setting and the section and setting attributes will be assigned from the title. This means, as well as using the Puppet declarations above, we could also shorten these and write;
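For example:

    config { 'server/hostname':
      value => 'server.example.com',
    }

    config { 'client/hostname':
      value => 'client.example.com',
    }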

The above shortened version has the same behaviour and the provider sees the attributes in the same way.

How cool was that!

Here’s a recap of what our final type looks like
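Pulling the pieces together, it ends up roughly like this:

    Puppet::Type.newtype(:config) do
      ensurable

      def self.title_patterns
        [
          [ /^([^\/]+)\/([^\/]+)$/, [ [:section], [:setting] ] ],
          [ /^([^\/]+)$/,           [ [:setting] ] ]
        ]
      end

      newparam(:setting, :namevar => true) do
        desc 'The name of the setting to manage'
      end

      newparam(:section, :namevar => true) do
        desc 'The section of the INI file containing the setting'
      end

      newproperty(:value) do
        desc 'The value of the setting'
      end
    end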

Final thoughts

Hopefully this has given you a basic understanding of what we mean by composite namevars, and maybe you didn’t break your brain reading this. I think I’ve only touched on the surface of what you could possibly do with title_patterns; there are some very scary patterns involving procs out there! But I’ll leave off here. So there you have it, composite namevars and title_patterns, clear as mud, right?

Designing modules in a Puppet 4 world

Puppet 4.0 has been around for a while now, and most of its language features have been around even longer via the future parser option in later versions of the 3.x series. Good module design has been the subject of many talks, blog posts and IRC discussions for years, and although best practice has evolved with experience, fundamentally things haven’t changed that much for a long time… Until now.

Puppet 4.x introduces some of the biggest changes to the Puppet language and module functionality in a single release. Some of these new features dramatically change how you should be designing Puppet modules to get the most power out of Puppet. After diving in deep with Puppet 4 recently I wanted to write about some things that I see as the most fundamental and beneficial changes in how modules are now written in the new age.

Type Casting

Is it a bird? Is it a plane?… no, it’s a string. The issue of type casting has been a long standing irritation with Puppet. There’s a lot of insanity to be found with type casting (or the lack thereof) within Puppet that I won’t go over here, but Stephen Johnson covered this really well in his talk at Puppet Camp Melbourne a while back. Puppet’s inability to natively enforce basic data types was confusing enough within the scope of Puppet itself, but when you add in ERB templates and functions these issues bleed over into the Ruby world, which is much less forgiving of Puppet’s whimsical attitude to what a variable actually is, and there have been countless problems as a result. Fortunately, Puppet 4.0 has finally come to address this shortcoming and this really is a great leap forward for the Puppet language.

Previously we would create a class with parameters, random variables that might be any number of types. And then, if we were being careful we could use the validate_* functions from stdlib to check that the provided values met the required type, or we could just blindly hope for the best. Now Puppet 4.0 has an in-built way of doing this by being able to define the types of variables that a parameterized class accepts. Eg:
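For example, something like:

    class foo (
      String        $package_name = 'foo',
      Integer       $port         = 8080,
      Boolean       $manage_user  = true,
      Array[String] $servers      = [],
    ) {
      # ...
    }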

For the most part I’ve found this to be an excellent addition, it’s much easier to define your data types when writing classes and seeing what types a class supports has become a whole lot more readable and removes the need to use external functions to validate your data. There still seems to be some quirky behaviour with it though that makes me wonder how solid these data types are under the hood, the difference between a string and an integer isn’t as well enforced as you might expect in other languages, take this example;

[Edit: see comments]. But quirks aside, using data types in your classes will help with validation and readability of your module and you should always use them.

Data

Back in the early days of Puppet, managing your site specific data was a nightmare involving hard coded variables and elaborate coding patterns (a.k.a nasty hacks) that led to non-reusable, un-sharable modules, with each organisation maintaining their own copy of modules to manage software. Then in 2011, Hiera was born and all that changed. The ability to maintain data separately from Puppet code gave way to a new generation of Puppet modules that could be shared and used without modifying the code, and the Puppet Forge flourished as a result with dependable and maintainable modules. It wasn’t long before Hiera was incorporated officially into Puppet core and Puppet released the data binding feature which automatically looks up class parameters, so by simply declaring include foo any parameters of the foo class will be automatically looked up from Hiera. This led to a design pattern that became very popular with module authors to take advantage of the data binding features of Hiera and give the module author some degree of flexibility in setting dynamic default values. The pattern is commonly known as the params pattern. In short, it works by a base class inheriting a class called params and setting all of the base class’s parameter defaults to variables defined within the params class, eg:
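A stripped down illustration of the pattern:

    class foo (
      $package_name = $foo::params::package_name,
      $service_name = $foo::params::service_name,
    ) inherits foo::params {
      # ...
    }

    class foo::params {
      case $::osfamily {
        'RedHat': { $package_name = 'foo-server' }
        default:  { $package_name = 'foo' }
      }
      $service_name = 'foo'
    }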

This widely adopted pattern gives the implementor of the module the ability to override settings directly from hiera by setting foo::some_setting but also gives the module author more flexibility to dynamically set intelligent defaults. I don’t think it’s a terrible pattern, and it’s certainly the cleanest way of designing a module in Puppet 3, but it can get complicated with data nested in deep conditionals in your class. We already have a much better proven way of handling data in Puppet using Hiera, so why can’t we adopt this same approach with module defaults? This was an idea first proposed by R.I Pienaar, original author of Hiera, who went on to release a POC for it. The idea was solid and made a lot of sense, and now Puppet have adopted this approach natively in 4.3. Puppet modules can now ship with their own self-contained Hiera data in the module, which Puppet will use as a fall back if no other user-defined data is found (eg: your regular Hiera data). So using data in modules our class now looks like:
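The class itself no longer needs to reach into a params class for its defaults:

    class foo (
      String $package_name,
      String $service_name,
    ) {
      # defaults are now resolved from the module's own Hiera data
    }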

We can then enable a module-specific Hiera configuration to read the variable defaults from ./data
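With the Puppet 4.3-era implementation that is a version 4 hiera.yaml inside the module (later Puppet versions use a version 5 format), something like:

    # <module>/hiera.yaml
    ---
    version: 4
    datadir: data
    hierarchy:
      - name: "OS family"
        backend: yaml
        path: "os/%{facts.os.family}"
      - name: "common"
        backend: yaml
        path: "common"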

Class parameter defaults can now be easily read in a familiar Hiera layout
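For example:

    # <module>/data/common.yaml
    ---
    foo::package_name: foo
    foo::service_name: foo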

I’m really only touching on the surface of the new data in modules additions here; there are plenty more really cool features available, including using Puppet functions to provide data defaults. R.I Pienaar wrote an excellent article covering this in a bit more detail and the official Puppet documentation explains things in depth.

The main advantage of using data in modules over params.pp is the ability to store your module parameter defaults in an easy-to-read style, which maintains a degree of consistency between how module data and other site data is handled and avoids cumbersome in-code setting of data such as the params.pp pattern. I love the fact I can download a third party module off the forge and easily look through all of the configured defaults in a nicely formatted YAML hierarchy rather than trawling through a mish mash of conditional logic trying to understand how a variable gets set. This feature is a major plus.

Iteration

The final enhancement to the Puppet language I want to focus on in this post is iteration. The idea of iteration in Puppet was something that historically I used to be against. People have been asking for loops in Puppet since it first got released, and I always felt that it wasn’t necessary and detracted from the declarative principles of Puppet. I like to think that I was half right though: most people wanted iteration because they didn’t understand the declarative aspects of Puppet, and if they thought differently about their problem they would realise that 99% of the time defined resources were a perfectly fitting solution. Puppet, and how people use it, has changed since I forged those opinions. In the new age of data separation, where we try and avoid hard coding site specific data inside Puppet code, that data now lives in a structured format inside Hiera and we need a way to take that data model and manage Puppet resources from it.

The current well adopted method of doing this is to use the create_resources() function in Puppet. The idea is simple enough, the function takes a resource type as an argument followed by a hash where the keys represent resource titles and the values are nested hashes containing the attributes for those resources and voila, it creates Puppet resources in your catalog
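For illustration:

    $users = {
      'alice' => { 'uid' => 1001, 'shell' => '/bin/bash' },
      'bob'   => { 'uid' => 1002, 'shell' => '/bin/zsh' },
    }

    create_resources('user', $users)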

Is a more dynamic, data driven way of declaring;
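(The same illustrative users, written out by hand.)

    user { 'alice':
      uid   => 1001,
      shell => '/bin/bash',
    }

    user { 'bob':
      uid   => 1002,
      shell => '/bin/zsh',
    }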

I have a love/hate relationship with create_resources(). It feels like a sticking plaster designed to hammer Puppet into doing something it fundamentally wasn’t designed to do, but on the other hand, in the absence of any other solution I couldn’t have survived without it. It’s also a fairly restrictive solution: in the ideal world of data separation I should be able to model my data in the best possible way to represent it, which may look quite different from how it is represented as Puppet resources. The create_resources() pattern offers no way to filter data or to munge the structure of a hash, for example. Take the following example;
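Say the data is modelled as a simple hash of usernames to UIDs, under a hypothetical Hiera key:

    # hieradata/common.yaml
    site::users:
      alice: 1001
      bob: 1002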

If I want to model the above data structure as user resources, with the UIDs corresponding to the value of each element, I cannot do this with create_resources() directly managing the user resources, since the function expects the hash to be representative of resource titles and attributes. I could parse this with a function to munge the data first, or I could write an elaborate but messy defined resource type to do it, but neither of these seems ideal. Examples like this prove that Puppet desperately needs more, and now (some would say FINALLY) it’s arrived in the form of iterators and loops natively included in the language. Using iteration, we now have a lot more flexibility in scenarios such as this and I can easily solve the above dilemma with a very simple iterator;
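Along these lines, using the hypothetical key from above:

    lookup('site::users').each |String $username, Integer $uid| {
      user { $username:
        ensure => present,
        uid    => $uid,
      }
    }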

Despite my initial reluctance about iteration some years ago, it’s obvious now that this is a huge improvement to the language and bridges the gap between modelling data and declaring resources. I don’t however agree with some camps that say defined resources are obsolete; I think they have a genuine place in the world. If you are managing a bunch of resources from a hash it is still going to be cleaner in some circumstances to have a defined resource type to model that behaviour rather than declaring lots of resources within an iterator loop, but having iterators gives authors the ability to choose which method best achieves their objectives in the cleanest way.

Summary

I have touched on three major changes that will change how people write modules going forward, and there are plenty more great features in 4.x worthy of discussion that I will go into in future posts. But I feel the above changes are the most relevant when it comes to module design patterns and they offer a huge improvement in the module quality and functionality. In a nutshell;

  • validate functions (for basic types) are dead. Validate your parameters using native types
  • params.pp is dead. Use data in modules
  • create_resources() is dead. Use iterators

Using data schemas with Jerakia 0.5

Introduction

In my previous posts I introduced and talked about a new data lookup tool called Jerakia.   I also recently gave a talk at Config Management Camp in February about it.

This week saw the release of version 0.5.0.  You can view the release notes here.  In this post I am going to focus on one new feature that has been added to this release, data schemas.

What are data schemas?

In short, schemas provide a definition layer to control lookup behaviour for particular keys and namespaces.  So, what do we mean by lookup behaviour?  Jerakia is a hierarchical lookup tool and there are a couple of different ways that it can perform a search.  The simplest form of lookup will walk through the hierarchy and return the first result found.  A Jerakia lookup may also request a cascading lookup by setting the cascade option to true.  For a cascading lookup, Jerakia will continue to walk through the hierarchy, search for all instances of the requested key and combine them into a hash or an array.  When performing a cascading lookup, the requestor can select the desired merge behaviour using the merge flag of the lookup.  Supported options for this parameter are array, hash or deep_hash.  When performing a cascading hash lookup, hashes are merged and key conflicts at the first level of the hash are overwritten according to their priority, whereas deep_hash attempts to merge the hash at all nested levels.

These concepts will be familiar to Hiera users as the legacy functions hiera_array() and hiera_hash().

Why use schemas?

The ability to do hash or array type lookups is a very useful and popular feature, and has always existed in Hiera.  For a long time the problem was where you should, or could, declare that a particular lookup key should be looked up this way.  Initially people used hiera_hash() and hiera_array() directly in their modules.  This has several drawbacks though.  Most notably, the incompatibility with Puppet’s data binding feature meant hash and array lookups had to be dealt with separately outside of the class’s parameters, and hard coding Hiera functions within modules is not best practice as it assumes that the implementor of the module is using Hiera in the first place.

Later versions of Puppet and Hiera have made this nicer.  The new lookup functionality in Puppet 4 provides a lookup() function that takes a lookup strategy as an argument.  This is certainly nicer than the legacy hiera functions as it is provider agnostic and you can swap out Hiera for a different data lookup tool, such as Jerakia, transparently, which makes it more acceptable to use this function in a module.  And there is the new data in modules feature which allows Puppet modules to determine the lookup behaviour of the parameters their classes contain.

I think this approach is definitely going in the right direction, and it solves the problem of overriding behaviour for parameterised classes.

Jerakia takes a new approach and provides a new layer of logic called schemas.  When Jerakia receives a lookup request for a key, it first performs a search within the schema for the key and if found, will override lookup and merge behaviour based on what is defined in the schema.

The advantage of using schemas is that a user can download a module from the forge and override the lookup behaviour of the keys without modifying any of the Puppet code or adding anything Puppet specific to the data source.

How schemas work

Controlling lookup behaviour

When a request for a lookup key is received by Jerakia, it first performs a lookup against the schema.  It currently uses the built-in file datasource to perform a separate lookup, but the source of data read by this lookup is different to the main lookup.  By default, Jerakia will search for a JSON (or YAML) file with a name corresponding to the namespace of the request in /var/lib/jerakia/schema.  Within this JSON document it searches for the key corresponding to the lookup key requested.  The data returned can override the lookup behaviour.  For example;
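The field names below are illustrative (check the Jerakia schema documentation for the exact syntax), but the idea is a schema file named after the namespace, containing per-key overrides:

    # /var/lib/jerakia/schema/accounts.json
    {
      "sysadmins": {
        "cascade": true,
        "merge": "array"
      }
    }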

The above example will override a lookup for the key sysadmins in the namespace accounts (accounts::sysadmins) to be a cascading search merging the results into an array (just like hiera_array())

The big advantage here is that this data is separated from our actual configuration data, which could be in a YAML file structure, database, REST API endpoint…etc.

Using schema aliases

Another feature of schemas is the ability to create pseudo keys and namespaces that can be looked up and mapped to other keys within the data.   Schemas have the ability to override the namespace and key part of a request on the fly.  As a very hypothetical example, let’s say you have an array of domains in your data defined as webserver::domains eg:
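With the default file datasource layout that might be stored as:

    # /var/lib/jerakia/data/common/webserver.yaml
    ---
    domains:
      - www.example.com
      - blog.example.com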

If you need the same data to populate the vhosts parameter of a class called apache you could simply alias this in the schema rather than declaring the data twice or performing lookups from within Puppet…
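A schema entry for the apache namespace might look roughly like this; again, the field names are a sketch rather than the definitive syntax:

    # /var/lib/jerakia/schema/apache.json
    {
      "vhosts": {
        "alias": {
          "namespace": "webserver",
          "key": "domains"
        }
      }
    }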

The above schema entry will mean that lookups for apache::vhosts and webserver::domains will return the same data set.

You can also use a combination of aliases and lookup overrides to declare a pseudo key that looks up data in a different way.
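Sketching that out with the same illustrative syntax as above:

    # /var/lib/jerakia/schema/security.json
    {
      "firewall_rules": {
        "alias": { "namespace": "firewalld", "key": "rich_rules" }
      },
      "all_firewall_rules": {
        "alias": { "namespace": "firewalld", "key": "rich_rules" },
        "cascade": true,
        "merge": "deep_hash"
      }
    }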

Here we have created two pseudo keys, security::firewall_rules and security::all_firewall_rules, both of which alias to the firewalld::rich_rules data set but will be looked up in different ways.  The security namespace itself may not even exist in the actual data set.

Future plans for schemas

The current implementation of schemas is fairly basic.  I see this as being quite a fundamental part of Jerakia in the future and it’s an area that could see functionality such as sub-lookups, views and even light “stored procedure” type functions to add some powerful functionality to data lookups whilst keeping the actual source of data in its purest form, thus not stifling innovation of data source back ends.

Although currently limited to searching JSON or YAML files, schema searches are actually done with Jerakia lookups, the same functionality that does a regular lookup, so it should be trivial to allow users a lot more flexibility in how schema searches are done by using custom data sources and policies in future releases.

Want to know more?

Check out the Jerakia docs for how to configure the behaviour of schemas and more on how to use them.

Next up…

Puppet 4.0 delivered some great new functionality around data lookups, including environment data providers and the internal lookup functions that I feel will go really well with Jerakia.  I’m currently working on integration examples and a new environment data provider for Jerakia that will be available soon.


Extending Jerakia with lookup plugins

Introduction

In my last post, I introduced Jerakia as a data lookup engine for Puppet, and other tools. We looked at how to use lookup policies to get around complex requirements and edge cases, including providing different lookup hierarchies to different parts of an organisation. In this post we are going to look at extending the core functionality of Jerakia. Jerakia is very pluggable, from data sources to output filters, and I’ll cover all of them in the coming days, but today we are going to cover plugins.

Lookup plugins

Last week we looked at Jerakia policies, which are containers for lookups. A lookup, at the very least, contains a name and a datasource. A classic lookup would be;
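Something along these lines (the datasource options here are illustrative):

    policy :default do
      lookup :default do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => [
            scope[:certname],
            scope[:environment],
            'common',
          ],
        }
      end
    end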

Within a lookup we have access to both the request and all scope data sent by the requestor. Having access to read and modify these gives us a great deal of flexibility. Jerakia policies are written in Ruby DSL so there is nothing stopping you from putting any amount of functionality directly in the lookup. However, that makes for rather long and complex policies and isn’t easy to share or re-use. The recommended way therefore to add extra functionality to a lookup is to use the plugin mechanism.

As an example, let’s look at how Jerakia differs from a standard Hiera installation in terms of data structure and filesystem layout, between Hiera’s YAML backend and Jerakia’s file datasource. Puppet data mappings are requested from Hiera as modulename::key and are searched from the entries in the configured hierarchy. Jerakia has the concept of a namespace and a key, and when requesting data from Puppet, the namespace is mapped to the module name initiating the request. Jerakia looks for a filename matching the namespace, with the variable name as the key. Take this example;
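Suppose, for illustration, a module class apache has a port parameter that is looked up as apache::port through data binding:

    class apache (
      $port = '80',
    ) {
      # with data binding, $port is looked up as apache::port
    }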

A standard hiera filesystem would contain something like;
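Roughly:

    # hieradata/common.yaml
    ---
    apache::port: 8080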

In Jerakia, by default this would be something like;
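The namespace becomes the filename and the key is unqualified:

    # /var/lib/jerakia/data/common/apache.yaml
    ---
    port: 8080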

The difference is subtle enough, and if we wanted to use Jerakia against a Hiera-style file layout with keys formatted as module::key we could manipulate the request to add the first element of request.namespace to the key, separated by ::, and then drop the namespace completely. You could implement this directly in the lookup, but a better way is to use a plugin, keeping the functionality modular and shareable. Jerakia ships with a plugin to do just this; it’s called, unsurprisingly, hiera.

Using lookup plugins

To use a plugin in a lookup it must be loaded using the :use parameter to the lookup block, eg:
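Roughly:

    lookup :default, :use => :hiera do
      # ...
    end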

If you want to use more than one plugin, the argument to :use can also be an array
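For example (mystuff being the hypothetical plugin we write later):

    lookup :default, :use => [ :hiera, :mystuff ] do
      # ...
    end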

Once a plugin is loaded into the lookup, it exposes its methods in the plugin.name namespace. For example, the hiera plugin has a method called rewrite_lookup which rewrites the lookup key and drops the namespace from the request, as described above. So to implement this functionality we would call the method using the plugin mechanism;
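Something like:

    lookup :default, :use => :hiera do
      plugin.hiera.rewrite_lookup
      datasource :file, {
        :format     => :yaml,
        :docroot    => '/var/lib/hiera/data',
        :searchpath => [
          scope[:certname],
          'common',
        ],
      }
    end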

Writing plugins

Lookup plugins are loaded as jerakia/lookup/plugin/pluginname from the ruby load path, meaning they can be shipped as a rubygem or placed under jerakia/lookup/plugin relative to the plugindir option in the configuration file. The boilerplate template for a plugin is formed by creating a module with a name corresponding to your plugin name in the Jerakia::Lookup::Plugin class… in reality that looks a lot simpler than it sounds
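For a plugin called mystuff, the skeleton is roughly:

    # lib/jerakia/lookup/plugin/mystuff.rb
    class Jerakia::Lookup::Plugin
      module Mystuff
        # plugin methods go here
      end
    end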

We can now define methods inside this plugin that will be exposed to our lookups in the plugin.mystuff namespace. For this example we are going to generate a dynamic hierarchy based on a top level variable role. The variable contains a colon delimited string, and starting with the deepest level construct a hierarchy to the top. For example, if the role variable is set to web::frontend::application_foo we want to generate a search hierarchy of;
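Something like this (the exact path formatting is illustrative):

    web/frontend/application_foo
    web/frontend
    web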

To do this, we will write a method in our plugin class called role_hierarchy and then use it in our lookup. First, let’s add the method;
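A sketch of that method; I’m assuming here that plugin methods, like lookups, have access to the scope hash:

    class Jerakia::Lookup::Plugin
      module Mystuff
        # Split the colon-delimited role variable and build an ordered list of
        # search paths, deepest first, e.g. web::frontend::application_foo gives
        # [ 'web/frontend/application_foo', 'web/frontend', 'web' ]
        def role_hierarchy
          parts = scope[:role].to_s.split('::')
          hierarchy = []
          until parts.empty?
            hierarchy << parts.join('/')
            parts.pop
          end
          hierarchy
        end
      end
    end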

We can now use this within our lookup by loading the mystuff plugin and calling our method as plugin.mystuff.role_hierarchy. Here is the final lookup policy using our new plugin;
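Roughly:

    policy :default do
      lookup :default, :use => :mystuff do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => plugin.mystuff.role_hierarchy + [ 'common' ],
        }
      end
    end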

Conclusion

My example here is pretty simple, but it demonstrates the flexibility of Jerakia to create a dynamic search hierarchy. With access to the request object and the requestor's scope, lookup plugins can be very powerful tools for getting around the most complex of edge cases. Being able to write Jerakia policies in a native Ruby DSL is great for flexibility, but runs the risk of an excessive amount of code complicating your policy files; the plugin mechanism offers a way to keep extended lookup functionality separate, and to make it shareable and reusable.

Up next…

We're still not done; Jerakia offers numerous extension points. In my next post we will look at output filters, which parse the data returned by the backend data source. We will look first at what I consider the most useful of the output filters, encryption, which uses hiera-eyaml to decrypt data strings in your returned data no matter what datasource is used, and then look at how easy it is to write your own output filters. After that we will look at extending Jerakia to add further data sources, so stay tuned!

Solving real world problems with Jerakia

Background

I’ve always been a great admirer of Hiera, and I still remember the pain and turmoil of living in a pre-Hiera world trying to come up with insane code patterns within Puppet to try and organize my data in a sensible way. Hiera was, and still is, the answer to lots of problems.

For me however, when I moved beyond a small-scale, single-customer oriented Puppet implementation into larger, complex and diverse environments, I started to find that I was spending a lot of time trying to figure out how to model things in Hiera to meet my requirements. It's a great tool, but it has some limitations in the degree of flexibility it offers around how to store and look up your data.

Some examples of problems I was trying to solve were: how can I…

  • use a different backend for one particular module?
  • give a team a separate hierarchy just for their app?
  • give access to a subset of data to a particular user or team?
  • enjoy the benefits of eyaml encryption without having to use YAML?
  • implement a dynamic hierarchy rather than hard coding it in config?
  • group together application-specific data into separate YAML files?
There are many more examples, and after some time I began exploring some solutions. Initially I started playing around with the idea of a "smart backend" to Hiera that could give me more flexibility in my implementation, and that eventually grew into what is now Jerakia. In fact, you can still use Jerakia as a regular Hiera backend, or you can wire it directly into Puppet as a data binding terminus.

    Introducing Jerakia

    Jerakia is a lookup tool that has the concept of a policy, which contains a number of lookups to perform. Policies are written in Ruby DSL allowing the maximum flexibility to get around those pesky edge cases. In this post we will look at how to deploy a standard lookup policy and then enhance it to solve one of the use cases above.

    Define a policy

After installing Jerakia, the first step is to create our default policy in /etc/jerakia/policy.d/default.rb.
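To begin with, the policy can be an empty container (a sketch):

```ruby
# /etc/jerakia/policy.d/default.rb
policy :default do
  # lookups will be added here
end
```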

Jerakia policies are containers for lookups. A policy can have any number of lookups, and they are run in the order they are defined.

    Writing our first lookup

A lookup must contain, at the very least, a name and a datasource to use for the lookup. The current datasource that ships with Jerakia is the file datasource. This takes several options, including format and searchpath, to define how lookups should be processed. Within the lookup we have access to scope[], which contains all the information we need to determine what data should be returned. In Puppet terms, the scope contains all facts and top-level variables passed from the agent.
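Putting that together, a first lookup policy might look something like this; the docroot and searchpath are illustrative choices, and the scope keys assume standard Puppet facts:

```ruby
policy :default do
  lookup :default do
    datasource :file, {
      :format     => :yaml,
      :docroot    => '/var/lib/jerakia',
      :searchpath => [
        "hostname/#{scope[:fqdn]}",
        "environment/#{scope[:environment]}",
        'common',
      ],
    }
  end
end
```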

We now have a fairly standard lookup policy which should be fairly familiar to Hiera users. A Jerakia lookup request contains two parts, a lookup key and a namespace. This allows us to group together lookup keys such as port, docroot and logroot into a namespace such as apache. When integrating from Hiera/Puppet, the module name is used for the namespace, and the variable name for the key. In Puppet we declare:
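Assuming a hypothetical apache module with a $port parameter, the declaration is simply:

```puppet
include apache   # apache has a $port parameter resolved via data binding
```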

This will reach Jerakia as a lookup request with the key port in the namespace apache, and with our lookup policy above a typical request would look for the key "port" in the following files, in order:
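Following the searchpath from the sketch above (paths illustrative):

```
/var/lib/jerakia/hostname/<fqdn>/apache.yaml
/var/lib/jerakia/environment/<environment>/apache.yaml
/var/lib/jerakia/common/apache.yaml
```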

This is slightly different behaviour from what you would find in Puppet using Hiera. If you are using Jerakia against an existing Hiera filesystem layout, which has namespace::key in path rather than key in path/namespace.yaml, then check out the hiera plugin, which provides a lookup method called plugin.hiera.rewrite_lookup to mimic Hiera behaviour. More on lookup plugins in the next post!

    Adding some complexity

    So far what we have done is not rocket science, and certainly nothing that can’t be easily achieved with Hiera. So let’s mix it up a bit by defining a use case that will change our requirements. This use case is based on a real world scenario.

    We have a team based in Ireland. Their servers are identified with the top level variable location. They need to be able to manage PHP and Apache using Puppet, but they need a data lookup hierarchy based on their project, which is something only they use. Furthermore, we wish to give them access to manage data specifically for the modules they are responsible for, without being able to read, override or edit data for other modules (eg: network, firewall, kernel).

So, the requirements are to provide a different lookup hierarchy for servers that are in the location "ie", but only when configuring the apache or php modules, and to source the data from a different location, separate from our main data repo. With Jerakia this is easily solvable; let's first look at creating the lookup for the Ireland team…
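Something like the following, assuming their data lives under a separate docroot and that their project name is available in the scope:

```ruby
lookup :ireland do
  datasource :file, {
    :format     => :yaml,
    :docroot    => '/etc/jerakia/ireland',
    :searchpath => [
      "project/#{scope[:project]}",
      'common',
    ],
  }
end
```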

So now we have defined a separate lookup for our Ireland-based friends. The problem here is that every request will first load the lookup ireland and then proceed down to the main lookup. This is no different from just adding new hierarchy entries in Hiera: they are global. That means potentially bad data creeping in if, for example, they accidentally override the firewall rules or network configuration.

To get around this we can use the confine method in the lookup block to restrict this lookup to requests that have "location: ie" in the scope and are requesting keys in the apache or php namespaces, meaning the requesting modules. If the confine criteria are not met then the lookup will be invalidated and skipped, and the next one used. Finally, we do not want to risk dirty configuration from default values that we have in our hierarchy for apache and php, so we need to tell Jerakia that if this lookup is considered valid (i.e. it has met all the criteria of confine) then it should use only this lookup and not proceed down the chain of available lookups. To do this, we use the stop method.

The confine method takes two arguments: a value and a match. The match is a string that can contain regular expressions. The confine method supports either a single match or an array of matches to compare against. So in order to confine this lookup to the location ie we can write the following:
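Inside the ireland lookup block:

```ruby
confine scope[:location], 'ie'
```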

    By confining in this way we tell Jerakia to invalidate and skip this lookup unless location is “ie”. Similarly we can add another confine statement to ensure that only lookups for the apache and php namespaces are handled by this lookup. Our final policy would look like this:
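A sketch of the complete policy; the docroots, searchpaths and scope keys are illustrative, and the namespace confine assumes the requesting module name is the first element of request.namespace:

```ruby
policy :default do
  lookup :ireland do
    datasource :file, {
      :format     => :yaml,
      :docroot    => '/etc/jerakia/ireland',
      :searchpath => [
        "project/#{scope[:project]}",
        'common',
      ],
    }
    confine scope[:location], 'ie'
    confine request.namespace[0], [ 'apache', 'php' ]
    stop
  end

  lookup :default do
    datasource :file, {
      :format     => :yaml,
      :docroot    => '/var/lib/jerakia',
      :searchpath => [
        "hostname/#{scope[:fqdn]}",
        "environment/#{scope[:environment]}",
        'common',
      ],
    }
  end
end
```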

    Conclusion

This example demonstrates that using Jerakia lookup policies you can tailor your data lookups quite extensively, giving a high degree of flexibility. This is especially useful in larger organisations with many customers using one central Puppet infrastructure.

This is just one example of using Jerakia to solve a use case. I hope to blog a small mini-series on other use cases and solutions, and I welcome any suggestions that come from the real world!

    Next up…

    Jerakia is still fairly experimental at the time of writing (0.1.6) and there is still a lot of room for improvement both in exposed functionality and in the underlying code. I’d like to see it mature and there are still plenty of features to add, and code to be tidied up. There is some excellent work being done in Puppet 4.0 with regards to internal handling of data lookups that I think would complement our aims very well (currently all work has been done against 3.x) and the next phase of major development will be exploring these options.

Also, I talk about Puppet a lot because I am a Puppet user and the problems that I was trying to solve were Puppet/Hiera related, but that doesn't mean that Jerakia is exclusively a Puppet tool. The plan is to integrate it with other tools in the devops space, which given the policy driven model should be fairly straightforward.

    My next post will focus on extending Jerakia and will cover writing and using lookup plugins to enhance the power of lookups and output filters to provide features like eyaml style decryption of data regardless of the data source. I will also cover Jerakia’s pluggable datastore model that encourages community development.

    Puppet data from CouchDB using hiera-http

    Introducing hiera-http

I started looking at the various places people store data, and ways to fetch it, and realized that a lot of data storage applications are RESTful, yet there doesn't seem to be any support in Hiera for querying these things. So I whipped up hiera-http, a Hiera backend that can connect to any HTTP RESTful API and return data based on a lookup. It's very new, and support for other things like SSL and auth is coming, but what it does support is a variety of handlers to parse the returned output of the HTTP document; at the moment these are limited to YAML and JSON (or just 'plain' to simply return the whole response of the request). The following is a quick demonstration of how to plug CouchDB into Puppet using Hiera and the hiera-http backend.

    Hiera-http is available as a rubygem, or from GitHub: http://github.com/crayfishx/hiera-http

    CouchDB

    Apache CouchDB is a scalable database that uses no set schema and is ideal for storing configuration data as everything is stored and retrieved as JSON documents. For the purposes of this demo I’m just going to do a very simple database installation with three documents and a few configuration parameters to demonstrate how to integrate this in with Puppet.

After installing CouchDB and starting the service I'm able to access Futon, the web GUI front-end for my CouchDB service. Using this I create three documents: "dev", "common" and "puppet.puppetlabs.lan".

CouchDB documents

    Next I populate my common and dev documents with some variables.

    Common document populated with data

Now that CouchDB is configured, I should be able to query the data over HTTP:
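For example, assuming a database called configuration and a hypothetical key in the common document, fetching a document directly from CouchDB looks like this:

```
$ curl http://localhost:5984/configuration/common
{"_id":"common","_rev":"1-...","ntpserver":"ntp1.example.com"}
```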

    Query with Hiera

    After installing hiera-http I can query this data directly from Hiera…

First I need to configure Hiera with the HTTP backend. The search hierarchy is determined by the :paths: configuration parameter, and since CouchDB returns JSON I set that as the output handler.
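A minimal sketch of the hiera.yaml, assuming a CouchDB database called configuration; check the hiera-http documentation for the full set of options:

```yaml
:backends:
  - http

:http:
  :host: 127.0.0.1
  :port: 5984
  :output: json
  :paths:
    - /configuration/%{fqdn}
    - /configuration/%{environment}
    - /configuration/common
```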

    I can now query this directly from Hiera on the command line
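For example, with the hypothetical ntpserver key stored in the common document:

```
$ hiera ntpserver
ntp1.example.com
```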

And of course, that means that this data is now available from Puppet, and if I add some overriding configuration variables to my dev document in CouchDB, my lookup will resolve based on my environment setting in Puppet.

Hiera-http is fully featured and supports all standard Hiera backend functions, such as hiera_hash, hiera_array and order overrides.

    Future stuff

I'm going to carry on working on new features for hiera-http, including basic auth, HTTPS/SSL, proxies and a wider variety of output handlers. I would like this backend to be flexible enough to allow users to configure Hiera against any network service that uses a RESTful API to perform data lookups. Keep watching.

    I’m joining Puppet Labs

    Back in September last year I flew over to the US for PuppetConf 2011, and more recently took a trip to Edinburgh for PuppetCamp. Both times when I returned my wife asked me how it went, and both times my answer was simple; “I’ve gotta work for these guys!” I said. So after recently returning from a short excursion to Portland, Oregon I am very thrilled and honoured to announce that I’ve accepted a full time position with Puppet Labs.

I've been an IT contractor for many years now, and started working with Puppet in 2008. Since then I've worked more and more with Puppet at numerous companies, including most recently the BBC. I've also become increasingly involved in the Puppet user community over the past four years. I have a real passion for working with the product, I love the Puppet community, and now I'm really looking forward to joining the company that made it all happen and being a part of all the awesome things to come.

    I’ll be joining Puppet Labs later this month as a Professional Services Engineer and look forward to maintaining and building upon the many relationships I have with various Puppet users as well as engaging with a wider section of the user community in my new professional capacity too. I would like to thank the various people at Puppet Labs involved in my interviews for the opportunity, with special thanks extended to Aimee Fahey (@PuppetRecruiter) for all her assistance during the hiring process.

    Designing Puppet – Roles and Profiles.

    Update, Feb 15th.

Since writing this post some of the concepts have become quite popular and have generated quite a lot of comments and questions in the community. I recently did a talk at Puppet Camp Stockholm on this subject, and hopefully I explained it a bit better there than I did below :-). The slides are available here and a YouTube video will be uploaded shortly.

    Introduction

So you've installed Puppet, downloaded some Forge modules, and probably written a few yourself too. Now what? You start applying those modules to your nodes and you're well on your way to the super-awesomeness of automated deployments. Fast forward a year or so: your infrastructure has grown considerably in size, your organisation's business requirements have become diverse and complex, and your architects have designed technical solutions to solve business problems with little regard for how they might actually be implemented. They look great in the diagrams, but you've got to fit them into Puppet. From personal experience, this often leads to a spell of fighting with square pegs and round holes, and the if statement starts becoming your go-to guy because you just can't do it any other way. You're probably now thinking it's time to tear down what you've got and re-factor. Time to think about higher level design models to ease the pain.

There is a lot of very useful guidance in the community surrounding Puppet design patterns for modules, managing configurable data and class structure, but I still see people struggling with tying all the components of their Puppet manifests together. This seems to me to be an issue with a lack of higher level code base design. This post tries to explain one such design model, which I refer to as "Roles/Profiles", that has worked quite well for me in solving some of the more common issues encountered when your infrastructure grows in size and complexity, and as such, the requirements of good code base design become paramount.

The design model laid out here is by no means my suggestion on how people should design Puppet; it's an example of a model that I've used with success before. I've seen many varied designs, some good and some bad, and this is just one of them. I'm very interested in hearing about other design models too. The point of this post is to demonstrate the benefits of adding an abstraction layer before your modules.

    What are we trying to solve

I've spent a lot of time trying to come up with what I see as the most common design flaws in Puppet code bases. One source of problems is that users spend a lot of time designing great modules, then include those modules directly on the node. This may work, but when dealing with large and complex infrastructures it becomes cumbersome and you end up with a lot of node level logic in your manifests.

    Consider a network consisting of multiple different server types. They will all share some common configuration, some subsets of servers will also share configuration while other configuration will be applicable only to that server type. In this very simple example we have three server types. A development webserver (www1) that requires a local mysql instance and PHP logging set to debug, a live webserver (www2) that doesn’t use a local mysql, requires memcache and has standard PHP logging, and a mail server (smtp1). If you have a flat node/module relationship with no level of abstraction then your nodes file starts to look like this:
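Something like this, where the module names and the php parameter are illustrative:

```puppet
node 'www1' {
  include networking
  include users
  include apache
  include tomcat
  include jdk
  include mysql
  class { 'php': loglevel => 'debug' }   # hypothetical parameter
}

node 'www2' {
  include networking
  include users
  include apache
  include tomcat
  include jdk
  include memcache
  include php
}

node 'smtp1' {
  include networking
  include users
  include exim   # illustrative choice of MTA module
}
```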

Note: if you're already thinking about ENCs, this will be covered later.

As you can see, the networking and users modules are universal across all our boxes; Apache, Tomcat and the JDK are used for all webservers; some webservers have MySQL; and PHP logging options vary depending on what type of webserver it is.

At this point most people try to simplify their manifests by using node inheritance. In this very simple example that might be sufficient, but it's only workable up to a point. If your environment grows to hundreds or even thousands of servers, made up of 20 or 30 different types of server, some with shared attributes and subtle differences, spread out over multiple environments, you will likely end up with an unmanageable tangled web of node inheritance. Nodes can also inherit only one other node, which will be restrictive in some edge cases.

    Adding higher level abstraction

One way I have found to minimise the complexity of node definitions, and to make handling nuances between different server types and edge case scenarios a lot easier, is to add a layer (or in this case, two layers) of separation between my nodes and the modules they end up calling. I refer to these as roles and profiles.

    Consider for a moment how you would represent these servers if you weren’t writing a Puppet manifest. You wouldn’t say “www1 is a server that has mysql, tomcat, apache, PHP with debug logging, networking and users” on a high level network diagram. You would more likely say “www1 is a dev web server” so really this is all the information I want to be applying directly to my node.

So after analysing all our nodes we've come up with three distinct definitions of what a server can be: a development webserver, a live webserver and a mailserver. These are your server roles; they describe what the server represents in the real world. In this design model a node can only ever have one role, it can't be two things simultaneously. If your business now has an edge case requiring QA webservers that are the same as live servers but incorporate some extra software for performance testing, then you've just defined another role, a QA webserver.

    Now we look at what a role should contain. If you were describing the role “Development webserver” you would likely say “A development webserver has a Tomcat application stack, a webserver and a local database server”. At this level we start defining profiles.

    Unlike roles, which are named in a more human representation of the server function, a profile incorporates individual components to represent a logical technology stack. In the above example, the profile “Tomcat application stack” is made up of the Tomcat and JDK components, whereas the webserver profile is made up of the httpd, memcache and php components. In Puppet, these lower level components are represented by your modules.

     


Now our node definitions look a lot simpler and are representative of their real world roles…
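For example, using an illustrative naming convention for the role classes:

```puppet
node 'www1' {
  include role::www::dev
}

node 'www2' {
  include role::www::live
}

node 'smtp1' {
  include role::mailserver
}
```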

    Roles are simply collections of profiles that provide a sensible mapping between human logic and technology logic. In this scenario our roles may look something like:
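A sketch of the role classes, with the class names and profile groupings as illustrative choices:

```puppet
class role {
  include profile::base   # everything gets the base profile
}

class role::www inherits role {
  include profile::tomcat
}

class role::www::dev inherits role::www {
  include profile::webserver::dev
  include profile::database
}

class role::www::live inherits role::www {
  include profile::webserver::live
}

class role::mailserver inherits role {
  include profile::mailserver
}
```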

Whether or not you choose to use inherited classes in the way I have done is up to you, of course; some people steer clear of inheritance completely, others overuse it. Personally I think it works well for the purposes of laying out roles and profiles to minimise duplication.

    The profiles included above would look something like the following
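Again as a sketch; the component module names and the php parameter are illustrative:

```puppet
class profile::base {
  include networking
  include users
}

class profile::tomcat {
  include jdk
  include tomcat
}

class profile::webserver {
  include apache
  include memcache
}

class profile::webserver::dev inherits profile::webserver {
  class { 'php': loglevel => 'debug' }   # hypothetical parameter
}

class profile::webserver::live inherits profile::webserver {
  class { 'php': loglevel => 'warn' }    # hypothetical parameter
}

class profile::database {
  include mysql
}

class profile::mailserver {
  include exim
}
```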

In summary, the "rules" surrounding my design can be simplified as:

    • A node includes one role, and one only.
    • A role includes one or more profiles to define the type of server
    • A profile includes and manages modules to define a logical technical stack
    • Modules manage resources
    • Modules should only be responsible for managing aspects of the component they are written for

    Let’s just clarify what we mean by “modules”

    I’ve talked about profiles and roles like they are some special case and modules being something else. In reality, all of these classes can be, and should be modularised. I make a logical distinction between the profile and role modules, and everything else (e.g.: modules that provide resources).

    Other useful stuff to do with profiles.

So far I've demonstrated using profiles as collections of modules, but they have other uses too. As a rule of thumb, I don't define any resources directly in roles or profiles; that is the job of my modules. However, I do realise virtual resources and occasionally do resource chaining in profiles, which can solve problems that would otherwise have meant editing modules, and handle other functionality that doesn't quite fit within the scope of an individual module. Adding that kind of functionality at the module level would reduce the re-usability and portability of your module.

Hypothetically, let's say I have a module, called foo for originality's sake. The foo module provides a service type called foo. In my implementation I have another module called mounts that declares some mount resource types, and I want all mount resources to be applied before the foo service is started, as without the filesystems mounted the foo service will fail. I'll go even further and say that foo is a Forge module that I really don't want to (and shouldn't have to) edit, so where do I put this configuration? This is where having the profiles level of abstraction is handy. The foo module is coded perfectly; it's the use case determined by my own technology stack that requires my mount points to exist before the foo service, so since my stack is defined in the profile, this is where I should specify it. e.g.:
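In a profile that might look like this, using the hypothetical foo and mounts modules from above:

```puppet
class profile::foo {
  include mounts
  include foo

  # make sure every mount resource is applied before the foo service starts
  Mount <| |> -> Service['foo']
}
```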

It's widely known that good modules are modules that you don't need to edit. Quite often I see people reluctant to use Forge modules because their set up requires some peripheral configuration or dependencies not included in the module. Modules exist to manage resources directly related to what they were written for. For example, someone may choose to edit a Forge mysql module because their set up has a dependency on MMM being installed after MySQL (purely hypothetical). The mysql module is not the place to do this; mysql and mmm are separate entities and should be configured and contained within their own modules. Tying the two together is something you've defined in your stack, so again, this is where your profiles come in…

This approach is also potentially helpful for those using Hiera. Although Hiera and Puppet are to become much more fused in Puppet 3.0, at the moment people writing Forge modules have to choose whether or not to make them work with Hiera, and people running Hiera have to edit the modules that aren't Hiera-enabled. Take a hypothetical module from the Forge called fooserver. This module exposes a parameterized class that has an option for port; I want to source this variable from Hiera but the module doesn't support it. I can add this functionality in the profile without needing to edit the module.
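For example, wrapping the hypothetical fooserver class in a profile and feeding it from Hiera (the key name and default are illustrative):

```puppet
class profile::fooserver {
  class { 'fooserver':
    port => hiera('fooserver_port', '8080'),   # hypothetical Hiera key
  }
}
```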

    What about using an ENC?

So you're probably wondering why I haven't mentioned using an ENC (External Node Classifier). The examples above don't use any kind of ENC, but the logic behind adding a layer of separation between your nodes and your modules is still the same. You could decide to use an ENC to determine which role to include on a node, or you could build or configure an ENC to perform all the logic and return the list of components (modules) to include. I prefer using an ENC in place of node definitions to determine what role to include, and to keep the actual roles and profiles logic within Puppet. My main reason for this is that I get far greater control of things such as resource chaining, class overrides and integration with things like Hiera at the profile level, and this helps overcome some tricky edge cases and complex requirements.

    Summary

None of the above is set in stone; what I hope I've demonstrated, though, is that adding a layer of abstraction to your Puppet code base design can have some significant benefits that will help you avoid pitfalls when you start dealing with extremely complex, diverse and large scale set ups. These include:

    • Reducing complexity of configuration at a node level
    • Real-world terminology of roles improves “at-a-glance” visibility of what a server does
    • Definition of logical technology stacks (profiles) gives greater flexibility for edge cases
    • Profiles provide an area to add cross-module functionality such as resource chaining
    • Modules can be granular and self-contained, and are tied together in profiles, reducing the need to edit modules directly
    • Reduced code duplication

    I use Hiera to handle all of my environment configuration data, which I won’t go into detail about in this post. So, at a high level my Puppet design can be represented as;

[Diagram: high-level view of the roles/profiles Puppet design]

As I said previously, this is not the way to design Puppet, but an example of one such way. The purpose of this post is to explore higher level code base design for larger and more complex implementations of Puppet. I would love to hear about other design models that people have used, either successfully or not, and what problems they solved for you (or introduced :)), so please get in touch with your own examples.