Managing Puppet Secrets with Jerakia and Vault

Introduction and History

Over the past couple of years I’ve talked a lot about a project called Jerakia. Jerakia is a data lookup system inspired by Hiera, but built as a stand-alone solution that is decoupled from Puppet, or any particular configuration management system. Thanks to its REST API server architecture it offers opportunities to integrate with tools other than Puppet, and because it is configurable with a native Ruby DSL it also offers a solution for people with far-reaching edge cases around data complexity that are hard or impossible to solve in Hiera.  If you’ve never heard of Jerakia before, you can read my initial blog post that covers the basics or see the official website.

Being able to deal with secret data, such as passwords and other sensitive values that need to be served by Puppet, has proved to be a very important requirement for Puppet users.  Shortly after Hiera was first released as a third party tool by R.I. Pienaar in 2011 I developed one of the first pluggable backends for it, the now deprecated hiera-gpg.  hiera-gpg became hugely popular very quickly, as people finally had a way to store sensitive production data alongside other non-production data (e.g. in the same Git repo) without compromising it, since anyone browsing the Git repo could only see the encrypted form of the key values.

As hiera-gpg grew in popularity as the first plugin of its kind to solve the problem, it also suffered from a few design limitations, and eventually hiera-eyaml was developed and became the next evolutionary step for handling sensitive data in Hiera.  hiera-eyaml had a better and more modern design than hiera-gpg and has served many users well over the years, but it re-implements a lot of what the YAML backend does, with added capabilities to handle encryption.  Hiera has always had the ability to support pluggable backends so you can source your data from a variety of different systems, whether they be files, databases or REST API services, but to support encryption within a Hiera lookup you are tied to using a file based YAML backend.

Jerakia initially released with the ability to handle encrypted values from any data source, and up until now it has done that using a mish-mash of the hiera-eyaml library to provide the decryption mechanism.  I’ve always felt this level of integration wasn’t ideal: hiera-eyaml was never designed to be a standalone solution used outside of Puppet and Hiera, and the role of providing reliable and secure encryption for your sensitive data is an important one.  So I started looking at platforms that were built specifically for encryption, and more importantly, a shared encryption solution that I could use throughout my toolchain while still maintaining the flexibility to store data where and how I want.  I’ve settled on Vault (but you don’t have to!).

Vault

Vault is an open source encryption platform by HashiCorp, the makers of many great software platforms such as Vagrant and Terraform.  Vault is a highly feature-rich system for handling all of your encryption and cryptography needs; most of its features I won’t even touch on in this post, since there are so many.  To take a quote directly from the website:

Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.

Many people use Vault as a place to store their secret data, like an encrypted database, and use either the command line or the HTTP API to authenticate and retrieve the encrypted values.  Jerakia strives not to be a database but to offer users flexibility in where and how they want to store their data, while performing hierarchical lookups in a uniform fashion regardless of the source of the data, so I was particularly interested in Vault when I read about the introduction of the transit backend that is now available.

In a nutshell, the transit backend turns Vault from an encrypted database into “cryptography as a service”.  You create an encryption key that authenticated clients can use to encrypt and decrypt data on-the-fly using Vault’s API, but Vault itself never stores the encrypted or decrypted data; as a dedicated encryption platform it offers an excellent level of protection around authentication and key storage.

This immediately seemed like a great way of providing the encryption functions needed to support sensitive data in Puppet, other tools, and a tool like Jerakia.  So the 2.0.0 release of Jerakia now has native support for Vault integration using the transit secret backend.

Jerakia Encryption

Jerakia has always had the concept of output filters. These are pluggable, data source agnostic, and give you the ability to pass the results of any data lookup, from any source, to a filter that can modify the results before they are sent back to the requestor.  In Jerakia 1.x there was an output filter called encryption, which tried to pick out hiera-eyaml style encoded strings and decrypt them using the slightly hacky integration I touched on earlier.

In Jerakia 2.0 the concept of encryption has become more of a first class citizen, and it’s also pluggable.  In Jerakia 2.0 you can enable encryption and specify a provider for the encryption mechanism you want to use – the provider shipped in 2.0 is vault, but the API allows you to extend Jerakia with your own providers if you wish, even hiera-eyaml.

The encryption output filter will now use whichever encryption provider has been configured to decrypt data regardless of the source, based on a signature (a regular expression) that is advertised by the provider. If a returned value matches the regular expression advertised by the provider’s signature, the encryption output filter will flag it as an encrypted value and attempt to decrypt it; otherwise the value is returned unaltered.  We’re also not tied to a particular data source: Jerakia will detect and decrypt the data no matter which data source is used for the lookup.

Furthermore, an encryption provider can advertise an encrypt capability, which allows you to encrypt and decrypt values right on the command line using the Jerakia CLI.  You can encrypt a secret string, copy it into your data – whether that be YAML files, a database or some other data source – and that’s it.

Jerakia + Vault

So I’ve covered Vault and the encryption capabilities provided by Jerakia – now let’s look at how to tie the two together, using Vault’s transit backend as an encryption provider for Jerakia, and therefore handle secrets in Puppet / Hiera.

To start with, you’ll need to install and configure Vault and unseal it. Once unsealed, there are a few steps we need to cover in Vault before we integrate Jerakia.  The following steps assume you have an unsealed Vault installation that you can run root commands against.

Configuring Vault

The first thing we need to do is enable the transit backend in Vault. This can be achieved by mounting it with the following command.
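On the Vault CLI of that era the backend is mounted like this (newer Vault releases use vault secrets enable transit instead):

    $ vault mount transit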

Once the backend is mounted, we need to create a key for Jerakia to use for encryption and decryption of values.  The name of the key is configurable, but by default Jerakia will try to use a key called jerakia.
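A transit key is created by writing to transit/keys/<name>, so something along these lines should create the jerakia key:

    $ vault write -f transit/keys/jerakia
    Success! Data written to: transit/keys/jerakia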

Now we have a dedicated key that Jerakia will use for encrypting and decrypting data.  The second step is to create a policy to restrict activity to just the endpoints used for encrypting and decrypting; we’ll also call that policy jerakia. To create the policy, we’re going to create a new file called jerakia_policy.hcl and then import it into Vault.

The file should contain the following rules;
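A minimal policy only needs update access to the encrypt and decrypt endpoints for the jerakia key; a sketch of jerakia_policy.hcl might be:

    path "transit/encrypt/jerakia" {
      capabilities = ["update"]
    }

    path "transit/decrypt/jerakia" {
      capabilities = ["update"]
    }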

Once created and saved, we need to import the policy into vault.
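With the Vault CLI of that era the policy is imported like so (newer releases use vault policy write):

    $ vault policy-write jerakia jerakia_policy.hcl
    Policy 'jerakia' written.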

We can do a quick test now to make sure everything is OK by trying to encrypt a value on the command line using the Jerakia transit key and the policy that we’ve just created.
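The transit backend expects base64 encoded plaintext, so a simple test could look something like this (the ciphertext shown is illustrative):

    $ vault write transit/encrypt/jerakia plaintext=$(echo -n "secret" | base64)
    Key           Value
    ---           -----
    ciphertext    vault:v1:dGhpcyBpcyBqdXN0IGFuIGV4YW1wbGU=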

If you had a similar output, then we’re good to go! But before we can plug Jerakia into this mix we need to give Jerakia itself something to authenticate with against Vault.  We could use a simple Vault token, which is supported, but this raises issues of expiry and renewal which we probably don’t want to be dealing with every 30 days (or whatever you set your token TTL to).  The recommended way of authenticating to Vault is to use the AppRole authentication backend.  When using this method of authentication, we configure Jerakia with a role_id and a secret_id, and Jerakia uses these to obtain a limited lifetime token from the Vault server to use for interacting with the transit backend API.  When that token expires, Jerakia will automatically use its role_id and secret_id to request a new one.

First we need to create a new AppRole for Jerakia. We’ll give it a token TTL of 10 minutes (optional), but it’s important that we tie this role to the access policy that we created earlier.
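Assuming the AppRole auth backend has been enabled (vault auth-enable approle on older CLI versions), creating the role might look like:

    $ vault write auth/approle/role/jerakia token_ttl=10m policies=jerakia
    Success! Data written to: auth/approle/role/jerakia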

Now we can read the AppRole jerakia and determine the role_id.
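The role_id is available at the role-id endpoint of the role (the ID below is illustrative):

    $ vault read auth/approle/role/jerakia/role-id
    Key        Value
    ---        -----
    role_id    8403910c-e563-d2f2-1c77-6e26319be8b5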

The role_id that we see here we are going to use later on.  But we’re not quite done yet: we need to create a secret_id to go with our role_id, and the combination of these two values will give Jerakia the authentication it needs to request tokens.  So let’s create that.
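A secret_id is generated from the secret-id endpoint of the role (again, the value shown is illustrative):

    $ vault write -f auth/approle/role/jerakia/secret-id
    Key          Value
    ---          -----
    secret_id    b58a3c11-5d0e-2c6d-9f1a-c3f28f5117d1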

Now we have the two crucial pieces of information that we need to integrate Jerakia with Vault: the role_id and secret_id.  Vault is now ready to be a cryptography provider for Jerakia; we just need to add some simple configuration to Jerakia to glue all this together.

Configuring Jerakia

With an out-of-the-box installation of Jerakia, encryption is not configured by default; it must be enabled.  If we look at the options on the command line for the sub-command secret, we’ll see that there are no sub-commands available.

So the first thing we need to do is enable an encryption provider and give it the configuration that it needs. We can do that in jerakia.yaml.  In the configuration file we configure the encryption option with a provider of vault and the specific configuration that our provider requires.  In this example I’m using a Vault instance that is over HTTP, not HTTPS, so I need to set vault_use_ssl to false (see the documentation for options to enable SSL).  Because I’m not using SSL I need to set the vault_addr option as well as the secret_id and role_id.
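A sketch of the relevant part of jerakia.yaml might look like the following – the exact option names and nesting should be checked against the Jerakia documentation, and the IDs are the ones generated above:

    encryption:
      provider: vault
      vault_addr: http://127.0.0.1:8200
      vault_use_ssl: false
      role_id: 8403910c-e563-d2f2-1c77-6e26319be8b5
      secret_id: b58a3c11-5d0e-2c6d-9f1a-c3f28f5117d1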

Now that I’ve configured an encryption provider in jerakia.yaml, that provider advertises its capabilities to Jerakia, and on the CLI I now see some new options available when running jerakia help secret.

Now we should be all set to test encrypting and decrypting data using the Vault encryption provider in Jerakia. We can use the CLI commands to encrypt and decrypt data, so let’s give that a spin.
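Hypothetically, encrypting and then decrypting a value from the CLI would look something like this (the exact sub-command syntax may differ between Jerakia versions, and the ciphertext is illustrative):

    $ jerakia secret encrypt s3cr3tp4ssw0rd
    vault:v1:Ag3mJU4eQbJhQ2d2VzdCBleGFtcGxlIGNpcGhlcnRleHQ=

    $ jerakia secret decrypt vault:v1:Ag3mJU4eQbJhQ2d2VzdCBleGFtcGxlIGNpcGhlcnRleHQ=
    s3cr3tp4ssw0rd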

Now Jerakia can use Vault as a provider of “cryptography as a service” in a secure, authenticated way.  The only thing left is to make our Jerakia data lookups encryption aware, and we do that by calling the encryption output filter in the lookup written in our policy.  We use the output_filter method to add a filter to our lookup, like this:
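A sketch of a lookup policy with the encryption output filter enabled – the datasource options (docroot, searchpath) here are illustrative:

    policy :default do
      lookup :default do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => [
            "hostname/#{scope[:fqdn]}",
            "environment/#{scope[:environment]}",
            'common',
          ],
        }
        output_filter :encryption
      end
    end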

The inclusion of output_filter :encryption in this lookup tells Jerakia to pass all results to the encryption filter, which will match all returned data values against the signature provided by the encryption provider and, if there is a match, use the encryption provider to decrypt the value before it is passed to the requestor.

Looking up secrets

So let’s add our encrypted value from earlier to this lookup…

This encrypted value can then be used in any type of data source that you are using with Jerakia; here we’re using the default file data source, so we’ll add it to test.yaml in our common path.
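Assuming the file data source layout used above, the encrypted string simply becomes the value of the key in common/test.yaml (path and ciphertext illustrative):

    # /var/lib/jerakia/data/common/test.yaml
    password: vault:v1:Ag3mJU4eQbJhQ2d2VzdCBleGFtcGxlIGNpcGhlcnRleHQ=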

Because this string matches the regular expression provided by the vault encryption provider’s signature, and we’ve enabled the encryption output filter, if we try to look up the key password from the namespace test (test::password in Hiera speak), Jerakia automatically decrypts the data using the Vault transit backend.
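From the command line, a lookup of that key would then return the decrypted value transparently (the CLI flags shown are illustrative):

    $ jerakia lookup password -n test
    s3cr3tp4ssw0rd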

Tying it all up into Puppet

Of course, this also means that when Puppet / Hiera is integrated with Jerakia this becomes transparent to Puppet, and we now have our Puppet secrets stored encrypted in any data source, with decryption provided by Vault.

Summary

There’s a whole lot of really awesome functionality in Vault that I haven’t even touched on in this post; it’s extensive.  Having one tool for your cryptography needs across your infrastructure, rather than a variety of smaller, less dedicated tools doing their own thing, simplifies things a lot.  If you don’t want to use Vault, the encryption feature of Jerakia is entirely pluggable and could be stripped out and replaced with whatever platform you want to use.

The subject of handling sensitive data in Puppet, and other tools, is an ongoing challenge, and I’d certainly welcome any feedback on the approach used here.

 

 


Designing modules in a Puppet 4 world

Puppet 4.0 has been around for a while now, and most of its language features have been around even longer via the future parser option in later versions of the 3.x series. Good module design has been the subject of many talks, blog posts and IRC discussions for years, and although best practice has evolved with experience, fundamentally things haven’t changed that much for a long time… until now.

Puppet 4.x introduces some of the biggest changes to the Puppet language and module functionality in a single release. Some of these new features dramatically change how you should be designing Puppet modules to get the most power out of Puppet. After diving in deep with Puppet 4 recently I wanted to write about some of the changes that I see as the most fundamental and beneficial to how modules are now written in the new age.

Type Casting

Is it a bird? Is it a plane? No, it’s a string. The issue of type casting has been a long standing irritation with Puppet. There’s a lot of insanity to be found with type casting (or lack thereof) within Puppet that I won’t go over here, but Stephen Johnson covered this really well in his talk at Puppet Camp Melbourne a while back. Puppet’s inability to natively enforce basic data types was confusing enough within the scope of Puppet itself, but when you add in ERB templates and functions these issues bleed over into the Ruby world, which is much less forgiving of Puppet’s whimsical attitude to what a variable actually is, and there have been countless problems as a result. Fortunately, Puppet 4.0 has finally come to address this shortcoming, and this really is a great leap forward for the Puppet language.

Previously we would create a class with parameters – variables that might be any number of types. Then, if we were being careful, we could use the validate_* functions from stdlib to check that the provided values met the required type, or we could just blindly hope for the best. Puppet 4.0 now has an in-built way of doing this, by letting you declare the types of the parameters that a class accepts. E.g.:
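A sketch of a typed parameterised class (the class and parameter names are illustrative):

    class webapp (
      String           $version,
      Integer          $port   = 8080,
      Boolean          $ssl    = false,
      Array[String]    $admins = [],
      Optional[String] $vhost  = undef,
    ) {
      # resources go here
    }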

For the most part I’ve found this to be an excellent addition: it’s much easier to define your data types when writing classes, seeing what types a class supports has become a whole lot more readable, and it removes the need to use external functions to validate your data. There still seems to be some quirky behaviour with it, though, that makes me wonder how solid these data types are under the hood – the difference between a string and an integer isn’t as well enforced as you might expect in other languages. Take this example;

[Edit: see comments]. But quirks aside, using data types in your classes will help with validation and readability of your module, and you should always use them.

Data

Back in the early days of Puppet, managing your site specific data was a nightmare involving hard coded variables and elaborate coding patterns (a.k.a. nasty hacks) that led to non-reusable, un-sharable modules, with each organisation maintaining its own copy of modules to manage software. Then in 2011 Hiera was born, and all that changed. The ability to maintain data separately from Puppet code gave way to a new generation of Puppet modules that could be shared and used without modifying the code, and the Puppet Forge flourished as a result with dependable and maintainable modules. It wasn’t long before Hiera was incorporated officially into Puppet core and Puppet released the data binding feature, which automatically looks up class parameters – so by simply declaring include foo, any parameters of the foo class will be automatically looked up from Hiera. This led to a design pattern that became very popular with module authors to take advantage of the data binding features of Hiera while giving the module author some degree of flexibility in setting dynamic default values. The pattern is commonly known as the params pattern. In short, it works by a base class inheriting a class called params and setting all of the base class’s parameter defaults to variables defined within the params class, e.g.:
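A cut-down example of the params pattern (the module name and parameters are illustrative):

    class foo (
      $package_name = $foo::params::package_name,
      $service_name = $foo::params::service_name,
    ) inherits foo::params {
      # ...
    }

    class foo::params {
      case $::osfamily {
        'RedHat': {
          $package_name = 'foo'
          $service_name = 'foo'
        }
        'Debian': {
          $package_name = 'foo-server'
          $service_name = 'foo-daemon'
        }
        default: {
          fail("Unsupported osfamily ${::osfamily}")
        }
      }
    }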

This widely adopted pattern gives the implementor of the module the ability to override settings directly from Hiera by setting foo::some_setting, but also gives the module author more flexibility to dynamically set intelligent defaults. I don’t think it’s a terrible pattern, and it’s certainly the cleanest way of designing a module in Puppet 3, but it can get complicated with data nested in deep conditionals in your class. We already have a much better, proven way of handling data in Puppet using Hiera, so why can’t we adopt the same approach for module defaults? This was an idea first proposed by R.I. Pienaar, original author of Hiera, who went on to release a POC for it. The idea was solid and made a lot of sense, and Puppet has now adopted this approach natively in 4.3. Puppet modules can now ship with their own self-contained Hiera data in the module, which Puppet will use as a fallback if no other user-defined data is found (e.g. your regular Hiera data). So using data in modules our class now looks like:
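With the defaults moved into data, the class itself only declares its typed parameters (again illustrative):

    class foo (
      String $package_name,
      String $service_name,
    ) {
      # ...
    }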

We can then enable a module-specific Hiera configuration to read the variable defaults from ./data
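In the Puppet 4.3-era implementation this meant setting the data provider in the module’s metadata.json and shipping a version 4 hiera.yaml in the module root – roughly:

    # metadata.json (excerpt)
    "data_provider": "hiera"

    # hiera.yaml (in the module root)
    ---
    version: 4
    datadir: data
    hierarchy:
      - name: "common"
        backend: yaml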

Class parameter defaults can now be easily read in a familiar Hiera layout
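For example, data/common.yaml inside the module might contain:

    ---
    foo::package_name: 'foo'
    foo::service_name: 'foo'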

I’m really only touching the surface of the new data in modules additions here – there are plenty more really cool features available, including using Puppet functions to provide data defaults. R.I. Pienaar wrote an excellent article covering this in a bit more detail, and the official Puppet documentation explains things in depth.

The main advantage of using data in modules over params.pp is the ability to store your module parameter defaults in an easy-to-read style; it maintains a degree of consistency between how module data and other site data are handled and avoids cumbersome in-code setting of data, such as the params.pp pattern. I love the fact that I can download a third party module off the Forge and easily look through all of the configured defaults in a nicely formatted YAML hierarchy, rather than trawling through a mish-mash of conditional logic trying to understand how a variable gets set. This feature is a major plus.

Iteration

The final enhancement to the Puppet language I want to focus on in this post is iteration. The idea of iteration in Puppet was something that historically I was against. People have been asking for loops in Puppet since it was first released; I always felt that they weren’t necessary and detracted from the declarative principles of Puppet. I like to think that I was half right: most people wanted iteration because they didn’t understand the declarative aspects of Puppet, and if they thought differently about their problem they would realise that 99% of the time defined resources were a perfectly fitting solution. Puppet, and how people use it, has changed since I formed those opinions. In the new age of data separation, where we try to avoid hard coding site specific data inside Puppet code, that data now lives in a structured format inside Hiera, and we need a way to take that data model and manage Puppet resources from it.

The currently well adopted method of doing this is to use the create_resources() function in Puppet. The idea is simple enough: the function takes a resource type as an argument, followed by a hash where the keys represent resource titles and the values are nested hashes containing the attributes for those resources, and voilà, it creates Puppet resources in your catalog.
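For example, the call below (the user names and attributes are illustrative)…

    $users = {
      'alice' => { 'uid' => 1001, 'shell' => '/bin/bash' },
      'bob'   => { 'uid' => 1002, 'shell' => '/bin/zsh' },
    }

    create_resources('user', $users)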

Is a more dynamic, data driven way of declaring;
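…resources that would otherwise be written out by hand:

    user { 'alice':
      uid   => 1001,
      shell => '/bin/bash',
    }

    user { 'bob':
      uid   => 1002,
      shell => '/bin/zsh',
    }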

I have a love/hate relationship with create_resources(). It feels like a sticking plaster designed to hammer Puppet into doing something it fundamentally wasn’t designed to do, but on the other hand, in the absence of any other solution I couldn’t have survived without it. It’s also a fairly restrictive solution: in the ideal world of data separation I should be able to model my data in the way that best represents it, which may look quite different from how it is represented as Puppet resources. The create_resources() pattern offers no way to filter data or to munge the structure of a hash, for example. Take the following example;
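Suppose the users are stored as a simple hash of username to UID (a hypothetical Hiera snippet):

    users:
      alice: 1001
      bob:   1002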

If I want to model the above data structure as user resources, with the UIDs corresponding to the value of each element, I cannot do this by pointing create_resources() directly at the hash, since the function expects the hash to be representative of resource titles and attributes. I could parse this with a function to munge the data first, or I could write an elaborate but messy defined resource type to do it, but neither of these seems ideal. Examples like this prove that Puppet desperately needed more, and now (some would say FINALLY) it has arrived in the form of iterators and loops natively included in the language. Using iteration we have a lot more flexibility in scenarios such as this, and I can easily solve the above dilemma with a very simple iterator;
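A sketch using the each() function over the hash from above:

    $users = lookup('users')

    $users.each |String $username, Integer $uid| {
      user { $username:
        ensure => present,
        uid    => $uid,
      }
    }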

Despite my initial reluctance about iteration some years ago, it’s obvious now that this is a huge improvement to the language and bridges the gap between modelling data and declaring resources. I don’t, however, agree with some camps that say defined resources are obsolete; I think they still have a genuine place in the world. If you are managing a bunch of resources from a hash it is still going to be cleaner in some circumstances to have a defined resource type to model that behaviour rather than declaring lots of resources within an iterator loop, but having iterators gives authors the ability to choose which method best achieves their objectives in the cleanest way.

Summary

I have touched on three major changes that will affect how people write modules going forward, and there are plenty more great features in 4.x worthy of discussion that I will go into in future posts. But I feel the above changes are the most relevant when it comes to module design patterns, and they offer a huge improvement in module quality and functionality. In a nutshell;

  • validate functions (for basic types) are dead. Validate your parameters using native types
  • params.pp is dead. Use data in modules
  • create_resources() is dead. Use iterators

Using data schemas with Jerakia 0.5

Introduction

In my previous posts I introduced and talked about a new data lookup tool called Jerakia.  I also recently gave a talk about it at Config Management Camp in February.

This week saw the release of version 0.5.0.  You can view the release notes here.  In this post I am going to focus on one new feature that has been added to this release, data schemas.

What are data schemas?

In short, schemas provide a definition layer to control lookup behaviour for particular keys and namespaces.  So, what do we mean by lookup behaviour?  Jerakia is a hierarchy based lookup tool and there are a couple of different ways that it can perform a search.  The simplest form of lookup will walk through the hierarchy and return the first result found.  A Jerakia lookup may also request a cascading lookup by setting the cascade option to true.  For a cascading lookup, Jerakia will continue to walk through the hierarchy, search for all instances of the requested key and combine them into a hash or an array.  When performing a cascading lookup, the requestor can select the desired merge behaviour using the merge flag of the lookup.  Supported options for this parameter are array, hash and deep_hash.  When performing a cascading hash lookup, hashes are merged and key conflicts at the first level of the hash are overwritten according to their priority, whereas deep_hash attempts to merge the hash at all nested levels.

These concepts will be familiar to Hiera users as the legacy functions hiera_array() and hiera_hash().

Why use schemas?

The ability to do hash or array type lookups is a very useful and popular feature, and has always existed in Hiera.  For a long time the problem was where you should, or could, declare that a particular key should be looked up this way.  Initially people used hiera_hash() and hiera_array() directly in their modules.  This has several drawbacks though.  Most notably, the incompatibility with Puppet’s data binding feature meant hash and array lookups had to be dealt with separately outside of the class’s parameters, and hard coding Hiera functions within modules is not best practice, as it assumes that the implementor of the module is using Hiera in the first place.

Later versions of Puppet and Hiera have made this nicer.  The new lookup functionality in Puppet 4 provides a lookup() function that takes a lookup strategy as an argument.  This is certainly nicer than the legacy Hiera functions as it is provider agnostic – you can swap out Hiera for a different data lookup tool, such as Jerakia, transparently – which makes it more acceptable to use this function in a module.  And there is the new data in modules feature, which allows a Puppet module to determine the lookup behaviour of the parameters its classes contain.

I think this approach is definitely going in the right direction, and it solves the problem of overriding behaviour for parameterised classes.

Jerakia takes a new approach and provides a new layer of logic called schemas.  When Jerakia receives a lookup request for a key, it first performs a search within the schema for the key and if found, will override lookup and merge behaviour based on what is defined in the schema.

The advantage of using schemas is that a user can download a module from the forge and override the lookup behaviour of the keys without modifying any of the Puppet code or adding anything Puppet specific to the data source.

How schemas work

Controlling lookup behaviour

When a request for a lookup key is received by Jerakia, it first performs a lookup against the schema.  It currently uses the in-built file datasource to perform a separate lookup, but the source of data read by this lookup is different from the main lookup.  By default, Jerakia will search for a JSON (or YAML) file with a name corresponding to the namespace of the request in /var/lib/jerakia/schema.  Within this JSON document it searches for the key corresponding to the lookup key requested.  The data returned can override the lookup behaviour.  For example;
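A sketch of what /var/lib/jerakia/schema/accounts.json could look like – the exact option names should be checked against the Jerakia schema documentation:

    {
      "sysadmins": {
        "cascade": true,
        "merge": "array"
      }
    }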

The above example will override a lookup for the key sysadmins in the namespace accounts (accounts::sysadmins) to be a cascading search that merges the results into an array (just like hiera_array()).

The big advantage here is that this data is separated from our actual configuration data, which could be in a YAML file structure, database, REST API endpoint…etc.

Using schema aliases

Another feature of schemas is the ability to create pseudo keys and namespaces that can be looked up and mapped to other keys within the data.  Schemas have the ability to override the namespace and key part of a request on the fly.  As a very hypothetical example, let’s say you have an array of domains in your data defined as webserver::domains, e.g.:
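In a YAML file data source that might simply be (the domains are illustrative):

    # webserver.yaml
    domains:
      - www.example.com
      - static.example.com
      - blog.example.com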

If you need the same data to populate the vhosts parameter of a class called apache you could simply alias this in the schema rather than declaring the data twice or performing lookups from within Puppet…
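A schema entry for the apache namespace might then alias the key back to webserver::domains – again, the exact alias syntax should be confirmed against the Jerakia documentation:

    {
      "vhosts": {
        "alias": {
          "namespace": "webserver",
          "key": "domains"
        }
      }
    }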

The above schema entry will mean that lookups for apache::vhosts and webserver::domains will return the same data set.

You can also use a combination of aliases and lookup overrides to declare a pseudo key that looks up data in a different way.
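Continuing the hypothetical syntax from above, a security.json schema file could define two pseudo keys that both point at firewalld::rich_rules but with different lookup behaviour:

    {
      "firewall_rules": {
        "alias": { "namespace": "firewalld", "key": "rich_rules" }
      },
      "all_firewall_rules": {
        "alias": { "namespace": "firewalld", "key": "rich_rules" },
        "cascade": true,
        "merge": "array"
      }
    }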

Here we have created two pseudo keys, security::firewall_rules and security::all_firewall_rules, both of which alias to the firewalld::rich_rules data set but will be looked up in different ways.  The security namespace itself may not even exist in the actual data set.

Future plans for schemas

The current implementation of schemas is fairly basic.  I see this as becoming quite a fundamental part of Jerakia in the future, and it’s an area that could gain functionality such as sub-lookups, views and even light “stored procedure” type functions to add some powerful capabilities to data lookups whilst keeping the actual source of data in its purest form, thus not stifling innovation in data source back ends.

Although currently limited to searching JSON or YAML files, schema searches are actually done with Jerakia lookups – the same functionality that performs a regular lookup – so it should be trivial to allow users a lot more flexibility in how schema searches are done by using custom data sources and policies in future releases.

Want to know more?

Check out the Jerakia docs for how to configure the behaviour of schemas and more on how to use them.

Next up…

Puppet 4.0 delivered some great new functionality around data lookups, including environment data providers and the internal lookup functions that I feel will go really well with Jerakia.  I’m currently working on integration examples and a new environment data provider for Jerakia that will be available soon.

 


Extending Jerakia with lookup plugins

Introduction

In my last post, I introduced Jerakia as a data lookup engine for Puppet and other tools. We looked at how to use lookup policies to get around complex requirements and edge cases, including providing different lookup hierarchies to different parts of an organisation. In this post we are going to look at extending the core functionality of Jerakia. Jerakia is very pluggable, from data sources to output filters, and I’ll cover all of them in the coming days, but today we are going to cover lookup plugins.

Lookup plugins

Last week we looked at Jerakia policies, which are containers for lookups. A lookup, at the very least, contains a name and a datasource. A classic lookup would be;
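A minimal sketch of a policy with a single file based lookup (the docroot and searchpath are illustrative):

    policy :default do
      lookup :default do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => [
            "hostname/#{scope[:fqdn]}",
            'common',
          ],
        }
      end
    end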

Within a lookup we have access to both the request and all scope data sent by the requestor. Having access to read and modify these gives us a great deal of flexibility. Jerakia policies are written in a Ruby DSL, so there is nothing stopping you from putting any amount of functionality directly in the lookup. However, that makes for rather long and complex policies and isn’t easy to share or re-use. The recommended way to add extra functionality to a lookup is therefore to use the plugin mechanism.

As an example, let’s look at how Jerakia differs from a standard Hiera installation in terms of data structure and filesystem layout, comparing Hiera’s YAML backend with Jerakia’s file datasource. Puppet data mappings are requested from Hiera as modulename::key and are searched for in the entries of the configured hierarchy. Jerakia has the concept of a namespace and a key, and when requesting data from Puppet the namespace is mapped to the module name initiating the request; Jerakia looks for a filename matching the namespace, with the variable name as the key. Take this example;
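Suppose Puppet needs the port parameter of a hypothetical apache module:

    # Puppet requests the 'port' parameter of the apache class
    class apache (
      $port = '80',
    ) {
      # ...
    }
    # Hiera lookup key:  apache::port
    # Jerakia request:   namespace 'apache', key 'port'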

A standard hiera filesystem would contain something like;
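For example, with a single common level in the hierarchy:

    # /etc/puppet/hieradata/common.yaml
    ---
    apache::port: 80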

In Jerakia, by default this would be something like;
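The Jerakia file datasource instead expects a file per namespace, with the bare key inside it:

    # /var/lib/jerakia/data/common/apache.yaml
    ---
    port: 80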

The difference is subtle enough, and if we wanted to use Jerakia against a Hiera-style file layout with keys formatted as module::key, we could manipulate the request to add the first element of request.namespace to the key, separated by ::, and then drop the namespace completely. You could implement this directly in the lookup, but a better way is to use a plugin, keeping the functionality modular and shareable. Jerakia ships with a plugin to do just this; it’s called, unsurprisingly, hiera.

Using lookup plugins

To use a plugin in a lookup it must be loaded using the :use parameter to the lookup block, eg:
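For example (the datasource options are illustrative):

    lookup :default, :use => :hiera do
      datasource :file, {
        :format     => :yaml,
        :docroot    => '/var/lib/hiera',
        :searchpath => [ 'common' ],
      }
    end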

If you want to use more than one plugin, the argument to :use can also be an array
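For instance:

    lookup :default, :use => [ :hiera, :mystuff ] do
      # ...
    end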

Once a plugin is loaded into the lookup, it exposes its methods in the plugin.name namespace. For example, the hiera plugin has a method called rewrite_lookup which rewrites the lookup key and drops the namespace from the request, as described above. So to implement this functionality we would call the method using the plugin mechanism;
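Roughly like so:

    lookup :default, :use => :hiera do
      datasource :file, {
        :format     => :yaml,
        :docroot    => '/var/lib/hiera',
        :searchpath => [ 'common' ],
      }
      plugin.hiera.rewrite_lookup
    end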

Writing plugins

Lookup plugins are loaded as jerakia/lookup/plugin/pluginname from the Ruby load path, meaning they can be shipped as a rubygem or placed under jerakia/lookup/plugin relative to the plugindir option in the configuration file. The boilerplate template for a plugin is formed by creating a module with a name corresponding to your plugin name inside the Jerakia::Lookup::Plugin class… in reality that looks a lot simpler than it sounds.
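A sketch of the boilerplate for a plugin called mystuff:

    # jerakia/lookup/plugin/mystuff.rb
    class Jerakia::Lookup::Plugin
      module Mystuff
        # plugin methods go here
      end
    end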

We can now define methods inside this plugin that will be exposed to our lookups in the plugin.mystuff namespace. For this example we are going to generate a dynamic hierarchy based on a top level variable, role. The variable contains a colon delimited string, and starting with the deepest level we construct a hierarchy up to the top. For example, if the role variable is set to web::frontend::application_foo we want to generate a search hierarchy of;
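That is, something along these lines, deepest first:

    web::frontend::application_foo
    web::frontend
    web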

To do this, we will write a method in our plugin class called role_hierarchy and then use it in our lookup. First, let’s add the method;
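A minimal sketch of such a method, assuming the role is available in the request scope as scope[:role]:

    class Jerakia::Lookup::Plugin
      module Mystuff
        def role_hierarchy
          parts = scope[:role].to_s.split('::')
          # Build the search paths from the deepest level up to the top
          parts.length.downto(1).map { |depth| parts.first(depth).join('::') }
        end
      end
    end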

We can now use this within our policy by loading the mystuff plugin and calling our method as plugin.mystuff.role_hierarchy. Here is the final lookup policy using our new plugin;
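Putting it all together (again, the docroot is illustrative):

    policy :default do
      lookup :default, :use => :mystuff do
        datasource :file, {
          :format     => :yaml,
          :docroot    => '/var/lib/jerakia/data',
          :searchpath => plugin.mystuff.role_hierarchy + [ 'common' ],
        }
      end
    end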

Conclusion

My example here is pretty simple, but it demonstrates the flexibility of Jerakia to create a dynamic search hierarchy. With access to the request object and the requestor’s scope, lookup plugins can be very powerful tools for getting around the most complex of edge cases. Being able to write Jerakia policies in a native Ruby DSL is great for flexibility, but runs the risk of an excessive amount of code complicating your policy files; the plugin mechanism offers a way to keep extended lookup functionality separate, and to make it shareable and reusable.

Up next…

We’re still not done – Jerakia offers numerous extension points. In my next post we will look at output filters, which parse the data returned by the backend data source. We will look first at what I consider the most useful of output filters, encryption, which uses hiera-eyaml to decrypt data strings in your returned data no matter what datasource is used, and we will look at how easy it is to write your own output filters. After that we will look at extending Jerakia to add further data sources, so stay tuned!


I’m joining Puppet Labs

Back in September last year I flew over to the US for PuppetConf 2011, and more recently took a trip to Edinburgh for PuppetCamp. Both times when I returned my wife asked me how it went, and both times my answer was simple; “I’ve gotta work for these guys!” I said. So after recently returning from a short excursion to Portland, Oregon I am very thrilled and honoured to announce that I’ve accepted a full time position with Puppet Labs.

I’ve been an IT contractor for many years now, and started working with Puppet in 2008. Since then I’ve worked more and more with Puppet at numerous companies including most recently at the BBC. I’ve also become increasingly more involved in the Puppet user community at large over the past 4 years. I have a real passion for working with the product, I love the Puppet community and now I’m really looking forward to joining the company that made it all happen and being a part of all the awesome things to come.

I’ll be joining Puppet Labs later this month as a Professional Services Engineer and look forward to maintaining and building upon the many relationships I have with various Puppet users as well as engaging with a wider section of the user community in my new professional capacity too. I would like to thank the various people at Puppet Labs involved in my interviews for the opportunity, with special thanks extended to Aimee Fahey (@PuppetRecruiter) for all her assistance during the hiring process.


Designing Puppet – Roles and Profiles.

Update, Feb 15th.

Since writing this post some of the concepts have become quite popular and have generated a lot of comments and questions in the community. I recently gave a talk at Puppet Camp Stockholm on this subject, and hopefully I explained it a bit better there than I did below :-). The slides are available here and a YouTube video will be uploaded shortly.

Introduction

So you’ve installed Puppet, downloaded some forge modules, and probably written a few yourself too. Now what? You start applying those modules to your nodes and you’re well on your way to the super-awesomeness of automated deployments. Fast forward a year or so: your infrastructure has grown considerably in size, your organisation’s business requirements have become diverse and complex, and your architects have designed technical solutions to solve business problems with little regard for how they might actually be implemented. They look great in the diagrams, but you’ve got to fit them into Puppet. From personal experience, this often leads to a spell of fighting with square pegs and round holes, and the if statement starts becoming your go-to guy because you just can’t do it any other way. You’re probably now thinking it’s time to tear down what you’ve got and re-factor – time to think about higher level design models to ease the pain.

There is a lot of very useful guidance in the community surrounding Puppet design patterns for modules, managing configurable data and class structure, but I still see people struggling with tying all the components of their Puppet manifests together. This seems to me to be an issue with a lack of higher level code base design. This post tries to explain one such design model, which I refer to as “roles/profiles”, that has worked quite well for me in solving some of the more common issues encountered when your infrastructure grows in size and complexity, and as such, the requirements of good code base design become paramount.

The design model laid out here is by no means my suggestion on how people should design Puppet; it’s an example of a model that I’ve used with success before. I’ve seen many varied designs, some good and some bad, and this is just one of them – I’m very interested in hearing other design models too. The point of this post is to demonstrate the benefits of adding an abstraction layer in front of your modules.

What are we trying to solve

I’ve spent a lot of time trying to come up with what I see as the most common design flaws in Puppet code bases. One source of problems is that users spend a lot of time designing great modules, then include those modules directly on the node. This may work, but when dealing with large and complex infrastructures it becomes cumbersome and you end up with a lot of node level logic in your manifests.

Consider a network consisting of multiple different server types. They will all share some common configuration, some subsets of servers will also share configuration, while other configuration will be applicable only to one server type. In this very simple example we have three server types: a development webserver (www1) that requires a local MySQL instance and PHP logging set to debug, a live webserver (www2) that doesn’t use a local MySQL, requires memcache and has standard PHP logging, and a mail server (smtp1). If you have a flat node/module relationship with no level of abstraction then your nodes file starts to look like this:
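A sketch of such a flat nodes file – the module names are illustrative, but follow the components described above:

    node 'www1' {
      include networking
      include users
      include apache
      include tomcat
      include jdk
      include mysql
      class { 'php':
        loglevel => 'debug',
      }
    }

    node 'www2' {
      include networking
      include users
      include apache
      include tomcat
      include jdk
      include memcache
      include php
    }

    node 'smtp1' {
      include networking
      include users
      include exim
    }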

Note: if you’re already thinking about ENCs, this will be covered later.

As you can see, the networking and users modules are universal across all our boxes; Apache, Tomcat and the JDK are used on all webservers; some webservers have MySQL; and the PHP logging options vary depending on what type of webserver it is.

At this point most people try to simplify their manifests by using node inheritance. In this very simple example that might be sufficient, but it’s only workable up to a point. If your environment grows to hundreds or even thousands of servers, made up of 20 or 30 different types of server, some with shared attributes and subtle differences, spread out over multiple environments, you will likely end up with an unmanageable tangled web of node inheritance. Nodes can also inherit only one other node, which will be restrictive in some edge cases.

Adding higher level abstraction

One way I have found to minimise the complexity of node definitions, and make handling nuances between different server types and edge case scenarios a lot easier, is to add a layer (or in this case, two layers) of separation between my nodes and the modules they end up calling. I refer to these as roles and profiles.

Consider for a moment how you would represent these servers if you weren’t writing a Puppet manifest. You wouldn’t say “www1 is a server that has mysql, tomcat, apache, PHP with debug logging, networking and users” on a high level network diagram. You would more likely say “www1 is a dev web server”, so really this is all the information I want to be applying directly to my node.

So after analysing all our nodes we’ve come up with three distinct definitions of what a server can be: a development webserver, a live webserver and a mailserver. These are your server roles; they describe what the server represents in the real world. In this design model a node can only ever have one role – it can’t be two things simultaneously. If your business now has an edge case for QA webservers that are the same as live servers, but incorporate some extra software for performance testing, then you’ve just defined another role, a QA webserver.

Now we look at what a role should contain. If you were describing the role “Development webserver” you would likely say “A development webserver has a Tomcat application stack, a webserver and a local database server”. At this level we start defining profiles.

Unlike roles, which are named in a more human representation of the server function, a profile incorporates individual components to represent a logical technology stack. In the above example, the profile “Tomcat application stack” is made up of the Tomcat and JDK components, whereas the webserver profile is made up of the httpd, memcache and php components. In Puppet, these lower level components are represented by your modules.

 

[Diagram: nodes, roles, profiles and modules]

Now our node definitions look a lot simpler and are representative of their real world roles…
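Something like this (the role class names are illustrative):

    node 'www1' {
      include role::www::dev
    }

    node 'www2' {
      include role::www::live
    }

    node 'smtp1' {
      include role::mailserver
    }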

Roles are simply collections of profiles that provide a sensible mapping between human logic and technology logic. In this scenario our roles may look something like:
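A sketch of the role classes, using class inheritance to avoid repeating the base profile:

    class role {
      include profile::base
    }

    class role::www inherits role {
      include profile::tomcat
      include profile::webserver
    }

    class role::www::dev inherits role::www {
      include profile::database
    }

    class role::www::live inherits role::www {
      # live-specific profiles would go here
    }

    class role::mailserver inherits role {
      include profile::mailserver
    }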

Whether or not you choose to use inherited classes in the way I have done is up to you, of course; some people steer clear of inheritance completely, others over-use it. Personally I think it works well for laying out roles and profiles to minimise duplication.

The profiles included above would look something like the following
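A sketch following the stacks described earlier – the module names and parameters are illustrative:

    class profile::base {
      include networking
      include users
    }

    class profile::tomcat {
      include jdk
      include tomcat
    }

    class profile::webserver (
      $php_loglevel = 'info',
    ) {
      include apache
      include memcache
      class { 'php':
        loglevel => $php_loglevel,
      }
    }

    class profile::database {
      include mysql
    }

    class profile::mailserver {
      include exim
    }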

In summary the “rules” surrounding my design can be simplified as;

  • A node includes one role, and one only.
  • A role includes one or more profiles to define the type of server
  • A profile includes and manages modules to define a logical technical stack
  • Modules manage resources
  • Modules should only be responsible for managing aspects of the component they are written for

Let’s just clarify what we mean by “modules”

I’ve talked about profiles and roles as if they are some special case, and modules as something else. In reality, all of these classes can be, and should be, modularised. I make a logical distinction between the role and profile modules and everything else (e.g. modules that provide resources).

Other useful stuff to do with profiles.

So far I’ve demonstrated using profiles as collections of modules, but they have other uses too. As a rule of thumb, I don’t define any resources directly in roles or profiles – that is the job of my modules. However, I do realise virtual resources and occasionally do resource chaining in profiles, which can solve problems that would otherwise have meant editing modules, along with other functionality that doesn’t quite fit in the scope of an individual module. Adding some of this functionality at the modular level would reduce the re-usability and portability of your module.

Hypothetically, let’s say I have a module, called foo for originality’s sake. The foo module provides a service called foo. In my implementation I have another module called mounts that declares some mount resources, and I want all mount resources to be applied before the foo service is started, as without the filesystems mounted the foo service will fail. I’ll go even further and say that foo is a Forge module that I really don’t want to (and shouldn’t have to) edit, so where do I put this configuration? This is where having the profiles level of abstraction is handy. The foo module is coded perfectly; it’s the use case determined by my own technology stack that requires my mount points to exist before the foo service, so since my stack is defined in the profile, this is where I should specify it. E.g.:
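A sketch of how that could look in a profile, using a resource collector and the chaining arrow (the profile name is illustrative):

    class profile::fooserver {
      include mounts
      include foo

      # Make sure every mount resource is applied before the foo service starts
      Mount <| |> -> Service['foo']
    }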

It’s widely known that good modules are modules that you don’t need to edit. Quite often I see people reluctant to use Forge modules because their set up requires some peripheral configuration or dependencies not included in the module. Modules exist to manage resources directly related to what they were written for. For example, someone may choose to edit a Forge mysql module because their set up has a dependency on MMM being installed after MySQL (purely hypothetical). The mysql module is not the place to do this: mysql and mmm are separate entities and should be configured and contained within their own modules, and tying the two together is something you’ve defined in your stack, so again, this is where your profiles come in…

This approach is also potentially helpful for those using Hiera. Although Hiera and Puppet are to become much more fused in Puppet 3.0, at the moment people writing Forge modules have to choose whether to make them work with Hiera or not, and people running Hiera have to edit the modules that aren’t enabled. Take a hypothetical module from the Forge called fooserver. This module exposes a parameterized class that has an option for port; I want to source this variable from Hiera but the module doesn’t support it. I can add this functionality in the profile without needing to edit the module.
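For example, the profile can do the Hiera lookup itself and pass the result to the class (a sketch – the hiera() call and default are illustrative):

    class profile::fooserver {
      class { 'fooserver':
        port => hiera('fooserver_port', '8080'),
      }
    }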

What about using an ENC?

So you’re probably wondering why I haven’t mentioned using an ENC (External Node Classifier). The examples above don’t use any kind of ENC, but the logic behind adding a layer of separation between your nodes and your modules is still the same. You could decide to use an ENC to determine which role to include on a node, or you could build/configure an ENC to perform all the logic and return the list of components (modules) to include. I prefer using an ENC in place of node definitions to determine which role to include, and to keep the actual roles and profiles logic within Puppet. My main reason for this is that I get far greater control over things such as resource chaining, class overrides and integration with things like Hiera at the profile level, and this helps overcome some tricky edge cases and complex requirements.

Summary

None of the above is set in stone. What I hope I’ve demonstrated, though, is that adding a layer of abstraction to your Puppet code base design can have some significant benefits that will avoid pitfalls when you start dealing with extremely complex, diverse and large scale set ups. These include:

  • Reducing complexity of configuration at a node level
  • Real-world terminology of roles improves “at-a-glance” visibility of what a server does
  • Definition of logical technology stacks (profiles) gives greater flexibility for edge cases
  • Profiles provide an area to add cross-module functionality such as resource chaining
  • Modules can be granular and self-contained, and tied together in profiles, thus reducing the need to edit modules directly
  • Reduced code duplication

I use Hiera to handle all of my environment configuration data, which I won’t go into detail about in this post. So, at a high level my Puppet design can be represented as;

[Diagram: high-level Puppet design]

 

As I said previously, this is not the way to design Puppet, but an example of one such way. The purpose of this post is to explore higher level code base design for larger and more complex implementations of Puppet. I would love to hear about other design models that people have used, either successfully or not, and what problems they solved for you (or introduced :)), so please get in touch with your own examples.


Introducing hiera-mysql MySQL Backend for Hiera

Introduction

Some time ago I started looking at Hiera, a configuration datastore with pluggable back ends that also plugs seamlessly into Puppet for managing variables. When I wrote hiera-gpg a few months ago I realised how easy extending Hiera was, and the potential for really useful backends that can consolidate all your configuration options from a variety of systems and locations into one streamlined process that systems like Puppet and other tools can hook into. This, fuelled by a desire to learn more Ruby, led to hiera-mysql, a MySQL backend for Hiera.

Installing

hiera-mysql is available as a ruby gem and can be installed with:
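Straight from Rubygems:

    $ gem install hiera-mysql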

Note: this depends on the Ruby mysql gem, so you’ll need the gcc, ruby-devel and mysql-devel packages installed. Alternatively, the source can be downloaded here.

MySQL database

To demonstrate hiera-mysql, here’s a simple MySQL database with some sample data;
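A sketch of such a database – the table and column names here are illustrative, but line up with the query used in the configuration below:

    CREATE DATABASE config;
    USE config;

    CREATE TABLE configdata (
      var VARCHAR(32),
      val VARCHAR(32),
      env VARCHAR(16)
    );

    INSERT INTO configdata VALUES ('colour', 'red',  'live');
    INSERT INTO configdata VALUES ('colour', 'blue', 'dev');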

Configuring Hiera

In this example we’re going to pass the variable “env” in the scope. hiera-mysql will interpolate any scope variables referenced in the query option, and also has a special case for %{key}. Example:
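A sketch of /etc/puppet/hiera.yaml with the mysql backend configured – the option names should be checked against the hiera-mysql documentation:

    ---
    :backends:
      - mysql

    :mysql:
      :host: localhost
      :user: hiera
      :pass: secret
      :database: config
      :query: SELECT val FROM configdata WHERE var='%{key}' AND env='%{env}'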

Running Hiera

With the above example, I want to find the value of the variable colour in the scope of live
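From the command line that looks like:

    $ hiera colour env=live
    red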

If I add more rows to the database that match the criteria, and use Hiera’s array search by passing -a, I can make Hiera return all the rows.
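For instance, after inserting a second matching row:

    $ hiera -a colour env=live
    ["red", "green"]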

Hiera’s pluggable nature means that you can use this back end alongside other back ends such as YAML or JSON and configure your search order accordingly.

Limitations

Currently hiera-mysql will only return the first element of a row, or an array of first elements, so you can’t do things like SELECT foo,bar FROM table. I intend to introduce this feature by implementing Hiera’s hash search in a future release. Also, the module could do with slightly better exception handling around the MySQL calls. Please let me know if there’s anything else that would improve it.

Puppet

And of course, because Hiera is completely transparent, accessing these variables from Puppet couldn’t be easier!
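For example, using the hiera() function in a manifest:

    $colour = hiera('colour')

    notify { "The colour is ${colour}": }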

References

  • Github homepage for hiera-mysql
  • Official Hiera Project Homepage
  • Hiera – A pluggable hierarchical data store
Secret variables in Puppet with Hiera and GPG

Last week I wrote an article on Puppet configuration variables and Hiera. This almost sorted out all my configuration variable requirements, bar one: what do I do with sensitive data like database passwords, hashed user passwords, etc. that I don’t want to store in my VCS repo as plaintext?

Hiera allows you to quite easily add new backends, so I came up with hiera-gpg, a backend plugin for Hiera that will GPG decrypt a YAML file on the fly. It’s quite minimal and there is some stuff I’d like to do better – for instance it currently shells out to the GPG command; hopefully someone has some code they can contribute that’ll use the GPGME gem instead to do the encryption bit.

Once you’re up and running with Hiera, you can get the hiera-gpg backend from Rubygems…
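That’s just:

    $ gem install hiera-gpg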

We run several Puppetmasters, so for each one I create a GPG key and add the public key to a public keyring that’s kept in my VCS repo. For security reasons I maintain a dev and a live keyring so only live Puppetmasters can see live data.

Currently hiera-gpg doesn’t support key passwords; I’ll probably add this feature soon, but it would mean having the password stored in /etc/puppet/hiera.yaml as plaintext anyway, so I don’t see that as adding much in the way of security.

So I have my GPG secret key set up in root’s homedir:
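Listing the secret keyring shows something along these lines (the key ID and uid are illustrative):

    # gpg --list-secret-keys
    /root/.gnupg/secring.gpg
    ------------------------
    sec   2048R/4F8B27C9 2011-10-01
    uid                  Puppetmaster live <puppet@example.com>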

Next I add my GPG public key to the keyring for live puppetmasters (in my set up, /etc/puppet/keyrings/live is a subversion checkout).
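Importing the public key into that dedicated keyring directory might look like this (the file name is illustrative):

    # gpg --homedir /etc/puppet/keyrings/live --import puppetmaster-live.pub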

Now I can create a YAML file in my hieradata folder and encrypt it for the servers in my live keyring.
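For example, encrypting a hypothetical mysql.yaml for the key in the live keyring:

    # cat mysql.yaml
    mysql_root_password: s3cr3tsquirrel

    # gpg --homedir /etc/puppet/keyrings/live -e -r puppet@example.com -o mysql.gpg mysql.yaml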

If, like me, you have more than one puppetmaster in your live keyring, multiple -r entries can be specified on the gpg command line; you should encrypt your file for all the puppetmaster keys that are allowed to decrypt it.

Now you just need to tell Hiera about the GPG backend. My previous Hiera configuration now becomes:
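A sketch of /etc/puppet/hiera.yaml with both backends – the gpg backend options should be checked against the hiera-gpg documentation:

    ---
    :backends:
      - yaml
      - gpg

    :hierarchy:
      - "%{calling_module}"

    :yaml:
      :datadir: /etc/puppet/hieradata

    :gpg:
      :datadir: /etc/puppet/hieradata
      :key_dir: /etc/puppet/keyrings/live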

Here we’re telling Hiera to behave exactly as it used to when we just had the YAML back end, and if it doesn’t find the value you are requesting from YAML it will query the GPG back end, which will pick up on your %{calling_module}.gpg.

Now I can query Hiera on the command line to find my live MySQL root password with:
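Something like:

    $ hiera mysql_root_password calling_module=mysql
    s3cr3tsquirrel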

In Puppet, I reference my variables in exactly the same way as any other variable.
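For instance:

    $mysql_root_password = hiera('mysql_root_password')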

There’s probably lots of stuff I can improve here, but I have the basics of what I need: a transparent method of data storage using GPG encryption and no sensitive data stored in my VCS repo as plain text.


Puppet, Parameterized classes vs. Definitions

Firstly, a little background on this topic. During PuppetConf this year I attended a very interesting talk by Digant C. Kasundra about the Puppet implementation at Stanford University. At one point he asked, “Who uses definitions?”, and I raised my hand. The next question was, “Who uses parameterized classes?”, and I also raised my hand. This was followed by “Who uses both?”, and I was one of a small minority of people who raised a hand again. The next question was, “Who knows what the difference between a parameterized class and a definition is?”. I took a second or two to think about this, and the talk moved on after no-one in the audience raised their hand; I didn’t get a chance to answer this in the Q&A due to running out of time, but I’ve been thinking about it since. Digant’s view was that the two are very similar, and he advocates the use of definitions, which is certainly not a bad point of view, but I don’t think you should be using one or the other – rather, use either one appropriately. Given the fact that no-one could answer Digant’s question in his talk, I felt it worth expanding on the issue in a blog post and would really welcome any feedback.

So, what are the core differences in how they are used? Well, the answer is, as Digant rightly pointed out, not a lot – but the small difference that does exist is very important, and should easily dictate which one you use in a given situation. Firstly, let’s look at what they actually are.

A class, parameterized or not, is a grouping of resources (including resources provided by definitions) that can be included in a manifest with one statement. If you make your class parameterized then you can include that class with dynamic parameters that you can override depending on how your catalog is compiled. A definition is a template that defines what is effectively no different from any other resource type, and gives you a boilerplate solution for applying a series of resources in a certain way.

So now it’s fairly clear that these two things are actually quite different, but you may be thinking, “sure, they’re different, but there is nothing that a parameterized class does that you can’t do with a definition, right?” – well, yes, that’s right, but there is plenty that a definition does that you may not want it to do – mainly, Puppet allowing it to be instantiated multiple times.

As an example of this, at the BBC we have a core application, let’s call it acmeapp for semantics’ sake. The core application takes almost 50 different parameters for configuration and can get deployed differently depending on what type of server profile we are deploying to. We only ever want acmeapp defined once, as it is responsible for some core resources such as creating the base install directories, therefore we use a parameterized class for this, and sub classes can inherit from it and override any number of variables. Either way, I’m only ever going to apply the acmeapp class once in my catalog. The end result looks something like;
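A heavily cut-down, hypothetical sketch of such a class and its declaration (the real class takes many more parameters):

    class acmeapp (
      $version     = '1.0.0',
      $install_dir = '/opt/acmeapp',
      $user        = 'acmeapp'
    ) {
      file { $install_dir:
        ensure => directory,
        owner  => $user,
      }
      # ...package, config and service resources...
    }

    class { 'acmeapp':
      version => '2.1.4',
    }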

Whilst the above is possible with a definition, the definition could be declared multiple times, which means the resources within it would end up duplicated and my catalog would fail. I would rather Puppet fails because I’ve tried to use my class twice than fail because I’ve duplicated a resource, such as the file type that creates the installation directory. It makes it very clear to anyone working with my Puppet code that this class should only be applied once.

Now, let’s look at another example. At multiple points in our manifests we need to set up MySQL grants – to write an Exec for every grant would be bad practice, as we’d end up with a lot of repeated code. This is where definitions come in. At the BBC we have a mysql class that not only provides the MySQL packages and services, but also exposes some useful functions to manage your databases and grants through a series of definitions. This is the code for the definition that controls MySQL grants…
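The real definition is more involved; a simplified, hypothetical version might look like this:

    define mysql::grant (
      $user,
      $password,
      $host     = 'localhost',
      $database = '*',
      $priv     = 'ALL'
    ) {
      exec { "mysql-grant-${name}":
        command => "/usr/bin/mysql -e \"GRANT ${priv} ON ${database}.* TO '${user}'@'${host}' IDENTIFIED BY '${password}'\"",
        unless  => "/usr/bin/mysql -e \"SHOW GRANTS FOR '${user}'@'${host}'\"",
        require => Service['mysqld'],
      }
    }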

As you can see, this is very different from our usage of a parameterized class: here we’ve templated a significant amount of functionality into one easy-to-use defined resource type. We can re-use this functionality as many times as we need. In the above example we set up two grants for the acmeapp application, one for a VIP IP address and one for an application IP range, by specifying something like:
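For instance (the IP addresses and credentials are illustrative):

    mysql::grant { 'acmeapp_vip':
      user     => 'acmeapp',
      password => 'supersecret',
      host     => '192.168.0.10',
      database => 'acmeapp',
    }

    mysql::grant { 'acmeapp_range':
      user     => 'acmeapp',
      password => 'supersecret',
      host     => '192.168.1.%',
      database => 'acmeapp',
    }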

Hopefully this gives you a good idea of the subtle differences between parameterized classes and definitions, but also that they are both very independent features of Puppet that have their own uses.


Part 1: Installing puppet 2.6.1 on CentOS with YUM/RPM

Installing Puppetmaster 2.6.1

Assuming, like me, the thought of letting rubygems vomit all over your filesystem is not a pleasant one, then how to get the latest Puppet 2.6.1 installed on CentOS 5.5 with yum isn’t very clear. Things may differ on other people’s systems, but the below worked for me.

Set up yum repositories.

Do this on both the client and the server.
Add the following files and save them to /etc/yum.repos.d/

puppet.repo
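A sketch of what this file could contain – the baseurl should be verified against the Puppet Labs yum repository layout for your release:

    [puppetlabs]
    name=Puppet Labs Packages
    baseurl=http://yum.puppetlabs.com/el/5/products/$basearch/
    enabled=1
    gpgcheck=0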


epel.repo
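Similarly, a minimal EPEL repository definition for CentOS 5 might be:

    [epel]
    name=Extra Packages for Enterprise Linux 5
    baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch/
    enabled=1
    gpgcheck=0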


ruby.repo
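This one just needs to point at a repository providing ruby 1.8.6 or later for CentOS 5; the baseurl below is a placeholder to replace with whichever repository you use:

    [ruby]
    name=Updated ruby packages for CentOS 5
    baseurl=http://REPLACE-WITH-YOUR-RUBY-REPO/el/5/$basearch/
    enabled=1
    gpgcheck=0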


Note that we include the ruby and puppetlabs repos as the next steps in this tutorial will be to configure Puppet and install puppet-dashboard. We want to upgrade to ruby 1.8.6 in order to run puppet-dashboard, and doing this now will save you some pain down the line.

Upgrade Ruby to 1.8.6

Do this on both the client and the server.
As mentioned above, use the ruby repo to upgrade.
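Roughly:

    # yum update ruby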

Install Puppet Server

On your puppetmaster server:
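The server package pulls in puppet itself as a dependency:

    # yum install puppet-server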


On your puppet client:
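And on the client, just the agent package:

    # yum install puppet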

That’s it. In parts 2 and 3 we will configure our client and server and install Dashboard.

Part 2: Configuring Puppetmaster and Puppetd
