Introducing hiera-mysql MySQL Backend for Hiera

Introduction

Some time ago I started looking at Hiera, a configuration datastore with pluggable backends that also plugs seamlessly into Puppet for managing variables. When I wrote hiera-gpg a few months ago I realised how easy extending Hiera is, and saw the potential for really useful backends that consolidate all your configuration options from a variety of systems and locations into one streamlined process that systems like Puppet and other tools can hook into. This, fuelled by a desire to learn more Ruby, led to hiera-mysql, a MySQL backend for Hiera.

Installing

hiera-mysql is available as a ruby gem and can be installed with:
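
A quick sketch of the install:

    gem install hiera-mysql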

Note: this depends on the Ruby mysql gem, so you’ll need the gcc, ruby-devel and mysql-devel packages installed. Alternatively, the source can be downloaded here

MySQL database

To demonstrate hiera-mysql, here’s a simple MySQL database with some sample data:
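
Something along these lines – the database, table and column names are purely illustrative, but they match the query example further down:

    CREATE DATABASE config;
    USE config;

    CREATE TABLE configdata (
      var         VARCHAR(128),
      val         VARCHAR(128),
      environment VARCHAR(32)
    );

    INSERT INTO configdata VALUES ('colour', 'red',  'live');
    INSERT INTO configdata VALUES ('colour', 'blue', 'dev');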

Configuring Hiera

In this example we’re going to pass the variable “env” in the scope. hiera-mysql will interpret any scope variables defined in the query option, and also has a special case for %{key}. Example:
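
A hiera.yaml along these lines wires the backend up (the option names here are sketched from memory of the hiera-mysql README, so double-check them against the version you install):

    ---
    :backends:
      - mysql

    :mysql:
      :host:     localhost
      :user:     root
      :pass:     examplepassword
      :database: config
      :query:    SELECT val FROM configdata WHERE var='%{key}' AND environment='%{env}'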

Running Hiera

With the above example, I want to find the value of the variable colour in the scope of live
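
Something like:

    $ hiera colour env=live
    red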

If I add more rows to the database that match the criteria, and use Hiera’s array search function by passing -a I can make Hiera return all the rows
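
For example, after inserting a second matching row:

    mysql> INSERT INTO configdata VALUES ('colour', 'green', 'live');

    $ hiera -a colour env=live
    ["red", "green"]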

Hiera’s pluggable nature means that you can use this backend alongside other backends such as YAML or JSON, and configure your search order accordingly.

Limitations

Currently hiera-mysql will only return the first element of a row, or an array of first elements, so you can’t do things like SELECT foo,bar FROM table. I intend to introduce this feature by implementing Hiera’s hash search in a future release. Also, the module could do with slightly better exception handling around the mysql stuff. Please let me know if there’s anything else that would improve it.

Puppet

And of course, because Hiera is completely transparent, accessing these variables from Puppet couldn’t be easier!
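
For instance, a hypothetical class that looks up the value from above:

    class myapp {
      $colour = hiera('colour')

      notify { "colour is ${colour}": }
    }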

References

  • Github homepage for hiera-mysql
  • Official Hiera Project Homepage
  • Hiera – A pluggable hierarchical data store

    Puppet configuration variables and Hiera

    Managing configuration variables within Puppet has always given me a bit of a headache, and I’ve never really found a way to do it that I’m altogether happy with, particularly when dealing with the deployment of complex applications that require a lot, sometimes hundreds, of different configuration variables and multiple environments. I started thinking a while ago that Puppet wasn’t the best place to be keeping these variables in the first place. For starters, this is really valuable data we’re talking about: lots of other applications may benefit from having access to the way your software is configured, so why should Puppet retain all of this information exclusively for itself? The original extlookup() function in Puppet provides some decoupling of configuration data from Puppet manifests, but I found it a bit limiting, and having to maintain a bunch of CSV files is not very elegant. I’ve been interested in R.I.Pienaar’s Hiera for a while and thought I’d give it a proper spin and see if it meets my needs.

    Hiera itself is a standalone configuration data store that supports multiple backends, including YAML, JSON and Puppet itself, and adding more backends to it is a fairly non-challenging task for anyone competent with Ruby. Thanks to hiera-puppet, it plugs nicely into Puppet.

    Configuring a basic Hiera setup

    After installing hiera (gem install hiera), I want to test it by setting up a pretty basic configuration store that will override my configuration variables based on environment settings of dev, stage or live. Let’s take a variable called $webname. I want to set it correctly in each of my three environments, or default it to localhost.

    Firstly, I create four YAML files in /etc/puppet/hieradata
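
    Along these lines (the values are illustrative):

        # /etc/puppet/hieradata/dev.yaml
        ---
        webname: dev.myapp.mycorp.com

        # /etc/puppet/hieradata/stage.yaml
        ---
        webname: stage.myapp.mycorp.com

        # /etc/puppet/hieradata/live.yaml
        ---
        webname: www.myapp.mycorp.com

        # /etc/puppet/hieradata/common.yaml
        ---
        webname: localhost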

    Now I have a YAML file representative of each environment, I create a simple config in /etc/puppet/hiera.yaml that tells Hiera to search my environment YAML file followed by common.yaml.
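
    Something like this (sketched from memory of the Hiera config format of the time):

        ---
        :backends:
          - yaml

        :hierarchy:
          - "%{env}"
          - common

        :yaml:
          :datadir: /etc/puppet/hieradata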

    Now using hiera from the command line, I can look up the default value of $webname with the following command
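
    For example:

        $ hiera webname
        localhost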

    But now if I want to know the value for the live and dev environments I can pass an env flag to Hiera
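
    Along the lines of:

        $ hiera webname env=live
        www.myapp.mycorp.com

        $ hiera webname env=dev
        dev.myapp.mycorp.com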

    Accessing this from Puppet

    I can now access these variables directly from my Puppet modules using the hiera() function provided by hiera-puppet. In this example, I already have a fact called ${::env} that is set to dev, stage or live (in my particular set up we use the puppet environment variable for other things)
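
    A sketch of what that looks like, with the class body trimmed to the lookup itself:

        class myapplication {
          $webname = hiera('webname')

          # ...use $webname in templates and resources as usual...
        }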

    Adding more scoping

    OK, that’s a fairly simple set up, but it demonstrates how easy it is to get up and running with Hiera. The requirements I had were a little more complex. Firstly, our hierarchy is broken down into both environment (live, stage, dev, etc.) and location. I have multiple environments in multiple locations; a particular location will be either a live, stage or dev environment. So some variables I want to override at the environment level, and some at the more granular location level.

    Secondly, I don’t like the idea of asking Hiera for $webname. That doesn’t tell me anything; what is $webname, and what uses it? Consider a more generic variable called $port – that’s going to be confusing. So I started thinking about ways of grouping and scoping my variables. The way I solved this was to introduce a module parameter, as well as environment and location, in Hiera, and to place the variables for a particular module in its own YAML file, using a filesystem layout to determine the hierarchy.

    My new hieradata file system looks a little like this
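
    Roughly (only the live branch shown):

        /etc/puppet/hieradata/
            common.yaml
            live/
                myapplication.yaml
                dublin/
                    myapplication.yaml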

    Now for each of my modules, I create a YAML file at the folder level that I want to override, containing the values for my module. Taking the previous example, let’s say that I want $webname to be www.myapp.mycorp.com for all live environments, except for Dublin, which I want to be a special case of www.myapp.mycorp.ie. To accomplish this I create the following two files:
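
    Something like:

        # /etc/puppet/hieradata/live/myapplication.yaml
        ---
        webname: www.myapp.mycorp.com

        # /etc/puppet/hieradata/live/dublin/myapplication.yaml
        ---
        webname: www.myapp.mycorp.ie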

    Hiera-puppet will pass the value of $calling_module from Puppet to Hiera, and we can use this in our hierarchy in hiera.yaml. NOTE: Currently you will need this patch to hiera-puppet in order for this to work!

    So our new /etc/puppet/hiera.yaml file looks like:
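
    A sketch, reusing the YAML backend settings from earlier:

        ---
        :backends:
          - yaml

        :hierarchy:
          - "%{env}/%{location}/%{calling_module}"
          - "%{env}/%{calling_module}"
          - common

        :yaml:
          :datadir: /etc/puppet/hieradata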

    On the command line, we can now see that environment, location and calling module are now used when looking up a configuration variable
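
    For example:

        $ hiera webname env=live calling_module=myapplication
        www.myapp.mycorp.com

        $ hiera webname env=live location=dublin calling_module=myapplication
        www.myapp.mycorp.ie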

    In Puppet, I have ${::env} and ${::location} already set as facts, and since $calling_module will get automatically passed to Hiera from Puppet, my myapplication class looks no different…
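
    That is, still just:

        class myapplication {
          $webname = hiera('webname')
        }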

    but knowing the module name means I can easily find where this value is set, and I can easily see what configuration a module requires by examining its YAML files under /etc/puppet/hieradata

    Conclusion

    In conclusion, I’m now convinced that moving configuration variable data out of Puppet is a very good thing. Now other systems can easily query this valuable information either on the command line or directly with Ruby. By forcing the use of $calling_module I’ve introduced a sort of pseudo scoping for my variables, so, for example… “port” now becomes “port calling_module=apache” and gives me a lot more meaning.

    Many thanks to R.I.Pienaar for help in setting this up, as well as providing the patch to scope.rb that enabled me to get this working.


    Configuring Tomcat properties files with Augeas and Puppet.

    Introduction

    This post covers quite a few different things. It is taken from a real-world example of something I was asked to do recently, which not only involved some cool Puppetmastery using exported resources, storeconfigs and custom definitions, but also forced me to start learning Augeas, which I’d been meaning to get around to. So, here’s the story.

    Some background.

    To put this into context, I recently had a requirement to add some Puppet configuration to manage some Tomcat properties files. On further investigation this turned out to be a little more complicated as the requirements weren’t as simple as chucking some templated configuration files out with a few variables replaced.

    The requirement was for a group of Tomcat servers to contain one properties file with a chunk of configuration in for each server in the group. So for example, each node in the group needed to have something like
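
    ...a properties file listing every node in the group, something like this (the hostnames and property names are purely illustrative):

        group.node1.hostname=tomcat01.example.com
        group.node1.port=8080
        group.node2.hostname=tomcat02.example.com
        group.node2.port=8080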

    There could be many, many servers in a given group and I don’t want to be maintaining a massive list of variables as that will just get messy, so the answer here is to use exported virtual resources and Puppet’s storeconfigs feature. My thinking here is that now I can configure each node in the group with an exported resource that looks something like
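
    For instance (the parameter names are my own, not gospel):

        @@application::instance { $::hostname:
          apphost => $::fqdn,
          appport => '8080',
        }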

    … and then simply collect them all with something like …
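
    That is:

        Application::Instance <<| |>>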

    All good so far. Then I started thinking about what application::instance() would look like. The requirement was for one properties file with all the nodes’ configuration in it, so I can’t spit out several files from a template; that would be too easy. I looked around at various solutions for building files from fragments but, to be honest, nothing really appealed to me as elegant or clean, so I started investigating solutions for line-by-line editing. Traditionally this has been done by wrapping a series of piped echo commands and greps in a couple of execs – various examples of this exist, often called “line()” – but why do that when we have Augeas, a ready-made tool for managing elements within a configuration file in a structured and controlled fashion.

    So, I thought this would be worth experimenting with!

    Creating an Augeas lens

    Augeas uses the term lenses to describe a class that defines how it interacts with different types of files. There are lenses for all sorts of configuration files, such as hosts, httpd.conf, yum.repos.d, etc. You name it, there is probably a lens for it. At the time of writing, however, Augeas is not bundled with a lens that can process Tomcat properties files, although I’ve been told this is coming soon. Thinking that a Tomcat properties file is pretty uncomplicated, I decided that instead of searching for someone else’s pre-written version of a Tomcat lens I would write my own, to gain a better understanding of how Augeas works.

    My first thought when reading through the Augeas documentation was, “Oh my god, what have I got myself into?”. I soon discovered that this was no simple little tool, and the configuration for lenses seemed immensely complicated. However, then I found this tutorial and ran through it. Slowly it started to make a bit more sense, and I realised that this is actually one powerful application.

    Creating a test file

    My first job was to create a test file; it’s useful to do this first as you’ll want to run augparse periodically to test your lens. The main function of the test file is to parse your example configuration into an Augeas tree, and then vice versa, and compare the outcomes.

    My Tomcat test file looks like this
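
    A sketch of the kind of thing I mean; the exact tree output depends on the lens built below, so treat this as illustrative:

        module Test_tomcat =

          let conf = "# A comment\nmyapp.hostname = tomcat01.example.com\n   indented.property=value\n"

          (* the get direction: raw text in, tree out *)
          test Tomcat.lns get conf =
            { "#comment" = "A comment" }
            { "myapp.hostname" = "tomcat01.example.com" }
            { "indented.property" = "value" }

          (* the put direction: set a node, check the rendered text *)
          test Tomcat.lns put "key.one=value1\n" after
            set "key.two" "value2"
          = "key.one=value1\nkey.two=value2\n"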

    Here I’m testing a variety of scenarios, including indentation, spaces around “=”, and comments. The first part tests that parsing my configuration file with my lens produces the expected tree; this is the get part. The second is the put part: it tests that setting a couple of variables in the Augeas tree and parsing it back out as raw configuration produces output in the expected manner. The augparse tool will use the lens I create to compare both of these outcomes and ensure my lens is doing what it should.

    Creating the lens

    At a very basic level, a lens describes a file. So before I started writing the lens for Tomcat properties file I thought about describing my file in plain English, and I came up with

  • Any one line is either a comment, a property or a blank line
  • A comment is a series of spaces/tabs followed by a hash, followed by text
  • A property name consists of alphanumerical values separated by periods
  • A value is a string of text
  • An equals sign separates the property from the value
  • Any line can be indented with tabs and spaces
  • White spaces or tabs can surround the separator

    That doesn’t seem so complicated, so then I thought about how to represent these in Augeas. Firstly, I thought about the primitive types I can use to build up a comment, a key/value pair and a blank line, the 3 functions of any one line. These break down to

  • Blank line
  • End of line
  • Separator
  • Property name part
  • Value part
  • Indentation

    Using these building blocks, I can define a comment, a key/value pair and a blank line, for example, a standard key/value pair line would be…

    (spaces, tabs or null)(alphanumeric characters and periods)(spaces, tabs or null)(=)(spaces, tabs or null)(characters that are not end-of-line)(end-of-line)

    So, when I write regular expressions to define the above, the Augeas configuration looks something like this.
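
    A sketch of those primitives. The regexes are written to keep the lens unambiguous – in particular, a value may not end in whitespace, so trailing spaces belong to the end-of-line:

        let eol      = del /[ \t]*\n/ "\n"
        let indent   = del /[ \t]*/ ""
        let sep      = del /[ \t]*=[ \t]*/ "="
        let key_re   = /[A-Za-z0-9]+(\.[A-Za-z0-9]+)*/
        let value_re = /[^ \t\n](.*[^ \t\n])?/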

    Now I’ve defined my building blocks I can tell Augeas how these apply to the basic 3 elements of my configuration file; comments, blank lines and properties.
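
    Roughly:

        let comment  = [ indent . label "#comment"
                       . del /#[ \t]*/ "# "
                       . store /([^ \t\n].*[^ \t\n]|[^ \t\n])?/ . eol ]
        let empty    = [ del /[ \t]*\n/ "\n" ]
        let property = [ indent . key key_re . sep . store value_re . eol ]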

    Finally I set up my lens and filter by telling Augeas that my lens consists of my 3 basic elements, and define which files I wish to be parsed using my lens
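
    The path in the filter is wherever your properties files live; mine here is just an example:

        let lns    = ( comment | empty | property )*

        let filter = incl "/opt/tomcat/conf/*.properties"
        let xfm    = transform lns filter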

    So my final lens file, which I install into /usr/share/augeas/lenses looks like this
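
    Assembled from the snippets above, it amounts to something like this:

        module Tomcat =
          autoload xfm

          let eol      = del /[ \t]*\n/ "\n"
          let indent   = del /[ \t]*/ ""
          let sep      = del /[ \t]*=[ \t]*/ "="
          let key_re   = /[A-Za-z0-9]+(\.[A-Za-z0-9]+)*/
          let value_re = /[^ \t\n](.*[^ \t\n])?/

          let comment  = [ indent . label "#comment"
                         . del /#[ \t]*/ "# "
                         . store /([^ \t\n].*[^ \t\n]|[^ \t\n])?/ . eol ]
          let empty    = [ del /[ \t]*\n/ "\n" ]
          let property = [ indent . key key_re . sep . store value_re . eol ]

          let lns      = ( comment | empty | property )*

          let filter   = incl "/opt/tomcat/conf/*.properties"
          let xfm      = transform lns filter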

    Testing my lens

    I use the augparse command to run my test file I created earlier against my new lens to make sure there are no parsing errors.
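
    Something like this; augparse is silent when all the tests pass:

        augparse -I /usr/share/augeas/lenses tomcat_test.aug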

    As a final test, I create a test.properties file with the following example configuration
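
    For example:

        # application settings
        instance.hostname=tomcat01.example.com
        instance.port=8080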

    Now I can use augtool to view and change one of my configuration variables.
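
    A session along these lines; the /files path mirrors the path matched by the lens filter:

        $ augtool
        augtool> print /files/opt/tomcat/conf/test.properties
        /files/opt/tomcat/conf/test.properties/#comment = "application settings"
        /files/opt/tomcat/conf/test.properties/instance.hostname = "tomcat01.example.com"
        /files/opt/tomcat/conf/test.properties/instance.port = "8080"
        augtool> set /files/opt/tomcat/conf/test.properties/instance.port 8443
        augtool> save
        Saved 1 file(s)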

    Pulling this into Puppet

    Now I have a working lens, I can manipulate my configuration file using the augeas resource type provided by Puppet. First off, I want to build my tomcat property type using Augeas.
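
    A sketch of the definition (the resource and parameter names are my own):

        define tomcat::property ( $value, $file ) {
          augeas { "tomcat-property-${name}":
            lens    => "Tomcat.lns",
            incl    => $file,
            changes => "set ${name} '${value}'",
          }
        }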

    Now I have a custom definition of tomcat::property that I can implement in my application::instance type. My instance definition needs to be able to set several tomcat variables in the application.properties file, so now I can do the following:
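
    Something like this; the collector at the end realises every exported instance on the node:

        define application::instance ( $apphost, $appport ) {
          tomcat::property { "${name}.hostname":
            file  => "/opt/tomcat/conf/application.properties",
            value => $apphost,
          }
          tomcat::property { "${name}.port":
            file  => "/opt/tomcat/conf/application.properties",
            value => $appport,
          }
        }

        Application::Instance <<| |>>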

    Here I’m defining my application::instance type, and after that I include a resource collector to apply all exported definitions of my instance type. Finally, I just need to actually define the instances that I want to configure. Remember, each host in the group needs to know about every other host, so for each node I can now create something like the following and have it export its resource for all other nodes to collect.
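
    For example:

        @@application::instance { $::hostname:
          apphost => $::fqdn,
          appport => '8080',
        }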

    Now, with a combination of exported virtual resources, custom definitions and augeas I have the solution.

    If you want to use my Tomcat Augeas module for yourself, you can download it here

    Preventing users from changing their password with PAM

    Blocking AD users from using passwd

    I had to design a system recently for a client which has a mixture of local users and remote users that are authenticated using LDAP against Active Directory (actually, with Quest Authentication Services running in between). One of the requirements was that AD users should not be able to change their password using the passwd command, as the client had an external management system for users that fed into AD (and other things). I needed to allow normal users to operate normally, but fail AD users with some polite message telling them what was going on, rather than just a random error that would cause them to call support every time. Trawling the web didn’t seem to reveal much apart from doing nasty things to /bin/passwd like chattr’ing it, or moving it to /sbin… since we’re not in the 90s anymore, I was sure there was a way to do this with PAM.

    My PAM knowledge is limited to say the least, and maybe my google-fu isn’t up to much because I struggled to find anything that did exactly what I wanted.

    Eventually, after some tweaking, I came up with the following which seems to work on CentOS…

    Edit /etc/pam.d/passwd and change it to read:
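
    Something along these lines. This is a sketch rather than a drop-in file: it assumes your AD users all have UIDs of 10000 or above (adjust the threshold to match your QAS configuration), and it uses pam_echo to print a message kept in a separate file:

        #%PAM-1.0
        auth       include      system-auth
        account    include      system-auth
        # local users (uid < 10000 here) skip the next two modules and
        # fall through to the normal password stack
        password   [success=2 default=ignore] pam_succeed_if.so uid < 10000 quiet
        # AD users get a polite explanation, then a hard failure
        password   optional     pam_echo.so file=/etc/security/passwd_deny.txt
        password   requisite    pam_deny.so
        password   include      system-auth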

    This should work normally for root and local users but give a warning and fail to AD users.


    Puppet – working with define based virtuals

    Define Based Virtuals

    Define-based virtuals are quite a powerful feature of Puppet, but many people either don’t understand them or don’t know how to apply them effectively to their manifests. Due to Puppet’s normalized configuration structure, you can’t configure the same resource in two different places. For example, if the user mysql is configured in your database class, and you then realise that your webserver class also needs the mysql user, then, short of including the whole database class, you cannot just configure the user there too. The answer to this lies in resource virtualization.

    Resources that are virtualized are effectively non-real; they won’t be included in any configuration until they are explicitly realized. The good news is that you can realize your resource multiple times in your manifests, so by defining the resource as virtual in a separate class, both your webserver class and database class can realize it.

    Version 0.23 of Puppet took this one step further, with the ability to virtualize your definitions rather than just native resource types. Now we can define a collection of resources, virtualize them, and realize them at whim in our manifests. I’ve seen a few Puppet installations, and this method is far underused and underappreciated.

    To demonstrate, this is an example of how to manage your users with virtual define-based resources.

    User management with defines and virtuals.

    I’ve seen a few ways of managing users within a Puppet estate. The following is a quick guide to get you started managing users, passwords and ssh keys for your users. My favourite approach is to use define-based virtuals and split it out into several sections

    • Back end module to specify the definition for a user
    • A class with a list of virtualized users
    • Realizing your users ad-hoc in your classes

    The advantage of this approach is that you keep a central place for users whilst still being able to easily pick and choose which users get deployed with which classes. Let’s break it down into our components:

    Users module

    As we want to manage users as well as ssh keys it makes sense to wrap this up into a definition, which we’ll call localuser. We create a module called users::virtual where we define the following class.
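
    A sketch of the module; the parameter names are my own, and the sshkey conditional is explained below:

        class users::virtual {

          define localuser ( $uid, $gid, $pass, $sshkey = "" ) {

            user { $name:
              ensure     => present,
              uid        => $uid,
              gid        => $gid,
              password   => $pass,
              shell      => "/bin/bash",
              home       => "/home/${name}",
              managehome => true,
            }

            if $sshkey != "" {
              ssh_authorized_key { $name:
                ensure => present,
                user   => $name,
                type   => "ssh-rsa",
                key    => $sshkey,
              }
            }
          }
        }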

    This class sets up our localuser definition in users::virtual, other resources can also be added here that will affect every user.

    Creating our users class.

    Now you can create a users.pp class that imports your module and defines the specific users you want to configure. We configure these as virtuals to give us the option of realizing them whenever we want.
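
    Something like this (the hashes and key are obviously fake):

        class users {
          include users::virtual

          @users::virtual::localuser { "bobsmith":
            uid    => "10001",
            gid    => "users",
            pass   => '$1$fakehash$notarealpasswordhash/',
            sshkey => "AAAAB3NzaC1yc2EAAAA...",
          }

          @users::virtual::localuser { "janedoe":
            uid  => "10002",
            gid  => "users",
            pass => '$1$fakehash$alsonotarealhash....',
          }
        }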

    The password should be the fully encrypted string as it would appear in the shadow file. Don’t forget to use single quotes if your password string contains $, to prevent it from being interpreted as a variable.

    sshkey is optional (note the conditional in our users module) and, if defined, will set up the key in ~/.ssh/authorized_keys. Now you have a central list of users as virtuals that call our custom define; they can be easily included, or realized, at whim in your other classes.

    Realize the user

    Within your manifests for your server profile, whenever you require the user bobsmith to be part of the configuration, you simply realize it like this
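
    For example:

        realize Users::Virtual::Localuser["bobsmith"]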

    The above example realizes the user by the name of “bobsmith”. Another very useful way to realize users is by using Puppet’s collection syntax. For instance, to realize all the users in the users group, you would simply replace the realize statement with:
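
    That is:

        Users::Virtual::Localuser <| gid == "users" |>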

    … and all your virtual users that match the gid of users will be included.

    Conclusion

    What we’ve achieved by using virtual define-based resources here is, primarily,

    • An abstracted module for defining what a user is, rather than who
    • A central place to manage all of our virtual users
    • A very flexible and easy way to pull users into our classes based on name or attribute

    For more information on working with Puppet virtuals, check out the documentation here


    Software review: Rundeck

    I’ve been looking around lately for something that can handle command-and-control automation across an estate of Linux servers. My requirements are to be able to run jobs and tasks remotely, either ad-hoc or at specified times and capture the results in a meaningful and useful way. I’ve come across some of the “Enterprise” solutions touted by a few proprietary vendors (cough, Control M, cough) and not found anything that really impresses me, or should I say, doesn’t scare me… until now, that is.

    My own requirements are fairly specific, I want a GUI and a CLI, I want to be able to run it without client agents, remote execution with SSH is a big bonus and a flat-file configuration structure will make integrating it into other systems fairly painless.

    An open source effort to address these requirements has recently been unveiled by DTO Solutions, named RunDeck. At first glance it looked like a good fit for my purposes, and I decided to give it a test run after looking at the overall architecture design. Here’s a breakdown of my experiences with it so far.

    Installation

    Installation couldn’t be easier. For test purposes I opted to simply download the self-contained .jar file and launch it with java -jar launcher.jar, and had a working default system in no time. DTO also provide an RPM for use with yum, and the source is available on GitHub.

    Configuration

    RunDeck uses the concept of a Resource Model to define a collection of hosts, or nodes, on your network. Minimally, you can add a resource model by defining a list of hosts in a simple XML document. Each node within your resource model can be assigned different custom tags and basic OS information (system type, OS, kernel version…etc). Your nodes should be configured with passwordless ssh keys to allow the RunDeck user to execute your commands. RunDeck also supports external sources for your resource model meaning you can import nodes automatically from other systems such as Chef or Puppet. In fact, James Turnbull recently wrote puppet-rundeck, for integrating RunDeck with Puppet, read more in his blog post
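
    A minimal resource model might look something like this (attributes trimmed and hostnames invented):

        <project>
          <node name="web01" hostname="web01.example.com"
                username="rundeck" tags="web,live"
                osFamily="unix" osName="Linux"
                description="front end web server"/>
        </project>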

    The GUI

    RunDeck has an HTML GUI that is clean, uncomplicated and easy to navigate. The three main sections are broken down into History, Jobs and Run, and it doesn’t take long to get comfortable moving around. A huge plus for me is that it seems everything I can do in the GUI can also be done from the command line, extra brownie points for that!

    Node overview and Ad-hoc job run with RunDeck

    RunDeck in action

    Configuring jobs in RunDeck is clear and self-explanatory; however, the options and abilities are surprisingly versatile. Your job can be made up of any of the following:

    • Commands
    • References to other saved jobs
    • Server-side scripts to be run on the remote node
    • Inline scripts saved within your job

    The latter two are particularly useful for me: they mean I can import a variety of legacy scripts that I have no hope of getting rid of in the near future, but at least I can wrap some decent execution framework around them.

    Adding a job with RunDeck

    Once you’ve defined your job you can assign it to a group of hosts based on a very flexible filter that supports regexes against most properties defined in your resource model (host name, OS, custom tags, etc.).

    Your job can either be saved to run manually or scheduled to run automatically, and even supports crontab formatted scheduling data.

    Ad-hoc commands

    It’s worth mentioning that if the above sounds a bit much to go through when all you want to do is loop through a list of servers and execute a one-off command, you can use RunDeck’s Ad-Hoc feature. Under the Run section you are presented with a list of hosts that can be filtered on multiple criteria, with a text box at the top to issue commands. So simple, but so very useful.

    Conclusion

    I won’t list every feature in depth, as this has been very adequately documented already. The above is my first impression of RunDeck from spending less than a day test-running it. I’m left feeling that this software is slick, well documented, reliable (a few minor bugs, but overall impressive for a 1.0 release) and does what it says on the tin. It’s definitely placed high in my sysadmin arsenal.


    Part 3: Installing puppet-dashboard on CentOS / Puppet 2.6.1

    Puppet Dashboard

    Puppet Dashboard is a fairly new app with loads of future potential and is great for monitoring your Puppet estate. This is a quick guide to getting it running on Puppet 2.6.1. Be sure you have the correct yum repos and ruby versions installed; see Part 1 and Part 2 for more details.

    Install the puppet-dashboard package.
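
    With the repositories from Part 1 in place, this should just be:

        yum install puppet-dashboard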

    Create a MySQL database for puppet-dashboard

    Create a database for puppet-dashboard to use, and set up a user with all privileges to use it. This can be done on a separate host.
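
    For example (pick your own password):

        mysql> CREATE DATABASE dashboard CHARACTER SET utf8;
        mysql> GRANT ALL PRIVILEGES ON dashboard.* TO 'dashboard'@'localhost'
            ->   IDENTIFIED BY 'mypassword';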

    Configure database.yml

    Add your database parameters to the development section of /usr/share/puppet-dashboard/config/database.yml; note that host: can be omitted if you are using local sockets to connect to MySQL.
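
    Matching the database created above:

        development:
          adapter: mysql
          database: dashboard
          username: dashboard
          password: mypassword
          encoding: utf8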

    Migrate the database
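
    From the puppet-dashboard directory, something like:

        cd /usr/share/puppet-dashboard
        rake RAILS_ENV=development db:migrate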

    Copy reports module to site_ruby
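
    The source path of the report processor has moved around between Dashboard releases, so treat the paths here as illustrative:

        cp /usr/share/puppet-dashboard/ext/puppet/puppet_dashboard.rb \
           /usr/lib/ruby/site_ruby/1.8/puppet/reports/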



    I hate doing this but puppetmasterd explicitly looks for reports in puppet/reports and so far I haven’t found a clean workaround to tell it to look in /usr/share/puppet-dashboard for it. If anyone knows of a way, please email me.

    Edit your puppet.conf

    Include the following in the [master] section, changing punch.craigdunn.org to your puppet server
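
    Something like the following; depending on your Dashboard version you may use the custom report copied above, or Puppet 2.6’s built-in http report with a reporturl:

        [master]
            reports = store, puppet_dashboard
            # or, with the built-in http report:
            # reports   = store, http
            # reporturl = http://punch.craigdunn.org:3000/reports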

    Restart puppetmaster and start puppet-dashboard
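
    For a development-mode test run, something like:

        service puppetmaster restart

        cd /usr/share/puppet-dashboard
        ./script/server -e development &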

    Test web GUI

    Go to the following link in your browser (replacing the hostname with your fqdn)
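
    i.e.:

        http://punch.craigdunn.org:3000/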

    Configure the client

    Edit puppet.conf

    Make sure the following things are set in the [agent] section of puppet.conf on your client node.
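
    At minimum:

        [agent]
            report = true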

    Run puppet in noop mode on the client
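
    For example:

        puppetd --test --noop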

    Refresh browser

    If all has gone well, you should now see your reports in puppet dashboard for your client node.


    Part 2: Puppet 2.6.1, configure puppetmaster and puppetd

    Configure Puppetmaster

    For installing puppetmaster 2.6.1 on CentOS please click here for Part 1

    In Part 1 we covered installing the puppetmaster and puppetd packages on CentOS 5.5. We will now configure a very basic client/server model to serve the /etc/resolv.conf file to our client. Simple enough!

    Create your first module

    Our first module will be called networking::resolver; its job will be to push out a resolv.conf file to clients.

    Create the directory structure under /etc/puppet
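
    For example:

        mkdir -p /etc/puppet/modules/networking/manifests
        mkdir -p /etc/puppet/modules/networking/files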

    Create your resolv.conf file
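
    Place it in the module’s files directory (the nameservers are invented):

        # /etc/puppet/modules/networking/files/resolv.conf
        search craigdunn.org
        nameserver 192.168.0.1
        nameserver 192.168.0.2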

    Create your module manifest
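
    A minimal sketch:

        # /etc/puppet/modules/networking/manifests/resolver.pp
        class networking::resolver {
          file { "/etc/resolv.conf":
            ensure => present,
            owner  => "root",
            group  => "root",
            mode   => "644",
            source => "puppet:///modules/networking/resolv.conf",
          }
        }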

    Configure your site and nodes

    Create a minimal site.pp
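
    Something like:

        # /etc/puppet/manifests/site.pp
        import "templates"
        import "nodes"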


    Create a templates file
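
    Using the classic basenode pattern:

        # /etc/puppet/manifests/templates.pp
        node basenode {
          include networking::resolver
        }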

    Create your node file
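
    For example:

        # /etc/puppet/manifests/nodes.pp
        node "judy.craigdunn.org" inherits basenode { }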

    Don’t forget to replace judy.craigdunn.org with the fqdn of your client server

    Set up puppetmaster parameters

    Create default configuration

    This is a minimal puppet.conf file; a more detailed file can be produced with puppetmasterd --genconfig
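
    For example:

        # /etc/puppet/puppet.conf
        [main]
            logdir = /var/log/puppet
            rundir = /var/run/puppet
            ssldir = $vardir/ssl

        [master]
            autosign = true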

    The autosign option will automatically sign certs for new clients; this is discouraged in a production environment but useful for testing. For information on running puppetmaster without autosign, see the puppetca documentation.

    Set permissions for your fileserver.
    Note that this allows everything; you should restrict it in a production environment.
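
    In /etc/puppet/fileserver.conf:

        [files]
            path /etc/puppet/files
            allow *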

    Start puppetmaster
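
    e.g.:

        service puppetmaster start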

    The puppet client

    Configure puppetd
    On your client, edit puppet.conf and add the following in the [agent] section, remembering to change punch.craigdunn.org to the fqdn of your Puppetmaster.
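
    At minimum (listen = true lets puppetrunner contact this node later):

        [agent]
            server = punch.craigdunn.org
            report = true
            listen = true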

    Allow puppetrunner

    Create a file called namespaceauth.conf and add the following, note in a production environment this should be restricted to the fqdn of your puppet master
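
    In /etc/puppet/namespaceauth.conf:

        [puppetrunner]
            allow *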

    Start puppetd
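
    e.g.:

        service puppet start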

    View pending changes

    Use --test along with --noop to do a dry run to view the changes that puppetd will make
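
    i.e.:

        puppetd --test --noop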

    Now you can run puppetd without --noop to pull in your new resolv.conf file
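
    i.e.:

        puppetd --test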

    This is a very basic demonstration of creating a server/client pair with puppet. There is much more documentation on configuring and managing puppet here

    Next: Installing Puppet Dashboard


    Part 1: Installing puppet 2.6.1 on CentOS with YUM/RPM

    Installing Puppetmaster 2.6.1

    Assuming that, like me, you find the thought of letting rubygems vomit all over your filesystem unpleasant, it isn’t very clear how to get the latest Puppet 2.6.1 installed on CentOS 5.5 with yum. Things may differ on other people’s systems, but the below worked for me.

    Set up yum repositories.

    Do this on both the client and the server
    Add the following files and save them to /etc/yum.repos.d/

    puppet.repo
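
    Something like this (the baseurl is from memory, so check it):

        [puppetlabs]
        name=Puppet Labs packages
        baseurl=http://yum.puppetlabs.com/base/
        enabled=1
        gpgcheck=0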


    epel.repo
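
    For example:

        [epel]
        name=Extra Packages for Enterprise Linux 5
        mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
        enabled=1
        gpgcheck=0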


    ruby.repo
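
    Point this at whichever third-party repository provides ruby 1.8.6 packages for CentOS 5; the baseurl below is a placeholder:

        [ruby]
        name=Ruby 1.8.6 for EL5
        baseurl=http://repo.example.com/ruby/el5/$basearch/
        enabled=1
        gpgcheck=0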


    Note that we include ruby and puppetlabs, as the next steps in this tutorial will be to configure Puppet and install puppet-dashboard. We want to upgrade to ruby 1.8.6 in order to run puppet-dashboard; doing this now will save you some pain down the line.

    Upgrade Ruby to 1.8.6

    Do this on both the client and the server
    As mentioned above, use the ruby repo to upgrade.
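
    e.g. (the exact package names may vary with your chosen repo):

        yum update ruby ruby-libs ruby-devel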

    Install Puppet Server

    On your puppetmaster server:
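
    i.e.:

        yum install puppet-server puppet facter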


    On your puppet client
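
    i.e.:

        yum install puppet facter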

    That’s it. In parts 2 and 3 we will configure our client and server and install Dashboard.

    Part 2 Configuring Puppetmaster and Puppetd


    Remote control with expect-lite

    I’ve recently been playing around with expect-lite, a wrapper for expect that allows you to automatically log in to remote hosts with telnet or ssh and run commands, without getting too far under the bonnet of expect syntax.

    But for my purposes I needed to do a little tweaking to get it to do what I wanted.

    The Problem

    In this scenario, I need to run a series of commands across an estate of hundreds of servers, as root. I must log in as a normal user and sudo is not an option, so that leaves me with becoming root from within the expect script – the challenge is how to accomplish this without storing the root password anywhere, or having it show up in the output of ps.

    The solution

    A little digging around the code, and the method I chose was to have a shell script read my password on stdin and pass it as an environment variable for expect-lite to use in my script. By default you can’t pass environment variables from your local shell to be used remotely by the script without specifically stating them on the command line – e.g. ./expect-lite foo=$BAR – and the problem there is that the value of $BAR will show up in ps.

    Implementation

    Modifications to expect-lite

    Firstly, we need to make two small modifications to the expect-lite script itself. The first is another option to the supplied arguments, allowing me to pass an environment variable by name (not value) to the expect script. To do this you can use this small patch


    --- expect-lite	2010-07-06 15:21:06.271119000 +0000
    +++ expect-lite.new	2010-07-06 15:34:04.490118000 +0000
    @@ -416,6 +416,8 @@
         u {set user $user_value }
         user_password -
         p {set pass $user_value }
    +    localenv -
    +    e { set cli_namespace($user_value) $env($user_value) }
         default {
             set cli_namespace($user_var) $user_value
             puts "Const: $user_var = $cli_namespace($user_var)"

    This patch will add an option “e” or “localenv” that tells expect-lite to get that environment variable and make it available in the namespace of the expect script.

    Secondly, in order to prevent the root password from being echoed to STDOUT during execution, make sure DEBUG, DEBUG_LOG and INFO are all set to 0 in the script.

    Wrapper script

    Now for your wrapper script. This script will read your root password without echoing the characters to the terminal, read a list of hosts from hosts.txt, and execute your expect-lite script for each host. Here is an example.


    #!/bin/bash
    # scan.sh - read the root password once, then run myscript.elt on each host

    echo -n "Enter root password: "
    stty -echo              # don't echo the password as it is typed
    read rootpwd
    echo
    stty echo
    export rootpwd          # exported by name; the patched expect-lite reads it

    for host in $(cat hosts.txt); do
        ./expect-lite remote_host=$host c=myscript.elt u=username e=rootpwd
    done

    Expect script

    Now create an expect-lite script called myscript.elt to implement the root password; here’s an example


    # set the expect timeout to 30 seconds
    @30
    # become root; $rootpwd was passed in by name via e=rootpwd
    > su -
    >$rootpwd
    # everything from here runs as root
    >touch /tmp/testfile.txt

    Finally

    … create a text file called hosts.txt listing your hosts, and run scan.sh. It should prompt you for your root password and then go off and execute commands as root on your remote hosts.
