Puppet notes

Installation:

  • Install epel: rpm -ivh http://mirror.symnds.com/distributions/fedora-epel/6/i386/epel-release-6-8.noarch.rpm

  • Install prereqs: yum -y install ruby ruby-libs ruby-shadow

  • Install puppet: yum install puppet puppet-server facter

  • Update hosts table. I put vmhost as a dup for the virtual 192.168.122.1. Hopefully, that doesn’t bork things up.

  • Update firewall rules. I used system-config-firewall-tui; not a real good way to do that.

  • In the initial config, $vardir defaults to /var/lib/puppet unless otherwise specified. So, the ssl files are under /var/lib/puppet/ssl

  • Install the client:

    • yum -y install ruby ruby-libs ruby-shadow

    • yum install puppet puppet-server facter

  • Run the client: puppet agent --server=vmhost --no-daemonize --verbose

  • Sign the cert: puppet cert --list, then puppet cert --sign ldapa.olearycomputers.com

Troubleshooting installation:

  • ldapa got a cert error:

    # puppet agent --server=vmhost --no-daemonize --verbose
    info: Creating a new SSL key for ldapa.olearycomputers.com
    info: Caching certificate for ca
    info: Creating a new SSL certificate request for ldapa.olearycomputers.com
    info: Certificate Request fingerprint (md5): 2F:96:EA:37:9A:08:B3:94:28:78:FB:89:33:41:8D:03
    info: Caching certificate for ldapa.olearycomputers.com
    notice: Starting Puppet client version 2.6.18
    err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
    notice: Using cached catalog
    err: Could not retrieve catalog; skipping run
    
  • Removed the /var/lib/puppet dir and the hosts entry for 192.168.122.1.

  • Apparently, that was it: a mismatch between the long and short hostnames.

First configuration:

  • Agents are defined using node statements in the nodes.pp file. (the conventional name, not a requirement)

  • = is assignment, == is a comparison. Watch for -> vs => (=> is what you want for resource attributes; -> is the resource-chaining arrow).

  • Placement of {} is irrelevant. I can put ‘em where I want ‘em.
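Putting those pieces together, a minimal nodes.pp entry might look like this (hostname taken from the install notes above; the motd file resource is an assumed example, just to show => in use):

```puppet
# Sketch of a node definition; ldapa is the client installed earlier.
# The file resource is a made-up example to show attribute syntax.
node 'ldapa.olearycomputers.com' {
  file { '/etc/motd':
    ensure  => present,    # => assigns an attribute; == would be a comparison
    content => "Managed by puppet\n",
  }
}
```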

Further configuration:

  • Choices for organizing hosts:

    • multiple hosts in the node line:

      node 'web1..', 'web2..', 'web3..' {}
      
    • regex in the node line:

      node /^web\d+\.olearycomputers\.com$/ {}
      
    • External sources (like ldap.. interesting)

    • Default node:

      node default
      {   include defaultclass    }
      
    • Inheritance: Define a base node that everything else inherits from - useful for ssh keys, maybe? While it seems cool, the book suggests that we avoid inheritance.

      node base { include sudo, ssh_keys, etc }
      node 'web1' inherits base {}

    • Inheritance is cumulative:

      node base {..}
      node webserver inherits base {..}
      node web1 inherits webserver {..}
      
  • Organizing modules:

    • separate module classes into their own files instead of having everything stashed in one init.pp file.

    • separate all (most?) conditional checks into a ${module}::params class

  • Interesting: in require/notify references, the word class must be capitalized, e.g. require => Class["ssh::install"].

class ssh::config
{   file
    {   $ssh::params::ssh_daemon_config:
            ensure   => present,
            owner    => root,
            group    => root,
            mode     => 0600,
            source   => "puppet:///modules/ssh/sshd_config",
            require  => Class["ssh::install"],
            notify   => Class["ssh::service"],
    }
    file
    {   $ssh::params::ssh_client_config:
            ensure   => present,
            owner    => root,
            group    => root,
            mode     => 0644,
            source   => "puppet:///modules/ssh/ssh_config",
            require  => Class["ssh::install"],
    }
}
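For reference, the ssh::params class those resources point at would look something like this (a sketch; the paths and OS cases are assumptions, not copied from the book):

```puppet
# Sketch of the ${module}::params pattern; values are assumptions.
class ssh::params {
  case $operatingsystem {
    'CentOS', 'RedHat': {
      $ssh_daemon_config = '/etc/ssh/sshd_config'
      $ssh_client_config = '/etc/ssh/ssh_config'
    }
    default: {
      fail("ssh::params: unsupported OS ${operatingsystem}")
    }
  }
}
```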
  • When testing facter vars ($operatingsystem, etc), the values are case sensitive.

  • OK: trying my own module for files (/etc/hosts, resolv.conf, nsswitch.conf)

    • Good progress. Need to document what I did; but that can wait until tomorrow.

    • Short version: /etc/resolv.conf, hosts, and nsswitch.conf get automatically updated.

    • Want to do

      • /root/bin/* recursive copy.

      • /root/.kshrc

      • /root/.ssh/authorized_keys (for test env only)

      • automate installation of ossec
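The /root/bin item above could be sketched as a recursive file resource (the module name 'rootfiles' is made up for illustration; nothing by that name exists yet):

```puppet
# Sketch: recursively push /root/bin from a module's files directory.
# Module name 'rootfiles' is an assumption, not something built yet.
file { '/root/bin':
  ensure  => directory,
  recurse => true,
  purge   => false,      # leave files that aren't in the module alone
  owner   => root,
  group   => root,
  mode    => 0700,
  source  => 'puppet:///modules/rootfiles/bin',
}
```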

Reconfiguring puppet to use apache:

NOTE: 10/26/13: This seemed to work for puppet 2.X; however, with the puppet 3.X version I’m running now, the apache setup doesn’t pick up changed files. If I go back to the internal web server, the changed files are picked right up. I have a very small environment, so apache’s not mandatory; I’m going to leave this for now and continue studying.

Book says that the built-in web server that puppet uses shouldn’t be used for production. It’d probably be fine for my little environment; but I want the whole kit’n kaboodle, so I’m going to go for it, first in the virtual env here and then in my full, real env.

The book also says that one web server should be able to handle up to two thousand nodes. That seems pretty nice for an open source solution.

I’ll probably start on that tomorrow, though. Tonight, I want to get everything fully synced up, all three nodes and vmhost.

  • Steps:

    1. Install apache and passenger

    2. Configure apache to handle ssl authentication/verification

    3. Connect apache to the puppet master.

  • Install apache and passenger:

    • Apache: Use puppet to ensure apache and the ssl libs are installed. This command actually installs httpd if it’s missing; you can see the yum command being run if you execute ps.

        # puppet resource package httpd ensure=present
        notice: /Package[httpd]/ensure: created
        package { 'httpd':
          ensure => '2.2.15-29.el6.centos',
        }
        # puppet resource package mod_ssl ensure=present
        notice: /Package[mod_ssl]/ensure: created
        package { 'mod_ssl':
          ensure => '2.2.15-29.el6.centos',
        }


    • Phusion Passenger: an apache mod that allows embedding of ruby apps, similar to mod_perl or mod_php. Book suggests having a local repo for rubygem-passenger; however, as long as epel is available, I don’t believe that’s required.

        # puppet resource package rubygem-passenger ensure=present
        notice: /Package[rubygem-passenger]/ensure: created
        package { 'rubygem-passenger':
          ensure => '3.0.21-5.el6',
        }

    • I had to install an additional package to get the passenger.conf file to which the book refers:

        yum -y install mod_passenger-3.0.21-5.el6.x86_64

  • Configure apache. Created files based on info in the book. To be included here when I get it fully functional:

    • /etc/httpd/conf.d/passenger.conf

    • mkdir -p -m 755 /etc/puppet/rack/puppetmaster/{public,tmp}

    • /etc/puppet/rack/puppetmaster/config.ru

    • chown -R puppet:puppet /etc/puppet/rack/puppetmaster/

    • Let ‘er rip:

      # puppet resource service httpd ensure=running enable=true hasstatus=true
      notice: /Service[httpd]/ensure: ensure changed 'stopped' to 'running'
      service { 'httpd':
        ensure => 'running',
        enable => 'true',
      }
      
    • Troubleshooting was a skosh entertaining.

      • First, no typos in the apache configs. Surprising, that, as they’re involved.

      • It seems like there should be an easier way to start/stop the web based puppetmaster. Only thing I have atm is:

        puppet resource service httpd ensure=stopped enable=false hasstatus=true
        puppet resource service httpd ensure=running enable=true hasstatus=true
        
      • My main issue was certificate host matching. I used short vmhost throughout the env. I ended up having to revoke the two (2) vmhost certs and regenerate them.

        1. ID the master certificate name:

        # puppet master --configprint certname
        vmhost.olearycomputers.com ## was short name

        2. Stop the master (resource command listed above)

        3. ID the cert dir: puppet master --configprint ssldir

        4. Remove any pem files in that dir named vmhost*

        5. Correct to fqdn in /etc/puppet/manifests/site.pp

        6. Restart

  • Deconfigure puppetmaster from autostart and configure httpd to autostart. This will ensure that apache is managing the puppet environment.

    # chkconfig puppetmaster off
    # chkconfig httpd on
    # chkconfig --list | grep -i -e master -e httpd
    httpd           0:off   1:off   2:on    3:on    4:on    5:on    6:off
    puppetmaster    0:off   1:off   2:off   3:off   4:off   5:off   6:off

  • Had to set selinux to permissive mode. Seems there’s not much choice in the matter… Perhaps, I’ll figure it out later.

Reinstallation:

This is now a separate rst

Remaining notes:

  • Pretty much skimmed chapter 8 re tools and integration. Interesting points:

    • puppet-module:

      • gem install puppet-module

      • Example usage:

        # puppet-module search xymon
        =====================================
        Searching http://forge.puppetlabs.com
        -------------------------------------
        1 found.
        --------
        binbashfr/xymon_report (0.0.1)
        
    • Ruby: if you want to iterate over hosts, you need to know ruby. puck. Things it can be used for:

      • Obtaining resources from data: using datafiles vs puppet manifests to create resources

      • Specific accounts per system

      • Specific motds per region

      • Specific entries in ntp.conf per region
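The resources-from-data idea can be sketched with puppet's create_resources() function, which keeps the ruby on the data side (the account data here is made up):

```puppet
# Sketch: build user resources from a data hash instead of writing
# one resource block per account.  Names and uids are made up.
$admins = {
  'alice' => { 'uid' => '1001', 'ensure' => 'present' },
  'bob'   => { 'uid' => '1002', 'ensure' => 'present' },
}

create_resources('user', $admins)
```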

  • Chapter 10: extending facter and puppet:

    • Another fuck: in order to extend facter, I need to know ruby.

    • Can use environment variables, e.g. FACTER_datacenter=chicago. How/where to set that, though? That shouldn’t be in root’s default environment…

    • some interesting examples that could be used to automate a lot of system data collection.

    • Puppet types, providers, and functions:

      • Types: manage individual configuration items.

      • Providers: handle the management of that configuration item. e.g.: the package type has apt, yum, and rpm providers.

      • A provider along the lines of the shells example could be used to manage ssh public keys vs having entire files in the files directory.
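As a taste of the facter side, a minimal custom fact really is only a few lines of ruby (the fact name 'datacenter' is just the example from above; the hard-coded value is a placeholder):

```ruby
# Minimal custom fact sketch; goes in a module's lib/facter directory.
# Fact name 'datacenter' is the example from these notes.
Facter.add(:datacenter) do
  setcode do
    # Real logic would inspect the host; 'chicago' is a placeholder.
    'chicago'
  end
end
```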

  • Chapter 11: mcollective:

    • Method of automating command runs against sets of systems. Examples:

      • How many systems have 32 gigs of ram?

      • Deploy ver 1.2.3 of my app to all sqa systems.

      • run puppet on all systems ensuring that at most 10 runs are happening at once.

      • Restart the apache services on X subset of DMZ systems.

    • Architecture:

      • asynchronous messages sent/received via the STOMP protocol

      • Client/server model

      • A single rabbitmq server can support hundreds of connected mcollective server processes.

Things to do:

  • (done) Get puppet running in real network

  • (done) modify sshd_config and keys to match ssh config site.

  • (done) Complete reinstallation and document for ll

  • Reread ch 9 on reporting

  • Learn ruby and ruby domain specific language (DSL)

  • Generate external node classifier (ENC) (ch 5)

  • Create a mysql dbase for stored configurations (ssh host keys)

  • Install consoles and experiment: (ch 7)

    • dashboard: reporting

    • foreman: provisioning, config management, and reporting.

  • Integrate [svn|git] w/puppet

  • Test dev/qa/prod environments via puppet, particularly as it relates to global configs, like ssh/security settings. Docs in the first part of chapter 3 of pro book.

  • List of things that should be puppet controlled:

    • ssh keys (done)

    • cluster configs? OCFS?

    • OS dir permissions (done)

    • motd

    • security files:

      • ftpusers & other vsftpd configs

      • syslog.conf

      • ntp.conf

      • etc

    • Scripts to scatter across all nodes.