
Automated Puppet Testing

“All code is guilty, until proven innocent.” – Anonymous

Scott Nottingham July 23rd, 2012

Before and After Testing

• Before Testing

– Unit tests

• Cucumber-puppet

• Rspec-puppet

• After Testing

– User Space tests

• Did everything that was supposed to get done actually get done?

Unit Tests - frameworks

• Cucumber-puppet

– Tests the outcome of the entire module as opposed to just the manifests

– Overly complicated by trying to be simple (imho)

• Rspec-puppet

– Unit tests for the manifests

– Truly simple (and effective)

RSpec-puppet

• Written by Tim Sharpe (rodjek)

• Available on github

– https://github.com/rodjek/rspec-puppet/

RSpec-puppet

• PROs

– Simple language and structure

– Easy to maintain

– Fast to execute

– Execution easily automated

• CONs

– Only tests what you tell it to

• Easy to make changes to manifests and forget to update RSpec tests

Writing RSpec Tests

• Directory Tree

module
+-- manifests
+-- files
+-- templates
+-- lib
+-- spec
    +-- spec_helper.rb
    +-- classes
        +-- <class_name>_spec.rb

Writing RSpec Tests

• spec_helper file

– Required for RSpec to know where to find the module path

– Resides within: <module_name>/spec/spec_helper.rb

– Contents:

require 'puppet'
require 'rubygems'
require 'rspec-puppet'

RSpec.configure do |c|
  c.module_path = File.join(File.dirname(__FILE__), '../../')
  c.manifest = '../../../../manifests/site.pp'
end

Writing RSpec Tests

• Spec file

– Contains test logic

– Naming convention:

• <classname>_<pp_filename>_spec.rb

• Ex. nagios_init_spec.rb

– Contents

• Requires

– require 'spec_helper'

• Tests
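– Put together, a minimal skeleton for the nagios example above might look like this (a sketch; the single test shown is just a placeholder):

# spec/classes/nagios_init_spec.rb
require 'spec_helper'

describe 'nagios' do
  # Real tests go here; the simplest possible check is that the
  # class itself ends up in the compiled catalog.
  it { should contain_class('nagios') }
end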

Writing RSpec Tests

• Defining Tests

– Tests are created using the ‘describe’ method.

– The first test must describe the class defined in your manifest.

describe 'nagios' do

end

Writing RSpec Tests

• Accounting for class dependencies

– Some modules define class dependencies

– Example:

• Class['yum'] -> Class['nagios']

– These are handled in RSpec tests with ':pre_condition'

let(:pre_condition) { 'class {"yum": }' }
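In context, the pre_condition string is ordinary Puppet code that rspec-puppet compiles ahead of the class under test; a sketch, assuming the yum module is available on the module path:

describe 'nagios' do
  # Declare the yum class first so the Class['yum'] -> Class['nagios']
  # ordering dependency can be resolved at catalog compilation time.
  let(:pre_condition) { 'class {"yum": }' }

  it { should contain_class('nagios') }
end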

Writing RSpec Tests

• Setting parameters used by puppet manifests

– Optional unless manifest needs them to compile

– Handled with ‘:params’

– Example (nested parameters):

let(:params) { {
  :key1 => 'value1',
  :key2 => 'value2',
  :key3 => { 'keyA' => 'valA' },
  :key4 => { 'keyB' => { 'keyX' => 'valX' } }
} }

Writing RSpec Tests

• Setting a node name for the tests (optional)

– Might be used if your manifest has node-name-dependent logic

let(:node) { 'host.example.com' }

Writing RSpec Tests

• Setting custom fact values

– Probably the most commonly used

– Example:

let(:facts) { {:fact1 => 'val1', :fact2 => 'val2'} }
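A sketch of facts in use, assuming a hypothetical manifest that branches on the osfamily fact:

describe 'nagios' do
  let(:facts) { {:osfamily => 'RedHat'} }

  # Hypothetical expectation: on RedHat-family systems the manifest
  # is assumed to install the 'nagios' package.
  it { should contain_package('nagios') }
end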

Writing RSpec Tests

• Sub-tests

– These are the tests for each condition that you want to test for a given manifest.

describe 'nagios' do

describe 'with valid parameters passed' do

....

end

describe 'with invalid parameters passed' do

....

end

end

Writing RSpec Tests

• Testing the resources

• Generally speaking, the tests you write confirm that the class contains some set of resources. This is done with the 'contain_<resource_type>' matcher.

• Each resource will generally have attributes you want to verify are present. This is done with the .with_<attribute>('value') method.

• Example:

it { should contain_file('foo').with_ensure('present').with_owner('root').with_group('root') }

Writing RSpec Tests

• Cascading .with_ methods can get ugly!

• Cascading should only be used for resources with 1 or 2 attributes defined.

• Alternatively, you can use block notation for resources with many attributes

it do
  should contain_file('/var/spool/squid/cache').with(
    'require' => 'Package[squid]',
    'ensure'  => 'directory',
    'owner'   => 'squid',
    'group'   => 'squid'
  )
end

Writing RSpec Tests

• Sometimes multiple values are assigned to a single attribute

• These are handled like so:

it do
  should contain_mount('/var/spool/squid/cache').with(
    'require' => ['Logical_volume[lv_cache]', 'Package[xfsprogs]'],
    ....
    ....
  )
end

Writing RSpec Tests

• Templates

– If your file content is generated by a template, RSpec can test for this too. Here's how:

• Test that a template generates any content

it 'should generate valid content for squid.conf' do
  content = catalogue.resource('file', '/etc/squid/squid.conf').send(:parameters)[:content]
  content.should_not be_empty
end

• Test that a template generates specific content

it 'should generate valid content for squid.conf' do
  content = catalogue.resource('file', '/etc/squid/squid.conf').send(:parameters)[:content]
  content.should match('some_string')
end

Writing RSpec Tests

• Test for an expected error

• Sometimes, you want to test that an error is raised.

– For example, when a required parameter is not passed, or when an invalid parameter is passed.

– Example test

it { expect { should contain_class('nagios::server') }.to raise_error(Puppet::Error) }
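In context, such a test deliberately omits the required parameter (the idea of a required parameter on nagios::server is hypothetical here):

describe 'nagios::server' do
  # No params passed; if the class requires one, catalog compilation
  # should fail with a Puppet::Error, which is exactly what we expect.
  let(:params) { {} }

  it { expect { should contain_class('nagios::server') }.to raise_error(Puppet::Error) }
end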

require 'spec_helper'

describe 'sendmail' do
  let(:params) { {:mail_server => 'foobarbaz'} }

  it { should contain_package('sendmail').with_ensure('installed') }

  it do
    should contain_file('/etc/mail/sendmail.cf').with(
      'require' => 'Package[sendmail]',
      'notify'  => 'Exec[makemap]'
    )
  end

  it 'should generate /etc/mail/sendmail.cf from a template' do
    content = catalogue.resource('file', '/etc/mail/sendmail.cf').send(:parameters)[:content]
    content.should match('foobarbaz')
  end

  it 'should generate /etc/mail/submit.cf from a template' do
    content = catalogue.resource('file', '/etc/mail/submit.cf').send(:parameters)[:content]
    content.should match('foobarbaz')
  end

  it do
    should contain_file('/etc/mail/submit.cf').with(
      'require' => 'Package[sendmail]',
      'notify'  => 'Exec[makemap]'
    )
  end

  it { should contain_file('/etc/sysconfig/sendmail').with_source('puppet:///sendmail/sendmail') }

  it do
    should contain_service('sendmail').with(
      'require' => 'Package[sendmail]',
      'enable'  => 'true',
      'ensure'  => 'running'
    )
  end
end

Running RSpec Tests

• rspec --color --format documentation /etc/puppet/modules/sendmail/spec/classes/sendmail_init_spec.rb

sendmail
  should contain Package[sendmail] with ensure => "installed"
  should contain File[/etc/mail/sendmail.cf] with notify => "Exec[makemap]" and require => "Package[sendmail]"
  should generate /etc/mail/sendmail.cf from a template
  should generate /etc/mail/submit.cf from a template
  should contain File[/etc/mail/submit.cf] with notify => "Exec[makemap]" and require => "Package[sendmail]"
  should contain File[/etc/sysconfig/sendmail] with source => "puppet:///sendmail/sendmail"
  should contain Service[sendmail] with enable => "true", ensure => "running" and require => "Package[sendmail]"

Finished in 1.98 seconds
7 examples, 0 failures

Running RSpec Tests

• Oops! Someone put a comma where they meant to put a semi-colon!

Failures:

1) sendmail
   Failure/Error: it { should contain_package('sendmail').with_ensure('installed') }
   Puppet::Error:
     Syntax error at '/etc/sysconfig/sendmail'; expected '}' at /etc/puppetmaster/modules/sendmail/spec/../../sendmail/manifests/init.pp:19 on node neodymium.trading.imc.intra
   # ./sendmail_init_spec.rb:6

2) sendmail
   Failure/Error: it { should contain_package('sendmail').with_ensure('installed') }
   Puppet::Error:
     Syntax error at '/etc/sysconfig/sendmail'; expected '}' at /etc/puppetmaster/modules/sendmail/spec/../../sendmail/manifests/init.pp:19 on node neodymium.trading.imc.intra
   # ./sendmail_init_spec.rb:6

Finished in 0.19808 seconds
2 examples, 2 failures

Automating RSpec Tests

• Tests can be run by anything capable of executing a system command (see the Rakefile sketch after this list)

• We integrate ours into TeamCity

– Much like Jenkins

– Allows for event driven execution of tests

• Any time a commit occurs in version control, tests are executed

– If tests pass, manifests are wrapped inside an RPM and pushed to puppet masters

– If tests fail, we are notified
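A minimal sketch of the kind of wrapper such a system can call: a Rakefile using RSpec's built-in rake task. The spec pattern and options here are assumptions, not the presenter's actual setup:

# Rakefile
require 'rspec/core/rake_task'

# Run every class spec under every module; a failing example makes
# rake exit non-zero, which is all a CI server needs to flag the build.
RSpec::Core::RakeTask.new(:spec) do |t|
  t.pattern    = 'modules/**/spec/classes/*_spec.rb'
  t.rspec_opts = '--color --format documentation'
end

task :default => :spec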

Automating RSpec Tests

• RSpec tests do a great job of making sure manifests are syntactically correct and, generally speaking, function as expected.

• But… things aren't always as they seem!

‘After’ Tests

• To ensure our environment really is clean, we must test in user space.

• Probably many solutions out there

– We wrote our own

User Space Testing

• Post Provision Checks (PPC) is a framework for testing stuff in user space.

– Originally written as health checks for testing a machine immediately after provisioning it, and as verification testing after maintenance.

– Quickly grew into an extensible framework capable of integrating with Nagios/Icinga and TeamCity, or running as a standalone application.

– Contains libraries of functions for commonly used tasks.

– Runtime options are easily customized with profiles.

– Can be extended with plugins.

PPC functionality

puppet_ust plugin

• The puppet user space tests plugin was written to extend the framework for specifically testing puppet applied configurations.

– Auto detects the puppet classes that are applied to a given box

– Determines the parameters defined for the host being tested and utilizes them within tests.

• Also uses facter facts in the same manner

That’s great, but puppet reports will tell you what worked and what didn’t…

Misses

• You wouldn't believe the things we have seen go wrong with applying configurations (that the puppet logs/reports didn't indicate).

– PPC tests can be written to catch all of these!

• Mistakes only happen once

• Winning!

– A few examples:

• Version of package specified was not what got installed – not puppet's fault, but still a problem
• Package got installed, but some key files were zero length
• Drivers upgraded, but not properly loaded
• Machines joined to the wrong domain
• Services running (per status) but not functioning properly
• Yum repos configured but not functioning
• Sysctl settings still not loaded after sysctl -p

PPC Sample Output

• Standalone mode

PPC Sample Output

• TeamCity mode

Automating PPC Tests

• TeamCity + multiplexed SSH (for parallelization)

• Each version control commit results in a puppet run + user space test run in our dev and QA environments.

• Over 20,000 tests performed in ~20 minutes

– This means every 20 minutes new, fully tested changes are pushed from the development environment to QA

– Each stage of the pipeline is tested in a similar fashion
