PuppetConf 2016: Deconfiguration Management: Making Puppet Clean Up Its Own Mess – Josh Snyder, ...
TRANSCRIPT
That thing you deployed in a hurry isn't always in the place it should be long term.
Or we deploy a thing, and decide next week (or tomorrow) that it isn't right for us.
Or we just straight-up make mistakes.
Or our coworkers make mistakes.
git revert can't save me.
My colleague sends me a PR to deploy memcached config files under a new path. Should I comment on their failure to clean up the old ones?
YES!
IGNORE THE PROBLEM
Ignore it! It probably won't hurt anything
Except it will confuse people.
It WILL confuse you, at two AM
JVM versions will exist on some older dev machines, but not newer ones
ENSEMBLES OF RESOURCES GET COMPLEX

if $ensure == 'present' {
  file { '/etc/init/memcached.conf':
    ensure => 'file',
    ...
  }
} else {
  file { '/etc/init/memcached.conf':
    ensure => 'absent',
    ...
  }
}
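The branching above can be flattened with a selector so the ensure logic lives in one place; a minimal sketch (the memcached class and its parameter are illustrative, not from the talk):

```puppet
# Sketch: compute the file's ensure value from a class parameter.
# The absent branch still has to be written, reviewed, and maintained.
class memcached ($ensure = 'present') {
  $file_ensure = $ensure ? {
    'present' => 'file',
    default   => 'absent',
  }

  file { '/etc/init/memcached.conf':
    ensure => $file_ensure,
  }
}
```

This shortens the manifest, but it doesn't remove the underlying chore: every resource still needs an explicit cleanup counterpart.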
NO FUN AT ALL
more code to write
more code to review
my cleanup code might be broken
someone has to clean up the cleanup code
REBUILD MACHINES
Good! Cattle, not pets
I'm deploying new changes every 30 minutes
We only clean things up every... day? 30 days? 60 days? Five years?
∃ an impedance and tooling mismatch
THE SOLUTION (TO ALL YOUR PROBLEMS)
1. Use Puppet to specify what should be deployed
2. Allow Puppet to remove anything it doesn't know about
WHAT'S TO COME?
A few basic examples (directory purging, cronjobs, hosts)
A bit of Puppet internals
Use Puppet internals to achieve more purging
DIRECTORY PURGING

file { '/etc/cassandra':
  ensure  => 'directory',
  recurse => true,
  purge   => true,
  force   => true,
  ...
}
WHAT WILL THIS DO?
Use puppet agent --noop
Or add noop => true
Look at the system:
$ ls /etc/cassandra
...
$ dpkg -S /etc/cassandra
...
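One way to rehearse a purge before trusting it is the noop metaparameter mentioned above; a sketch using the /etc/cassandra example:

```puppet
# Dry run: Puppet reports everything it would purge from the
# directory without actually deleting it.
file { '/etc/cassandra':
  ensure  => directory,
  recurse => true,
  purge   => true,
  force   => true,
  noop    => true,
}
```

Once the noop report shows only files you expect to lose, drop the noop line and let the purge run for real.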
EXAMPLE 1: PARTIAL MANAGEMENT
Situation: We want to purge /etc/cassandra, but we need to generate the list of seeds outside of Puppet

file { '/etc/cassandra':
  ensure  => directory,
  recurse => true,
  purge   => true,
  force   => true,
  ...
}
file { '/etc/cassandra/seeds':
  ensure  => file,
  replace => false,
  ...
}
EXAMPLE 2: CRONJOBS
Lots of cronjobs
Lots of cronjobs!
Using Yelp's puppet-cron
One file per job in /etc/cron.d
Option → Implication
Recompile cron to read from a supplemental directory → Anyone else would have to use our patched cron
Create File resources for each file we expect from a deb → Whenever someone installs a package with a new cronjob in it, they'd get a nasty surprise
Find some way to identify those cronjobs that were originally created by Puppet → Good
puppet-cron
What this solution ends up looking like (ish):

file { '/nail/etc/cron.d':
  ensure  => directory,
  purge   => true,
  force   => true,
  recurse => true,
}
file { '/nail/etc/cron.d/myjob':
  ensure => file,
  ...
} ->
file { '/etc/cron.d/myjob':
  ensure => link,
  target => '/nail/etc/cron.d/myjob',
}
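The two-resource pattern above lends itself to a defined type; a rough sketch (the name cron_d_job is hypothetical, not puppet-cron's actual interface):

```puppet
# Hypothetical wrapper: stage the job file in the purged staging
# directory, then symlink it into the real /etc/cron.d. Jobs removed
# from Puppet disappear from the staging dir, breaking their symlinks.
define cron_d_job ($content) {
  file { "/nail/etc/cron.d/${title}":
    ensure  => file,
    content => $content,
  } ->
  file { "/etc/cron.d/${title}":
    ensure => link,
    target => "/nail/etc/cron.d/${title}",
  }
}
```

Package-installed cronjobs live directly in /etc/cron.d and are never symlinks into the staging directory, which is how Puppet-created jobs stay identifiable.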
github.com/Yelp/puppet-cron
EXAMPLE 3: /ETC/HOSTS
Puppet agent has a RAL (resource abstraction layer)
The RAL is responsible for representing resources on the system as Puppet Resource objects
$ puppet resource host
host { 'ip6-allnodes':
  ensure => 'present',
  ip     => 'ff02::1',
  target => '/etc/hosts',
}
host { 'ip6-allrouters':
  ensure => 'present',
  ip     => 'ff02::2',
  target => '/etc/hosts',
}
host { 'localhost':
  ensure => 'present',
  ...
Puppet diffs resources in the catalog against the RAL it constructs
Could we ask it to remove resources present in the RAL but not in the catalog?
YES!
HOW IT WORKS
All on the agent, after catalog compilation
Iterate over resources, calling the generate or eval_generate method on each.
Each resource has the opportunity to add more resources to the Puppet run.
Walkthrough: fetching files from a fileserver
file { '/etc/cassandra':
  ensure  => directory,
  source  => 'puppet:///modules/cassandra/config_dir',
  recurse => true,
  purge   => true,
  force   => true,
}
1. Get catalog with this resource declared
2. Puppet agent calls eval_generate on this resource
3. eval_generate examines the disk, compares it with the Puppet fileserver
4. Generates more resources to represent the files beneath this directory
HOW THE RESOURCES TYPE WORKS

resources { 'host':
  purge => true,
}
1. Puppet calls generate
2. generate finds all resources of type Host in the catalog
3. Asks providers of Host for their instances
4. Compares the two
5. Emits new resources:
host { 'ip6-allnodes':
  ensure => absent,
}
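Putting the pieces together, a manifest that keeps only the hosts it declares might look like this sketch:

```puppet
# Any Host entry the provider finds on the system but the catalog
# doesn't declare gets emitted with ensure => absent and removed.
resources { 'host':
  purge => true,
}

# Declared hosts survive the purge.
host { 'localhost':
  ensure => present,
  ip     => '127.0.0.1',
}
```

A noop run first is worthwhile here too, since the purge applies to every Host entry on the system, including ones a distribution ships by default.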
PURGING UNDESIRED DEBIAN PACKAGES
Let's say I do:

resources { 'package':
  purge => true,
}
This happens:
Notice: /Stage[main]/Main/Package[libxtst6]/ensure: current_value
Notice: /Stage[main]/Main/Package[libxcb-dri3-0]/ensure: current_value
Notice: /Stage[main]/Main/Package[powermgmt-base]/ensure: current_value
Notice: /Stage[main]/Main/Package[python3-py]/ensure: current_value
Notice: /Stage[main]/Main/Package[libtk8.6]/ensure: current_value
Notice: /Stage[main]/Main/Package[node-ansi-color-table]/ensure:
Notice: /Stage[main]/Main/Package[libxpp3-java]/ensure: current_value
Notice: /Stage[main]/Main/Package[python3-newt]/ensure: current_value
Notice: /Stage[main]/Main/Package[bsdmainutils]/ensure: current_value
Notice: /Stage[main]/Main/Package[libpulse0]/ensure: current_value
Notice: /Stage[main]/Main/Package[liblvm2app2.2]/ensure: current_value
Notice: /Stage[main]/Main/Package[libarchive-zip-perl]/ensure:
NO BUENO
Why doesn't Puppet understand that it should only remove packages that:
aren't in the catalog, and
aren't depended on by any other package
APT-GET AUTOREMOVE
Divides packages into:
manually installed (we're sure we want this)
auto installed (a dependency)
THIS MAPS WELL TO PUPPET
Puppet state ⇒ autoremover state
in catalog ⇒ manually installed
not in catalog ⇒ automatically installed
AN IMPLEMENTATION COMES TOGETHER
1. Synchronize the autoremover database with the Puppet catalog
2. Run apt-get autoremove
3. Problem?
AN IMPLEMENTATION COMES TOGETHER (PART DEUX)
1. Synchronize the autoremover database with the Puppet catalog
2. Run apt-get -s autoremove
3. Read the output and create Puppet package resources
4. Much rejoicing!
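Because step 2 only simulates the removal, step 3 can turn each removable package back into an ordinary Puppet resource, so the deletions flow through Puppet's normal reporting and noop machinery. The generated resources would presumably look like this sketch (the package name is just one from the earlier notice output):

```puppet
# Hypothetical resource generated for one package that the simulated
# autoremove marked as removable:
package { 'libxtst6':
  ensure => absent,
}
```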
github.com/hashbrowncipher/puppet-package_purging
GENERAL PURPOSE SOLUTIONS?
Could there be a jack-of-all-trades solution to purging?
What if we could do:
purge { 'user':
  unless => ['uid', '<=', '500'],
}
It exists: github.com/crayfishx/puppet-purge
NOT MY FAVORITE DEFAULT
Q: What does this do?
package { 'mysql-server-5.7': }
package { 'bash': }
A: Creates version drift
UPGRADE (SOME OF THE) THINGS

package { 'mysql-server-5.7':
  ensure => $my_favorite_mysql_version,
}
package { 'bash': }
aptly_purge can set all versioned packages as held by dpkg
aptly_purge { 'packages':
  hold => true,
}
Upshot: apt-get dist-upgrade and unattended-upgrades will only touch packages for which Puppet does not specify a version.
END MATTER
Please do tell me your stories of Puppet resource purging.
Walk up and say hi right now.
[email protected] (works)
@josnyder in Puppet Community Slack