TRANSCRIPT
The Automated Monolith
Marco Seifried (@marcoseifried)
Tora Onaca (ro.linkedin.com/in/toravasilescu)
Paul Vidrascu (ro.linkedin.com/in/paulvidrascu)
dev.haufe-lexware.com
github.com/Haufe-Lexware
@HaufeDev
Building, Deploying and Testing using Docker, Docker Compose, Docker Machine, go.cd and Azure
[Cartoon: http://devops.com/wp-content/uploads/2014/10/2011.09.18_code_reviews.png]
Author: "No need to double-check this change list; if some problems remain, the reviewer will catch them."
Reviewer: "No need to look at this change list too closely, I'm sure the author knows what he's doing."
Haufe Strategy - Architecture Principles
Business value over technical strategy
Strategic goals over project-specific benefits
Composability over silos
Shared services over specific-purpose implementations
Evolutionary refinement over pursuit of initial perfection
Design for obsolescence over building for eternity
Good enough over best of breed
Declarative processes over implicit knowledge
What do we want to achieve?
Speed
Business value in manual deployments?
Reduce Errors
Prepare for change
Project and Approach
Features: User-, License- & Subscription Management
Goal: Microservices, fully automated deployment, APIs
Start: Fully automated deployment
First iteration: Automated test environment per client on Azure
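A per-client test environment on Azure could be provisioned with Docker Machine's Azure driver, as named on the title slide. A hedged sketch (the subscription ID and machine name are placeholders, and the exact flags depend on the docker-machine version):

```shell
# Create a Docker host in Azure; subscription ID and name are placeholders
docker-machine create --driver azure \
  --azure-subscription-id "<subscription-id>" \
  client-a-test

# Point the local Docker client at the new host
eval "$(docker-machine env client-a-test)"
```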
How – Automation Tools and Technologies
go.cd – continuous delivery: pipelines to run scripts, create Docker images and deploy onto Azure
Bitbucket – repository for configuration
Azure – cloud deployment target
Artifactory – internal Docker registry for images
Docker – run anywhere, describe everything in Dockerfiles
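"Describe everything in Dockerfiles" might look like the following for the app server. This is a minimal hedged sketch; the base image, artifact name and port are assumptions, not the project's actual files:

```dockerfile
# Hypothetical app-server Dockerfile; base image and WAR name are assumptions
FROM tomcat:8-jre8
COPY build/app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
```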
Break down – Step 0
Independent of the pipeline execution, every time an artifact is built in TFS it is pushed to the Haufe Artifactory.
Step 1 – building the Docker images
A new commit in the BitBucket repository triggers the pipeline.
The pipeline contains steps for building all the needed images (app server, db server, JMS server, logging, monitoring, test)
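The images listed above could be wired together with Docker Compose once built. A hedged sketch showing three of the services; all image names and the registry host are placeholders:

```yaml
# Hypothetical docker-compose.yml; registry and image names are placeholders
version: '2'
services:
  app:
    image: registry.example.com/project/app-server:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
      - jms
  db:
    image: registry.example.com/project/db-server:latest
  jms:
    image: registry.example.com/project/jms-server:latest
```

The logging and monitoring containers would be additional services in the same file.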
Step 1 – building the Docker images (cont.)
Logging
Docker logs alone are not enough
Alternative – docker exec into the container and access the log files
Alternative – map an external volume and copy the logs there
Best alternative: use what everyone else uses, and it works like a charm: Kibana + Elasticsearch + Fluentd
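Docker ships a fluentd logging driver (since Docker 1.8) that routes a container's stdout/stderr to a Fluentd endpoint, which can then forward to Elasticsearch for Kibana. A hedged per-service Compose fragment; the Fluentd address and tag are placeholders:

```yaml
# Hypothetical fragment: ship a service's output to Fluentd
services:
  app:
    logging:
      driver: fluentd
      options:
        fluentd-address: "fluentd-host:24224"
        tag: "app-server"
```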
Monitoring
See the status of the containers using cAdvisor, InfluxDB and Grafana
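A hedged Compose sketch of that stack: cAdvisor collects per-container metrics and writes them to InfluxDB via its storage-driver flags, and Grafana reads from InfluxDB for dashboards. Image tags and ports are assumptions:

```yaml
# Hypothetical monitoring stack; image versions and ports are assumptions
services:
  influxdb:
    image: tutum/influxdb
    ports:
      - "8086:8086"
  cadvisor:
    image: google/cadvisor
    command: ["-storage_driver=influxdb", "-storage_driver_host=influxdb:8086"]
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    depends_on:
      - influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```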
Summary
Two months of effort to get to Azure, with distributed teams across two countries
Creation of a dev environment reduced from one week to 30 minutes
Docker is constantly improving (improved networking, built-in drivers, etc.)
A baseline for future improvements
What’s next:
Move this along to production
Allow clients to choose the version of the images to use
Improve some startup times
Try out different cloud solutions