webcamp 2016: devops. Yaroslav Pogrebnyak: gobetween, a new load balancer
TRANSCRIPT
Speaker
Yaroslav Pogrebnyak
Tech Lead & Architect @
pogrebnyak.info github.com/yyyar
Architecture

Monolithic:
- Harder to scale development
- Strong coupling
- Not the best fault tolerance
- Easier to run and test the whole system
- No need to invent an inter-component communication mechanism

Microservices:
- Easier to scale development
- Loose coupling
- If one component fails, the others keep working
- Harder to assemble and run everything together
- Needs an interaction mechanism: how will components know about each other?
[Diagram: a Monolithic Application (API, Billing, Jobs, Notifications, Etc...) vs. the same modules as separate Microservices: API, Billing, Background Jobs, Notifications, Etc...]
Modules in the monolithic system became separate apps.
Reality :-)
Deployment

Monolithic:
- Components usually share the same process
- "Static" envs
- Hardware servers, virtual servers
- Manual scaling :-)

Microservices:
- Components became separate, isolated processes
- Load balancing in a dynamic environment
- Docker containers, LXC, etc.
- Auto-scaling
Reality :-)
See https://youtu.be/PivpCKEiQOQ =)
Load Balancing & Routing
The load balancer's new role: Router

Monolithic: components live inside the same app and already know about one another.
Microservices: components live in separate containers, not knowing about each other; they need a Service Registry (Consul, etcd, DNS, etc.).
Reality :-)
Our Use Cases at Teradek
Devices Management System
[Diagram: system components: Front, lb, API, Device Service, r/t updates, Jobber, Uploader, Notifications, Auth / Billing, connected via a Bus]
[Diagram: an lb in front of the API, r/t updates, multiple Device Service and Uploader instances, and Auth / Billing; plus an optional Bond Server]
“Elastic” Streaming Service
[Diagram: a VLC or 3rd-party client sends a video stream to a Proxy; the Proxy asks the API + Docker Master for a host/port; the master configures a container on a multi-region Docker Swarm cluster (the Swarm master uses Consul as discovery for nodes) and returns the host/port to the Proxy; the whole thing is managed by the Device Management System]
Centralized Logging System
[Diagram: the Device Management System, Auth / Billing, Streaming Servers and other services ship JSON logs through a Load Balancer to a cluster of ES Data Nodes; a Custom Realtime Log Viewer reads from ElasticSearch]
Traditional load balancers vs. dynamic environments and microservices...
- No built-in service discovery at all
- Nginx+ starting at $1900/yr, only DNS discovery (just introduced)
It's a pain! Too complex. Tricky workarounds...
- Haproxy + Consul + Consul Templates + Reload
- Nginx + Consul + Consul Templates + Reload
- Custom discovery scripts + Reload
- NGINX + docker-gen + Reload
Nginx + Consul + Consul Templates
upstream app {
    least_conn;
    {{range service "production.app"}}
    server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}
    server 127.0.0.1:65535; # force a 502
    {{end}}
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
(services register in Consul; consul-template re-generates the nginx config; nginx reloads)
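The re-generate-and-reload glue is typically one consul-template invocation. A sketch (the template and output paths are hypothetical examples, not from the talk):

```shell
# Re-render the upstream config whenever Consul services change,
# then reload nginx. Paths are illustrative.
consul-template \
  -template "/etc/nginx/app.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"
```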
gobetween: New Open-Source Load Balancer
- Acts as an L4 load balancer & reverse proxy for traditional architectures
- Acts as a Router for microservices architectures
- "Batteries included" to feel comfortable in dynamic envs!
gobetween.io
Written in Go!
- Optimized concurrency
- Cross-architecture (x86, amd64, arm, ...)
- Fast!
- Easy to deploy, develop & maintain
Built with ❤ in Odessa!
Works on any platform;)
Configuration File - TOML
config.toml:
[servers]
[servers.app]
bind = "0.0.0.0:3001"
balance = "weight"
max_connections = 0
client_idle_timeout = "10m"
backend_idle_timeout = "10m"
backend_connection_timeout = "5s"
[servers.app.healthcheck]
# ...

[servers.app.discovery]
# ...
Static List “Discovery”
$ gobetween -c config.toml
config.toml:
[servers]

[servers.api]
bind = "0.0.0.0:3001"

[servers.api.discovery]
kind = "static"
static_list = [
  "my.host1.com:8000 weight=1",
  "my.host2.com:8001 weight=2"
]
DNS SRV (RFC-2782) Discovery
$ gobetween -c config.toml
config.toml:
[servers]

[servers.api]
bind = "0.0.0.0:3001"

[servers.api.discovery]
kind = "srv"
failpolicy = "keeplast"
interval = "10s"
timeout = "1s"
srv_lookup_server = "my.dns.com:53"
srv_lookup_pattern = "api.service."
srv_dns_protocol = "tcp"
Docker / Swarm Discovery
config.toml:
[servers]

[servers.web]
bind = "0.0.0.0:3000"

[servers.web.discovery]
kind = "docker"
failpolicy = "keeplast"
interval = "10s"
timeout = "1s"
docker_endpoint = "http://123.0.0.1:2375"
docker_container_private_port = 80
docker_container_label = "service=web"
$ gobetween -c config.toml
JSON & Plaintext Discovery
config.toml:
[servers]

[servers.app]
bind = "0.0.0.0:3000"

[servers.app.discovery]
kind = "json"
json_endpoint = "http://some.url.com/s.json"
json_host_pattern = "host"
json_port_pattern = "port"
json_weight_pattern = "weight"
json_priority_pattern = "priority"
config.toml:
[servers]

[servers.app]
bind = "0.0.0.0:3000"

[servers.app.discovery]
kind = "plaintext"
plaintext_endpoint = "http://t.com/s.txt"
plaintext_regex_pattern = "^(?P<host>\S+):(?P<port>\d+)(\sweight=(?P<weight>\d+))?(\spriority=(?P<priority>\d+))?$"
[
  { "host": "some.host", "port": "12346", "weight": 1, "priority": 1 },
  …
]
host1:port1 weight=1
host2:port2 weight=5
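As a sanity check of the plaintext format above, here is a small bash sketch (an illustration, not part of gobetween) that parses one such line. Bash regexes have no named groups, so numbered groups stand in for `(?P<host>...)` and friends; the weight/priority fallback defaults are assumptions:

```shell
#!/usr/bin/env bash
# Parse one "host:port [weight=N] [priority=N]" line, mimicking
# gobetween's plaintext_regex_pattern with numbered capture groups.
parse() {
  local line=$1
  if [[ $line =~ ^([^:[:space:]]+):([0-9]+)([[:space:]]weight=([0-9]+))?([[:space:]]priority=([0-9]+))?$ ]]; then
    # Defaults weight=1 / priority=0 are illustrative assumptions.
    echo "host=${BASH_REMATCH[1]} port=${BASH_REMATCH[2]} weight=${BASH_REMATCH[4]:-1} priority=${BASH_REMATCH[6]:-0}"
  else
    return 1
  fi
}

parse "host2:8001 weight=5"   # → host=host2 port=8001 weight=5 priority=0
```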
Custom Exec Discovery
config.toml:
[servers]

[servers.billing]
bind = "0.0.0.0:3002"

[servers.billing.discovery]
kind = "exec"
failpolicy = "keeplast"
interval = "10s"
timeout = "1s"
exec_command = ["/opt/discovery.sh"]
$ gobetween -c config.toml
/opt/discovery.sh:
#!/usr/bin/env bash
#
# Write custom discovery logic and
# print backends to stdout!
#
# Example:

echo localhost:8000 weight=1
echo localhost:8001 weight=2

# do curl or anything else you wish!
Consul API Discovery (gobetween 0.3)
config.toml:
[servers]

[servers.app]
bind = "0.0.0.0:3002"

[servers.app.discovery]
kind = "consul"
consul_host = "my.consul.host.com"
consul_service_name = "web"
consul_service_tag = ""
consul_service_datacenter = ""
consul_service_passing_only = true
$ gobetween -c config.toml
Coming Soon... More Discovery Options!
+ More!
Custom Healthchecks
config.toml:
[servers]

[servers.app]
bind = "0.0.0.0:3002"
# ...

[servers.app.healthcheck]
kind = "exec"
interval = "10s"
timeout = "2s"
passes = 1
fails = 2
exec_command = "/opt/health.sh"
$ gobetween -c config.toml
/opt/health.sh:
#!/usr/bin/env bash
# example:

host=$1
port=$2
echo -n 1 # live
# echo -n 0 # dead
# do curl or anything else you wish!
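For something more realistic than the stub above, the script can probe the backend itself. A sketch, assuming the backend exposes an HTTP `/health` endpoint (that endpoint is an assumption of this example, not something gobetween requires):

```shell
#!/usr/bin/env bash
# gobetween passes host and port as $1 and $2; print 1 (live) or
# 0 (dead) to stdout. The /health URL is a hypothetical example.
check() {
  local host=$1 port=$2
  if curl -fsS -m 2 "http://${host}:${port}/health" >/dev/null 2>&1; then
    echo -n 1  # live
  else
    echo -n 0  # dead
  fi
}

check "$@"
```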
All Features List
● L4 TCP Load Balancing
● Clear & Minimalistic TOML config file
● Management & Stats REST API
● Discovery (Static List, Docker, JSON, Exec, PlainText, SRV, Consul)
● Healthchecks (Ping, Exec)
● Balancing Strategies (iphash, leastconn, roundrobin, weight)
● Single binary distribution
● Web Admin Panel - Coming Soon!
● Live Stream Retranslation - Coming Soon!
● Pull Config from Consul, Etcd, custom server - Coming Soon!
● Webhooks / Events?
Give us a Star on GitHub :-)
http://gobetween.io
curl http://ip:port/servers/name/stats
{ "active_connections": 8, "rx_total": 2284713478, "tx_total": 4200533149, "rx_second": 38327, "tx_second": 54955, "backends": [ { "host": "10.0.0.51", "port": "9200", "priority": 1, "weight": 1, "stats": { "live": true, "active_connections": 1, "rx": 2284713478, "tx": 4200533149, "rx_second": 38327, "tx_second": 54955 } } ] }
curl http://ip:port/servers/name

{
  "web": {
    "bind": "10.0.0.8:9200",
    "protocol": "tcp",
    "balance": "weight",
    "max_connections": 10000,
    "client_idle_timeout": "10m",
    "backend_idle_timeout": "10m",
    "backend_connection_timeout": "2s",
    "discovery": {
      "kind": "static",
      "failpolicy": "keeplast",
      "static_list": [ "10.0.0.51:9200" ]
    },
    "healthcheck": {
      "kind": "ping",
      "interval": "1s",
      "passes": 3,
      "fails": 3,
      "timeout": "500ms"
    }
  }
}
Full-featured REST API: CRUD + Stats
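Consuming the stats endpoint from a script is straightforward. A sketch that extracts one counter with sed; to avoid requiring a live server it parses the sample payload shown above, and a real call would replace the literal with `curl -s http://ip:port/servers/name/stats`:

```shell
#!/usr/bin/env bash
# Extract active_connections from a gobetween stats response.
# In real use: stats=$(curl -s http://ip:port/servers/name/stats)
stats='{"active_connections": 8, "rx_total": 2284713478, "tx_total": 4200533149}'
active=$(printf '%s' "$stats" | sed -n 's/.*"active_connections": \([0-9]*\).*/\1/p')
echo "$active"   # → 8
```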
Performance Tests (8 core 16 thread Intel(R) Xeon(R) CPU L5630 @ 2.13GHz)
gobetween Production Use Cases at Teradek
Real Production Example @ Teradek: Device Management Service
config.toml:
[servers.server]
bind = "0.0.0.0:7000"

[servers.server.discovery]
kind = "static"
static_list = [
  "10.0.0.22:7000 weight=1",
  "10.0.0.23:7000 weight=1",
  "10.0.0.24:7000 weight=1",
  "10.0.0.25:7000 weight=1"
]

In production on an AWS t2.small instance:
- Currently handles >12 Mbit/s bandwidth
- >100 GB/day transfer (stats & control, excluding video traffic)
- gobetween process: CPU 4%, Mem 32 MB
[Diagram: gobetween in front of four Device Service instances]
Real Production Example @ Teradek: ElasticSearch cluster balancing
/etc/gobetween/es.sh:
curl -sS -XGET 'http://es:9200/_cat/nodes?v&h=ip,r=d' | sed '1d' | tr -d ' ' | sed 's/$/:9200/'
config.toml:
[servers.es]
bind = "100.100.1.5:9200"

[servers.es.discovery]
kind = "exec"
exec_command = ["/etc/gobetween/es.sh"]
interval = "1m"
timeout = "10s"
In production. Collecting logstash logs: >5 GB/day.
Uptime: >2 months. No downtime.
Yaroslav Pogrebnyak
pogrebnyak.info
github.com/yyyar
[email protected]