Achieving Horizontal Scaling for Enterprise Messaging using Fabric8 – Rob Davies, June 2015


Achieving Horizontal scaling for Enterprise Messaging using Fabric8

Rob Davies June 2015

RED HAT | Fabric8 Rocks! 2

Rob Davies

•  Director, Engineering xPaaS
•  Technical Director of Fuse Engineering
•  Over 20 years’ experience developing large scale solutions for telcos and finance
•  Creator of ActiveMQ and ServiceMix
•  Committer on open source projects, including Apache Camel …


Agenda

•  Introduction to Fabric8
•  Enterprise messaging
•  Fabric8MQ
•  Demo


Fabric8

•  Open source, ASL 2.0 licensed project, built to provide the management infrastructure for Microservices deployments
•  Management: console, logging, metrics, API Management
•  Continuous Delivery – workflow
•  Integration Platform as a Service – Camel route visualization, API registry, Messaging as a Service, …
•  Tools – Kubernetes/OpenShift build integration, Kubernetes component test support, CDI extensions


Fabric8:

(Diagram: the Fabric8 stack. Kubernetes runs on GCE, AWS, CoreOS, Mesos, vSphere, Fedora, Vagrant or Azure. On top of Kubernetes, Fabric8 provides centralized logging, metrics, deep application management, an API registry, API Man, Fabric8MQ, Camel tooling and Java tools, plus supporting services such as SonarQube, Gogs, Jenkins and Nexus.)


Agenda

•  Introduction to Fabric8
•  Enterprise messaging
•  Fabric8MQ
•  Demo


Why use messaging ?

•  Robustness to change
•  Time independence
•  Hide latency
•  Event driven
•  Platform and language independence

(Diagram: Process A sends a message in to Queue:Foo on the Broker; Process B enriches it, and the message out reaches Process C via Queue:Bar.)


Enterprise Message brokers

•  Designed to support many different messaging patterns
•  Highly available
•  Support clustering
•  Support store and forward
•  But – are usually very static in nature


Apache ActiveMQ

•  Top level Apache Software Foundation project
•  Wildly popular, high performance, reliable message broker
•  Connects to nearly everything
•  Native Java, C/C++, .Net
•  AMQP 1.0, MQTT 3.1, STOMP (1.0–1.2) and OpenWire
•  STOMP enables Ruby, JS, Perl, Python, PHP, ActionScript …
•  Embedded and standalone deployment options


Message Channels and Routing

•  Message Channels
•  Named communication between interested parties
•  JMS calls them ‘Destinations’
•  Can “tune in” to multiple channels using wildcards
•  Can fine-tune message consumption with selectors
•  Can route a message based on content


Message Channels = JMS Destinations

(Diagram: a Producer sends to Destinations WIDGET and ORDER on the Broker; multiple Consumers receive from each Destination.)


Message Routing: Selectors

(Diagram: a Producer sends to a Destination; one Consumer selects Color = red, another selects Color = blue.)
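In JMS, a selector is attached when the consumer is created, e.g. session.createConsumer(destination, "Color = 'red'"). The dispatch effect can be sketched in plain Java – the class and method names below are illustrative, not the JMS API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of selector-based dispatch: each consumer registers a predicate
// over message properties, mimicking a JMS selector such as "Color = 'red'".
public class SelectorSketch {
    static Predicate<Map<String, String>> selector(String key, String value) {
        return props -> value.equals(props.get(key));
    }

    // The broker dispatches to a consumer whose selector matches; a message
    // matching no selector stays on the destination (null here).
    static String dispatch(Map<String, String> props,
                           Map<String, Predicate<Map<String, String>>> consumers) {
        for (Map.Entry<String, Predicate<Map<String, String>>> e : consumers.entrySet()) {
            if (e.getValue().test(props)) {
                return e.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Predicate<Map<String, String>>> consumers = new LinkedHashMap<>();
        consumers.put("redConsumer", selector("Color", "red"));
        consumers.put("blueConsumer", selector("Color", "blue"));
        System.out.println(dispatch(Map.of("Color", "red"), consumers));  // redConsumer
        System.out.println(dispatch(Map.of("Color", "blue"), consumers)); // blueConsumer
    }
}
```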


Message Routing: Destination Wildcards

•  * matches a single segment of a destination name
•  > matches a sub-tree

(Diagram: a Producer publishes to Topic:BAR.BEER.WHITE and Topic:BAR.WINE.WHITE; Consumers subscribe to Topic:BAR.WINE.WHITE, Topic:BAR.BEER.WHITE, Topic:BAR.*.WHITE and Topic:BAR.>.)
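The matching semantics above (segments separated by dots, * for one segment, > for the remaining sub-tree) can be sketched as follows; this is a simplified model of ActiveMQ's matcher, not its actual implementation:

```java
// Sketch of ActiveMQ-style destination wildcard matching:
// '*' matches exactly one dot-separated segment, '>' matches the rest of the tree.
public class WildcardMatch {
    static boolean matches(String pattern, String topic) {
        String[] p = pattern.split("\\.");
        String[] t = topic.split("\\.");
        for (int i = 0; i < p.length; i++) {
            if (p[i].equals(">")) {
                return true;                 // '>' swallows the remaining sub-tree
            }
            if (i >= t.length) {
                return false;                // pattern is longer than the topic
            }
            if (!p[i].equals("*") && !p[i].equals(t[i])) {
                return false;                // literal segment mismatch
            }
        }
        return p.length == t.length;         // no trailing topic segments left over
    }

    public static void main(String[] args) {
        System.out.println(matches("BAR.*.WHITE", "BAR.BEER.WHITE")); // true
        System.out.println(matches("BAR.>", "BAR.WINE.WHITE"));       // true
        System.out.println(matches("BAR.*.WHITE", "BAR.WINE.RED"));   // false
    }
}
```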


Message Groups

(Diagram: a Producer sends to Queue:Prices, shared by three Consumers.)

Message message = session.createTextMessage("hi JBCNConf");
message.setStringProperty("JMSXGroupID", "JavaRocks");
producer.send(message);
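All messages carrying the same JMSXGroupID are pinned to a single consumer, preserving per-group ordering while still spreading different groups across the pool. A minimal sketch of that assignment logic – the class and consumer names are illustrative, not ActiveMQ internals:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of JMSXGroupID pinning: the first message of a group binds the group
// to a consumer; every later message with the same group id goes to that consumer.
public class GroupAssignment {
    private final List<String> consumers;
    private final Map<String, String> groupOwner = new HashMap<>();
    private int next = 0;

    GroupAssignment(List<String> consumers) {
        this.consumers = consumers;
    }

    String consumerFor(String groupId) {
        // Round-robin a consumer for an unseen group, then stay sticky.
        return groupOwner.computeIfAbsent(groupId,
            g -> consumers.get(next++ % consumers.size()));
    }

    public static void main(String[] args) {
        GroupAssignment ga = new GroupAssignment(List.of("c1", "c2", "c3"));
        System.out.println(ga.consumerFor("JavaRocks"));  // c1
        System.out.println(ga.consumerFor("OtherGroup")); // c2
        System.out.println(ga.consumerFor("JavaRocks"));  // still c1
    }
}
```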


High Availability clustering:

Only one broker (the master) is active at any one time.

(Diagram: the Client 1. sends to the Master, which 2. stores the message and 3. returns a receipt; two Slaves stand by.)


Network of Brokers:

Highly reliable, store and forward.

(Diagram: 1. the Producer sends to Broker1; 2. Broker1 stores; 3. receipt to the Producer; 4. network send to Broker2; 5. Broker2 stores; 6. network receipt; 7. Broker1 deletes. Broker2 then dispatches to the Consumer, which acknowledges, and Broker2 deletes.)


Networks of Brokers: some limitations

•  Performance – messages are always stored by the broker to disk before forwarding to another broker – this can increase latency by at least a factor of 10

•  Networks are chatty – routing information is passed from broker to broker every time a consumer starts or stops

•  Networks are a bottleneck – networks are implemented as one TCP/IP connection – this is a natural choke point for high throughput scenarios

•  Messages can be orphaned – If a broker goes down, locally stored messages won’t get delivered until the broker is recovered


Networks of Brokers: Worst bit …

(Diagram: Producer → Broker1 → Broker2 → Consumer.)

This is configured differently to …


Networks of Brokers: Worst bit …

(Diagram: the same network with a third broker, Broker3, added.)

… to this.


ActiveMQ is static:


ActiveMQ Key Features

(Diagram: the A-MQ broker combines fast storage (KahaDB/LevelDB), embedded Camel, and connections over OpenWire, STOMP, MQTT, AMQP and WebSockets – lots and lots of these. Scaling connections: a problem.)


To Scale A-MQ needs to focus …

Remove all the cruft – allow ActiveMQ to focus on message routing …

(Diagram: A-MQ slimmed down to its fast storage core.)


Messaging for cloud based deployments

•  Needs to support many thousands of clients
•  Flexible – brokers need to be spun up and down based on demand
•  Client connections may need to be multiplexed, to decrease the load on individual message brokers
•  Support for popular messaging protocols
•  Flexible routing needs to be supported


Agenda

•  Introduction to Fabric8
•  Enterprise messaging
•  Fabric8MQ
•  Demo


Fabric8 MQ is built on Vert.x

•  Lightweight, multi-reactor application platform
•  Inspired by Erlang/OTP
•  Polyglot
•  High performance
•  Asynchronous, non-blocking – high level of concurrency


Vert.x is just fast …


Fabric8 MQ Message Flow:

•  Protocol Conversion
•  Camel Routing
•  API Management
•  Multiplexer
•  Destination Sharding
•  Broker Control


Fabric8 MQ Protocol Conversion

•  We convert everything to ActiveMQ OpenWire. This saves ActiveMQ doing the conversion, allowing it to spend all its resources pushing messages in and out of queues
•  Protocols currently supported: OpenWire, STOMP, MQTT, AMQP 1.0


Camel Routing:

Camel is embedded to allow flexible routing:

(Diagram: messages arriving on a channel are routed from Topic:Foo to Queue:Foo.)


API Management

•  Fabric8 uses APIMan for its API Management – see http://apiman.io
•  Utilized within Fabric8 to provide:
•  A central, common place for managing all APIs (including message queues)
•  Control and access policies
•  Rate limiting
•  Metrics and billing


Multiplexing

•  ActiveMQ performs better the lower the number of connections – less contention
•  IoT applications have long-lived connections, but small amounts of traffic
•  All configurable – you decide how fine- or coarse-grained you want the multiplexing
•  Fabric8MQ scales multiplexers up and down as required


Destination Sharding: Benefits

•  There is an overhead associated with a Destination – restricting the number of Destinations per broker improves performance
•  Co-locating the producers and consumers of a Destination on the same broker reduces latency and improves overall performance
•  Increased scalability for your messaging solution
•  This is done automatically by Fabric8 MQ


Destination Sharding: Fabric8 MQ Internal Model

•  Discover and track all allocated brokers
•  Fabric8 MQ regularly updates its internal model by examining the internals of running brokers via Jolokia, allowing multiple Fabric8MQ instances to run (co-operatively scalable)
•  Scales brokers up and down – moving destinations and clients between brokers
•  Can dynamically set limits on the number of brokers, and per-broker limits

(Diagram: the model records, per broker, its connection count and, per queue – Queue://foo.1–foo.3 on Broker1, Queue://bar.1–bar.3 on Broker2 – the depth, consumers and producers.)
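The internal model described above can be sketched as a couple of value types. All class and field names here are illustrative, not Fabric8MQ's actual code, and the load calculation is deliberately simplified:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the sharding model: per-broker connection counts plus
// per-destination depth/consumer/producer statistics gathered via Jolokia.
public class BrokerModel {
    static class DestinationStats {
        long depth;          // messages waiting on the queue
        int consumers;
        int producers;
        DestinationStats(long depth, int consumers, int producers) {
            this.depth = depth;
            this.consumers = consumers;
            this.producers = producers;
        }
    }

    static class BrokerView {
        int connectionCount;
        final Map<String, DestinationStats> destinations = new HashMap<>();
    }

    final Map<String, BrokerView> brokers = new HashMap<>();

    // Load here is just destination count against a limit; the real rules
    // would also weigh connections and queue depth.
    double loadOf(String broker, int destinationLimit) {
        return brokers.get(broker).destinations.size() / (double) destinationLimit;
    }

    public static void main(String[] args) {
        BrokerModel model = new BrokerModel();
        BrokerView b1 = new BrokerView();
        b1.connectionCount = 12;
        b1.destinations.put("Queue://foo.1", new DestinationStats(40, 2, 1));
        model.brokers.put("Broker1", b1);
        System.out.println(model.loadOf("Broker1", 5)); // 0.2
    }
}
```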


Destination Sharding: Internal Rules

•  Fired when state changes
•  Operate over the internal model
•  Rules fired in priority order
•  Only one rule fired per state change

Scale Up: connection limits exceeded OR destination limits exceeded, AND no spare capacity → request a new broker.

Distribute Load: broker limits exceeded AND spare capacity → ask to move clients and destinations from the most loaded broker to the least loaded.

Scale Down: at least one broker AND total load across all brokers less than 50% → distribute clients and destinations off the least loaded broker and remove it.

Broker Control is done via Kubernetes.
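A priority-ordered, one-rule-per-state-change engine like the one described can be sketched as follows. The rule names match the slide; the state fields and thresholds are illustrative, not Fabric8MQ's actual implementation:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the sharding rules engine: rules are checked in priority order
// on every state change, and only the first matching rule fires.
public class RulesEngine {
    record State(boolean limitsExceeded, boolean spareCapacity,
                 int brokerCount, double totalLoad) {}

    record Rule(String name, Predicate<State> condition) {}

    static final List<Rule> RULES = List.of(
        // Scale up: limits exceeded and nowhere to move load to.
        new Rule("ScaleUp", s -> s.limitsExceeded() && !s.spareCapacity()),
        // Distribute load: limits exceeded but another broker has room.
        new Rule("DistributeLoad", s -> s.limitsExceeded() && s.spareCapacity()),
        // Scale down: brokers exist and the whole cluster is under 50% loaded.
        new Rule("ScaleDown", s -> s.brokerCount() >= 1 && s.totalLoad() < 0.5));

    static String fire(State s) {
        for (Rule r : RULES) {
            if (r.condition().test(s)) {
                return r.name(); // only one rule fires per state change
            }
        }
        return "NoOp";
    }

    public static void main(String[] args) {
        System.out.println(fire(new State(true, false, 2, 0.9)));  // ScaleUp
        System.out.println(fire(new State(true, true, 2, 0.9)));   // DistributeLoad
        System.out.println(fire(new State(false, false, 2, 0.3))); // ScaleDown
    }
}
```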


ActiveMQ hates unexpected Acks…


Fabric8MQ has to:
1.  Work out which destinations to migrate
2.  Stop dispatching to consumers
3.  Wait until all the in-flight messages are acked
4.  Do the migration
5.  Update the internal destination map
6.  Then resume processing
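Those six steps can be sketched as a small sequence. The method and field names are illustrative; a real implementation would block on broker acknowledgement callbacks rather than draining a counter:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the destination-migration sequence: pause dispatch, drain
// in-flight messages, move the destination, update the map, resume.
public class DestinationMigration {
    final Map<String, String> destinationMap = new HashMap<>(); // destination -> broker
    int inFlight;                 // messages dispatched but not yet acked
    boolean dispatching = true;
    final List<String> log = new ArrayList<>();

    void migrate(String destination, String toBroker) {
        dispatching = false;                        // 2. stop dispatching to consumers
        log.add("paused");
        while (inFlight > 0) {                      // 3. wait for in-flight acks
            inFlight--;                             //    (stand-in for real ack callbacks)
        }
        log.add("drained");
        destinationMap.put(destination, toBroker);  // 4+5. migrate, update the map
        dispatching = true;                         // 6. resume processing
        log.add("resumed");
    }

    public static void main(String[] args) {
        DestinationMigration m = new DestinationMigration();
        m.destinationMap.put("Queue://foo.1", "Broker1");
        m.inFlight = 3;
        m.migrate("Queue://foo.1", "Broker2");      // 1. destination chosen by the rules
        System.out.println(m.destinationMap.get("Queue://foo.1")); // Broker2
        System.out.println(m.log);                  // [paused, drained, resumed]
    }
}
```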


Kubernetes

•  Created by Google from its experience with GCE
•  Lightweight, simple and accessible orchestration system for Docker containers
•  Schedules containers onto nodes in a compute cluster and actively manages workloads to ensure their state is “correct”
•  Provides easy management and discovery

(Diagram: the Kubernetes master schedules user containers, packed dynamically onto nodes.)


Kubernetes Model

•  Pods – collections of one or more Docker containers – the minimum deployment unit in Kubernetes
•  Replication Controllers – stored on the Kubernetes master, they determine how many replicas of a Pod should be running
•  Services – abstractions that define a logical set of Pods, and a way of exposing their ports on a unique IP address
•  Namespaces – Pods, Services, Replication Controllers and Label Selectors all exist within a defined Namespace. This allows multiple environments to be supported on the same compute cloud – i.e. multitenancy

(Diagram: a Replication Controller on the Kubernetes master maintains Pods – each containing one or more containers – across Nodes, exposed via the Fabric8-MQ-Service.)


Fabric8 MQ Broker Control:

•  Fabric8 MQ controls a headless Replication Controller to spin up and down ActiveMQ Brokers

•  Monitors the state of running ActiveMQ brokers via Jolokia – and fires changes to the internal rules engine.

•  Communicates with the Replication Controller to spin up – or delete Pods containing ActiveMQ Brokers

(Diagram: the AMQ Replication Controller manages three Nodes, each running a Pod containing an ActiveMQ broker.)


Fabric8 MQ – not a message broker – but a scalable messaging system

(Diagram: many concurrent connections in, one out. The Vert.x core handles OpenWire, STOMP, MQTT, AMQP and WebSockets asynchronously, with embedded Camel, APIMan integration and destination sharding. Scaling connections: NOT a problem.)


Fabric8 MQ Independently scalable:

(Diagram: the Fabric8MQ Replication Controller scales Fabric8MQ instances – each running Vert.x-based multiplexers – independently of the AMQ Replication Controller, which scales the Pods containing ActiveMQ brokers.)



What I’m gonna try and attempt to do ...

(Diagram: many MQTT devices connecting to Fabric8MQ.)


Destination Sharding: Internal Rules

Limits are set low for the demo: 5 Destinations per Broker. (The scale up / scale down / distribute load rules are as shown earlier; Broker Control is done via Kubernetes.)


Client code

final String host = System.getenv("FABRIC8MQ_SERVICE_HOST");

final String portStr = System.getenv("FABRIC8MQ_SERVICE_PORT");

Kubernetes injects services as environment variables.
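A fuller version of that lookup might look as follows. Only the two environment variable names come from the slide; the fallback defaults and the tcp:// URL shape are illustrative assumptions for running outside the cluster:

```java
// Sketch: resolve the Fabric8MQ service endpoint injected by Kubernetes
// and build a broker URL from it. The localhost/61616 defaults are
// illustrative fallbacks, not part of the original client code.
public class ServiceLookup {
    static String brokerUrl(String host, String portStr) {
        if (host == null || host.isEmpty()) {
            host = "localhost";                       // assumed fallback
        }
        int port = (portStr == null || portStr.isEmpty())
                ? 61616                               // assumed default port
                : Integer.parseInt(portStr);
        return "tcp://" + host + ":" + port;
    }

    public static void main(String[] args) {
        final String host = System.getenv("FABRIC8MQ_SERVICE_HOST");
        final String portStr = System.getenv("FABRIC8MQ_SERVICE_PORT");
        System.out.println(brokerUrl(host, portStr));
    }
}
```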


Questions and Discussion