Azure and Cloud Design Patterns

Post on 15-Jan-2015


DESCRIPTION

Introduction to cloud design patterns and Azure services.

TRANSCRIPT

Cloud Computing

Agenda

Design Considerations in moving to Cloud

Introduction to Windows Azure

Demo

Business Benefits of Cloud Computing

Almost zero upfront infrastructure investment.

Just-in-time infrastructure.

More efficient resource utilization.

Usage based costing.

Reduced time to Market.


Challenges Faced by Apps in the Cloud

Application Scalability

Cloud promises rapid (de)provisioning of resources.

How do you tap into that to create scalable systems?

Application Availability

Underlying resource failures happen – usually more frequently than in traditional data centers.

How do you overcome that to create highly available systems?

The Scalability Challenge

Two different components to scale:

State (inputs, data store, output)

Behavior (business logic)

Any non-trivial application has both.

Scaling one component means scaling the other, too.

Scalability Considerations

Performance vs Scalability

Latency vs Throughput

Availability vs Consistency

How do you manage overload?

Scalable Service

A scalable architecture is critical to take advantage of scalable infrastructure.

Characteristics of a Scalable Service:

Increasing resources results in a proportional increase in performance

A scalable service is capable of handling heterogeneity

A scalable service is operationally efficient

A scalable service is resilient

A scalable service becomes more cost-effective as it grows.

1. Design for Failure

and nothing will really fail

Avoid single points of failure

Assume everything fails and design backwards.

Applications should continue to function even if the underlying hardware fails, or is removed or replaced.

Design for Failure (contd.)

Unit of failure is a single host

Where possible, choose services and infrastructure that assume host failures happen.

By building simple services composed of a single host, rather than multiple dependent hosts, one can create replicated service instances that can survive host failures.

Make your services small and stateless.

Relax consistency requirements.
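Designing for host failure usually shows up in code as retries against replicated service instances. A minimal sketch (the operation and failure pattern are invented for illustration) of retrying with exponential backoff and jitter:

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry a flaky operation, backing off exponentially between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # jittered backoff avoids a thundering herd on recovery
            time.sleep(base_delay * (2 ** attempt) * random.random())

# A host that fails twice, then recovers -- the caller never notices.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("host unavailable")
    return "ok"

print(call_with_retries(flaky))  # -> ok
```

Because the service is small and stateless, any replica can serve the retried request.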

2. Build Loosely Coupled Systems

The looser they’re coupled, the bigger they scale.

Loosely Coupled Dependencies

Avoid complex design and interactions.

Best Practices:

Tiered Architecture

Scale out units

Single role
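The classic loose-coupling mechanism is a queue between tiers: producer and consumer share only the queue, never each other's addresses or speeds. A single-process sketch using Python's standard library (real systems would use a durable queue service instead of `queue.Queue`):

```python
import queue
import threading

# The two roles communicate only through this queue.
work = queue.Queue()
results = []

def consumer():
    while True:
        item = work.get()
        if item is None:          # sentinel: shut down cleanly
            break
        results.append(item * 2)  # stands in for real processing
        work.task_done()

t = threading.Thread(target=consumer)
t.start()
for n in (1, 2, 3):
    work.put(n)                   # producer: fire and forget
work.put(None)
t.join()
print(results)  # -> [2, 4, 6]
```

Either side can be scaled out, restarted, or replaced without the other noticing, which is exactly the property the slide is after.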

3. Implement Elasticity

Elasticity is a fundamental property of the cloud.

Ability to add and remove capacity as and when it is required.

Use Elastic Load Balancing.

Use Auto-Scaling (free)
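Auto-scaling is typically driven by a target-tracking rule: size the fleet so a metric (here, average CPU) lands near a target. A toy sketch of that rule (the target and bounds are invented defaults, not any provider's):

```python
def desired_instances(current, cpu_utilization, target=0.6,
                      minimum=2, maximum=10):
    """Pick a fleet size that would bring average CPU near the target,
    clamped to a sane [minimum, maximum] range."""
    if cpu_utilization <= 0:
        return minimum
    ideal = round(current * cpu_utilization / target)
    return max(minimum, min(maximum, ideal))

print(desired_instances(4, 0.9))  # overloaded -> scale out to 6
print(desired_instances(6, 0.2))  # idle       -> scale in to 2
```

The same rule adds capacity under load and removes it when idle, which is the "add and remove capacity as and when required" behavior above.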

4. Build Security in Every Layer

Design with security in mind.

Encrypt data at rest.

Encrypt data in transit (SSL/TLS).

Consider encrypted file system for sensitive data.

Rotate your credentials; pass in arguments encrypted.

Use multi-factor authentication.

Restrict external access to specific IP ranges.
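Restricting access to IP ranges is usually done at the firewall or load balancer, but the check itself is simple to express. A stdlib sketch (the CIDR ranges are made-up documentation addresses, not real policy):

```python
import ipaddress

# Hypothetical allow-list: only these CIDR ranges may reach the service.
ALLOWED = [ipaddress.ip_network(cidr)
           for cidr in ("10.0.0.0/8", "203.0.113.0/24")]

def is_allowed(client_ip: str) -> bool:
    """True if the client address falls inside any allowed range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)

print(is_allowed("10.1.2.3"))      # -> True
print(is_allowed("198.51.100.7"))  # -> False
```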

5. Don’t Fear Constraints

Re-think architectural constraints.

Need more RAM? Distribute load across machines; use a shared distributed cache.

Need better database IOPS? Add read-only replicas or shard the data.

Need better performance? Cache at different levels.
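The simplest level of caching is in-process memoization in front of a slow backend. A sketch using Python's standard `functools.lru_cache` (the "product" lookup is an invented stand-in for a database read):

```python
from functools import lru_cache

calls = {"n": 0}  # counts how often the slow backend is actually hit

@lru_cache(maxsize=1024)
def fetch_product(product_id):
    calls["n"] += 1  # stands in for a slow database query
    return {"id": product_id, "name": f"product-{product_id}"}

fetch_product(42)
fetch_product(42)   # served from cache: no second "query"
print(calls["n"])   # -> 1
```

The same idea scales up a level at a time: shared distributed cache, HTTP caching, reverse proxy, CDN.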

6. Think Parallel

Serial and sequential are now history.

Experiment with different architectures in parallel.

Multi-threading and Concurrent requests to cloud services.

Run parallel Map Reduce Jobs

Use Elastic Load Balancing to distribute load across multiple servers.

Decompose each job into its simplest form, with a “shared nothing” design.
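Issuing concurrent requests to cloud services is the easiest parallelism win: I/O waits overlap instead of accumulating. A sketch with a thread pool (the URLs and the sleep-based "fetch" are placeholders for real network calls):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    time.sleep(0.1)               # stands in for network latency
    return f"body of {url}"

urls = [f"https://service.example/{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    bodies = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start

# Eight 0.1 s "requests" overlap, instead of ~0.8 s run serially.
print(len(bodies))
```

Each request is independent ("shared nothing"), so nothing needs locking and the worker count can grow with the load.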

7. Leverage Many Storage Options

One size does not fit all.

Amazon S3 / Azure Blob – Large Static Objects

Amazon Simple DB / Azure Tables – Data indexing and Querying

Amazon RDS / SQL Azure – managed RDBMS service (MySQL on RDS, SQL Server on SQL Azure)

Amazon Cloud Front / Azure CDN – Content Distribution

Cloud Architecture Lessons

Design for failure and nothing fails.

Loose coupling sets you free.

Implement Elasticity.

Build Security in every layer.

Don’t fear constraints.

Think Parallel.

Leverage many storage options.

Windows Azure

Why Windows Azure?

Azure Overview

Execution Models

Application Scenarios

Data Management

Storage: What are our options?

Blob Storage Concepts

Networking

Business Analytics

Messaging

Caching

Stages of Service Deployment

Packaging & Deployment

Questions

References

http://WindowsAzure.com

MISC SLIDES

App Scalability Patterns for State

Data Grids

Distributed Caching

HTTP Caching

Reverse Proxy

CDN

Concurrency

Message-Passing

Dataflow

Software Transactional Memory

Shared-State

Partitioning

CAP theorem: Data Consistency

Eventually Consistent

Atomic Data

DB Strategies

RDBMS

Denormalization

Sharding

NOSQL

Key-Value store

Document store

Data Structure store

Graph database
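Sharding, the strategy named above, means partitioning data across databases by key. A minimal sketch of hash-based placement (the shard names are invented; a stable hash is used so placement survives process restarts, unlike Python's built-in `hash()`):

```python
import hashlib

SHARDS = ["db-0", "db-1", "db-2", "db-3"]

def shard_for(key: str) -> str:
    """Map a record key deterministically onto one of the shards."""
    digest = hashlib.md5(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

# The same key always routes to the same shard, so reads find
# what writes stored.
assert shard_for("user:1001") == shard_for("user:1001")
```

The trade-off: simple modulo placement reshuffles most keys when the shard count changes, which is why production systems often prefer consistent hashing.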

App Scalability Patterns for Behavior

Compute Grids

Event-Driven Architecture

Messaging

Actors

Enterprise Service Bus

Domain Events

Event Stream Processing

Event Sourcing

Command & Query Responsibility Segregation (CQRS)

Load Balancing

Round-robin

Random

Weighted

Dynamic
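Two of the balancing strategies listed above can be sketched in a few lines each (server addresses and weights are invented for illustration):

```python
import itertools
import random

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: cycle through the pool in fixed order.
rr = itertools.cycle(servers)
print([next(rr) for _ in range(4)])
# -> ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']

# Weighted random: heavier servers receive proportionally more traffic.
weights = [5, 3, 2]
def pick_weighted():
    return random.choices(servers, weights=weights, k=1)[0]
```

Dynamic balancing extends the weighted case by recomputing weights from live metrics such as queue depth or response time.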

Parallel Computing

Master/Worker

Fork/Join

MapReduce

SPMD

Loop Parallelism

The Availability Challenge

Availability: Tolerate failures

Traditional IT focuses on increasing MTTF (Mean Time To Failure).

Cloud IT focuses on reducing MTTR (Mean Time To Recovery).
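The two strategies attack the same formula from different sides: steady-state availability is MTTF / (MTTF + MTTR). A quick sketch showing why shrinking MTTR pays off (the hour figures are illustrative):

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Same failure rate, but recovery drops from 4 hours to 5 minutes:
print(round(availability(1000, 4), 5))       # -> 0.99602
print(round(availability(1000, 5 / 60), 5))  # -> 0.99992
```

Fast, automated recovery buys roughly an extra "nine" here without touching the failure rate at all.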

Data modelling

Classic distributed systems focused on ACID semantics

Atomicity: either the operation (e.g., write) is performed on all replicas or is not performed on any of them.

Consistency: after each operation all replicas reach the same state

Isolation: no operation (e.g., read) can see the data from another operation (e.g., write) in an intermediate state.

Durability: once a write has been successful, that write will persist indefinitely.

Modern Internet Systems – focused on BASE

Basically Available

Soft-state (or scalable)

Eventually consistent

CAP Theorem

A distributed system has three desirable properties – CAP:

Strong Consistency: all clients see the same view, even in the presence of updates.

High Availability: all clients can find some replica of the data, even in the presence of failures

Partition-tolerance: the system properties hold even when the system is partitioned.

As per the CAP theorem, you can only have two of these three properties. The choice of which property to discard determines the nature of your system.
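One common way this trade-off is tuned in replicated stores is with read and write quorums: with N replicas, read quorum R, and write quorum W, choosing R + W > N guarantees every read overlaps the latest write (consistency wins); R + W <= N favors availability instead. A one-function sketch of that rule:

```python
def quorums_overlap(n, r, w):
    """True when any read quorum must intersect any write quorum,
    i.e. a read is guaranteed to see the most recent write."""
    return r + w > n

print(quorums_overlap(3, 2, 2))  # -> True  (reads see the latest write)
print(quorums_overlap(3, 1, 1))  # -> False (reads may return stale data)
```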

Map Reduce

Model for processing large data sets.

Many tasks consist of processing lots of data to produce lots of other data.

Want to use hundreds or thousands of CPUs... but this needs to be easy!

Contains Map and Reduce functions.

Programming model

Input & Output: each a set of key/value pairs

Programmer specifies two functions:

map (in_key, in_value) -> list(out_key, intermediate_value)

Processes input key/value pair

Produces set of intermediate pairs

reduce (out_key, list(intermediate_value)) -> list(out_value)

Combines all intermediate values for a particular key

Produces a set of merged output values (usually just one)

So How Does It Work?

Example

Page 1: the weather is good

Page 2: today is good

Page 3: good weather is good.

Map output

Worker 1:

(the 1), (weather 1), (is 1), (good 1).

Worker 2:

(today 1), (is 1), (good 1).

Worker 3:

(good 1), (weather 1), (is 1), (good 1).

Reduce Input

Worker 1: (the 1)

Worker 2: (is 1), (is 1), (is 1)

Worker 3: (weather 1), (weather 1)

Worker 4: (today 1)

Worker 5: (good 1), (good 1), (good 1), (good 1)

Reduce Output

Worker 1: (the 1)

Worker 2: (is 3)

Worker 3: (weather 2)

Worker 4: (today 1)

Worker 5: (good 4)
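The worked example above can be run end-to-end with a small single-process sketch of the programming model (the three pages come from the slides; punctuation is ignored for simplicity, and the shuffle step that a real framework performs between workers is here just a dictionary grouping):

```python
from collections import defaultdict

def map_fn(page_id, text):
    """map(in_key, in_value) -> list of (out_key, intermediate_value)."""
    return [(word, 1) for word in text.split()]

def reduce_fn(word, counts):
    """reduce(out_key, list of intermediate values) -> merged value."""
    return sum(counts)

pages = {1: "the weather is good",
         2: "today is good",
         3: "good weather is good"}

# Shuffle: group intermediate pairs by key before reducing.
grouped = defaultdict(list)
for page_id, text in pages.items():
    for word, count in map_fn(page_id, text):
        grouped[word].append(count)

result = {word: reduce_fn(word, counts)
          for word, counts in grouped.items()}
print(result)
# -> {'the': 1, 'weather': 2, 'is': 3, 'good': 4, 'today': 1}
```

The outputs match the Reduce Output workers above; the framework's job is to run the map calls, the shuffle, and the reduce calls across many machines instead of one loop.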

Conclusion – Map Reduce

MapReduce has proven to be a useful abstraction

Greatly simplifies large-scale computations

Fun to use: focus on the problem, let the library deal with the messy details.
