
BigMemory Reduces Mainframe Costs
Big Results for a Top Global Reservation System

This white paper documents the deployment of Terracotta’s BigMemory to increase capacity and reduce mainframe use for one of the largest international reservation systems in production today. The deployment cut daily mainframe transactions by 500 million (80 percent of daily load), made response times 50 percent faster, increased capacity 20x and delivered 99.99 percent uptime.

The Challenge

The customer needed to expand capacity to support rapidly growing traffic while simultaneously protecting core business functions, providing additional value-added services and significantly reducing costs.

The existing production system relied on an IBM® System z® mainframe to manage all business-critical transactional data. The mainframe was capable of a maximum of 10,000 transactions per second (TPS), where each transaction translated into a business request (read or write) for a blob of data. The average payload of each request was 50 kilobytes (KB).

Adding more capacity to the mainframe was cost-prohibitive for new initiatives. The customer initiated development of a new middleware architecture that would run on inexpensive commodity hardware and scale independently of the mainframe, yielding a higher return for new initiatives and lowering capital expenditure for the core business.

A major part of the proposed middleware architecture consisted of a common data service layer that would store critical business data in ultra-fast machine memory, backed by the mainframe as the system of record.

TABLE OF CONTENTS

The Challenge
Customer Requirements
Initial Architecture
Solution Architecture with Terracotta BigMemory
BigMemory’s In-Memory Data Management Layer
Terracotta Server Array
Conclusion

BUSINESS WHITE PAPER

Get There Faster


Customer Requirements

Scalability: The service must scale to meet business growth requirements while keeping operational and development costs to a minimum.

Availability: The service must meet the cross-enterprise Service Level Agreement (SLA) of 99.99 percent uptime.

Performance: The service must match the transactional capacity of the mainframe.

Operations: The service should provide a rich monitoring and management tool set.

Initial Architecture

The architecture prior to the introduction of the Terracotta BigMemory data layer consisted of multiple application clusters connected to a back-end mainframe via MQSeries® for TPF (Figure 1).

Solution Architecture with Terracotta BigMemory

The solution architecture used Terracotta BigMemory to replace the mainframe for more than 99 percent of read and write transactions (Figure 2). The data access layer was re-implemented as a scalable in-memory service behind a message queue. The in-memory service is available enterprise-wide, providing a common, scalable means to offload mainframe usage with predictable performance and latency.

Data lookups are read from the in-memory store, faulting to the mainframe only on a cache miss. Data updates are written directly to the in-memory store and written asynchronously to the mainframe via a durable write-behind queue.
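To make the read and write paths concrete, here is a minimal Java sketch of the cache-aside lookup and asynchronous write-back described above. The MainframeGateway interface and the in-memory queue are illustrative stand-ins: the paper does not publish the customer’s code, and the production system uses a durable queue and the MQ-for-TPF bridge rather than the simplifications shown here.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

// Illustrative only: MainframeGateway stands in for the MQ-for-TPF bridge
// described in the paper; it is not a published API.
interface MainframeGateway {
    byte[] read(String key);              // synchronous read from the system of record
    void write(String key, byte[] blob);  // write-back to the system of record
}

public class DataServiceSketch {
    private final Cache cache;                 // in-memory store (Ehcache/BigMemory)
    private final MainframeGateway mainframe;  // system of record
    // Simplification: the production system uses a durable queue, not an in-memory one.
    private final BlockingQueue<Element> writeBehind = new LinkedBlockingQueue<Element>();

    public DataServiceSketch(Cache cache, MainframeGateway mainframe) {
        this.cache = cache;
        this.mainframe = mainframe;
        startWriteBehindWorker();
    }

    // Read path: serve from memory, fault to the mainframe only on a cache miss.
    public byte[] get(String key) {
        Element hit = cache.get(key);
        if (hit != null) {
            return (byte[]) hit.getObjectValue();
        }
        byte[] blob = mainframe.read(key);  // cache miss: fault to the mainframe
        cache.put(new Element(key, blob));  // populate for subsequent reads
        return blob;
    }

    // Write path: update memory immediately, persist to the mainframe asynchronously.
    public void put(String key, byte[] blob) {
        cache.put(new Element(key, blob));
        writeBehind.offer(new Element(key, blob));  // queued for asynchronous write-back
    }

    private void startWriteBehindWorker() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Element pending = writeBehind.take();
                        mainframe.write((String) pending.getObjectKey(),
                                        (byte[]) pending.getObjectValue());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }, "write-behind-worker");
        worker.setDaemon(true);
        worker.start();
    }
}
```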


Figure 1: Initial architecture without Terracotta’s distributed cache. Three application clusters connect directly to the IBM System z mainframe via MQ for TPF: a major travel website (16 application servers, 3,500 TPS), a travel agent network (hundreds of application servers, 4,500 TPS) and a web services cluster (12 application servers, 1,000 TPS).



The customer’s 500-millisecond SLA requires that cache lookups happen very fast. To minimize latency, the in-memory service uses a layered caching strategy that keeps hot data in memory as close to upstream applications as possible.

The top layer (“L1 Cache Layer” in Figure 3) is a scalable cluster of Java® processes on commodity hardware that implements the cache service’s message-oriented get/put API. The L1 cache layer is backed by a scalable and highly available Terracotta server array (“L2 Cache Layer” in Figure 3) that also runs on commodity hardware.
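A rough sketch of what the message-oriented get/put API might look like on an L1 node follows, reusing the DataServiceSketch from the earlier example. The JMS listener shape, the message field names and the ReplySender helper are assumptions for illustration; the paper does not specify the customer’s wire format.

```java
import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;

// Illustrative L1 request handler: translates get/put messages from the MOM/MQ
// interface into calls on the in-memory data service. The field names ("op",
// "key", "payload") and ReplySender are assumptions, not the actual wire format.
public class CacheRequestListener implements MessageListener {
    private final DataServiceSketch dataService;  // from the earlier sketch
    private final ReplySender replySender;        // hypothetical helper that answers on the reply queue

    public CacheRequestListener(DataServiceSketch dataService, ReplySender replySender) {
        this.dataService = dataService;
        this.replySender = replySender;
    }

    public void onMessage(Message message) {
        try {
            MapMessage request = (MapMessage) message;
            String op = request.getString("op");
            String key = request.getString("key");
            if ("get".equals(op)) {
                replySender.reply(message, dataService.get(key));   // lookup, faulting to mainframe on miss
            } else if ("put".equals(op)) {
                dataService.put(key, request.getBytes("payload"));  // update memory, write-behind to mainframe
            }
        } catch (JMSException e) {
            throw new RuntimeException("Malformed cache request", e);
        }
    }
}

// Hypothetical reply abstraction so the sketch stays self-contained.
interface ReplySender {
    void reply(Message request, byte[] payload);
}
```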

BigMemory’s In-Memory Data Management Layer

Each L1 node uses the Ehcache library to address cached data. The Ehcache library transparently keeps a hot set of cache data in memory for low-latency access. For operations on a cache element not already in memory, Ehcache automatically requests that cache entry from the Terracotta server array.
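The sketch below shows how an L1 node might be wired to the server array through Ehcache’s programmatic configuration, assuming the Ehcache 2.x fluent API; the server URL, cache name and hot-set size are placeholders rather than values from this deployment.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.TerracottaClientConfiguration;
import net.sf.ehcache.config.TerracottaConfiguration;

public class L1CacheBootstrap {
    public static Cache createClusteredCache() {
        // Point the Ehcache client (L1) at the Terracotta server array (L2).
        // The host:port, cache name and hot-set size are placeholders.
        Configuration configuration = new Configuration()
            .terracotta(new TerracottaClientConfiguration().url("terracotta-host:9510"))
            .cache(new CacheConfiguration("reservationData", 100000)  // local on-heap hot set, in entries
                .terracotta(new TerracottaConfiguration()));          // back this cache with the server array

        CacheManager cacheManager = CacheManager.newInstance(configuration);
        return cacheManager.getCache("reservationData");
    }
}
```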

The L1 layer is fault tolerant and highly available. Should an L1 node fail, its unanswered cache requests will be handled by another L1 node. All in-memory data is backed by BigMemory’s Terracotta server array, which is fault tolerant and highly available. The L1 layer is also independently scalable as L1 nodes may be added to meet increasing service load.

Figure 2: Solution architecture with a scalable cache service using Terracotta BigMemory. The same application clusters (the major travel website with 16 application servers at 3,500 TPS, the travel agent network with hundreds of application servers at 4,500 TPS, and the web services cluster with 12 application servers at 1,000 TPS) now call a Data Service API over MOM/MQ. The data service answers lookups from memory, faulting to the IBM System z mainframe over MQ for TPF only on a cache miss, and forwards updates to the mainframe through a durable write-behind queue.

Figure 3: Detail of BigMemory’s service architecture. In the L1 cache layer, Java applications running BigMemory on commodity app servers connect over TCP to the L2 cache layer, the Terracotta server array. Each stripe of the array pairs an active server with a mirror server on commodity hardware, and the array scales out by adding stripes. The array provides durability, mirroring and striping, along with a developer console, plug-in monitoring, an operations center and a MOM/MQ interface.



Terracotta Server Array

The Terracotta server array (L2) is an array of Java server processes on commodity hardware that provides durability, mirroring, striping and scalability to the in-memory service. Like the L1 layer, each L2 node maintains an in-memory hot set of data for low-latency access, with a disk-backed store for durability and access to very large data sets.

The L2 in-memory service uses BigMemory to provide an in-process, off-heap data store that is not subject to Java garbage collection. This allows each L2 node to store hundreds of gigabytes of data in memory on a single Java Virtual Machine (JVM®) without the long garbage-collection pauses that would violate the customer’s SLA. By keeping hundreds of gigabytes in memory on a single server, BigMemory also consolidates the hardware footprint of the in-memory service.
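As an illustration of the off-heap approach, the following sketch sizes a cache with BigMemory’s off-heap store through the Ehcache 2.x (BigMemory) configuration API. The paper applies this on the L2 server nodes; the cache name and sizes here are placeholders, not the customer’s settings.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.MemoryUnit;

public class OffHeapCacheBootstrap {
    public static Cache createOffHeapCache() {
        // Keep a small hot set on the Java heap and the bulk of the data in
        // BigMemory's off-heap store, outside the reach of the garbage collector.
        // Sizes are placeholders for illustration only.
        Configuration configuration = new Configuration()
            .cache(new CacheConfiguration("reservationData", 10000)  // on-heap hot set, in entries
                .overflowToOffHeap(true)
                .maxBytesLocalOffHeap(64, MemoryUnit.GIGABYTES));    // off-heap BigMemory store

        CacheManager cacheManager = CacheManager.newInstance(configuration);
        // Note: the JVM must also be started with -XX:MaxDirectMemorySize at
        // least as large as the off-heap allocation.
        return cacheManager.getCache("reservationData");
    }
}
```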

BigMemory is highly available and independently scalable by virtue of its striping and mirroring characteristics. Two (or more) mirrored L2 nodes constitute a “stripe” in the BigMemory server array. Each stripe is fault tolerant and highly available: if any mirrored L2 node within a stripe goes offline, its service load automatically fails over to another mirror node in that stripe. The L2 layer is independently scalable, as L2 stripes may be added to meet increasing service load.

Conclusion

After extensive and rigorous testing to ensure it would meet the customer’s stringent performance and reliability requirements, Terracotta BigMemory was deployed into production on customer-facing applications. Once it had proven its performance and stability in a limited production environment, the customer rolled BigMemory out across a wide range of customer applications, offloading 80 percent of requests from the mainframe and yielding cost savings of millions of dollars per year. The metrics below tell the before-and-after story.

Metric                            Initial Architecture                                Solution Architecture with Terracotta
Throughput                        ~10K TPS                                            >12K TPS
Uptime                            99.99%                                              99.99%
SLA                               3 seconds                                           500 ms
SLA adherence                     99.98%                                              99.999%
Infrastructure                    IBM System z mainframe with per-transaction cost    6 commodity blades
Mainframe transactions per day    >500MM                                              <1,000

ABOUT SOFTWARE AG

Software AG helps organizations achieve their business objectives faster. The company’s big data, integration and business process technologies enable customers to drive operational efficiency, modernize their systems and optimize processes for smarter decisions and better service. Building on over 40 years of customer-centric innovation, the company is ranked as a “leader” in 15 market categories, fueled by core product families Adabas-Natural, Alfabet, Apama, ARIS, Terracotta and webMethods. Learn more at www.SoftwareAG.com.

© 2014 Software AG. All rights reserved. Software AG and all Software AG products are either trademarks or registered trademarks of Software AG. Other product and company names mentioned herein may be the trademarks of their respective owners.

SAG_Terracotta_BigMemory_Reduces_Mainframe_Costs_4PG_WP_Jan14