DESCRIPTION

CCW is the UK's first digital publication totally dedicated to the subject of cloud computing. CCW reaches an audience of over 15,000 individual subscribers on a bi-monthly basis, delivering them up-to-date information on this fast-paced subject and enabling them to use the processing power of the cloud and its unlimited opportunities for collaboration to enhance and grow their businesses.

TRANSCRIPT

Page 1: Cloud Computing World Vol 1 Iss 1-Aug 2014

Why planning should be central to your cloud adoption process

Journey to the cloud: challenges posed by security

Will Linux cause problems with load balancers?

Understanding cloud load balancing

The cloud: OpenStack builds momentum

CLOUD COMPUTING WORLD - Issue 1, August 2014

Launch Partners

Page 2: Cloud Computing World Vol 1 Iss 1-Aug 2014

ARE YOU ON CLOUD NINE?

One major cloud computing company is, after we saved them more than £19m. We showed our client there was a better solution for their data centre needs and, after two well-thought-out acquisitions, they saved big. Could you too unlock savings from your critical environment? Speak to our Data Centre Solutions team today.

Visit our website to find out more here, or challenge us on the spot by calling +44 20 7182 3529.

Page 3: Cloud Computing World Vol 1 Iss 1-Aug 2014


CONTENTS

6 CCW News

All the key news in the world of cloud.

8 Understanding the need to reduce data centre PUE levels Power issues in today’s data centres

12 How CRM changed cloud and cloud changed CRM Moving business applications into the cloud

16 Customer-defined data centres Redefining cloud service delivery

18 Removing the risk for data centre and enterprise IT WhiteSpider develops a cloud solution for Parsons Brinckerhoff

22 Cloud: the 60-year-old hot topic Giving data centres a new perspective

24 A well-balanced hybrid Cloud Load balancing for a more robust cloud environment

26 OpenStack Builds Momentum Understanding data centre software

30 Cloud Computing in an On-Demand World Why planning is essential when it comes to the cloud

32 Journey to the cloud: challenges posed by security How the cloud brings challenges, as well as benefits

34 Why planning should be central to your cloud adoption process Breaking down the planning process into more manageable steps

36 Security questions to ask your cloud provider Reducing security risk with due diligence

38 Understanding cloud disaster recovery services How the cloud can make your IT systems more robust

40 Taking your first steps into the cloud Strategies for adopting the cloud

44 Will Linux cause problems with load balancers? How next-gen Linux containers could cause problems

46 Using OpenStack in an all-IP environment Deutsche Telekom taps into the cloud

26 St Thomas Place, Cambridge Business Park, CB7 4EX

Tel: +44 (0)1353 644081 [email protected] www.cloudcomputingworld.co.uk

LGN Media, a subsidiary of The Lead Generation Network Ltd. Publisher & Managing Director: Ian Titchener. Editor: Steve Gold. Production Manager: Rachel Titchener. Advertising Sales: Bob Handley. Reprographics by Bold Creative. The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents.

All trademarks and brand names are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur.

This publication is protected by copyright © 2013 and accordingly must not be reproduced in any medium. All rights reserved.

Cloud Computing World stories, news, know-how? Please submit to [email protected]

CLOUD COMPUTING WORLD

Service price differences under the microscope

Audiocast: total remote/cloud security becoming reality says veteran pen tester

Looking towards an open source cloud future - cost cutting without service reduction

Understanding cloud load balancing

The cloud: it's older than you might think

CLOUD COMPUTING WORLD - Issue 1, August 2014

Launch Partners

Page 4: Cloud Computing World Vol 1 Iss 1-Aug 2014


FOREWORD

Hello everyone,

What is the cloud to you?
Welcome to this, the first issue of Cloud Computing World, which I'm hoping will entertain and inform you on the highly topical subject of cloud computing.

Is the cloud new?
It depends on who you speak to, but for me, the concept of the cloud dates all the way back to 1986, when I purchased my first portable mobile phone for the princely sum of £1,825.00. Plus VAT, naturally. That Cellnet handset cost me £25 a month for line rental, before I even made a call. And in the event I was unable to answer the call - for any reason - the call went to voicemail, at a cost of 25 pence per minute.

And where was the voicemail service?
In the cloud - distributed across three of Cellnet's eight mobile switches, with the voicemail promptly replicated between the switches to allow me to dial in from anywhere in the UK where the cellco had coverage, or via the PSTN, where calls were routed to the Slough NOC (Network Operations Centre).

Can you tell I write about this stuff?
But I digress. That was my first experience of cloud services. Today, some 28 years later, I use a wide variety of cloud services on a regular basis, ranging from multiple email service providers, cellular visual voicemail, Dropbox and Gmail, all the way through to 200 gigabytes of business backup storage, with data mirrored across three data centres spread across Europe. And that's before I talk about our Netflix, Roku and Spotify accounts for use in the house and when out and about. This is the modern person's use of the cloud. And it's not just me - you probably recognise many of these services yourself, as you subscribe to and use them on a regular basis.

But we have a long way to go before these cloud services are mature. For starters, what happens when a given CSP goes bust? And what happens if one CSP takes over another - how would the services be merged? Would my business cloud data still be stored in a European data centre, or a US one? And what about the Patriot Act where a US-owned but UK-based cloud service provider is concerned?

It's these types of questions that I'm hoping to answer in Cloud Computing World - I hope you enjoy the new publication.

May all your IT problems be little ones.

Steve Gold
Editor, Cloud Computing World


Page 5: Cloud Computing World Vol 1 Iss 1-Aug 2014

Storage performance up to 30 times faster than leading cloud providers

< 1ms network performance SLA

Secure Network Architecture

Embedded WAN Optimisation

Standard Global Architecture

You only pay for what you use

Test drive our cloud service for free, with a 14-day trial.*

Click Here

* No credit card is required for this 14-day free trial

Page 6: Cloud Computing World Vol 1 Iss 1-Aug 2014


REGULARS

Attix5, the data protection software specialist, has taken the wraps off DynamicRestore, an instant cloud-based disaster recovery platform. The new service is billed as providing users with immediate recoverability in the event of a loss of critical servers and data.

According to Luv Duggal, Attix5's general manager, DynamicRestore is guaranteed to increase the efficiency and delivery of business continuity and disaster recovery.

Lost servers or data, he says, can have dramatic cost implications for businesses when they are not recovered to an operational level in minimal time.

"Even with this in mind, there is still a large segment of the market that is unable to buy expensive recovery solutions because of the high level of investment involved. What we have created is a means of helping small and medium enterprises around the world employ world-class security, at the SME price point - without sacrificing quality for the end-user, or profitability for the service provider," he explained.

CCW notes that DynamicRestore forms part of the new Attix5 Dynamic product, which includes the features of the company's current Attix5 Pro platform, combined with the new DynamicRestore technology.

www.attix5.com

Hibernia Networks has added the Cork Internet eXchange (CIX), the regional data centre for Southwest Ireland, as a new Point of Presence on its network. The PoP allows Hibernia to further expand its high-capacity international services throughout Cork, Munster and the island of Ireland.

Built in 2007 and open for business in March 2008, CIX is a critical piece of communications infrastructure for Cork and Munster.

The facility is responsible for delivering IP connectivity to thousands of businesses and tens of thousands of homes from Kerry to Waterford via large telcos and regional ISPs.

According to Hibernia, CIX connects upstream to an extensive list of fibre providers and has a 30-metre telecoms mast onsite, with a line of sight to Cork City and Cork County.

CIX customers will gain access to Hibernia’s Project Kelvin network. Project Kelvin is an extensive submarine and terrestrial cable deployment that directly connects Northern Ireland to North America and Europe.

The sub sea cable comes ashore at Portrush, Northern Ireland and connects to Hibernia’s terrestrial fibre optic ring consisting of over a dozen Irish towns and cities, providing local and global commerce opportunities between the island of Ireland and the rest of the world.

www.cix.ie

Gridstore, the SDS (Software-Defined Storage) provider for Windows Server and Hyper-V environments, has announced the integration of Gridstore 3 with Microsoft System Centre 2012, a move it says will enable its delivery of the Cloud Data Centre.

According to the firm, the integration with System Centre allows for management of all resources via a single console. This central management, says Gridstore, provides for better overall efficiency and flexibility.

With System Centre Virtual Machine Manager (SCVMM) integration, Gridstore is billed as delivering policy-based provisioning and orchestration of storage resources at VM-level granularity including key characteristics such as Quality of Service and Data Protection Schemes.

System Centre integration will be available by the end of Q3 - the company says that all current and new customers running Gridstore 3 can upgrade with no disruption or hardware change.

www.gridstore.com

Telstra has announced new cloud infrastructure services in the US, expanding on its offering already available in the UK, Hong Kong, Singapore and Australia, and strengthening its global virtual private cloud solution for multinational customers.

Martin Bishop, Telstra's global lead of network applications and services, said the US extension - which will be located on the East Coast - is an important milestone in the CSP's on-going strategy to provide cloud infrastructure services to support business growth initiatives.

“The new US node brings our total cloud presence up to seven distinct locations throughout the United States, Europe and Asia Pacific and will enable customers operating across multiple geographic locations, including the US, to quickly and efficiently realise the benefits of enterprise cloud services on their global operations,” he said.

www.telstra.com

More than one-third of IT security pros are sending sensitive data outside of their organisation without encryption

Despite headline-making breaches that have called attention to the importance of data encryption, nearly 36 per cent of IT security professionals admit to sending sensitive data outside of their organisations without using any form of encryption to protect it.

The research, from Voltage Security, gathered responses from more than 200 IT professionals on encryption, big data security and EU data privacy regulations.

The survey showed that almost half of respondents are not de-identifying any data within their organisations. The ability to "de-identify" information by employing standards-based encryption technologies, such as FPE (Format-Preserving Encryption), is said to provide a very effective mechanism for securing sensitive data as it is used and managed at the personal and professional level.
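To illustrate the property being described - ciphertext that keeps the format of the plaintext - here is a toy Python sketch. The function names are ours, and this simple Feistel-style scramble is emphatically not a secure FPE scheme such as NIST's FF1; it only shows a 16-digit number encrypting to another 16-digit number:

```python
import hashlib

def _round_value(value: int, key: bytes, rnd: int, digits: int) -> int:
    # Derive a deterministic pseudo-random addend from key, round number and value.
    h = hashlib.sha256(key + bytes([rnd]) + value.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") % (10 ** digits)

def toy_fpe_encrypt(number: str, key: bytes, rounds: int = 4) -> str:
    """Toy format-preserving scramble: n digits in, n digits out (n even)."""
    assert len(number) % 2 == 0, "use an even-length digit string for this toy"
    half = len(number) // 2
    left, right = int(number[:half]), int(number[half:])
    for rnd in range(rounds):
        left, right = right, (left + _round_value(right, key, rnd, half)) % (10 ** half)
    return f"{left:0{half}d}{right:0{half}d}"

# A 16-digit 'card number' encrypts to another 16-digit string:
print(toy_fpe_encrypt("4539148803436467", b"demo-key"))
```

Because each round is invertible with the same key, the original value can be recovered - which is what lets de-identified data keep its field formats in databases and applications.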

Voltage says that discussions surrounding data residency, lawful intercept and protecting data from advanced threats have been top of mind for many years. While recent stories shine a spotlight on the risks to data, including theft and extortion, the need to protect data from inadvertent risk while ensuring the business isn't constrained is a clear problem every business needs to solve.

www.voltage.com

CCW NEWS
All the key news in the world of cloud. Please don't forget to check out our website at www.cloudcomputingworld.co.uk for a regular weekly feed of relevant news for cloud professionals.

Page 7: Cloud Computing World Vol 1 Iss 1-Aug 2014
Page 8: Cloud Computing World Vol 1 Iss 1-Aug 2014


DATA CENTRES

UNDERSTANDING THE NEED TO REDUCE DATA CENTRE PUE LEVELS

Power issues in today's data centres

Introduction
The rising price of energy - coupled with a rising understanding amongst management of the social responsibilities that companies have in reducing their energy consumption footprint - means that data centre owners, their clients and managers have been revisiting power consumption issues in a big way over the last few years.

In parallel with this, the data centre industry has developed a measure of how effectively a data centre uses its energy. Known as PUE (Power Usage Effectiveness), this measure quantifies how much energy is being used, and for what.

PUE is defined as the ratio of total amount of energy used by a computer data centre facility to the energy delivered to the computing equipment.

This is calculated by taking a measurement of energy use at or near the facility’s utility meter. We then measure the IT equipment load after the power conversion, switching, and conditioning processes are completed.

The Green GridAccording to The Green Grid (www.thegreengrid.org) - an industry consortium active in developing metrics and standards for the IT industry - the most useful measurement point is at the output of the computer room PDUs (Power Distribution Units). This measurement should represent the total power delivered to the server racks in the data centre.

Data centre association the Uptime Institute reports that a typical data centre has an average PUE of 2.5 - this means that, for every 2.5 watts in at the utility meter, only one watt is delivered to the IT load. The Institute estimates that most facilities can - using the latest (2014) technologies - achieve a 1.6 PUE using the most efficient equipment and best practice.
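To make the arithmetic concrete, here is a minimal sketch of the PUE calculation in Python; the helper function is ours, and the figures are the ones quoted above:

```python
def pue(facility_watts: float, it_load_watts: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return facility_watts / it_load_watts

# A typical facility, per the Uptime Institute: 2.5 W at the meter per 1 W of IT load.
print(pue(2.5, 1.0))   # 2.5
# The best-practice figure cited above for the latest (2014) equipment:
print(pue(1.6, 1.0))   # 1.6
# At PUE 2.5, a 600 W IT load implies 600 * 2.5 = 1,500 W drawn at the meter.
print(600 * 2.5)       # 1500.0
```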

This ratio can usually be achieved in most data centres using a relatively simple set of steps to boost power efficiency levels - steps which also have the advantage of generating a good ROI (Return on Investment) as far as Capex (Capital Expenditure) is concerned.

The steps that can be taken will include the retirement of legacy hardware in order to significantly reduce the power and cooling requirements of the IT systems - and so create a greener data centre.

It’s worth remembering here that legacy hardware - once it has been suitably `scrubbed’ of stored data (where appropriate) - can often be traded in with many vendors and their dealers.

Mark Awdas discusses some of the power consumption challenges - and solutions to those challenges - that face modern data centre facilitators and managers. By Mark Awdas, Engineering Manager, Cannon Technologies

InfoBurst: Reduced power consumption in the data centre can help to reduce our reliance on non-renewable energy sources

Page 9: Cloud Computing World Vol 1 Iss 1-Aug 2014

DATA CENTRES

InfoBurst: Keeping cable and power blocks tidy makes life easier for rack amendments and other changes

PUE in practice
So how does PUE work in practice? Well, in a data centre with a PUE of 2.5, supporting a 600W server actually requires the delivery of 1,500W to the data centre as a whole.

Unfortunately, most organisations lack any power-consumption metering which can break down usage at a level that allows them to gauge the results of their optimisation efforts. To help solve this problem, efforts to monitor energy use should start with the creation of a manufacturer's `power profile' for each rack in an existing data centre.

Each department within an IT facility - and not just within the data centre itself - faces its own separate challenges that can cloud (no pun intended) the power consumption and efficiency issue for the systems concerned.

For example, facilities staff can be struggling with limits on rack and floor space, power availability, and kit, whilst IT staff will be trying to ensure they have sufficient processing power, network bandwidth and storage capacity to support their upcoming IT initiatives - as well as ensuring sufficient redundancy to handle system disruptions.

Although balancing the needs of these two processes may sound relatively easy, their complexity is often compounded by the fact that - in the past - facilities staff and IT professionals have tended to treat their operational costs separately, spreading their overall costs across the organisation and making it difficult to assess their full impact.

Because of the operational differences that exist between facilities staff and their IT colleagues, it is clear that optimising data centre energy efficiency requires a high degree of careful planning.

This is in addition to the deployment of components such as power, cooling, and networking systems that can meet both current needs and also scale for future requirements - and so minimise TCO (Total Cost of Ownership) issues, both now and in the future.

The scalability issue is such that, when data centres reach 85 to 90 per cent of their power, cooling, space, and network capacity, organisations must seriously consider either expanding their existing data centre or building a new one - this is, we have observed, a difficult strategic decision that can have a major impact on the company's bottom line.

Adopting a green strategy
The good news, however, is that adopting a `green strategy' can show how best practice for capacity expansion can increase the energy efficiency of a data centre - and also help to increase density, reduce costs, and extend the life expectancy of existing data centres.

In a green data centre, the mechanical, electrical, and spatial elements (facilities) - as well as servers, storage, and networks - are usually designed for optimal energy efficiency and minimal environmental impact.

The first step in energy-efficiency planning involves measuring existing energy usage. It's worth noting that the power system in a given data centre is a critical element in the facilities infrastructure, so knowing where that energy is being used - and by which equipment - is essential when creating, expanding, or optimising a data centre.

As energy costs continue to rise, it is clear that aligning the goals and requirements of business, facilities, and IT departments will become more critical to optimising overall energy use and reducing the power costs in enterprise data centres.

Following the strategies outlined in this article - including the processes of monitoring current energy usage, retiring idle servers, and deploying energy-efficient virtualised servers - can help enterprises take a major step toward the realisation of a green data centre.
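The per-rack `power profile' approach described above might, as a rough sketch, look like the following; all names and figures are hypothetical, and the PDU-level measurement point follows The Green Grid guidance mentioned earlier:

```python
from dataclasses import dataclass

@dataclass
class RackProfile:
    """Power profile for one rack (illustrative names and figures only)."""
    name: str
    it_load_watts: float  # measured at the PDU output

# Hypothetical racks in a small data hall:
racks = [
    RackProfile("rack-a1", 4200.0),
    RackProfile("rack-a2", 3800.0),
    RackProfile("rack-b1", 5100.0),
]

facility_watts = 32_000.0  # hypothetical reading at or near the utility meter
it_watts = sum(r.it_load_watts for r in racks)
print(f"IT load: {it_watts:.0f} W, facility: {facility_watts:.0f} W, "
      f"PUE: {facility_watts / it_watts:.2f}")
```

Tracking these figures per rack over time is what lets a facility gauge the results of its optimisation efforts.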

Page 10: Cloud Computing World Vol 1 Iss 1-Aug 2014


Below: Reducing energy requirements translates to real cost savings on power bills

DATA CENTRES

In many data centres, between 5 and 15 per cent of servers are no longer required and can usually be turned off. The cost savings from retiring these idle servers can be considerable.
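To give a feel for the scale of those savings, here is a hedged back-of-envelope sketch; every figure below is an assumption of ours for illustration, not a number from the article:

```python
# Back-of-envelope saving from retiring idle servers (all figures assumed).
servers = 400
idle_fraction = 0.10        # the article cites 5-15 per cent of servers as idle
watts_per_server = 300.0    # assumed average draw per idle server
pue = 2.5                   # typical facility, per the Uptime Institute figure
price_per_kwh = 0.10        # assumed electricity tariff, GBP

idle_kw_at_meter = servers * idle_fraction * watts_per_server * pue / 1000.0
annual_cost = idle_kw_at_meter * 24 * 365 * price_per_kwh
print(f"{idle_kw_at_meter:.1f} kW at the meter, roughly GBP {annual_cost:,.0f} a year")
```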

Average server performance has also increased - today’s servers are far more powerful than those of a decade ago, and virtualisation allows enterprises to take advantage of that performance to consolidate multiple physical servers onto a single virtualised server. It is worth noting that server upgrades can also help in this regard.

One of the pivotal moments in the evolution of data centre efficiency was the introduction of version 1.0 of the European Commission’s `Code of Conduct on Data Centres Energy Efficiency’ (http://bit.ly/1luw7kK ) back in 2008.

In many ways the publishing of this code was something of a wake-up call for the data centre industry - and has helped to generate a better industry understanding of the need to `go green’ where data centres are involved.

The Green Grid, however, has not rested on its laurels: last year the IT/energy industry association teamed up with ASHRAE - formerly known as the American Society of Heating, Refrigerating and Air-Conditioning Engineers, and now re-positioned as a sustainability association - to publish a review of the PUE standard.

Entitled `PUE: A Comprehensive Examination of the Metric,’ (http://bit.ly/1eo5o4E ) this is the 11th book in the Datacom Series of publications from ASHRAE’s Technical Committee 9.9.

Its primary goal, says ASHRAE, is to provide the data centre industry with unbiased and vendor neutral data in an understandable and actionable way.

At the time of the book's publication, John Tuccillo, chairman of the board for The Green Grid Association, said that data centres are complex systems for which power and cooling remain key issues facing IT organisations today.

“The Green Grid Association’s PUE metric has been instrumental in helping data centre owners and operators better understand and improve the energy efficiency of their existing data centres, as well as helping them make better decisions on new data centre deployments,” he explained.

Conclusions
As energy costs continue to rise, it is clear that aligning the goals and requirements of business - as well as facilities and IT departments - is now critical to optimising energy usage and so reducing power costs in enterprise data centres.

Our broad recommendation to help reduce these costs - as well as optimising power consumption for all types of data centres - is to closely monitor a centre's current energy usage, retire idle servers, and deploy energy-efficient virtualised servers wherever possible.

Our observations also suggest that, if you are involved in the management or operation of data centres, then the PUE ratio will matter to you. In view of this, you should also be looking at reducing the power consumption of the data centre and so improve your facility’s benchmark along the way.

The human element in the data centre power efficiency stakes should also not be ignored - especially in today’s facilities management arena. Vendors and data centre staff should always be able to advise clients on how to reduce temperatures and energy usage using technologies such as innovative hot- and cold-aisle designs.

Since the UK Carbon Reduction Commitment (CRC) obligations were enacted back in April 2010 (http://bit.ly/1luwLPb ), it should be clear that vendors and data centre providers need to work together in developing industry standards and ratings that work.

Cannon Technologies believes that the data centre industry - from the power suppliers all the way to the rack makers - needs to work together to improve efficiencies and so ensure that we are all at the forefront of efficient and green data centre operations.

www.cannontech.co.uk

“Mark discusses some of the power consumption challenges - and solutions to those challenges - that face modern data centre facilitators and managers”

Page 11: Cloud Computing World Vol 1 Iss 1-Aug 2014

Digital Realty Data Centres - Powering the World's Leading Companies
9 of the Top 15 INVESTMENT BANKS
5 of the Top 5 CLOUD SERVICE PROVIDERS
3 of the Top 5 SOCIAL MEDIA PROVIDERS

www.digitalrealty.co.uk

Page 12: Cloud Computing World Vol 1 Iss 1-Aug 2014


CLOUD BUSINESS ISSUES

Introduction
CRM (Customer Relationship Management) is one of the forerunners of cloud technology and remains one of the great success stories in the space - and the market has been dramatically changed as it has moved from being on-network-led to the verge of being dominated by cloud offerings.

The cloud is, of course, a heavily hyped term in both the IT and business sectors and has come to cover a wide range of options as vendors have jumped on the bandwagon, many `cloud-washing' their old solutions to be able to use this hip new term - for example, many have simply put a Web front-end admin console on their product, or added Web update portals, to be able to claim they are cloud-enabled.

True cloud solutions outweigh these pretenders and are truly changing the way IT is consumed, moving us from an IT-led to a business-led agenda.

Traditionally, for example, customer and contact management solutions were on-network products from legacy vendors and remained a limited market of DOS and early Windows-based solutions such as ACT, Goldmine, Maximiser, Superoffice and the like.

These solutions provided the ability to share information - usually limited to organisations, people, activities and notes - and to act as a company-wide shared database of clients and prospects.

Then along came Siebel (founded in 1993), delivering a wider functional experience and richer customer information, and really defining the market as CRM. By the late 1990s Siebel had become the dominant player, with a peak market share of 45 per cent in 2002.

Salesforce
In 1999 Salesforce was founded with a SaaS (Software-as-a-Service)-only offering - and remains so today - one that rapidly started to disrupt the status quo of the aforementioned vendors and has grown to become one of the ten largest IT vendors worldwide: proof positive that SaaS CRM is both a lucrative space and one that customers are flocking to.

InfoBurst: Customer Relationship Management in the call centre - all smiles when things are running smoothly

Moving business applications into the cloud

HOW CRM CHANGED CLOUD AND CLOUD CHANGED CRM

By Ian Moyse, Sales Director Workbooks, Eurocloud UK Board Member and Cloud Industry Forum Governance Board Member

Ian Moyse explains the close relationship between the cloud and business applications…

Page 13: Cloud Computing World Vol 1 Iss 1-Aug 2014


CLOUD BUSINESS ISSUES

InfoBurst: CRM - if the IT is running smoothly, everyone is happy...

Alongside Salesforce, a wide range of other cloud CRM providers have sprung up to disrupt, replace and become heavy competitors to the legacy providers.

The cloud enables these vendors to develop quicker (three to four release updates a year, compared to a typical one every two to three years from a traditional software vendor) and to reach further and wider (cloud vendors can build a worldwide profile and customer base quickly and affordably, compared with the costly and slow launch model of the old software product world).

They can also be more agile when it comes to function and flexibility: a cloud model needs far less testing - support the required browsers and mobile devices and you're away - compared to a product-based system that has to work across a wide range of operating systems and versions, and worry about software incompatibilities, network and hardware issues, and a testing regime that can simply never cope with the wide variety of customer on-network device environments.

The cloud enables CRM vendors (and others) to innovate and compete in a global market. It empowers a vendor such as Workbooks to deliver a rich, intuitive Web-based system that can compete fairly with vendors such as Salesforce - something previously difficult to do in a product world.

Cloud customers rely less and less on brand equity to make a decision and increasingly have more choice available to them. For example, a US business can find a UK cloud provider, turn on the service, and use and be supported by it equally well from the other side of the world.

SaaS-based CRM now accounts for 50 per cent of all new sales and is expected to reach 70 per cent market penetration within a few years. Cloud CRM providers lead the way in winning awards (Workbooks won CRM of the Year in both 2013 and 2014, with most of the finalists being cloud-only vendors), and market reports such as G2 Crowd show the leading players all being cloud-based CRM offerings.

On-network CRM providers still have customers, but most are fighting to retain their share; they are not experiencing growth, and certainly not at the pace that cloud CRM vendors are delivering.

Microsoft is the exception
Microsoft, of course, is the exception to this, having to maintain an on-network option alongside its cloud CRM whilst it transitions its own business market approach from being an on-network vendor to a cloud-focused vendor - having realised the market shift a few years back, when Microsoft quickly moved 95 per cent-plus of all its development to focus on its cloud offerings.

Once the shift is complete and the market accepts Microsoft fully as a cloud-first vendor, when will the step come where Microsoft joins the throng of vendors offering cloud CRM as their only form-factor option, for their own advantage?

Page 14: Cloud Computing World Vol 1 Iss 1-Aug 2014


CLOUD BUSINESS ISSUES

Cloud CRM was there right at the start, displacing existing approaches and disrupting the status quo of business application deployment methods, and it has proven consistently that this is increasingly the customers' preferred approach.

Cloud solutions are now designed to work well over slower links and transient connections, making even remote customers - who would previously have found their bandwidth limiting - viable users of the SaaS-based CRM options available. Increasingly, we have also seen customers with higher connection speeds demanding more mobile access from any device, anywhere, at any time (mostly from user demand and not led by the business itself) - all needs well suited to a cloud-based CRM solution. Legacy solutions still survive, but the emphasis is on survive, whilst cloud CRM could be termed thriving.

We are now at the tipping point where cloud is an everyday term - whilst many still do not understand it or its nuances, seeing it only as the Internet, few have not heard of it or seen the branded marketing it is featured in, and accelerated adoption has started.

The cloud is extremely disruptive - this is nothing new to those who are familiar with Clayton Christensen's theory of disruptive innovation - and those ignoring it in vendor land and supply channels do so at their peril.

Many still dismiss the cloud, demanding on-network only - not for a logical reason, but normally on an emotive basis, believing the Internet to be insecure and reasoning, therefore, that the cloud will be too.

This approach is not new and has affected the adoption of `new things' across industries. Take the motor car: when it was first introduced it was deemed the `devil's work', a man carrying a red flag had to walk down the street in front of each car, and people were recorded as believing that `if you went in a car and it travelled at over 20 miles an hour it would rip the skin from the human face.'

Now, of course, we smirk at such things, but at the time that was a very real belief and emotion towards replacing a horse and cart with a car. We are experiencing something similar with the cloud.

Ignoring the cloud
Ignoring cloud computing and the new form factor underpinning it can be a dangerous tactic, causing you to miss out on competitive advantage, flexibility, cost savings, functional benefit and greater resilience. Many examples already exist of major brand-name leaders not recognising the change being driven by the cloud in general, and the rapid effect user acceptance can have on changing the historical norm.

Take, for example, Blockbuster Video - once a world-leading brand, now gone, devastated by the likes of Netflix and Lovefilm (Amazon), who changed the delivery method for consumers renting a movie from taking a video tape home to clicking and streaming your choice: faster, easier and cheaper.

The brand equity Blockbuster had was not enough to overcome a new cloud-based option that customers chose to adopt - not because of the cloud, or because of disliking Blockbuster, but simply because someone made it better and delivered something the customer preferred.

The same happened with Kodak as photography rapidly went digital and online, with cloud-based uploads and sharing replacing the old format. The music industry, with iTunes vs bricks-and-mortar music stores, is going through the same transition, as are other markets. So to assume that cloud will not affect IT delivery, and not to consider it fairly in any business application or IT project, is a naive approach that may leave you and your business out in the cold.

Conclusion
The cloud is not a be-all and end-all; it is not right for every customer in every situation, just as the horse and cart still has its place in certain situations - i.e. the right tool for the right job - but it will be advantageous in the vast majority of situations.

The technology sector’s ability to change has accelerated. Moore’s Law back in 1965 predicted silicon power would double every two years. But what its creator, Gordon E. Moore, couldn’t have predicted was the dramatic economies of scale the cloud would eventually bring to all of our lives.

For one, it has helped lead to a drop in price for essentials like computing power and storage by making them more accessible. But also, it’s enabled conveniences no one ever would have imagined four or so decades ago.

The cloud has not only driven down costs, it has also helped increase our satisfaction with - and expectations of - our Internet experience. It has enabled mobility and delivered immense computing power to anyone, anywhere, at any time.

Perhaps an update to Moore's Law will be formed to hypothesise that the number of applications running in the cloud will double every two years; based on today's adoption and consumption rates, however, it's also possible we could see it represented as the computing power available to an individual consumer - via the cloud - doubling every two months.

www.workbooks.com

“We are now at the tipping point where cloud is an everyday term - whilst many still do not understand it or its nuances, seeing it only as the Internet”


Page 15: Cloud Computing World Vol 1 Iss 1-Aug 2014

Cloud & IT Security Ireland is a NEW independent Conference & Exhibition at which Enterprise and business organisations can see the latest solutions available and receive independent practical information on the business arguments, software, technology and solutions they need to make better informed decisions.

11 – 12 November 2014 RDS, Dublin

Co-Located for success

Cloud & IT Security Ireland benefits from being co-located within DataCentres Ireland, the leading IT technology infrastructure event in the country.

To register your interest and receive more information contact Hugh on +44 (0) 1892 518877 or email [email protected]

11-12 Nov 2014RDS, Dublin.

The Conference
Utilising a combination of case studies, panel discussions, technical papers and interactive forums, the conference will showcase the latest in new ideas, software, solutions and best practice.

The Exhibition
Featuring leading companies, brands and value-added resellers, this is your chance to see and compare the latest in technology, software and innovative solutions, and to source the suppliers who can assist you.

Themes addressed will include:

• What are the available options

• How do I assess my future needs

• Considerations when migrating to the cloud

• Does one size fit all?

• Security and the Cloud

• Future Technology

• Virtualisation and Storage

• Big Data

Page 16: Cloud Computing World Vol 1 Iss 1-Aug 2014


DATA CENTRES


Introduction
The phrase `Software-Defined Data Centre' has been the mantra for those of us working to build the next generation of data centres since it was first coined at VMworld back in 2012. It means that the provision and operation of the data centre infrastructure is entirely automated by software, with minimal human intervention.

However, a recent visit by the UK Home Secretary Theresa May to officially open one of our newly expanded data centres in Maidenhead, Berkshire, has made me think we need a new phrase to describe what we're doing in our DCs. It was the first time Mrs May had visited a data centre, and she echoed the thoughts of many who venture inside when she said: "It is interesting to see that the cloud has a physicality to it and isn't just something up in the ether."

When a senior government minister is genuinely intrigued by the physical infrastructure that powers the delivery of cloud services we need to listen.

Few ministers have been inside a data centre, yet they are collectively responsible for the G-Cloud framework, which was set up to encourage the adoption of cloud services by the public sector.

The official opening of one of its newly-expanded data centres by the UK Home Secretary prompts Bill Strain to re-define cloud service delivery... By Bill Strain, CTO, iomart

InfoBurst: Bill Strain shows the Home Secretary what a data centre looks like...

CUSTOMER-DEFINED DATA CENTRES

Redefining cloud service delivery

Page 17: Cloud Computing World Vol 1 Iss 1-Aug 2014

DATA CENTRES

The whole G-Cloud initiative has been pushed by the need to allow local authorities and other public sector organisations to find easier ways to procure services from companies like ourselves on a pay-as-you-go basis, instead of having to endure lengthy and often expensive procurement processes. So it is vital that the people responsible understand that the companies who own and manage data centres are focused on giving them fast and effective ways of getting the cloud services they require.

The same goes for other senior decision makers: few of them get the chance to step inside a data centre, so we need to illustrate how valuable data centres are to the economy by explaining what goes on in them in much simpler terms. This applies to how we educate members of the public as much as it does to small business owners and officials in local government, right up to the CEOs of the biggest corporations. We need to be focused on the customer.

The people who are increasingly using cloud services do so because it adds value to what they do. It might make their own jobs easier, for instance allowing a busy IT department to back up data quickly and securely without having to assign staff to physically change and store tapes; or it might allow them to deliver better products and services to their own customers, for instance by enabling accountants to use financial software accessed via the internet to provide a service to their clients.

Customer defined
After initial scepticism, the value of the on-demand, pay-as-you-go cloud services model is now being embraced by government and enterprise business, but it is also being driven and changed by the needs of those same organisations.

This makes me think that what we should be talking about today is the `Customer-Defined Data Centre' (CDDC), rather than defining DCs by the way they use software to set up the servers and the network inside them.

The importance of the customer in the delivery of our services should be at the forefront of how we architect the physical infrastructure that makes up the backbone of the cloud.

The innovative Cisco and Corning fibre technology we've deployed in the data centre the Home Secretary visited allows us to provision, automatically and dynamically through our control panel, whatever services our customers need, at any time, on any scale. The technology has been designed with our end-users, our customers, in mind, providing them with what they need to do their work.

The challenge for us was to make sure that each rack of servers that goes into the seven data halls of the facility is capable of catering for every network requirement, for all business groups, encompassing both initial and rapid future expansion as and when required.

There is of course a benefit to us - we no longer have to physically plug wires into servers, which reduces our management burden - but there is also huge benefit to the customer.

Conclusion
We are managing thousands of servers and the high-capacity networks that deliver the computing power to support modern business in the digital age. No longer do companies have to make huge capital investments in their own hardware on their own premises; instead they invest in us, and so we need to have that same investment in them.

By talking about not just software-defined but Customer-Defined Data Centres, I think we can show that we are transforming our networks to deliver the highest levels of agility, performance and flexibility to drive the development of the new world economy.

UK Home Secretary Theresa May opens new iomart data centre

The UK Home Secretary, The Rt Hon Theresa May MP, officially opened a multi-million-pound extension to a data centre owned and operated by iomart Group in June of this year.

The Home Secretary was given a guided tour of the new highly secure, state-of-the-art, 1500 square metre extension to the data centre on the Clivemont Road industrial estate in Maidenhead.

The Rt Hon Theresa May said: “Data centres are an important part of the global economy so I’m delighted to open this new facility for iomart. The technology on show is impressive and will allow businesses to be better connected than ever.”

iomart purchased the data centre as part of its acquisition of Maidenhead-based web hosting company RapidSwitch in 2009.

This upgrade, says the firm, makes it one of the most advanced data centres in the UK and showcases the first major deployment of brand new technology from Cisco, which allows network infrastructure and services to be automatically provisioned and scaled for customers.

Angus MacSween, CEO of iomart, said: “We are delighted that The Home Secretary has officially opened our next generation data centre and seen first-hand the technology involved in creating the infrastructure needed to support the dynamic and ever-changing web hosting and data storage needs of SME and enterprise business.”

“Our data centres are the motorways of the future and this facility enables us to provide flexible and bespoke services to our customers and puts us at the heart of the next generation of software defined data centre technology,” he explained.

The new extension took 12 months to complete and has capacity to hold up to 630 racks containing up to 30,000 physical and as many as 500,000 virtual servers. It has been designed to meet the needs of all the different hosting brands that make up the iomart Group of companies.

www.iomart.com

Page 18: Cloud Computing World Vol 1 Iss 1-Aug 2014


CASE STUDY

InfoBurst: How Parsons Brinckerhoff called on the assistance of WhiteSpider to implement a wide-scale cloud topology across its many offices around the globe.

WhiteSpider develops a cloud solution for Parsons Brinckerhoff

REMOVING THE RISK FOR DATA CENTRE AND ENTERPRISE IT

How cloud computing helped a company with more than 150 offices around the world...

Page 19: Cloud Computing World Vol 1 Iss 1-Aug 2014


CASE STUDY

InfoBurst: The Palm Jumeirah - just one of many locations for PB's cloud deployment.

Introduction
Parsons Brinckerhoff was suffering from a problem common in many established enterprises, where the IT infrastructure had grown with the company. The need to respond to growing demands by adding new technologies resulted in a piecemeal infrastructure, with the associated risks, inefficiencies and inflated costs.

The company had already started to work on its DCCAMP (Data Centre Consolidation and Migration Project) when it was introduced to WhiteSpider.

Working alongside the client's IT team, WhiteSpider's team of experts quickly identified the major areas where improvements could be made and applied its unique ea4 framework for enterprise architecture to help the client achieve its key objectives of reducing risk, consolidating and simplifying its enterprise architecture, and cutting the overall costs of running its IT systems whilst improving performance.

The result was a positive evolution from a fragmented environment into a coherent, reliable, scalable and future-proof architecture that delivers greater performance at a fraction of the operating cost.

The client in depth
Parsons Brinckerhoff is a global consulting firm assisting public and private clients to plan, develop, design, construct, operate and maintain thousands of critical infrastructure projects around the world.

Founded in New York City in 1885, Parsons Brinckerhoff is a diverse company of 14,000 people in more than 150 offices on five continents.

With a strong commitment to technical excellence, a diverse workforce, and service to its clients, the company is currently at work on thousands of infrastructure projects throughout the world, ranging from the mega-projects that define an entire region to smaller, more local projects that keep a community humming.

The company offers skills and resources in strategic consulting, planning, engineering, program management, construction management, and operations and maintenance. It provides services for all modes of infrastructure, including transportation, power, energy, community development, water, mining and the environment.

The challenge - and objectives
Parsons Brinckerhoff is a company with over 130 years' history and a federated structure, so inevitably its IT systems had evolved in a haphazard fashion, responding to needs as they arose in various parts of the business, with each business unit choosing its own solutions and standards.

The company had been having issues for some time with power outages, makeshift server room arrangements and legacy equipment which could no longer be maintained. Its systems were under-utilised and difficult to manage.

However, it was the arrival of Hurricane Sandy in October 2012, and the near-disastrous flooding of the primary data centre in Carlstadt, New Jersey, that served to highlight the level of risk that the company faced.

With increasing single-point-of-failure events and a site that was both unsuitable for future development and nearing the end of its lease, Hurricane Sandy was the final straw that brought forward the company's plans for consolidation and migration of its systems, with the aim of creating a private cloud platform and a fully maintained data centre, proofed against disasters and with built-in resiliency.

Parsons Brinckerhoff therefore needed to conduct a thorough review of all its systems in order to create a truly robust, consolidated architecture that would be resilient, easy to manage and future-proof.

Page 20: Cloud Computing World Vol 1 Iss 1-Aug 2014


CASE STUDY

With little more than a year until the lease on the existing data centre site expired, the client looked for a partner who could help it manage the move in the timescales available. The company needed a partner with experience in large-scale migration projects, plus the technological vision and expertise to design, plan and implement a solution that would deliver a good Return on Investment, excellent performance and significant cost savings.

The solution - the DCCAMP project
Parsons Brinckerhoff had a disparate infrastructure, with many different systems in different business units and a very dispersed estate across several sites. WhiteSpider had to react quickly to review and understand the objectives of the project, including the key services, dependencies and stakeholders, with first results needed within just a few days.

Using its unique ea4 approach, providing a framework for developing and implementing enterprise architectures, WhiteSpider was able to engage quickly with the client team and carry out a high level audit of the service environment, dependencies, locations and user footprint.

The information from the audit provided valuable insight for WhiteSpider to plan the client's migration and transformation strategy, including the size, type and location of a co-located data centre provider. WhiteSpider also supported the client in the procurement process, helping to define objectives, core requirements and selection criteria for the new data centre environment.

This included writing the RFP document and helping to evaluate the proposals and choose the right data centre provider and location. WhiteSpider also used the knowledge gained from the audit to inform the process of designing a new, agile service delivery platform for the client, based on the creation of its own private cloud infrastructure.

As part of this enterprise architecture process, WhiteSpider also helped to manage the comparison of technologies for the new architecture in a technology bake-off.

Planning the migration involved several enterprise alignment steps in a staged migration for the client’s various sites, bringing all the company’s IT systems and data centre facilities into one consolidated infrastructure. This involved creating and implementing a consolidation and migration plan for all systems across a number of sites.

It included a new design and infrastructure for the company’s headquarters at One Penn Plaza in New York, moving many of its servers and consolidating into a smaller space, rationalising technology to create a more coherent infrastructure.

In addition the client was able to vacate its premises in Carlstadt and move to its new co-located data centre environment in Culpeper, Virginia, with economies of scale and the cost advantages of co-location.

The migration plan undertaken with WhiteSpider as part of its enterprise alignment services included consolidation of all systems, the new design and infrastructure in Parson Brinckerhoff’s HQ and the new data centre in Culpeper. It involved deploying new technology solutions and standards, a new virtualisation platform and storage platform, in order to create a powerful private cloud environment for Parsons Brinckerhoff.

One of the major gains was the reduction in the complexity of the system with five storage platforms reduced to one and a 70 per cent virtualisation of the system.

About the ea4 framework
ea4 was developed out of the WhiteSpider team's desire to see technology used effectively in order to transform the way global enterprises work. It does this by delivering enterprise standards through the four key elements of the ea4 framework, based on total vendor independence.

The first element of ea4 is `enterprise auditing,’ aimed at gathering an in-depth level of understanding of a customer’s organisation and its business requirements, as well as technical knowledge on the operational environment.


InfoBurst: Hurricane Sandy in 2012 required a move to a new colo data centre in Virginia...

Page 21: Cloud Computing World Vol 1 Iss 1-Aug 2014


CASE STUDY

The project has already shown significant gains in terms of reduced cost, improved performance and greater manageability. From a disjointed infrastructure, with significant risks and areas of under-utilisation, the client now has a new structured and streamlined architecture and capabilities, already delivering benefits.

The infrastructure is based around a new and risk-free data centre environment delivering a private cloud-based environment.

The client’s headquarters in One Penn Plaza have been refurbished and the consolidation of servers into the purpose-built ROBO Room (Remote Office Branch Office) has meant that costly real estate space has been freed for other activities, whilst servers are housed in more appropriate conditions with better cooling and power supply.

Cooling requirementInitial studies indicate that cooling requirements have been reduced by 80 per cent, power consumption reduced by two-thirds (66 per cent), and the server room footprint is down from 108 sq meters to just 15 sq metres -a reduction in floor space of 88 per cent, representing a cost saving, at New York real estate prices, of $600,000 per year.

The new infrastructure across the client’s sites has also improved connectivity and future-proofed the network - with the expectation that the current infrastructure will need little upgrading in the next 3-5 years.

Resiliency has also been improved and the overall performance available to users is significantly greater, with the capability for up to 10 Gbps to the desk. In addition availability of the new service environment has now reached the desired five-nines on a 24/7/365 basis, due to the elimination of risk, over subscription, device failure and power outages, plus new maintenance contracts around new technologies.

The new environment has been designed and configured in line with industry best practice and therefore it is more agile around service delivery, easier to operate and manage, and integrates seamlessly with legacy equipment and components. As a result it is delivering substantial cost savings, including the operating costs, streamlined time to deliver new services, reduced equipment footprint and maintenance costs.

www.whitespider.eu

This is followed by a detailed plan and design, the ‘enterprise architecture’ element, engaging with the business to understand the key business objectives in relation to the IT infrastructure and assets and developing a blueprint to design and build a core foundation of processes and systems.

The third step is `enterprise alignment’ - once the architectural designs are defined, they can be implemented through a comprehensive portfolio of services that maps across all aspects of IT infrastructure.

In the fourth `enterprise assessment’ element, WhiteSpider uses its experience in modelling, capacity planning and performance management to ensure that the network and applications are tuned to deliver optimal performance and reduce business risk.

One of the major gains here was the reduction in the complexity of the system with five storage platforms reduced to one and a 70 per cent virtualisation of the system

The resultsThe DCCAMP project had a number of clear objectives to help Parsons Brinckerhoff build a robust and agile private cloud environment that would provide high performance IT services for all its business units globally now and into the future.

www.whitespider.eu


DATA CENTRES

THE 60-YEAR-OLD HOT TOPIC

CLOUD: Giving data centres a new perspective

Andrew Roughan discusses the nature of the cloud and how data centres fit into a cloud-based future... By Andrew Roughan, Commercial Director at Infinity SDC

InfoBurst: The data centre: concentrated and power-hungry technology...

A short history of cloud
For something that started in the 1950s, cloud computing might seem late to the buzzword party. In fact, today's pervasive, omnipresent trend is technically more than 60 years old.

In those days, of course, time-sharing allowed multiple terminals to share physical access to, and CPU time on, mainframes. But the vision for cloud was already there: in the 1950s, scientist Herb Grosch predicted that the world would operate on dumb terminals powered by about 15 large data centres.

Commercialised in the 1960s, cloud computing evolved through the early VPNs of the 1990s, virtualisation and the dotcom bubble that fuelled Amazon’s rise to success, until the point in 2008 when Gartner remarked that cloud computing could “shape the relationship among consumers of IT services, those who use IT services and those who sell them.”

The research firm later observed that businesses were “switching from company-owned hardware and software assets to per-use service-based models” so that the “projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas.”

More recently, in October 2013, Gartner predicted that cloud computing would account for the bulk of new IT spend by 2016. Cloud is clearly reaching its apex.

Cloud confusion
The length of time that the cloud has taken to reach this point perhaps accounts for the confusion that continues to surround it.

There is, for example, confusion about cloud technology, confusion over IT infrastructure development and now, with the illusion of unbounded capacity in the cloud, confusion about data centre options and their place in the IT strategy.

Public, private, hybrid, on-premise, co-located - with so many options and approaches, many mid-sized enterprises are finding it difficult to understand the myriad data centre solutions on the market. Many companies have commenced their IT transformation journey, but the data centre typically continues to be viewed simply as real estate. No longer can there be a single procurement approach: multi-sourcing is here to stay.

The data centre must become more than that. At the heart of the transformation to the cloud, it needs to become more relevant to the enterprise in supporting the transition from basic virtualisation to its latest stage of evolution: software-defined data centres (SDDC). This means understanding both the enterprise IT revolution and the individual needs of each business.

The goals for businesses moving to the cloud tend to be similar: whether private, public or hybrid cloud, users seek to increase agility, boost flexibility, reduce time to implement, enable efficient international operations and reduce costs. This does not mean that all companies can be herded in the same direction; they won't take the same journey in the IT transformation and will have different needs.

A cloud by any other name
Some industries are more accepting of cloud than others. At one end of the scale, the retail industry tends to be very comfortable with the concept and adoption of cloud, and can articulate how it works and its benefits.

At the other end of the scale, those driven by strict regulatory standards - charity-funded research organisations and the legal sector in particular - are extremely cautious about cloud. A huge disconnect between the business and IT sides of these industries means that, to them, cloud is public, out of their control and a security risk.

That being the case, the mere use of the word 'cloud' causes ripples, even when looking to deploy private clouds. More palatable to the lawyers, partners and research leaders is terminology such as “utilising the benefits of automation and orchestration in an on-premises environment”.



Will your data centre flex like your IT?
Whichever path feels best suited to each business, it needs to be agile, able to burst and ultimately dynamic. As part of the journey to the cloud, CIOs have typically deployed virtualisation to increase the utilisation rates of their owned IT assets, while also outsourcing to “as-a-service” providers to reduce the overall size of the owned IT estate.

However, the virtualisation journey can be unpredictable. At the start, companies expect an overall reduction in their owned IT assets but find it difficult to accurately predict by how much.

Whether the data centre is in-house or outsourced, it carries costs and requires a level of capacity that is almost impossible to foresee and plan for. In addition to this planning problem, there are times when capacity needs to increase so that new IT can be deployed before older assets are retired. Often, and despite growth in data, the net IT assets shrink as a result of these changes. This can strand power and space capacity and create unrecoverable costs.

Seasonal or campaign-based peaks, such as retail holiday sales, midnight on New Year’s Day for mobile operators and major charity events such as Children in Need create what we in the industry call demand peaks.

The data centre needs to have the provision to cope but should be flexible enough that the user isn’t paying for that full capacity all the time unnecessarily.

The next stage: software-defined data centres
As businesses continue along the IT journey, the milestones they reach include converged infrastructure, private cloud and software-defined data centres (SDDC).

The owned IT assets will range from non-virtualised legacy IT, to virtualised private cloud IT and the management and support applications that provide the augmentation, management and security of the SDDC.

However, unable to predict the power densities and resiliencies required for those IT assets, planners face having to over-cater for an unknown future.

This leaves the CIO with a specific issue to contend with - how to manage the data centre capacity to provide the right-sized private cloud environment at each stage of the IT journey.

It is vital that CIOs consider the attributes they need from a data centre as they continue along their IT journey. For example, space flexibility with no minimum commitment; the ability to only pay for power used rather than the maximum power capacity; or predictability of the cost of change.

One thing is clear - a new breed of flexible data centre must emerge to put the CIO back in the driving seat of the outsourced data centre. Ultimately, what these changes all provide the CIO with is high levels of flexibility and agility.

www.infinitysdc.net



ENERGY CONSUMPTION

A WELL-BALANCED HYBRID CLOUD
Load balancing for a more robust cloud environment

Jason Dover looks at why - and how - organisations are adopting the hybrid cloud, and at the importance of good balancing. By Jason Dover, Director of Product Management, KEMP Technologies

InfoBurst: Load balancing - more than just a balancing act...

Introduction
Back in early 2007, I recall this opening statement by an enthusiastic speaker at a tech conference: “Even though you might not realise it, over 95 per cent of you are already consumers of cloud computing services.”

This came just after the same speaker had asked everyone to indicate, by a show of hands, whether or not they were Yahoo or Gmail users. Seven years on from this early evangelism at the start of the cloud hype cycle, we're at a point where cloud computing is real.

The forming of the cloud
Interestingly, even though the mid-2000s marked the beginning of cloud computing, the concepts were born more than five decades ago.

Mainframe computing laid the groundwork of pooled resources in a cloud-like infrastructure shared by dispersed users in the 1950s, with the vision of an interconnected globe with access to easily scalable programs, resources and data, regardless of location and without the bounds of a rigid system infrastructure.

Even though the full potential of this vision wasn't realised then, fast-forwarding to 2006 brings us to the point where Amazon delivered a resurgence of this notion with the development of Amazon Web Services (AWS) and then Elastic Compute Cloud (EC2).

This made possible the delivery of cloud-based storage and compute for companies to rapidly provision services without large capital expenditures or a rigid system infrastructure.

This model has dramatically changed computing and, since then, Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) frameworks have multiplied by an order of magnitude. IT decision makers now have a plethora of options when it comes to leveraging public cloud service offerings to augment their overall IT delivery strategy.

Despite the advances and benefits in public cloud computing, governance implications, economics, and concerns over reliability and security for custom business-critical applications have staved off adoption of a public cloud-only model by an overwhelming majority of organisations.

These limitations have proven to be a main driving force for a hybrid approach to cloud computing. Unfortunately, hybrid cloud is often over-simplified as merely being an IT environment that leverages public cloud infrastructure for some applications and on-premise infrastructure for others.

While this definition does start to paint a picture, and in the strictest sense is true, it misses the mark in conveying the depth of the expected outcome of building a hybrid cloud infrastructure in the first place: integration of heterogeneous services, both in front of and behind the corporate firewall, with such symmetry that each single entity behaves as a part of a bigger whole.




Challenges
While this all sounds good, actual execution isn't easy. Successful hybrid cloud implementation assumes a well-architected private cloud, as opposed to simply a well-built traditional IT infrastructure.

This means that adoption of hybrid cloud starts with the transition from a traditional on-premise environment to one that includes the concepts and supporting technologies that enable functionality normally associated with public cloud: self-provisioning for application owners, dynamic resource scaling, a chargeback model for lines of business, orchestration for automating repeatable tasks, and a high-visibility management platform to monitor how and where services get deployed.

It's familiarity with the very nature of the public cloud model that has fuelled the business and technical requirements in the enterprise for what is essentially an IT-as-a-Service framework, allowing agile self-service, provisioning and consumption monitoring while simplifying the load on application owners. Because on-premise legacy data centre environments were not built with these principles in mind, transitioning can be a challenge.

Hybrid cloud also opens the possibility for workload overflow processing or cloud bursting so that applications can bring up new instances as needed in the public part of the hybrid cloud once data centre capacity is reached.

Load balancing instances, among other dynamic, virtualised network functions, is a core enabler to make service assurance and optimised delivery possible.

However, without application delivery controller (ADC) technology running natively in the cloud, virtualisation admins can find it challenging to know deterministically where data centre capacity will be exhausted, and how much external resource will need to be consumed in varying scenarios, for proper planning.

Additionally, applications actually built with the capabilities to traverse public and private cloud boundaries bring about the additional challenges of ensuring that the underlying data is in the right place at the right time, as well as dealing with enforcement of the same governance and security policies regardless of where active instances are operating.

Where is it all heading?
Fortunately, these challenges are not insurmountable. Cloud-focused security solutions with the capability of propagating a unified set of policies across cloud borders have come onto the market.

Technology leaders such as VMware, Microsoft and IBM have launched many new offerings to help companies build better private clouds and extend the benefits of a virtualised infrastructure beyond the on-premise data centre. And finally, advancements in application delivery technology have made possible the use of complex traffic-steering algorithms across a fabric of private and public clouds, based on business rules that dictate how company resources should be consumed.

These enablers have all driven the adoption of a hybrid cloud strategy in the enterprise, and the outlook is positive.

Modern IT delivery's need for increased agility, rapid provisioning of innovative applications and focus on the quickest time to market, coupled with the current gap left by an all-in public cloud model, all mean one thing - hybrid cloud is here to stay.

Cloud load balancing revealed

An Application Delivery Controller (ADC) directly assists in the management of client connections to enterprise and web-based applications.

ADCs are normally deployed behind firewalls and in front of application servers, and they make networks and applications more efficient by managing traffic shaping and distribution. The ADC directs client access requests to the best-performing servers, based on factors such as concurrent connections, CPU load and memory utilisation.

This ensures that performance-sapping bottlenecks do not occur; and if a server or application fails, the user is automatically re-routed to another functioning server. This process is seamless to the user and critical to delivering an optimised and reliable experience.
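As a purely illustrative sketch - not any particular vendor's algorithm - the selection logic described above can be pictured as scoring each healthy server on its concurrent connections, CPU load and memory utilisation, then picking the least-loaded one (the weighting is an arbitrary assumption):

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    healthy: bool
    connections: int   # concurrent client connections
    cpu: float         # CPU load, 0.0-1.0
    memory: float      # memory utilisation, 0.0-1.0

def pick_server(servers):
    # Failed servers are skipped, so clients are re-routed automatically
    candidates = [s for s in servers if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    # Lower combined load score = better candidate (illustrative weighting)
    return min(candidates, key=lambda s: s.connections / 100 + s.cpu + s.memory)

pool = [Server("app1", True, 40, 0.35, 0.50),
        Server("app2", False, 0, 0.00, 0.00),   # failed: never selected
        Server("app3", True, 75, 0.80, 0.65)]
print(pick_server(pool).name)   # -> app1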

When it comes to the private, public or hybrid clouds, ADCs ensure the availability of applications while maximising performance, regardless of the user location or device.

In a hybrid-cloud environment, traffic running at normal levels is directed to dedicated, optimised application servers.

However, when traffic spikes occur, the load balancers will direct this 'spill over' to servers located in the public cloud. In some hybrid cloud environments, dependencies between cloud and on-premise devices may also exist.
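A toy sketch of that 'spill over' decision - assuming a simple session-count threshold on the dedicated pool, which is an invented figure for illustration - might look like this:

ON_PREM_CAPACITY = 1000   # assumed maximum concurrent sessions on the dedicated servers

def route(active_sessions):
    # Normal traffic stays on the dedicated, optimised application servers;
    # anything beyond capacity spills over to the public cloud pool
    if active_sessions <= ON_PREM_CAPACITY:
        return "on-premise-pool"
    return "public-cloud-pool"

print(route(800))    # -> on-premise-pool
print(route(1500))   # -> public-cloud-pool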

The high availability of ADFS (Active Directory Federation Services) servers delivered through a load balancer can, for example, provide guaranteed access to on-premise Active Directory servers for Microsoft Office 365.

Cloud balancing simply increases the choice of where a given application should be delivered from, and can make application routing decisions based on a wider range of network and business variables, such as the ability to meet an SLA, or the value of a transaction on a per-user or per-customer basis. Other criteria could include user location, time of day, regulatory compliance, energy consumption and contractual obligations.

When it comes to load balancing and traffic management across public cloud providers, it is important to consider some of the inherent limitations.

For example, the built-in load balancer provided in Microsoft Azure does not offer Application Layer (Layer 7) visibility to provide the best level of service to users. While basic Layer 4 balancing directs traffic based largely on server response times, Layer 7 switching uses application-layer criteria to determine where to send a request to provide more granular control.

This improves data and application traffic management and, at the same time, allows the virtual machines to be used more effectively. It is possible to deploy a third-party Layer 7 virtual load balancer that runs directly on the cloud platform, rather than just directing traffic to the cloud network.
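To make the Layer 4/Layer 7 distinction concrete, here is a minimal, hypothetical sketch: the pool names and request fields are invented, but the shape of the decision is the point - the router inspects the request itself before falling back to a response-time tie-break within the chosen pool:

from types import SimpleNamespace as NS

pools = {
    "api": [NS(name="api1", response_ms=12), NS(name="api2", response_ms=30)],
    "web": [NS(name="web1", response_ms=20)],
}

def layer7_route(request):
    # Layer 7: application-layer criteria (here, the URL path) choose the pool
    pool = pools["api"] if request.path.startswith("/api/") else pools["web"]
    # Layer 4-style criterion inside the pool: fastest responder wins
    return min(pool, key=lambda s: s.response_ms)

request = NS(path="/api/orders", headers={})
print(layer7_route(request).name)   # -> api1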

Deploying a virtual ADC with an application in the cloud ensures that the organisation is able to monitor and manage the health of the application and make global routing decisions to deliver optimum performance and resilience. A virtual ADC can also provide a platform for global load balancing and DNS routing, enabling internal and external cloud implementations to behave as if they were a single network.

www.kemptechnologies.com


SOFTWARE

OPENSTACK BUILDS MOMENTUM
Understanding data centre software

David Fishman explores the future of OpenStack… By David Fishman, Global Vice President, Mirantis

InfoBurst: Open architecture makes life a whole lot easier...

Introduction
In this Q&A, David Fishman, global VP of marketing for commercial OpenStack distribution vendor Mirantis, looks at OpenStack's current position - and future developments.

How has OpenStack got to where it is now?
Many companies have wanted to build a Google- or Amazon-like infrastructure for their operations, but didn't want to outsource, for several important business reasons. For example, they saw the value of cloud infrastructure, but they felt that Amazon could not guarantee data privacy and security, or they had limited opportunities to tailor the infrastructure to their specific needs, such as SLAs (service level agreements).


That 'closed garden' makes AWS analogous to the Apple of the cloud; by contrast, OpenStack is the equivalent of Android, helping organisations tailor it to their specific needs, and avoid being locked into a single vendor's cloud solutions.

A range of software, hardware and service companies have joined OpenStack. What's in it for them and for end-users?
For the end-user, the benefits of OpenStack are rapid deployment, easier scalability of cloud infrastructure and, importantly, no vendor lock-in, because it's open. It provides tremendous flexibility, allowing customers to configure their infrastructure exactly to their needs and to integrate with existing systems.



// your essential partner

For more information about cloud technology and solutions, please contact one of our specialists on 01344 758700.

www.sire.co.uk

// Cloud Solutions // Business Continuity // Managed Service Provider

We are an award-winning supplier of leading-edge cloud technologies, systems and processes. As specialists in tailored cloud solutions, we have been providing organisations with reliable, flexible and financially viable IT infrastructure, coupled with a robust business continuity plan, for over two decades.

With SIRE alongside, you are free to get on with running your business, leaving us to make sure your IT infrastructure is protected, optimised and keeping pace with technical and legislative changes.

SIRE's Cloud Solutions offer reliability and scalability:

• Cloud Consultancy

• Tailored Clouds

• Private Clouds

• IaaS and PaaS Providers

• Virtualisation

• Data Protection

SIRE helps businesses make the best use of IT systems to create a competitive advantage.


We also recently benchmarked how quickly private clouds could be provisioned using OpenStack, hitting a rate of over 9,000 virtual servers launched per hour for eight hours in a multi-data-centre set-up. The result was 75,000 virtual machines running, which is the scale required by the largest banks (such as Barclays) or mobile telecom infrastructure (such as Ericsson).
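For readers who have not seen it, provisioning against OpenStack's compute API is a short scripted call; the sketch below uses the python-novaclient library in the style documented around the time of writing, with placeholder credentials, endpoint and IDs (your deployment's values will differ):

from novaclient import client

# Placeholder credentials and Keystone endpoint - substitute your own
nova = client.Client("2", "USER", "PASSWORD", "PROJECT",
                     "http://controller:5000/v2.0/")

# Boot a small batch of virtual servers; benchmark-scale runs parallelise this
for i in range(10):
    nova.servers.create(name="vm-%03d" % i,
                        image="IMAGE_ID",     # placeholder image UUID
                        flavor="FLAVOR_ID")   # placeholder flavor ID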

Software, hardware and service companies, for their part, realise that their customers increasingly want cloud infrastructures that enable rapid change. That works in two ways.

First, the transparency and common interfaces that span compute, network and storage mean that companies can more easily update and automate the software that serves their customers, and improve the ROI on the infrastructure.

Second, the common standards that OpenStack enables means that vendors can continuously compete for a piece of that infrastructure, without being locked out by their rivals.

It seems enterprise adoption has been a little slow so far - is this true?
Naturally, organisations have been approaching cloud deployments with an element of caution, but I believe momentum is building very quickly now. For example, Ericsson has committed to using Mirantis OpenStack as the foundation for its telecoms networks, internal data centers and cloud computing services for its customers.

Cisco recently announced its huge InterCloud initiative will be OpenStack-based. There’s a great deal of pent-up demand for faster, more agile infrastructure.

Some argue there is a lack of clarity about what OpenStack does. Do you agree?
One of the key points that needs to be communicated about OpenStack is that it's more than just open-source cloud software. It's commoditising cloud infrastructure, so that cloud deployments can become more vendor-agnostic, with broader interoperability. The aim is to make it easier for customers to build their cloud the way they want, with the best tools for the job, and to adapt to marketplace opportunities over time.

One of the things that will help this is open-sourcing OpenStack cloud certifications, to remove the traditional software vendor ecosystem lock-in that says “we only certify this particular solution with our software.”

Open certifications - which are supported by over a dozen infrastructure vendors, including VMware, NetApp and HP, as well as OpenStack users such as Yahoo, Dreamhost and AT&T - are making OpenStack the more buyer-friendly ecosystem. This way, using the open certifications approach, buyers can see for themselves, using publicly available dashboards, which solutions work best with each other.

What business model will accelerate adoption of OpenStack?
Its openness and vendor-agnostic nature are the keys to OpenStack's rapid adoption, together with the fact that OpenStack users are realising it can be used to add more computing capacity in minutes, as opposed to the several weeks or months it can take to buy and provision new hardware. It's this that will drive OpenStack's momentum.

How should organisations use OpenStack for the best results within a heterogeneous environment?
One of the hallmarks of open source - and particularly so for OpenStack - is the rapid pace of innovation. For example, Mirantis has partnered with VMware to make it possible to extend VMware environments with OpenStack, so that companies who have invested in ESX hypervisors can benefit from using OpenStack for their IaaS and protect their investment.

OpenStack has evolved, and continues to evolve rapidly. The concerns that CIOs might have had 18 months or two years back have been addressed as commercially-supported OpenStack distributions resolve the concerns about security, scalability, support and so on, while still giving customers all the benefits of openness and interoperability.

Where do you see OpenStack going over the next year or two?
OpenStack adoption will accelerate in the years ahead, moving even faster than Linux did a few years back. There are four key trends driving OpenStack adoption.

First, the overwhelming majority of companies building applications for strategic advantage are using cloud as a platform; as a result, they’re comfortable building applications that leverage cloud resources rather than traditional servers.

Second, open source is no longer foreign and mysterious. Most IT organizations know how to use it and manage it effectively, and understand the benefits it brings.

Third, the vast majority of infrastructure vendors recognize that OpenStack accelerates market adoption of new technologies, and as the market shifts to cloud, they want a piece of that.

Finally, the ability of SaaS companies to offer more compelling, information-driven value to their customers is a lesson in competitive advantage. Any organization that uses IT to innovate is going to look for better, faster ways to make that infrastructure more nimble, and more capable of attracting and keeping customers. The flexibility and agility of OpenStack can play a central role in achieving that competitive advantage.

www.mirantis.com


ARE YOUR COLLEAGUES AS WELL-INFORMED AS YOU ARE?

NETCOMMS europe magazine is the first, and only, pan-European journal dedicated to the network communications infrastructure marketplace. NETCOMMS europe features news, legislation and training information from industry-leading bodies, application stories and the very latest information on cutting-edge technology and products. NETCOMMS europe compiles editorial contributions from worldwide industry figureheads, ensuring that it is the No. 1 place to find information on all aspects of this fast-paced industry.

If you think your colleagues would be interested in receiving their own regular copy of NETCOMMS europe, simply register online at www.netcommseurope.com. And don't forget to renew your own subscription every now and then, to make absolutely sure that you never miss an issue of the most up-to-date publication in the industry!

LGN Media is a trading name of the Lead Generation Network Ltd, 26 St Thomas Place, Ely, Cambridge, CB7 4EX. Tel 01353 644081 www.netcommseurope.com


OPINION

CLOUD COMPUTING IN AN ON-DEMAND WORLD
Why planning is essential when it comes to the cloud

Amit Khanna looks at how the architecture of cloud can help businesses to scale more effectively and at lower cost... By Amit Khanna, Vice President - Technology, Virtusa

InfoBurst: Cloud computing - full of technology acronyms?

Introduction
Cloud computing is changing the IT landscape and redefining how software is built, deployed and managed. Enterprises have reached a stage where they can no longer ignore cloud computing, or the tangible benefits it can deliver.

As companies and employees demand more flexibility from their IT, and lower costs, cloud usage will only increase.



Yet what is it about cloud that enables this? What are the cost benefits? How do the economies of scale work?

Keeping cloud costs down
Firstly, it is worth noting that cloud computing is not a single technology; it is in fact a computing paradigm that combines many existing technologies to provide distinct characteristics, such as:

• Multi-tenancy: Allows multiple applications, users and entities to share computing resources

• Scale: Software can scale almost linearly by leveraging shared resources

• Elasticity: The resources used (compute and networking) automatically adjust to the peaks and troughs of the computing demands

• On demand: The time taken to provision and de-provision the resources is negligible

• Pay as you go: No upfront infrastructure investment required, pay as you use.

Each of these aspects of cloud computing results in lower overall costs for enterprises. For example, the fact that many clients share cloud platforms means that cloud vendors are able to realise much higher utilisations than they can from using traditional models.

This higher utilisation of resources results in cost savings, which can then be passed on to clients.

Most businesses see a huge variance in their computing requirements: examples include high demand during office hours, or in peak seasons such as holiday shopping.

Traditionally, these businesses had to invest in technology infrastructure and solutions that would support peak usage, resulting in a lot of capacity lying unused during off-peak periods.
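The waste is easy to quantify with round numbers; the figures below are purely hypothetical, but they show why paying only for capacity used can beat owning capacity sized for the peak, even at a higher unit price:

# Hypothetical figures for illustration only
peak_servers, avg_servers = 100, 30
hours_per_year = 24 * 365
owned_cost_per_server_hour = 0.50   # amortised capex plus running costs
cloud_cost_per_server_hour = 0.60   # pay-as-you-go premium per hour

owned = peak_servers * hours_per_year * owned_cost_per_server_hour
cloud = avg_servers * hours_per_year * cloud_cost_per_server_hour
print(f"Provisioned for peak: ${owned:,.0f}/yr")   # -> $438,000/yr
print(f"Elastic, pay-per-use: ${cloud:,.0f}/yr")   # -> $157,680/yr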

Now, the elastic nature of the cloud allows enterprises to scale in accordance with demand. Excess capacity can be automatically released, resulting in overall cost savings. Moreover, cloud computing allows for this elasticity with little to no manual intervention.
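A minimal sketch of such an elastic scaling rule - the thresholds and step size are invented for illustration, not any provider's actual policy - might be:

def desired_instances(current, avg_utilisation, minimum=2, maximum=20):
    """Scale out under load; release excess capacity when demand falls."""
    if avg_utilisation > 0.75:        # peak: add capacity
        return min(current + 1, maximum)
    if avg_utilisation < 0.25:        # trough: shed cost automatically
        return max(current - 1, minimum)
    return current

print(desired_instances(4, 0.90))   # -> 5 (scaling out)
print(desired_instances(4, 0.10))   # -> 3 (scaling in)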

Discussions around cloud costs usually focus heavily on operational aspects. However, there are two more important cost benefits of cloud computing: opportunity cost and cost of failure.

Opportunity cost – Cloud computing enables enterprises to respond to business needs at a much faster rate than traditional IT - for example, when an opportunity involves adding more capacity or opening an office in a new geography.

Cost of failure – The fact that cloud computing offers pay-as-you-go models obviates the need for heavy upfront capital expenditure on new products and services. This means enterprises can not only bring these products to market faster, but can also experiment a lot more, as no heavy additional investments are required.

Supporting innovation
In addition to the cost savings implicit in cloud computing, it provides other benefits, such as simplification and standardisation of IT architectures from the consumer standpoint, consolidation of infrastructure and application investments, and increased virtualisation of an organisation's entire IT landscape. Here are some of the direct and indirect ways in which cloud computing benefits organisations:

1. It levels the competitive landscape across industries – Cloud computing will profoundly shift how IT is consumed by enterprises and end consumers alike.

2. It accelerates convergence of technologies – Cloud technologies will increasingly be the platform on which other technologies, such as mobile and big data solutions, rely.

3. It creates a platform for innovation – With cloud computing providing a platform that can potentially scale indefinitely, the focus shifts from technology to business innovation.

4. It causes a shift in enterprise IT buying patterns – Enterprises that have traditionally depended on the CIO organisation for IT solutions will now have their business units consuming IT solutions directly, thanks to the simplification brought by cloud-based consumption.

So what's next?
Before plunging headlong into cloud adoption, companies will have to do the required groundwork and plan their adoption around their business needs.

It is important for enterprises to see the big picture of the impact cloud adoption will have on their long-term IT infrastructure needs. This requires careful planning, with all aspects clearly thought out before taking the step towards cloud adoption.

Different organisations will have different technology needs, based on the markets they operate in, their scale and the competitive scenario, among other considerations. Today, the focus for enterprises is not just to sell products and services, but also to create value for their customers.

While adoption of cloud computing does require companies to relinquish control in some ways, the opportunities that arise from improved performance, reliability and scalability override many of the concerns.

Cloud computing is set to change the IT paradigm permanently in the not-so-distant future, and these benefits will drive its adoption to significantly higher levels than today's.

www.virtusa.com


SECURITY

JOURNEY TO THE CLOUD: CHALLENGES POSED BY SECURITY
How the cloud brings challenges, as well as benefits

Phil Turner explains how to contain the security challenges that the cloud creates... By Phil Turner, Vice President of EMEA, Okta

Introduction
The cloud offers a host of benefits to businesses, from control over applications and ease of accessibility, to fast access and openness. Yet, despite these clear benefits, security remains a barrier to cloud adoption.

According to Okta’s research report - Identity and Management in a Cloud and Mobile World - data security risk is by far the most significant concern around the use of cloud applications within organisations, with 70 per cent of respondents citing it as a concern.

But, in reality, most information is actually more secure in the cloud than in a lot of costly on-premise infrastructures.

When it comes to cloud security, cloud businesses have to build secure data centres that are independently audited, adhere to standards - such as SOC 2 Type II - and are used by hundreds to thousands of tenants.

Add to this the reputational and business damage that a cloud provider would suffer should their data not be secure and it’s easy to see why it’s in their vested interest to uphold high levels of security. So why then are so many businesses concerned about security in the cloud?

Why visibility is a problem
The real danger of cloud adoption arises from the lack of visibility and control. While the cloud provides employees with the freedom to choose, control and manage their own applications, businesses now have to contend with a whole host of different devices and applications, not all of which are vetted by the IT department.

According to the report, more than a third (37 per cent) of employees are believed to be accessing a minimum of eight cloud applications a month without IT jurisdiction.

But in reality the problem is likely to be much worse than estimated, with only nine per cent of IT decision makers highly confident that they have full visibility of all the applications being used by their employees. As a result, it's no surprise that only six per cent are confident that cloud applications are integrated into their existing governance and IT security policies.

The issue of visibility also stretches beyond the internal enterprise, with access to cloud applications now encompassing suppliers, consultants or contractors.

Indeed, 70 per cent of organisations use portals comprising multiple applications to engage with partners, customers and other external users, with nearly two-thirds (64 per cent) needing third parties to access cloud apps at least once a month.

By opening their virtual doors to partners and suppliers and allowing them access to data and information, businesses are also opening the door to a number of risks.

Today, a supply chain can consist of tens, or even hundreds, of different suppliers, each of which provides businesses with another potential point of failure, or entry point for a cybercriminal to attack.

As well as the risk of malicious attacks, there’s the risk of counterfeit products entering the supply chain or a loss of intellectual property caused by data leakage, whether intentional or accidental. There’s also the risk of ideas being copied, particularly in innovative sectors such as the high-tech, automotive and pharmaceutical industries. In this new complex environment, what can businesses do to ensure their sensitive data remains protected?

Minimising the risk
There are a number of simple steps that businesses can take in order to secure applications and multiple access points. Rather than relying solely on passwords to authenticate users, multifactor authentication can ensure users are who they say they are, and reduce the risk of unauthorised access.
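One widely used second factor is the time-based one-time password (TOTP); the sketch below, using the open-source pyotp library with a freshly generated (illustrative) shared secret, shows the core of the check:

import pyotp

# Shared secret provisioned once to the user's authenticator app (illustrative)
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()             # what the user's device would display right now
print(totp.verify(code))      # True: the second factor checks out
print(totp.verify("000000"))  # almost certainly False: a password alone is not enough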


InfoBurst: Data extracts from Okta's report, Identity and Management in a Cloud and Mobile World...

Another way to safeguard applications is to provide a single access point to all applications, such as a centralised portal. This enables businesses to quickly and easily automate all customer and partner user-management functionality.

IAM (Identity and Access Management) has become an important tool for businesses looking to regain control of their IT security, with 57 per cent believing the adoption of cloud-based services has made IAM more of a priority in recent years.

Services such as cloud-based IAM can not only provide businesses with a better way to secure and control an order of magnitude more users, devices and applications spanning traditional company and network boundaries, but also let businesses see who has access to applications and data, where they are accessing them, and what they are doing with them.

Conclusions: stand still or differentiate
The cloud is the next logical architecture for delivering business applications at massive scale and with the right cost model, but it's clear that security is still seen as both a benefit and a barrier to cloud adoption. Rather than shying away from the cloud due to security concerns, businesses should look to cloud providers for support and help in alleviating any concerns around security, access and control.

Companies can elect to stand still and bury their heads in the sand, or differentiate themselves through new business models enabled by an agile cloud infrastructure. To me, it comes down to people - that is the only element I think will truly hold back cloud adoption. Security issues have always been around and they have always been addressed; for some people, they are a useful delay to stop the inevitable change that is coming.

 www.okta.com

Chart: Integration with infrastructure and covered by IT policies - Not confident at all (2%), Not particularly confident (20%), Somewhat confident (72%), Highly confident (6%).

Chart: Full visibility by IT Department - Not confident at all (4%), Not particularly confident (33%), Somewhat confident (54%), Highly confident (9%).

Chart: Journey to the cloud - Cost reduction (62%), Creating/maintaining IT security (60%), Efficient resource utilisation (47%), Driving business growth/innovation (42%), Improving IT/business alignment (39%), Supporting new technologies, e.g. mobile, social, cloud (28%), Risk management and regulatory compliance (28%), Speedy ROI on projects (20%).


OPINION

WHY PLANNING SHOULD BE CENTRAL TO YOUR CLOUD ADOPTION PROCESS
Breaking down the planning process into more manageable steps

Russell Cook explains how breaking down the cloud planning process can make the task a lot more manageable… By Russell Cook, Managing Director, SIRE Technology

InfoBurst: The cloud: an amalgam of many different technologies...

Introduction
Unlike computers in the workplace, the evolution of cloud computing has been quite rapid - instead of the three decades of evolution we have seen with PCs, cloud technology has evolved in just a few short years to its current state of play: an economical and highly flexible IT resource that can be scaled up or down as and when required.

For most organisations, however, implementing a cloud platform is a little more complex than opting for an off-the-peg set of office PCs and a server, and installing the system over a weekend - it takes a fair bit of planning, we have observed.

This planning - as with all good preparations - is perhaps best undertaken by breaking the process down into a series of four easily-managed steps: analysis, risk assessment, due diligence and implementation.

The initial step, analysis, involves identifying the benefits and risks to your organisation, with benefits splitting into financial aspects, flexibility and scalability - and risks breaking down into the challenges of standardisation, the uncertainty of flexible pricing, and licensing issues.

On the risk assessment front, managers need to look closely at compliance issues very early on in the planning process, covering topics such as data protection, legal compliance - both from a UK and a European perspective - and understanding where your company's data is going to be stored.

This is an important issue, we have observed, as cloud service providers often duplicate their data - your data - for resilience purposes, but do not always tell their clients where these backup copies are located.

This can be a problem on the compliance front, as data stored in cloud resources outside of the European Union can fall foul of data privacy and security legislation.

And then there is the complex issue of whether a US company is involved with the cloud service provider in any way, as the US Patriot Act requires all US companies and their subsidiaries to allow the US government - and its agencies - complete access to their data, including the cloud files of their clients.

The due diligence step then involves discussing the project with potential suppliers, asking questions about the provision of support services, who ultimately owns the data, what layers of contracts with third parties exist, and what lock-ins are imposed.

You should also be asking questions about what will happen to your data when the contract is up and your data is transferred to another supplier, or what plans are in place in the event that the supplier goes out of business, for whatever reason.

You may, for example, want to know what facilities exist for you to obtain direct physical access to your cloud data and what are the logistics involved with completing a site visit and removing data on suitable media, such as tape cartridges or similar.

It is also necessary at this stage to decide which type of cloud resource is the best for your company - e.g. public, private or hybrid - and which applications are provided by the cloud vendor e.g. SaaS (Software as a Service), PaaS (Platform as a Service) and so on.

The final stage - implementation - is arguably the easiest: the deployment and test process, followed by an effective pilot programme and its evaluation, should be a breeze - assuming the earlier stages have been completed reliably.

Business continuity
One of the most frequently overlooked aspects of the cloud planning process is business continuity (BC), an element that is often confused with disaster recovery.

BC involves planning for a worst-case scenario - and then stepping back to lesser scenarios, and planning accordingly.

We take BC issues very seriously here at SIRE, and in June of this year we joined the Business Continuity Institute (BCI), an organisation that has established itself as the leading international institute for business continuity and certification for both organisations and individuals keen to be recognised for a professional approach to this relatively new area of technology and business.

Being accepted as members of the BCI gives SIRE's services and knowledge real credence, and allows the company to display its BCI membership as well as participate in some of the organisation's initiatives and campaigns.

Conclusions
There is a lot of talk about cloud computing, and many SMEs may be wondering whether it can really benefit them, or is just for larger organisations.

The answer, we have observed, is that, yes, cloud computing is the next stage in the Internet’s evolution and, when managed correctly, provides the means through which everything, from computing power to computing infrastructure, applications and business processes can be delivered to your business as a service, wherever and whenever you need it.

Our observations also show that the cloud offers any organisation significant benefits, including flexibility and business continuity, regardless of its size or the nature of its business.

If effective planning and suitable allied processes are carried out, we have found that clients can enjoy the considerable cost savings that accrue from a well-planned and implemented cloud process.

It is worth remembering that the economic imperative behind the cloud - and the lack of human interaction in automated cloud service provision - can sometimes lure clients into reducing the selection process to a 'lowest cost is best' route.

This is actually a false economy, as opting for the lowest-cost service over a slightly more expensive one may lead to extra costs in the longer term. Our observations suggest that a premium-economy approach to buying in business cloud services is often the better option over time.

www.sire.co.uk


SECURITY

SECURITY QUESTIONS TO ASK YOUR CLOUD PROVIDER
Reducing security risk with due diligence

Stephen Coty explains some of the questions you should be asking your cloud service provider... By Stephen Coty, Chief Security Evangelist, Alert Logic

InfoBurst: Securing the cloud - a complex process that needs to be carried out correctly...

Introduction
The cloud is here and it's only set to grow, because its scalability and on-demand capacity present the perfect medium to support businesses and their need to be agile.

The benefits are many, ranging from the ability to more effectively manage costs (which makes the finance team happy) to not having to worry about installing and maintaining hardware in data centres that don’t have enough space, power or cooling (which keeps the IT team happy).

Offloading the burden on to a cloud provider that [says it will] take care of everything from performance and storage to email is certainly an attractive proposition.

However, this doesn't mean that these are the only considerations to take into account when undergoing a cloud project. Companies need to do their due diligence and ensure that the cloud is the right choice for their business, just like any other business decision.

Part of this has to be thinking about the scale and type of information that will be placed in, and in transit within, a cloud provider's infrastructure. Businesses that take advantage of cloud infrastructure must therefore give careful deliberation to the security of the data they put in the cloud, whether they are about to make the move or have already done so.

This is for a number of reasons.

The same types of attack typical of on-premise data centre environments are moving to the cloud – what used to be historically on-premise attacks, such as malware, botnet and brute-force attacks, are now targeting cloud environments.

A big driver for this is that businesses are starting to deploy traditional enterprise applications, such as ERP and VDI (Virtual Desktop Infrastructure), in the cloud. Hackers who see this happen run vulnerability scans and brute-force attacks that attempt to siphon off valuable company data, in the hope of finding and taking advantage of lax security policies in the cloud. Furthermore, as more end-user applications move to the cloud, malware and botnet attacks follow suit.

The breadth and depth of attacks means that threat diversity in the cloud is on the rise – threat diversity is basically a measure of how many different types of attack companies are facing.

This year, threat diversity in the cloud increased to rival that of on-premise data centres. This means that companies need to be just as vigilant with the same security sophistication in the cloud that would normally apply to protect an enterprise’s on-premise data centre.

The point solutions typically relied upon to combat these threats are not enough – To gauge the effectiveness of security solutions, such as anti-virus protection, in major public clouds around the world, new patterns of attacks and emerging threats were observed through a honeypot project.

One particularly interesting and disturbing observation was that 14 per cent of the malware collected was considered undetectable by 51 of the world’s top anti-virus vendors.

Those are the cold, hard facts - but that is certainly not to say that businesses should stop using the cloud; there are far too many benefits.

The good news is that there is a lot that organisations can do to protect themselves in the cloud; and the first step is to get educated on what their businesses and applications require from a compliance and security posture.

The following guide to the questions you should be asking your service provider about security in the cloud is a good starting place. Make sure that the cloud service provider can answer these questions confidently and comprehensively, so you feel confident that it takes the security of your business-critical data seriously.

1. What is their data encryption strategy and how is it implemented?
Encryption is the industry ideal for protecting critical data by making it unreadable to unauthorised parties. While there are many considerations when it comes to encryption, the cloud service provider should preferably be able to answer questions such as who controls the keys and what standard of encryption is used.
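As a concrete reference point when weighing the answers, AES-256-GCM is one commonly cited standard for encrypting data at rest; the sketch below uses the Python cryptography package, and the key generated on the first line is exactly the artefact whose ownership you are asking the provider about:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # who controls this key is the crucial question
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"cardholder data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"cardholder data"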

2. What is the hypervisor and provider infrastructure-patching schedule?
As previously explained, malware and exploits continue to rise, so it is important that the cloud service provider patches and updates its infrastructure on a regular and frequent basis. This will minimise the threats to its customers' data by fixing any “holes” that malicious actors could exploit to gain access to their systems.

3. How do you isolate and safeguard my data from other customers?
Due to huge capacities, cloud providers will undoubtedly (unless specified as private) house data for more than one company (multi-tenancy). Ask how they segment the data, what controls they have in place to make sure data isn't accidentally shared, and how those controls are implemented.

4. How is user access monitored, modified and documented?
Naturally, where security is concerned, it is vital to know who is accessing the data so that it remains uncompromised. It is also important that separation of duties is in place, so that the service provider's administrators do not have end-to-end authority and control over your data.

5. What regulatory requirements does the provider subscribe to?
There are a number of regulatory controls that a cloud service provider can adhere to in order to demonstrate best practice and compliance. If you are putting cardholder information in the cloud, for example, you will want to make sure that the provider is PCI compliant. If it adheres to industry standards, such as ISO 27001, it is a good indication that it takes security and the integrity of your data seriously.

6. What is the provider's back-up and disaster recovery strategy?
This is often referred to as resiliency. Like most services, occasional downtime is an inevitability. Find out what the provider's track record is on availability, and make sure there is transparency into its infrastructure. It may very well be that you will be responsible for your own backup of information, so make sure the boundaries are defined and each party knows its responsibilities. The recent Code Spaces demise, for example, could have been avoided if the company had kept a separate backup of its infrastructure: without one, it lost everything.

7. What visibility will the provider offer your organisation into security processes and events affecting your data, from both the front and back end of your instance?
These are just some of the questions you may want to ask a cloud service provider about the security of sensitive information residing in the cloud.

The level of confidence and completeness of the answers will help you quickly judge how safe your data is with the cloud service provider, and how seriously it takes the security of the data that backs and fuels your business.

www.alertlogic.com


INFRASTRUCTURE

UNDERSTANDING CLOUD DISASTER RECOVERY SERVICES
How the cloud can make your IT systems more robust

Peter Godden looks at how virtualisation is helping organisations strengthen their disaster recovery positions. By Peter Godden, Vice President of EMEA, Zerto

InfoBurst: Zerto's technology: creating a powerful Disaster Recovery platform...

Introduction
Judging by the stream of TV advertisements and the buzz in the technology press, cloud computing is a methodology that can solve deeply intractable problems in the data centre. However, many organisations adopt cloud to help solve one initial issue, using it as both a remedy and a test bed to gain an understanding of the potential. A survey last year at the Amazon Web Services Global Customer and Partner Conference found that around 60 per cent of respondents cited cost savings and disaster recovery as the factors most heavily driving cloud storage adoption.

However, the desire to use the cloud is tempered by practical realities and additional fears. To quantify this position, Zerto conducted a further survey, which found that cost and complexity are the two biggest concerns, with 'difficult to manage' coming in a close third. Even among companies that have a DR implementation, only 23 per cent are confident their DR will work in a real emergency.

One of the fundamental problems with using the cloud for IT recovery is that current array-based replication techniques are not well suited to the increasingly virtualised workloads that are becoming more common across the IT landscape.

Array-based replication products are provided by the storage vendors and deployed as modules inside the storage array. Examples include EMC SRDF and NetApp SnapMirror. As such, they are single-vendor solutions, compatible only with the specific storage solution already in use.

Array-based replication is currently the most popular replication method in use in organisations, yet it does not have the granularity that is needed in a virtual environment, or to replicate these virtual environments into the cloud.

Mapping across
For example, mapping between virtual disks and array volumes is complex and constantly changing, creating management challenges and additional storage overhead.




Often, multiple virtual machines reside on a single array volume, or logical unit. An array-based solution will replicate the entire volume even if only one virtual machine in the volume needs to be replicated. This under-utilises the storage and results in what is known as “storage sprawl”.

Because array-based replication lacks the visibility and granularity to identify specific virtual machines in different locations, organisations tend to put all disks from an enterprise application into a single storage logical unit, when in fact there are operational advantages to splitting them up over a number of logical units.

Array-based replication has several other important disadvantages that limit its suitability for a cloud-based DR position. Essentially, it is designed to replicate physical entities rather than virtual entities. As a result, it doesn't "see" the virtual machines and is oblivious to configuration changes - and due to their dynamic nature, virtual environments have a high rate of change.

As the starting point for a successful cloud DR strategy, a growing trend is to use hypervisor-based replication technology, which protects virtual machines (VMs) at the virtual machine disk file level rather than at the LUN or storage volume level. Replication can thus be done without the management and TCO challenges associated with array-based replication.
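To make the write-capture mechanism described below concrete, here is a minimal conceptual sketch in Python of a hypervisor-level write-splitter. It illustrates the general technique only; every name is hypothetical, and this is not Zerto's implementation.

import queue

class ReplicatingVirtualDisk:
    """Wraps a VM's virtual disk: every write is applied locally and a
    copy is queued for asynchronous shipping to the cloud recovery site."""

    def __init__(self, local_disk, replication_queue):
        self.local_disk = local_disk
        self.queue = replication_queue

    def write(self, offset, data):
        self.local_disk.write(offset, data)   # normal I/O path, unchanged
        self.queue.put((offset, data))        # cloned write for the DR site

def ship_writes(replication_queue, send_to_recovery_site):
    """Would run on a background thread, draining cloned writes to the
    recovery site in order, so the remote copy trails production slightly."""
    while True:
        offset, data = replication_queue.get()
        send_to_recovery_site(offset, data)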

Because it is installed directly inside the virtual infrastructure (as opposed to on individual machines), hypervisor-based replication is able to replicate within the virtualisation layer itself: each time the virtual machine writes to its virtual disks, the write command is captured, cloned, and sent to the cloud recovery site. This is more efficient, accurate and responsive than prior methods.

Hypervisor replication
Hypervisor-based replication is fully agnostic to storage source and destination, natively supporting all storage platforms and the full breadth of capabilities made possible by virtualisation, including high availability, clustering, and the ability to locate and replicate volumes in motion.

Hypervisor-based replication technologies are becoming standard in a virtualised environment, but even with this technology there are still a number of options to consider: although the cloud is well suited to DR, it is not a one-size-fits-all approach. It is helpful to define the options, as this helps in understanding the benefits and limitations of the different cloud-based approaches.

The first type of approach is a private cloud, where business continuity and disaster recovery sit between two or more geographically separate sites, all under the control of the enterprise's IT team and deployed as a private cloud.

This approach allows enterprises to create a flexible and dynamic environment in which their IT departments can scale and mobilise applications depending on needs and resources at any point in time, by delivering IT infrastructures across multiple geographical sites.

Taking this approach also helps enterprises to distribute production load evenly between multiple data centres and recovery sites. However, it is more complicated to set up and manage, and places more of the technical heavy lifting on the internal IT department.

Conclusions
The advent of virtualisation and the growth of cloud computing offer a significant opportunity to strengthen disaster recovery processes. With hypervisor-based replication technologies and the benefits of private and as-a-service options, the cost and complexity of disaster recovery is falling, offering the economies of scale to drive down costs even further.

The Zerto 2.0 option

Whatever path enterprises choose in their application deployment, Zerto provides a BC/DR solution that fits.

Zerto Virtual Replication is the only cloud-ready BC/DR platform providing enterprise-class protection to applications deployed in virtualised environments and private or public clouds.

The technology enables Disaster Recovery-as-a-Service and true cloud BC/DR for cloud service providers and enterprise customers respectively.

Enterprises can expand BC/DR support to include not just the traditional data centre, but also smaller branch offices and other sites through multi-site capabilities. Additionally, this lowers barriers to entry for the enterprise to evaluate the cloud for other applications in the environment, perhaps a tier 2 application.

The multi-tenancy features greatly increase efficiencies at the disaster site, especially if there are geographically separate production sites replicating over to the same disaster site.

One infrastructure, managed centrally through VMware vCenter and vCloud Director, can now simplify management and reduce operational costs.

CSPs (Cloud Service Providers) are able to attract new customers by offering a cost-effective service that enables customers to effectively evaluate the CSP without complete dependency.

CSPs can make the price very attractive to enterprises, as they do not have to create a completely duplicate infrastructure with matching hardware, software and networking. Additionally, they do not need a broadly specialised team, and can focus on what they have in their environment.

Finally, with true multi-tenancy, economies of scale can be leveraged to further drive down costs for customers.

www.zerto.com


InfoBurst: Breaking down the cloud planning and adoption process into small segments can make life a lot simpler...

TAKING YOUR FIRST STEPS INTO THE CLOUD
Strategies for adopting the cloud
Gordon Howes discusses the strategies that companies need to adopt when embracing the cloud... By Gordon Howes, Director, VMhosts


InfoBurst: Treading a cloud tightrope - simply a question of balance...

Introduction
Anyone in the cloud industry knows that cloud computing - and indeed hosted services - are nothing new for businesses. Companies have been adopting cloud technologies for many years now, and cloud deployment is often the first choice when rolling out a new application or service. That said, does the same apply to companies of all sizes? Are smaller SME companies well versed enough to know about the benefits of hosted services?

The cloud can be a daunting topic for many businesses; some will already know the benefits of moving some or all of their services into the cloud, but may not know who to turn to or what the first steps are to make it all happen.

Other companies may have little to no knowledge of the technology and the process involved - and will often find the whole topic very confusing.

From connectivity to costs, there are a number of questions that businesses need answered before they take their first step into the world of cloud computing.

Many of them will be obvious to people already leveraging the cloud, but for those not in the know, they are questions that need answering before a move to the cloud is viable.

There will undoubtedly be more questions than the following few; however, as a cloud provider, these are the ones most commonly asked of us.

Isn't the cloud expensive?
This is often a common misconception. In traditional IT purchasing, a server or a piece of hardware is acquired for a particular purpose. It may be a new piece of software that needs a dedicated operating system to run on, or maybe an upgrade to an existing application. Whatever the reason, when purchasing hardware there is usually an element of guesswork involved that can lead to large up-front costs.

Many organisations don't have the time or resource to run capacity plans for every application or service. Typically, when an IT department is considering purchasing a server to perform a particular process, the company is investing in hardware and software and will usually expect it to last around three to five years.

As it is unsure what the company's requirements will actually be in three to five years' time, there has to be a bit of guesswork - albeit an educated guess.

If not enough hardware is specified, the company will be purchasing expensive upgrades before it knows it. If over-specified, the company has not made the best use of its large up-front investment. More often than not, for fear of under-specifying hardware requirements, many IT departments over-estimate the hardware they need, leaving the business with a big up-front bill and a woefully under-utilised server.

Cloud-hosted services often work on a monthly costing model with little or even no up-front investment. The guesswork is more or less taken out of the equation, as hosted server resources can be scaled up or down when the business needs it. Resource is calculated on the application's performance at that time, rather than what it might be doing further down the line.

Cloud computing is utility-based computing in the same way that gas and electricity are utility-based energy resources. If more or less resource is needed, the price can be scaled up and down easily depending on the usage requirements.
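As a purely hypothetical illustration of that utility model, the short Python sketch below totals a month's bill from metered usage. The rates are invented for the example and are not any provider's real price list.

# Hypothetical utility-style billing: pay only for what was consumed.
RATE_PER_VCPU_HOUR = 0.02      # GBP, invented for illustration
RATE_PER_GB_RAM_HOUR = 0.01    # GBP, invented for illustration

def monthly_bill(usage_records):
    """usage_records: list of (hours, vcpus, ram_gb) entries for the month."""
    total = 0.0
    for hours, vcpus, ram_gb in usage_records:
        total += hours * (vcpus * RATE_PER_VCPU_HOUR
                          + ram_gb * RATE_PER_GB_RAM_HOUR)
    return total

# A server run small for half the month, then scaled up for a busy fortnight:
print(monthly_bill([(360, 2, 4), (360, 4, 8)]))   # 28.8 + 57.6 = 86.4 (GBP)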


By paying for cloud resources in this way, businesses can more effectively budget for their computing needs with minimum capital expenditure outlay on day one.

How do I know I'm ready for the cloud?
This can be looked at in one of two ways: being ready for the cloud from a physical point of view, or being ready from a business point of view.

With regards to physically being ready - generally speaking, cloud is all about connectivity. As long as you have relatively decent connectivity where you are connecting from, then typically you'll be fine.

Have a chat with a cloud service provider and they will be able to advise you on how adequate (or inadequate) your connectivity is for the solution you are looking for.

You'll often be pleasantly surprised: there is a huge number of hosted services that work over relatively slow connections.

Being ready for the cloud from a business perspective can be trickier to answer. As the term cloud is quite broad and can encompass a variety of different services, there isn't a one-size-fits-all solution.

Take some time to audit your current applications and processes; often, if a problem has been around for a long time, users may silently accept that it's "just the way it is" rather than making the problem known.

It is quite often the case that businesses think about moving services to the cloud when the hardware becomes end of life and needs replacing.

Instead of finding the capital expenditure required to purchase new equipment, check to see if the application or service would work in a hosted model. As an example, services such as email hosting, remote access and backup are all extremely viable hosting options.

In fact, many businesses have no idea they are actually already making use of hosted services. If you've ever used applications like Dropbox or Microsoft's own Office 365, then you are already making use of the technology.

What should I look for in a provider?
With a vast array of cloud providers out there, how can you make an informed choice about whom to choose as a hosting partner? Here are a few pointers to help you make the best choice for your business:

Check for any certifications or codes of practice - seeing that a provider adheres to recognised standards helps set your mind at rest that the provider has been assessed against, and is committed to, regulated guidelines. Typically this means it has processes and procedures in place to help protect your data and services.

Ask for a data centre tour - sometimes it can help you to understand and trust the hosting provider. Physically seeing where your data is held often goes a long way towards trusting the provider. Be wary of any company that refuses a tour without a very good reason - it may not be all that it seems.

Check for testimonials, or ask for references - speaking to a provider's existing customers will go a long way towards making an informed choice.

Ask if the provider has any disaster recovery or business continuity plans of their own.

Check whether the provider can offer any geographically redundant high-availability or disaster recovery options - and if they do, ask what they are and how they work.

Do I have to move all my systems into the cloud?
Not at all - although of course you are welcome to do this if you want to, and in some cases it makes perfect sense. Cloud is an enabling technology: it complements your existing infrastructure and allows you to extend your IT department by moving certain processes to it.

A good example of this is backup and disaster recovery services as these can be very expensive and problematic to run in house. By moving your backup to a cloud provider, you are immediately making use of the cloud without moving any of your company’s servers to a hosted service.

Will my company have to hire any cloud experts?

Not at all - it is the responsibility of the cloud provider to maintain and manage the infrastructure the service is provided on, meaning there is no requirement to employ or hire any cloud experts. If you are a company that uses outsourced IT, have a chat with them about migration plans for moving to the cloud.

Also ensure you check with the cloud provider. Most of the time they will offer some free migration advice, although any complex migration may have a charge attached to it.

www.vmhosts.co.uk

InfoBurst: Planning your cloud component strategy - not as easy as it first looks...

There's more to UPS than meets the eye

Cutting the Cost of UPS Technology By Kenny Green, Technical Support Manager, UPSL

Three Phase Power
Designed to bring maximum power to your servers, the G4 three-phase range is built to exacting standards to ensure maximum safety for your facility.

Thermal overload protection or fused outlets mean that you only lose a single socket in the event of a fault, not the whole PDU, thereby removing the risk of a total rack failure.

Maximise your rack space: specify mixed-connector PDUs built to your exact requirements to give you just the solution you are looking for.

Available with: • C13 C19 Locking outlets • C13 C19 Fused outlets • BS1363 UK outlets • Continental outlets • Individual circuit protection per outlet • Overall metering of V, A, kWh, Harmonics, PF.

G4 MPS Limited, Unit 15 & 16 Orchard Farm Business Park, Barcham Road, Soham, Cambs. CB7 5TU
T. +44 (0)1353 723248 F. +44 (0)1353 723941 E. [email protected]

Vertical rack mount | Horizontal rack mount

THREE PHASE POWER


WILL LINUX CAUSE PROBLEMS WITH LOAD BALANCERS?
How next-gen Linux containers could cause problems
Richard Davies discusses some of the current challenges with cloud-based load balancer technologies... By Richard Davies, CEO, ElasticHosts

Introduction
Modern IT infrastructure needs to be highly flexible as the strain on servers, sites and databases grows and shrinks throughout the day.

Cloud infrastructure is meant to make scaling simple by effectively outsourcing and commoditising your computing capacity so that, in theory, you can turn it on and off like a tap.

However, most approaches to provisioning cloud servers are still based around the idea that you have fixed-size server “instances”, offering you infrastructure in large blocks that must each be provisioned and then configured to work together.

This means your infrastructure scaling is less like having a handy tap and more like working out how many bottles of water you’ll need.

There are traditional approaches to ensure all these individual instances work efficiently and in unison (so that those bottles of water don't run dry or go stagnant); one of the more popular tools for cloud capacity management today is the load balancer. In fact, load balancers are quite often bought alongside your cloud infrastructure.

The load balancer sits in front of your servers and directs traffic efficiently to your various cloud server instances. To continue the analogy, it makes sure everyone drinks their fill from the bottles you’ve bought, using each bottle equally, and no one is turned away thirsty.
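As a toy illustration of that dispatching role (not any particular product), a round-robin dispatcher can be sketched in a few lines of Python; the backend addresses are hypothetical, and a production balancer would also health-check and weight its targets.

import itertools

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # three fixed-size instances
pool = itertools.cycle(backends)                 # endless rotation over them

def route(request):
    """Hand each incoming request to the next instance in rotation."""
    return next(pool)

for req in range(6):
    print(route(req))   # 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.1, ...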

Horizontal scaling
If your infrastructure undergoes more load than you have instances to handle, then the load balancer makes an API call to your cloud hosting provider and more servers are bought and added to the available instances in the cluster. Each instance is a fixed size and you start more of them, or shut some down, according to need. This is known as horizontal scaling.
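A minimal sketch of that scaling loop is shown below. The provider endpoint, credentials, instance sizes and thresholds are all hypothetical, and a real balancer would also drain connections and health-check new instances before using them.

import time
import requests

API = "https://api.example-cloud.com/v1"           # hypothetical provider API
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical credential

def average_load(instances):
    """Mean CPU utilisation (0.0-1.0) across the cluster."""
    return sum(i["cpu_utilisation"] for i in instances) / len(instances)

while True:
    instances = requests.get(API + "/instances", headers=HEADERS).json()
    if instances:
        load = average_load(instances)
        if load > 0.8:      # cluster running hot: buy another fixed-size instance
            requests.post(API + "/instances", headers=HEADERS,
                          json={"size": "4gb", "image": "web-server"})
        elif load < 0.2 and len(instances) > 1:    # mostly idle: shut one down
            requests.delete(API + "/instances/" + instances[-1]["id"],
                            headers=HEADERS)
    time.sleep(60)          # re-evaluate once a minute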

InfoBurst: Containers - can be filled, emptied at will...


Existing virtualisation technology also allows individual server instances to be scaled vertically after a reboot: a single instance can be resized, on reboot, to accommodate increased load.

This would be like going from a small bottle of water to a five-gallon demijohn when you know that load will increase. However, frequently rebooting a server is simply not an option in today's world of constant availability, so most capacity management is currently done by adding servers, rather than resizing them.

There are, however, many challenges with this traditional horizontal scaling approach of running multiple server instances behind a load balancer.

The current situation, wherein extra servers must be spun up to handle spikes in load, means greater complexity for those who have to manage the infrastructure, greater cost in having to scale up by an entire server at a time, and poor performance when load changes suddenly and extra servers can't be started quickly enough.

Since computing power is provisioned in these large steps, but load varies dynamically and continuously, enterprises are frequently paying to keep extra resources on standby just in case a load spike occurs. For example, if you have an 8GB traditional cloud server which is only running 2GB of software at present, you are still paying for 8GB of provisioned capacity. Industry figures show that typical cloud servers may have 50 per cent or more of expensive - but idle - capacity on average over a full 24/7 period.

The latest developments in the Linux kernel present an interesting alternative to this approach. New capabilities of the kernel, specifically namespaces and control groups, have enabled the recent rise of containerisation for Linux cloud servers, in competition with traditional virtualisation.
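To make the control-groups point concrete before the next section: under cgroups (v1), a host can resize a running container simply by rewriting its resource limits. The sketch below is illustrative only - the cgroup path is hypothetical, and production platforms wrap this in safer, automated tooling.

# Illustrative only: "vertically" resizing a running Linux container by
# rewriting its cgroup v1 memory limit. No reboot, no extra instances.
CGROUP = "/sys/fs/cgroup/memory/machine/customer-vm"   # hypothetical path

def set_memory_limit(limit_bytes):
    """Grow or shrink the container's memory allowance while it runs."""
    with open(CGROUP + "/memory.limit_in_bytes", "w") as f:
        f.write(str(limit_bytes))

# Example: expand from 2GB to 6GB during a load spike, then shrink back.
set_memory_limit(6 * 1024**3)
# ... spike passes ...
set_memory_limit(2 * 1024**3)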

Container-based isolation
Container-based isolation, such as Linux Containers (LXC), Docker and Elastic Containers, means that server resources can be fluidly apportioned to match the load on the instance as it happens, ensuring cost-efficiency by never over- or under-provisioning. Unlike traditional virtualisation, containerised Linux cloud servers are not booted at a fixed size; instead, individual servers grow and shrink dynamically and automatically according to load while they are running.

Naturally, there are certain provisos to this new technology. Firstly, as it currently stands, a Linux host can only run Linux-based cloud servers. Also, the benefit of not needing a load balancer at all is most relevant to servers which scale within the resources of a single large physical host server. Very large systems that need to scale beyond this will still require load-balanced clustering, but can also still benefit from vertical scaling of all the servers in that cluster.

Conclusions
Vertical scaling of containerised servers can therefore handle varying load with no need to pre-estimate requirements, write API calls or, in most cases, configure a cluster and provision a load balancer. Instead, enterprises simply pay for the resources they use, as and when they use them. Going back to our analogy, this means you simply turn the tap on at the Linux host's reservoir of resources. This is a giant leap forward in commoditising cloud computing, and takes it closer to true utilities such as gas, electricity and water.

www.elastichosts.co.uk

InfoBurst: Load balancing - think of the process as tapping a series of containers...

USING OPENSTACK IN AN ALL-IP ENVIRONMENT
Deutsche Telekom taps into the cloud
Axel Clauberg explains how OpenStack has been the key to a new all-IP triple play network offering... By Axel Clauberg, Vice President Aggregation, Transport, IP and Fixed Access, Deutsche Telekom

Introduction
Deutsche Telekom is piloting TeraStream, an all-IP network that delivers triple play and other services from the cloud, as a model for next-generation operator networks.

TeraStream is also a proving ground for software-defined networking (SDN) and network functions virtualisation (NFV), as Deutsche Telekom looks to automate and orchestrate cloud services in order to launch new revenue-generating services and adapt to customer needs more quickly.

Deutsche Telekom has partnered with A10 Networks to develop a carrier-grade, IPv4-over-IPv6 'softwire' solution as a virtualised network function, enabling Deutsche Telekom to differentiate and scale cloud services.

A10 Networks’ software-based and API-driven architecture, commitment to open standards like OpenStack, and a willingness to create innovative solutions were key to helping Deutsche Telekom develop what is widely regarded as one of the most innovative service provider networks today.

The challenge
• Build a new, elastically scalable model for the core central-office data centre, optimised for performance, low latency and cost
• Deliver IPv4 services to customers in a native IPv6 network
• Automatically provision IPv4 and other L4-7 services quickly and efficiently
• Architect in compliance with core ETSI NFV documents
• Maintain the prime directive of simplicity and openness

The results
• Increased business agility with a virtual carrier-grade networking service and pay-as-you-go licensing based on A10 Networks' cloud services architecture
• Differentiated services on a per-subscriber basis
• Reduced time-to-deploy for the IPv4-over-IPv6 softwire service, with highly responsive partners
• Deutsche Telekom TeraStream virtualises IPv4 services with vThunder CGN

Hyper-connected
Today's hyper-connected world has not been kind to service providers. The demand for broadband has exploded, as customers want always-on connectivity for work and play, but don't want to pay a premium for their growing bandwidth consumption. In fact, fierce competition among traditional telcos, cable operators and mobile operators is driving ARPU (Average Revenue Per User) lower and lower.

Capturing new market growth, such as over-the-top (OTT) video and cloud services, requires innovation and speed. Yet many service providers are hampered by the complexity of their networks, which drives up lead-time and cost, while their more nimble competitors and OTT service providers deliver services that are faster, cheaper and better. Traditional service delivery times, which require weeks or months to configure using conventional networking technologies, are no longer competitive.

Innovation and agility
Deutsche Telekom is at the vanguard of this change. As a leader in next-generation operator networks, Deutsche Telekom is piloting TeraStream, an all-IP cloud-enabled network, at Hrvatski Telekom in Croatia.

In TeraStream, Deutsche Telekom says it has re-imagined the network to deliver all services, including voice, IPTV and Internet access, as cloud services that are provisioned on demand.

Deutsche Telekom has taken bold steps to fundamentally change how it delivers new services faster, at a lower cost and with a better user experience. TeraStream is an integrated packet-optical network that runs IPv6 in the core and is built on an infrastructure cloud model.

TeraStream has drastically simplified the network architecture and embraces the concepts of SDN (Software-Defined Networking) and NFV (Network Functions Virtualisation), including software appliances, COTS (Commercial Off-The-Shelf) hardware, and automated provisioning and service orchestration.

Figure 1: TeraStream is a model for next-generation operator networks - an IPv6 network that's built on an infrastructure cloud model.

“We designed TeraStream as an architecture that breaks many of the rules on the operator side,” said Axel Clauberg, Vice President of Aggregation, Transport, IP and Fixed Access at Deutsche Telekom AG.

“The attitude of ‘things-were-always-done-this-way’ doesn’t exist here. We questioned all layers and all protocols in today’s network, and asked ‘how would you run an efficiently managed IP network moving forward?’ We realised that if we truly wanted to change our cost base, we needed to change the model,” he explained.

TeraStream is an open multi-vendor network, which allows for greater innovation and avoids vendor lock-in.

“It is really key for operators to build a foundation based on an open platform,” said Clauberg. “We don’t want a dependency on a single vendor in our critical infrastructure.”

TeraStream uses OpenStack for cloud orchestration, allowing it to control the compute, storage and network resources in its data centres, while empowering customers to provision resources easily. TeraStream virtualises network functions so they can be chained together to create customised communications services quickly and as needed.
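As an illustration of what such orchestration looks like in practice, here is a minimal sketch using the present-day openstacksdk Python client to boot a compute instance. The cloud name, image, flavour and network IDs are hypothetical, and TeraStream's actual orchestration is far more elaborate.

# Minimal sketch: booting a virtual network function as a Nova instance.
import openstack

# Credentials and region are read from a clouds.yaml entry (name is hypothetical).
conn = openstack.connect(cloud="terastream-dc")

server = conn.compute.create_server(
    name="vnf-ipv4-softwire-01",          # hypothetical VNF instance name
    image_id="<image-uuid>",              # placeholder IDs, not real values
    flavor_id="<flavor-uuid>",
    networks=[{"uuid": "<mgmt-net-uuid>"}],
)
server = conn.compute.wait_for_server(server)   # block until ACTIVE
print(server.name, server.status)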

Virtualising network functions
As an IPv6 network, TeraStream does not have native support for IPv4. Yet it must still deliver IPv4 as a service to its customers to support legacy applications.

“There is an expectation that IPv4 traffic will go down significantly by the end of the decade, but we’ll need to deliver that function for some time,” said Clauberg. “Producing IPv4 as a service is ideal, because we can react based on our current load and we don’t need to drastically overprovision the way you might in a physical appliance scenario.”

The TeraStream team looked for a partner that could drive a scalable, virtualised Softwire encapsulation service in its data centres.

There are multiple ways to transport IPv4 traffic over IPv6, and the team considered Mapping of Address and Port (MAP) as well as Lightweight 4 over 6 (LW4o6), an emerging IETF standard that is an extension of Dual-Stack Lite (DS-Lite). In DS-Lite, address translation is done at the operator, while LW4o6 moves this translation to the customer premises equipment.

The team decided that the LW4o6 approach would scale more efficiently and allow tenants to be managed individually.
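To illustrate the softwire idea itself (not A10's implementation), the toy Scapy sketch below wraps an IPv4 packet inside an IPv6 packet, as happens across the IPv6-only core. The addresses are documentation prefixes, and real LW4o6 additionally provisions per-subscriber address-and-port sets.

# Toy IPv4-in-IPv6 encapsulation with Scapy (requires root to send).
from scapy.all import IPv6, IP, ICMP, send

inner = IP(src="192.0.2.10", dst="198.51.100.20") / ICMP()   # legacy IPv4 traffic
outer = IPv6(src="2001:db8::10", dst="2001:db8::1")          # tunnel endpoints
send(outer / inner)   # the IPv6 next-header field marks the payload as IPv4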

The search for a virtualised Softwire solution led the TeraStream team to A10 Networks.

“We were looking for a partner who could develop LW4o6 softwires and prove that it works,” said Clauberg. “We felt there was common ground with A10 Networks,” he added.

A10 moved quickly to implement LW4o6 in its Thunder Series CGN, and TeraStream deployed vThunder as a virtual service. With vThunder, TeraStream has a high-performance, highly transparent and scalable solution for its customers, which is delivering a strong return on investment.

The Thunder CGN product line is part of the A10 aCloud Service Architecture, which enables cloud operators to dynamically provision Layer 4-7 tenant services while improving agility and reducing cost.

In addition, aCloud on-demand licensing helps operators provide cloud services consistent with the cloud consumption model. The aCloud Services Architecture integrates with OpenStack, SDN network fabrics and cloud orchestration platforms, so operators can dynamically deliver application and security services and policies per tenant.

Automation through OpenStack and integration with aCloud on-demand licensing makes it possible to turn up new services for customers as they are needed, and tear them down once they’re no longer needed.

A10 tuned vThunder to use LW4o6 and deliver optimal performance, scalability and automation, which allows TeraStream to scale elastically to support more customers and to deliver a better experience.

“When you virtualise a network function coming from hardware, there is a lot of potential for optimisation and automation,” said Clauberg.

“A10 was very helpful to optimise the performance so we could serve our customers without burning hardware resources,” he added.

Clauberg went on to say that IPv4-over-IPv6 Softwire is the first example of a high-volume, data-plane-oriented network function that was virtualised.

“When people talk about NFV today, they are focusing on the control plane, not the data plane. But if we truly want to change our cost basis, we have to look at virtualising network services also touching the data plane,” he explained.

A business model built for the cloud
TeraStream is taking advantage of A10’s Pay-as-You-Go licensing model so it can offer on-demand cloud services to customers on a subscription basis.

With the Pay-as-You-Go licensing model, TeraStream can offer and deliver IPv4 and other advanced L4-7 networking tenant services with automated metering, reporting, billing and license management, as is necessary in a cloud environment.

“A10’s pay-as-you-go licensing is key,” said Clauberg, adding that a flexible licensing scheme is a win-win “because it makes the vendor profitable and it makes us profitable.”

About Deutsche Telekom
Deutsche Telekom is one of the world’s leading integrated telecommunications companies, with over 142 million mobile customers, 31 million fixed-network lines and over 17 million broadband lines (as of December 31, 2013).

The group provides fixed-network, mobile communications, Internet and IPTV products and services for consumers, and ICT solutions for business and corporate customers. The CSP is present in around 50 countries and has approximately 229,000 employees worldwide.

The group generated revenue of 60.1 billion euros in the 2013 financial year - over half of it outside Germany.

About A10 Networks
A10 Networks is a specialist in application networking, providing a range of high-performance application networking solutions that accelerate and secure data centre applications and networks for thousands of the largest enterprise, service provider and hyper-scale web providers around the world.

The company’s products are built on its proprietary Advanced Core Operating System (ACOS), a platform of advanced networking technologies designed to deliver substantially greater performance and security.

A10 Networks’ software-based ACOS architecture also provides the flexibility that enables A10 Networks to offer additional products to solve a growing array of networking and security challenges arising from increased Internet, cloud and mobile computing.

www.a10networks.com
www.telekom.com

Figure 2: TeraStream is a proving ground for network functions virtualisation. It uses Lightweight 4o6 softwires to elastically scale the delivery of IPv4 traffic to customers.


CCW is the UK's first digital publication totally dedicated to the subject of cloud computing. CCW reaches an audience of over 15,000 individual subscribers on a bi-monthly basis, delivering them up-to-date information on this fast-paced subject, enabling them to use the processing power of the cloud and its unlimited opportunities for collaboration to enhance and grow their businesses.

CLOUD COMPUTING WORLD

CCW - The Format

26 St Thomas Place, Cambridge Business Park, Ely, Cambridgeshire CB7 4EX

01353 644 081

[Cover preview: CLOUD COMPUTING WORLD, Issue 1, June 2014 - "Service price differences under the microscope"; "Audiocast: total remote/cloud security becoming reality, says veteran pen tester"; "Looking towards an open source cloud future - cost cutting without service reduction"; "Understanding cloud load balancing"; "The cloud: it's older than you might think" - www.cloudcomputingworld.co.uk]

CCW is fully interactive and will be available on all major electronic devices from the first issue – thanks to the use of the digital format, content in the publication will be freed from the two dimensions of print and include rich media that readers will not find in any other place.

In this context, advertisers and editorial contributors will be able to present content in a rich media format. Put simply, this means that content submissions will move beyond the printed page and into the realm of video and audio.

We believe this offers those involved a much greater opportunity to engage, entertain and inform our readers.

CCW will also deliver advertisers real-time and identifiable metrics, enabling them to calculate their ROI and identify where response comes from.

For Editorial Enquiries
Steve [email protected] 266 3063

For advertising enquiries
Ian [email protected] 644081



DATA CENTRE CONSULTANCY | COLOCATION | OPERATION | MIGRATION

SECURE DATA CENTRES TRUSTED ADVICE

IS YOUR DATA IMPORTANT TO YOUR BUSINESS?

ISO 9001 | ISO 14001 | ISO 27001 | PCI DSS LEVEL 1

THEN IT’S TIME TO MOVE INTO THE GATEHOUSE DATA CENTRE


0845 251 2255