DELIVERING INFORMATION FASTER: IN-MEMORY TECHNOLOGY REBOOTS THE BIG DATA ANALYTICS WORLD



In-memory technology—in which entire datasets are pre-loaded into a computer’s random access memory, alleviating the need for shuttling data between memory and disk storage every time a query is initiated—has actually been around for a number of years. However, with the onset of big data, as well as an insatiable thirst for analytics, the industry is taking a second look at this promising approach to speeding up data processing.

The appeal of in-memory technology is growing as organizations face the challenge of big data, in which decision makers seek to harvest insights from the terabytes’ and petabytes’ worth of structured, semi-structured and unstructured data that is flowing into their enterprises. A recent survey of 323 data managers and professionals finds that while organizations are still in the early stages of in-memory adoption, most data executives and professionals are expressing considerable interest in adopting this new technology.

In the survey, fielded among members of the Independent Oracle Users Group and conducted by Unisphere Research, a division of Information Today, Inc., close to one-third of organizations already have in-memory databases and tools deployed within their enterprises, and report advantages such as real-time operational reporting, accelerating existing data warehouse environments, and managing and handling unstructured data. Another one-third are considering in-memory technologies. The report, “Accelerating Enterprise Insights: 2013 IOUG In-Memory Strategies Survey” (January 2013), was underwritten by SAP.

There are compelling technical advantages to having an in-memory database, but the business benefits can be far-reaching. From a technical standpoint, data analysis jobs performed within memory can potentially run up to 1,000 times faster than similar jobs employing traditional disk-to-processor transfers.

In-memory databases offer large, high-capacity memory space in which entire datasets—potentially millions of records—can be loaded all at once for rapid access and processing, thereby eliminating the lag time involved in disk-to-memory data transfers. Because of the limitations of disk-bound processing, many applications relying on data analytics have only been able to deliver limited reports built on smaller chunks of data.

In addition, a new generation of tools, featuring visual and highly interactive interfaces, helps bring data to life for decision makers. The hardware available also makes in-memory processing a reality. Multicore processors are now the norm, and memory keeps getting cheaper. We have seen Moore’s Law—which posits that processor power and capacity will double every 18 months—continue to hold true. In 1965, a company paid for memory at a rate of $512 billion per gigabyte of RAM. Now, it’s a mere $1 per gigabyte. In a few years, the cost per gigabyte of RAM will plunge even further, to a few cents.

Sponsored Content THOUGHT LEADERSHIP SERIES | FEBRUARY 2013 2



GOOGLE-LIKE BI

From a business standpoint, in-memory capabilities bring organizations ever closer to the holy grail of big data—the ability to compete smartly on analytics, meaning that a data-driven business would be able to closely engage with and anticipate the needs of customers and markets. Overall, nearly 75% of respondents in the IOUG-Unisphere Research survey believe that in-memory technology is important to enabling their organizations to remain competitive in the future.

Close to half (47%) of the survey respondents whose companies are making extensive or widespread use of in-memory say they see the greatest opportunity for future use in providing real-time operational reporting. One-third of those respondents say in-memory is playing a role in delivering new types of applications not possible within their current data environments.

In today’s hypercompetitive business climate, end users expect information at their fingertips, at a moment’s notice. They want a Google-like experience in working with enterprise data—meaning the ability to ask any question and receive a set of potential responses in subseconds. By bringing on-the-spot analysis of complete datasets close to the user, in-memory analytics opens up data-driven decision making to individuals at all levels of the organization—beyond the relatively small handfuls of business analysts, statisticians, and quants who have traditionally been the users of BI and analytics tools. Customer service representatives, for example, can make on-the-spot decisions based on the profitability of customers, with data from CRM, transactional and data warehouse systems immediately available. Operations executives can prioritize production orders, taking into account scheduling and forecasting data that is readily available.

In-memory analytics enables more business end users to build their own queries and dashboards, without relying on IT to unlock data sources or to build and deliver the reports. With dramatically enhanced processing speed, and the availability of entire datasets, in-memory processing opens the way for more sophisticated market analysis, what-if analysis, data mining, and predictive analytics.

Typically, such capabilities have been hamstrung by limited reporting data, as well as delays and outdated information produced by traditional data warehouse and BI tools. The IOUG-Unisphere Research survey finds that today’s data warehouse environments are not keeping up with the explosive growth of data volume and the demand for real-time analytics. Fewer than one out of 10 data warehouse sites in the survey, for example, can deliver analysis in what respondents would consider a real-time timeframe. Overall, existing database and data warehouse environments are time-consuming for both administrators and end users.

However, in-memory technologies will see their greatest value not by replacing these systems, but, rather, by augmenting or accelerating existing data environments. In the survey, the most frequently cited use cases of in-memory technology are for selective acceleration of analytics through replication of data from their data warehouses (45%), and within data marts that complement data warehouse environments (39%). Future areas of opportunity for use of in-memory technology commonly cited include real-time operational reporting and accelerating or complementing current data warehouse environments.

NEW THINKING NEEDED

The promise of in-memory analytics is compelling, but as is the case with any new technology paradigm, new thinking is required—along with preparation, education and support from the business. Since analytics may be delivered at light-speed to decision makers, it will be essential to work with data source owners to ensure that data is of high quality, and is timely and relevant.

With analytic capabilities shifting to end users, there will be a need for more user training and awareness to better position them to take advantage of the increased power of analytics. If business end users do come up with new insights that were never possible before, do they have the leeway and latitude to act on those insights? Are these new insights being tied to business requirements? Having enhanced analytic capabilities—without increased autonomy on the part of decision makers—may only lead to frustration.

In-memory analytics will require taking on some new software and hardware. The training needed to get end users and administrators alike up to speed with the new approaches also will require ongoing investment from the organization. New tools and methodologies for data cleansing may also be needed.

Since in-memory processing opens so many doors that business end users may not even have known existed, the ROI tends to be a great unknown. Blazing-fast, easy-to-run analytics at the speed of thought frees up end users’ imaginations, enabling them to pose questions they wouldn’t even have thought of asking before. In an era in which data is scaling into the petabyte range, adopting in-memory analytics is more than installing new technology. It’s a disruptive force. The potential benefits are profound—including vastly accelerated business decisions, faster adoption of analytics, and lower IT costs. ■





SAP HANA is an in-memory data platform that is deployable as an on-premise appliance or in the cloud. It is a revolutionary platform best suited for performing real-time analytics and for developing and deploying real-time applications. At the core of this real-time data platform is the SAP HANA database, which is fundamentally different from any other database engine on the market today.

Many SAP customers have already successfully deployed SAP HANA to drive innovations in IT and in business.

TAKING A NEW APPROACH TO BUSINESS DATA PROCESSING

Today’s business users need to react much more quickly to changing customer and market environments. They demand dynamic access to raw data in real time. SAP HANA empowers users with flexible, on-the-fly data modeling functionality by providing non-materialized views directly on detailed information.

SAP HANA liberates users from the wait time for data model changes and database administration tasks, as well as from the latency required to load the redundant data storage required by traditional databases. The elimination of aggregates and relational table indices and the associated maintenance can greatly reduce the total cost of ownership.

Some use the term “in-memory” in the context of optimizing I/O access within database management, centering on accessing data from the hard disk by pre-storing frequently accessed data in main memory. The term is also used for a traditional relational database running on in-memory technology.

Some solutions offer columnar storage on traditional hard-disk technology, while other platforms offer the option of storing data on solid state disks (SSDs). Although these disks have no moving parts and access data much more rapidly than hard disks, they are still slower than in-memory access.

Only SAP HANA takes full advantage of all-new hardware technologies by combining columnar data storage, massively parallel processing (MPP), and in-memory computing through optimized software design.

COMPLETE DBMS AS ITS BACKBONE

SAP HANA, first and foremost, incorporates a full database management system (DBMS) with a standard SQL interface, transactional isolation and recovery (the ACID properties: atomicity, consistency, isolation, durability), and high availability.

SAP HANA supports most entry-level SQL92. SAP applications that use Open SQL can run on the SAP HANA platform without changes. SQL is the standard interface to SAP HANA. Additional functionality, such as freestyle search, is implemented as SQL extensions. This approach simplifies the consumption of SAP HANA by applications.

Analytical and Special Interfaces

In addition to SQL, SAP HANA supports business intelligence clients directly using multidimensional expressions (MDX) for products such as Microsoft Excel, and business intelligence consumer services (BICS), an internal interface for SAP BusinessObjects™ solutions. For analytical planning, the user may iterate values on the aggregated analytical reports. With SAP HANA, a single value is transmitted for immediate recalculation by the in-memory planning engine.

Parallel Data Flow Computing Model

To natively take advantage of massively parallel multicore processors, SAP HANA organizes SQL processing instructions into an optimized model that allows parallel execution and scales extremely well with the number of cores. The optimization includes partitioning the data into sections for which the calculations can be executed in parallel. SAP HANA supports distribution across hosts. Large tables may be partitioned to be processed in parallel by multiple hosts.

Application Logic Extensions

The parallel-data-flow computing model is extended with application-specific logic that is executed in processing nodes as part of the model. Support includes SQLScript as a functional language and “L” as an imperative language, which can call upon the prepackaged algorithms available in the predictive analysis library of SAP HANA to perform advanced statistical calculations. The application logic languages and concepts are evolving as a result of collaboration with internal and external SAP developer communities.

SAP HANA: Revolutionize the Way You Run Your Business

“We have seen massive system speed improvements and increased ability to analyze the most detailed levels of customers and products.”
– Colgate Palmolive

“Replacing our enterprise-wide Oracle data mart resulted in an over 20,000-times speed improvement in processing our most complex freight transportation cost calculation.”
– Nongfu Spring

“Our internal technical comparison demonstrated that SAP HANA outperforms traditional disk-based systems by a factor of 408,000.”
– Mitsui Knowledge Industry Co. Ltd.

Business Function Library and Predictive Analysis Library

SAP leverages its deep application expertise to port specific business application functionality as infrastructure within SAP HANA, natively taking advantage of in-memory computing technologies by optimizing application and calculation processing directly within main memory. One example is currency conversion, a fundamental first step for a global company: many reports that would otherwise have used plain SQL can exploit parallel processing well. Another example is converting business calendars: different countries use different civilian or business calendars and have different definitions of a fiscal year.

Multiple In-Memory Stores Optimized Per Task

Native in-memory storage does not leverage the processing power of modern CPUs well. The major optimization goal of SAP HANA is to achieve high hit ratios in the different caching layers of the CPU. This is done by data compression and by adapting the data store to the task. For example, when processing is done row by row and consumes most of the fields within a row, a row store, in which each row is placed in memory in sequence, provides the best performance. If calculations are executed on a single column or a few columns only, a column store, in which each column is stored in sequence as a (compressed) memory block, provides much better results. An object graph store may benefit from a structure in which each object body is stored in sequence and the graph navigation is stored as another sequence, to support unstructured and semi-structured data storage.

Appliance Packaging

SAP HANA applies optimizations that take CPU utilization to the extreme. Appliance packaging enables full control of the resources and a certification process for hardware configurations, for best performance and reliability. For example, SAP HANA includes automatic recovery from memory errors without a system reboot; systems with high memory are statistically more sensitive to such errors. In addition to the beneficial total cost of ownership of the appliance packaging model, it is a fundamental part of the SAP HANA design concept.

TECHNOLOGY FOUNDATION

Parallel Execution

SAP HANA is engineered for parallel execution that scales well with the number of available cores, and across hosts when distribution is used. Specifically, optimization for multicore platforms accounts for the following two key considerations:
• Data is partitioned wherever possible in sections that allow calculations to be executed in parallel.
• Sequential processing is avoided, which includes finding alternatives to approaches such as thread locking.

Parallel Aggregation

In the shared-memory architecture within a node, SAP HANA performs aggregation operations by spawning a number of threads that act in parallel, each of which has equal access to the data resident in memory on that node. As illustrated at the top of Figure 2, each aggregation thread functions in a loop-wise fashion as follows:
1. Fetch a small partition of the input relation.
2. Aggregate that partition.
3. Return to step 1 until the complete input relation is processed.
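The loop above can be sketched in a few lines of Python. This is a simplified stand-in for the thread-based design described in the text, not SAP HANA code; the function names are illustrative, and the merger step is reduced to a single merge of the private hash tables.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def aggregate_partition(rows):
    """Step 2: aggregate one small partition into a private hash table."""
    local = Counter()
    for key, value in rows:
        local[key] += value
    return local

def parallel_aggregate(rows, n_workers=4, chunk=2):
    """Steps 1-3: fetch small partitions of the input relation, aggregate
    each in parallel, then merge the private hash tables (HANA's merger
    threads use range partitioning; here it is a single merge)."""
    partitions = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    merged = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for local in pool.map(aggregate_partition, partitions):
            merged.update(local)
    return dict(merged)

sales = [("US", 10), ("DE", 5), ("US", 7), ("DE", 1), ("FR", 2)]
totals = parallel_aggregate(sales)   # {"US": 17, "DE": 6, "FR": 2}
```

Because each thread writes only to its own private table, the aggregation itself needs no locking; coordination happens only at the merge.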

Columnar and Row-Based Data Storage

One of the differentiating attributes of SAP HANA is having both row-based and column-based stores within the same engine.

Conceptually, a database table is a two-dimensional data structure with cells organized in rows and columns. Computer memory, however, is organized as a linear sequence. For storing a table in linear memory, two options can be chosen. Row storage stores a sequence of records that contain the fields of one row in the table. In a column store, the entries of a column are stored in contiguous memory locations.
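The two layout options can be illustrated with a short Python sketch, using plain lists as a stand-in for contiguous memory. This is illustrative only, not HANA code, and the table is made up.

```python
table = [  # logical two-dimensional table: (id, country, amount)
    (1, "US", 10),
    (2, "DE", 5),
    (3, "US", 7),
]

# Row storage: the fields of each record sit next to each other.
row_store = [field for record in table for field in record]
# row_store is [1, "US", 10, 2, "DE", 5, 3, "US", 7]

# Column storage: the entries of each column are contiguous.
column_store = {
    "id":      [r[0] for r in table],
    "country": [r[1] for r in table],
    "amount":  [r[2] for r in table],
}

# A single-column aggregation touches one contiguous sequence only:
total = sum(column_store["amount"])   # 22
```

In the row layout the same aggregation would have to skip over the id and country fields of every record, which is why the text recommends the column store for single-column calculations.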

Figure 1: Near-linear scale joining 120 million records in SAP HANA® on 4S Nehalem-EX (2.26 GHz) with 64 logical cores

Figure 1 summarizes the results of a scale test performed by an Intel team in collaboration with SAP. The test demonstrates near-linear scaling: processing time is 16.8 seconds using 2 cores and improves to 1.4 seconds using 32 cores. Hyperthreading adds an additional 20% improvement.



With SAP HANA you can specify whether a table is to be stored by column or by row.

Row store is recommended if:
• The table has a small number of rows, such as configuration tables.
• The application needs to process only a single record at a time (many selects or updates of single records).
• The application typically needs to access the complete record.
• The columns contain mainly distinct values, so the compression rate would be low.
• Aggregations and fast searching are not required.

Row store is used, for example, for SAP HANA database metadata, for application server internal data such as ABAP™ server system tables, and for configuration data. In addition, application developers may decide to put application tables in row store if the criteria given above are matched.

Column store is recommended if:
• Calculations are executed on a single column or a few columns only.
• The table is searched based on the values of a few columns.
• The table has a large number of columns.
• The table has a large number of rows, and columnar operations are required (aggregate, scan, and so on).
• The majority of columns contain only a few distinct values (compared to the number of rows), resulting in higher compression rates.

SAP HANA allows the joining of row-based tables with columnar tables. However, it is more efficient to join tables that are located in the same store. Therefore, master data that is frequently joined with transaction data is put in a column store.

Advantages of Columnar Tables

When the criteria listed above are fulfilled, columnar tables have several advantages.

Higher performance for column operations: Operations on single columns, such as searching or aggregations, can be implemented as loops over an array stored in contiguous memory locations. Such an operation has high spatial locality and can be executed efficiently in the CPU cache.

Higher data compression rates: Columnar data storage allows highly efficient compression. Especially if the column is sorted, there will be ranges of the same values in contiguous memory, so compression methods such as run-length coding or cluster coding can be used. This is especially promising for SAP business applications, as they have many columns containing only a few distinct values compared to the number of rows; examples are country codes, status codes, and foreign keys. This high degree of redundancy allows for effective compression of column data. In row-based storage, successive memory locations contain data of different columns, so compression methods such as run-length coding cannot be used. In column stores, a compression factor of 10 can be achieved compared to traditional row-oriented database systems.

Columnar data organization also allows highly efficient data compression. This not only saves memory but also increases speed, for the following reasons:
• Compressed data can be loaded into the CPU cache more quickly. As the limiting factor is the data transport between memory and CPU cache, the performance gain will exceed the additional computing time needed for decompression.
• With dictionary coding, the columns are stored as sequences of bit-coded integers. Checks for equality can be executed on the integers (for example, during scans or join operations). This is much faster than comparing string values.
• Compression can speed up operations such as scans and aggregations if the operator is aware of the compression.
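Dictionary coding and run-length coding on a sorted column can be sketched as follows. This is a simplified illustration of the general techniques, not SAP HANA's actual compression formats, and all names are made up.

```python
def dictionary_encode(column):
    """Replace string values with small integers plus a lookup dictionary."""
    dictionary = sorted(set(column))
    code = {value: i for i, value in enumerate(dictionary)}
    return dictionary, [code[v] for v in column]

def run_length_encode(codes):
    """Collapse runs of equal values into [value, run_length] pairs."""
    runs = []
    for c in codes:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return runs

country = ["DE", "DE", "FR", "US", "US", "US"]   # a sorted column
dictionary, codes = dictionary_encode(country)
# dictionary is ["DE", "FR", "US"]; codes is [0, 0, 1, 2, 2, 2]
runs = run_length_encode(codes)
# runs is [[0, 2], [1, 1], [2, 3]]

# Equality checks run on the integers, not the strings:
us = dictionary.index("US")
matches = [i for i, c in enumerate(codes) if c == us]   # [3, 4, 5]
```

The integer scan at the end is why the text says dictionary-compressed columns behave like a built-in index: finding every "US" row is a tight loop over small integers.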

Elimination of additional indexes: Columnar storage eliminates the need for additional index structures in many cases. Storing data in columns works like having a built-in index for each column: the column scanning speed of the in-memory column store and the compression mechanisms, especially dictionary compression, already allow read operations with very high performance. In many cases, additional index structures will not be required. Eliminating additional indexes reduces complexity and eliminates effort for defining and maintaining metadata.

Parallelization: Column-based storage also makes it easy to execute operations in parallel using multiple processor cores. In a column store, data is already vertically partitioned. That means operations on different columns can easily be processed in parallel. If multiple columns need to be searched or aggregated, each of these operations can be assigned to a different processor core. In addition, operations on one column can be parallelized by partitioning the column into multiple sections that are processed by different processor cores. Figure 3 shows both options: columns A and B are processed by different cores, while column C is split into two partitions that are processed by two different cores.

Figure 2: Use of parallel aggregation by SAP HANA® to divide work among threads

Each thread has a private hash table where it writes its aggregation results. When the aggregation threads are finished, the buffered hash tables are merged by merger threads using range partitioning.

SAP HANA BENEFITS FOR BUSINESS APPLICATIONS

Traditional business applications use materialized aggregates to increase read performance. That means that application developers define additional tables in which the application redundantly stores the results of aggregates, such as sums, computed on other tables. The materialized aggregates are computed and stored either after each write operation on the aggregated data or at scheduled times. Read operations read the materialized aggregates instead of computing them each time.

With a scanning speed of several megabytes per millisecond, in-memory column stores make it possible to calculate aggregates on large amounts of data on the fly with high performance. This is expected to eliminate the need for materialized aggregates in many cases and thus eliminate up to 30% of the required tables.

In financial applications, different kinds of totals and balances are typically persisted as materialized aggregates for the different ledgers: general ledger, accounts payable, accounts receivable, cash ledger, material ledger, and so on. With an in-memory column store, these materialized aggregates can be eliminated, as all totals and balances can be computed on the fly with high performance from accounting document items.

Eliminating materialized aggregates has several advantages:

Simplified data model: With materialized aggregates, additional tables are needed, which make the data model more complex. In the Financials data model for the SAP Business ByDesign™ solution, for example, persisted totals and balances are stored with a star schema. Specific business objects are introduced for totals and balances, each of which is persisted with one fact table and several dimension tables. With SAP HANA, all these tables can be removed if totals and balances are computed on the fly. A simplified data model makes development more efficient, removes sources of programming errors, and increases maintainability.

Simplified application logic: The application either needs to update the aggregated value after an aggregated item is added, deleted, or modified, or special aggregation runs need to be scheduled that update the aggregated values at certain time intervals, such as once a day. By eliminating persisted aggregates, this additional logic is no longer required.

Higher level of concurrency: With materialized aggregates, a write lock needs to be acquired after each write operation for updating the aggregate. This limits concurrency and may lead to performance issues. Without materialized aggregates, only the document items are written. This can be done concurrently without any locking.

Up-to-date aggregated values: With on-the-fly aggregation, the aggregate values are always up to date, while materialized aggregates are sometimes updated only at scheduled times.
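The contrast between the two approaches can be sketched in a few lines of Python. This is a toy illustration of the general idea, not SAP HANA code; the item list and function names are hypothetical.

```python
items = []            # accounting document items: (account, amount)
materialized = {}     # account -> persisted total (the traditional approach)

def post_with_aggregate(account, amount):
    """Traditional style: write the item AND update the persisted total.
    In a real system this second write needs a lock on the aggregate row."""
    items.append((account, amount))
    materialized[account] = materialized.get(account, 0) + amount

def balance_on_the_fly(account):
    """In-memory style: no aggregate table at all; scan the document items
    on demand, so the result is always up to date."""
    return sum(amount for acct, amount in items if acct == account)

post_with_aggregate("cash", 100)
post_with_aggregate("cash", -30)
# Both approaches agree on the balance, but only the first needs the extra
# table, the extra update logic, and the write lock:
balance = balance_on_the_fly("cash")   # 70
```

Dropping `materialized` entirely, and keeping only `balance_on_the_fly`, is the simplification the text describes: fewer tables, less application logic, and no lock contention on the aggregate.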

PARADIGM SHIFT IN APPROACH TO DATA MANAGEMENT TECHNOLOGIES

SAP is leading a paradigm shift in the way we think about data management technologies. This rethinking affects how we construct business applications and our expectations in consuming them. Businesses that adopt this new technology will have the potential to sharpen their competitive edge by dramatically accelerating not only data querying speed but also business processing speed. SAP is working closely with customers, consultants, and developers for both immediate business gain and a longer-term perspective and road map.

As a line-of-business driver in a large enterprise, you may work in the short term with consultants to develop a business case for real-time reporting on operational data. You may focus on two or three specific business scenarios and deploy SAP HANA in a side-by-side, non-disruptive manner. A team of your IT experts will gradually gain experience working with SAP HANA, learn to exploit its flexibility, and further extend its utilization within your business-user community. Your organization will become ready for in-memory computing and gain immediate business benefits at the same time. ■

FIND OUT MORE

Learn more about SAP HANA by visiting: www.saphana.com
Watch how-to videos about SAP HANA: http://www.saphana.com/hana-academy
Try SAP HANA in the cloud today: http://www.saphana.com/community/learn/cloud-info

Figure 3: Example of parallelization in a column store