The ROOT system: Status & Roadmap
NEC’2009 Varna, 8 September 2009
René Brun/CERN


TRANSCRIPT

Page 1: The ROOT system Status & Roadmap

The ROOT system: Status & Roadmap

NEC’2009 Varna, 8 September 2009

René Brun/CERN

Page 2: The ROOT system Status & Roadmap

Project History: Politics

(Timeline figure; the recoverable milestones:)

- Jan 1995: project starts
- Nov 1995: public presentation
- Objectivity, LHC++
- Sep 1998: FNAL, RHIC
- ALICE
- Hoffman review
- LCG RTAGs: SEAL, POOL, PI
- LCG 2 main stream
- Staff consolidation

Page 3: The ROOT system Status  & Roadmap

CMZ…………………………………CVS………………………………………………………….SVN……………

Major Technical Steps

Rene Brun Global Overview of ROOT system 3

CINT

I/O based on dictionaries in memory

Graphics based on TVirtualX

GUI based on signal/slots

P R O O F CINT7

GL/EVE

RooFitTMVA

MathLibs

ReflexFrom SEAL to

ROOT

Trees automatic

split

Page 4: The ROOT system Status & Roadmap

From CVS to SVN

Smooth and fast transition from CVS to SVN. We are very happy with SVN.

Page 5: The ROOT system Status & Roadmap

ROOT libs

Granularity: more than 100 shared libs; you load what you use. root.exe links 6 shared libs (VM < 20 Mbytes).

(Figure: library dependency matrix.)

Page 6: The ROOT system Status & Roadmap

CINT & LLVM

CINT is the CORE of ROOT for:
- parsing code
- interpreting code
- storing the class descriptions

A new version of CINT (CINT7) is based on Reflex, but was found too slow to go to production.

We are considering an upgrade of CINT using LLVM (an Apple-driven open-source project). LLVM is a GCC-compliant compiler with a parser and a Just-In-Time compiler.

CINT/LLVM (CLING) should be C++0x compliant.

Page 7: The ROOT system Status & Roadmap

Input/Output: Major Steps

- User-written streamers filling TBuffer
- Streamers generated by rootcint
- Automatic streamers from the dictionary, with StreamerInfos in self-describing files
- Member-wise streaming for TClonesArray
- Member-wise streaming for STL col<T*>
- Generalized schema evolution
- Parallel merge

Page 8: The ROOT system Status & Roadmap

Input/Output: Object Streaming

Single-object-wise streaming evolved from hand-coded Streamers, to rootcint-generated Streamers, to a generic Streamer based on the dictionary, and finally to member-wise streaming of collections (std::vector<T>, std::vector<T*>). Member-wise streaming saves space and time.

Object-wise layout (one TBuffer bufAll):
  ABCD ABCD ABCD ... ABCD

Member-wise layout (one buffer per member: bufA, bufB, bufC, bufD):
  AAAAAA..A  BBBBBB..B  CCCCCC..C  DDDDDD..D

Page 9: The ROOT system Status & Roadmap

I/O and Trees

- From branches of basic types created by hand, to branches automatically generated from very complex objects, to branches automatically generated for complex polymorphic objects
- Support for object weak-references across branches (TRef) with load on demand
- Tree Friends
- TEntryList
- Automatic branch buffer size optimisation (5.26)
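As a sketch of the automatically generated, split branches above, a minimal ROOT macro (requires a ROOT installation; Event is a hypothetical user class with a dictionary, and the names are illustrative):

```cpp
#include "TFile.h"
#include "TTree.h"

void writeTree() {
   TFile f("events.root", "RECREATE");
   TTree tree("T", "Events");
   Event *event = new Event();  // hypothetical user class with a dictionary
   // split level 99: every data member, recursively, becomes its own branch
   tree.Branch("event", &event, 32000, 99);
   for (int i = 0; i < 1000; ++i) {
      // ... fill *event for entry i ...
      tree.Fill();
   }
   tree.Write();
}
```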

Page 10: The ROOT system Status & Roadmap

2-D Graphics

New functions added at each new release. Always new requests for new styles and coordinate systems.

Output formats: ps, pdf, svg, gif, jpg, png, c, root, etc.

Move to GL?

Page 11: The ROOT system Status & Roadmap

The Geometry Package TGeo

- The TGeo classes are now stable
- Can work with different simulation engines (G3, G4, Fluka); see the Virtual Monte Carlo
- Geometry conversions: G3 → TGeo, G4 → TGeo, TGeo → GDML
- Used in online systems and reconstruction programs
- Built-in facilities for alignment
- Impressive gallery of experiments (35 detectors in $ROOTSYS/test/stressGeometry)
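A minimal TGeo sketch (requires ROOT; the material, sizes and names are illustrative, not from the slides):

```cpp
#include "TGeoManager.h"

void simpleGeom() {
   TGeoManager *geom = new TGeoManager("simple", "Minimal TGeo example");
   TGeoMaterial *matVac = new TGeoMaterial("Vacuum", 0, 0, 0);
   TGeoMedium *vac = new TGeoMedium("Vacuum", 1, matVac);
   // world volume: a box with half-lengths 100x100x100 (cm)
   TGeoVolume *top = geom->MakeBox("TOP", vac, 100., 100., 100.);
   geom->SetTopVolume(top);
   // one daughter volume placed at the origin
   TGeoVolume *det = geom->MakeBox("DET", vac, 10., 10., 10.);
   top->AddNode(det, 1);
   geom->CloseGeometry();
}
```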

Page 12: The ROOT system Status & Roadmap

3-D Graphics

Highly optimized GL views, in TPad and in the GL viewer.

Page 13: The ROOT system Status & Roadmap

Event Display: EVE

EVE is a ROOT package (GL-based) for event displays. Developed in collaboration with ALICE (AliEve) and CMS (FireWorks). Provides all the GUI widgets, browsers and GL infrastructure (far better than the old OpenInventor). Used now by many experiments (see e.g. FAIRROOT, ILCROOT) to display raw data, MC events or detector-oriented visualization.

Page 14: The ROOT system Status & Roadmap

GUI

Many enhancements in the GUI classes: browser, HTML browser, tabs, EVE widgets. GUI builder with C++ code generator; note that the code generator works from any existing widget (CTRL-S).

The class TRecorder can store and replay a GUI session: all mouse events and keyboard input, including macro execution.

Qt interfaces: a big pain, difficult to maintain with the successive versions of Qt.
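A sketch of using TRecorder (requires ROOT; the file name is illustrative):

```cpp
#include "TRecorder.h"

void recordSession() {
   TRecorder rec;
   rec.Start("session.root");   // start recording mouse and keyboard events
   // ... work with the GUI; macros executed here are recorded too ...
   rec.Stop();                  // stop and save the recording
}

void replaySession() {
   TRecorder rec;
   rec.Replay("session.root");  // replay the recorded GUI session
}
```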

Page 15: The ROOT system Status  & Roadmap

GUI Examples

Page 16: The ROOT system Status  & Roadmap

GUI Examples II

Can browse a ROOT file on a remote web server.

Page 17: The ROOT system Status & Roadmap

RooFit / RooStats

The original BaBar RooFit package has been considerably extended by Wouter Verkerke. It is now structured in RooFitCore and RooFit. RooFit is the base for the new RooStats package developed by ATLAS and CMS.

Page 18: The ROOT system Status & Roadmap

PROOF – Parallel ROOT Facility

- Parallel coordination of distributed ROOT sessions
  - Scalability: small serial overhead
  - Transparent: extension of the local shell
- Multi-process parallelism
  - Easy adaptation to a broad range of setups
  - Fewer requirements on user code
- Process data where they are, if possible
  - Minimize data transfers
- Event-level dynamic load balancing via a pull architecture
  - Minimize wasted cycles
- Real-time feedback
  - Output snapshot sent back at tunable frequency
- Automatic merging of results
- Optimized version for multi-core machines (PROOF-Lite)

Page 19: The ROOT system Status & Roadmap

Interactive-batch

$ root -l
root [0] p = TProof::Open("localhost")
...
root [1] p->Process("tutorials/proof/ProofSimple.C")
root [2] .q

$ root -l
root [0] p = TProof::Open("localhost")
root [1] p->ShowQueries()
+++
+++ Queries processed during this session: selector: 1, draw: 0
+++ #:1 ref:"session-pcphsft64-1252061242-8874:q1" sel:ProofSimple completed evts:0-
+++
root [2] p->Finalize()

Start a session, go into background mode and quit. Reconnect from any other place; if the query is still running, the dialog box will pop up. When finished, call Finalize() to execute TSelector::Terminate().

Page 20: The ROOT system Status & Roadmap

Event-level TSelector framework

(Workflow figure:)
- Begin(): create histos, ..., define the output list
- Process(), called for events 1, 2, ..., n: preselection; if OK, analysis; results go to the output list
- Terminate(): final analysis, fitting, ...

The event loop is parallelizable. The same framework can be used for generic ideally parallel tasks, e.g. MC simulation.
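The three steps above map onto the TSelector interface; a minimal skeleton (requires ROOT; MySelector and the histogram are illustrative):

```cpp
#include "TSelector.h"
#include "TH1F.h"

class MySelector : public TSelector {
   TH1F *fHist;
public:
   MySelector() : fHist(0) {}
   void Begin(TTree *) {
      // once, before the event loop: create histos, define the output list
      fHist = new TH1F("h", "a distribution", 100, 0., 10.);
      fOutput->Add(fHist);
   }
   Bool_t Process(Long64_t entry) {
      // once per event, on each worker: preselection, then analysis
      // ... read the needed branches for this entry, fill fHist ...
      return kTRUE;
   }
   void Terminate() {
      // once, after the loop: final analysis, fitting, ...
   }
};
```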

Page 21: The ROOT system Status & Roadmap

TSelector::Process()

Read only the parts of the event relevant to the analysis:

   // select event
   b_nlhk->GetEntry(entry);
   if (nlhk[ik] <= 0.1) return kFALSE;
   b_nlhpi->GetEntry(entry);
   if (nlhpi[ipi] <= 0.1) return kFALSE;
   b_ipis->GetEntry(entry);
   ipis--;
   if (nlhpi[ipis] <= 0.1) return kFALSE;
   b_njets->GetEntry(entry);
   if (njets < 1) return kFALSE;

   // selection made, now analyze event
   b_dm_d->GetEntry(entry);    // read branch holding dm_d
   b_rpd0_t->GetEntry(entry);  // read branch holding rpd0_t
   b_ptd0_d->GetEntry(entry);  // read branch holding ptd0_d

   // fill some histograms
   hdmd->Fill(dm_d);
   h2->Fill(dm_d, rpd0_t/0.029979*1.8646/ptd0_d);

See $ROOTSYS/tutorials/tree/h1analysis.cxx

Page 22: The ROOT system Status  & Roadmap

TSelector performance

from “Profiling Post-Grid analysis”, A. Shibata, Erice, ACAT 2008

Page 23: The ROOT system Status & Roadmap

PROOF installations

ALICE:
- CERN Analysis Facility: 112 cores, 35 TB (target: 500 cores, 110 TB). Prompt analysis of selected data, calibration, alignment, fast simulation. 5-10 concurrent users, ~80 users registered.
- GSI Analysis Facility, Darmstadt: 160 cores, 150 TB Lustre. Data analysis, TPC calibration. 5-10 users. Performance: 1.4 TB in 20 mins.
- Other farms: JINR, Turin.

ATLAS:
- Wisconsin: 200 cores, 100 TB, RAID5. Data analysis (Higgs searches), I/O performance tests w/ multi-RAID, PROOF-Condor integration. ~20 registered users.
- BNL: 112 cores, 50 TB HDD, 192 GB SSD. I/O performance tests with SSD and RAID, tests of PROOF cluster federation. ~25 registered users.
- Test farms at LMU, UA Madrid, UTA, Duke, Manchester.

Page 24: The ROOT system Status & Roadmap

PROOF: more installations

(From G. Ganis, “Fermes d'analyses basées sur PROOF”, 19 May 2009)

- NAF, the National Analysis Facility at DESY: ~900 cores shared w/ batch under SGE; ~80 TB Lustre, dCache. Data analysis for ATLAS, CMS, LHCb and ILC. PROOF tested by CMS groups. ~300 registered users.
- CC-IN2P3, Lyon: 160 cores, 17 TB HDD. LHC data analysis.
- Purdue University, West Lafayette, USA: 24 cores, dCache storage. CMS Muon reconstruction.
- ...

Page 25: The ROOT system Status & Roadmap

PROOF-Lite

- PROOF optimized for single many-core machines
- Zero-configuration setup: no config files and no daemons
- Workers are processes, not threads, for added robustness
- Like PROOF it can exploit fast disks, SSDs, lots of RAM, fast networks and fast CPUs
- Once your analysis runs on PROOF-Lite it will also run on PROOF
- Works with exactly the same user code as PROOF

Page 26: The ROOT system Status & Roadmap

PROOF-Lite on 1 core

void go() {
   gROOT->ProcessLine(".L makeChain.C");
   TChain *chain = makeChain();
   chain->Process("TSelector_Ntuple_Zee.C+");
}

Page 27: The ROOT system Status & Roadmap

PROOF-Lite on 8 cores

void go() {
   gROOT->ProcessLine(".L makeChain.C");
   TChain *chain = makeChain();

   TProof::Open("");
   chain->SetProof();

   chain->Process("TSelector_Ntuple_Zee.C+");
}

The two added statements will soon be made fully automatic.

Page 28: The ROOT system Status & Roadmap

The new http://root.cern.ch

The old web site http://root.cern.ch has been replaced by a better version, with improved contents and navigation, using the Drupal system.

Page 29: The ROOT system Status & Roadmap

Supported Platforms

- Linux (RH, SLCx, Suse, Debian, Ubuntu): gcc 3.4, gcc 4.4 (32 and 64 bits); icc 10.1
- Mac (ppc, 10.4, 10.5, 10.6): gcc 4.0.1, gcc 4.4; icc 10.1, icc 11.0
- Windows (XP, Vista): VC++ 7.1, VC++ 9; Cygwin gcc 3.4, 4.3
- Solaris + OpenSolaris: CC 5.2; gcc 3.4, gcc 4.4

Page 30: The ROOT system Status & Roadmap

Robustness & QA

- Impressive test suite (roottest) run in the nightly builds: several hundred tests
- Working on a GUI test suite (based on the Event Recorder)

Page 31: The ROOT system Status & Roadmap

ROOT developers: more stability

Page 32: The ROOT system Status & Roadmap

Summary

- After 15 years of development, a good balance between consolidation and new developments
- The ROOT main packages (I/O & Trees) are entering a consolidation and optimization phase
- We would like to upgrade CINT with the LLVM-based, C++0x-compliant compiler (CLING)
- PROOF is becoming main stream for LHC analysis
- Better documentation and user support
- More stable manpower
- Usage rapidly expanding outside HEP

Page 33: The ROOT system Status & Roadmap

Performance

(Plots, courtesy of S. Panitkin, BNL:)
- ALICE ESD analysis: I/O limitations limit scalability inside a machine
- ATLAS SSD ProofBench analysis, CPU limited: rate (MB/s) vs number of nodes (1 worker per node) and vs number of workers on 1 node
- BNL PROOF Farm: 10 nodes / 80 cores, 2.0 GHz, 16 GB RAM, 5 TB HDD / 640 GB SSD

Page 34: The ROOT system Status & Roadmap

Files: Local/Remote

Local files:
- From 32- to 64-bit pointers
- What about Lustre/ZFS?
- TFile::Open("mylocalfile.root")

Web files on a remote site, using a standard Apache web server:
- TFile::Open("http://myserver.xx.yy/file.root")

Remote files served by xrootd:
- Support for parallelism
- Support for intelligent read-ahead (via TTreeCache)
- Support for multi-threading
- At the heart of PROOF
- TFile::Open("root://myxrootdserver.xx.yy/file.root")