HLT Data Challenge


Page 1: HLT  Data Challenge

Jochen Thäder – Kirchhoff Institute of Physics – University of Heidelberg

HLT Data Challenge

• PC²
  – Setup / Results
• Clusterfinder Benchmarks
  – Setup / Results

Page 2: HLT  Data Challenge

PC² Paderborn

PC² - Paderborn Center for Parallel Computing

• Architecture of the ARMINIUS cluster
  – 200 nodes with Dual Intel Xeon 64-bit, 3.2 GHz
  – 800 GByte main memory (4 GByte each)
  – InfiniBand network
  – Gigabit Ethernet network
  – RedHat Linux 4

Page 3: HLT  Data Challenge

General Test - Configuration

• Hardware Configuration
  – 200 nodes with Dual 3.2 GHz Intel Xeon CPUs
  – Gigabit Ethernet

• Framework Configuration
  – HLT Data Framework with TCP Dump Subscriber processes (TDS)
  – HLT Online Display connecting to the TDS (a generic sketch of this data path follows below)

• Software Configuration
  – RHEL 4 update 1
  – RHEL 2.6.9 kernel
  – bigphysarea patch for the 2.6 kernel
  – PSI2 driver for 2.6
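As a purely generic illustration of the TDS-to-Online-Display data path (this is not the actual HLT Data Framework or TDS API; the length-prefixed framing, the port number, and the function names are assumptions of the sketch), a dump process sending event blocks over TCP and a display-side client reading them could look like this:

    # Illustrative sketch only; NOT the real HLT Data Framework / TDS API.
    # Framing, port, and names are hypothetical.
    import socket
    import struct

    def serve_events(blocks, host="0.0.0.0", port=42000):
        # Dump side: send each event block length-prefixed over one TCP connection.
        with socket.create_server((host, port)) as srv:
            conn, _ = srv.accept()
            with conn:
                for block in blocks:                     # block: raw bytes of one event
                    conn.sendall(struct.pack("!I", len(block)) + block)

    def receive_events(host, port=42000):
        # Display side: read length-prefixed blocks until the sender closes.
        with socket.create_connection((host, port)) as conn:
            while True:
                header = conn.recv(4)
                if len(header) < 4:
                    return
                (size,) = struct.unpack("!I", header)
                payload = b""
                while len(payload) < size:
                    chunk = conn.recv(size - len(payload))
                    if not chunk:
                        return
                    payload += chunk
                yield payload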

Page 4: HLT  Data Challenge

Full TPC (36 slices) on 188 nodes (I)

• Hardware Configuration
  – 188 nodes with Dual 3.2 GHz Intel Xeon CPUs

• Framework Configuration
  – Compiled in debug mode, no optimizations
  – Setup per slice (6 incoming DDLs):
    • 3 nodes for cluster finding, each node with 2 filepublisher processes and 2 cluster finding processes
    • 2 nodes for tracking, each node with 1 tracking process
  – 8 Global Merger processes, merging the tracks of the 72 tracking nodes
    (the node bookkeeping for this setup is sketched below)
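The node budget implied by this per-slice setup can be checked with a short bookkeeping sketch (the assumption that each Global Merger process occupies its own node is mine; the slide does not state where the GM processes run):

    # Node bookkeeping for the full-TPC setup described above.
    # Assumption (not stated on the slide): one node per Global Merger process.
    SLICES = 36
    CF_NODES_PER_SLICE = 3   # each with 2 filepublisher + 2 cluster finding processes
    TR_NODES_PER_SLICE = 2   # each with 1 tracking process
    GM_PROCESSES = 8

    cf_nodes = SLICES * CF_NODES_PER_SLICE        # 108 cluster finding nodes
    tr_nodes = SLICES * TR_NODES_PER_SLICE        # 72 tracking nodes
    total_nodes = cf_nodes + tr_nodes + GM_PROCESSES
    print(cf_nodes, tr_nodes, total_nodes)        # 108 72 188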

Page 5: HLT  Data Challenge

Full TPC (36 slices) on 188 nodes (II) – Framework Setup

HLT Data Framework setup for 1 slice

[Diagram: HLT Data Framework setup for one slice – simulated TPC data enters via the DDLs, one patch per DDL; clusterfinder (CF) processes on the CF nodes process the patches, the tracking (TR) nodes track the slice, and Global Merger (GM) processes merge the results and feed the Online Display.]

Page 6: HLT  Data Challenge

Full TPC (36 slices) on 188 nodes (III)

• Empty Events
  – Real data format, empty events, no hits/tracks
  – Rate approx. 2.9 kHz after tracking
  – Limited by the filepublisher processes (per-publisher budget sketched below)
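A rough back-of-envelope on what 2.9 kHz means per filepublisher, assuming (as the per-slice setup suggests) one filepublisher per DDL/patch, so that every publisher handles every event:

    # Per-filepublisher budget at the observed empty-event rate.
    # Assumption: one filepublisher per DDL/patch, so each publisher sees every event.
    rate_hz = 2.9e3                     # observed rate after tracking
    budget_ms = 1e3 / rate_hz           # ~0.34 ms per event per filepublisher
    publishers = 36 * 6                 # 36 slices x 6 DDLs = 216 filepublisher processes
    print(f"{budget_ms:.2f} ms/event, {publishers} filepublisher processes")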

Page 7: HLT  Data Challenge

Full TPC (36 slices) on 188 nodes (IV)

• Simulated Events
  – Simulated pp data (14 TeV, 0.5 T)
  – Rate approx. 220 Hz after tracking
  – Limited by the tracking processes
    • Solution: use more nodes (a rough scaling estimate is sketched below)
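A naive linear-scaling estimate of "use more nodes": with 2 tracking nodes per slice delivering about 220 Hz, and assuming the tracking stage scales roughly linearly with nodes per slice (my assumption; the 1 kHz target is only an example, not a number from the slides):

    # Naive linear-scaling estimate for the tracking stage.
    import math

    observed_rate_hz = 220.0            # measured with 2 tracking nodes per slice
    tracking_nodes_per_slice = 2
    target_rate_hz = 1000.0             # example target only, not from the slides

    needed_per_slice = math.ceil(tracking_nodes_per_slice * target_rate_hz / observed_rate_hz)
    print(needed_per_slice, needed_per_slice * 36)   # 10 nodes per slice, 360 tracking nodes in total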

Page 8: HLT  Data Challenge

Conclusion of Full TPC Test

• The main bottleneck is the processing of the data itself
• The system is not limited by the HLT data transport framework
• The tests were limited by the number of available nodes

Page 9: HLT  Data Challenge

"Test Setup"

Page 10: HLT  Data Challenge

Clusterfinder Benchmarks (CFB)

• pp events: 14 TeV, 0.5 T
• Number of events: 1200
• Iterations: 100
• TestBench: SimpleComponentWrapper (a generic timing-loop sketch follows the node list)
• TestNodes:
  – HD ClusterNodes e304, e307 (PIII, 733 MHz)
  – HD ClusterNodes e106, e107 (PIII, 800 MHz)
  – HD GatewayNode alfa (PIII, 1.0 GHz)
  – HD ClusterNode eh001 (Opteron, 1.6 GHz)
  – CERN ClusterNode eh000 (Opteron, 1.8 GHz)
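The per-patch timings on the following slides come from running this configuration in the SimpleComponentWrapper test bench. Purely as an illustration of such a measurement loop (this is not the actual test bench code; `run_component` and `my_clusterfinder` are hypothetical stand-ins), the average time per event over 1200 events and 100 iterations could be taken like this:

    # Generic benchmark loop in the spirit of the CFB setup (1200 events, 100 iterations).
    import time

    def benchmark(run_component, events, iterations=100):
        # Time `iterations` passes over all events; return the average ms per event.
        start = time.perf_counter()
        for _ in range(iterations):
            for event in events:
                run_component(event)
        elapsed = time.perf_counter() - start
        return 1e3 * elapsed / (iterations * len(events))

    # e.g. average_ms = benchmark(my_clusterfinder, simulated_pp_events)  # hypothetical names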

Page 11: HLT  Data Challenge

CFB – Signal Distribution per patch

Page 12: HLT  Data Challenge

CFB – Cluster Distribution per patch

Page 13: HLT  Data Challenge

CFB – PadRow / Pad Distribution

Page 14: HLT  Data Challenge

CFB – Timing Results (I)

Page 15: HLT  Data Challenge

CFB – Timing Results (II)

(all times in ms)

CPU                Patch 0   Patch 1   Patch 2   Patch 3   Patch 4   Patch 5   Average
Opteron 1.6 GHz      2.93      3.92      2.73      2.96      2.93      2.90      3.06
Opteron 1.8 GHz      3.96      5.32      3.66      3.98      3.94      3.99      4.13
PIII 1.0 GHz         4.95      6.65      4.51      4.90      4.87      4.81      5.11
PIII 800 MHz         6.04      8.10      5.64      6.12      6.06      6.01      6.33
PIII 733 MHz         6.57      8.82      6.14      6.67      6.61      6.54      6.90
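The single-process event rate each patch can sustain follows directly from these times as 1000 / t[ms]; a small sketch using the Opteron 1.6 GHz row:

    # Single-process event rates implied by the Opteron 1.6 GHz row above.
    patch_ms = [2.93, 3.92, 2.73, 2.96, 2.93, 2.90]
    rates_hz = [1e3 / t for t in patch_ms]
    print([round(r) for r in rates_hz])   # [341, 255, 366, 338, 341, 345]

Patch 1 is consistently the slowest on every CPU, which ties in with the per-patch adjustment discussed on the next slide.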

Page 16: HLT  Data Challenge

CFB – Conclusion / Outlook

• Learned about the different needs of each patch

• The number of processing components has to be adjusted to the particular patch (a sizing sketch follows below)
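As a sketch of that adjustment, the number of clusterfinder instances a patch needs can be sized from its measured time and a target event rate (the 1 kHz target below is only an example, not a number from these slides):

    # Sizing clusterfinder instances per patch from the measured per-patch times.
    import math

    # Measured Opteron 1.6 GHz times per patch, in ms (from the table on slide 15).
    patch_ms = {0: 2.93, 1: 3.92, 2: 2.73, 3: 2.96, 4: 2.93, 5: 2.90}
    target_hz = 1000.0                    # example target rate only

    instances = {p: math.ceil(t * 1e-3 * target_hz) for p, t in patch_ms.items()}
    print(instances)                      # {0: 3, 1: 4, 2: 3, 3: 3, 4: 3, 5: 3}

Under these example numbers patch 1 would need one more clusterfinder instance than the other patches, which is exactly the kind of per-patch adjustment the conclusion points to.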