
Experience on the Deployment of Geant4 on FKPPL VO

November 28, 2008

Soonwook Hwang, Sunil Ahn, Namgyu Kim (KISTI e-Science Division)

Jungwook Shin, Se Byeong Lee (National Cancer Center)

Introduction to KISTI

Organization: 8 divisions, 3 centers, 3 branch offices

Personnel: about 340 regular staff and 100 part-time workers

Annual revenue: about 100M USD, mostly funded by the government

Outline

Introduction to EGEE, the world's largest Grid infrastructure

FKPPL VO Grid Testbed

Deployment of Geant4 on FKPPL VO

Demo

EGEE (Enabling Grids for E-SciencE)

• The largest multi-disciplinary grid infrastructure in the world

Objectives
• Build a large-scale, production-quality grid infrastructure for e-Science
• Available to scientists 24/7

EGEE grid infrastructure
• 300 sites in 50 countries
• 80,000 CPU cores
• 20 PBytes of storage
• 10,000 users

(World map of EGEE sites, highlighting FNAL, KISTI, IN2P3, INFN, Taiwan, and San Diego)

112 cores, 56 TB, 13 services; operated in collaboration with KISTI, IN2P3, CERN, and ASGC; about 1,000 participants from 86 institutes in 26 countries; 96.7% availability, 22,502 jobs processed


KISTI ALICE Tier2 Center

EGEE middleware: gLite

gLite service groups (from the gLite architecture diagram):

Data Management: Metadata Catalog, Storage Element, Data Movement, File & Replica Catalog

Information & Monitoring: Information & Monitoring, Application Monitoring

Security: Authorization, Authentication, Auditing

Workload Management: Computing Element, Workload Management, Job Provenance, Package Manager

Access: API, CLI; plus Accounting and Site Proxy services

A framework for building grid applications, tapping into the power of distributed computing and storage resources.

gLite main components

User Interface (UI): the place where users log on to access the Grid

Computing Element (CE): a batch queue on a site's computers where the user's job is executed

Storage Element (SE): provides (large-scale) storage for files

Resource Broker (RB) / Workload Management System (WMS): matches the user's requirements with the available resources on the Grid

Information System: characteristics and status of CEs and SEs

File and Replica Catalog: location of grid files and grid file replicas

Logging and Bookkeeping (LB): log information of jobs

Basic gLite use case: job submission

1. The user logs on to the User Interface (UI) and creates a proxy; the VO Management Service (a DB of VO users) is queried to authorize it.
2. The job (executable plus small input files) is submitted from the UI to the Resource Broker.
3. The Resource Broker queries the Information System (where each site's CE and SE publish their state) and the File and Replica Catalog, then submits the job to a matching Computing Element at some site X.
4. The Computing Element processes the job, reading input files from and registering output files to the Storage Element; job status is logged in the Logging and Bookkeeping service.
5. The user queries the job status and retrieves the (small) output files back through the UI.
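On the command line, the "create proxy" step corresponds to voms-proxy-init. A minimal sketch, assuming the VO name registered for this testbed; the command only exists on a configured gLite UI, so it is guarded to be harmless elsewhere:

```shell
# Create a VOMS proxy for the FKPPL VO before submitting any job.
# voms-proxy-init is only available on a gLite UI, so guard the sketch.
if command -v voms-proxy-init >/dev/null 2>&1; then
    voms-proxy-init --voms fkppl.kisti.re.kr
    # Show the attributes and remaining lifetime of the proxy
    voms-proxy-info --all
fi
```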

FKPPL VO Testbed

Goal

Background: collaborative work between KISTI and CC-IN2P3 in the area of Grid computing under the framework of FKPPL.

Objectives:
(short-term) to provide a Grid testbed to the e-Science summer school participants, keeping their attention on Grid computing and e-Science by allowing them to submit jobs and access data on the Grid
(long-term) to support the other FKPPL projects by providing a production-level Grid testbed for the development and deployment of their applications on the Grid

Target users: FKPPL members, 2008 Seoul e-Science Summer School participants

FKPPL VO Testbed

Service  Host                                    Site
UI       kenobi.kisti.re.kr                      KISTI
VOMS     palpatine.kisti.re.kr                   KISTI
WMS/LB   snow.kisti.re.kr                        KISTI
SE       ccsrm02.in2p3.fr (0.5 TB)               CC-IN2P3
SE       hansolo.kisti.re.kr (1.5 TB)            KISTI
CE       cclcgceli03.in2p3.fr (5,000 CPU cores)  CC-IN2P3
CE       darthvader.kisti.re.kr (100 CPU cores)  KISTI

(Diagram: the FKPPL VO spanning the KISTI and IN2P3 sites, with UI, VOMS, WMS, LFC, and Wiki services plus a CE and an SE at each site)

VO Registration Detail

Official VO Name fkppl.kisti.re.kr

Description: VO dedicated to the joint research projects of FKPPL (France-Korea Particle Physics Laboratory), under a scientific research programme in the fields of high-energy physics (notably LHC and ILC) and e-Science, including bioinformatics and related technologies

Information about the VO https://cic.gridops.org/index.php?section=vo

Progress

2008.8.30  "fkppl.kisti.re.kr" VO registration done
2008.9.15  UI and VOMS installation and configuration done
2008.9.30  WMS/LB installation and configuration done
2008.10.10 SE configuration done
2008.10.15 FKPPL VO service opened; FKPPL Wiki site opened

FKPPL VO Usage

Application porting support on FKPPL VO:
Geant4: detector simulation toolkit; working with the National Cancer Center
WISDOM: MD part of the WISDOM drug discovery pipeline; working with the WISDOM team

Support for FKPPL member projects

Grid testbed for the e-Science school (Seoul e-Science Summer School)

Seoul e-Science Summer School 2008

The first intensive training course in Korea to bring operators, developers, and researchers together (2 weeks)

Lectures and hands-on sessions given directly by leading researchers from France, Switzerland, Taiwan, and elsewhere

About 60 participants per day, mainly from Korean universities and research institutes

Included the results of joint development between KISTI and IN2P3 (France) and CERN (Switzerland)

How to access resources in FKPPL VO Testbed

Get your certificate issued by the KISTI CA: http://ca.gridcenter.or.kr/request/certificte_request.php

Join the FKPPL VO: https://palpatine.kisti.re.kr:8443/voms/fkppl.kisti.re.kr

Get a user account on the UI node for the FKPPL VO: send an email to the system administrator at [email protected]

User Support

FKPPL VO Wiki site http://anakin.kisti.re.kr/mediawiki/index.php/FKPPL_VO

User accounts on the UI machine: 17 user accounts have been created

FKPPL VO registration: 4 users have been registered so far

Contact information:
Soonwook Hwang (KISTI), Dominique Boutigny (CC-IN2P3): responsible persons, [email protected], [email protected]
Sunil Ahn (KISTI), Yonny Cardenas (CC-IN2P3): technical contacts, [email protected], [email protected]
Namgyu Kim: site administrator, [email protected]
Sehoon Lee: user support, [email protected]

Deployment of Geant4 on FKPPL VO

Geant4 Installation

Two pieces of software are required for building Geant4:
gcc 3.2.3 or 3.4.5 (or later)
CLHEP: base libraries providing vector manipulation, four-vector tools, etc.

Getting CLHEP: http://proj-clhep.web.cern.ch/proj-clhep/DISTRIBUTION/distributions/clhep-2.0.3.1.tgz

CLHEP Installation

To run Geant4 on the Grid, it is recommended to compile CLHEP as a static library:

$ ./configure --prefix=${NCCAPPS}/clhep/2.0.3.1 --disable-shared
$ make; make install; make install-docs
$ ls ${NCCAPPS}/clhep/2.0.3.1/lib
libCLHEP-2.0.3.1.a                   libCLHEP-Geometry-2.0.3.1.a
libCLHEP.a                           libCLHEP-Matrix-2.0.3.1.a
libCLHEP-Cast-2.0.3.1.a              libCLHEP-Random-2.0.3.1.a
libCLHEP-Evaluator-2.0.3.1.a         libCLHEP-RandomObjects-2.0.3.1.a
libCLHEP-Exceptions-2.0.3.1.a        libCLHEP-RefCount-2.0.3.1.a
libCLHEP-GenericFunctions-2.0.3.1.a  libCLHEP-Vector-2.0.3.1.a

How to access Geant4 material/interaction data on the Grid?

Option 1: stage all the necessary material data in on the grid node on which the Geant4 application is to be run.

Option 2: allow remote access to the material data from grid nodes through a global file system such as AFS.
Advantage: no need to modify the source code. When submitting G4 applications to the grid, all we need to do is set an environment variable to the AFS directory:
export G4DATA_HOME=/afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data

We chose the second option on FKPPL VO and put all the material/interaction data on AFS, in the directory /afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data

How to access the ROOT I/O library on the grid?

We tried to use a static library for ROOT, but it failed for some reason. Currently we use the ROOT library on the AFS system.

Location of the ROOT directory on the CERN AFS system: /afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root

Geant4 applications need to be compiled and built against the shared ROOT library on AFS. When submitting jobs to the grid, we need to set an environment variable to the AFS directory:
ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
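As a sketch of that setup, the environment can be prepared as below before building. The PATH/LD_LIBRARY_PATH lines are an assumption of typical usage, and the root-config call (a tool shipped with ROOT that reports compile and link flags) is guarded since it only exists where the AFS release is visible:

```shell
# Shared ROOT release on the CERN AFS area (path quoted in the slides)
export ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
export PATH=${ROOTSYS}/bin:${PATH}
export LD_LIBRARY_PATH=${ROOTSYS}/lib:${LD_LIBRARY_PATH}
# root-config reports the flags needed to compile and link against this ROOT
if command -v root-config >/dev/null 2>&1; then
    root-config --cflags --libs
fi
```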

Use case: running the GTR2_com code on the FKPPL VO

Overview of GTR2_com
Application name: GTR2_com, a Geant4 application for proton therapy simulation developed by NCC
GTR2: Gantry Treatment Room #2; com: commissioning (the GTR2 simulation code is currently in its commissioning phase)
Libraries: Geant4, ROOT (root.cern.ch) as the simulation output library

/user/io/OpenFile root B6_1_1_0.root
/GTR2/SNT/type 250
/GTR2/SNT/aperture/rectangle open
# Geant4 kernel initialize
/run/initialize
/GTR2/FS/lollipops 9 5
/GTR2/SS/select 3
/GTR2/RM/track 5
/GTR2/RM/angle 80.26
/GTR2/VC/setVxVy cm 14.2 15.2
/beam/particle proton
/beam/energy E MeV 181.8 1.2
/beam/geometry mm 3 5
/beam/emittance G mm 1.5
/beam/current n 3000000
# SOBP
/beam/bcm TR2_B6_1 164
/beam/juseyo
/user/io/CloseFile

(Diagram: user macro → GTR2_com → output)

The input to GTR2_com is the nozzle configuration: it reads a macro file specifying this configuration and writes the dose distribution produced by the resulting proton beam as a ROOT file of 3D histograms.

GTR2_com Code

G4 application name: GTR2_com

User input file: QA_11_65_8_*.mac

Output file for analysis from a simulation run: QA_11_65_8_*.root

Execution of GTR2_com

On Local machine

$ GTR2_com QA_11_65_8_24.mac > std_out 2> std_err
(waiting)
$ ls
std_out  std_err  QA_11_65_8_24.root

On local cluster

$ vi cluster_run.sh    # write the script file
$ qsub cluster_run.sh  # submit a G4 job to the local scheduler
(waiting)
$ ls
std_out  std_err  QA_11_65_8_24.root
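The slide does not reproduce cluster_run.sh itself; a minimal sketch of what such a PBS batch script could look like (the #PBS directives and the guard are assumptions, not the NCC script):

```shell
#!/bin/sh
# cluster_run.sh -- sketch of a PBS batch script for one GTR2_com run
#PBS -N GTR2_com
#PBS -o std_out
#PBS -e std_err
# PBS starts jobs in $HOME; go back to the submission directory
cd "${PBS_O_WORKDIR:-.}"
# Run the simulation only if the executable is actually present
if [ -x ./GTR2_com ]; then
    ./GTR2_com QA_11_65_8_24.mac
fi
```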

Execution of GTR2_com (cont’d)

On Grid

$ vi grid_run.jdl   # job description file (contents omitted on the slide)
$ vi grid_run.sh    # shell script that runs on the grid (contents omitted on the slide)
$ glite-wms-job-submit -a -o jobid grid_run.jdl
$ glite-wms-job-status -i jobid
$ glite-wms-job-output --dir myresult -i jobid
$ ls ./myresult
grid_run.out  grid_run.err  QA_11_65_8_24.root

Example macro

Shell Script to be run on Grid
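The actual script was shown as an image and is not in this transcript; what follows is a minimal sketch of what grid_run.sh could look like, using only the AFS paths and file names quoted earlier in these slides (the LD_LIBRARY_PATH line and the guards are assumptions):

```shell
#!/bin/sh
# grid_run.sh -- sketch of a wrapper script executed on the grid worker node
# Point Geant4 at the material/interaction data published on AFS
export G4DATA_HOME=/afs/in2p3.fr/grid/toolkit/fkppl.kisti.re.kr/geant4/data
# Use the shared ROOT library from the CERN AFS release area
export ROOTSYS=/afs/cern.ch/sw/lcg/release/ROOT/5.20.00/slc4_ia32_gcc34/root
export LD_LIBRARY_PATH=${ROOTSYS}/lib:${LD_LIBRARY_PATH}
# Files from the InputSandbox arrive without the execute bit set
[ -f ./GTR2_com ] && chmod +x ./GTR2_com
# Run the simulation with the user macro shipped in the sandbox;
# stdout/stderr are captured by the WMS as grid_run.out / grid_run.err
if [ -x ./GTR2_com ]; then
    ./GTR2_com QA_11_65_8_24.mac
fi
```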

Example of JDL File

Type = "Job";
JobType = "Normal";
Executable = "/bin/sh";
Arguments = "grid_run.sh";
StdOutput = "grid_run.out";
StdError = "grid_run.err";
InputSandbox = {"grid_run.sh", "GTR2_com", "QA_11_65_8_24.mac"};   // QA_11_65_8_24.mac: NCC user macro
OutputSandbox = {"grid_run.err", "grid_run.out", "QA_11_65_8_24.root"};   // QA_11_65_8_24.root: ROOT output
ShallowRetryCount = 1;

GTR2_com Execution on FKPPL VO

1. Log on to the UI node and prepare the necessary files: GTR2_com, the JDL file, the macro file, and the shell script

2. Submit the job to the grid: glite-wms-job-submit -a -o jobid JDL_file

3. Check the job status: glite-wms-job-status -i jobid

4. Get the output: glite-wms-job-output --dir result_dir -i jobid

Do I have to write thousands of JDL files to run thousands of G4 applications on the Grid?

No, you can submit thousands of jobs to the grid with only one JDL file.
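The slides do not show how this is done; one gLite WMS mechanism for it is a parametric job, where a single JDL expands into many subjobs and the token _PARAM_ is substituted into each. A sketch, reusing the file-name pattern from the earlier example (the parameter range is illustrative):

```
Type = "Job";
JobType = "Parametric";
Parameters = 1000;       // _PARAM_ runs from ParameterStart up to Parameters-1
ParameterStart = 0;
ParameterStep = 1;
Executable = "/bin/sh";
Arguments = "grid_run.sh QA_11_65_8__PARAM_.mac";
StdOutput = "grid_run__PARAM_.out";
StdError = "grid_run__PARAM_.err";
InputSandbox = {"grid_run.sh", "GTR2_com", "QA_11_65_8__PARAM_.mac"};
OutputSandbox = {"grid_run__PARAM_.out", "grid_run__PARAM_.err", "QA_11_65_8__PARAM_.root"};
ShallowRetryCount = 1;
```

Each subjob then receives its own macro and returns its own ROOT file, so one submission covers the whole batch.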

Demo

Distribution of subjobs' completion times on FKPPL VO

The GTR2_com applications were submitted to the grid at 18:05.

Thank you for your attention