High Performance Computing Center at JINR
LCTA “Status Report” Release 1, 1997



Telecommunication systems:
* External communication channels (Internet);
* High-speed JINR backbone.

Systems for powerful computations and mass data processing:
* General high-performance server;
* Clusters of workstations of JINR laboratories and experiments;
* Computing farms (PC farms).

Data storage system:
* File server system based on AFS;
* Mass storage system;
* Information servers and database servers.

Software support systems:
* System for application creation and maintenance;
* Visualization systems.

[Diagram: JINR ATM backbone. A central ATM switch interconnects the laboratories; backbone links run at 155 Mb/s and laboratory links at 10-100 Mb/s. The backbone is served by a network manager, an auxiliary network manager, and a route server.]

[Chart: usage statistics for the CONVEX by division: LCTA, LNP, BLTP, LHE, LPP, FLNR, FLNP.]

[Chart: usage statistics for the VAX cluster by division: LCTA, FLNR, LNP, BLTP, LHE, LPP.]

Scalar-vector system CONVEX C3840
Interfaces: Ethernet, SCSI, DAT, 9-track tape
Operating System: ConvexOS 11.5.0
Number of processors: 4
Performance (peak): 200 MFlops per CPU
Memory: 2 GB; Disks: 48 GB

Scalar system Hewlett-Packard S-class SPP2000
Interfaces: ATM, 100BaseT Ethernet
Operating System: SPP-UX 10.20
Number of processors: 8; Processor type: PA8000
Performance (peak): 800 MFlops per CPU
Memory: 2 GB; Disks: 64 GB

Software: batch processing (NQS), C, C++, Fortran, Parallel Scientific Library, Message Passing Interface (MPI), Parallel Virtual Machine (PVM), parallel application development tools.

Data Management System: D-class server + ATL 2640 Automated Tape Library

D-class server
Interfaces: ATM, Ethernet, CD, SCSI
Software: HP OpenView OmniBack II, HP OpenView OmniStorage
Operating System: HP-UX 10.20
Number of processors: 2; Processor type: PA8000
Memory: 512 MB; Disks: 36 GB

ATL 2640
Number of cartridges: 264
Capacity per cartridge: 20 GB / 40 GB
Total capacity: 5.28 TB / 10.56 TB
Number of drives: 3
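The quoted total library capacity follows directly from the cartridge count; a minimal sketch of the arithmetic, using only the figures stated above:

```python
# Cross-check the ATL 2640 capacity figures quoted above.
cartridges = 264
native_gb = 20       # native capacity per cartridge, GB
compressed_gb = 40   # compressed capacity per cartridge, GB

native_tb = cartridges * native_gb / 1000
compressed_tb = cartridges * compressed_gb / 1000
print(native_tb, compressed_tb)  # 5.28 10.56
```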

A Computing Farm to Solve the Problems of Simulation and Homogeneous Physical Data Mass Processing

The main goal: to construct at JINR a PC farm offering new possibilities for the simulation, mass processing and analysis of physical experiment data.
The main result: JINR physicists will have a real possibility to participate on equal terms in the data analysis of the large international experimental programmes in high energy physics.
The main requirements:
* CPU time (on a Pentium 200): > 10^9 s/year;
* RAM per processor: up to 128 MB;
* HDD capacity: 200-500 GB;
* Mass storage system: 4-12 DLT drives.
The reason for the choice: the simplest and cheapest solution for very specific problems in high energy physics.
Customers: Laboratory of Particle Physics, Laboratory of High Energy Physics.

Two stages of the project realization:

1. PC-farm segment (2 years): 32 Pentium Pro 200 processors (4 for interactive use); 128 MB per processor; 200 GB of HDD memory (RAID and SCSI); 4 DLT drives; a 100 Mbit Ethernet switch with 16 ports. OS Solaris x86 v2.6 or Linux; Andrew File System (AFS); Network Queue System (NQS); Fortran, C++.
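The CPU-time requirement can be checked against the first-stage configuration; a rough sketch, assuming round-the-clock operation and ignoring scheduling overhead:

```python
# One wall-clock year, in seconds.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31_536_000

processors = 32   # stage-1 PC-farm segment
interactive = 4   # of these, reserved for interactive use

# All 32 processors running flat out just clear the > 1e9 s/year mark;
# the 28 batch processors alone fall slightly short.
total = processors * SECONDS_PER_YEAR
batch_only = (processors - interactive) * SECONDS_PER_YEAR
print(total, batch_only)  # 1009152000 883008000
```

So the segment meets the stated > 10^9 s/year target only at near-full utilization, which is consistent with sizing it as the minimal configuration.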

The total cost of this stage is 187,600 USD. Different financial sources are needed to realize this stage of the project: the fund of the Russian Ministry of Science and Technologies subprogramme "Perspective Information Technologies", the JINR budget, and special funds of the physics collaborations.

2. Full PC farm (3-4 years): 128 Pentium processors (8 for interactive use); 128-256 MB per processor; 500 GB of HDD memory; a mass storage system with 12 DLT drives; new Ethernet technology; new software technology.

The estimated total cost of the equipment for this stage is about 260,000 USD. Additional financial sources and funds are needed.

SUN/SPARC CLUSTER at JINR as OS SOLARIS ENVIRONMENT SERVER

The cluster ultra.jinr.dubna.su runs under OS Solaris 2.5.1.

Site licenses for JINR on Fortran (F77 4.0) and C++ (4.1) provide all JINR specialists with complete working conditions in the OS Solaris environment, including the use of current versions of CERNLIB.

The site licenses on F77 4.0 and C++ 4.1 are available for any JINR hosts running under OS Solaris.

The latest versions of many FSF/GNU products widely used at JINR are properly installed on the cluster and can be used for installation on any other JINR hosts running under OS Solaris.

CMS CLUSTER at JINR

Hardware: 3 SPARC stations (one UltraSPARC station, model 140, and two SPARCstation 20s)

Disk Space: 24 GB

Software: OS Solaris 2.5.1, C-4.0, C++-4.1, F77-4.0 compilers

Number of users: 37

The computational environment is the same as at the CERN CMS cluster (cms.cern.ch)

The CMS cluster at JINR supports both simulation and data-processing tasks.

MAIN DATA BASES

1. "Physics Information Servers and Data Bases": Accelerators DB.
2. JINR Library reference DB system.
3. DB of JINR IP addresses.
4. Russian WWW servers DB.
5. Central services DB for administrative and economic activities:
* Publishing Department DB;
* STAFF;
* Topical Plan;
* etc.

GENERAL WWW SUPPORT

"Physics Information Servers and Data Bases":
* Physics servers and services around the world, giving access to a physics encyclopedia, educational resources, news in physics, data and tables of physical constants, et cetera;
* Physics institutions around the world;
* Physics conferences, workshops and summer schools;
* Publishing offices' home pages, giving access to book and journal catalogues, their tables of contents and abstracts of articles, etc.;
* High Energy Physics (HEP) section: current news, educational materials, information about HEP centres, programs and service packages for data analysis and modelling, detector simulation, event generators, etc.;
* Low and Medium Energy Physics;
* CMS-RDMS Web site.

Main JINR Web site:
* General INFO;
* Main documents;
* Journal "JINR Rapid Communications";
* Newspaper "DUBNA";
* JINR Information Bulletin;
* JINR Scientific Programme for the Years 1997-1999;
* etc.

LCTA main Web site.

RDMS CMS WWW-SERVER

The CMS information system is heavily based on the World Wide Web (WWW).

The web server http://sunct2.jinr.dubna.su was designed at LCTA and contains information on the activities of the RDMS CMS collaboration.

This web server was adopted as the official web server of the RDMS CMS collaboration by the RDMS CMS Collaboration Board in June 1997.

The CERN CMS web servers CMSDOC and CMSINFO now carry references to the RDMS CMS web server http://sunct2.jinr.dubna.su.

LCTA is responsible for the further development and support of the RDMS CMS web server.

PARTICIPATION IN RUSSIAN INTERDEPARTMENTAL PROGRAMMES

1. "Creation of a National Network of Telecommunications for Science and Higher Schools":
- development of the RBNet network and terrestrial channels (section 3.3.2 of the Programme, jointly with ROSNIIROS);
- development of the RUHep network: creation of the "Dubna-Moscow" link (jointly with the Research Institute for Nuclear Physics, MSU);
- head organization within the BAPHYS project: creation of a database network for nuclear physics.

2. Working out an interdepartmental programme "Creation of high-performance computer centres in Russia" (funds have been allocated for organization of HPCC at JINR).

3. A project "A computing farm for solving the problems of modeling and mass processing of homogeneous physical information" has been worked out for participation in the programme "Advanced Information Technologies".

PLAN FOR THE YEAR 1998

* To finish the ATM project on creation of ATM-based backbones in all JINR laboratories.
* To provide reliable operation of the external communication links by means of a tender selection of the service provider. In view of the limited funds, the most reasonable strategy for development of JINR's external computer communications involves assisting in the creation of a Moscow node of the high-speed European backbone (project TEN-34), as well as a high-speed communication link to the USA within the projects on development of the national network of computer telecommunications for science and higher schools.
* To start up the HPCC at JINR at its first stage: SPP2000 + DLT robot + mass memory system.
* To realize the first step in creation of the computing farm.
* To work out a project on organizing an electronic message-exchange complex within JINR, using a unified database of all registered JINR users.
* To install an electronic libraries server based on the HPC servers (full-text databases, photo-archive databases, physical databases, etc.).
* To design a centralized backup system for general JINR servers using the HPC facilities.
* To pursue a policy of software standardization and licensing for creation of workplaces for users of the JINR networking informational-computational infrastructure (NICE95/NT).
* Further development of algorithms and methods for the research under way at JINR in the field of nuclear physics.

Development Strategy

* Link to Europe/USA: from 2 Mbps to 155 Mbps.
* JINR LAN: full transfer to ATM.
* Mass storage: from gigabytes to petabytes.
* Tele/videoconferencing: at least one room in each laboratory.
* PC farms for large experiments: ATLAS, CMS, STAR, etc.
* High-performance computing (non-budgetary sources).
* OO and CASE tools; standard software, algorithms and methods.
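As a rough illustration of what the planned external-link upgrade buys, here is a sketch of the idealized transfer time for 1 GB of data (1 GB taken as 10^9 bytes, no protocol overhead assumed):

```python
# Time to move 1 GB over the external link, before and after the
# planned upgrade from 2 Mbps to 155 Mbps (ideal link, no overhead).
GIGABYTE_BITS = 1e9 * 8

for mbps in (2, 155):
    seconds = GIGABYTE_BITS / (mbps * 1e6)
    print(f"{mbps} Mbps: {seconds:.0f} s per GB")
```

At 2 Mbps a gigabyte takes over an hour (4000 s); at 155 Mbps it takes under a minute.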

LCTA expenses for the development of the JINR informational-computational infrastructure in 1997 (thousands of USD)

Equipment ordered or purchased / services provided | Price of the ordered equipment | Expenses attributed to LCTA | Means of financing at LCTA
Modernization (internal and external memory upgrade) of the database server AlphaServer 2100; installation of a DLT 200 GB external magnetic tape memory | 22.86 | 22.86 | Fee from Germany
Purchase of a SUN UltraSPARC 1 workstation | 6.08 | 6.08 | Fee from Germany
Order for ATM telephone-communication equipment from the NewBridge company | 116.4 | 30.0 | Fee from Germany
Purchase of management software products for the ATM technology | 56.4 | 5.64 | Fee from Germany
Purchase of a Hewlett-Packard OpenView workstation for JINR LAN management | 27.2 | 27.2 | Fee from Germany
Payments for the network providers' services | 73.5 | 8.8 | Topic 1019
Expansion of the CONVEX computer cluster; introduction into service of the C3840 machine | 200.0 | 0 | Transferred for free use according to a protocol with Rossendorf
Order of a high-performance S-class system from Hewlett-Packard | 250.0 | 0 | Grant of the Ministry of Science
Installation of networking equipment for the 2 Mb/s computer communication channel | 50.0 | 0 | Transferred free of charge according to a protocol with RosNIIROS
Construction of a 2 Mb/s fibre-optic channel JINR – University – SCC2 | 23.0 | 0 | At the expense of funds from the Ministry of Science within the interdepartmental programme
Rent fee for the 2 Mb/s computer channel | 4.9 per month | 0 | Funded by the Ministry of Science in frames of the JINR – RosNIIROS protocol
Total | 830.34 | 100.58 |
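The "Total" row is simply the column sums; a quick sketch to verify them (figures in thousands of USD, with the monthly rent counted once, as in the table):

```python
# Price of ordered equipment, per table row, in thousands of USD.
prices = [22.86, 6.08, 116.4, 56.4, 27.2, 73.5, 200.0, 250.0, 50.0, 23.0, 4.9]
# Expenses attributed to LCTA, per row (zeros for externally funded items).
lcta = [22.86, 6.08, 30.0, 5.64, 27.2, 8.8, 0, 0, 0, 0, 0]

print(round(sum(prices), 2), round(sum(lcta), 2))  # 830.34 100.58
```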

In accordance with the Memorandum of Understanding on collaboration between the Laboratory of Computing Techniques and Automation of the Joint Institute for Nuclear Research, Dubna, Russia, and the SPS/LEP Division of the European Organization for Nuclear Research, Geneva, Switzerland, the following was done during 1997:

1. TDM monitoring system: http://hpslz24.cern.ch:8080/cgi-bin/tdm_protect.vi
2. MOPOS timing diagnostics system: http://hpslz24.cern.ch:8080/examples/ex_mopos.html
3. LabVIEW on the Web: http://hpslz24.cern.ch:8080/
4. TCP/IP-based message handler (LV_mhm): http://hpslz24.cern.ch:8080/others/rem_exec.html
5. Synchronisation of a LabVIEW application with an accelerator's cycle: http://hpslz24.cern.ch:8080/sync/LV_sync.html
6. Object-oriented technology in LabVIEW programming (report at ICALEPCS'97, Beijing, 1-7.11.1997): http://hpslz24.cern.ch:8080/others/LV_oop.html

Future developments and projects for 1998 (a few points have already been done):

1. BEAM LOSS MONITORING system: http://hpslz24.cern.ch:8080/examples/BML.html
2. COMMON LABVIEW ENVIRONMENT for the operational machines: http://hpslz24.cern.ch:8080/others/LV_slops.html
3. Evaluation of ACE: http://hpslz24.cern.ch:8080/others/LV_ace.html