2017/12/06
1
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures
2018 Call for Proposal of Joint Research Projects
The Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures
(JHPCN) calls for joint research projects for fiscal year 2018. JHPCN is a "Network-Type" Joint
Usage/Research Center, certified by the Minister of Education, Culture, Sports, Science and
Technology, and comprises eight supercomputer-equipped centers affiliated with Hokkaido
University, Tohoku University, The University of Tokyo, Tokyo Institute of Technology, Nagoya
University, Kyoto University, Osaka University, and Kyushu University, among which the
Information Technology Center at The University of Tokyo functions as the core institution of JHPCN.
Each JHPCN member institution will provide its research resources for joint research projects.
Researchers on an adopted joint research project can use the provided research resources
within the granted amount, free of charge. In addition, expenses for the publication or presentation of
research results such as overseas travel expenses may be supported. Every eligible project
should use research resources provided by JHPCN member institutions or its research team
should include researchers affiliated with JHPCN member institutions. The available research
resources include computers, storage systems, visualization systems, and so on. Some of them can be
connected to facilities inside and outside the JHPCN member institutions via high-speed
networks for the exchange, accumulation, and processing of data. It is also possible
to connect research resources of JHPCN member institutions with SINET5 L2VPN.
Part of the High Performance Computing Infrastructure (HPCI) system, referred to as the HPCI-
JHPCN system, is also available for joint research projects. Applications for research
projects that use the HPCI-JHPCN System are invited via the HPCI online application system
and will be selected in line with our reviewing policy (see Section 6, "Research Project
Reviews").
The aim of JHPCN is to contribute to the advancement and permanent development of
academic and research infrastructure of Japan by implementing joint research projects that
require large scale information infrastructures and that address Grand Challenge-type
problems, thus far considered extremely difficult to solve, in the following fields: Earth
environment, energy, materials, genomic information, web data, academic information, time-
series data from sensor networks, video data, program analysis, applications of large-capacity
networks, and other fields in information technology. Since the JHPCN member
institutions have enrolled leading researchers, joint research projects are expected to be
accelerated through collaboration with them. These joint research projects (for
the fiscal year 2018) will be implemented from April 2018 to March 2019.
1. Joint Research Areas
This call for joint research projects will adopt interdisciplinary research projects in the following four areas:
very large-scale numerical computation, very large-scale data processing, very large capacity
network technology, and very large-scale information systems. Approximately 60† joint research
projects will be adopted.
†Counted as the total number of JHPCN member institutions used by the adopted research projects.
(1) Very large-scale numerical computation
This includes scientific and technological simulations in scientific/engineering fields such as
Earth environment, energy, and materials, as well as modeling, numerical analysis algorithms,
visualization techniques, information infrastructure, etc., to support these simulations.
(2) Very large-scale data processing
This includes the processing of genomic information, web data (including Wikipedia, news
sites, and blogs), academic information content, time-series data from sensor networks, and
high-level multimedia information processing for streaming data such as video footage, as
well as program analysis, access and search, information extraction, statistical and semantic
analysis, data mining, machine learning, etc.
(3) Very large capacity network technology
This includes control and assurance of network quality for very large-scale data sharing,
monitoring and management required for construction and operation of very large capacity
networks, assessment and maintenance of the safety of such networks, a large-scale stream
processing framework taking advantage of very large capacity networks as well as the
development of various technologies for the support of research. Research dealing with large-
scale data processing using the large capacity network is classified in this area.
(4) Very large-scale information systems
This combines each of the above-mentioned areas, entailing exascale computer architectures,
software for high-performance computing infrastructures, grid computing, virtualization
technology, cloud computing, etc.
Please pay special attention to the following points when applying.
① We will only be accepting interdisciplinary joint research proposals that will involve
cooperation among researchers from a wide range of disciplines. For example, we
presume that a research team in the “very large-scale numerical computation” area will
consist of researchers from the computer and computational sciences who work together in a
cooperative and complementary manner. In this area, therefore, we invite joint research
projects with researchers who will be solving problems in application fields using computers
and those who will be conducting research in computer science, such as on algorithms,
modeling, and parallel processing. It is not mandatory to use the HPCI-JHPCN system or
other research resources provided by JHPCN member institutions. In other words, a joint
research project with no use of provided research resources can be acceptable.
② We particularly appreciate the following categories of joint research projects.
● Research projects in close cooperation with multiple JHPCN member institutions
Taking advantage of the “network” of JHPCN, a project in this category should use
research resources and/or involve researchers of multiple JHPCN member institutions. For
example, relevant research topics may include, but are not limited to, large-scale and
geographically distributed information systems and multi-platform implementations of
applications using research resources provided by multiple JHPCN member institutions.
● Research projects using both large-scale data and large capacity networks
The available research resources include those that can be directly connected to a very
wide bandwidth network provided by SINET5, including L2VPN, in cooperation with the
National Institute of Informatics. Therefore, research can be conducted that depends upon
a very wide bandwidth network. A project in this category should require massive data
transfer, using the very wide bandwidth network, between research facilities located at
involved researchers’ working places and at JHPCN member institutions or between those
at JHPCN member institutions. Please refer to Attachment 2 for possible examples of
research topics in this category.
2. Types of Joint Research Projects
Under the premise of the interdisciplinary joint research project structure mentioned above, joint
research projects to be invited are as follows.
(1) General Joint Research Projects (approximately 80% of the total number of accepted
projects will be in this type)
(2) International Joint Research Projects (approximately 10% of the total number of
accepted projects will be in this type)
International joint research projects are conducted in conjunction with foreign researchers to
address challenging problems that cannot be resolved or clarified by researchers in Japan
alone. For such research projects, there will be a certain amount of
subsidies paid to cover travel expenses incurred for holding meetings with foreign joint
researchers during the fiscal year of commencement. For details, please contact our office
once your research project has been accepted.
(3) Industrial Joint Research Project (approximately 10% of the total number of accepted
projects will be in this category)
Industrial joint research projects are interdisciplinary projects focused on industrial
applications.
A research proposal submitted to (2) International Joint Research Project or (3) Industrial Joint
Research Project might be adopted as a (1) General Joint Research Project. Further, we invite
applications in each of these three types for “research projects that use the HPCI-JHPCN
System” and “research projects that do not use the HPCI-JHPCN System.”
If you require the assistance and cooperation of researchers and research groups that are
affiliated with JHPCN member institutions in the computer science field, please complete
Section 7 of the application form describing the specific requirements in detail. We will arrange
for as much assistance and cooperation as possible.
In the application, please designate the university (JHPCN member institution) with which
you seek collaboration. You may also name several institutions. If it is difficult for you to
designate one, we can decide on the corresponding research center(s) on your behalf after
considering the research proposal. Please be sure to discuss this with our contact person (listed
in Section 10) in advance.
3. Application Requirements
Applications must be made by a Project Representative. Furthermore, the Project
Representative and the Deputy Representative(s), as well as any other joint researchers of a
project, must fulfill the following conditions.
① The Project Representative must be affiliated with an institution in Japan (university,
national laboratory, private enterprise, etc.) and must be in a position to obtain the
approval of his/her institution (or its representative head).
② At least one Deputy Representative must be a researcher in a different academic field
from that of the Project Representative. There may be two or more Deputy
Representatives, and one of them can be in charge of HPCI face-to-face identity vetting.
③ A graduate student can participate in a project as a joint researcher, but an
undergraduate cannot. A graduate student cannot become the Project Representative or a
Deputy Representative of the project. If a non-resident member, as defined by the Foreign
Exchange and Foreign Trade Act, is going to use computers, a researcher affiliated with
the JHPCN member institutions equipped with them must participate as a joint
researcher.
International joint research projects must, in addition to the above-mentioned ①–③, fulfill the
following conditions (④ and ⑤).
④ At least one researcher affiliated with a research institution outside Japan must be
named as a Deputy Representative. Furthermore, an application must be made using
the English Application Form.
⑤ A researcher affiliated with the JHPCN member institutions that you designate as the
"Desired University for Joint Research" must participate as a joint researcher.
Industrial joint research projects must, in addition to the above-mentioned ①–③, fulfill the
following conditions (⑥ and ⑦).
⑥ The Project Representative must be affiliated with a private company, excluding
universities and national laboratories.
⑦ At least one researcher affiliated with the JHPCN member institutions that you designate
as the "Desired University for Joint Research" must be named as a Deputy Project
Representative.
4. Joint Research Period
April 1, 2018 to March 31, 2019.
Depending on conditions for preparing computer accounts, the commencement of computer
use may be delayed.
5. Facility Use Fees
The research resources listed in Attachment 1 can be used within the granted amount, free of
charge.
6. Research Project Reviews
Reviews will be conducted by the Joint Research Project Screening Committee, which
comprises faculty members affiliated with JHPCN member institutions as well as external
members, and the HPCI Project Screening Committee, which comprises industry, academic,
and government experts. Research project proposals will be reviewed in both a general and
technical sense for their scientific and technological validity, their facility/equipment
requirements and the validity of these requirements, and their potential for development. The
feasibility of resource requirements at, and cooperation/collaboration with, the JHPCN member
institutions that you designate as the "Desired University for Joint Research" will also be subject to
review. In addition, how relevant a proposal is to its type of joint research project will be
considered.
Furthermore, for projects continuing from the previous fiscal year and projects determined to
have substantial continuity, an assessment of the previous year’s interim report and previous
usage of computer resources may be considered during the screening process.
7. Notification of Adoption
We expect to notify applicants of the review results by mid-March 2018.
8. Application Process
1. Please note that the following application procedures differ for “Research projects that use
the HPCI-JHPCN System” and “Research projects that do not use the HPCI-JHPCN System”
(The HPCI-JHPCN system is described in Attachment 1 (1)).
2. In particular, for “Research projects that use the HPCI-JHPCN System,” the Project
Representative (and the Deputy Representative who will submit the proposal or who will be in
charge of HPCI face-to-face identity vetting on behalf of the Project Representative) and all
joint researchers that will use the HPCI-JHPCN system must have obtained their HPCI-ID
prior to the application.
3. For international joint research projects, an English application form must be completed.
(1) Application Procedure
After completing the Research Project Proposal Application Form obtained from the
JHPCN website (https://jhpcn-kyoten.itc.u-tokyo.ac.jp/en/) and completing the online
submission of the electronic application, please print and press seals onto the application
form and mail it to the Information Technology Center, The University of Tokyo (address
provided in Section 10). For details on the application process, please consult the JHPCN
website and the “HPCI Quick Start Guide.”
The summary of the application process is shown below.
I: For “Research Projects that use the HPCI-JHPCN System”
① Download the Research Project Proposal Application Form (MS Word format) from
the JHPCN website and complete it. In parallel, the Project Representative (and the
Deputy Representative who will submit the proposal or who will be in charge of HPCI
face-to-face identity vetting on behalf of the Project Representative) and all joint
researchers who will use the HPCI-JHPCN System must obtain their HPCI-IDs
unless they already have them. For the online entries to be made as per the requirements of
step ②, it is necessary to register the HPCI-IDs of the Project Representative, the
Deputy Representative acting on behalf of Project Representative, and all joint
researchers who will use the HPCI-JHPCN System. Joint researchers who are not
registered at this stage will not be issued with computer user accounts (HPCI
account, local account) and thus, will not be able to use the HPCI-JHPCN system.
② Access the “Research Projects that use the HPCI-JHPCN System” on the Project
Application page of the above-mentioned JHPCN website. You will be transferred to
the HPCI Application Assistance System where after filling out the necessary items,
you should upload a PDF file of the Research Project Proposal Application Form
(without seal) created in ①.
※ For “Research Projects that use the HPCI-JHPCN System” the project
application system on the JHPCN website will not be used.
③ Print and press seals onto the Research Project Proposal Application Form created
in ①, and mail it to the address provided in Section 10 (Contact Information).
◆ If your project is adopted, please follow the “Procedures after the awarding of the
proposal” stipulated by the HPCI (https://www.hpci-office.jp/pages/e_taimen.html). In
particular, face-to-face identity vetting must be provided responsibly by the Project
Representative or a Deputy Representative, who may have to bring copies of
photographic identification of all joint researchers who will use the HPCI-JHPCN system.
(Note that contact persons such as secretaries are ineligible for face-to-face identity
vetting.)
II: For “Research Projects that do not use the HPCI-JHPCN System”
① Download the Research Project Proposal Application Form (MS Word format) from
the JHPCN website and complete it.
② Access the “Research Projects that do not use the HPCI-JHPCN System” on the
project application page of the above-mentioned JHPCN website. After filling out
the necessary items, you should upload a PDF file of the Research Project
Proposal Application Form (without seal) created in ①. An e-mail “Notification of
Receipt” will be sent to the e-mail address that you registered on the Research
Project Application page.
※ Applications for “Research Projects that do not use the HPCI-JHPCN System” do
not require the HPCI Application Assistance System. Therefore, it is not
necessary to obtain HPCI-IDs.
③ Print and press seals onto the Research Project Proposal Application Form
created in ①, and mail it to the address provided in Section 10 (Contact
Information).
(2) Application Deadline
Online Registration (submission of PDF application form): January 9, 2018 (Tue.)
17:00 [Mandatory]
Submission of printed application form: January 15, 2018 (Mon.) 17:00 [Mandatory]
The above deadlines must be followed for submission of the Research Project Proposal
Application Forms. However, if submission of the printed documents created in (1) Ⅰ③ and (1) Ⅱ
③ above will be delayed, please contact the JHPCN office (details in Section 10) in advance.
(3) Points to remember when creating the Research Project Proposal Application Form
① Applicants must comply with rules and regulations of the JHPCN member institutions
whose computer resources will be used in the joint research project.
② Research resources must be used only for the purpose of the adopted research project.
③ The proposal must be for a project with peaceful purposes.
④ The proposal should give due consideration to human rights and the protection of interests.
⑤ The proposal must comply with the Ministry of Education, Culture, Sports, Science and
Technology’s “Approach to Bioethics and Safety”.
⑥ The proposal must comply with the Ministry of Health, Labour and Welfare's "Guidelines
on Medical and Health Research."
⑦ The proposal must comply with the Ministry of Economy, Trade and Industry’s
“Concerning Security Trade Management.”
⑧ Applicants must complete a course in Responsible Conduct of Research (RCR).
Programs on RCR, including lectures, e-learning, and training workshops, are conducted
at each of the affiliated organizations (including CITI Japan e-learning). If your affiliated
organization does not provide any RCR programs, please contact our office. Researchers
who are eligible to apply for the Japanese Grant-in-Aid for Scientific Research awarded by
the Ministry of Education, Culture, Sports, Science and Technology and the Japan Society
for the Promotion of Science will be considered as eligible if they note their “Kakenhi”
Researcher Code in the application form. In addition, researchers who were recently
awarded research grants that required RCR training will also be considered eligible if
they present evidence.
⑨ After being accepted, applicants must comply with the deadlines for submitting a
research introduction poster and various reports.
9. Other Important Notices
(1) Submission of a written oath
Research groups whose research projects have been adopted will be expected to submit a
written oath pledging adherence to the contents of the above-mentioned “(3) Points to
remember when creating the Research Project Proposal Application Form” of “8. Application
Process.” The specific process of submission will be provided post-adoption, but a sample of
it is provided on the website for those applicants who wish to familiarize themselves with the
requirements of the oath.
(2) Regulations for use of facilities
While using the facilities, you are expected to follow the regulations for use pertaining to
research resources as stipulated by the JHPCN member institutions with which you will
work.
(3) Types of reports
The submission of research introduction posters, interim reports, and final reports is
required. Each report must be submitted by the designated deadline. Research introduction
posters as well as the final reports, in particular, will be expected to be suitable for
disclosure (see the JHPCN website for details). For International Joint Research Projects,
these reports should, in principle, be submitted in English. Furthermore, you will be asked to
introduce the research through a poster presentation at the JHPCN symposium (where the
results of research projects from the previous year will be presented orally), which is expected
to be held during the summer.
(4) Disclaimer
Each JHPCN member institution assumes no responsibility for any disadvantages incurred by
applicants as a consequence of the joint research projects.
(5) Handling of intellectual property rights
In principle, any intellectual property that arises as a result of a research project will belong
to each of the research groups involved. However, it is presumed that any recognition of
inventors will be in accordance with each institution’s policy concerning intellectual property
rights. Please contact each JHPCN member institution for details and handling of other
exceptional matters.
(6) RCR training
If a participant in an accepted project (excluding students) cannot be confirmed to have
completed a program pertaining to RCR or equivalent (for example, eligibility for the
Japanese Grant-in-Aid for Scientific Research that is awarded by the Ministry of Education,
Culture, Sports, Science and Technology, or the Japan Society for the Promotion of
Science), this individual’s use of computers may be suspended until the completion of such
a program can be confirmed.
(7) Acknowledgement
Upon publication of the results of an accepted project, the author(s) should indicate in the
Acknowledgements that the project was supported by JHPCN (see the JHPCN website for
an example sentence).
(8) Other
① Personal information provided by this proposal shall only be used for screening research
projects and providing system access.
② After the adoption of a research project, however, the project name and the
name/affiliation of the Project Representative will be disclosed.
③ After a research project has been adopted, neither the JHPCN member institutions to be
worked with nor the computers to be used can be changed.
④ If you wish to discuss your application, please contact us at the e-mail address listed in
Section 10 (please note in advance that we are not able to respond to telephone-based
inquiries).
10. Contact information (for inquiries about application, etc.)
● For inquiries about application
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office
E-mail address: jhpcn.adm@gs.mail.u-tokyo.ac.jp
● For available resources, how to use resources, details of eligibility, faculty members who can
participate in joint research projects, and the handling of intellectual property of each institution,
please feel free to directly contact the following.
JHPCN member institutions e-mail address
Information Initiative Center, Hokkaido University [email protected]
Cyberscience Center, Tohoku University [email protected]
Information Technology Center, The University of Tokyo [email protected]
Global Scientific Information and Computing Center,
Tokyo Institute of Technology [email protected]
Information Technology Center, Nagoya University [email protected]
Academic Center for Computing and Media Studies,
Kyoto University [email protected]
Cybermedia Center, Osaka University [email protected]
Research Institute for Information Technology,
Kyushu University [email protected]
◆ Mailing address for the Research Project Proposal Application Form
2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8658
Information Technology Center, The University of Tokyo
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office
(You may use the abbreviations, “Information Technology Center Office,” or the “JHPCN Office”)
Attachment 1
List of research resources available at the JHPCN member institutions for the Joint Research
Projects
Research resources that can be directly connected via SINET5 L2VPN, provided by the National
Institute of Informatics, are annotated as "L2VPN ready."
(1) List of the HPCI-JHPCN system resources available for Joint Research Projects
(Table columns: JHPCN institution | Computational resources, type of use | Estimated number of adopted projects)
Information Initiative Center, Hokkaido University
① Supercomputer HITACHI SR16000/M1
(Max. 4 nodes, four months per project)
Available 2018/04 – 2018/07 (due to system renewal)
《Hardware resources》
168 nodes, 5,376 physical cores, total main memory capacity 21 TB, 164.72 TFLOPS (shared with general users; MPI parallel processing of up to 128 nodes per job is possible)
1) Calculation time: 27,000,000 s; files: 2.5 TB
2) Calculation time: 2,700,000 s; files: 0.5 TB
(Assumed amount of resources: 166,666,667 s and 48 TB when combining 1) and 2))
《Software resources》
【Compilers】Optimized FORTRAN90 (Hitachi), XL Fortran (IBM), XL C/C++ (IBM)
【Libraries】MPI-2 (without the dynamic process creation function), MATRIX/MPP, MATRIX/MPP/SSS, MSL2, NAG SMP Library, ESSL, Parallel ESSL, BLAS, LAPACK, ScaLAPACK, ARPACK, FFTW 2.1.5/3.2.2
【Application software】Gaussian
② Cloud System Blade Symphony BS2000
Available 2018/04 – 2018/10 (due to system renewal)
《Hardware resources》
Virtual server ("S"/"M"/"L" server)
2 nodes, 80 cores
Equal to 8 units of the "L" server ("S" and "M" servers may also be used)
(Max. 4 units, seven months per project)
Physical server ("XL" server)
1 unit (40 cores, memory 128 GB, disk 2 TB)
《Usage》
L2VPN ready: only for the "XL" server (a VPN service is available for "S"/"M"/"L" servers via CloudStack).
③ Data Science Cloud System HITACHI HA8000
《Hardware resources》
1) Physical server: 2 units (20 cores, memory 80 GB, hard disk 2 TB)
Estimated number of adopted projects: ①+②: 10; ③: max. 2; ④+⑤: 8; ⑥: 4
《Usage》
L2VPN Ready
④ New Supercomputer A 2019/01 - 2019/03 (due to system renewal)
《Hardware resources》
(Max. 32 node years per 1 project)
About 900 nodes, 36,000 physical cores, total main memory capacity 337 TB, 2.5 PFLOPS (shared with general users; MPI parallel processing of up to 128 nodes per job is possible)
1) Calculation time: 15,000,000 s; files: 3 TB
(Assumed amount of resources: 3,000,000,000 s and 600 TB for multiple sets of 1))
《Software resources》
【Compilers】Fortran, C/C++
【Libraries】MPI, Numerical libraries
⑤ New Supercomputer B 2019/01 - 2019/03 (due to system renewal)
《Hardware resources》
(Max. 16 node years per 1 project)
About 400 nodes, 24,000 physical cores, total main memory capacity 43 TB, 1.2 PFLOPS (shared with general users; MPI parallel processing of up to 64 nodes per job is possible)
1) Calculation time: 15,000,000 s; files: 3 TB
(Assumed amount of resources: 1,500,000,000 s and 600 TB for multiple sets of 1))
《Software resources》
【Compilers】Fortran, C/C++
【Libraries】MPI, Numerical libraries
⑥ New Cloud System 2018/12 – 2019/3 (due to system renewal)
《Hardware resources》
1) Physical server: 5 nodes (40+ cores, memory 256+ GB, disk 2+ TB)
2) Intercloud package: 1 set (physical servers, one each installed at Hokkaido University, The University of Tokyo, Osaka University, and Kyushu University, connected via SINET VPN)
3) Virtual server ("L" server): 8 nodes (10 cores, memory 60+ GB, disk 500+ GB)
《Usage》
L2VPN Ready
Cyberscience Center, Tohoku University
① Supercomputer SX-ACE (2,560 nodes)
《Hardware resources》
48 node years / project
About 707 TFLOPS, main memory 160 TB, maximum number of nodes 1,024, shared use
《Software resources》
【Compilers】FORTRAN90/SX, C++/SX, NEC Fortran2003
【Libraries】MPI/SX, ASL, ASLSTAT, MathKeisan
Estimated number of adopted projects: 10
② Supercomputer LX406Re-2 (68 nodes)
《Hardware resources》
12 node years / project
About 31.3 TFLOPS, main memory 8.5 TB, maximum number of nodes 24, shared use
《Software resources》
【Compilers】Intel Compiler(Fortran, C/C++)
【Libraries】MPI, IPP, TBB, MKL, NumericFactory
【Application software】Gaussian16
③ Storage 4PB
《Hardware resources》
20 TB / project (possible to add more)
Information Technology Center, The University of Tokyo
① Reedbush-U (Intel Broadwell-EP cluster, High-speed file cache system (DDN IME))
《Hardware resources》
Maximum tokens for each project: 16 node-years, storage 16 TB (138,240 node-hours). Options: node-occupied service, customized login nodes, L2VPN ready (negotiable)
《Software resources》
【Compilers】Intel Fortran, C, C++
【Libraries】MPI, BLAS, LAPACK/ScaLAPACK, FFTW, PETSc, METIS/ParMETIS, SuperLU/SuperLU_DIST
【Application software】OpenFOAM, ABINIT-MP, PHASE, FrontFlow/Blue,
FrontFlow/Red, REVOCAP, ppOpen-HPC, BioPerl, BioRuby, BWA, GATK, SAMtools, K MapReduce, Spark
② Reedbush-H (Intel Broadwell-EP+NVIDIA P100 (Pascal) cluster (2 GPU per node), High-speed file cache system (DDN IME))
《Hardware resources》
Maximum tokens for each project: 8 node-year, Storage 32TB (69,120 node-hour)
《Software resources》
【Compilers】Intel Fortran, C, C++, PGI Fortran, C, C++ (with Accelerator
support), CUDA Fortran, CUDA C
【Libraries】GPUDirect for RDMA: Open MPI, MVAPICH2-GDR, cuBLAS,
cuSPARSE, cuFFT, MAGMA, OpenCV, ITK, Theano, Anaconda, ROOT, TensorFlow
【Application software】Torch, Caffe, Chainer, GEANT4
③ Reedbush-L (Intel Broadwell-EP+NVIDIA P100 (Pascal) cluster (4 GPU per node), High-speed file cache system (DDN IME))
《Hardware resources》
Maximum tokens for each project: 4 node-years, storage 16 TB (34,560 node-hours). Options: node-occupied service, customized login nodes, L2VPN ready (negotiable)
《Software resources》
Same as Reedbush-H
④ Oakforest-PACS (Intel Xeon Phi 7250 (Knights Landing))
《Hardware resources》
Estimated number of adopted projects: 10
Maximum tokens for each project: 64 node-year, Storage 16TB (552,960 node-hour)
《Software resources》
【Compilers】Intel Fortran, C, C++
【Libraries】MPI, BLAS, LAPACK/ScaLAPACK, FFTW, PETSc, METIS/ParMETIS, SuperLU/SuperLU_DIST
【Application software (planned)】OpenFOAM, ABINIT-MP, PHASE, FrontFlow/Blue, FrontFlow/Red, REVOCAP, ppOpen-HPC
Global Scientific Information and Computing Center, Tokyo Institute of Technology
① Cloudy, Big-Data and Green Supercomputer "TSUBAME3.0"
《Hardware resources》
The TSUBAME3.0 system includes 540 compute nodes, providing 12.15 PF of performance in total (CPU: 15,120 cores, 0.70 PF + GPU: 2,160 slots, 11.45 PF). The maximum system available at a time is 50% of the full system (shared use). The total provided resource is 230 units (= 230,000 node-hours; 1 unit = 1,000 node-hours). The maximum resource for each project is 27 units (= 3.125 node-years). Please specify not only the total amount of resources but also the quarterly amounts. The maximum storage is 300 TB for each project. Ensuring 1 TB of storage for one year requires 120 node-hours of resource; resources should be requested accordingly.
《Software resources》
【OS】SUSE Linux Enterprise Server 12 SP2
【Compilers】Intel Compiler (C/C++/Fortran), PGI Compiler (C/C++/Fortran, OpenACC, CUDA Fortran), Allinea FORGE, GNU C, GNU Fortran, CUDA, Python, Java SDK, R
【Libraries】OpenMP, MPI (Intel MPI, OpenMPI, SGI MPT), BLAS, LAPACK, cuDNN, NCCL, PETSc, FFTW, PAPI
【Application software】Gaussian, GaussView, AMBER (only for academic users), CST MW-STUDIO (only for Industrial Joint Research Projects), Caffe, Chainer, TensorFlow, Apache Hadoop, ParaView, POV-Ray, VisIt, GAMESS, CP2K, GROMACS, LAMMPS, NAMD, Tinker, OpenFOAM, ABINIT-MP
Information Technology Center, Nagoya University
(1) Fujitsu PRIMEHPC FX100
《Hardware resources》
3.2PF (2,880 nodes, 92,160 cores, 92TiB memory)
《Software resources》
【OS】FX100 dedicated OS (Red Hat Enterprise Linux for login nodes)
【Compilers】Fortran, C, C++, XPFortran
【Libraries】MPI, BLAS, LAPACK, ScaLAPACK, SSL II, C-SSL II, SSL II/MPI, NUMPAC, FFTW, HDF5
【Application software】Gaussian, LS-DYNA, AVS/Express, AVS/Express PCE (parallel version), EnSight Gold, IDL, ENVI, ParaView, Fujitsu Technical Computing Suite (HPC Portal)
(2) Fujitsu PRIMERGY CX400/2550
《Hardware resources》
447TF (384 nodes, 10,752 cores, 48TiB memory)
《Software resources》
【OS】 Red Hat Enterprise Linux
【Programming languages】Fujitsu Fortran/C/C++/XPFortran, Intel Fortran/C/C++, Python (2.7, 3.3)
【Libraries】MPI (Fujitsu, Intel), BLAS, LAPACK, ScaLAPACK, SSL II (Fujitsu), C-SSL II, SSL II/MPI, MKL (Intel), NUMPAC, FFTW, HDF5, IPP (Intel)
【Application software】LS-DYNA, ABAQUS, ADF, AMBER, GAMESS, Gaussian, Gromacs, LAMMPS, NAMD, HyperWorks, OpenFOAM, STAR-CCM+, Poynting, AVS/Express Dev/PCE, EnSight Gold, IDL, ENVI, ParaView, Fujitsu Technical Computing Suite (HPC Portal)
(1)+(2)+(3)+(4): 10
(3) Fujitsu PRIMERGY CX400/270
《Hardware resources》
279TF (184 nodes + 184 Xeon Phi, 4,416 + 10,488 cores, 23TiB memory)
《Software resources》
Same as the software resources of (2) Fujitsu PRIMERGY CX400/2550.
(4) SGI UV2000
《Hardware resources》
24 TF (1,280 cores, 20 TiB shared memory, 10 TB memory-based file system)
3D visualization system: high-resolution 8K display system (185-inch, 16-panel tiled display), full-HD circularly polarized binocular vision system (150-inch screen, projector, circularly polarized eyeglasses, etc.), head-mounted display system (MixedReality, VICON, etc.), dome-type display system
《Software resources》
【OS】 SuSE Linux Enterprise Server
【Programming languages】Intel Fortran/C/C++, Python (2.6, 3.5)
【Libraries】MPT (SGI), MKL (Intel), PBS Professional, FFTW, HDF5, NetCDF, scikit-learn, TensorFlow
【Application software】AVS/Express Dev/PCE, EnSightDR, IDL, ENVI (SARscape), ParaView, POV-Ray, NICE DCV (SGI), ffmpeg, ffplay, osgviewer, VMD (Visual Molecular Dynamics)
Maximum resource allocation per project: 15 units or 86 node-years. 1 unit corresponds to 50,000 node-hours (equivalent to a 180,000-yen charge). Large-scale storage can be used by converting 5,000 node-hours into 10 TB. All resources are shared with general users. Use of the 3D visualization system costs 100,000 yen per research project (equivalent to an additional contribution).
Academic Center for Computing and Media Studies, Kyoto University
Cray XC40 (Camphor 2: Xeon Phi KNL/node)
《Hardware resources》
① 128 nodes, 8,704 cores, 390.4 TFLOPS, throughout the year (each project is assigned up to 48 nodes throughout the year)
② 128 nodes, 8,704 cores, 390.4 TFLOPS × 20 weeks (each project is assigned up to 128 nodes × 4 weeks)
(The available amount of computing resources is adjusted among the projects.)
《Software resources》
【Compilers】Fortran2003/C99/C++03(Cray, Intel)
【Libraries】MPI, BLAS, LAPACK, ScaLAPACK
【Application Software】Gaussian09
《Usage》
L2VPN ready (negotiable).
① ②:5
Cybermedia Center, Osaka University
(i) Supercomputer SX-ACE
《Hardware resources》 (i)(ii)(iii): 10
- Resource per project: up to 22 node-years
- Computational nodes: 512 nodes (141 TFLOPS), provided up to 280,000 node-hours in shared use
- Storage per project: 20 TB
《Software resources》
- 【Compilers】C++/SX, FORTRAN90/SX, NEC Fortran 2003 compiler
- 【Libraries】ASL, ASLQUAD, ASLSTAT, MathKeisan (including BLAS, BLACS, LAPACK, ScaLAPACK), NetCDF
(ii) PC Cluster for large-scale visualization (possible to link with the large-scale visualization system)
《Hardware resources》
- Resource per project: up to 6 node years
- Computational nodes: 69 nodes (CPU: up to 31.1 TFLOPS, GPU: up to 69.03 TFLOPS), provided up to 53,000 node-hours in shared use. Any number of GPU resources can be attached to each node based on user requirements. Simultaneous use of this system and the large-scale visualization system is possible based on user requirements.
- Storages per project: 20 TByte
《Software resources》
- 【Compilers】Intel Compiler (C/C++, Fortran)
- 【Libraries】Intel MKL, Intel MPI(Ver.4.1)
- 【Application software】OpenFOAM, GROMACS (for CPU/for GPU), LAMMPS (for CPU/for GPU), Gaussian09, AVS/Express PCE, AVS/Express MPE (Ver.8.2)
(iii) OCTOPUS
《Hardware resources》
- Resource per project: general-purpose CPU nodes: up to 35 node-years; GPU nodes: up to 5 node-years; Xeon Phi nodes: up to 6 node-years; large-scale shared-memory nodes: up to 0.3 node-years
- Computational nodes:
General-purpose CPU nodes: 236 nodes (CPU: 471.1 TFLOPS), provided up to 310,000 node-hours in shared use. Because the processors on these nodes are the same as those of the GPU nodes, users can use 273 nodes for a single job.
GPU nodes: 37 nodes (CPU: 73.9 TFLOPS, GPU: 784.4 TFLOPS), provided up to 49,000 node-hours in shared use.
Xeon Phi nodes: 44 nodes (CPU: 117.1 TFLOPS), provided up to 58,000 node-hours in shared use.
Large-scale shared-memory nodes: 2 nodes (CPU: 16.4 TFLOPS, memory: 12 TB), provided up to 2,600 node-hours in shared use.
- Storage per project: 20 TB
《Software resources》
[Compilers] Intel Compiler (FORTRAN, C, C++), GNU Compiler (FORTRAN, C, C++), PGI Compiler (FORTRAN, C, C++), Python 2.7/3.5, R 3.3, Julia, Octave, CUDA, XcalableMP, Gnuplot
[Libraries] Intel MPI, OpenMPI, MVAPICH2, BLAS, LAPACK, FFTW, GNU Scientific Library, NetCDF 4.4.1, Parallel netCDF 1.8.1, HDF5 1.10.0
[Application software]
NICE Desktop Cloud Visualization, Gaussian16, GROMACS, OpenFOAM, LAMMPS, Caffe, Theano, Chainer, TensorFlow, Torch, GAMESS, ABINIT-MP, Relion
Research Institute for Information Technology, Kyushu University
① ITO Subsystem A (Fujitsu PRIMERGY)
《Hardware Resources》
1) (Nearly dedicated use) The maximum resource allocated per project is 32 nodes for a year; most of the resources are dedicated to the project. 32 nodes (1,152 cores), 110.59 TFLOPS
2) (Shared use) Up to 64 nodes can be used at the same time per project, shared with general users. 64 nodes (2,304 cores), 221.18 TFLOPS
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), Fujitsu Compiler
② ITO Subsystem B (Fujitsu PRIMERGY)
《Hardware Resources》
(Nearly dedicated use) The maximum resource allocation per project is 16 nodes for a year; most of the resources are dedicated to the project.
16 nodes (576 cores), CPU 42.39 TFLOPS + GPU 339.2 TFLOPS, including SSD
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), Fujitsu Compiler,
CUDA
③ ITO Frontend (Virtual server / Physical server)
《Hardware Resources》
Standard Frontend:
1 node (36 cores), CPU 2.64 TFLOPS, memory 384 GiB, GPU (NVIDIA Quadro P4000)
Large Frontend:
1 node (352 cores), CPU 12.39 TFLOPS, shared memory 12 GiB, GPU (NVIDIA Quadro M4000)
《Software Resources》
【Compilers】Intel Cluster Studio XE(Fortran, C, C++), CUDA
Storage per project: 10 TB (possible to add)
Regarding ①/②, a server for pre/post-processing or visualization is
available (Standard Frontend)
①1):2
2):8
②:3
③:5
(2) Other facilities/resources available for Joint Research Projects
The following facilities/resources, despite not being part of the HPCI-JHPCN system, are available
for Joint Research Projects.
JHPCN Institution
Computational Resources, Type of Use
Estimated number of Projects adopted
Information Initiative Center, Hokkaido University
《Hardware resources》
(1) Large-format printer
《Software resources》
《Usage》
12
Cyberscience Center, Tohoku University
《Hardware resources》
(1) Large-format printer
(2) 3D visualization system (12-screen large stereoscopic display system (1920×1080 (Full HD) 50-inch projection modules × 12 screens), visualization servers × 4, 3D visualization glasses)
《Software resources》
AVS/Express MPE(3D visualization software)
《Usage》
・ L2VPN Ready
・ On-demand L2VPN
10
Information Technology Center, the University of Tokyo
《Hardware resources》
(1) FENNEL (Real-time Data Analysis Nodes) × 4: at maximum four VMs or one bare-metal server are provided to each group; the provided VMs or bare-metal server are dedicated to the group.
(2) GPGPU (NVIDIA Tesla M60), available on request; 10 Gbps network access
(3) Network Attached Storage (NAS), 150 TB; 10 Gbps network access
《Software resources》
Analysis nodes
【OS】Ubuntu 16.04, Ubuntu 17.04, CentOS 7.4
【Programming Language】Python, Java, R
【Application Software】Apache Hadoop, Apache Spark, Apache Hive, Facebook Presto, Elasticsearch, Chainer, TensorFlow
《Usage》
L2VPN ready
(1) Analysis nodes: you can directly log in to the nodes via the remote network with SSH. You can also mount the NAS as a native file system.
(2) NAS: you can mount it with the iSCSI, NFS, and/or CIFS protocols via SINET L2VPN.
《Other Options》
Housing services 20U Rack-Space (Arm-Type Racking)
3 or 4
Network connectivity (network bandwidth: 10 Gbps; consultation is necessary. Power: AC100V/15A)
Global Scientific Information and Computing Center, Tokyo Institute of Technology
《Hardware resources》
(1) Remote GUI environment: the VDI (Virtual Desktop Infrastructure)
system
If you are planning to use the VDI system, please contact us in advance.
《Software resources》
《Usage》
Information Technology Center, Nagoya University
《Hardware Resources》
(1) Visualization system: remote visualization system, on-site use (for data translation)
(2) Network connection up to 40GBASE. Note: Nagoya University only provides SFP+/QSFP ports, so you must prepare optical modules, including the module for Nagoya University's network switch.
《Software Resources》
(1) Fujitsu PRIMERGY CX400/2550: Python + scikit-learn + TensorFlow; other software executable on Linux
(2) Visualization system: AVS/Express Dev/PCE, EnsightHPC, 3DAVSPlayer, IDL, ENVI (SARscape), ParaView, osgviewer, EasyVR, POV-Ray, 3dsMAX, 3-matic, VMD, LS-prepost
《Usage》
L2VPN ready. You can use an L2VPN network connection when you connect your own system to the network via hardware resource (2). We configure the SINET L2VPN connection to external universities and the on-campus VLAN (with a connection between them).
Academic Center for Computing and Media Studies, Kyoto University
《Hardware resources》
(1) Virtual Server Hosting Standard Spec: CPU 2 Cores, Memory 8GB, Disk capacity 1TB
《Software resources》
(1) 【Hypervisor】VMware
【OS】CentOS7
【Application】Basic software prepared at the center and support for
introducing various open source software.
《Usage》
L2VPN ready. Dedicated VLAN (on-campus) or SINET L2VPN (off-campus) connection.
Cybermedia Center, Osaka University
《Hardware resources》
1. 24-screen Flat Stereo Visualization System
・ Installation location: Umekita Office of the Cybermedia Center, Osaka University
・ 50-inch FHD (1920 × 1080) stereo projection modules: 24 displays; image processing PC cluster: 7 computing nodes
・ * Priority is given to visualization by users of the “PC cluster for large-scale visualization”.
2. 15-screen Cylindrical Stereo Visualization System
・ Installation location: the main building of the Cybermedia Center, Suita Campus, Osaka University
・ 46-inch WXGA (1366 × 768) LCDs: 15 displays
・ Image processing PC cluster: 6 computing nodes * Priority is given to visualization by users of the “PC cluster for large-scale visualization”.
《Software resources》
・ AVS Express/MPE, IDL
・ AVS Express/MPE, IDL
《Usage》
L2VPN Ready
Research Institute for Information Technology, Kyushu University
《Hardware resources》
(1) Visualization environment Visualization server (SGI Asterism), 4K projector, and 4K monitor
《Usage》
L2VPN Ready
Attachment 2
The following are possible examples of “research projects using both large-scale data and large-capacity networks”, described in Attention ② of "1. Joint Research Areas" in the call for proposals. The purpose of this attachment is to show, by example, how research resources provided by JHPCN member institutions may be used. We welcome proposals for joint research projects using both large-scale data and large-capacity networks that are not limited to these examples.
Information Initiative Center, Hokkaido University
Distributed systems (virtual private cloud systems) can be deployed using our cloud system / data science cloud system, federated with computational resources of other universities or the applicant's laboratory and connected via SINET L2VPN or software VPN solutions.
Available resources
Supercomputer system, Cloud system, data science cloud system (c.f. Attachment 1.)
How to use
Dedicated systems can be developed for collaborative research projects employing physical and virtual servers as virtual private clouds. Distributed systems can also be developed by connecting our cloud resources with servers and storage in the users' laboratories as hybrid cloud systems. Users can access the systems not only via ssh/scp but also through a virtual console, provided by the cloud middleware, in a web browser.
Email address for inquiring about resource usage and joint research
Details of anticipated projects
- Experiment data analysis platform in the intercloud environment: constructing a data storage, analysis, and sharing infrastructure employing virtual/real machines and storage of the cloud system / data science cloud system of Hokkaido University, connected to computational resources of other universities via SINET L2VPN.
- Development of a large-scale pre/post-processing environment federating supercomputers and intercloud systems: developing a large-scale distributed processing environment, such as analyzing big data generated by supercomputers using Hadoop clusters and visualizing it on other universities' remote systems.
- An always-on platform to support network-oriented research projects: development of a nation-wide distributed high-speed networking platform employing the cloud system / data science cloud system of Hokkaido University and private clouds of other universities connected via SINET5 L2VPN.
Cyberscience Center, Tohoku University
Cyberscience Center provides vector-parallel and scalar-parallel supercomputers, a distributed data-sharing environment through on-demand L2VPN, and a large-scale remote visualization environment shared with other centers. These environments allow users to share and analyze vast amounts of observed data obtained from sensors or IoT devices. We strongly invite proposals that try to exploit the potential of these environments, for example, joint research on real-time analytics using supercomputers, or on storage/network architectures for large-scale distributed data sharing.
Available resources
《Hardware resources》
・Storage (500TB / project)
・3D visualization system
・Supercomputers (SX-ACE, LX406 Re-2)
・On-demand L2VPN
《Software resources》
【OS】Red Hat Enterprise Linux 6
【Programming languages】
LX406 Re-2: Fortran, C, C++, Ruby, Python, Java, etc.
SX-ACE: Fortran, C, C++
【Application software】
Basic applications provided by Cyberscience Center and original codes developed by users. We also support installing/migrating required software to our systems (please contact us in advance).
How to use
Supercomputers (LX406 Re-2, SX-ACE)
• Log in to the compute nodes using ssh
• Transfer files to the nodes using scp/sftp
Network
• Possible to build L2VPN on SINET5
Storage
• Possible to use remote mount by NFS through L2VPN
Email address for inquiring about resource usage and joint research
Details of anticipated projects (in Japanese)
http://www.ss.cc.tohoku.ac.jp/collabo/net/
Information Technology Center, the University of Tokyo
Available resources and Connectivity to SINET5
(1) Reedbush System
We offer the Reedbush-U, H, and L systems for big-data analyses using the Burst Buffer (DDN IME), which incorporates high-speed, high-capacity SSD storage. In addition, we offer a direct connection via SINET5 L2VPN to the login nodes of Reedbush-U and L (please ask us for the detailed configuration).
Available Resources
《Hardware resources》
Reedbush-U, H, L System
《Software resources》
Refer to description of Reedbush-U, H, L System in Attachment 1.
How to use
You can directly login to the nodes via remote network with SSH.
You can transfer data via remote network with SCP / SFTP.
You can directly access the login node via SINET5 L2VPN (negotiable for Reedbush-U and L).
(2) FENNEL and Network Access Storage
We, the Information Technology Center at the University of Tokyo, offer a big-data analysis infrastructure whose resources a researcher can use exclusively. The key design concept of the infrastructure is to facilitate real-time analysis and the accumulation of intermittently generated data. Further, we also offer an environment that can connect your own resources and ours with the SINET L2VPN service. Anticipated uses of these resources include IoT big-data analysis, SNS message analysis, anomaly detection in network traffic, etc.
Overview of FENNEL
Available Resources
《Hardware resources》
FENNEL (Real-Time Data Analysis Nodes) × 4 nodes
At maximum four VMs or one bare metal server are provided to each group.
The provided VMs or a bare metal server are dedicated to the group.
GPGPU (Nvidia Tesla M60) is available on request.
Network Access Storage (150TB)
10GbE Network Access
Rack Space (with our Housing Service: You can bring your own devices into our
datacenter and connect to the network)
《Software resources》
【OS】Ubuntu 16.04, Ubuntu 17.04, CentOS 7.4
【Programming languages】Python, Java, R
【Application software】Apache Hadoop, Apache Spark, Apache Hive, Apache Impala, Presto, Elasticsearch, Chainer, TensorFlow
How to use
FENNEL (Dedicated and Real-Time Data Analysis Nodes)
You can directly login to the nodes via remote network with SSH.
You can transfer data via remote network with SCP / SFTP.
You can use GPGPU on VM if you want.
Network Access Storage (NAS)
You can mount with iSCSI, NFS and/or CIFS protocol via SINET5 L2VPN.
You can use the data proxy to store data that you obtain from your own sensor devices.
The volumes provided by the storage can be mounted on VMs as native filesystem.
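As an illustration of the mount options above, the commands might look as follows. This is a dry-run sketch: the hostname, export paths, and share names are placeholders (the real values are agreed with the center), and each command is printed rather than executed, so the script is safe to run without root or network access.

```shell
#!/bin/sh
# Dry-run sketch of mounting the NAS over SINET5 L2VPN.
# NAS_HOST and all paths below are placeholders, not real center endpoints.

NAS_HOST="nas.example.ac.jp"
MOUNTPOINT="/mnt/jhpcn-nas"

# Print instead of executing; change the body to "$@" to really run.
run() { echo "+ $*"; }

# NFS mount of an exported project volume
run mount -t nfs "${NAS_HOST}:/export/project" "${MOUNTPOINT}"

# CIFS mount of the same data as a Windows-style share
run mount -t cifs "//${NAS_HOST}/project" "${MOUNTPOINT}" -o username=proj

# iSCSI: discover targets, then log in (a block device then appears under /dev)
run iscsiadm -m discovery -t sendtargets -p "${NAS_HOST}"
run iscsiadm -m node -p "${NAS_HOST}" --login
```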
Email address for inquiring about resource usage and joint research
Details of anticipated projects
https://www.itc.u-tokyo.ac.jp/iiccs/supplement2018-en
GSIC, Tokyo Institute of Technology
Machine learning jobs, especially in deep learning, which has recently attracted great attention, require both storage resources for large-scale I/O data and high-performance computation resources. For these jobs, we provide an environment for large-scale, high-performance machine learning using many GPUs (>2,000 in the whole system) and large storage (up to 300 TB per user group) provided by the TSUBAME3.0 supercomputer. By using pre-installed frameworks that harness GPUs, research projects on large-scale machine learning can be accelerated.
Available Resources
《Hardware resources》
Refer to the description of TSUBAME3.0 in Attachment 1. In particular, 4 Tesla P100 GPUs per node are available.
《Software resources》
Refer to the description of TSUBAME3.0 in Attachment 1. The following items are particularly relevant here:
- OS: SUSE Linux Enterprise Server 12 SP2
- Programming languages: Python, Java SDK, R
- Application software: Caffe, Chainer, TensorFlow
How to use
TSUBAME3.0
Same as regular usage.
Email address for inquiring about resource usage and joint research
Details of anticipated projects
http://www.gsic.titech.ac.jp/en/jhpcn/dl-en
Information Technology Center, Nagoya University
We provide the SGI UV2000 subsystem, a large-scale shared-memory system equipped with various storage systems, for research on very-large-scale data processing. The UV2000 subsystem provides a 10 TB memory-based file system built on its large shared memory, an SSD file system with superior random-access performance, and a 2.9 PB HDD array connected as DAS. Please include your planned quota in the proposal. The subsystem also provides the scikit-learn and TensorFlow libraries for machine learning and deep learning. Additionally, the Fujitsu PRIMERGY CX400 subsystem provides many CPU cores, including Intel Xeon Phi (Knights Corner), making it suitable for data processing that requires substantial CPU power. Anticipated research topics for this subsystem include very-large-scale data processing and real-time, large-scale information collection and summarization from the Internet.
We provide network connections up to 40GBASE for research on very-large-bandwidth network technology. You can connect your own system to the provided network port (with your own optical modules) to create a very-large-bandwidth network experiment environment, forming a flat L2 network via SINET L2VPN to external universities and the on-campus VLAN. Anticipated research topics for this resource include large-scale parallel data processing using remote computer systems and speculative data caching over a large-bandwidth network.
We provide a visualization subsystem and a visualization software suite, usable from remote environments, for research on very-large-scale information systems. The subsystem is connected to the SGI UV2000 subsystem, and the visualization software suite runs on the UV2000 subsystem. Anticipated research topics for this subsystem include remote visualization processing, VR teleconferencing systems that accelerate brainstorming, and high-performance communication protocols for real-time remote visualization.
Available Resources
《Hardware resources》
1. SGI UV2000 subsystem (1,280 cores, 24 TF, 20 TiB shared memory) + memory-based file system + SSD file system + HDD array (2.9 PB)
2. Fujitsu PRIMERGY CX400 subsystem
3. Visualization subsystem: HMD systems (Canon HM-A1, nVisorST50, Sony HMZ-T2), dome-type display system, 8K display system
4. Network connection up to 40GBASE (with internal-university VLAN and SINET L2VPN configuration)

[Figure: network diagram. SINET5 connects to the border switch (100G; 10G×4 and 40G×2 links) and on to the super-core switch (vPC, 10G/40G links), enabling experiments with very large network bandwidth over a 40GBASE (max) connection and SINET L2VPN. Attached subsystems: Fujitsu PRIMERGY CX400/2550 and CX400/270 (a subsystem for very-large-scale data processing oriented to CPU-core count, with Xeon Phi); SGI UV2000 (1,280 cores (24 TF/s), 20 TiB shared memory; a subsystem for very-large-scale data processing oriented to data-supply speed, with a 512 TB SSD file system and a 2.39 PB InfiniteStorage 17000 HDD array, plus a file-system controller for the visualization system); and a visualization subsystem for very-large-scale information systems (high-resolution display system, 7680 × 4320, 4 m wide × 2.3 m high, 16 tiles; dome-type display system; handy-type and stationary-type HMD systems).]
《Software resources》
1. SGI UV2000 subsystem
【OS】SuSE Linux Enterprise Server
【Compilers】Fortran/C/C++, Python (2.6, 3.5)
【Libraries】MPT (SGI), MKL (Intel), PBS Professional, FFTW, HDF5, NetCDF, scikit-learn, TensorFlow
【Application software】AVS/Express Dev/PCE, EnSightHPC, FieldviewParallel, IDL,
ENVI(SARscape), ParaView, 3DAVSplayer, POV-Ray, NICE DCV(SGI), ffmpeg,
ffplay, LS-prepost, osgviewer, vmd(Visual Molecular Dynamics)
2. Fujitsu PRIMERGY CX400 subsystem
【OS】Red Hat Enterprise Linux
【Compilers】Fujitsu Fortran/C/C++/XPFortran, Intel Fortran/C/C++, Python (2.7, 3.3)
【Libraries】MPI (Fujitsu, Intel), BLAS, LAPACK, ScaLAPACK, SSL II (Fujitsu), C-SSL II, SSL II/MPI, MKL (Intel), NUMPAC, FFTW, HDF5, IPP (Intel)
【Application software】 LS-DYNA, ABAQUS, ADF, AMBER, GAMESS, Gaussian,
Gromacs, LAMMPS, NAMD, HyperWorks, OpenFOAM, STAR-CCM+, Poynting, Fujitsu
Technical Computing Suite 4 (HPC Portal)
3. Visualization subsystem
【Visualization software suite】AVS/Express Dev/PCE, EnsightHPC, 3DAVSPlayer, IDL,
ENVI(SARscape), ParaView, osgviewer, EasyVR, POV-Ray, 3dsMAX, 3-matic, VMD,
ffmpeg, ffplay, LS-prepost
How to use
Remote login with ssh through the login node.
Remote visualization is available via VisServer on the UV2000 subsystem.
File transfer with scp/sftp through the login node.
Job submission and job management through the Fujitsu Technical Computing Suite (HPC Portal).
Email address for inquiring about resource usage and joint research
Very-large-scale numerical calculation field: Takahiro Katagiri (High Performance Computing Division)
Very-large-scale data processing field: Takahiro Katagiri (High Performance Computing Division)
Very-large-bandwidth network technology field: Hajime Shimada (Advanced Networking Division)
Very-large-scale information system field: Masao Ogino (High Performance Computing Division)
Details of anticipated projects
http://www.icts.nagoya-u.ac.jp/en/center/jhpcn/suppl/
Academic Center for Computing and Media Studies, Kyoto University
We provide an infrastructure for collecting large-scale data from laboratory and observation equipment possessed by researchers, via large-capacity networks such as the Kyoto University internal LAN (KUINS) or SINET5 L2VPN, 24 hours a day, 365 days a year; analyzing the data with the supercomputer system in real time or periodically; and offering the results on the Web.
Available resources
《Hardware resources》
・Supercomputer system
Cray XC40 (Camphor 2: Xeon Phi KNL/node): each project is assigned up to 48 nodes throughout the year.
DDN ExaScaler (SFA14K): each project is assigned up to 192 TB.
・Mainframe System Virtual Server Hosting
Virtualized environment: VMWare
Standard Spec: CPU 2 Cores, Memory 8GB, Disk capacity 1TB
《Software resources》
・Supercomputer system
【Compilers】Fortran2003/C99/C++03(Cray, Intel)
【Libraries】MPI, BLAS, LAPACK, ScaLAPACK
・Mainframe System VM Hosting
Standard OS: CentOS7
How to use
・Supercomputer system
Login with SSH (Key authentication)
・Mainframe System VM Hosting
Login with SSH (root authority granted)
Access via various service ports such as HTTP (80/TCP) or HTTPS (443/TCP)
Multiple virtual domains are available
Kyoto University internal LAN (KUINS) VLANs and SINET5 L2VPN can be housed directly in the VM
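As an illustration of offering analysis results over an HTTP service port from the hosted VM, a minimal standard-library sketch follows. The port choice, handler name, and JSON payload are our placeholders, not part of the center's configuration; a production service would of course sit behind the KUINS / SINET5 L2VPN connection described above.

```python
# Minimal sketch: publish analysis results as JSON over HTTP from a hosted VM.
# RESULTS and all names here are illustrative placeholders.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

RESULTS = {"run": "2018-04-01", "status": "ok"}   # stand-in for real output

class ResultsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(RESULTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                 # keep the sketch quiet
        pass

# Port 0 lets the OS pick a free port; a real VM would bind 80 or 443.
server = HTTPServer(("127.0.0.1", 0), ResultsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    print(json.loads(resp.read())["status"])      # ok
server.shutdown()
```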
Email address for inquiring about resource usage and joint research
Cybermedia Center, Osaka University
Our center provides an L2VPN connection service between other sites and the Osaka University testbed network through SINET. It aims to construct an environment for sharing visualization content with our systems, devices, and storage. Please contact us for details of our services, the usage of our resources, or research collaboration.
Email address for inquiring about resource usage and joint research
Details of anticipated projects
http://www.hpc.cmc.osaka-u.ac.jp/en/for_jhpcn/
http://vis.cmc.osaka-u.ac.jp/en/
Research Institute for Information Technology, Kyushu University
We provide a remote visualization and data analytics infrastructure that researchers can use from remote sites. The provided system allows users to process generated large-scale data without moving it, so efficient processing is possible. Besides, where available, L2VPN enables the combined usage of resources between end users and sites. The provided resources are intended for research subjects that visualize and analyze large-scale parallel simulation and/or observation data. Available usage scenarios are batch mode (using the back-end nodes), interactive mode (using the front-end nodes), and in-situ mode (using both the front-end and the back-end nodes simultaneously).
If your generated data does not match the data formats of the provided software, or the supplied system lacks the analysis function you need, consultation is available.
Available resources
Hardware resources
・ Subsystem A, Subsystem B, Standard Frontend (c.f. Attachment 1.)
Software resources
・ 【OS】Linux
・ 【Programming languages】Python, R
・ 【Application software】TensorFlow, OpenFOAM, HIVE (visualization)
How to use
Batch environment
Direct login to the nodes is possible using ssh via the network.
File transfer to/from the nodes is possible using scp/sftp via the network.
Conventional batch usage.
Interactive environment
Login to the front-end node is possible using ssh.
Real-time parallel visualization and data analysis are performed using the visualization applications that run on the front-end. While a job is running on the back-end nodes, an interactive visualization environment is provided through the file information shared between the back-end and the front-end. The interactive rate is assumed to be on the order of several to 0.1 fps, depending on the communication bandwidth and the amount of transferred data.
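The quoted range (several to 0.1 fps) can be reproduced with a back-of-the-envelope bound: the interactive rate is limited by link bandwidth divided by the data transferred per frame. The function name and the example numbers below are ours, purely illustrative:

```python
# Rough upper bound on the interactive frame rate: bandwidth / per-frame data.
# All concrete numbers are illustrative, not measured values from the center.

def interactive_fps(bandwidth_gbps: float, frame_mb: float) -> float:
    """Upper-bound frames per second for a given link and per-frame payload."""
    bytes_per_s = bandwidth_gbps * 1e9 / 8   # Gbit/s -> bytes/s
    frame_bytes = frame_mb * 1e6             # MB -> bytes
    return bytes_per_s / frame_bytes

# 1 Gbps link, 25 MB per frame -> 5 fps ("several")
print(interactive_fps(1.0, 25.0))    # 5.0
# 100 Mbps link, 125 MB per frame -> 0.1 fps (the low end of the range)
print(interactive_fps(0.1, 125.0))   # 0.1
```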
Email address for inquiring about resource usage and joint research
Details of anticipated projects
https://www.cc.kyushu-u.ac.jp/scp/service/jhpcn/jhpcn.html (in Japanese)