KEYNOTES

The International Conference on High Performance Computing & Simulation

(HPCS 2015)

The Thirteenth Annual Meeting

July 20 – 24, 2015

The Hilton Amsterdam Hotel

Amsterdam, the Netherlands

http://hpcs2015.cisedu.info or http://cisedu.us/rp/hpcs15

HPCS 2015 KEYNOTES

Tuesday Keynote: Architecture-aware Algorithms and Software

for Peta and Exascale Computing

Jack Dongarra

University of Tennessee and Oak Ridge National Lab, Tennessee, USA

and University of Manchester, U.K.

Wednesday Keynote: The Accelerated Cloud

Marc Hamilton

Vice President, Solutions Architecture and Engineering, NVIDIA, California, USA

Thursday Keynote: DAS-5: Harnessing the Diversity of Complex e-Infrastructures

Henri E. Bal

Faculty of Sciences, Dept. of Computer Science, Vrije Universiteit, Amsterdam, The Netherlands

Thursday Keynote II

& Closing Plenary: EGI-Engage: Towards an Open Science Commons

Yannick Legré

Director of EGI.eu, Amsterdam, The Netherlands

HPCS 2015 PLENARY SPEAKERS

Plenary I: Revisiting Co-Scheduling for Upcoming ExaScale Systems

Stefan Lankes

RWTH Aachen University, Germany

Plenary II: Opportunistic Vehicular Networking

Lars Wolf

Technische Universität Braunschweig, Germany

______________________________________________________________________

Tuesday Keynote: Architecture-aware Algorithms and Software

for Peta and Exascale Computing

Jack Dongarra

University of Tennessee and Oak Ridge National Lab, Tennessee, USA

and University of Manchester, U.K.

ABSTRACT

In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

We will look at five areas of research that will have an important impact on the development of software and algorithms. We will focus on the following themes:

• Redesign of software to fit multicore and hybrid architectures

• Automatically tuned application software

• Exploiting mixed precision for performance

• The importance of fault tolerance

• Communication-avoiding algorithms
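The mixed-precision theme above can be illustrated with a minimal sketch of iterative refinement, a technique long associated with Dongarra's group: run the expensive solve in cheap low precision, then recover full accuracy with residual corrections computed in high precision. The sketch below simulates single precision in plain Python via `struct`; the 2x2 system and all helper names are illustrative, not taken from the talk.

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Illustrative 2x2 system (not from the talk): A x = b, with A symmetric
# positive definite so a low-precision solve is stable enough to refine.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

def solve2_lowprec(A, rhs):
    """Solve the 2x2 system by Cramer's rule, rounding every intermediate
    result to single precision -- this stands in for the expensive
    factorization that mixed-precision schemes run in reduced precision."""
    det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
    x0 = f32(f32(f32(rhs[0] * A[1][1]) - f32(rhs[1] * A[0][1])) / det)
    x1 = f32(f32(f32(A[0][0] * rhs[1]) - f32(A[1][0] * rhs[0])) / det)
    return [x0, x1]

x = solve2_lowprec(A, b)          # initial solution, accurate to ~1e-7
for _ in range(3):                # iterative refinement
    # residual in full (double) precision -- the only high-precision step
    r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    d = solve2_lowprec(A, r)      # correction, again in low precision
    x = [x[i] + d[i] for i in range(2)]
# x now agrees with the double-precision answer (1/11, 7/11)
```

The point of the pattern is that the costly solve runs entirely in the fast low precision, while only the cheap residual computation uses high precision, which is how mixed-precision schemes recover full accuracy at reduced-precision speed.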

_____________________________________________________________________________

Wednesday Keynote: The Accelerated Cloud

Marc Hamilton

Vice President, Solutions Architecture and Engineering, NVIDIA, California, USA

ABSTRACT

Cloud computing and HPC have both seen rapid growth since Amazon launched AWS in 2006. Back in June 2006, the fastest supercomputer on the TOP500 list was an IBM BlueGene system at Lawrence Livermore National Laboratory, clocking in at 280 TFLOPS. In November 2014, the Tianhe-2 system topped the list at 33,862 TFLOPS, an increase of roughly 120x. While AWS does not disclose the exact number of servers in its cloud, its growth rate is estimated to be of a similar order of magnitude. However, the amount of HPC work done in the cloud has grown much more slowly, in part because of the limited adoption of GPUs and accelerated computing by the major public cloud providers. That is now starting to change rapidly, due to the convergence of three trends: the huge amounts of unstructured big data stored in public clouds over the last 10 years, the compute density of new GPUs, and the rapid adoption of deep-neural-network-based approaches to machine learning. As just one example, Google has publicly claimed to be running over 40 GPU-accelerated applications utilizing thousands of GPUs on its internal cloud. The increased use of GPU-enabled deep learning applications in the cloud will also benefit traditional scientific computing HPC users and is expected to drive a major shift of HPC workloads to the cloud over the rest of this decade.

_____________________________________________________________________________

Thursday Keynote: DAS-5: Harnessing the Diversity of Complex e-Infrastructures

Henri E. Bal

Faculty of Sciences, Dept. of Computer Science, Vrije Universiteit, Amsterdam, The Netherlands

ABSTRACT

Modern e-infrastructures are typically distributed facilities of increasing diversity (e.g., they may contain a variety of accelerators). This diversity makes the infrastructures complex and difficult for scientists to use. Nonetheless, many applications need to harness the power of these diverse distributed infrastructures to run large-scale simulations or to address big data processing problems. Handling the complexity and diversity of distributed infrastructures is thus a key research challenge. The Dutch ASCI research school has recently set up the fifth generation of its Distributed ASCI Supercomputer (DAS-5) to address this challenge. DAS-5 is based on the same concept as its four successful predecessors: a collection of six cluster computers located at different institutes and integrated into a single shared testbed. The system will be used for experimental computer science research in ASCI (Advanced School for Computing and Imaging), the Netherlands eScience Center, the Dutch ICT program COMMIT, and the ASTRON radio astronomy institute.

This presentation will first discuss the history, organization, and impact of the DAS project as a whole. During the past 18 years, DAS has been used for numerous award-winning projects and over 100 Ph.D. theses. It will then zoom in on some typical projects for which DAS-5 was designed and that aim to ease the programming of complex heterogeneous systems. MCL is a system for programming many-core accelerators using a new methodology, "stepwise refinement for performance," that integrates hardware descriptions into the programming model. MCL allows programmers to work on multiple levels of abstraction, under the direction of a compiler. Cashmere combines MCL with a divide-and-conquer programming model to ease the programming of clusters of heterogeneous many-core devices. Glasswing is a novel MapReduce framework on top of OpenCL that efficiently uses the resources of heterogeneous cluster environments.
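For readers unfamiliar with the programming model Glasswing implements, the sketch below runs a word-count job through the classic map, shuffle, and reduce phases sequentially in plain Python. In Glasswing the map and reduce phases would instead execute as OpenCL kernels across heterogeneous devices; every name here is illustrative, not part of the Glasswing API.

```python
from collections import defaultdict
from itertools import chain

def map_fn(line):
    # map phase: emit one (key, value) pair per word occurrence
    for word in line.split():
        yield (word.lower(), 1)

def reduce_fn(key, values):
    # reduce phase: combine all values emitted for one key
    return (key, sum(values))

def run_mapreduce(inputs, map_fn, reduce_fn):
    # In a real framework each input split is mapped in parallel;
    # here we simply chain the per-split map outputs together.
    intermediate = chain.from_iterable(map_fn(x) for x in inputs)
    # shuffle: group intermediate pairs by key
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # reduce each key group (also parallel per key in a real framework)
    return dict(reduce_fn(k, vs) for k, vs in sorted(groups.items()))

counts = run_mapreduce(["to be or not to be"], map_fn, reduce_fn)
# counts == {"be": 2, "not": 1, "or": 1, "to": 2}
```

The appeal of the model for heterogeneous clusters is that map and reduce are pure per-record and per-key functions, so a framework is free to schedule them on whatever CPU or accelerator resources are available.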

_____________________________________________________________________________

Thursday Keynote

& Closing Plenary: EGI-Engage: Towards an Open Science Commons

Yannick Legré

Director of EGI.eu, Amsterdam, The Netherlands

ABSTRACT

In this talk, we will explain how EGI-Engage, an EC action funded under the H2020 Work Programme 2014-2015, will push the boundaries of EGI, and of European e-Infrastructures in general, by implementing a number of big strategic shifts.

In EGI-Engage, community engagement reaches out for the first time to eight new international research collaborations supporting the ESFRI roadmap, and it embraces a completely new organization of user support: the Distributed Competence Centre, which brings together EGI participants, user communities, and technology providers.

EGI will reinforce its business engagement by unlocking the value of big data in a number of disciplinary areas. The technical roadmap will advance through the development of an Open Data Platform that brings together federated open data with grid and cloud computing for scalable big data access.

Through EGI-Engage, EGI will accelerate the implementation of the Digital ERA by advancing towards an Open Science Commons, the vision through which:

"Researchers from all disciplines have easy, integrated and open access to the advanced digital services, scientific instruments, data, knowledge and expertise they need to collaborate and achieve excellence in science, research and innovation. They feel engaged in governing, managing and preserving these resources for everyone’s benefit, with the support of all stakeholders."