Dr. Bill McColl, Huawei Research, France
Abstract. In Big Data and Cloud Computing, many, if not most, of the key design decisions regarding parallelization, data partitioning, load balancing, communications, synchronization, redundancy and fault tolerance are automated. The key objective is to enable developers to produce applications with a minimum of effort, applications that will typically run on low-cost pools of commodity cloud hardware, often virtualized or container-based. This talk describes some of the challenges in bringing high performance to Big Data and Cloud computing, and presents a new approach to this problem. As machine learning moves to the center of computing, and “AI-as-a-Service” becomes a major new business opportunity, this area will become more and more important. If HPC also moves to a new era in which cost-effective “Cloudonomics” matters more, then it will become important there too.
Download slides
Aleksandar Ilic, Universidade de Lisboa, Portugal
Abstract. As architectures evolve towards more complex multi-core designs, deciding which optimizations provide the best tradeoff between performance and efficiency is becoming a prominent issue. To help in this decision process, this tutorial presents a set of fundamental Cache-aware Roofline Models, which characterize the upper bounds on performance, power, energy and energy-efficiency of multi-core architectures. These models evaluate how key micro-architectural aspects, such as accessing different functional units or different memory hierarchy levels, affect the attainable performance, power and efficiency of the processor (also considering different power domains).
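As a minimal illustration of the idea behind roofline modeling (the Cache-aware extension presented in the tutorial refines this basic form for every level of the memory hierarchy), the sketch below bounds attainable performance by the minimum of peak compute throughput and bandwidth times arithmetic intensity; the peak and bandwidth figures are hypothetical placeholders, not measurements of any particular processor.

```python
# Minimal sketch of the classic roofline bound. The peak FLOP/s and
# bandwidth numbers are hypothetical placeholders; the Cache-aware
# Roofline Model extends this idea to each memory hierarchy level.

PEAK_GFLOPS = 500.0   # hypothetical peak compute throughput (GFLOP/s)
BANDWIDTH_GBS = 50.0  # hypothetical sustained memory bandwidth (GB/s)

def attainable_gflops(arithmetic_intensity):
    """Upper bound on performance for a given arithmetic intensity (FLOP/byte)."""
    return min(PEAK_GFLOPS, BANDWIDTH_GBS * arithmetic_intensity)

# Ridge point: intensity at which the bound switches from memory- to compute-limited.
ridge = PEAK_GFLOPS / BANDWIDTH_GBS
print(f"ridge point: {ridge:.1f} FLOP/byte")
for ai in (0.5, 1, 2, 4, 8, 16, 32):
    print(f"AI = {ai:5.1f} FLOP/byte -> at most {attainable_gflops(ai):7.1f} GFLOP/s")
```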
Download slides
Prof. Ali Shoker, INESC TEC & Minho University, Portugal
Abstract. In this tutorial we aim to give students an overview of the three Vs of Big Data (Volume, Velocity and Variety) from the perspective of quality of data. In particular, we introduce each V, discuss its relation to the other Vs, and survey the state-of-the-art techniques in each category. After discussing their tradeoffs, we focus on the quality-of-data perspective, mainly consistency and freshness. Starting from the CAP theorem, we discuss well-known data consistency models such as strong consistency, eventual consistency and causal consistency, together with an overview of relevant real systems in production. After that, we focus on the synchronization-free model and on Conflict-free Replicated Data Types (CRDTs). We explain the motivation and the theory behind them, and give an overview of some datatype designs together with hands-on work. The tutorial concludes with some out-of-the-box data designs for “almost infinite” scalability.
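As a taste of the hands-on part, the sketch below implements one of the simplest state-based CRDT designs, a grow-only counter (G-Counter); the class and method names are illustrative, not taken from any particular CRDT library.

```python
# Minimal sketch of a state-based grow-only counter (G-Counter) CRDT.
# Each replica increments only its own slot; merging takes the element-wise
# maximum, so merges are commutative, associative and idempotent, and all
# replicas converge without synchronization. Names are illustrative.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> increments observed from that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas update concurrently, then exchange state and converge.
a, b = GCounter("A"), GCounter("B")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```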
Download slides
Profs. Domenico Talia and Paolo Trunfio, University of Calabria, Italy
Abstract. Scalable big data analysis today can be achieved by parallel implementations that exploit the computing and storage facilities of HPC systems and Clouds, while in the near future exascale systems will be used to implement extreme-scale data analysis. In a longer-term perspective, new exascale computing infrastructures will emerge as the scalable platforms for big data analytics in the coming decades, and data mining algorithms, tools and applications will be ported to such platforms to implement extreme-scale data discovery solutions. This tutorial introduces and discusses cloud models and frameworks that support the design and development of scalable data mining applications, and discusses the challenges and issues to be addressed for developing data analysis algorithms on extreme-scale systems.
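As a toy illustration of the data-parallel pattern that most such frameworks build on (partition the data, mine each partition independently, merge the partial results), here is a hedged sketch using only the Python standard library; the function names are illustrative, and real frameworks add distribution, fault tolerance and far richer algorithms.

```python
# Toy sketch of the partition / local-mine / merge pattern underlying many
# scalable data analysis frameworks. The "mining" step here is just item
# frequency counting; a real application would run a proper mining
# algorithm on each partition. Standard library only; names illustrative.

from collections import Counter
from multiprocessing import Pool

def mine_partition(records):
    """Local analysis of one data partition (here: frequency counting)."""
    return Counter(records)

def partition(data, n):
    """Split the dataset into n roughly equal partitions."""
    return [data[i::n] for i in range(n)]

if __name__ == "__main__":
    data = ["a", "b", "a", "c", "b", "a"] * 1000
    with Pool(4) as pool:
        partials = pool.map(mine_partition, partition(data, 4))
    model = sum(partials, Counter())  # merge the partial results
    print(model.most_common(3))
```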
Download slides
Prof. Tuan Trinh, Corvinus Business School and Founder of InsurTech Consulting Partners, Hungary
Abstract. This tutorial discusses the role of HPC and Big Data in financial services and technologies. We focus on two prominent areas of recent interest: Distributed Ledger Technology (DLT) and Big Data in financial services. The goal of this tutorial is to give insights into how HPC and Big Data analysis can contribute to the successful transformation and innovation of financial services. Topics include: Distributed Ledger Technologies and Financial Services, Areas of Distributed Ledger Technology applications, HPC issues in Distributed Ledger Technologies, Areas of Big Data applications in Financial Services, Enabling Big Data platforms and engines, Virtual Currencies, Crypto-Currencies and links to HPC, and High Frequency Trading (HFT) and HPC platforms.
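To make the DLT portion concrete, below is a hedged toy sketch of the append-only hash chain at the core of most distributed ledgers; it omits consensus, digital signatures and networking entirely, and all names are illustrative.

```python
# Toy sketch of the append-only hash chain at the heart of most distributed
# ledgers. Real DLTs add consensus, signatures and peer-to-peer replication
# on top of this structure. All names are illustrative.

import hashlib, json, time

def make_block(prev_hash, transactions):
    block = {"prev": prev_hash, "time": time.time(), "txs": transactions}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, ["coinbase"])
chain = [genesis, make_block(genesis["hash"], ["alice->bob: 5"])]
assert verify_chain(chain)
chain[0]["txs"].append("tampered")   # any edit breaks every later link
assert not verify_chain(chain)
```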
Download slides
Drs. Jakob Luettgau and Benson Muite, University of Hamburg and University of Tartu
Abstract. This tutorial discusses benchmarks for parallel computer systems that are useful performance indicators for solving real-world problems. An overview of application HPC benchmarking will be given. Participants will learn about different libraries and about the effects of computer architecture on performance, and, where possible, will measure performance on their own systems in addition to the ones provided for the workshop. There will be two components, one on Fast Fourier transforms and the other on I/O benchmarking. The tutorial will end with a hands-on exercise session in which data will be collected and analyzed in R.
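As a flavor of the FFT component, a minimal single-node timing sketch follows; it uses numpy rather than the dedicated FFT libraries covered in the tutorial, and the 5·N·log2(N) flop estimate is the conventional count for a complex transform of size N.

```python
# Minimal single-node FFT timing sketch using numpy (the tutorial covers
# dedicated FFT libraries). 5*N*log2(N) is the conventional flop estimate
# for a complex transform of size N.

import math, time
import numpy as np

def bench_fft(n, repeats=10):
    x = (np.random.rand(n) + 1j * np.random.rand(n)).astype(np.complex128)
    np.fft.fft(x)                      # warm-up run, excluded from timing
    t0 = time.perf_counter()
    for _ in range(repeats):
        np.fft.fft(x)
    secs = (time.perf_counter() - t0) / repeats
    gflops = 5 * n * math.log2(n) / secs / 1e9
    return secs, gflops

for n in (2**16, 2**20, 2**24):
    secs, gflops = bench_fft(n)
    print(f"N = 2^{int(math.log2(n)):2d}: {secs*1e3:8.3f} ms/transform, ~{gflops:6.2f} GFLOP/s")
```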
Download slides – Muite
Download references – Muite
Dr. Kai Keller, Barcelona Supercomputing Center, Spain
Abstract. In this tutorial, we focus on how to guarantee high reliability for high performance applications running on large infrastructures. In particular, we cover all the technical content necessary to implement scalable multilevel checkpointing for tightly coupled applications. This includes an overview of failure types and frequencies in current data centers, which is instrumental to the efficient deployment of fault tolerance strategies. The tutorial also covers the theoretical analysis necessary to achieve optimal utilization of the computing resources. In addition, we cover the implementation of efficient checkpointing for task-based programming models, such as OmpSs. Moreover, we present the internals of the FTI checkpointing library, to demonstrate how multilevel checkpointing is implemented today. This includes code analysis and execution traces to help the audience grasp the fundamental parts of this technique. Finally, we will have hands-on examples that the audience can analyze on their own laptops, so that they learn how to use FTI in practice and can later transfer that knowledge to their production runs.
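FTI itself exposes a C interface (calls such as FTI_Init, FTI_Protect, FTI_Snapshot and FTI_Finalize) that the hands-on part works with; as a language-neutral preview, the hedged sketch below mimics only the level-selection logic of multilevel checkpointing, where cheap local checkpoints fire often and expensive, more resilient ones fire rarely. The intervals, level names and file destinations are illustrative placeholders, not FTI's configuration.

```python
# Hedged sketch of multilevel checkpoint scheduling: cheaper, less resilient
# levels fire frequently; expensive, more resilient levels fire rarely.
# Intervals and level names below are illustrative placeholders, not FTI's
# actual configuration format.

import pickle

# (interval in iterations, level name), ordered from most to least resilient
LEVELS = [(500, "parallel-file-system"),  # survives whole-system failures
          (100, "partner-node-copy"),     # survives single-node failures
          (20,  "local-storage")]         # survives process crashes only

def checkpoint_level(iteration):
    """Pick the most resilient level whose interval divides the iteration."""
    for interval, name in LEVELS:
        if iteration > 0 and iteration % interval == 0:
            return name
    return None

def run(steps, state):
    for it in range(steps):
        state["sum"] += it                       # stand-in for real computation
        level = checkpoint_level(it)
        if level is not None:
            with open(f"ckpt-{level}.pkl", "wb") as f:
                pickle.dump({"iteration": it, "state": state}, f)

run(1000, {"sum": 0})
```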
Download slides
Download Handouts
__________
The program of the NESUS winter school will also feature a PhD Symposium, where PhD students attending the school will present their PhD research to a panel of senior researchers and to other attendees of the school.