New H2020 Project DAPHNE (Integrated Data Analysis Pipelines for Large-Scale Data Management, HPC, and Machine Learning)
The working group Data Management Technologies is joining the international consortium of the DAPHNE project to work on integrated data analysis pipelines for data-intensive machine learning applications, together with data management, machine learning, and high-performance computing experts in Europe. The focus of the Data Management Technologies working group will be on leveraging modern “Computational Storage” technologies for data-intensive machine learning workloads.
The DAPHNE project aims to define and build an open and extensible system infrastructure for integrated data analysis pipelines, including data management and processing, high-performance computing (HPC), and machine learning (ML) training and scoring. Key observations are that (1) systems in these areas share many compilation and runtime techniques, (2) there is a trend towards complex data analysis pipelines that combine these systems, and (3) the underlying, increasingly heterogeneous hardware infrastructure is converging as well. Yet, the programming paradigms, cluster resource management, and data formats and representations still differ substantially. With a joint consortium of experts from the data management, ML systems, and HPC communities, the project therefore systematically investigates the system infrastructure, language abstractions, compilation and runtime techniques, and systems and tools necessary to increase productivity when building such data analysis pipelines and to eliminate unnecessary performance bottlenecks.
Further information can be found on the project web page: https://daphne-eu.github.io/