Resilience

A Must for intelligent Cyber-Physical Systems

Figure: Intelligent Cyber-Physical System of Systems (iCPSoS). Source: DLR

Resilience is a key concept that promises an answer to a rapidly growing attack surface. Resilient systems are designed to respond to malicious or accidental impairments and to restore the maximum achievable system function. The project Resilience of Intelligent Cyber-Physical Systems (short: Resilience) takes a holistic approach to achieving resilience: resilience mechanisms are implemented, integrated and tested at both the algorithmic and the system level, using an onboard avionics use case as well as a human-in-the-loop use case.

The Institute for AI Safety and Security leads one of the main work packages, called Engineering resilient AI-based iCPSoS. Together with the Institute of Flight Systems Technology, we also handle the overall project management. The project runs for three years, from 2022 to 2024.

A resilient AI-based intelligent Cyber-Physical System of Systems (iCPSoS) is characterized by its ability to maintain central system functions even in atypical situations, in the event of (partial) failures of technical components of the overall system, or under attack. If full functionality cannot be maintained, the system should at least degrade in a controlled, stepwise manner, for example by forgoing efficiency or integrating additional resources, while guaranteeing unrestricted security at all times.
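
To make the idea of controlled successive degradation concrete, the following minimal Python sketch shows one possible scheme: a mode selector that steps down through increasingly restricted operating modes as component failures or suspected attacks accumulate. The mode names, health attributes and thresholds are illustrative assumptions, not the mechanisms implemented in the project.

    from dataclasses import dataclass
    from enum import IntEnum

    class Mode(IntEnum):
        # Operating modes, ordered from full function to safe fallback.
        NOMINAL = 0    # all functions available
        REDUCED = 1    # forgo efficiency, keep central functions
        MINIMAL = 2    # central functions only, extra resources engaged
        SAFE_STOP = 3  # controlled stop; safety remains guaranteed

    @dataclass
    class HealthReport:
        # Simplified health snapshot of the overall system (assumed fields).
        failed_components: int
        attack_suspected: bool

    def select_mode(health: HealthReport) -> Mode:
        # Pick the least restricted mode the current health still permits,
        # so functionality degrades step by step rather than all at once.
        if health.attack_suspected and health.failed_components > 1:
            return Mode.SAFE_STOP
        if health.attack_suspected or health.failed_components > 1:
            return Mode.MINIMAL
        if health.failed_components == 1:
            return Mode.REDUCED
        return Mode.NOMINAL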

Contribution by the Institute for AI Safety and Security

To achieve these goals, the Institute for AI Safety and Security focuses on two main topics (sub-work packages):

  • Development of an approach to formalize the Operational Design Domain (ODD), understood here as the reference environment or reference system of an iCPSoS, from which the requirements and relevant situations that the system must fulfil and handle can be derived. Our goal is a methodology for developing the ODD concept that can be transferred to other domains in which autonomous systems are deployed. The ODD marks the boundary for establishing safe and secure testing environments at higher automation levels (a minimal sketch follows after this list).
  • Deriving requirements for AI-based components in the context of safety proofs and safety/security-by-design approaches. These requirements are necessary to develop the trustworthy safety-critical systems of the future.
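
As an illustration of what such a formalization could look like, the sketch below encodes a hypothetical ODD as machine-readable ranges over environmental attributes and checks whether a measured situation lies within it. The attribute names and bounds (wind_speed_mps, visibility_m, gnss_accuracy_m) are invented for this example and do not reflect the project's actual ODD model.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Range:
        # Closed interval of admissible values for one ODD attribute.
        low: float
        high: float

        def contains(self, value: float) -> bool:
            return self.low <= value <= self.high

    # Hypothetical ODD for an urban-air-mobility vehicle: each entry
    # bounds one environmental attribute the system is designed to handle.
    ODD = {
        "wind_speed_mps": Range(0.0, 12.0),
        "visibility_m": Range(800.0, float("inf")),
        "gnss_accuracy_m": Range(0.0, 3.0),
    }

    def within_odd(situation: dict) -> bool:
        # A situation is covered only if every ODD attribute is present
        # and inside its admissible range; anything else must be treated
        # as outside the design domain.
        return all(
            name in situation and rng.contains(situation[name])
            for name, rng in ODD.items()
        )

    # Example: strong wind pushes the situation outside the ODD.
    print(within_odd({"wind_speed_mps": 15.0, "visibility_m": 2000.0,
                      "gnss_accuracy_m": 1.5}))  # prints False

A runtime monitor built on such a check can then trigger the controlled degradation sketched above whenever the system leaves its design domain.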

Furthermore, our institute works on three further sub-work packages:

  • Application in the context of the use case of a resilient avionics architecture for urban air mobility
  • Presentation of the method portfolio for safety verification and safety/security-by-design
  • Exemplary application to selected AI components and re-build of selected elements

All Resilience main work packages and the DLR institutes involved

  • Technology integration and demonstration: Institute of Flight Systems Technology
  • Engineering resilient AI-based iCPSoS: Institute for AI Safety and Security
  • Robustness and runtime monitoring: Institute of Data Science, Institute for Software Technology
  • Response and Recovery: Institute of Flight Systems Technology, Institute for Software Technology
  • Human-in-the-Loop Resilience: Institute of Flight Guidance

Future cyber-physical systems (CPS) are intelligent, highly connected and autonomous. They use advanced technologies such as AI-supported decision-making and machine learning to extend their perception. However, the leap towards this innovation comes with significant obstacles. Breaking with previous practices removes many of the pitfalls of the process-centricity that prevailed before. To ensure robust operation with minimized security risks despite increasing system complexity, our understanding of safety and security must undergo a paradigm shift. To this end, the Institute for AI Safety and Security works on exhaustively describing the environment in which an AI system is deployed and on developing AI systems in a transparent and modular way.

Contact

Dr.-Ing. Sven Hallerbach

Head of Department
German Aerospace Center (DLR)
Institute for AI Safety and Security
AI Engineering
Wilhelm-Runge-Straße 10, 89081 Ulm
Germany

Karoline Bischof

Consultant Public Relations
German Aerospace Center (DLR)
Institute for AI Safety and Security
Business Development and Strategy
Rathausallee 12, 53757 Sankt Augustin
Germany