Algorithms & Hybrid Solutions
Getting everyone on board - developing AI methods and technologies that are demonstrably safe and secure and that integrate into distributed data and service ecosystems. Social, human-centred and technological AI research and development advance in synergy.
This topic area encompasses research into the comprehensive verifiability of safe, standards-compliant AI and, in particular, its operational security and resistance to attack. We also investigate how such AI can be embedded in safety-critical scenarios, for example in autonomous mobility and logistics. In doing so, we seek a shared trajectory of scientific and technical advancement that also takes social considerations and the potential risks of artificial intelligence into account. Our approach is to develop AI methods and technologies that are either accessible to safety-related verification methods or synthesised verifiably to standards that align with established norms. The objective of this research is to demonstrate the correctness and reliability of AI algorithms and to elucidate the principles underlying their components: how does the algorithm arrive at its solution, and why does it suggest that solution? For robust AI these questions become particularly intricate, and their answers take on special significance, above all in safety-critical scenarios that already involve humans. Our aim is to engage the AI community, as well as its supporters and critics, in a constructive dialogue.
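The "why did the algorithm suggest this solution?" question can be made concrete for the simplest model class. The sketch below shows per-feature contribution analysis for a linear scorer, where every input's influence on the decision is directly readable; the feature names and weights are illustrative assumptions, not values from any DLR system.

```python
# Illustrative sketch: explaining a linear model's decision by listing
# each feature's contribution to the score, largest magnitude first.
# Names and weights are hypothetical, chosen only for demonstration.

def explain_linear_decision(weights, features, names):
    """Return (name, contribution) pairs, sorted by absolute impact."""
    contributions = [(n, w * x) for n, w, x in zip(names, weights, features)]
    contributions.sort(key=lambda c: abs(c[1]), reverse=True)
    return contributions

# Usage: which inputs drove the decision, and by how much?
expl = explain_linear_decision(
    weights=[0.9, -0.4, 0.1],
    features=[1.0, 2.0, 0.5],
    names=["sensor_confidence", "weather_risk", "traffic_density"],
)
```

For non-linear, sub-symbolic models no such direct reading exists, which is precisely why their verifiability and explainability are research questions in this topic area.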
Hybrid solutions are distinguished by their capacity both to draw conclusions from large amounts of data and to incorporate human observations and experience. Data-driven knowledge and human expertise complement each other. Hybrid solutions combine not only humans and AI, but also sub-symbolic (hard or impossible to interpret) and symbolic (readily interpretable) AI methods and technologies, in order to synthesise safe and explainable systems.
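One way such a combination can be structured is sketched below: a sub-symbolic component (here a toy scorer standing in for a trained model) proposes a decision, and an interpretable symbolic rule layer checks the proposal against named safety constraints and can veto it. All names, weights, rules and thresholds are illustrative assumptions, not a description of any specific DLR system.

```python
# Minimal sketch of a hybrid (neuro-symbolic) decision pipeline:
# sub-symbolic proposal + symbolic safety veto. All values illustrative.

def subsymbolic_score(features):
    """Stand-in for a learned model: weighted sum squashed to (0, 1)."""
    weights = [0.8, -0.5, 0.3]  # hypothetical "learned" weights
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + 2.718281828459045 ** (-z))

SYMBOLIC_RULES = [
    # (rule name, predicate over features) -- every rule must hold
    ("speed_within_limit", lambda f: f[0] <= 1.0),
    ("obstacle_clearance", lambda f: f[1] >= 0.2),
]

def hybrid_decide(features, threshold=0.5):
    """Combine the sub-symbolic proposal with symbolic safety checks."""
    proposal = subsymbolic_score(features) >= threshold
    violated = [name for name, rule in SYMBOLIC_RULES if not rule(features)]
    if violated:
        # The symbolic layer vetoes: an explainable refusal naming the rules.
        return False, "vetoed by rules: " + ", ".join(violated)
    return proposal, "approved by model and rules" if proposal else "model declined"
```

The design point is that the veto path is fully interpretable: a refusal always names the violated rules, regardless of how opaque the sub-symbolic scorer is.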
In the development of algorithms, safety and security by design is the concept that guides our efforts. Attack resistance and operational safety must be treated as an essential core quality from the outset and throughout the entire process, particularly because at DLR we primarily conduct research for demanding application classes. AI must therefore be capable of meeting the especially safety-critical requirements common in aeronautics, space, energy and transport. Concrete examples of our work include the development of automated transport systems, production and handling solutions in logistics centres, and manufacturing processes. In these projects, the permanent interface with humans as a cooperative element must in particular be safeguarded. The objective is to integrate AI into the existing process chain reliably and robustly, while harmonising human and technological development.
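The idea of safeguarding the permanent human interface by design can be illustrated as a runtime monitor through which every control command must pass before execution. The zone thresholds and the clamping policy below are hypothetical assumptions for illustration, not parameters of any real logistics system.

```python
# Hedged sketch of "safety by design" as a runtime monitor: a speed
# command is clamped according to the distance (metres) to the nearest
# human before it ever reaches the actuators. Thresholds illustrative.

def monitor_command(speed_cmd, human_distance,
                    max_speed=2.0, slow_zone=1.5, stop_zone=0.5):
    """Return a safe speed given the requested speed and human distance."""
    if human_distance < stop_zone:
        return 0.0                        # inside stop zone: halt immediately
    if human_distance < slow_zone:
        # scale the permitted speed linearly down toward zero in the slow zone
        scale = (human_distance - stop_zone) / (slow_zone - stop_zone)
        return min(speed_cmd, max_speed * scale)
    return min(speed_cmd, max_speed)      # normal operation, capped at max
```

Because the monitor sits between the AI and the actuators, its safety guarantee holds independently of how the upstream AI component computed its command.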