Department: Systems Theory and Design
The Systems Theory and Design (THD) department researches systems-engineering methods, procedures, and tools for the requirements analysis of automated and autonomous systems. These new systems also require standardization of their functional properties and of restrictions regarding their quality and economic efficiency, including their integrity, accountability, and certifiability.
Concepts for trust and self-reflection are being formalized. Based on these requirements for new systems, architecture patterns are designed during software development that anchor integrity, self-reflection, and certifiability in system architectures at various levels of abstraction. The architecture patterns are complemented by analysis and optimization methods for the resulting architectures, on the basis of which verification and early validation are carried out.

Comprehensive research in this area will extend already established research approaches, including test methods for virtual integration, trustworthy construction techniques, self-X and fail-safe mechanisms, and modeling for human-machine cooperation. This work will be advanced towards system configuration and variant handling. For the verification phase, verification and validation methods and tools for automated and autonomous systems are being developed, covering both simulation-based statistical methods and formal verification methods. These methods will form the basis for the virtual and physical certification of the new systems in the respective test fields, which are also being developed by this department.
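One family of architecture patterns mentioned above is fail-safe mechanisms. A minimal sketch of a monitor-actuator pattern is shown below; all class names, limits, and fault conditions are illustrative assumptions, not the department's actual designs:

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed: float  # commanded speed in m/s

class NominalController:
    """Hypothetical nominal channel; raises on a detected sensor fault."""
    def compute(self, sensor_ok: bool) -> Command:
        if not sensor_ok:
            raise RuntimeError("sensor fault")
        return Command(speed=10.0)

class SafetyMonitor:
    """Monitor-actuator pattern: supervises the nominal channel and
    enforces a safe envelope, falling back to a safe state on faults."""
    MAX_SAFE_SPEED = 5.0  # illustrative safe-envelope limit

    def __init__(self, controller: NominalController):
        self.controller = controller

    def step(self, sensor_ok: bool) -> Command:
        try:
            cmd = self.controller.compute(sensor_ok)
        except RuntimeError:
            return Command(speed=0.0)  # fail-safe reaction: stop
        if cmd.speed > self.MAX_SAFE_SPEED:
            return Command(speed=self.MAX_SAFE_SPEED)  # clamp to envelope
        return cmd

monitor = SafetyMonitor(NominalController())
print(monitor.step(sensor_ok=True).speed)   # clamped to 5.0
print(monitor.step(sensor_ok=False).speed)  # fail-safe stop: 0.0
```

The point of the pattern is architectural: safety does not depend on the correctness of the complex nominal controller, only on the much simpler monitor.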
Group: System Concepts and Design Methods
The group "System Concepts and Design Methods" develops methods, tools, and processes for the realization of trustworthy systems. One focus is the exploration of novel approaches for comprehensive hazard and risk analysis of automated and autonomous systems. To this end, approaches from different research areas, such as causal modeling, ontologies, and real-time analysis, are combined and further developed to identify potential hazards and investigate their interdependencies. Scenario description languages allow efficient coverage of the complex and often unstructured environment of automated systems and thus form a basis on which safety cases can be built. Architectural patterns and safety mechanisms are being researched to enable the design of systems that can behave safely at all times, even in the presence of faults and unexpected events.
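To illustrate how causal models of hazards and their interdependencies can be evaluated, here is a minimal fault-tree sketch. The structure and all probabilities are invented for illustration and assume independent basic events:

```python
# Illustrative fault tree: a hazard occurs if the software fails,
# or if a sensor fault coincides with a monitor fault.
# All probabilities are made-up per-hour values, assumed independent.
P_SENSOR = 1e-3
P_MONITOR = 1e-2
P_SOFTWARE = 1e-5

def and_gate(*ps: float) -> float:
    """All inputs must fail (independent events): product of probabilities."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(*ps: float) -> float:
    """Any input failing suffices: complement of all-survive."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hazard = (sensor AND monitor) OR software
p_hazard = or_gate(and_gate(P_SENSOR, P_MONITOR), P_SOFTWARE)
print(f"{p_hazard:.2e}")  # roughly 2.00e-05
```

Real hazard analyses go well beyond this by modeling dependent failures and temporal behavior, which is where the causal-modeling and real-time-analysis approaches mentioned above come in.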
Group: Human-Centered Engineering
The research of the "Human-Centered Engineering" group focuses on how to develop systems that people understand and can interact with intuitively. To this end, different methods and techniques of human modeling are being researched. Human models make it possible to demonstrably incorporate findings about the way humans interact with machines into the development process. Models that can serve as "virtual testers" of system designs or as "virtual assistants" play an important role here.
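A very simple example of a human model acting as a "virtual tester" is a driver model that checks whether a takeover-warning design leaves enough distance to stop. The reaction time and deceleration values below are illustrative assumptions, not validated human-performance data:

```python
def stopping_distance(speed_mps: float, reaction_s: float,
                      decel_mps2: float) -> float:
    """Distance travelled during the driver's reaction time
    plus the braking distance at constant deceleration."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def design_is_safe(warning_distance_m: float, speed_mps: float,
                   reaction_s: float = 1.5,     # assumed reaction time
                   decel_mps2: float = 6.0) -> bool:  # assumed braking
    """The 'virtual tester' verdict: can this driver model stop in time?"""
    return stopping_distance(speed_mps, reaction_s, decel_mps2) <= warning_distance_m

# A warning issued 100 m ahead of an obstacle at 20 m/s (72 km/h):
print(design_is_safe(100.0, 20.0))  # 30 m reaction + 33.3 m braking -> True
print(design_is_safe(50.0, 20.0))   # 63.3 m needed, only 50 m -> False
```

In practice such models are far richer (perception, attention, variability across drivers), but the principle is the same: the model evaluates a design before any human trial takes place.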
Group: Evidence for Trustworthiness
The "Evidence for Trustworthiness" group is dedicated to research questions regarding the proof of technical aspects of the trustworthiness of systems, in particular with regard to their safety and security properties. Among other things, research will be conducted into how extremely rare events can be efficiently simulated and how analyses of the effects of errors and attacks in systems can be carried out automatically. Another core topic is proving the validity of the models and simulations used in the analyses in order to establish a consistent chain of reasoning.