November 19, 2024

AI Biennale 2024

Panel discussion: ‘Technological progress and social consequences - How do we achieve trustworthy AI?’
From left: Dr. Jan Hofmann (Telekom), Dr. Arne Raulf (DLR), Julia Eisentraut (MdL, Grüne), Dr. Cedric Janowicz (DLR Projektträger), Prof. Frank Köster (DLR)

Science-led AI development: the key to trustworthy systems

Artificial intelligence (AI) has long been more than just a topic for the future: it already permeates almost all areas of life. At the AI Summit of the AI Biennale 2024, high-ranking representatives from politics, business and research discussed how we can make AI systems not only efficient but also trustworthy.

Europe focusses on scientific quality

The panel discussion, moderated by Prof Frank Köster, Founding Director of the DLR Institute for AI Safety, showed that Europe is pursuing a distinctive approach to AI progress. While other regions focus primarily on rapid growth, Europe concentrates on mathematically sound systems and integrates safety aspects from the outset.
Julia Eisentraut, spokesperson for science, digitalisation and data protection, emphasised the strengths of the European approach: the high density of research and the consistent focus on quality, especially in avoiding bias, are decisive advantages.

Transparency and ethical dimensions

Dr Arne Raulf from the Institute for AI Safety emphasised Europe's leading role in creating transparent and fair data ecosystems.
Dr Jan Hofmann from Deutsche Telekom stressed that success lies in the intelligent combination of humans and machines.
Dr Cedric Janowicz from the DLR Project Management Agency shed light on the social dimension: given the profound impact of AI on our understanding of democracy, an intensive examination of its ethical implications is essential.

The path to trustworthy AI

A clear consensus emerged: the approach to AI must be guided by science and continuously adapt to the rapid pace of technological development. Only through consistent ‘human oversight’ can we ensure that our values are incorporated into AI systems.