2 Responsible artificial intelligence
CEA-List’s responsible AI program—strongly rooted in our history of research on the foundations of AI—brings scientific excellence to two topics vital to our AI strategy: trust and frugality.

Our research on digital trust, one of our main focus areas, grew out of the CEA’s decades of experience with software safety and security. CEA-List researchers working on responsible AI address the issue holistically. This chapter highlights two major research topics: the first is the operational safety and verifiability of AI-based critical systems; the second is the coupling of AI with simulation models in which model imprecisions are an issue—something that is especially relevant to signal processing.
Frugal AI, another challenge, is addressed from a use-case perspective. Here, natural language processing and computer vision stand out as particularly salient AI use cases. Our researchers develop design methods—from data through to algorithms—capable of delivering both performance and frugality. These methods also cover today’s generative-AI-based approaches. The idea is to achieve specialized, high-performing solutions for specific operational situations rather than relying on coarse, generalist ones.

Over the course of the year, we reported five major advances:
- First prize in the EvalLLM text-based information extraction challenge
- Quantitative measurement of uncertainties in artificial-intelligence-guided simulation
- PyRAT wins formal verification competition
- DIOD (Self-Distillation Meets Object Discovery) boosts the performance of unsupervised object discovery in videos
- Generative AI successfully applied to robotic grasping
