Annachiara Ruospo
Why do you care about AI Existential Safety?
Ensuring the safety of AI models is currently one of the most crucial and pressing requirements. While AI remains one of the most promising technologies of the future, the potential risks are worrying and limit its adoption to strictly controlled environments. As a researcher, I feel the responsibility to direct our efforts toward preventing catastrophic situations from happening. We are the ones primarily responsible for the AI we create. We will not end up with out-of-control AI if, together, we set up guidelines to avoid it.
Please give at least one example of your research interests related to AI existential safety:
The primary focus of my research is on AI models and on the assessment and improvement of their reliability. During my PhD, this topic was addressed comprehensively, at very different abstraction levels and from different perspectives. A key contribution of my research activities is the proposal of different fault injection tools and methodologies to ease and support the reliability assessment process. Next, relying on these analyses and results, I have proposed strategies to detect and mitigate the effects of faults. Nowadays, understanding how to reliably deploy AI models in safety-critical systems is becoming crucial. Indeed, my research findings show that artificial neural networks, although inspired by the human brain, cannot be considered inherently resilient. Their safety must be evaluated, preferably in conjunction with the hardware running the AI model. To improve the safety of AI models, a starting point might be to explain the behaviour of such predictive models: to comply with safety standards it is crucial to understand the reasons behind their choices, and to move beyond the black-box view of artificial neural networks.
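As a rough illustration of what a weight-level fault injection experiment can look like, the sketch below flips a single bit in the IEEE-754 encoding of one weight of a small network and checks whether the prediction changes. This is a minimal, generic example: the toy network, the single-bit-flip fault model, and the "prediction changed" criticality criterion are illustrative assumptions, not the specific tools or methodologies mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network (assumed for illustration): 8 inputs -> 16 hidden -> 4 outputs.
W1 = rng.standard_normal((8, 16)).astype(np.float32)
W2 = rng.standard_normal((16, 4)).astype(np.float32)

def forward(x, w1, w2):
    """Plain ReLU MLP forward pass."""
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 binary32 encoding of a float32 scalar."""
    as_int = np.float32(value).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

x = rng.standard_normal((1, 8)).astype(np.float32)
golden = forward(x, W1, W2)          # fault-free reference output

# Inject a single-bit fault into one randomly chosen weight of W1.
i = rng.integers(W1.shape[0])
j = rng.integers(W1.shape[1])
bit = rng.integers(32)               # flips in the exponent bits tend to be the most damaging
W1_faulty = W1.copy()
W1_faulty[i, j] = flip_bit(W1_faulty[i, j], bit)

faulty = forward(x, W1_faulty, W2)

# Here a fault counts as "critical" if it changes the predicted class (top output).
print("flipped bit %d of W1[%d, %d]" % (bit, i, j))
print("max output deviation:", float(np.abs(faulty - golden).max()))
print("prediction changed:", golden.argmax() != faulty.argmax())
```

Repeating this injection over many weights, bits, and inputs gives a statistical picture of which parts of the model are most vulnerable, which is the kind of analysis that detection and mitigation strategies can then build on.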