José Hernández-Orallo
Why do you care about AI Existential Safety?
I care about AI Existential Safety because we are starting to explore new kinds of intelligence. These new types of intelligence may be very different from us, and may challenge our conception of our own species and place it in a broader, Copernican context. To use the power that AI will bring more responsibly, we need to better understand what kinds of intelligence we can create and what their capabilities and behaviour really mean.
Please give one or more examples of research interests relevant to AI existential safety:
I do research on the evaluation of AI capabilities, as determining what AI systems can and cannot do is key to understanding their possibilities and risks. I’m also interested in how humans and AI systems may interact in the future, especially as systems with higher generality become more ubiquitous, and in what the future of cognition may look like.