
Scott Emmons
University of California, Berkeley
Why do you care about AI Existential Safety?
The COVID-19 pandemic shows how important it is to plan ahead for catastrophic risks.
Please give one or more examples of research interests relevant to AI existential safety:
I’ve done work on the game theory of value alignment and the robustness of reinforcement learning.