
Sumeet Motwani
University of California, Berkeley
Why do you care about AI Existential Safety?
AI poses a significant existential risk to humanity, but it also offers significant benefits. Ensuring safety as the field of AI progresses secures a future where we can rely on AI to solve some of the world's most important problems while it remains a vital part of our daily lives.
Please give one or more examples of research interests relevant to AI existential safety:
I’m currently working on topics such as power-seeking AI, AI alignment, and ML for security.