Charlie Steiner
Independent
Why do you care about AI Existential Safety?
It’s a rich vein of interesting philosophical and technical problems that also happens to be urgently vital for realizing the long-term potential of the human race.
Please give one or more examples of research interests relevant to AI existential safety:
I’m interested in making conceptual progress on the problem of value learning, and in translating that progress into experiments that can be carried out today with language models or model-based reinforcement learning. One example of a conceptual question is how to translate values and policies between different learned ontologies.