Nell Watson
Why do you care about AI Existential Safety?
AI is a powerful amplifier, one that may be applied to countless purposes. It is therefore a steroid for existential risks, as well as a risk in itself due to misalignment and supernormal stimuli. Protecting the future against the excesses of AI is probably the biggest question of our time, and perhaps the only intellectual domain that truly matters in the long term.
Please give one or more examples of research interests relevant to AI existential safety:
I have been working in the space of AI and AI ethics for many years, having founded a machine vision company and taught machine intelligence for a variety of higher-ed clients, including creating courseware for O'Reilly Media on Convolutional Neural Networks, as well as courseware for IEEE and Coursera on AI ethics. I enjoy outreach and public education in science and policy, and I have given talks and lectures on artificial intelligence and ethics all over the world, on behalf of organizations such as MIT and the World Bank. I co-developed the Certified Ethical Emerging Technologist professional examination for CertNexus and served as an executive consultant philosopher for Apple. I have also initiated several projects: CulturalPeace.org, working to bridge polarization in society through basic ground rules for conflict; Endohazard.org, aiming to better inform the public about which products or components contain endocrine-disrupting chemicals; Pacha.org, exploring how we can manage shifted costs in an intelligent and automated manner; Slana.org, on leveraging entheogenic treatments within conflict zones; and a forthcoming IEEE standard of audio and visual marks denoting whether one is engaging with a human, an AI, or a 'centaur' combination.