
AI Researcher Seth Herd

Published:
September 30, 2016
Author:
Revathi Kumar


AI Safety Research




Seth Herd

Senior Research Associate, Psychology and Neuroscience

University of Colorado

seth.herd@colorado.edu

Project: Stability of Neuromorphic Motivational Systems

Amount Recommended: $98,400





Project Summary

We are investigating the safety of possible future advanced AI that uses the same basic approach to motivated behavior as the human brain. Neuroscience has given us a rough blueprint of how the brain directs its behavior based on its innate motivations and its learned goals and values. This blueprint may be used to guide advances in artificial intelligence, producing AI that is as intelligent and capable as humans, and soon after, more intelligent. While it is impossible to predict how long this progress might take, it is equally impossible to rule out that it will happen quickly. Rapid progress in practical applications is producing rapid increases in funding from commercial and governmental sources. Thus, it seems critical to understand the potential risks of brain-style artificial intelligence before it is actually achieved. We are testing our model of brain-style motivational systems in a highly simplified environment, to investigate how its behavior may change as it learns and becomes more intelligent. While our system is not capable of performing useful tasks, it serves to investigate the stability of such systems when they are integrated with the powerful learning systems currently being developed and deployed.

Technical Abstract

We apply a neural network model of human motivated decision-making to an investigation of the risks involved in creating artificial intelligence with a brain-style motivational system. This model uses relatively simple principles to produce complex, goal-directed behavior. Because of the potential utility of such a system, we believe that this approach may see common adoption, and that it carries significant risks. Such a system could provide the motivational core of efforts to create artificial general intelligence (AGI), with the advantage of leveraging the wealth of knowledge, already available and rapidly accumulating, on the neuroscience of mammalian motivation and self-directed learning. We employ this model, and non-biological variations on it, to investigate the risks of employing such systems in combination with the powerful learning mechanisms currently being developed. We investigate the issues of motivational and representational drift. Motivational drift captures how a system may change the motivations it is initially given and trained on; representational drift refers to the possibility that sensory and conceptual representations will change over the course of training. We investigate whether learning in these systems can be used to produce a system that remains stable and safe for humans as it develops greater intelligence.
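To make the notion of motivational drift concrete, here is a minimal, purely illustrative sketch (it is not the project's neural network model, and all names and numbers are invented): the agent's motivation vector is itself nudged by ongoing learning, and we track how far it wanders from the motivations it was initially given.

```python
# Illustrative sketch of "motivational drift" (not the project's model): learning
# updates the motivation weights themselves, so noise in outcomes can accumulate
# into a lasting departure from the initial motivations.
import numpy as np

rng = np.random.default_rng(0)

initial_motivation = np.array([1.0, 0.5, 0.2, 0.1])   # hypothetical drive weights
motivation = initial_motivation.copy()
learning_rate = 0.01
drift_history = []

for step in range(1000):
    # Outcomes correlate only noisily with the current motivations.
    outcome = rng.normal(loc=motivation, scale=0.5)
    # Learning nudges the motivation weights toward recent outcomes; this is
    # the mechanism through which drift can accumulate.
    motivation += learning_rate * (outcome - motivation)
    # Drift = distance from the motivation vector the agent started with.
    drift_history.append(float(np.linalg.norm(motivation - initial_motivation)))

print(f"drift after 1000 steps: {drift_history[-1]:.3f}")
```

Representational drift could be tracked analogously, by measuring how learned sensory and conceptual features move over the course of training; the project's question is whether training regimes exist under which such drift stays bounded and the system remains safe.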




Ongoing Projects/Recent Progress

  1. Risks of Brain-Style AI – Herd and Jilk
    • Discussions centered on how the human and mammalian motivational system works, and how a roughly analogous system would work in an AGI. They settled on the position that the motivational system is relatively well understood, making it quite plausible that those theories will be used as the basis for an AGI system. One important consideration is that most work on AGI safety assumes a goal-maximizing motivational system, whereas the mammalian system operates very differently (a minimal contrast is sketched after this list). If these researchers are correct that a brain-style system is a likely choice for practical reasons, the safety of such a system deserves a good deal more focused thought.
    • These researchers plan to continue theoretical work in the coming project years, and to publish those theories along with the results of the empirical work planned for the next two years. As planned, this year’s activities laid the theoretical basis for the upcoming computational work. The year was also a necessary maturation period for the base model, which is being developed for other purposes (Understanding the neural bases of human decision-making).
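As a rough, hypothetical illustration of the distinction noted above (none of the actions, drives, or numbers below come from the project), a fixed utility maximizer ranks actions by the same weights regardless of the agent's state, whereas a crude drive-based selector re-weights actions by each drive's current deficit, so its effective goal shifts with internal state:

```python
# Hedged, toy contrast between a fixed utility maximizer and a drive-based
# selector; the actions, drives, and numbers are invented for illustration.
from typing import Dict

# How strongly each action affects each drive (illustrative values).
ACTIONS: Dict[str, Dict[str, float]] = {
    "eat":   {"food": 1.0,  "rest": -0.2},
    "sleep": {"food": -0.1, "rest": 1.0},
}

def utility_maximizer(weights: Dict[str, float]) -> str:
    # Always picks the action with the highest fixed-weight utility.
    return max(ACTIONS, key=lambda a: sum(weights[k] * v for k, v in ACTIONS[a].items()))

def drive_based(state: Dict[str, float], setpoints: Dict[str, float]) -> str:
    # Each drive's urgency grows with its deficit from a setpoint, so the
    # ranking of actions changes as the internal state changes.
    urgency = {k: max(setpoints[k] - state[k], 0.0) for k in setpoints}
    return max(ACTIONS, key=lambda a: sum(urgency[k] * v for k, v in ACTIONS[a].items()))

print(utility_maximizer({"food": 1.0, "rest": 0.5}))                        # 'eat', regardless of state
print(drive_based({"food": 0.9, "rest": 0.1}, {"food": 1.0, "rest": 1.0}))  # 'sleep' when the rest deficit dominates
```

The safety-relevant point of the contrast, as the discussion above notes, is that analyses built around a fixed goal-maximizer may not transfer directly to a drive-based, brain-style system.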

This content was first published at futureoflife.blackfin.biz on September 30, 2016.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.
