All Podcast Episodes

Daniela and Dario Amodei on Anthropic

Published
March 4, 2022

Topics discussed in this episode include:

  • Anthropic's mission and research strategy
  • Recent research and papers by Anthropic
  • Anthropic's structure as a "public benefit corporation"
  • Career opportunities

 

Watch the video version of this episode here

Careers at Anthropic

Anthropic's Transformer Circuits research 

Follow Anthropic on Twitter

microCOVID Project

Follow Lucas on Twitter here

0:00 Intro

2:44 What was the intention behind forming Anthropic?

6:28 Do the founders of Anthropic share a similar view on AI?

7:55 What is Anthropic's focused research bet?

11:10 Does AI existential safety fit into Anthropic's work and thinking?

14:14 Examples of AI models today that have properties relevant to future AI existential safety

16:12 Why work on large scale models?

20:02 What does it mean for a model to lie?

22:44 Safety concerns around the open-endedness of large models

29:01 How does safety work fit into race dynamics to more and more powerful AI?

36:16 Anthropic's mission and how it fits into AI alignment

38:40 Why explore large models for AI safety and scaling to more intelligent systems?

43:24 Is Anthropic's research strategy a form of prosaic alignment?

46:22 Anthropic's recent research and papers

49:52 How difficult is it to interpret current AI models?

52:40 Anthropic's research on alignment and societal impact

55:35 Why did you decide to release tools and videos alongside your interpretability research?

1:01:04 What is it like working with your sibling?

1:05:33 Inspiration around creating Anthropic

1:12:40 Is there an upward bound on capability gains from scaling current models?

1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI?

1:21:10 Bootstrapping models

1:22:26 How does Anthropic see itself as positioned in the AI safety space?

1:25:35 What does being a public benefit corporation mean for Anthropic?

1:30:55 Anthropic's perspective on windfall profits from powerful AI systems

1:34:07 Issues with current AI systems and their relationship with long-term safety concerns

1:39:30 Anthropic's plan to communicate its work to technical researchers and policy makers

1:41:28 AI evaluations and monitoring

1:42:50 AI governance

1:45:12 Careers at Anthropic

1:48:30 What it's like working at Anthropic

1:52:48 Why hire people of a wide variety of technical backgrounds?

1:54:33 What's a future you're excited about or hopeful for?

1:59:42 Where to find and follow Anthropic

 

Transcript

View transcript

