
PhD Fellowships

The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research.
Status: Closed for submissions

Grant winners

People who have been awarded grants within this grant program:

Yawen Duan

University of Cambridge
Class of 2023

Caspar Oesterheld

Carnegie Mellon University
Class of 2023

Kayo Yin

UC Berkeley
Class of 2023

Johannes Treutlein

UC Berkeley
Class of 2022

Erik Jenner

UC Berkeley
Class of 2022

Erik Jones

UC Berkeley
Class of 2022

Stephen Casper

Massachusetts Institute of Technology
Class of 2022

Xin Cynthia Chen

ETH Zurich
Class of 2022

Usman Anwar

University of Cambridge
Class of 2022

Zhijing Jin

Max Planck Institute
Class of 2022

Alexander Pan

UC Berkeley
Class of 2022

Results

All outputs that have resulted from this grant program:


Stephen Casper

  1. Casper, Stephen, et al. "Explore, Establish, Exploit: Red Teaming Language Models from Scratch." arXiv preprint arXiv:2306.09442 (2023).
  2. Shah, Rusheb, et al. "Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation." arXiv preprint arXiv:2311.03348 (2023).

Stephen Casper, Cynthia Chen, Usman Anwar

  1. Casper, Stephen, et al. "Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback." arXiv preprint arXiv:2307.15217 (2023).

Yawen Duan

  1. Ji, Jiaming, et al. "AI Alignment: A Comprehensive Survey." arXiv preprint arXiv:2310.19852 (2023).
Request for Proposal

The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities outside the US, UK, or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field. Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.

Questions about the fellowship or application process not answered on this page should be directed to grants@futureoflife.blackfin.biz

Purpose and eligibility

The purpose of the fellowship is to fund talented students throughout their PhDs to work on AI existential safety research. To be eligible, applicants should either be graduate students or be applying to PhD programs. Funding is conditional on being accepted to a PhD program, working on AI existential safety research, and having an advisor who can confirm to us that they will support the student's work on AI existential safety research. If a student has multiple advisors, these confirmations are required from all advisors. First-year graduate students are exempt from this last requirement; for them, an "existence proof" suffices: in departments requiring rotations during the first year of a PhD, for example, funding is contingent on only one of the professors making this confirmation. If a student changes advisor, this confirmation is required from the new advisor for the fellowship to continue.

An application from a current graduate student must address in the Research Statement how this fellowship would enable their AI existential safety research, either by letting them continue such research when no other funding is currently available, or by allowing them to switch into this area.

Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.

Continued funding is contingent on continued eligibility, demonstrated by submitting a brief (~1 page) progress report by July 1st of each year.

There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities.

Application process

Applicants will submit a curriculum vitae, a research statement, and the names and email addresses of up to three referees, who will be sent a link where they can submit letters of recommendation and answer a brief questionnaire about the applicant. Applicants are encouraged but not required to submit their GRE scores using our DI code: 3234.

The research statement may be up to 3 pages long, not including references, and should outline the applicant's current plans for doing AI existential safety research during their PhD. It should include the applicant's reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies. For current PhD students, it should also detail why no existing funding arrangements allow work on AI existential safety research.

Timing for Fall 2023

The deadline for application is November 16, 2023 at 11:59 pm ET. After an initial round of deliberation, those applicants who make the short-list will then go through an interview process before fellows are finalized. Offers will be made no later than the end of March 2024.

Supplementary materials

AI Existential Safety Research definition
