
Postdoctoral Fellowships

The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety supports promising researchers who plan to work on AI existential safety research during their postdoctoral appointments.
Status:
Closed for submissions

Grant winners

People who have been awarded grants within this grant program:

Dr. Peter S. Park

Massachusetts Institute of Technology
Class of 2023

Nisan Stiennon

UC Berkeley - Center for Human-Compatible AI (CHAI)
Class of 2022

Results

All of the outputs that have resulted from this grant program:

No results to show yet.

Request for Proposal

The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety supports promising researchers who plan to work on AI existential safety research during their postdoctoral appointments. Funding is for three years, subject to annual renewals based on satisfactory progress reports. For host institutions in the US, UK, or Canada, the Fellowship includes an annual $80,000 stipend and a fund of up to $10,000 for research-related expenses such as travel and computing. At universities outside the US, UK, or Canada, the fellowship amount will be adjusted to match local conditions. Fellows will be invited to workshops where they can interact with other researchers in the field.

Questions about the fellowship or application process that are not answered on this page should be directed to grants@futureoflife.blackfin.biz.

Purpose and eligibility

The purpose of the fellowship is to fund talented postdoctoral researchers to work on AI existential safety research. To be eligible, applicants should identify a mentor (normally a professor) at the host institution (normally a university) who commits in writing to mentor and support the applicant in their AI existential safety research if a Fellowship is awarded. This includes ensuring that the applicant has access to office space and is welcomed and integrated into the local research community. Fellows are expected to participate in annual workshops and other activities that will be organized to help them interact and network with other researchers in the field.

Application process

Applicants will submit a detailed CV, a research statement, a summary of previous and current research, the names and email addresses of three referees, and the proposed host institution and mentor (whose agreement must have been secured beforehand).

The research statement should include the applicant’s reason for interest in AI existential safety, a technical specification of the proposed research, and a discussion of why it would reduce the existential risk of advanced AI technologies or otherwise meet our eligibility criteria.

The proposed mentor will be asked to submit a letter confirming that they will supervise the applicant's work on AI existential safety research as described above, and that the applicant will be employed by the host institution if the Fellowship is offered.

There are no geographic limitations on applicants or host universities. We welcome applicants from a diverse range of backgrounds, and we particularly encourage applications from women and underrepresented minorities.

Timing for Fall 2023

The application deadline is January 2, 2024, at 11:59 pm ET. After an initial round of deliberation, short-listed applicants will go through an interview process before fellows are finalized. Offers will be made no later than the end of March 2024.

Supplementary materials

AI Existential Safety Research definition
