
Call for proposals evaluating the impact of AI on Poverty, Health, Energy and Climate SDGs

The Future of Life Institute is calling for proposals for research evaluating in detail how artificial intelligence (AI) has so far impacted the Sustainable Development Goals (SDGs) relating to poverty, healthcare, energy and climate change, and how it can be expected to impact them in the near future.
Status: Open for submissions
Go to application portal
Request for Proposal

I. Background on FLI

The Future of Life Institute (FLI) is an independent non-profit that works to steer transformative technology towards benefiting life and away from extreme large-scale risks. We work through policy advocacy at the UN and in the EU and the US, and have a long history of grant programmes supporting work such as AI existential safety research and investigations into the humanitarian impacts of nuclear war. This request for proposals is part of FLI’s Futures program, which aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. The program seeks to engage a diverse group of stakeholders from different professions, communities, and regions to shape our shared future together.

II. Request for Proposal

Call for proposals evaluating the impact of AI on Poverty, Health, Energy and Climate SDGs

The Future of Life Institute is calling for proposals for research evaluating in detail how artificial intelligence (AI) has so far impacted the Sustainable Development Goals (SDGs) relating to poverty, healthcare, energy and climate change, and how it can be expected to impact them in the near future. This research can examine either cases where AI is intended to address the respective SDGs directly, or cases where AI has affected the realisation of these goals through its side effects. Each paper should select one SDG or target, analyse the impact of AI on its realisation up to the present, and explore the ways in which AI could accelerate, inhibit, or prove irrelevant to the achievement of that goal by 2030. We acknowledge that AI is a broad term, encompassing systems ranging from narrow to general and with varying degrees of capability. Hence, for the purposes of this RFP, we encourage using this taxonomy as a guide for exploring and categorising AI’s current and future uses.

FLI’s rationale for launching this request for proposal

Need for more detail on how AI can improve lives

There has been extensive academic research and, more recently, public discourse on the risks of AI. Experts have exposed the current harms of AI systems, as well as how increasing the power of these systems will scale these harms and even facilitate existential threats.

By contrast, the discussion around the benefits of AI has remained far vaguer. The prospect of enormous benefits down the road from AI – that it will “eliminate poverty,” “cure diseases” or “solve climate change” – helps to drive a corporate race to build ever more powerful systems. But while it is clear that AI will make significant contributions to all of these domains, the level of capability necessary to realise those benefits is much less clear.

As we take on increasing levels of risk in the race to develop more and more capable systems, we need a concrete and evidence-based understanding of the benefits, in order to develop, deploy and regulate this technology in a way that brings genuine benefits to people’s lives all over the world. This understanding has real-world consequences. For instance, if current AI models are already sufficient to solve major problems and meet global needs, then the way forward looks much more like applying and adapting what we have to the tasks at hand. As Future of Life Institute Executive Director Anthony Aguirre put it in a recent paper, ‘systems of GPT-4’s generation are already very powerful, and we have really only scratched the surface of what can be done with them.’

When considering the kinds of AI models we might need to achieve the SDGs in the near future, a recent paper provides a useful framework that grades AI models – those we already have and those not yet achieved – by generality and performance. For more on this, read the full paper.

Equally, if it becomes clear that the hurdles impeding the improvement of human lives stem not from technological shortcomings but from coordination problems or sociological puzzles, then that too will have implications for how future funding is allocated. The question then becomes: how can we know whether AI is presently bringing, or is able to bring, real benefits?

The SDGs

The Sustainable Development Goals (SDGs) remain the most broadly supported repository of high-priority problems for the world to solve, especially with regard to poverty, health, energy and climate-related challenges. The centrepiece of the 2030 Agenda for Sustainable Development adopted by all United Nations Member States in 2015, the 17 SDGs constitute an ambitious hope for a better world, but also, for our purposes, a set of concrete, measurable targets against which to assess progress in the four defined areas. For clarity, the goals directly relevant to these focus areas are SDG 1 (Poverty), SDG 3 (Health), SDG 7 (Energy) and SDG 13 (Climate).

The goals are interconnected. Solving one may involve or assist the solving of another. According to the UN, the goals “recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests.” Indeed, a 2020 paper by Vinuesa et al. analysed the effect AI could have on all of the goals. It concluded that while AI could have both positive and negative impacts on the SDGs, the net effect would be positive.

Nonetheless, each of them individually poses a formidable challenge, with its own specific contingencies and obstacles. Recent assessments of the state of progress on the SDGs have painted a bleak picture: many of the goals look unlikely to be achieved by the end of the decade. For instance, goals concerning hunger, malaria, employment, the proportion of people living in slums, greenhouse gas emissions and the extinction of threatened species are all deemed by the UN to be in the red, in part because of the indirect effects of COVID-19 – especially when it comes to poverty eradication (SDG 1). Evaluating the impact of AI on just one of these domains is task enough for a single research paper of approximately ten pages.

As the 2020 paper showed, there is cause for optimism about how AI might affect the achievement of each goal. But it is time we moved beyond the hypothetical and ascertained the impact AI is already having on the pursuit of these targets. Only then can we proceed to assess what kinds of AI development will help to bring about the better world promised in the 2030 Agenda, and how we might pursue them.

Filling a gap

As noted in the overview analysis by Vinuesa et al., “self-interest can be expected to bias the AI research community and industry towards publishing positive results.” As a result, we lack objective, independent analysis of the impact of AI thus far. Given that AI is rapidly being integrated into all aspects of society, this gap in the research now needs filling.

Sample proposal titles

These sample titles are intended to get researchers thinking about possible approaches. The selection of SDGs does not imply a preference for those particular goals in proposed research.

SDG 1

  • How has AI been affecting the implementation of social support systems?
  • What data do we have to suggest how AI will impact the goal of reducing poverty by half by 2030?
  • What is the risk that general-purpose AI will significantly increase poverty by then?

SDG 3

  • How has AI affected the goal of decreasing maternal mortality?

III. Evaluation Criteria & Project Eligibility

Proposals will be evaluated according to the track record of the researcher, the quality of the evaluation outline, the likelihood of the research yielding valuable findings, and the rigour of the proposed projection method.

Grant applications will be subject to a competitive process of external and confidential peer review. We intend to support several proposals. Accepted proposals will receive a one-time grant of $15,000, to be used at the researcher’s discretion. Grants will be made to nonprofit organizations, with institutional overhead or indirect costs not exceeding 15%.

IV. Application process

All applications should be submitted electronically through this form. We accept applications internationally, but all applicants must be associated with a nonprofit organization that can accept the funding on their behalf. We will not make grants directly to individuals.

Application deadline: 1st April 2024.

External reviewers invited by FLI will then evaluate all proposals according to the above criteria, and decisions will be shared by mid-to-late May. Completed research papers are due by 13th September.

All questions should be sent to grants@futureoflife.blackfin.biz. 

