Preventing extinction from ASI on a $50M yearly budget

AI Alignment Forum
Andrea_Miotti

ControlAI's mission is to avert the extinction risks posed by superintelligent AI. We believe that in order to do this, we must secure an international prohibition on its development. We're working to make this happen through what we believe is the most natural and promising approach: helping decision-makers in governments and the public understand the risks and take action.

We believe that ControlAI can achieve an international prohibition on ASI development if scaled sufficiently. We estimate that it would take approximately a $50 million yearly budget in funding to give us a concrete chance at achieving this in the next few years. In this post, we lay out some of the reasoning behind this estimate, and explain how additional funding past that threshold, including and beyond $500 million, would continue to significantly improve our chances of preventing extinction risk from ASI.

Preventing ASI 101

Negotiating, implementing and enforcing an international prohibition on ASI is, in and of itself, not the work of a single non-profit. You need to have the weight of nations behind you to achieve this kind of goal. If humanity manages to achieve an international ban on ASI, it'll be through the efforts of a sufficiently motivated, sufficiently powerful initial coalition of countries.

Assuming that we work in multiple countries in parallel, we could say the problem statement is: get each country to be motivated to achieve an international prohibition on ASI. It's not obvious what it means for a country to be "motivated" to do something, so it's worth taking a second to unpack.

Our full theory of change chart, which backtracks from the desired outcome to our currently running workstreams.

Normally, parts of a country's executive branch are responsible for international negotiations around urgent issues concerning national and global security. In practice, these are the groups that need to be sufficiently motivated to throw their weight behind the ban.

Branches of the government are generally not in the business of independently taking bold positions and then pursuing those positions to their logical ends. Instead, their stances and actions are mostly shaped by prevailing social currents. Some of these currents are informal. This includes things like the conversations they have with their colleagues, advisors, confidants and family members. It also includes any recent news cycles and the media they consume. Other parts of these currents operate through more formal channels, particularly in democracies. The legislative branch can influence the executive branch. [1] The public influences governments through elections, but also through polls, public discussions and common demands (at the very least because they affect the expectation of future election results).

If enough of these inputs point in the same direction, pushing for an international ban on ASI can become one of the country's top priorities. For this to work, we need pervasive awareness of the issue of extinction risk from ASI. This sentence makes two claims, both of which are fully necessary, so let us expand on each individually.

Claim 1: The awareness of extinction risk needs to be pervasive throughout society.

Prohibiting ASI development is not easy. It will require the relevant parts of the executive branch to take a great deal of initiative, and involve many hard tradeoffs.
At a minimum, it will mean significantly slowing down improvements in general-purpose AI and thus forgoing economic and military advantages. If some countries are initially not willing to cooperate with a ban, proponents of the ban will need to apply an expensive combination of carrots and sticks to bring holdouts on board.

For the relevant groups to push through these costs, it needs to feel like there is plenty of pressure to act, and like this pressure is coming from many places. If everyone who is asking for this is part of a specific, small faction, there will be a strong immune reaction and the faction will be ignored, or even purged in some cases.

Claim 2: The awareness needs to be specifically about extinction risk from superintelligent AI.

It is insufficient, and sometimes actively harmful, for people to vaguely dislike AI or only vaguely be aware that AI poses some scary risks. Due to the hard tradeoffs mentioned earlier, there will be pressure to take half-measures, at many layers, both internal and external. The only sufficient counterweight against this pressure is an understanding that ASI development must absolutely be prevented to ensure human survival. A lack of awareness of the specific issue will inevitably lead to anemic action and weak, unfocused policies that do not actually prevent the development of ASI.

This is one of the reasons why, in our communications, we solely focus on extinction risk from ASI, and we do not work on raising awareness of other AI risks, or otherwise trying to get people to vaguely dislike all AI. [2] All of our efforts are specifically around raising awareness of extinction risk from ASI, and how it may be addressed. [3]

Awareness is the bottleneck

Chart synthesized from the section "The Simple Pipeline" of Gabriel Alfour's post on The Spectre haunting the "AI Safety" Community.

It's a common perception that one cannot communicate directly to lay people about extinction risks from ASI, because they would never get it. Instead, one must cook up sophisticated persuasion schemes. Based on our experience, this idea is just plainly wrong. Just tell the truth!

We believe the primary bottleneck to getting an international prohibition on superintelligence is basic awareness of the issue. Most of the people we reach, for example among lawmakers and the media, have simply never been told about the problem in plain terms. We find that often, all it takes to bring someone on board is a single honest conversation. The fact that honestly explaining the concerns to people is such a low-hanging fruit is one of the reasons why we could get so much done in 2025.

Politicians and the public simply don't know that the most important figures in AI are literally worried about superintelligence causing human extinction. They simply don't know that the only way to avoid human extinction on which experts can truly form a consensus is not to build ASI in the first place. [4] The reason why they are not aware of this is because they haven't been told, not because they don't understand the concepts involved.

In our experience, most people find it intuitive that it is extremely dangerous to build something as powerful as ASI, that you don't understand and can't predict. They find it intuitive that you can't control ASI, that it can very easily precipitate catastrophic scenarios, and that this means you should not build it in the first place.
The reason why people are not aware of extinction risk from superintelligence is, simply put, because concerned experts have generally not been straightforward about their concern. The CAIS statement on AI risk is a rare exception to this, [5] but it's starting to get old, and even then it's just not enough.

We've met with lawmakers over 300 times. Most of the time, they've never had someone explain extinction risk to them before, nor have they ever heard of the CAIS statement before the meeting. Even then, politicians don't care about a person having signed a single statement once. That's not how they'd expect someone who's worried about the literal annihilation of the entire human race to behave. It sounds weak and almost fake to them.

In a serious world, you'd expect every single AI expert who is worried about extinction to be loudly and consistently vocal about it, including to the public and decision-makers in governments. As it stands, this is simply not the case. AI companies and their leaders constantly soften their communications, avoiding clearly mentioning extinction and preferring to talk about euphemisms and other risks. Anthropic's head of growth recently said that Anthropic constantly adjusts their communications to be "softer" and appear "less over the top". Sam Altman, when asked by a US Senator whether he had jobs in mind when he said that "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity", did not correct the senator and instead proceeded to talk about the possible effects of AI on employment.

If you ever have the chance to attend a house party in the Bay Area, you will get a really good sense of this: many researchers at AI companies are worried about extinction risk, and significantly orient their lives around this. At the same time, they don't talk about these risks publicly. It's obvious to us that the reason so little progress has been made towards international agreements on ASI is exactly because experts have failed to be consistently open about their concerns.

An asymmetric war

While an international ASI ban is obviously a very ambitious goal, there is a sense in which advocacy about extinction risks from ASI means having the wind at one's back. At a fundamental level, this is because approximately no one wants to die from ASI.

Politics is often an adversarial tug-of-war between opposing interests. When it comes to high-profile issues in American politics (e.g. abortion, marijuana legalization, Prop 22), it can take hundreds of millions of dollars. [6] However, when it comes to extinction risk, there is little conflict between different interest groups. If extinction risk materializes then everyone dies, regardless of their wealth, political affiliation or other personal interests. It is only an extreme minority of people who, even after having the chance to consider the dilemma of extinction risk, decide they are willing to bet on humanity's extinction in order to get to ASI.

This is true at a fractal level. Not only does it mean that we expect the issue to be nonpartisan within countries, but we expect the interests of countries to be aligned with each other as long as there is a significant risk that building superintelligence will cause human extinction. [7] This is why we think that there is a good chance that achieving the same kind of success we've already achieved, but at a larger scale, will lead to an international ban on superintelligence.
Since our approach is not about winning a political tug-of-war through sheer might, we expect that we have a shot (~10%) [8] at winning even with a budget as low as $50 million, which is at least an order of magnitude smaller than other political campaigns on major issues. It would be a long shot, and we think "good odds" (~30%) would require larger budgets on the order of $500 million. [9]

Let us elaborate a bit more on what we mean when we refer to a "political tug-of-war". A common tactic, especially when trying to prevent a law from being passed, is to deliberately confuse people, for example by loudly communicating only the upsides of your proposals and only the downsides of the opposition's, or through personal attacks on the opposition's character. Aside from the obvious moral issues with this strategy, such tactics are much less effective when it comes to an issue that is so clear cut and of such universal concern as extinction risk from ASI. With an issue like extinction risk, it becomes much harder to pit people against each other or to execute confusion tactics in order to hinder efforts to establish restrictions.

Scalable processes

At its core, ControlAI is an effort to create a scalable, industrial approach to averting extinction risks from ASI. The field of AI risk mitigation has historically relied on what we could call a "bespoke" or "artisanal" approach. That is, it relies on exceptional individuals to achieve specific successes, such as publishing a successful book, or performing some impressive networking feats, all through following their personal taste. The definition of what it means for these "artisanal" workstreams to "succeed" is not written down anywhere, and not much effort goes into defining it and grounding it. For most people focused on AI risk, getting a sense of whether they've succeeded doesn't look like measuring something as much as it looks like applying ad-hoc rationales that easily fall prey to galaxy-braining. Everything hinges on the quality of the person's taste at best, and on sheer luck at worst. Even when you succeed at these endeavors, you're not in a position to easily replicate this success.

To make what we mean more concrete: Eliezer Yudkowsky and Nate Soares can't trivially replicate the success they had by publishing the book "If Anyone Builds It, Everyone Dies" by simply putting more resources into the same effort, let alone scale up the approach by building an organization around it. The book was excellent and has helped spread awareness, but you can't publish a book every week. Similarly, the CAIS "Statement on AI Risk" was excellent for establishing common knowledge, and has greatly helped us in our endeavors. That said, this type of work is hard to replicate, and indeed has not been replicated: neither CAIS nor any other organization has since succeeded in getting all the CEOs of top AI companies to sign a similarly candid statement. [10]

ControlAI takes a different approach, one that straightforwardly allows scaling up workstreams once they've been set up. Whenever we have a goal that is too far off to tackle directly, we break it down into the most ambitious possible intermediate goal that we think we can act on. Crucially, we choose intermediate goals whose progress we can measure as hard numbers. In this way, we're approximating sales funnels, the gold standard for how companies handle sales. Here are a couple of examples of how we apply this approach.
One of the early challenges we faced was to crystallize our successful lawmaker briefings into something that would accumulate over time and generate momentum. Our answer to this was to create a campaign statement [11] and to ask lawmakers to publicly support it. We've already secured 120 such supporters! This solution satisfies a few important constraints:

- It moves the world into a state where it's somewhat easier to achieve our overarching goal of an international ban on ASI: each public supporter helps by creating common knowledge that lawmakers consider this an urgent issue that merits immediate attention.
- It gives us a clear, numeric measure of success for this workstream: the number of lawmakers who signed on to our campaign.
- We could tackle this challenge directly: at the end of each briefing, simply ask lawmakers to publicly support the campaign.
- Marginal inputs compound over time: each additional lawmaker publicly supporting the campaign helps increase the credibility of the issue, and makes it easier for more lawmakers to take a stance on it in the future.

After a while, we were ready to push toward something more ambitious. So, while still working on growing the number of lawmakers supporting the campaign, we introduced a new metric: the number of public declarations, written or spoken by an individual lawmaker, [12] that explicitly reference AI extinction risk or preventing superintelligence, on the condition that this happened after we personally briefed them. In the UK, this metric is currently sitting at 21.

These metrics are numerical and clearly defined, meaning that even a fresh graduate hire can be pointed at one and told to "make it go up" or to improve conversion rates between one step of the funnel and the next. There's no danger that the person will fool themselves about how much progress they're making. [13] In fact, most reasonably smart and motivated people, given a reasonable amount of mentorship, will naturally iterate on their approach and eventually achieve good results. This way, we don't need to hit the jackpot on hiring people who possess incredible taste right off the bat.

The best proof for this claim is our success in Canada. In about half a year, with only 1 staff member who had no previous experience in policy, we managed to brief 89 lawmakers and spur multiple hearings in the Canadian Parliament about the risks of AI. These hearings included testimonies from many experts who expressed their concerns about extinction risks:

- ControlAI's Andrea Miotti (CEO), Samuel Buteau (Canada Program Officer) and Connor Leahy (US Director)
- Malo Bourgon (MIRI)
- Max Tegmark and Anthony Aguirre (FLI)
- Steven Adler (ex-OpenAI)
- David Krueger (Evitable)

The fact that our approach is easily scalable is precisely the reason why we can write, in the rest of this post, about how we plan to make productive use of funding much larger than we currently enjoy. It's also why, in some cases, we are able to make tentative predictions about what kind of success we expect to achieve.

What we'd do with $50 million or more per year

Right now, we believe that we are underfunded compared to what it would take us to have an actual shot at achieving an international ban on superintelligence. Our estimate is that a $50 million yearly budget [14] would give us a chance to succeed, although it would be a long shot. [15] Here, we break down how we would allocate a budget of $50 million to maximize our chances at achieving an international ban on ASI development.
We also show how more funding would further increase our chances of succeeding, giving a few examples of how we would make productive use of budgets as large as $500 million or $1 billion (roughly in line with major campaigns in the US, such as abortion, marijuana policy, and the presidential race). We'll cover our plans to use funds for policy advocacy in the US and the rest of the world, public awareness campaigns, policy research, outreach to thought-leaders (such as journalists), grassroots mobilization, and more.

US policy advocacy

Within a $50 million yearly budget, we'd be able to hire ~18 full-time policy advocates dedicated to briefing US members of Congress. In principle, we'd have enough bandwidth to meet every member of Congress within 3 to 6 months, ensuring that they've been briefed at least once on extinction risk from superintelligence. While we are confident that we'd have the capacity for these meetings, it is less clear whether we'd be able to regularly brief members of Congress face-to-face, or whether we'd spend a significant fraction of our time communicating with staffers. At the moment, we are cautiously optimistic: in the past 5 months, with ~1 staff member, [16] we've managed to personally meet with and brief 18 members of Congress, as well as over 90 Congressional offices.

Additionally, we'd have the capacity to brief offices in the executive branch relevant to national security and international affairs. These agencies are trusted by many other actors to stay on top of security risks, especially drastic ones like extinction risks from superintelligence; it's essential for large-scale coordination that members of these institutions have a good grasp on the issue. A budget of $50 million would also allow us to hire a small team of ~6 staff members focused on performing outreach to state legislators in a small number of high-priority states.

The bread and butter of our work is to ensure that US decision-makers are properly informed about and understand:

- Concepts like superintelligence, recursive self-improvement, compute, etc.;
- That superintelligence poses an extinction risk;
- That this can be addressed by an international agreement prohibiting ASI, and how such an agreement could be designed such that it is actually enforced.

We expect that, to the degree that we succeed in informing decision-makers about these matters, we'll be able to leverage this into measurable outcomes such as:

- Politicians make public statements about superintelligence and the extinction risks it poses.
- Politicians make public statements about the need for an international prohibition on superintelligence development.
- Hearings are held in Congress on the above topics.
- The US takes steps toward negotiating an international prohibition on superintelligence with other countries.

Within a $500 million budget, we would not only double or triple the number of full-time staff dedicated to US policy advocacy, but we'd also be able to attract the best talent, and hire policy advocates with very strong pre-existing networks.

Policy advocacy in the rest of the world

In the UK, we've already moved the national conversation on superintelligence forward. In little more than a year, we've gathered 110 supporters on our campaign statement, and catalyzed two debates at the House of Lords on superintelligence and extinction risk. At a yearly budget of $50 million, we could afford to more than triple our efforts in the UK.
Now that we've managed to get some attention, we'll put more focus on the following:

- Getting the government to discuss bills, amendments and actions the UK could take to champion the establishment of an international prohibition on superintelligence; [17]
- Executive branch outreach.

A coalition of countries sufficiently powerful to achieve a ban on ASI will likely need multiple powerful countries to participate. To maximize the probability that this happens, we plan to prioritize the G7 in our policy advocacy efforts. This is because the G7 includes all of the most powerful countries that we're confident can be influenced democratically. Within a budget of $50 million, we'd be able to match our current UK efforts in all other G7 countries and in the EU's institutions. This means we'd likely be able to replicate our UK successes in most of these places, even accounting for bad luck or for them being slightly more difficult. [18]

With roughly an additional $5 million in our budget (on top of the previous $50 million), we'd be able to dedicate at least 1 policy advocate (in some cases 2) to many other countries in the rest of the world. For example, we could maintain a presence in almost all G20 countries. We don't know in advance which countries will respond well to our efforts, so we think it would be useful to spread out and take as many chances as possible. Our previous experience shows that it's at least possible to get good results with only 1 staff member in some G7 countries. In Canada, our only local staff member managed to hold more meetings with representatives during February than any corporate lobbyist or advocate. It seems probable that we can replicate our results in Canada in at least some G20 countries, where the competition for the attention of decision-makers is less stiff.

Public awareness

Our theory of change hinges not only on key decision-makers understanding the issue, but also on the public doing so. Our key messages to the public are: [19]

- Top AI experts warn that AI poses an extinction risk.
- We can prevent this risk by prohibiting superintelligence.
- Superintelligence may come quickly, in a matter of 5 years or less.

We believe our key messages are straightforward: you don't need to be a genius or to be deeply familiar with AI to understand them. [20] The main bottleneck is making the public aware of the issue in the first place; after that, it's getting them to take action about it.

We roughly expect that the average person will need to see each of our key messages 7 to 10 times in order to remember them, at the bare minimum. [21] That said, we expect that even after the same person sees a message dozens of times, the marginal returns on delivering the same message to this same person once more have still not been saturated. For example, we expect each new view will make the person slightly more likely to bring up the issue spontaneously in conversation, or slightly more likely to change their vote based on this issue. [22]

Within a budget of $50 million, we expect that we can achieve on the order of 2 billion ad impressions in the US, [23] an order of magnitude increase over our current ~200M. [24] Various sources suggest that the average YouTube CPM (cost per thousand impressions) is roughly $9, with a range between approximately $3 and $23 depending on the ad and campaign. Using this as a reference, and assuming we allocate $16 million to raw ad spend, we'd get somewhere between 700 million and 5.3 billion impressions.
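To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch of the impression estimate above. The $16 million ad spend and the $3-$23 CPM range are the figures quoted in this section; everything else is plain arithmetic, not a forecast of actual campaign performance.

```python
# Back-of-the-envelope: how many impressions a fixed ad budget buys at a given CPM.
# CPM = cost per 1,000 impressions. Figures are the ones quoted above.

def impressions(budget_usd: float, cpm_usd: float) -> float:
    """Impressions bought for a budget at a given cost per 1,000 impressions."""
    return budget_usd / cpm_usd * 1_000

ad_spend = 16_000_000  # assumed yearly allocation to raw ad spend, from the text

for cpm in (3, 9, 23):  # cheapest, average, and most expensive YouTube CPMs quoted above
    print(f"CPM ${cpm:>2}: ~{impressions(ad_spend, cpm) / 1e9:.2f}B impressions")

# CPM $ 3: ~5.33B impressions
# CPM $ 9: ~1.78B impressions
# CPM $23: ~0.70B impressions
```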
This is assuming that all of our ad spend is on a single platform, but we can easily improve this by spreading our ad spend across platforms. For context, a $16 million per year ads budget is comparable to the ad spend of companies like Shake Shack, but still two to three orders of magnitude away from presidential campaigns or Coca-Cola's yearly ad spend.

If this were spread uniformly across the US population, every US adult would see our ads at least ~3 times. [25] More realistically, if we targeted a narrower segment of the US population, our ads could be seen by 10% of US adults ~30 times each, or by 5% of US adults ~60 times each. In other words, it becomes plausible that a sizable portion of the US population would remember our key messages: they would be aware that AI poses an extinction risk, and they would remember that the main recommended fix is to prohibit the development of superintelligent AI.

This level of awareness seems like it would be a great step forward, but we would not stop there. In addition to raising awareness, we'd also aim to help people take action that moves the world toward an international ban on superintelligence. So far, we think that the most useful CTA (call to action) is to ask people to email or call their lawmakers. Using this CTA allows us to build a base of supporters who are motivated enough to take this kind of action, whom we can call upon again in the future. We have already built the online campaigning infrastructure for this, and our 180k email subscribers have already sent over 200k messages to their lawmakers about ASI.

At this $50 million budget, we estimate that we could grow this base of supporters to 2 million citizens within 1 year. When we email this type of CTA, we currently get an action rate of around 2%. We think we can safely assume that this action rate will not degrade by a whole order of magnitude at this scale. Given these assumptions, we predict that if we target some carefully selected subset of US states, this would produce enough constituent pressure to get on the radar of key decision-makers and their staff purely through constituents emailing and calling lawmakers. For example, if we target swing states, we might be able to get electoral campaigns to at least be aware of our issue.

Public awareness efforts can scale massively before saturating. There are straightforward, non-innovative ways to make productive use of budgets as large as $500 million or $1 billion: large-scale ad campaigns routinely do so. Coca-Cola spent $5.15 billion on advertising in 2024, and Trump's 2024 presidential campaign spent more than $425 million, or $1.4 billion including outside groups. This is also the scale at which, if we wanted to do so, we could spend $8 million on a Super Bowl ad about extinction risk from superintelligent AI! [26]

A total budget of $500 million to $1 billion would allow us to scale our ad spend massively. At this point, even with extremely pessimistic assumptions, [27] we could reach each US citizen at least a dozen times. Alternatively, we could focus on the 10% most engaged segment of the US population, reaching each individual at least 100 times. As a lower bound, we are confident this is enough to make sure that every citizen in the US is at least somewhat aware of the issue. More importantly, we suspect that at this scale we could push the issue to the forefront of the public's attention, and make it into one of the main topics in the national conversation.
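For concreteness, the reach and mobilization figures for the $50 million case above can be reproduced with a similar rough sketch. The 800 million US impressions (footnote [25]), the 2 million supporters, and the ~2% action rate are taken from the text; the ~260 million US adult population is an outside assumption, and the whole thing is an order-of-magnitude illustration rather than a model.

```python
# Rough reach and mobilization arithmetic for the $50 million budget case described above.

US_ADULTS = 260_000_000  # assumption: roughly 260 million US adults

# Reach: ~800 million US impressions (see footnote [25]).
impressions_us = 800_000_000
print(f"Exposures per adult, uniform targeting:   ~{impressions_us / US_ADULTS:.0f}")
print(f"Exposures per adult, targeting 10% of US: ~{impressions_us / (0.10 * US_ADULTS):.0f}")
print(f"Exposures per adult, targeting 5% of US:  ~{impressions_us / (0.05 * US_ADULTS):.0f}")

# Mobilization: a 2-million-person supporter base emailed with a call to action,
# at the current ~2% action rate and at a pessimistic rate degraded by 10x.
supporters = 2_000_000
for action_rate in (0.02, 0.002):
    print(f"Actions per email blast at {action_rate:.1%}: ~{supporters * action_rate:,.0f}")

# Exposures per adult, uniform targeting:   ~3
# Exposures per adult, targeting 10% of US: ~31
# Exposures per adult, targeting 5% of US:  ~62
# Actions per email blast at 2.0%: ~40,000
# Actions per email blast at 0.2%: ~4,000
```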
We acknowledge it's really hard to predict the effects of a campaign at this scale, [28] but we think that it can help to anchor on other campaigns of similar scale in the US: abortion, marijuana policy, and the presidential race itself. As we argued in the section An asymmetric war, we see these campaigns as mostly a zero-sum game, in which both sides must burn as many resources as possible to be competitive. If we receive comparable funding, we feel confident in our chances, as we see an AI extinction risk awareness campaign as a much more positive-sum game.

One last point about ad spending: in order to run an ad campaign, we need not only to buy ad space, but we also need to expand our marketing team so that it has sufficient capacity to optimize the campaign. Within a budget of $50 million, we could afford to dedicate ~6 people to this, offering salaries roughly between $100k and $200k. This addresses basic needs, but it does not provide an appropriate amount of bandwidth for the task, nor does it allow us to attract and retain the best talent.

Running an effective ad campaign is not a fire-and-forget operation. We'd need to continuously measure results, A/B test, experiment, brainstorm ads and concepts, research trends and audience behaviors, even come up with novel metrics and testing methodologies. All of this information needs to be collected, analyzed and fed into the next round of iteration. The rounds of iteration themselves need to be very fast if we want to improve in a relevant amount of time. Whereas less ambitious marketing teams may take ~3 months to go through an iteration cycle, we'd have to do it in ~2 weeks. To run this kind of operation, we would benefit immensely from hiring the most talented people, who can not only follow existing playbooks, but also innovate. These people are in extremely high demand, and we're competing for them against the private sector. Within a budget of $500 million, we could afford to dedicate ~20 people to this, offering salaries roughly between $200k and $400k. This would allow us to attract top talent and compete with the private sector. [29]

Grassroots mobilization

We already have a base of motivated supporters. 180k people are subscribed to our mailing list. 30k of our supporters have contacted their lawmakers about extinction risk from ASI, and ~2000 of our supporters are willing to commit 5 minutes per week to regularly take small actions to help with the issue. Dozens have shown up at our pilot in-person events. With more funding, we think we can turn this into a significant grassroots movement. We currently lack the capacity to properly organize and mobilize this community. We believe that we'd have sufficient capacity for this at a $50 million overall budget. Concretely, this work would consist of things like:

- Vetting local leaders, coaching them and helping them with their work.
- Organizing or providing funding for local events.
- Helping with the initial setup of groups, legal entities, basic websites, etc.
- Building and providing services like Microcommit and tools such as the "Contact your lawmakers" tool on our campaign website.
- Providing educational materials like tutorials and scripts for contacting one's lawmakers.

Policy work

As part of our work in policy advocacy, it is often useful to be able to show policymakers a concrete policy proposal.
These proposals can take various forms: legal definitions of superintelligence, high-level proposals for an international agreement on prohibiting ASI, national bills implementing a country's obligations under an international agreement. These proposals are not meant to be the exact, definitive version of the law that will eventually be implemented. It is understood that things will change as time passes, more parties weigh in, and negotiations unfold. That said, it helps in many ways to have initial, concrete proposals. It helps people to publicly discuss, red-team, and refine the proposals. But it also helps to show policymakers a proof-of-concept that concrete measures can be taken to prevent extinction risk from superintelligence.

The more countries we reach, the more complicated this work becomes. The legal landscape differs significantly between countries: they have different legal traditions, processes, institutions, constitutions, limits on the power of governmental bodies, etc. It takes a team of policy researchers, and the help of parliamentary lawyers, to develop such proposals. We estimate that we'd have sufficient capacity for this work at around a $50 million total yearly budget.

Thought-leader advocacy

Most people rely on trusted voices, across the political spectrum, to help them navigate complex issues rather than trying to form their view from scratch on every single topic. This is a normal and healthy part of how democracies function: just like representative democracy exists because we don't expect every citizen to participate directly in the full political process, we don't expect everyone to independently decide to pay attention to such a highly complex matter as extinction risk from ASI. Instead, people look to figures like journalists, academics and public intellectuals to help them understand which issues deserve their attention.

One of our key workstreams is outreach to these kinds of thought-leaders. At the moment, this mostly includes journalists, and sometimes content creators. This workstream has so far resulted in 22 media publications on risk from superintelligent AI, including in TIME and The Guardian, and in 14 collaborations (a mix of paid and free) with content creators, including popular science communicator Hank Green, Rational Animations, and more.

With more funding, we could not only scale up these workstreams, but also extend this outreach effort to include NGOs other than those that focus on AI, academics, religious leaders, authors and other public intellectuals, CEOs of companies outside of tech, leaders of local communities, and others. If we want our society to develop a deep awareness of the extinction risk posed by ASI, we need to help these people understand the issue.

At a $50 million total budget, we'd have enough bandwidth for a thought-leader outreach effort focused on the lowest-hanging fruit. In practice, this likely means having a single generalist team spread across every type of thought-leader, and covering only the Anglosphere. At a total budget of $500 million, we could afford to build strong dedicated teams, each focused on one of the most important thought-leader communities. At the same time, we could establish a presence in other major cultural regions outside the Anglosphere.

Attracting and retaining the best talent

Many in our organization are forsaking significant increases in compensation they could command in the private sector, purely because they are deeply committed to our mission.
As we scale, it will become increasingly difficult to find talented people who are willing to take this kind of pay cut. This is especially true if we scale aggressively. To attract the caliber of talent that a problem of this importance deserves, we need to offer salaries that are as competitive as possible with the private sector.

At a yearly budget of $50 million, we'd be able to slightly improve our compensation, though most of the increase would be eaten by scaling the number of staff rather than increasing pay. As a rough estimate, we could probably offer between $100k and $200k to people in the public awareness team (comparable to sales in the private sector), and ~$350k to principal staff. At $500 million, we think we could be truly competitive. While we would likely still be unable to match the salaries offered by AI corporations to staff who take part in their lobbying and marketing operations, we could significantly reduce the gap.

Conclusion

We want to be upfront: we don't know for sure if this will work. An international ban on ASI is an extraordinarily ambitious goal. But we believe that the structure of the problem gives us a fighting chance: approximately no one wants to play a game that risks wiping out humanity, regardless of the prize.

In 2025, with a team of fewer than 15 people, we've built a coalition of over 110 UK lawmakers to support our campaign, with 1 in 2 lawmakers having supported our campaign after we briefed them. On top of this, we've catalyzed parliamentary debates on superintelligence and extinction risk. In the US, where competition for lawmakers' attention is the fiercest, we've personally met with 18 members of Congress with only a tiny number of staff on the ground. On the public awareness side, over 30k people have used our tools to send over 200k messages to their lawmakers about extinction risk from superintelligence, most of them in the US.

This wasn't a fluke of exceptional talent or lucky connections; we've done this with remarkably junior staff, in little more than a year. It was the result of a straightforward, scalable process, and of building solid foundations that enable us to scale to meet the challenge. What's standing between us and a real fighting chance is funding commensurate with the problem.

If you are a major donor or a philanthropic institution, please get in touch at [email protected]. We'd be glad to walk you through our theory of change in more detail and discuss how additional funding would be deployed.

If you know a major donor or someone at a philanthropic institution, please introduce us. A warm introduction from someone they trust goes much further than a cold email from us. You can loop us in at the same address.

If you're an individual donor who is considering a gift of $100k or more, please reach out at the same address. Please only consider doing so if this wouldn't significantly impact your financial situation. We don't want anyone to overextend themselves on our behalf, no matter how much they care about the issue. We are a 501(c)(4) in the US and a nonprofit (not a registered charity) in the UK, so your donations are not tax-deductible.

We're currently not set up to receive smaller donations. If you still want to contribute, you can check our careers page. If you see a role you could fill, please apply. If you know someone who'd be a good fit, send them our way.

Footnotes

1. e.g. the US Congress has the "power of the purse"; parliamentary systems can hold "votes of no confidence".
2. Between our founding in October 2023 and mid 2024, we ran 3 campaigns in rapid succession. One of these was a campaign against deepfakes. This was a sincere effort: we do believe that deepfakes are a problem that should be addressed with legislation, and we're proud of our achievements as part of that campaign. That said, after refining our thinking and developing the ideas we're espousing in this post, we've updated towards focusing exclusively on extinction risk from ASI. This is what we've been doing since the end of 2024.

3. Consider the environmentalist movement as a cautionary example. Environmental efforts have generally failed to achieve their stated goals (e.g. reducing emissions, reversing climate change). Richard Ngo argues that they've caused serious collateral harms. We think this is partly because of their lack of focus. Rather than concentrating on a single core concern, environmental campaigns rummage around for anyone who, for any reason, feels good vibes toward the idea of the environment. As a result, the movement struggles to achieve good policies despite being enormously salient. Because of its lack of focus, it is interlinked with anti-capitalist groups, and so it tends to oppose interventions that would actually help with climate change, such as nuclear energy, as well as carbon capture and market-based solutions in general. Relevant posts on LessWrong: @habryka's "Do not conquer what you cannot defend", @Gabriel Alfour's "How to think about enemies: the example of Greenpeace".

4. To clarify: this doesn't mean that everyone thinks the only way to avoid extinction is to not build ASI. Some do, while others have complicated ideas about how ASI can be built safely. The point is that none of those specific complex ideas benefit from a broad expert consensus. The only thing that most of us can agree on is that it won't kill us if we don't build it.

5. There have been other statements, such as this great one from FLI, but none signed by *both* top AI scientists and CEOs of top AI companies.

6. Sources: abortion was roughly $400 million in 2024, marijuana legalization was roughly $185 million in 2024, Prop 22 was roughly $220 million.

7. See Annex 2 of our paper "How middle powers may prevent the development of ASI". While the paper focuses on the perspective of middle powers, this section's analysis extends to superpowers.

8. The probabilities are produced mostly by gut feeling, but the major barriers that were considered are the following. 1) We are able to maintain a good internal culture as we scale extremely aggressively. 2) The lower bounds of our gears-level estimates mentioned in the second half of this post (e.g. ad impressions per dollar) hold. 3) We are able to validate our approach at scales of ~$50 million a year, and are able to continue raising at this scale if getting the agreement in place takes longer than a year. 4) The issue becomes a top-10 salient issue in the US and another 2-3 major countries. 5) The behavior of governments championing the ban is sufficiently connected to the right insights about extinction risk and ASI, requiring at the very least that public discourse about the ASI ban does not get distracted or confused in a way that makes the resulting actions ineffective. 6) This leads to an international ban on ASI in which major powers, including the US and China, conclude that participation serves their national interests and which they try to enforce globally. Alternatively, if China or other countries do not join, the coalition of countries behind the ASI ban is powerful enough to be able to deter non-participating countries and any rogue actors from developing ASI.

9. We strongly believe in the principles we follow: honesty, openness, and democracy. Of course, we do think that our approach to averting extinction risks from ASI is the best; we wouldn't pursue it if we didn't think so. At a $500M budget level, we'd love to fund organizations that pursue different approaches, as long as they respect our basic principles. If we had that level of funding, we would seek to ensure that there are other organizations pursuing a candid approach to communication about ASI, and organizations that directly tackle the need for strong international coordination.

10. Notably, a statement like this one can generate a temporary spike of media coverage, but does not generate sustained attention by itself. Statements like this one need a sustained campaign (like the one we're running) in order to receive sustained attention.

11. The statement reads: "Nobel Prize winners, AI scientists, and CEOs of leading AI companies have stated that mitigating the risk of extinction from AI should be a global priority. Specialised AIs - such as those advancing science and medicine - boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security. The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems."

12. Examples of this: a lawmaker giving a speech in parliament, writing an op-ed, or speaking in an interview to a major media outlet.

13. Importantly, our metrics are strictly focused on AI extinction risk. This reduces the risk that the person working on them, or the organization as a whole, will fool themselves into pursuing issues other than preventing extinction risk from superintelligent AI. A "lawmaker public declaration" only counts if it covers extinction risk specifically. If people at ControlAI spend time trying to push topics such as "job loss", "AI ethics" or "autonomous weapons", we consider this a failure. This is how we fight The Spectre, and stay laser-focused on addressing extinction risk from superintelligence.

14. This should be considered a very rough estimate; it could be $30M to $80M.

15. As we mentioned earlier, we feel that this is around a ~10% chance.

16. 1 member for most of this period; the 2nd member joined in the past month.

17. We've already fostered two debates about prohibiting ASI, and helped submit one amendment recognizing ASI and putting in place kill-switches for use in case of AI emergencies. To our knowledge, we are the first organization to successfully prompt a debate, in the parliament of a major country, focused specifically on prohibiting superintelligence.

18. Consider that replicating a success should be much easier than doing it the first time. By design, our results are public, and so produce common knowledge. Now that 100+ lawmakers support our campaign in the UK, it is easier for other lawmakers to take a similar stance, including in other countries.

19. To a lesser degree, we would like people to remember our organization as a place where they can find trustworthy information on the issue and what they can do to help solve it.

20. The vast majority of people will not feel the need to fully understand the technical and geopolitical details in order to buy into the concern. The important part is that most people can intuitively understand why and how ASI can cause human extinction, and are happy to defer to experts about the details.

21. This is the most common rule of thumb in marketing, and is backed up by some academic research as well, e.g. see "Advertising Repetition: A Meta-Analysis on Effective Frequency in Advertising".

22. Unlike the previous one, this statement is not backed by academic research. While most academic research focuses on marketing aimed at selling products and services, our goals present quite a different challenge. There are two main differences that make us expect to keep getting returns after even hundreds of exposures. 1) Our messages are somewhat novel and complex to the audience. This complexity will have to be accounted for in some way: either the message is presented in a complex way that takes more exposures to remember, or the message is broken down into many building blocks, each of which needs to be shown many times. 2) The success bar is somewhat higher: we do benefit from people responding to CTAs similar in scope to "buying a product", but we also benefit from deeper engagement (see the section on "Grassroots mobilization"), and from people spontaneously bringing up the topic in conversations, which happens more if we create common knowledge that the topic exists.

23. This section assumes that we will allocate 60% of our ad spend to the US. We expect it will be quite a bit easier to yield good results in other countries, mostly due to lower cost per impression. For example, if we put the remaining 40% into 3 G7 countries, we expect to roughly be able to replicate the same success as in the US across those 3 countries.

24. Including both organic and paid reach.

25. This corresponds to 800 million total impressions.

26. Though it's not clear to us at the moment if this would be a good use of money.

27. In this paragraph, we use our worst-case assumption that scaling ad spend by x30 multiplies impressions by x4. We expect it's much more likely that scaling x30 will multiply impressions by x10 to x15.

28. Simpler models and extrapolations that we think we can use at a $50 million budget will break at this scale. There are strong reasons to deviate from these, both in pessimistic and optimistic directions. At this scale, we've probably run out of people who can be mobilized solely through ads. At the same time, network effects come into play, where people hear about the issue from others, and they start to see it as a "normal" part of the political discourse. It seems to us that trying to model the net effect ahead of time would be a fool's errand.

29. For reference, here's a job post by Anthropic for a marketing role, which they advertise as paying $255k to $320k.