Owen Cotton-Barratt and Toby Ord

There are several different kinds of artificial general intelligence (AGI) which might be developed, and there are different scenarios which could play out after one of them reaches a roughly human level of ability across a wide range of tasks. We shall discuss some of the implications we can see for these different scenarios, and what that might tell us about how we should act today.

A key difference between types of post-AGI scenario is the ‘speed of takeoff’. This could be thought of as the time between first reaching a near human-level artificial intelligence and reaching one that far exceeds our capacities in almost all areas (or reaching a world where almost all economically productive work is done by artificial intelligences). In fast takeoff scenarios, this might happen over a scale of months, weeks, or days. In slow takeoff scenarios, it might take years or decades. There has been considerable discussion about which speed of takeoff is more likely, but less discussion about which is more desirable and what that implies.

Are slow takeoffs more desirable?

There are a few reasons to think that we’re more likely to get a good outcome in a slow takeoff scenario.

First, safety work today suffers from nearsightedness. Since we don’t know quite what form artificial intelligence will eventually take, specific work today may end up being of no help on the problem we eventually face. In a slow takeoff scenario, there would be a period of time in which AGI safety researchers had a much better idea of the nature of the threat and could optimise their work accordingly. This could make their work several times more valuable.

Second, and perhaps more crucially, in a slow takeoff the concerns about AGI safety are likely to spread much more widely through society. It is easy to imagine this producing widespread societal support at a level matching or exceeding that for work on climate change, because the issue would be seen as imminent. This could translate to much more work on securing a good outcome, perhaps hundreds of times the total which had previously been done. Although there are some benefits to having work done serially rather than in parallel, these are likely to be overwhelmed by the sheer quantity of extra high-quality work which would attack the problem. Furthermore, the slower the takeoff, the more of this additional work can also be done serially.

A third key factor is that a slow takeoff seems more likely to lead to a highly multipolar scenario. If AGI has been developed commercially, the creators are likely to license out copies for various applications. Moreover, a slow takeoff could give competitors enough time to bring alternatives up to speed.

We don’t think it’s clear whether multipolar outcomes are overall a good thing, but we note that they have some advantages. In the short term they are likely to preserve something closer to the existing balance of power, which gives more time for work to ensure a safe future. They are also less vulnerable to a treacherous turn or to any single point of failure in an AGI.

Strategic implications

If we think that there will be much more time for safety work in slow takeoff scenarios, there seem to be two main implications:

First, when there is any chance to influence matters, we should generally push towards slow takeoff scenarios. They are likely to have much more safety work done, and this is a large factor which could easily outweigh our other information about the relative desirability of the scenarios.

Second, we should generally focus safety research today on fast takeoff scenarios. Since there will be much less safety work in total in these scenarios, extra work is likely to have a much larger marginal effect. This can be seen as hedging against a fast takeoff even if we think it is undesirable.
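As a purely illustrative sketch of this hedging argument (not part of the original essay), a toy model with diminishing returns to safety work makes the marginal-value comparison concrete. The logarithmic functional form and all of the numbers below are invented assumptions for illustration only.

```python
import math

def p_good_outcome(total_work, scale=0.1):
    """Toy model: probability of a good outcome rises with diminishing
    returns in the total quantity of safety work done (log-shaped curve).
    The functional form and constants are illustrative assumptions only."""
    return 1 - 1 / (1 + scale * math.log1p(total_work))

def marginal_value(baseline_work, extra=1.0):
    """Increase in the toy success probability from one extra unit of work."""
    return p_good_outcome(baseline_work + extra) - p_good_outcome(baseline_work)

# Assumed (invented) totals: little work gets done before a fast takeoff,
# while a slow takeoff mobilises perhaps hundreds of times more.
fast_total = 10      # units of safety work in a fast-takeoff scenario
slow_total = 1000    # units of safety work in a slow-takeoff scenario

print(f"Marginal value of one extra unit (fast takeoff): {marginal_value(fast_total):.5f}")
print(f"Marginal value of one extra unit (slow takeoff): {marginal_value(slow_total):.5f}")
# Under diminishing returns, the same unit of work done today shifts the outcome
# far more in the fast-takeoff scenario, which is the hedging argument above.
```

Under any model of this broad shape, the conclusion is the same: an extra unit of work has a much larger marginal effect in the scenario where little work would otherwise be done.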

Overall it seems to us that the AGI safety community has internalised the second point, and sensibly focused on work addressing fast takeoff scenarios. It is less clear that we have appropriately weighed the first point. Either of these points could be strengthened or outweighed by a better understanding of the relevant scenarios.

For example, it seems that neuromorphic AGI would be much harder to understand and control than an AGI with a much clearer internal architecture. So conditional on a fast takeoff, it would be bad if the AGI were neuromorphic. People concerned with AGI safety have argued against a neuromorphic approach on these grounds. However, precisely because it is opaque, neuromorphic AGI may be less able to perform fast recursive self-improvement, and this would decrease the chance of a fast takeoff. Given how much better a slow takeoff appears, we should perhaps prefer neuromorphic approaches.

In general, the AGI safety community focuses much of its attention on recursive self-improvement approaches to designing a highly intelligent system. We think that this makes sense inasmuch as it draws attention to the dangers of fast takeoff scenarios and hedges against being in one, but we would want to take care not to promote this approach to those considering how to design an AGI. Drawing attention to the power of recursive self-improvement could end up being self-defeating if it encourages people to design such systems, producing a faster takeoff.

In conclusion, it seems that when doing direct technical safety work, it may be reasonable to condition on a fast takeoff, as that is the scenario where our early work matters most. When choosing strategic direction, however, it is a mistake to condition on a fast takeoff, precisely because our decisions may affect the probability of a fast takeoff.

Thanks to Daniel Dewey for conversations and comments.
