AI policy: Differential Acceleration in AI development

Summary: I do the following: 1) provides a framework and consideration of “differential acceleration” in AI development (concentrating resources towards promoting safer or more promising technological paradigms, while simultaneously restricting or limiting the advancement of others) and 2) an exploration of differentially accelerating Cognitive Emulations to achieve AGI.

Epistemic status: I wrote this up in an ~hour in April 2023, so many of the examples are outdated, though the concept still remains the same. Not confident, but seems something to think a lot about. (Also, upon further research, I found this is essentially a developed version of Nick Bostrom's 2002 proposal for "differential technological development").

Three things seem clear from developments in AI policy in the past couple of months

  1. The Overton window has shifted on AI safety: the topic has received significant attention from the public and the policymakers
  2. We have not solved technical AI alignment, and no clear strategy has emerged yet.
  3. Even though are efforts to call for an indefinite pause on AGI development, most policymakers/labs are (and should be) keen to reap the benefits from strong AI systems. 

From speaking within policy circles, the general attitude has moved from putting AI on the agenda to deciding how to best respond (“AI solutionism”).

In this post, I consider a governance approach those interacting with policymakers may wish to consider: whether we should call for differential acceleration of AI development. 

I define differential acceleration to mean: a strategic approach in technology development where resources are concentrated towards promoting safer or more promising technological paradigms, while simultaneously restricting or limiting the advancement of others. It involves the selective acceleration of preferred technologies, guided by factors such as safety, societal preferences, and policy decisions, rather than purely technological superiority.

We can see this condition holds for a selection of historical examples of technological development, where one can observe a similar pattern where a technology initially perceived as less competitive gets chosen and replaces an existing, dominant one due to policy decisions or regulatory constraints. In the domains of energy, transportation, and manufacturing, technologies like unleaded gasoline, Hydrofluorocarbons (HFCs) in refrigeration, and synthetic colourings in food production were chosen, despite their initial inferiority, due to health and environmental concerns related to their predecessors.

Such transitions usually involve policy restrictions on the prevailing technology, setting the stage for alternatives to rise. Case in point, the replacement of ozone-depleting Freon with Hydrofluorocarbons in air conditioning, or the transition from leaded to unleaded gasoline for health reasons. Both shifts were driven by policy changes that inhibited the further development of the existing technology.

I outline most policy interventions of differential acceleration as following this path:

  1. Existence of an Alternative: A viable, although initially less competitive, an alternative to the dominant technology must exist. The alternative doesn't need to outperform the incumbent technology at the time of its introduction; it just needs to offer a different set of advantages or overcome specific drawbacks of the incumbent technology. This technology needs to be theoretically feasible, with a clear goal and identifiable end result which should enable rapid research progress
  2. Policy or Regulatory Intervention: There needs to be a strong policy or regulatory impetus for adopting the new technology.
  3. Restriction of Incumbent Technology: The development of the incumbent technology is restricted or halted. This could be due to policy bans, regulations limiting its use, or significant public opposition.
  4. Subsequent Development of the Alternative: After the adoption of the alternative, further development and refinement occur. The initially less competitive technology evolves and improves over time, potentially reaching or surpassing the performance of the original dominant technology.
  5. Adoption Driven by Policy, not Technological Superiority: The switch to the new technology isn't driven by its superior performance or efficiency but by external factors such as policy decisions and societal preferences. The key catalyst for change lies in these external pressures rather than the intrinsic merits of the technology itself.

Differential Acceleration in AI

The current paradigm adopted by OpenAI’s frontier model (GPT-4) is a black-box (’magical’) Large Language Model enhanced with RLHF. Anthropic Claude is trained with “Constitutional AI”, simplistically described as RLHF conducted by the AI system itself rather than humans. A consequence of this is that we are not clear about where these capabilities arise from (they are, crudely, large matrix transformers), nor even the capabilities of such models.

To illustrate what this might look like in AI: I’ll focus on Cognitive Emulations, a safety-oriented approach to artificial intelligence development that aims to create AI systems emulating human thought processes, with capabilities bounded to human-like regimes. 

When I speak about how much we do not understand what these models are capable of to policymakers, most of them seem to have an (implicit) internal understanding of “black-box” that is best analogised by “black box” which is the brain. Their incorrect internal model might be represented as follows:

Yes, I don’t, as a non-technical person, understand what is going on in any part of this black box but, if I studied, I could approximately tell you what each of the parts do, and, certainly, the world’s leading experts on this (AI/neural) system could tell you significantly more. Those (Machine Learning/Neurology) experts could definitely give you a range of what the system is capable of.

For AI, however, the best ‘interpretability’ research is still being done on GPT-2-like models - ones that are orders of magnitude worse than the state-of-the-art on many benchmarks. Model Evaluations, testing different models to see if you can prompt them to exhibit dangerous capabilities, think deception, power-seeking and self-replication, is, at the moment, best used as evidence of what a model can do, rather than what it cannot. Some are concerned that such efforts are ‘safety-washing.’

Conveniently, this internal model is pretty similar to Cognitive Emulations, which attempt to minimize uninterpretable "magic" and prioritizes predictable, safe, and understandable behaviours. This means that such an approach may be more understandable to the public/policymakers (”we want the AIs to be more human-like rather than magical”) and therefore increase its political viability.

Assuming, then, that you have some alignment/governance approach that you were excited about, as I am for Cognitive Emulations. What could a national/international agency do about it?

Uncertainties

  1. Effectiveness of the Strategy: The feasibility and potential efficacy of differentially accelerating AI are still uncertain. While in theory, focusing resources on more desirable paradigms may lead to safer developments in AI, the specifics of its execution and whether it will yield the desired results is not a guarantee. Also, in general, policy approaches are difficult and high-variance. I anticipate that this is the most likely way that this approach would fail, however, as noted earlier, there does seem to be an appetite among policymakers and the public for solutions. 
  2. Failure of Selected Paradigms: The possibility exists that the chosen safer paradigm, like Cognitive Emulations, could fail to deliver on its promise or may not be as convincing as initially thought. Such a failure could lead to a loss of resources and credibility for the differential acceleration approach. In the case of Cognitive Emulations, I believe that it seems both possible in principle and with unlimited resources (and current technology). Given the starting assumption of a huge increase in resources (compute, talent etc.), this might be sufficient.
  3. Risk of Unaligned Intelligence Development: The development of certain AI paradigms, such as cognitive emulation technology, could potentially facilitate the creation of unaligned intelligence if not deployed perfectly. However, the hard reality is that any path towards creating an aligned intelligence inherently carries the risk of unaligned intelligence, and that is the default that we will head towards in any case. 
  4. Capabilities Acceleration: Differential Acceleration may just lead to more capability breakthroughs without significantly helping safety. This is why it is important 

Operationalising Differential Acceleration 

Some steps you could consider pushing for the adoption for in national/international agencies:

  1. Direct large-scale funding towards specific research and development agendas related to the technology. Consider the establishment of grants and fellowships for researchers in the field, targeted subsidies for companies working on the technology, or even the creation of public research institutions dedicated to the technology.
  2. Encourage a neutral or coordinated effort to develop the technology, possibly with a halt on certain risky or frontier developments in the field, eg. through a pause. A neutral compute cluster, for example. This could involve creating international research consortia or agreements to work together on the technology, setting common standards and sharing research findings, or even temporary moratoriums on certain lines of research that are seen as risky.
  3. Offer significant non-profit funding or other financial incentives for organizations that are pioneering safer or more promising approaches. The NSF has already announced $20 Million in NSF Grants for Safety Research. This could be done through tax incentives for donations to non-profits working on the technology, or the creation of public foundations or funds that provide grants to these organizations.
  4. Direct technical time and talent from the private sector towards organizations or projects that are focused on these alternatives. This could involve public-private partnerships, where government agencies work closely with private-sector companies on projects of mutual interest. It could also involve the government providing scholarships or incentives for students to study relevant fields and work in the sector.
  5. Implement stronger regulatory measures to guide the development and use of the technology, possibly tailored to promote safer or more desirable forms. Model Evaluations, in this model, would be used as a way to provide evidence for banning the pursuit of certain AI paradigms, rather than verifying their safety.

Concluding, this all links back to the familiar debate of whether technological progress is linear, or a tree that we can shape. The forces of competitive pressures and race dynamics are strong, however, we must remember that these are political problems. Given that alignment seems very hard and a complete AI pause looks unlikely, a valuable approach is considering political solutions such as differential acceleration (and the combined slowing down of other approaches) that exist to buy us more time. Much of my thinking today is shaped by the belief that we find ourselves equipped with an unprecedented level of public awareness, political influence, and potentially, considerable financial resources. We should ensure that we use it well.