I liked this article by Eric Drexler (“Framework for a Hypercapable World”) about AI as a resource, not an object.
In “The Strategic Calculus”, he proposes that the immense growth of resources post-AGI favors cooperative dynamics over competitive ones. But to make sure the dynamics actually turn out cooperative, we should lean on the naturally defense-dominant side of technology: for example, using formal methods to develop secure software, and “verification that others aren’t poised to strike”.
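To make the “formal methods” part concrete, here’s a toy sketch in Lean 4 (my own illustration, not something from Drexler’s article): a small function together with a machine-checked proof that its result is always in bounds, which is the kind of guarantee that rules out a whole class of exploitable memory bugs.

```lean
-- Toy "formal methods" example (hypothetical, for illustration only):
-- clamp an index into the valid range [0, n) of a buffer of size n.
def clampIndex (i n : Nat) : Nat :=
  if i < n then i else n - 1

-- Machine-checked proof: for any nonempty buffer, the clamped index is
-- always strictly less than n, so an access through it can never overflow.
theorem clampIndex_lt (i n : Nat) (h : 0 < n) : clampIndex i n < n := by
  unfold clampIndex
  split <;> omega
```

The point isn’t this particular function, but that the safety property is proven once and checked by the compiler, rather than hoped for and tested after the fact.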
How do we do this verification? The article doesn’t say. I’m not sure datacenter verification (checking that others are training AIs of the size they claim, and not bigger) is the answer, mainly because in a world of “AI as a resource”, training more or bigger AIs isn’t in itself what makes things dangerous.
What, then? That AIs aren’t being used to develop bioweapons? That nuclear missiles aren’t about to launch? If you have any ideas, I’d love to read your thoughts.