What Does AI Mean for Geopolitics?

Technology has connected the world through media like Facebook, but one crucial divide remains.

I think we need something like a Manhattan Project on the topic of artificial intelligence, not to build it because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then, we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we’re in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.

Sam Harris, TED Talk (2016)

What does the development of artificial intelligence mean for the future of geopolitics? The more AI is developed, and the more that national systems, military and economic alike, come to depend on it, the more international relations could be complicated, as nations will be employing an increasingly powerful tool to advance their military and economic interests.

Earlier in that same TED Talk, Harris states that we are looking at a window of about fifty years before AI reaches the level described above. He also mentions that we will need to be able to handle the political consequences of such AI. I suspect he was mainly thinking about the effect that such AI could have on our democratic societies; it certainly has the potential to disrupt or to enhance them. The primitive AI that governs Facebook’s news feed already shows that it can play a decisive role in influencing human political cognition.

This, however, speaks of the political consequences purely in a domestic sense. We must consider the diplomatic sense as well. I think Harris was correct in describing the first action on our “To Do” list as something akin to a Manhattan Project. The invention of the nuclear bomb had a tremendous impact. That impact was not in the hundreds of thousands of people it killed in Hiroshima and Nagasaki (as enormous as that loss was). The impact, rather, was in how it transformed global diplomacy.

The principle of Mutually Assured Destruction, whereby an aggressive nuclear power would be punished in equal measure by a defending nuclear power, brought an era of relative peace among nations. In the last half of the twentieth century, the vast majority of armed conflicts were civil wars, not wars between sovereign states; no country would use such a weapon on its own territory, after all. This brought international stability, but it also preserved many of the more odious regimes on the planet. Half of Europe was under the boot of Soviet tyranny until 1991, and Russia and China still stand as repressive states. The pursuit of nuclear weapons by lesser powers such as Iran and North Korea has frequently forced the hand of the United States to act.

I assert that AI has an equal ability to alter the trajectory of geopolitics. The bomb did this by way of its raw, destructive power. In the grand family of human invention, nuclear weapons represent the maximum of brawn. It would appear that AI will prove similarly significant in geopolitics by representing the maximum of brain. Rather than destroying per se, AI draws its power from its ability to do everything that the bomb cannot, meaning that nothing is off the table.

The potential geopolitical effect of AI is actually far greater than anything that could occur domestically. Even if a country such as the United States correctly accounts for every internal factor so that its AI coexists with its free markets and democracy, it has no guarantee that AI from rivals such as Russia or China will be friendly to its systems, and those countries have the same concern about the friendliness of American AI to theirs.

One of the reasons that nuclear weapons have allowed for stability is that they are static weapons. Once built, there is little they do except wait to be deployed by human actors. AI would be our first truly dynamic weapon, and as other nations develop AI in parallel, these would be weapons able to interact and come into conflict with one another. We have often wondered what the human cost would be should AI ever come to compete with its human creators. I do not know if we have ever considered the human cost if different AIs, each representing the interests of its human creators, came to compete against one another.

This is a situation with many moving pieces. There is already enough uncertainty both in what a single, self-improving AI may be able to do and may choose to do, and in what branching chain of decisions any given combination of parameters may produce. Now broaden that scenario to include multiple AIs, each with different parameters and different national interests behind them. This reads like a tragedy in the making. Even if these competing AIs avoid directly destructive behavior, the ways they might try to subdue other nations in this contest, by influencing food production, water supplies, power grids, transportation, elections, and so on, allow for a wide range of terrible possibilities.

As part of this discussion of AI, we need to talk about this future in particular and what we can do to prevent it. The best way to prevent it, as far as I can see, is for humanity to form a single, global state. Only then could we thoroughly remove the factor of international competition and the dangers it would provoke. If Harris is to be believed, we have only about fifty years to figure this out, and with a developed world increasingly ready to reject efforts at proto-integration such as NAFTA and the European Union, the task looks to be a difficult one.

Ultimately, I share Harris’s view that a developed AI would be largely indistinguishable from our general conception of a god in its cognitive ability. I also agree that we must do everything we can to make sure it is a god with which we can coexist. I would simply expand that concern to account for multiple gods, who may compete with one another. Rather than allow the next chapter in human history to be an unhappy hybrid of science fiction and Homeric lore, in which a cybernetic pantheon dooms our planet, I think we will find more predictability and security in a more monotheistic result, one that will only occur if we take serious steps toward peaceful unity.
