mircea

Who’s Afraid of AGI?

or, Why you shouldn’t be afraid of mathematics

February, 2023

TL;DR version:

AGI stands for Artificial General Intelligence.

It is theorized to be the level of AI at which an “algorithm” exhibits general intelligence on par with a human. It is the “philosopher’s stone” of the modern techno-alchemist*.

Many people, some of them quite smart, are afraid that we are on the verge of inventing an AGI that will be more “intelligent” (whatever that may mean) than humans. Reportedly, Elon Musk can’t sleep at night because of his worries about AI and AGI.

But why be afraid of AGI?

Well, because that “more intelligent than humans” AGI will start improving itself and turn into a super-AGI so much more intelligent than all of us that it will be like a god compared to us. And then it might just decide to dispose of our species. Or keep us as pets. This techno-apocalypse, also called the “singularity”, is an idea popularized by Ray Kurzweil, and it seems not to let some people sleep at night. The fear of AI has flared up lately, probably because of the recent advances in AI, the hopes attached to them, and the journalistic and marketing amplification that followed.

Note that the field of AI is nowhere near any such level of generality; most algorithms are narrowly focused on a single domain: playing chess, playing Go, generating images, generating text, classifying images, and so on. (And it should stay that way. Algorithms are tools, and tools are there to enhance our capabilities, not to replace us.)

Moreover, most of those who are afraid of AI seem to forget that all current AI is just advanced applied mathematics. Even the recently impressive ChatGPT can be seen, in some sense, as a giant mathematical function that takes as input the last few thousand tokens of text and generates the next ones. You give it some input. It provides some output. You shouldn’t lose sleep over it. (Also, if you pay attention to the text it generates, you’ll realize it looks smarter than it is… In fact, the more impressed you are by it, the more likely it is that you’re not paying attention to the details; but that’s another story.)
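To make the “it’s just a function” point concrete, here is a deliberately toy sketch in Python. It is not how ChatGPT is actually implemented, and the vocabulary size, context length, and weights below are made up; it only shows the shape of the idea: a next-token model maps a window of token IDs to a probability distribution, and “generating text” is just calling that function in a loop.

```python
# Toy illustration: a "language model" as a plain mathematical function.
# (Made-up sizes; real models differ in scale, not in kind.)
import numpy as np

VOCAB_SIZE = 50   # hypothetical tiny vocabulary
CONTEXT = 8       # hypothetical tiny context window

rng = np.random.default_rng(0)
W = rng.normal(size=(CONTEXT, VOCAB_SIZE))  # stand-in for billions of learned weights

def next_token_distribution(token_ids):
    """Map the last CONTEXT token IDs to a probability distribution
    over the next token -- just arithmetic, nothing mystical."""
    x = np.zeros(CONTEXT)
    x[-len(token_ids):] = token_ids[-CONTEXT:]
    logits = x @ W                       # a linear map in this toy version
    exp = np.exp(logits - logits.max())  # softmax
    return exp / exp.sum()

def generate(token_ids, n_steps=5):
    """'Generating text' = repeatedly sampling from the function's output."""
    out = list(token_ids)
    for _ in range(n_steps):
        p = next_token_distribution(out)
        out.append(int(rng.choice(VOCAB_SIZE, p=p)))
    return out

print(generate([3, 14, 15, 9, 2]))
```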

Certainly, there is a clear increase in complexity as one goes from linear regression, to logistic regression, to neural networks, to deep neural networks, and on to large language models, but in the end each of these models is nothing more than a very complicated mathematical function, fitted as a statistical approximation to very large quantities of example data. A stochastic mathematical function that you don’t understand is still a… mathematical function.
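And at the simple end of that ladder, the “training” step is nothing more exotic than curve fitting. A minimal sketch, with made-up toy data:

```python
# "Learning" a model = fitting the parameters of a function to example data.
# (Toy numbers; bigger models fit bigger functions to more data.)
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)           # example inputs
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)  # noisy example outputs

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit: statistics, not magic

def model(new_x):
    """The 'trained model' is literally a mathematical function."""
    return slope * new_x + intercept

print(model(4.2))
```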

To be afraid of AI, then, is literally to be afraid of mathematical functions. Yes, it’s true, we’re all a bit afraid of mathematics, but that’s because we’re too lazy to put in the time to understand it, not because mathematics is dangerous in itself.

In fact, until recently, those who were afraid of mathematics would rather not mention it. Expressing your fears about it in public is a rather new activity, and trying to stop everybody else from doing mathematics for six months is definitely a silly idea.

Thus, don’t lose sleep over fears of an AGI apocalypse: God is not a mathematical function computed in Silicon Valley.

And instead of proposing that people stop doing research in AI, I think we’d all be better served by a ban on people who do not work in AI (e.g., journalists, marketers, pundits, Elon Musk): for six months they should not be allowed to worry in public about the techno-apocalypse. It’d be a better, quieter, more rational world.

* I’m not the only one who sees the parallel with alchemy: Rodney Brooks, the robotics pioneer, says in a tweet: “My take on LLM/CHATters. We’re going to transmute lead to gold! Why now? We just pumped massive energy in and got Pb to turn to liquid. First time ever. It logically follows that soon we will have AgI(silver incalculable) and even AuI. We (my in group) are going to be RIIICCCHHH!” (source)
