Who Is Afraid of AGI?

February, 2023

AGI stands for Artificial General Intelligence.

It is theorized to be the level of AI at which an “algorithm” is as intelligent as a human. It is the “philosopher’s stone” of the modern techno-alchemist.

Many people, and sometimes smart ones, are afraid that once we invent an advanced enough AI, there is only one small step to inventing an AGI - an AI that is as “intelligent” (whatever that may mean) as humans. At that point, there is only one more step to a more-intelligent-than-humans algorithm. And then, this new algorithm will start improving itself… And next thing you know, the techno-apocalypse is upon us, because the super-AGI is now so much more intelligent than humans that it will just decide to dispose of our species or turn it into its pets.

Reportedly, Elon Musk can’t sleep at night because of his worries about AI and AGI.

My advice to those who are afraid of AI & AGI is to realize that they are, literally, afraid of mathematics. Indeed, current AI techniques are just advanced mathematics. There is a clear increase in complexity when one goes from linear regression, to logistic regression, to neural networks, to deep neural networks, and to large language models, but in the end, each one of these models is, admittedly, a very complicated mathematical function representing a statistical approximation derived from very large quantities of example data.
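To make the point concrete, here is a minimal sketch of what such a “model” looks like at the simple end of that spectrum: a logistic regression is nothing but a fixed formula applied to inputs. The weights below are hypothetical, standing in for values that would normally be fit on example data.

```python
import math

def logistic_model(x, weights, bias):
    """A logistic-regression 'model' is just a mathematical function:
    a weighted sum of the inputs passed through a sigmoid."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, as if learned from data (e.g. a toy spam score
# from two input features). The "intelligence" is all in this arithmetic.
prob = logistic_model([2.0, -1.0], weights=[0.8, 0.3], bias=-0.5)
print(round(prob, 2))  # → 0.69
```

A deep neural network is the same idea composed many times over, with far more weights; the difference from this sketch is one of scale, not of kind.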

So being afraid that a mathematical function (or a set of functions) will suddenly become more intelligent than humans is silly. Being afraid that these functions will take over the planet and drive humanity extinct is even sillier. Remember, by being afraid of AI you’re being afraid of mathematics. Yes, we’re all a bit afraid of complicated mathematics, but we should not exaggerate, and we definitely should not lose sleep over it.

I’ve made a visual summary of this idea:

P.S. Sure, if we put statistical functions that we don’t fully understand inside systems that can harm humans, then we should be afraid. But that’s stupid, and in that case the fear should not be of the AI component itself but of the unpredictability of the whole resulting system. My rule of thumb is this: if you can “undo” the AI’s action (spam classification, code completion), or if you don’t give a speck about mistakes (image search that returns a muffin in between dogs), then use AI as much as you can. However, don’t let an AI algorithm drive your car or shoot weapons. Driving over a child can’t be undone. And you do care about mistakes in weapon-related situations too.