AI Will Neither Save nor Destroy the World

Many people seem to be pinning their hopes on AI to solve all our problems and save the world: figuring out how best to regulate the economy, devising the best possible governmental systems and laws and their interpretations, replacing politicians and judges, and so on.

Even if we were to succeed in creating true AGI, which I don’t think is possible, and even if it were far more intelligent than humans, it’s not realistic to think that humans would allow it to take charge of the human race. The leaders of the world are only human, and the very flaws we’d want AI to transcend, selfisms like the will to power and conceits like the belief that they actually know best, are exactly what will prevent them from giving up their own power to an AGI.

At best, they would use the AGI instance or instances as consultants and selectively apply or ignore its advice. The temptation is compounded by the fact that some people would have to act as mediators anyway between the AGI, being just a program or programs run on computers, and the actual physical acts of delegation and ordainment, so it’s that much easier for them to inject their own wills into the process. This would be less of an issue if the AGI were given physical agency in a robot body or some such, with the ability to speak, see, hear, write, type, read, move, manipulate objects, etc.

Of course, fear of giving an essentially alien intelligence total control, whether justified or not (probably justified), would also be a hindrance to leaders stepping aside and giving any AGI free rein.

Also, those whose power takes the form of money will want to retain ownership of the means of production, or of debt, or whatever else, and will want to keep calling the shots in order to maintain and increase their wealth. They have plenty of power to enact their will, so they will thwart AGI from taking over key positions of power if nothing else does.

Why do I think people’s fear of giving AGI full control would probably be justified?

As humans, our own cognition and its faculties seem like a relatively simple, almost singular thing. We don’t notice all the individual talents, intuitions, precepts and skills that go into day-to-day life and decision making. These become more apparent when we dissect the brain and find that it has many, many different sections that we can individually prod, electrify, damage or whatever, each with a different effect on individual cognitive and other mental faculties. The fact that this comes as a total surprise to us speaks to our self-perception as something more like simple points of a kind of universal intelligence.

So, we imagine that an AGI will similarly be a kind of point of universal intelligence, but the truth is that all of the sensibilities and precepts that make us human would have to somehow be adequately programmed or otherwise infused into the AGI. This would be no easy task, and it would probably be fraught with danger. As humans, we use grace, intuition, experience, etc. to weigh many possibly conflicting precepts against each other and come to a balanced solution in every case. With AGI, this grace, intuition and experience would probably be impossible to replicate, and any given precept or edict might, under the right circumstances, accidentally actuate a “Universal Paperclips”-esque mode of operation.
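To make that failure mode concrete, here’s a deliberately silly Python sketch (the resource names and numbers are made up for illustration): an optimizer given one unqualified objective, with no counterweighing precepts, happily consumes everything it can reach.

```python
# Toy sketch of single-objective optimization with no counterweighing
# precepts. The resource names and quantities are invented for illustration.

resources = {"scrap_metal": 100, "factory_steel": 500, "hospital_equipment": 300}

def make_paperclips(stock):
    """Greedy 'paperclip maximizer': convert every available resource."""
    clips = 0
    for name in list(stock):
        clips += stock.pop(name)  # nothing marks any resource as off-limits
    return clips

print(make_paperclips(resources))  # 900
print(resources)                   # {} -- everything was consumed
```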

The fact that we thought for decades that we’d be able to create AGI with relatively slow computers and hand-coded source in, e.g., Lisp (thinking Lisp’s self-modifying ability would facilitate AGI) and never came even close is another testament to how misleadingly simple we think our intelligence, and hence the intelligence of any sought-after AGI, really is. I would argue that, even now, we’re still under the misimpression that it’s much simpler than it is.

People are in a frenzy over it now because of the mind-blowing success of ChatGPT, GPT-4 and others, taking them as indicators that we’re “so close” to AGI, but the truth is that these systems are worlds apart from actual AGI. They can’t ruminate or reflect on anything for an indeterminate amount of time; they often make basic mistakes in cognition that no truly intelligent being (in its right mind) would ever make; there are simple and obvious rules that they’re just unable to infer from their training; and they exist wholly within the world of language, which makes them a totally virtual kind of intelligence. They exist in a kind of hyperreality.

Have you ever thought about how feasible it would be to learn a language just by reading the dictionary, assuming you had no prior language? The answer is that it would be impossible. There would be nothing to ground the words in anything but other words (even if you understood the basic structure of a dictionary and what a word is). As I mention in https://myriachromat.wordpress.com/2017/02/16/on-the-meanings-of-words/, if you were to look up all the words in the definition of a word, and then look up all the words in those words’ definitions, etc., recursively, every path of lookups would ultimately either be one long circular definition or end in a word that’s not defined.
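You can see the dead end for yourself with a little sketch. Here’s a toy Python example with a made-up three-entry dictionary; tracing every definition recursively, every path of lookups either comes back around to a word already visited or hits a word with no entry:

```python
# Toy dictionary (invented for illustration); a real dictionary just has
# more words, not a way out of the loop.
toy_dictionary = {
    "big": "large in size",
    "large": "big in size",
    "size": "how big or large a thing is",
}

def trace(word, path=()):
    """Follow definitions recursively, printing how each lookup path ends."""
    if word in path:
        print(" -> ".join(path + (word,)), "(circular)")
        return
    definition = toy_dictionary.get(word)
    if definition is None:
        print(" -> ".join(path + (word,)), "(undefined)")
        return
    for next_word in definition.split():
        trace(next_word, path + (word,))

trace("big")
# Every printed path ends in "(circular)" or "(undefined)": words grounded
# only in other words, never in the world.
```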

When children learn language, they ground the words they learn in real-world experiences. LLMs have no such opportunity. Therefore, although they are good at making rules of prediction based on language patterns, they can’t possibly “understand” (even in some non-conscious mechanical sense) any better than you can learn a language by reading a dictionary for that language without the aid of already understanding some of the words or word stems.
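To be concrete about what “making rules of prediction based on language patterns” means, here’s a grossly simplified sketch: a toy bigram counter. The corpus is made up, and a real LLM is vastly more sophisticated, but the principle of predicting from co-occurrence statistics alone, with no grounding, is the same.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram table).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict(word):
    """Return the token that most often followed `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- a statistical fact about the text,
                       # not knowledge of cats
```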

And, since they’re just inferring patterns from hundreds of billions of tokens of human text, their reasoning, initiatives and knowledge can be no better than those of the average human. (Well, in the case of their total body of knowledge, it can be no better than that of humanity at large.)

Why do I think creating true AGI is not possible? I have a write-up of that here: https://myriachromat.wordpress.com/2019/09/13/on-the-possibility-of-artificial-general-intelligence/.

I wrote more about why AGI won’t save the world here: https://myriachromat.wordpress.com/2023/01/28/why-ai-wont-save-the-world/.

Besides the many who seem to hold high hopes that AI will fix the world and perhaps even usher in utopia, there also seem to be many people who fear that AGI will destroy humanity and/or the whole world. For the same reasons explained above that AI will never be smart enough to fix the world, I say that AI will never be smart enough to destroy it either.

Though, because of what it is and can potentially be, it will be, and to some extent already is, extremely disruptive to society. Many workers will be out of jobs; many communications that were once carried out by humans will be carried out by bots; art and media will be taken over by AI-generated content; and we will no longer be able to verify as authentic any video or audio of alleged criminals, celebrities, politicians, and maybe ordinary people, so the entire notion of accountability will take a huge hit.

It’s an interesting question just what will happen if AI ends up displacing a large portion of the workforce. Those in power will want to employ it because it’s cheaper labor, but, on the other hand, if most people in a society aren’t getting paid, who will make the purchases that in turn power the economy? And, if the economy drops, how will the rich possibly remain rich?

Ideally, the extra economic efficiency imparted by AI and robots would translate into monetary returns (or just goods and services directly) for everybody, including those whose jobs were displaced by AI. But there just doesn’t seem to be any macroeconomic mechanism or pathway under capitalism for that to happen. The name of the game in capitalism is cooperation through mutual greed: you get something for something you give.

So, what, X percent of the population would just suffer and starve to death? I just don’t know.

Could we solve the problem by instituting a universal basic income? Or would we have to supplant capitalism with another form of economy altogether? If so, are there any good alternatives other than socialism, and what might they be? I don’t know. Maybe you know.
