Why AI Won’t Save the World

In an earlier post, I explained why artificial general intelligence is a logical impossibility. Now I’ll explain why, even if it does turn out to be possible and we develop it, it won’t be the world-saver that many hope it will be.

With the invention of ever-more-impressive (and complex) ML models such as DALL-E 2, Midjourney, Stable Diffusion, GPT-3, and ChatGPT, not to mention the ever-more-powerful GPUs to run them on and other feats like chess engines that reliably beat grandmasters, it seems to many that we’re on the brink of developing actual AGI: a digital general intelligence that’s smarter even than humans. People also expect that, as a result of this development, the world will change drastically, either for better or worse. Opinion seems to be split on whether AGI will totally annihilate humanity or make this world into a utopia, humans included.

In my opinion, comparing any of our current ML models to true human-like intelligence, and taking them as an indication that we’re very close to it, is essentially a category error.

Most of the functions we’re applying AI to are very, very specific in comparison to the generality of human, living intelligence. Because they’re tasks that our own general intelligence tends to be proficient at (if, in some cases, to a lesser degree than the AI), we make the mistake of associating the AI’s proficiency at those specific tasks with a kind of general intelligence, when in actuality the specificity of the task at hand and the generality of actual intelligence are worlds apart. The proficiency of a chess engine at playing chess, for example, is evidence of nothing other than the ability of computers to play chess per se.

ChatGPT, a popular new chatbot by OpenAI, is admittedly impressive, even surprisingly so, and it appears able to interact, provide information, and create text in a very general way, but it does all of this strictly via text. It’s not clear to me that a similar technology could navigate and manipulate the actual physical world in any non-trivial sense. The world of text is a very limited subset of the real world, a sort of virtual reality. We merely use text as a convenient tool; ChatGPT lives in text. There’s a huge difference there.

And, while ChatGPT is able to “remember” past inputs and apparently hold up its end of a conversation, it can’t actually learn new things, as all of its knowledge was fixed when the model was created. The ability to learn new things is an essential part of actual intelligence. (I put “remember” in quotes because, like “understanding,” “intelligence,” “mind,” etc., I think the real thing necessarily requires consciousness, and to apply such terms to AI is to fundamentally misunderstand their nature and/or the nature of AI. I say a little more in support of this view in the above-mentioned essay.)
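To make the “remembering” point concrete: in a typical chat setup, the model’s parameters are frozen after training, and the apparent memory is nothing more than the transcript of the conversation being re-fed as input on every turn. The sketch below is purely illustrative; generate() is a hypothetical stand-in for the frozen model, not OpenAI’s actual API.

```python
# Purely illustrative sketch: how a chatbot can "remember" a conversation
# without learning anything. The model's parameters never change after
# training; the apparent memory is just the growing transcript that gets
# re-fed as input on every turn.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a frozen, pretrained language model.
    # Its parameters were fixed at training time and are never updated here.
    return "(model's reply, predicted from the prompt alone)"

def chat() -> None:
    transcript = ""  # the only "memory" the system has
    while True:
        user_msg = input("You: ")
        transcript += f"User: {user_msg}\nAssistant: "
        reply = generate(transcript)  # nothing is learned; only the input grows
        transcript += reply + "\n"
        print("Bot:", reply)

if __name__ == "__main__":
    chat()
```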

The reason ChatGPT is able to appear as though it has general intelligence, apart from some very clever engineering of its mode of assimilation, is that it’s assimilated some 300 billion tokens of text from the internet, text created by human intelligence. It was essentially trained to find patterns in this information, and it blindly responds and creates new text based on those patterns. Real minds achieve actual general intelligence from far less input.
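As a toy illustration of what “finding patterns and blindly generating from them” means, the sketch below learns which word tends to follow which in a tiny corpus, then generates text by sampling from those counts. Real language models use enormous neural networks rather than count tables, but the underlying principle of predicting the next token from statistics of the training text is the same.

```python
# Toy bigram model: learn word-pair statistics from text, then generate
# by repeatedly sampling a likely next word. A drastically simplified
# stand-in for what large language models do at scale with neural
# networks rather than count tables.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# "Generation": extend a prompt by sampling from the learned counts.
def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```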

Neither ChatGPT nor any other AI program actually understands anything you tell it or anything it says; it’s all just sequences of individual, blind CPU or GPU instructions. This shows in the fact that AI chatbots are known to make small yet glaring errors in logic, reasoning, and consistency that a human never would. In the earlier post mentioned above, I argue that actual understanding requires consciousness and that AI can’t be conscious. Without actual understanding, AI can only go so far.

But let’s get to the point: what if AGI actually is possible and does get developed? People seem to think that, if we can develop a superintelligence smarter than humans, it will solve all our problems, presumably through legislation and/or the development of new technologies. But I’d argue that our problems are not primarily due to a lack of intelligence or technology.

Our problems are primarily due to greed, selfishness, and evil; lack of self-awareness and self-criticism; lack of fairness; lack of integrity; living in small, blissfully ignorant, convenient bubbles; lack of compassion and caring for others; dysfunctional upbringing of children; an imbalanced, left-brained zeitgeist; the patriarchy; political and economic systems in which the most sociopathic and unscrupulous personalities naturally end up with the most power; and so on.

Perhaps an AGI could solve some of these problems, if we even choose to listen to its radical, subversive suggestions, but the more essential problems, like cultural momentum and selfishness, can’t be solved by AI. What would it do, give every individual hundreds of micromanaging instructions per day that probably contradict their belief systems and values?

And as for AGI developing more sophisticated technology: contrary to popular belief, technology will not solve our problems. We’re always salivating over the next technological thing, as if it will make us happy, but the reason we do this so much is that we’re not happy now, and the reason we’re not happy now is, in large part, technology itself.

Technology is seductive because it’s enabling, ever-increasing, and necessarily leads somewhere, but we need the wisdom to use what enables us well, or what enables us will be our downfall. As it is, technology (like the spice) gives with one hand and takes with all of its others. In the old days of the US, when the Native Americans still had their own, naturalistic culture, Europeans who spent much time among them often found life there much happier and didn’t want to go back to technological civilization.

According to The De-Textbook: The Stuff You Didn’t Know About the Stuff You Thought You Knew by Cracked.com, “According to Loewen, ‘Europeans were always trying to stop the outflow. Hernando De Soto had to post guards to keep his men and women from defecting to Native societies.’ Pilgrims were so scared of Indian influence that they outlawed the wearing of long hair,” and “Ben Franklin noted that, ‘No European who has tasted Savage Life can afterwards bear to live in our societies.’”

I wrote more about why AI will neither save nor destroy the world here: https://myriachromat.wordpress.com/2023/05/02/ai-will-neither-save-nor-destroy-the-world/.
