Tag: AGI

AI Will Neither Save nor Destroy the World

Many people seem to be putting their hopes in AI to solve all our problems and save the world, by means such as figuring out how to best regulate the economy; figuring out the best possible governmental systems, laws and their interpretations; replacing politicians and judges; etc.

Even if we were to succeed in creating true AGI, which I don’t think is possible, and even if it were far more intelligent than humans, it’s not realistic to think that humans would allow it to take charge of the human race. The leaders of the world are only human, and the very flaws we want to transcend, the selfisms (like the will to power) and conceits (that they actually know best), are what will prevent them from giving up their own power to the AGI.

At best, they would use the AGI instance or instances as consultants and selectively apply or ignore their advice. The problem is compounded by the fact that some people would have to act as mediators anyway between the AGI, being just a program or programs run on computers, and the actual physical acts of delegation and ordainment, so it’s that much more tempting for people to insert their own wills into the process. Though this would be less of an issue if the AGI were given physical agency in a robot body or some such, with the ability to speak, see, hear, write, type, read, move, manipulate objects, etc.

Of course, fear of giving an essentially alien intelligence total control, whether justified or not (probably justified), would also be a hindrance to leaders stepping aside and giving any AGI full rein.

Also, those with all the power in the form of money will want to retain ownership of the means of production, or of debt or whatever, and will want to keep calling the shots in order to maintain and increase their wealth. They have plenty of power to enact their will, so they will certainly thwart AGI from taking over key positions of power if nothing else does.

Why do I think people’s fear of giving AGI full control would probably be justified?

As humans, we experience our own cognition and its faculties as a relatively simple, almost singular thing. We don’t notice all the individual talents, intuitions, precepts and skills that go into day-to-day life and decision making. These become more apparent when we dissect the brain and find that it has many, many different sections that we can individually prod, electrify, damage or whatever, each with a different consequence for individual cognitive and other mental faculties. The fact that this comes as a total surprise to us speaks to our self-perception as something more like simple points of a kind of universal intelligence.

So, we imagine that an AGI will similarly be a kind of point of universal intelligence, but the truth is that all of the sensibilities and precepts that make us human would have to somehow be adequately programmed or otherwise infused into the AGI. This would be no easy task, and it would probably be fraught with danger. As humans, we use grace, intuition, experience, etc. to weigh many possibly conflicting precepts against each other to come to a balanced solution in every case. With AGI, this grace, intuition, experience, etc. would probably be impossible to replicate, and any given precept or edict might, under the right circumstances, accidentally actuate a “Universal Paperclips”-esque mode of operation.

The fact that we thought for decades that we’d be able to create AGI with relatively slow computers and hand-coded source in, e.g., Lisp (thinking Lisp’s self-modifying ability would facilitate AGI) and never came even close is another testament to how misleadingly simple we think our intelligence, and hence the intelligence of a sought-after AGI, really is. I would argue that, even now, we’re still under the misimpression that it’s much simpler than it is.

People are in a frenzy over it now because of the mind-blowing success of ChatGPT, GPT-4 and others, thinking they’re indicators that we’re “so close” to AGI, but the truth is that these things are worlds apart from being actual AGI. They can’t ruminate or reflect for an indeterminate amount of time on anything; they often make basic mistakes in cognition that no truly intelligent being (in its right mind) would ever make; there are simple and obvious rules that they’re just unable to infer in their training; and they exist wholly within the world of language, which means they’re a totally virtual kind of intelligence. They exist in a kind of hyperreality.

Have you ever thought about how feasible it would be to learn a language just by reading the dictionary? Assuming you had no prior language. The answer is that it would be impossible. There would be nothing to ground the words in anything but other words (even if you understood the basic structure of a dictionary and what a word is). As I mention in https://myriachromat.wordpress.com/2017/02/16/on-the-meanings-of-words/, if you were to look up all the words in the definition of a word, and then look up all the words in those words’ definitions etc. recursively, every path of lookups would ultimately either be one long circular definition or would end in a word that’s not defined.
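
To make the recursion concrete, here is a minimal sketch of that lookup process (the toy dictionary, the word choices, and the function name are my own illustration, not from the linked post): every chain of definitions either circles back on itself or dead-ends at a word with no entry.

```python
# Toy illustration: chase every word in a definition down into its own
# definition. Every chain either loops back on itself (a circular definition)
# or dead-ends at a word that has no entry at all.

toy_dictionary = {
    "big":    "large in extent",
    "large":  "big in extent",          # big -> large -> big: circular
    "extent": "the size of a thing",
    "size":   "the extent of a thing",  # extent -> size -> extent: circular
    # "the", "in", "of", "a", "thing" have no entries: dead ends
}

def trace(word, path=()):
    if word in path:                          # we've been here before
        print(" -> ".join(path + (word,)) + "  [circular]")
        return
    definition = toy_dictionary.get(word)
    if definition is None:                    # nothing to ground the word in
        print(" -> ".join(path + (word,)) + "  [undefined]")
        return
    for next_word in definition.split():
        trace(next_word, path + (word,))

trace("big")    # every printed path ends in [circular] or [undefined]
```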

When children learn language, they ground the words they learn in real-world experiences. LLMs have no such opportunity. Therefore, although they are good at making rules of prediction based on language patterns, they can’t possibly “understand” (even in some non-conscious mechanical sense) any better than you can learn a language by reading a dictionary for that language without the aid of already understanding some of the words or word stems.

And, since they’re just inferring patterns based on hundreds of billions of tokens of human text, their reasoning, initiatives and knowledge can be no better than those of the average human. (Well, in the case of their total body of knowledge, it can be no better than that of humanity at large.)

Why do I think creating true AGI is not possible? I have a write-up of that here: https://myriachromat.wordpress.com/2019/09/13/on-the-possibility-of-artificial-general-intelligence/.

I wrote more about why AGI won’t save the world here: https://myriachromat.wordpress.com/2023/01/28/why-ai-wont-save-the-world/.

Besides the many who seem to hold high hopes that AI will fix the world and perhaps even usher in utopia, there also seem to be many people who fear that AGI will destroy humanity and/or the whole world. For the same reasons as explained above that AI will never be smart enough to fix the world, I say that AI will never be smart enough to destroy the world either.

Though, because of what it is and potentially can be, it will be, and to some extent already is, extremely disruptive to society. Many workers will be out of jobs; many communications that were once carried out by humans will be carried out by robots; art and media will be taken over by AI-generated content; and we will no longer be able to validate as authentic any video or audio we see and hear of alleged criminals, celebrities, politicians, and maybe ordinary humans, so the entire notion of accountability will take a huge hit.

It’s an interesting question just what will happen if AI ends up displacing a large portion of the workforce. Those in power will want to employ it because it’s cheaper labor, but, on the other hand, if most people in a society aren’t getting paid, who will make the purchases that in turn power the economy? And, if the economy drops, how will the rich possibly remain rich?

Ideally, the extra economic efficiency imparted by the actions of AI robots would translate into monetary returns (or just goods and services directly) for everybody, including those whose jobs were displaced by AI. But there just doesn’t seem to be any macro-economic mechanism/pathway under capitalism for that to happen. The name of the game in capitalism is cooperation through mutual greed: you get something for something you give.

So, what, X percent of the population would just suffer and starve to death? I just don’t know.

Could we solve the problem by instituting a universal basic income? Or would we have to supplant capitalism with another form of economy altogether? If so, are there any good alternatives other than socialism, and what might they be? I don’t know. Maybe you know.

On the Possibility of Artificial General Intelligence

Digital Vs. Analog; Virtual Vs. Physical

Most consideration of the possibility of AGI concerns its implementation on digital computers. This from the outset poses major inherent limitations. The logic of computer programming, even in the magical self-modifying Lisp, is still mainly about yesses and noes, ons and offs. Either do this, or don’t do this. A calculation here, a conditional loop there. Minds, on the other hand, aren’t constrained to binary paths. A computer language can be very dynamic, but it’s still just a hierarchy of discrete, modularized functions. This is essentially different from a mind; building intelligence with a programming language is therefore like building a living body with parts from Sears.

Of course, you could try building AGI by simulating a brain, and the above problem wouldn’t apply, at least not in the same way. But some fundamental limitations of computers would still apply. You have the limitation in how precisely you can simulate, for example, a voltage level and perform calculations on it. You have the fact that you’re reducing complete neuron cells with many organelles and trillions of atoms in each organelle to simple calculations. We may abstract an effective approximation to the workings of a neuron as a computable function, but it’s still only that: an approximation. An actual brain’s operation is the “laws” of physics. It is the universe’s operation itself, the brain being a physical thing. Any simulation thereof is once removed.

And that’s to say nothing of how much of physics we actually know and how much of what we know is possible to simulate. Consider the difference between a real live protein folding and the folding@home and rosetta@home projects. A single protein is microscopic and folds in nanoseconds. To simulate the same would take 30 years on a single PC. The time ratio is about 86 quadrillion to one, and that’s only for a single, microscopic piece of matter. And even that 30-year result isn’t 100% accurate; it’s just a likely result. In a simulation, the simulating machine is actually physical, but the mechanisms by which the thing is simulated are strictly limited; they’re virtual. For example, for the purposes of simulation in a binary computer, it’s all just ons and offs. If the electrical voltage input into a logic gate in the course of an operation happens to be +4V instead of +5V, it makes no difference. Details and slight variations in the operation of the physical system don’t have any effect on the algorithm being run on it at the functional level of consideration.
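
As a quick sanity check of those numbers (the ~11-nanosecond folding time below is my own assumption, chosen only to back out the stated figures, not a number taken from the folding@home project):

```python
# Rough arithmetic behind the "86 quadrillion to one" figure. The ~11 ns
# folding time is an assumption used to reproduce the stated ratio.
seconds_per_year = 60 * 60 * 24 * 365
simulation_time = 30 * seconds_per_year    # ~9.5e8 seconds of computation
real_fold_time = 11e-9                     # ~11 nanoseconds of physical time
print(f"{simulation_time / real_fold_time:.2e}")   # ~8.6e+16, i.e. ~86 quadrillion
```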

A simulation, no matter how precise, being virtual and once-removed over physics, is not necessarily isomorphic to the physical process being simulated, whose detail is unlimited and whose principles of operation are no less profound, deep and dynamic than the functioning of the universe itself.

We don’t fully know what physics is involved in producing intelligence, and we don’t even fully know the “laws” of physics themselves. Of particular importance, we don’t know everything about electricity. (Take, for example, ball lightning, or the Biefeld-Brown effect.) And we don’t know whether quantum-random behavior can have macroscopic effects within the brain. It has been postulated that the brain operates “on the edge of chaos,” which would mean that, as in any chaotic system, it’s dynamic enough that tiny causes regularly have major/macroscopic effects, which should include the supposedly absolutely random actions of quantum mechanics. Maybe these actions aren’t actually “absolutely random,” and maybe they’re in some way coordinated in the brain.

Quantum Mechanics and the Brain

Just writing off unpredictable quantum behavior as “absolutely random” is too easy (not to mention epistemologically/metaphysically troublesome). Our methods of prediction/creating models of prediction, and hence gaining any kind of insight into nature, have many constraints and presuppositions about them that could prevent us from finding meaning in quantum-random behavior. Such constraints and presuppositions include linearity in time, a forward direction of cause and effect, nearness in place and time (with exceptions where the nature of the cause-effect relationships over long distances and times is relatively simple, such as in a=GM/d² or Δt=Δd/c), measurability by existing physical instruments, exclusively mathematical and mechanistic relationships, categorical consistency on some level, functional/black-box-like separability between objects, and simple enough/easily intuited/general enough patterns of causal relationship. Some of those properties are necessary for it to be causality as such, and some aren’t. None of those things are necessary for the actions to be meaningful yet non-mechanistic.

So, quantum-mechanical randomness could have meaning, and its effects could be a factor in cognition; and if both of those are true, then we must first understand the fundamental meaning of quantum randomness in order to simulate, or recreate, cognition. Of course, if quantum “randomness” is meaningful yet non-mechanistic, then it’s impossible to simulate with computers, in which case simulating it is not an option but recreating it (by making something new that works similarly to a physical brain) may be.

Quantum-random actions could be where a soul, spirit, incorporeal mind, or otherwise non-physical intelligence comes into play in the arena of physical actions. Of course, people of a certain popular mindset would take issue with that idea. One person I mentioned this idea to characterized it as a “god of the gaps” perspective, as if quantum randomness is merely the last frontier in an arena of what will be an ultimate, overarching physical explanation of all of reality. But I look at it in a different way. There was never any reason to assume that science’s arm is all-reaching, or potentially so, in the first place.

When Newton discovered the laws of motion, gravity, etc., everyone was shocked by how much of reality we could predict or explain which had been hitherto and otherwise unexplainable, perhaps even magical. Then when mankind discovered atoms it was assumed that, all the way down to the level of atoms, things behaved completely predictably in the manner of “billiard-ball physics”. Thought about the world had gone from one extreme to the other, from a world full of mystery and magic to one assumed to be completely mechanical and predictable, if only given enough data as input.

And now that we’ve discovered quantum randomness, quantum-random behavior is presumed to be just the last remaining gap in an essentially all-pervasive paradigm of mechanistic predictability. But given that we never have been able to predict everything, it’s just a presumption. Things aren’t automatically accounted for until proven otherwise; they’re living and magical until proven accountable under a mechanistic, totalitarian paradigm. It’s no coincidence that a Gaussian probability distribution, which is what characterizes quantum-random actions, is just what you get when there is an untold number of influences at play.
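
To illustrate that last point numerically (this is just the central limit theorem; the snippet, its numbers, and its uniform “influences” are my own illustration and carry no claim about quantum mechanics itself):

```python
# Sum a large number of small, independent influences, each drawn from a
# decidedly non-Gaussian (uniform) distribution; the totals come out Gaussian.
import numpy as np

rng = np.random.default_rng(0)
influences = rng.uniform(-1, 1, size=(50_000, 1_000))   # 1,000 influences per sample
totals = influences.sum(axis=1)

counts, edges = np.histogram(totals, bins=15)
for count, left in zip(counts, edges):
    print(f"{left:7.1f} | {'#' * (60 * count // counts.max())}")   # bell curve
```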

So, if quantum randomness actually plays a role in cognition, then computational solutions, such as emulating the quantum randomness with a pseudo-random number generator, won’t adequately simulate the quantum-random influences in the brain. A quantum random number generator probably won’t be adequate either, because it’ll sample from one place many times, in a functionalized way, rather than from many places continuously in a way that’s inextricable from the mechanics of the neurons. The brain probably would have evolved to take advantage of whatever’s behind quantum-random events (such as a cosmic intelligence, for example) in a way that’s specific to the brain’s particular physical configuration.

Physical Brain-Analogues

Of course, you could create a machine unlike a digital computer, whose operation is actually much like the brain’s operation, and it might even exhibit intelligence, but in that case, it wouldn’t be artificial intelligence. It would exhibit intelligence for the same reason a real brain does. It wouldn’t be a simulation, and it would be conscious. Calling it artificial intelligence would be tantamount to calling a sheep we clone in the lab artificial.

Exactly how much a physical system can differ from a brain, and in what ways, while still facilitating consciousness is a fascinating question. Since the brain is the only example of recognizable conscious expression we have, it’s unknown how generalizable the underlying principles behind its facilitation of consciousness are and in what ways. Knowing the answer to this would lead to important insight into the nature of consciousness, and into developing truly intelligent “machines,” but that would be an ethical nightmare. We wouldn’t necessarily be able to know the quality of life of those machines, and even if it’s not terrible, they’d most likely be enslaved to do our bidding.

Consciousness Is Necessary for Intelligence

It seems inescapable that consciousness is an absolutely necessary element for any truly effective general intelligence. If you introspect and observe your own process while thinking, you’ll see that it’s inextricably tied with consciousness/awareness, and the role of consciousness/awareness can’t be understood and broken down into a straightforward series of steps/manipulations. You use your spark of awareness in a holistic/singular way to magically pull solutions and concepts out of thin air or to know how to connect various concepts synergistically into a novel solution or greater concept.

And consciousness is scientifically, rationally, and in every other way, really, unaccounted for, and it is inherently mysterious. Of course, physicalists believe that consciousness arises from brain processes and may assume that the progression of neuroscience will eventually reveal how this happens, but I give arguments for why this is infeasible in this essay: https://philosophy.inhahe.com/2018/04/13/notes-on-science-scientism-mysticism-religion-logic-physicalism-skepticism-etc/#Emergent, and there’s also the problem of the “philosophical zombie” thought experiment, which I didn’t include in that essay.

My take on the subject is that consciousness is magical in the truest sense of the word, is fundamentally non-mechanistic, and likely is intrinsically connected with, and hence draws from, the entire cosmos. Either way, the fact alone that consciousness can’t be emergent from brain processes implies that it’s non-physical and thus can’t be replicated by an algorithm, including artificial neural nets or any other simulations of brain matter.

A Neural Network Can’t Be Conscious

A software engineer and former acquaintance of mine, Tanasije Gjorgoski, wrote a reductio-ad-absurdum thought experiment showing why a neural network can’t be conscious. The argument is as follows (with minor modifications):

Let’s say that the system is composed of “digital” neurons, where each of them is connected to other neurons. Each of the neurons has inputs and outputs. The outputs of each neuron are connected to inputs of other neurons, or go outside of the neural network. Additionally, some neurons have inputs that come from outside of the neural network.

Let’s additionally suppose that this system is conscious for a certain amount of time (e.g., two minutes), so that we can perform a reductio ad absurdum later. We measure each neuron’s activity (the input and output signals of the neuron) for those two minutes in which the system is conscious. We store those inputs and outputs as functions of time. Once we have all of that, we have enough information to replay what was happening in the neural network (a toy code sketch of this record-and-replay setup follows the list) by:

  • Resetting each neuron’s internal state to its starting state and replaying the network with the inputs that come from outside of the neural net, using the inputs that came from inside the neural net at that time as the starting state. As the function is deterministic, everything will come out again as it did the first time. Would this system be conscious?
  • Resetting each neuron’s internal state to its starting state, then disconnecting all the neurons from each other and replaying the saved inputs to each of them. Each of the neurons would calculate the outputs it did before, but as nobody would “read them,” they would serve no function in the functioning of the system, so actually they wouldn’t matter! Would this system be conscious too?
  • Shutting down the calculations in each neuron (as they are not important, as seen in the second scenario, because the outputs of each neuron are not important for the functioning of the system while we replay). We would give the inputs to each of the “dead” neurons (and probably we would wonder what we are doing). Would this system be conscious?
  • As the input we would be giving to each of the neurons actually doesn’t matter, we would just shut down the whole neural net, and read the numbers aloud. Would this system be conscious? Which system?
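
Here’s a minimal, concrete version of the setup in that argument, using a made-up two-neuron deterministic toy network (the update rule, wiring, and numbers are my own illustration, not Gjorgoski’s code): record every neuron’s inputs during a connected run, then reset and disconnect the neurons, feed each one its log, and the same activity unfolds with nothing reading the outputs.

```python
# A toy version of the record-and-replay setup: a tiny deterministic "network"
# of two neurons is run once while every neuron's inputs are logged; then each
# neuron is reset, disconnected from the other, and fed only its logged inputs.
# It reproduces exactly the same activity with nothing "reading" its outputs.

def neuron(state, inputs):
    """Deterministic update: the new internal state doubles as the output signal."""
    return 0.5 * state + sum(inputs)

external = [1.0, 0.0, 0.5, 0.0]      # inputs arriving from outside the network
states = [0.0, 0.0]                  # internal states of neuron 0 and neuron 1
log = {0: [], 1: []}                 # recorded inputs, per neuron, per time step

# First (connected) run: neuron 0 hears the outside world and neuron 1;
# neuron 1 hears only neuron 0.
for ext in external:
    inputs = {0: (ext, states[1]), 1: (states[0],)}
    for i in (0, 1):
        log[i].append(inputs[i])
    states = [neuron(states[i], inputs[i]) for i in (0, 1)]

first_run_final = list(states)

# Replay: reset and disconnect the neurons, then feed each one its logged inputs.
states = [0.0, 0.0]
for t in range(len(external)):
    states = [neuron(states[i], log[i][t]) for i in (0, 1)]

assert states == first_run_final     # identical activity, no interconnection
print(first_run_final, states)
```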

He wrote another, longer version of this argument here and some more general articles about consciousness here.

Obviously, human beings (as well as other animals) are conscious, and brains are instrumental to this consciousness in some way, but this argument shows that the consciousness isn’t actually in the neural network. If it’s not in the neural network, then simulating a neural network computationally (and hence deterministically) won’t produce consciousness. If consciousness is crucial to intelligence (which I’ve tried to show that it is), then simulating a neural network can’t produce general intelligence.

Actually, let’s make a similar argument to Tanasije’s but applying specifically to AI.

1. Take each individual calculation/opcode execution and separate them across a long span of time. Is the resulting “system” conscious?
2. Remove the computation element and just have a sequence of register and/or memory states. Is the resultant information conscious? What part actually matters?
3. Take the register and/or memory states, and maybe even the internal CPU/GPU states composing each individual computation, and encode them in etchings on a marble wall. Is the resulting state of affairs conscious?
4. Instead of etching the encodings into marble, encode them into patterns of water droplets in random places spread over many clouds. Is the resulting data conscious?
5. Just interpret whatever informational patterns that already exist in the water droplets spread over many clouds as the information contained in an AI according to whatever ad hoc encoding is necessary to do that, since the particular method of encoding is arbitrary anyway… are the clouds conscious?

(Maybe the clouds are conscious, but probably not for the reason that they can be arbitrarily interpreted as encoding the digital information of an AI…)

You might argue at this point that brains obviously are conscious, since we humans are conscious and our consciousness is apparently seated in our brains, and that therefore Tanasije’s original argument is invalid, and that therefore my adaptation of it to AI must be invalid for the same fundamental reason (whatever that reason may be). But my position is that Tanasije’s reductio ad absurdum is a strong enough argument to prevail, and that therefore consciousness is not seated in the brain but rather the brain merely “channels” it in some way.

To this you might argue that AGI could equally “channel” consciousness, but I’d argue that there’s no way for this to happen because its processes/transformations of state are completely algorithmically determined, so even if it is somehow conscious, that consciousness can’t inform its so-called “thinking.”

You could then argue that adding a TRNG (true random number generator) to its processes could potentially make it conscious (supposing that consciousness imbues all true randomness, or at least that a conscious being would choose to possess the AI), but to that I’d say that such randomness would be only indirectly causally related to the actual material/algorithmic processes, rather than intimately coupled with them as it is in a brain. It would require consciousness to play pinball, metaphorically speaking, with its own materially embodied mind. That’s too much rocket science to expect of consciousness, especially when the calculative aspect of intelligence seems to be a product of material processes rather than being a function of consciousness itself.

An Algorithm Can’t Be Conscious

Algorithms and computations are essentially series of many small instructions. There is no essential, significant difference between an algorithm running, say, Pac-Man and one simulating a brain or otherwise attempting to manifest artificial general intelligence. The only differences are details of which instructions are executed when. An if/then branch here, a multiplication there, etc. It makes no sense that a lot of simple, separate instructions added together in a series could somehow magically create experience/self-awareness/consciousness/life. (Though similarly, a collection of non-living pieces of material interacting with each other, as in a neural network, should not be able to give rise to the singular phenomenon of experience. I wrote more about that here.)
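
To see how little separates the two cases at the instruction level, here’s a toy side-by-side (both functions are my own illustrations, not real Pac-Man code or neuroscience): a “game” update step and a “brain-simulation” update step are each nothing but arithmetic and branches.

```python
# A game-logic step and a simulated-neuron step, reduced to what they really
# are: comparisons, additions, multiplications, and branches. Nothing at this
# level distinguishes one as a candidate for consciousness.

def ghost_step(ghost_x, pacman_x):
    if ghost_x < pacman_x:      # an if/then branch here
        return ghost_x + 1
    return ghost_x - 1

def simulated_neuron(inputs, weights, threshold):
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w          # a multiplication there
    return 1.0 if total > threshold else 0.0   # another branch

print(ghost_step(3, 7), simulated_neuron([0.2, 0.9], [1.0, 0.5], 0.5))
```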

Any entities that a digital simulation simulates are wholly abstract, and therefore they only “exist” insofar as conscious beings outside the simulation imagine them in response to contemplating the simulation. What really exists are series of mostly separate executions of computer opcodes, only a few of which exist at any one particular time.

Consciousness is not abstract, it’s real, as revealed to us phenomenologically, and experience also tells us that it’s singular, so it can’t be made up of a series of mostly disconnected digital commands over time. If consciousness were merely abstract, there could be no real being to experience it—or, perhaps more concisely said, we know consciousness is real because we directly experience it—and, if consciousness weren’t singular, we couldn’t experience an entire thought at one time.

Another argument against the possibility of an algorithm producing consciousness is Searle’s Chinese Room thought experiment. Basically, it goes like this: Take a computer running an algorithm that simulates or produces general intelligence (in this particular thought experiment, its task is to convincingly converse in Chinese with a human). Is the system conscious? Now, instead of having a computer execute the algorithm, have an English-only-speaking person execute it, using a sufficient amount of pencils, paper, and filing cabinets. Obviously, the English speaker doesn’t understand the conversation he’s engaged in, and if the system is conscious because it’s “intelligent,” then that consciousness has nothing to do with his own. So where is it? Is the room itself conscious? Is some abstract consideration of the timeline of his actions somehow conscious? No alternative seems reasonable.

If the serial execution of a particular set of instructions can give rise to consciousness just because it’s programmed to act in a way that mimics understanding, then why can’t any system/series of events give rise to consciousness just by virtue of being itself? There’s nothing in the Chinese-room situation to logically differentiate the system being programmed to appear to understand from its being programmed to actually understand. You could argue that it’s not truly understanding because it only follows syntax rules without any knowledge of the words’ semantics, but I hold that understanding, according to the common understanding of the term, is something only conscious, living things can do, as we know something else understands by relating to our own understanding, which is fundamentally conscious, and calling any other activity “understanding” is an over-generalization and misuse of the term. The same applies to the term “intelligence.” Therefore, we’re effectively saying that the mere mimicry of understanding gives rise to consciousness, and what’s so significant about the mere mimicry of understanding that it can give rise to consciousness while, say, running a Pac-Man game can’t? Or even a non-digital system such as a tropical storm or a sewing machine? Or, say, at least some other kind of mimicking system, such as a computer that renders a CGI scene, or a movie theater that displays actors talking to each other on a screen?

In Conclusion…

To recap,

  • Traditional computer programs work too differently and simply from how minds do to recreate general intelligence
  • A simulated neural network can’t fully embody physical processes
    • Precision is limited
    • We don’t fully know the “laws” of physics, even regarding electricity
    • A simulated neural network is virtual, once-removed over reality
  • An algorithm, including a simulated neural network, can’t be conscious
    • Gjorgoski’s argument that a neural network can’t be conscious (if neural networks aren’t conscious then simulated neural networks certainly aren’t conscious)
    • My own adaptation of Gjorgoski’s argument applied to AI, resulting in the reductio ad absurdum that even clouds are conscious
    • Consciousness is real, as it’s directly experienced, while any object or process simulated in a digital simulation is merely abstract, as it’s merely a series of mostly disconnected computer instructions that don’t even exist at the same time
    • Searle’s argument that an algorithm can’t be conscious
  • Consciousness can’t be an emergent property (link to part of another essay; see also the p-zombie thought experiment)
  • Consciousness/mind can’t be an illusion (part of the other essay following the above part, not directly linked to or mentioned elsewhere in this essay but probably should be)
  • Consciousness is necessary for general intelligence
  • Quantum mechanics has to be the liaison between consciousness/the seat of intelligence (whatever and wherever it is) and the brain’s mechanics
  • Mind/consciousness itself is non-mechanistic (not sure if I mentioned this. I write more about this subject here: )
  • Quantum randomness, therefore, isn’t “absolutely random” and can’t be adequately simulated
  • The only way to recreate general intelligence, therefore, is with a physical system that’s analogous to the brain, and that’s not “artificial” intelligence, it’s real intelligence. Doing so would be unethical, of course.

In conclusion, artificial general intelligence is nothing more than a faddy, technologistic pipe dream.

Addendum: An interesting related post by Darin can be found here.

Addendum 2: I wrote more about why current success in AI doesn’t imply that AGI is right around the corner, and why, even if I’m wrong and we do develop AGI, it won’t save the world as many hope, here.