Digital Vs. Analog; Virtual Vs. Physical
Most consideration of the possibility of AGI focuses on its implementation on digital computers. This poses major inherent limitations from the outset. The logic of computer programming, even in the magical self-modifying Lisp, is still mainly about yeses and noes, ons and offs. Either do this, or don’t do this. A calculation here, a conditional loop there. Minds, on the other hand, aren’t constrained to binary paths. A computer language can be very dynamic, but it’s still just a hierarchy of discrete, modularized functions. This is essentially different from a mind; building intelligence with a programming language is therefore like building a living body with parts from Sears.
Of course, you could try building AGI by simulating a brain, and the above problem wouldn’t apply, at least not in the same way. But some fundamental limitations of computers would still apply. There’s the limit on how precisely you can simulate, for example, a voltage level and perform calculations on it. There’s the fact that you’re reducing complete neuron cells, with many organelles and trillions of atoms in each organelle, to simple calculations. We may abstract an effective approximation of the workings of a neuron as a computable function, but it’s still only that: an approximation. An actual brain’s operation is the “laws” of physics at work; it is the universe’s operation itself, the brain being a physical thing. Any simulation thereof is once removed.
And that’s to say nothing of how much of physics we actually know and how much of what we know is possible to simulate. Consider the difference between a real live protein folding and the Folding@home and Rosetta@home projects. A single protein is microscopic and folds in nanoseconds. To simulate the same would take 30 years on a single PC. The time ratio is about 86 quadrillion to one, and that’s only for a single, microscopic piece of matter. And even that 30-year result isn’t 100% accurate—it’s just a likely result. In a simulation, the simulating machine is actually physical, but the mechanisms by which the thing is simulated are strictly limited; they’re virtual. For example, for the purposes of simulation in a binary computer, it’s all just ons and offs. If the electrical voltage input into a logic gate in the course of an operation happens to be +4v instead of +5v, it makes no difference. Details and slight variations in the operation of the physical system don’t have any effect on the algorithm being run on it on the functional level of consideration.
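As a sanity check on that ratio: the ~11 ns folding time below is back-derived from the stated figures, so treat it as illustrative rather than as a measured value.

```python
# Sanity-check the simulation-to-reality time ratio quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ≈ 3.16e7 seconds

sim_time = 30 * SECONDS_PER_YEAR        # ~30 years of PC time to simulate the fold
real_time = 11e-9                       # ~11 ns of physical folding (illustrative)

ratio = sim_time / real_time
print(f"{ratio:.2e}")                   # on the order of 8.6e16, i.e. ~86 quadrillion to one
```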
A simulation, no matter how precise, being virtual and once-removed over physics, is not necessarily isomorphic to the physical process being simulated, whose detail is unlimited and whose principles of operation are no less profound, deep and dynamic than the functioning of the universe itself.
We don’t fully know what physics is involved in producing intelligence, and we don’t even fully know the “laws” of physics themselves. Of particular importance, we don’t know everything about electricity. (Take, for example, ball lightning, or the Biefeld–Brown effect.) And we don’t know whether quantum-random behavior can have macroscopic effects within the brain. It has been postulated that the brain operates “on the edge of chaos,” which would mean that, as in any chaotic system, it’s dynamic enough that tiny causes regularly have major/macroscopic effects, which should include the supposedly absolutely random actions of quantum mechanics. Maybe these actions aren’t actually “absolutely random,” and maybe they’re in some way coordinated in the brain.
Quantum Mechanics and the Brain
Just writing off unpredictable quantum behavior as “absolutely random” is too easy (not to mention epistemologically/metaphysically troublesome). Our methods of prediction, and of creating models of prediction, and hence of gaining any kind of insight into nature, carry many constraints and presuppositions that could prevent us from finding meaning in quantum-random behavior. Such constraints and presuppositions include linearity in time, a forward direction of cause and effect, nearness in place and time (with exceptions where the nature of the cause-effect relationships over long distances and times is relatively simple, such as in a = GM/d² or Δt = Δd/c), measurability by existing physical instruments, exclusively mathematical and mechanistic relationships, categorical consistency on some level, functional/black-box-like separability between objects, and simple enough/easily intuited/general enough patterns of causal relationship. Some of those properties are necessary for it to be causality as such, and some aren’t. None of those things are necessary for the actions to be meaningful yet non-mechanistic.
So, quantum-mechanical randomness could have meaning, and its effects could be a factor in cognition; and if both of those are true, then we must first understand the fundamental meaning of quantum randomness in order to simulate, or recreate, cognition. Of course, if quantum “randomness” is meaningful yet non-mechanistic, then it’s impossible to simulate with computers, in which case simulating it is not an option but recreating it (by making something new that works similarly to a physical brain) may be.
Quantum-random actions could be where a soul, spirit, incorporeal mind, or otherwise non-physical intelligence comes into play in the arena of physical actions. Of course, people of a certain popular mindset would take issue with that idea. One person I mentioned this idea to characterized it as a “god of the gaps” perspective, as if quantum randomness is merely the last frontier in an arena of what will be an ultimate, overarching physical explanation of all of reality. But I look at it in a different way. There was never any reason to assume that science’s arm is all-reaching, or potentially so, in the first place.
When Newton discovered the laws of motion, gravity, etc., everyone was shocked by how much of reality we could predict or explain which had been hitherto and otherwise unexplainable, perhaps even magical. Then when mankind discovered atoms it was assumed that, all the way down to the level of atoms, things behaved completely predictably in the manner of “billiard-ball physics”. Thought about the world had gone from one extreme to the other, from a world full of mystery and magic to one assumed to be completely mechanical and predictable, if only given enough data as input.
And now that we’ve discovered quantum randomness, quantum-random behavior is presumed to be just the last remaining gap in an essentially all-pervasive paradigm of mechanistic predictability. But given that we never have been able to predict everything, it’s just a presumption. Things aren’t automatically accounted for until proven otherwise; they’re living and magical until proven accountable under a mechanistic, totalitarian paradigm. It’s no coincidence that a Gaussian probability distribution, which is what characterizes quantum-random actions, is just what you get when there is an untold number of influences at play.
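The “untold number of influences” intuition is essentially the central limit theorem: summing many small, independent influences yields an approximately Gaussian distribution, whatever the individual influences look like. A minimal sketch (the uniform “influences” here are purely illustrative, not a model of any quantum process):

```python
import random
import statistics

random.seed(0)

def many_influences(n_influences=1000):
    # Sum of many small, independent influences (uniform here, purely illustrative).
    return sum(random.uniform(-1, 1) for _ in range(n_influences))

samples = [many_influences() for _ in range(5000)]

# By the central limit theorem the sum is approximately Gaussian:
# mean near 0, standard deviation near sqrt(1000/3) ≈ 18.3.
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean ≈ {mean:.2f}, stdev ≈ {stdev:.2f}")
```

Plotting a histogram of `samples` would show the familiar bell curve, even though no single influence is Gaussian.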
So, if quantum randomness actually plays a role in cognition, then computational solutions, such as emulating the quantum randomness with a pseudo-random number generator, won’t adequately simulate the quantum-random influences in the brain. A quantum random number generator probably won’t be adequate either, because it’ll sample from one place many times, in a functionalized way, rather than from many places continuously in a way that’s inextricable from the mechanics of the neurons. The brain would probably have evolved to take advantage of whatever’s behind quantum-random events (such as a cosmic intelligence, for example) in a way that’s specific to the brain’s particular physical configuration.
Of course, you could create a machine unlike a digital computer, whose operation is actually much like the brain’s operation, and it might even exhibit intelligence, but in that case, it wouldn’t be artificial intelligence. It would exhibit intelligence for the same reason a real brain does. It wouldn’t be a simulation, and it would be conscious. Calling it artificial intelligence would be tantamount to calling a sheep we clone in the lab an artificial sheep.
Exactly how much a physical system can differ from a brain, and in what ways, while still facilitating consciousness is a fascinating question. Since the brain is the only example of recognizable conscious expression we have, it’s unknown how generalizable the underlying principles behind its facilitation of consciousness are, and in what ways. Knowing the answer to this would lead to important insight into the nature of consciousness. And into developing truly intelligent “machines,” but that would be an ethical nightmare. We wouldn’t necessarily be able to know the quality of life of those machines, and even if it’s not terrible, they’d most likely be enslaved to do our bidding.
Consciousness Is Necessary for Intelligence
It seems inescapable that consciousness is an absolutely necessary element for any truly effective general intelligence. If you introspect and observe your own process while thinking, you’ll see that it’s inextricably tied with consciousness/awareness, and the role of consciousness/awareness can’t be understood and broken down into a straightforward series of steps/manipulations. You use your spark of awareness in a holistic/singular way to magically pull solutions and concepts out of thin air or to know how to connect various concepts synergistically into a novel solution or greater concept.
And consciousness is scientifically, rationally, and in any other way, really, unaccounted for and is inherently mysterious. Of course, physicalists believe that consciousness arises from brain processes and may assume that eventually the progression of neuroscience will reveal how this happens, but I give arguments for why this is infeasible in this essay: https://philosophy.inhahe.com/2018/04/13/notes-on-science-scientism-mysticism-religion-logic-physicalism-skepticism-etc/#Emergent, and there’s also the problem of the “philosophical zombie” thought experiment that I didn’t include in that essay.
My take on the subject is that consciousness is magical in the truest sense of the word, is fundamentally non-mechanistic, and likely is intrinsically connected with, and hence draws from, the entire cosmos. Either way, the fact alone that consciousness can’t be emergent from brain processes implies that it’s non-physical and thus can’t be replicated by an algorithm, including artificial neural nets or any other simulations of brain matter.
A Neural Network Can’t Be Conscious
A software engineer and former acquaintance of mine, Tanasije Gjorgoski, wrote a reductio-ad-absurdum thought experiment showing why a neural network can’t be conscious. The argument is as follows (with minor modifications):
Let’s say that the system is composed of “digital” neurons, where each of them is connected to other neurons. Each of the neurons has inputs and outputs. The outputs of each neuron are connected to inputs of other neurons, or go outside of the neural network. Additionally, some neurons have inputs that come from outside of the neural network.
Let’s additionally suppose this system is conscious for a certain amount of time (e.g., two minutes); we will perform the reductio ad absurdum later. We measure each neuron’s activity (the input and output signals of the neuron) for those two minutes in which the system is conscious, and we store those inputs and outputs as functions of time. Once we have all that, we have enough information to replay what was happening in the neural network by:
- Resetting each neuron’s internal state to the starting state and replaying the inputs which come from outside of the neural net, using the inputs which came from inside the neural net at that time as the starting state. As the function is deterministic, everything will come out again as it did the first time. Would this system be conscious?
- Resetting each neuron’s internal state to the starting state, then disconnecting all the neurons from each other and replaying the saved inputs to each of them. Each of the neurons would calculate the outputs it did before, but as nobody would “read them,” they would serve no function in the functioning of the system, so actually they wouldn’t matter! Would this system be conscious too?
- Shutting down the calculations in each neuron (as they are not important, as seen in the second scenario—because the outputs of each neuron are not important for the functioning of the system while we replay). We would give the inputs to each of the “dead” neurons (and probably we would wonder what we are doing). Would this system be conscious?
- As the input we would be giving to each of the neurons actually doesn’t matter, we would just shut down the whole neural net, and read the numbers aloud. Would this system be conscious? Which system?
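The replay step above hinges on determinism: a digital neuron fed the same recorded inputs must produce the same outputs, whether or not it is still wired to its neighbors. A toy sketch (the neuron model and the recorded inputs are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class DigitalNeuron:
    """A toy deterministic neuron: output is a thresholded weighted sum."""
    weights: list
    state: float = 0.0

    def step(self, inputs):
        self.state = sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 if self.state > 0.5 else 0.0

# "Live" run: the neuron receives inputs while connected to its network.
neuron = DigitalNeuron(weights=[0.4, 0.7])
recorded_inputs = [(0.2, 0.9), (1.0, 0.1), (0.0, 0.0), (0.8, 0.8)]
live_outputs = [neuron.step(x) for x in recorded_inputs]

# Replay: reset the internal state, "disconnect" the neuron, and feed the
# recorded inputs back in. Determinism guarantees identical outputs, so
# nothing about being connected to the rest of the network mattered.
neuron.state = 0.0
replayed_outputs = [neuron.step(x) for x in recorded_inputs]

print(replayed_outputs == live_outputs)
```

The same holds for every neuron in the network at once, which is what licenses the disconnection step in the argument.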
Obviously, human beings (as well as other animals) are conscious, and brains are instrumental to this consciousness in some way, but this argument shows that the consciousness isn’t actually in the neural network. If it’s not in the neural network, then simulating a neural network computationally (and hence deterministically) won’t produce consciousness. If consciousness is crucial to intelligence (which I’ve tried to show that it is), then simulating a neural network can’t produce general intelligence.
Actually, let’s make a similar argument to Tanasije’s but applying specifically to AI.
1. Take each individual calculation/opcode execution and separate them across a long span of time. Is the resulting “system” conscious?
2. Remove the computation element and just have a sequence of register and/or memory states. Is the resultant information conscious? What part actually matters?
3. Take the register and/or memory states, and maybe even the internal CPU/GPU states composing each individual computation, and encode them in etchings on a marble wall. Is the resulting state of affairs conscious?
4. Instead of etching the encodings into marble, encode them into patterns of water droplets in random places spread over many clouds. Is the resulting data conscious?
5. Just interpret whatever informational patterns that already exist in the water droplets spread over many clouds as the information contained in an AI according to whatever ad hoc encoding is necessary to do that, since the particular method of encoding is arbitrary anyway… are the clouds conscious?
(Maybe the clouds are conscious, but probably not for the reason that they can be arbitrarily interpreted as encoding the digital information of an AI…)
You might argue at this point that brains obviously are conscious, since we humans are conscious and our consciousness is apparently seated in our brains; that therefore Tanasije’s original argument is invalid; and that therefore my adaptation of it to AI must be invalid for the same fundamental reason (whatever that reason may be). But my position is that Tanasije’s reductio ad absurdum is a strong enough argument to prevail, and that therefore consciousness is not seated in the brain; rather, the brain merely “channels” it in some way.
To this you might argue that AGI could equally “channel” consciousness, but I’d argue that there’s no way for this to happen because its processes/transformation of state is completely algorithmically determined, so even if it is somehow conscious, that consciousness can’t inform its so-called “thinking.”
You could then argue that adding a TRNG (True Random Number Generator) to its processes could potentially make it conscious (supposing that consciousness imbues all true randomness, or at least that a conscious being would choose to possess the AI), but to that I’d say that such randomness is too indirectly causally related to the actual material/algorithmic processes instead of being intimately coupled with them as they are in a brain. It would require consciousness to play pinball, metaphorically speaking, with its own materially embodied mind. That’s too much rocket science to expect of consciousness, especially when the calculative aspect of intelligence seems to be a product of material processes rather than being a function of consciousness itself.
An Algorithm Can’t Be Conscious
Algorithms and computations are essentially series of many small instructions. There is no essential, significant difference between an algorithm running, say, Pac-Man and one simulating a brain or otherwise attempting to manifest artificial general intelligence. The only differences are details of which instructions are executed when. An if/then branch here, a multiplication there, etc. It makes no sense that a lot of simple, separate instructions added together in a series could somehow magically create experience/self-awareness/consciousness/life. (Though similarly, a collection of non-living pieces of material interacting with each other, as in a neural network, should not be able to give rise to the singular phenomenon of experience. I wrote more about that here.)
Any entities that a digital simulation simulates are wholly abstract, and therefore they only “exist” insofar as conscious beings outside the simulation imagine them in response to contemplating the simulation. What really exists are series of mostly separate executions of computer opcodes, only a few of which exist at any one particular time.
Consciousness is not abstract, it’s real, as revealed to us phenomenologically, and experience also tells us that it’s singular, so it can’t be made up of a series of mostly disconnected digital commands over time. If consciousness were merely abstract, there could be no real being to experience it—or, perhaps more concisely said, we know consciousness is real because we directly experience it—and, if consciousness weren’t singular, we couldn’t experience an entire thought at one time.
Another argument against the possibility of an algorithm producing consciousness is Searle’s Chinese Room thought experiment. Basically, it goes like this: Take a computer running an algorithm that simulates or produces general intelligence (in this particular thought experiment, its task is to convincingly converse in Chinese with a human). Is the system conscious? Now, instead of having a computer execute the algorithm, have an English-only-speaking person execute it, using a sufficient amount of pencils, papers, and filing cabinets. Obviously, the English speaker doesn’t understand the conversation he’s engaged in, and if the system is conscious because it’s “intelligent,” then that consciousness has nothing to do with his own. So where is it? Is the room itself conscious? Is some abstract consideration of the timeline of his actions somehow conscious? No alternative seems reasonable.
If the serial execution of a particular set of instructions can give rise to consciousness just because it’s programmed to act in a way that mimics understanding, then why can’t any system/series of events give rise to consciousness just by virtue of being itself? There’s nothing in the Chinese-room situation to logically differentiate the system being programmed to appear to understand from its being programmed to actually understand. You could argue that it’s not truly understanding because it only follows syntax rules without any knowledge of the words’ semantics, but I maintain that understanding, according to the common understanding of the term, is something only conscious, living things can do, as we know something else understands by relating to our own understanding, which is fundamentally conscious, and calling any other activity “understanding” is an over-generalization and misuse of the term. And the same applies to the term “intelligence.” Therefore, we’re effectively saying that the mere mimicry of understanding gives rise to consciousness, and what’s so significant about the mere mimicry of understanding that it can give rise to consciousness while, say, running a Pac-Man game can’t? Or even a non-digital system such as a tropical storm or a sewing machine? Or, say, at least some other kind of mimicking system, such as a computer that renders a CGI scene, or a movie theater that displays actors talking to each other on a screen?
To summarize the arguments:
- Traditional computer programs work too differently from, and more simply than, minds to recreate general intelligence
- A simulated neural network can’t fully embody physical processes
- Precision is limited
- We don’t fully know the “laws” of physics, even regarding electricity
- A simulated neural network is virtual, once-removed over reality
- An algorithm, including a simulated neural network, can’t be conscious
- Gjorgoski’s argument that a neural network can’t be conscious (if neural networks aren’t conscious then simulated neural networks certainly aren’t conscious)
- My own adaptation of Gjorgoski’s argument applied to AI, resulting in the reductio ad absurdum that even clouds are conscious
- Consciousness is real, as it’s directly experienced, while any object or process simulated in a digital simulation is merely abstract, as it’s merely a series of mostly disconnected computer instructions that don’t even exist at the same time
- Searle’s argument that an algorithm can’t be conscious
- Consciousness can’t be an emergent property (link to part of another essay; see also the p-zombie thought experiment)
- Consciousness/mind can’t be an illusion (part of the other essay following the above part, not directly linked to or mentioned elsewhere in this essay but probably should be)
- Consciousness is necessary for general intelligence
- Quantum mechanics has to be the liaison between consciousness/the seat of intelligence (whatever and wherever it is) and the brain’s mechanics
- Mind/consciousness itself is non-mechanistic (not sure if I mentioned this. I write more about this subject here: )
- Quantum randomness, therefore, isn’t “absolutely random” and can’t be adequately simulated
- The only way to recreate general intelligence, therefore, is with a physical system that’s analogous to the brain, and that’s not “artificial” intelligence, it’s real intelligence. Doing so would be unethical, of course.
In conclusion, artificial general intelligence is nothing more than a faddy, technologistic pipe dream.
Addendum: An interesting related post by Darin can be found here.
Addendum 2: I wrote more about why current success in AI doesn’t imply that AGI is right around the corner, and why, even if I’m wrong and we do develop AGI, it won’t save the world as many hope, here.