Month: January 2023

Why Altruism Really Does Exist

A lot of intellectuals seem to conclude that there’s no such thing as true altruism. Their reasoning invariably seems to be that whatever we do, including any supposedly altruistic act, we wouldn’t do it if there weren’t some incentive, or in other words a reward, for doing it. They see this as a truism pertaining to the very nature of decision-making. And, of course, they reason that doing something for a reward, even if it helps another person, is just as selfish, or at least self-centered, as any other act. As for what stands in as the reward in question, they may reason that doing such a thing necessarily gives the agent a dopamine/serotonin/whatever hit.

First, it’s only an assumption that there’s an expected reward for everything we do and, furthermore, that that reward necessarily equals or outweighs the effort and/or sacrifice put into doing the thing. We don’t really know that. Or maybe it’s true in a sense, but the locus of reward is transpersonal. In other words, it’s only considered a reward given a transpersonal understanding of the dynamic between the benefactor and the beneficiary. That is, it’s only thought of as a reward because one views another’s well-being as just as important as one’s own, the other’s joy being one’s joy and the other’s pain being one’s pain, probably because one recognizes one’s own divine spark in the other, or else just due to a liminal awareness that separation is an illusion and we’re all one. One loves the other. An act of love may well entail a sacrifice that outweighs any personal reward we can reason about rationalistically. It often does.

Second, even if there is always an expected reward, such as a dopamine hit, for every altruistic act we do, the very fact that one has configured oneself, or was already naturally configured, such that doing good for another yields a dopamine hit is gratuitous, and it indicates the presence of altruism.

To be fair, the claims against altruism may center not only around incentives but also around deterrents, such as the pain of seeing another in pain or of losing another. But the same reasoning I give above basically applies to that case, too. I.e., harming or killing another only functions as a deterrent because we’re considering damage not only to ourselves but in a broader arena. And, if doing harm to another causes a surge of uncomfortable brain chemicals in us, the very fact that that relationship between harming another and our own unhappiness exists is, again, gratuitous and an indication of altruism.

Also to be fair, a lot of people believe that our propensity to help others, and more generally our conscience, as part of our social nature, is a product of evolutionary psychology, evolved presumably because it’s a survival advantage to the species. Such people would probably then say that the mechanism that connects “altruistic” behavior to reward isn’t gratuitous as I claimed it is. They might say that it’s virtually no different from evolution having provided us with a shower of $1 bills every time we do good for others, only, instead of $1 bills, it’s brain chemicals.

To this I would say, first, that it’s questionable whether the fact that it was evolutionarily baked into us means it’s not altruism. Maybe it simply means that evolution provided us with altruism? Though it does seem to take the magic and beauty out of it.

Second, I’d say it’s wrong and scientistic to assume that virtually all aspects of behavior are solely attributable to evolutionary psychology: first, because there are behaviors that arise secondarily/incidentally from evolutionary drives and the overall biological dynamic, analogously to how nipples on men provide no direct survival or reproductive benefit; and second, because the human mind is too dynamic and too much of a blank slate (not entirely, but to a large degree) for many of its behaviors to be instinctual or otherwise genetically programmed into us.

Third, it’s my contention that many facets of mind or behavior, such as love and compassion, are likely fundamental traits of life/consciousness itself, and that to reduce them to mere evolutionary mechanism is to snuff out magic and beauty, both in your own mind and in the minds of anyone else who’s influenced by your mechanistic worldview. Consciousness/life existed way before evolution began; it has always existed, and evolution and brains merely harness consciousness/life and its properties, shaping and manipulating it to do their bidding with mechanisms such as the feelings of pain and pleasure.

If you think about it, it’s mysterious how evolutionary imperatives could possibly impinge on our experience of free will to make us desire to avoid pain or seek pleasure. How does a feeling carry with it inherent intentionality? It’s more intuitive to think that not liking pain and liking pleasure are somehow preexisting, inherent traits of consciousness/life that evolutionary psychology and brains merely make use of.

Also, humans feel compassion not only for other humans, but also for other animals. And many other animals have been known to perform acts of aid—sometimes very brave ones—not only for members of their own species, but also for members of other species. This is somewhat problematic to explain away via natural selection, since natural selection as we understand it should only operate on the level of the gene pools of individual species.

Similarly, many animals have been known to engage in play, and scientists have been hard-pressed to think up a likely evolutionary cause for this. I propose that it’s actually because play is a baked-in activity of life qua life. Perhaps it’s all life ever really does on the most fundamental level, while the more boring, arduous, and undesirable activities of life are forms of play that are only engaged in by beings such as us and the other animals because we’re stuck in saṃsāra, which means basically “the cycle of aimless drifting, wandering or mundane existence” and is also connected to the cycle of reincarnation. In other words, we and the other animals are stuck in māyā, meaning basically “illusion.”

For further reasoning on why consciousness and/or mind is apparently primary, see https://myriachromat.wordpress.com/2020/02/07/why-im-an-idealist/ and https://myriachromat.wordpress.com/2018/04/13/notes-on-science-scientism-mysticism-religion-logic-physicalism-skepticism-etc/#Emergent.

It’s the biggest tragedy when people allow their rationalism or cynicism to trample on and snuff out the realization of beauty in those things that are most beautiful in life.

Why AI Won’t Save the World

In an earlier post, I explained why artificial general intelligence is a logical impossibility. Now I’ll explain why, even if it does turn out to be possible and we develop it, it won’t be the world saver that many are looking forward to it being.

With the invention of ever-more-impressive (and complex) ML models, such as DALL-E 2, Midjourney, Stable Diffusion, GPT-3, ChatGPT, etc., not to mention ever-more-powerful GPUs to run them on, and other things like chess engines that routinely beat grandmasters, it seems to many that we’re on the brink of developing actual AGI—a digital general intelligence—that’s smarter even than humans. People also expect that, as a result of this development, the world will drastically change, either for better or worse. Opinions seem to be split on whether AGI will totally annihilate humanity or make this world into a utopia, humans included.

In my opinion, comparing any of our current ML models to true human-like intelligence, thinking that they’re an indication that we’re very close to the latter, is essentially a category error.

Most of the functions we’re applying AI to are very, very specific in comparison to the generality of human, living intelligence. Because they’re tasks that our own general intelligence tends to be proficient at (if to a lesser degree than AI in some cases), we make the mistake of associating the AI’s proficiency at those specific tasks with a kind of general intelligence, while, in actuality, the specificity of the task at hand and the generality of actual intelligence are worlds apart. So, for example, the proficiency of a chess engine at playing chess is evidence of nothing other than the ability of computers to play chess per se.

ChatGPT, a popular new chatbot by OpenAI, is admittedly surprisingly impressive, and it appears able to interact, provide information, and create in a very general way, but this is all strictly via text. It’s not clear to me that a similar technology could navigate and manipulate the actual physical world in any non-trivial sense. The world of text is a very limited subset of the real world, a sort of virtual reality. We merely use text as a convenient tool; ChatGPT lives in text. There’s a huge difference there.

And, while ChatGPT is able to “remember” past inputs and apparently hold up a conversation, it can’t actually learn new things, as all its knowledge was baked into it during the training of the model. The ability to learn new things is an essential part of actual intelligence. (I put “remember” in quotes because, like “understanding,” “intelligence,” “mind,” etc., I think the real thing necessarily requires consciousness, and to apply such terms to AI is to fundamentally misunderstand their nature and/or the nature of AI. I say a little bit more in support of this view in the above-mentioned essay.)

The reason ChatGPT is able to appear as though it has general intelligence, apart from some very clever engineering of its mode of assimilation, is that it has assimilated roughly 300 billion tokens (word fragments) of text from the internet, text created by human intelligence. It was essentially trained to find patterns in this information, and it blindly responds and creates new text based on those patterns. Real minds achieve actual general intelligence from far less input.
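As a toy illustration of that kind of blind pattern-replay, here’s a minimal sketch of a bigram text generator. (The example is mine, and it’s not how GPT models work internally; they use neural networks rather than lookup tables. But the underlying move, predicting the next word from patterns in the training text, is the same in spirit.)

```python
import random
from collections import defaultdict

# Toy training corpus standing in for the internet-scale text GPT models ingest.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow which in the corpus: the "patterns" in the text.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    """Produce text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # pure statistics, no understanding
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
```

Scale the corpus up enormously and the output starts to look fluent, but the mechanism never stops being statistical replay.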

Neither ChatGPT nor any other AI program actually understands anything you tell it or anything it says. It’s all just sequences of individual, blind CPU or GPU instructions. This is shown by the fact that AI chatbots are known to make small yet glaring errors in logic, reasoning, or consistency that a human never would. In the earlier post mentioned above, I argue that actual understanding requires consciousness and that AI can’t be conscious. Without actual understanding, AI can only go so far.

But let’s get to the point: what if AGI actually is possible and does get developed? People seem to think that, if we can develop a superintelligence smarter than humans, it can solve all our problems, I suppose through legislation and/or the development of new technologies. But I’d argue that our problems are not primarily due to the lack of intelligence or technology.

Our problems are primarily due to greed, selfishness and evil; lack of self-awareness and self-criticism; lack of fairness; lack of integrity; living in small, blissfully ignorant, convenient bubbles; lack of compassion and caring for others; dysfunctional upbringing of children; an imbalanced, left-brained zeitgeist; the patriarchy; political and economic systems in which the most sociopathic and unscrupulous personalities naturally end up with the most power; etc.

Perhaps some of these problems could be solved by an AGI, if we even choose to listen to its radical, subversive suggestions, but the more essential problems, like cultural momentum and selfishness, can’t be solved by AI. What would it do? Give every individual hundreds of micromanaging instructions per day that probably contradict their belief systems and values?

And as for AGI developing more sophisticated technology: contrary to popular belief, technology will not solve our problems. We’re always salivatingly looking forward to the next technological thing, as if it will make us happy, but the reason we do this so much is that we’re not happy now, and we’re not happy now because of technology.

Technology is seductive because it’s enabling, ever-increasing, and necessarily leads somewhere, but we need the wisdom to use what enables us well, or what enables us will be our downfall. As it is, technology (like the spice) gives with one hand and takes with all of its others. In the old days of the US, when the Native Americans still had their own naturalistic cultures, anyone who spent a lot of time with them found that they were much happier, and often didn’t want to go back to technological civilization.

According to The De-Textbook: The Stuff You Didn’t Know About the Stuff You Thought You Knew by cracked.com, “According to Loewen, ‘Europeans were always trying to stop the outflow. Hernando De Soto had to post guards to keep his men and women from defecting to Native societies.’ Pilgrims were so scared of Indian influence that they outlawed the wearing of long hair,” and “Ben Franklin noted that, ‘No European who has tasted Savage Life can afterwards bear to live in our societies.'”

I wrote more about why AI won’t save (nor destroy) the world here: https://myriachromat.wordpress.com/2023/05/02/ai-will-neither-save-nor-destroy-the-world/.

Idea #3 – Simple, mechanical, automatic CVT for small vehicles

Wouldn’t it be cool if your R/C car had an automatic continuously variable transmission? It’d probably be a lot faster for the same motor and power consumption, as well as better at getting through rough terrain, up steep inclines, etc.

There may be a way.

Imagine something like a spiral torsion spring (a concentric coil made from a long, flat strip of metal, like a clock spring). It’s turned from its center by the vehicle’s motor. Its outer rim is coated with something high-friction, such as rubber, and it drives a belt. The belt turns a nearby wheel that powers the wheels of the vehicle through gears.

The trick here is that the direction in which the spring spirals outward is the same direction in which it pulls the belt, so more resistance to turning from the belt naturally pulls the coil open and makes the spring expand, and less resistance from the belt lets the spring contract. The spring expanding increases the ratio of its diameter to that of the wheel the belt powers, thus increasing the mechanical advantage; the spring contracting decreases the mechanical advantage.

So, when the vehicle is going through rough terrain, there will be more resistance from the wheels, hence more resistance from the belt, therefore the spring will expand, thus increasing the mechanical advantage (like putting it into a lower gear). The same will happen when the vehicle is accelerating or going uphill. If the spring is configured well, the “gear” the car is in will always be exactly appropriate for the conditions the car is driving in and how fast it’s going.

There’s one more part of this mechanism I haven’t mentioned: when the spring expands and contracts, it takes up more or less of the belt’s length, so the belt’s path has to adjust accordingly. The easiest way is probably a separate idler wheel along the belt’s path that keeps tension on the belt by riding nearer or farther on a spring, like the tensioner on a multi-gear bicycle.

Oh, one other thing: alternatively, the motor could power the non-expanding wheel, which would then turn the spiral spring, rather than the other way around. I don’t know which is better.
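To make the ratio arithmetic concrete, here’s a quick sketch of the standard no-slip belt-drive relations with made-up radii (the variable names and numbers are mine, purely for illustration). Note that the speed and torque ratios swap depending on whether the spring is the driving pulley or the driven one, which bears on the choice just mentioned.

```python
# Standard belt-drive relations, assuming no slip and no losses. The radii
# and names here are illustrative assumptions, not values from the post.

def belt_ratios(r_drive, r_driven):
    """Speed and torque multiplication from the driving pulley to the driven one."""
    speed_ratio = r_drive / r_driven   # driven-pulley turns per driver turn
    torque_ratio = r_driven / r_drive  # torque multiplication at the driven pulley
    return speed_ratio, torque_ratio

r_wheel = 2.0  # cm; the fixed-size belt wheel
for r_spring in (2.0, 3.0, 4.0):  # cm; the spiral spring expands under load
    # Configuration 1: the motor turns the spring, which drives the belt wheel.
    s1, t1 = belt_ratios(r_drive=r_spring, r_driven=r_wheel)
    # Configuration 2 (the alternative above): the motor turns the wheel, which drives the spring.
    s2, t2 = belt_ratios(r_drive=r_wheel, r_driven=r_spring)
    print(f"r_spring={r_spring:.0f} cm | spring drives: speed x{s1:.1f}, torque x{t1:.2f}"
          f" | wheel drives: speed x{s2:.2f}, torque x{t2:.1f}")
```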

Idea #2 – Efficient desalination

Use electrolysis to separate ocean water into hydrogen and oxygen. In a separate tank, burn the hydrogen and oxygen to produce pure water again.

This would leave salt and other residue in the tank where the electrolysis happens, but not if you keep ocean water constantly running through it slowly.

This may sound like it would take a heck of a lot of energy, just like conventional desalination, but maybe it won’t if you feed the energy released by the system back into the system in two ways (a rough energy budget follows the list below).

1. Convert the heat generated by burning the hydrogen into electricity and use it to power the electrolysis happening simultaneously in another chamber.

2. Use the collapse in gas pressure caused by burning the hydrogen and oxygen (three volumes of gas condense into liquid water) to lower the pressure where you’re doing the electrolysis, to help it along. Since electrolysis has to fight the surrounding air pressure to convert the water into gas, and energy is conserved, it stands to reason that, to some degree, the surrounding pressure translates into more electricity being consumed by the electrolysis.
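For a sense of scale, here’s a rough back-of-envelope energy budget for the scheme, using textbook thermochemical values; the heat-to-electricity conversion efficiency is my own illustrative assumption, and the sketch ignores the pressure trick in item 2.

```python
# Back-of-envelope energy budget for electrolyzing and re-burning one liter of
# water. Textbook values; the 40% heat-to-electricity efficiency is an
# illustrative assumption, not a measured figure.
MOLAR_MASS_WATER = 18.015   # g/mol
DELTA_H = 285.8             # kJ/mol: energy to split one mole of water (HHV),
                            # and likewise the heat released when the H2 re-burns
HEAT_TO_ELECTRICITY = 0.40  # assumed efficiency of converting that heat back

moles = 1000 / MOLAR_MASS_WATER                # ~55.5 mol in a liter of water
energy_in = moles * DELTA_H                    # kJ of input to split it all
energy_back = energy_in * HEAT_TO_ELECTRICITY  # kJ recovered from combustion heat
net = energy_in - energy_back

print(f"electrolysis input:  {energy_in / 3600:.1f} kWh per liter")
print(f"recovered from heat: {energy_back / 3600:.1f} kWh per liter")
print(f"net energy required: {net / 3600:.1f} kWh per liter")
```

Under these assumptions the recycling covers a sizable fraction of the input (about 1.8 of roughly 4.4 kWh per liter), so how close the loop comes to breaking even hinges almost entirely on how efficiently the combustion heat can be turned back into electricity.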

This may require a complex and expensive setup, but the energy saved may just make it worth it in the long run.