Is AI “Good” or “God”?

Looking back at what I’ve read over the past few years, I feel that artificial intelligence (AI) is still widely misunderstood. What exactly is AI? How does it work? What can it do, what can’t it do, and what are its limitations? What are its benefits, risks, and threats?

This misunderstanding has reached a peculiar level, where even highly reputed publications like The Economist or Forbes sometimes feature articles by respected authors that are, frankly, surprising. Surprising because they often either misunderstand the subject or miscommunicate it. This isn’t limited to overly optimistic praise for AI but also includes critiques that miss the mark.

Although it’s challenging to answer these questions in a single article, addressing some critical points might help. There’s no need to delve into the origins of AI here…

AI makes decisions in a way that parallels, but also differs from, how human and animal brains function. For insights into these differences, you can refer to this article. I also recommend this fascinating video:

When a living brain comes into the world, apart from a few basic reflexes, it requires “training.” Babies have no knowledge about their surroundings or life, nor the data to generate such knowledge. Over time, they see, hear, smell, feel, and learn. The sensory organs they use to gather information gradually “improve.”

AI systems operate in a similar way. However, the process of learning and drawing conclusions in AI largely depends on the intentions and designs of the developers who create the underlying functions. This is why software engineers are often described as having a “god complex.”

The “goodness” or “benevolence” of an AI system is only as good as the intentions of the developers behind it. Developers decide whether the AI considers broader moral or ethical values or narrowly focuses on achieving specific tasks, regardless of the consequences.

The way AI learns (training) depends on the data it is exposed to and how it processes that data. This isn’t fundamentally different from how humans learn. For example, if a person learns incorrect information during childhood, their entire thought system might develop around those flawed values. In contrast, creatures in nature often come into the world with “pre-installed” instincts provided by their creator. A newborn impala in the African savanna doesn’t need to learn that it should flee from a lion but not a zebra—instinct, or preloaded knowledge, takes care of that. For humans, this instinct is relatively weaker.

In AI, learning relies on electronic memory, which starts empty but ready for use. In contrast, human and animal brains don’t store information in microchips; learning occurs through the formation and strengthening of connections (synapses) between neurons. Billions of neurons establish trillions of these connections, creating an ever-adapting yet finite brain.

Returning to learning: if you teach someone, “If you want an apple, take it and eat it,” and nothing adverse happens, they’ll naturally adopt this behavior. The reaction, or lack of one, teaches them. In AI, we call this reinforcement learning: actions followed by rewards are repeated, and actions followed by penalties are avoided. So if someone takes an apple without paying for it and faces consequences (like the owner’s reaction), that negative feedback changes their knowledge and reasoning accordingly.
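To make the idea a bit more concrete, here is a minimal Python sketch of reinforcement learning built around the apple scenario. The scenario, the reward values, and the simple update rule are my own assumptions for illustration, not a description of any particular system.

```python
import random

# Illustrative reinforcement-learning sketch (assumed scenario):
# an agent repeatedly decides whether to take an apple, and the feedback
# it receives (reward or penalty) gradually shapes its behavior.

ACTIONS = ["take_apple", "walk_away"]

def reward(action: str) -> float:
    """Assumed reward model: taking the apple is pleasant unless the owner reacts."""
    if action == "take_apple":
        owner_reacts = random.random() < 0.7   # 70% chance of consequences (assumption)
        return -5.0 if owner_reacts else +1.0
    return 0.0  # walking away: nothing happens

# Q-values: the agent's learned estimate of how good each action is.
q_values = {a: 0.0 for a in ACTIONS}
learning_rate = 0.1
epsilon = 0.2  # exploration: sometimes try an action at random

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)           # explore
    else:
        action = max(q_values, key=q_values.get)  # exploit what was learned so far
    r = reward(action)
    # Nudge the estimate a small step toward the observed reward.
    q_values[action] += learning_rate * (r - q_values[action])

print(q_values)  # with frequent consequences, "walk_away" ends up preferred
```

The point of the sketch is simply that nobody tells the agent which action is “right”; the consequences alone reshape its estimates, much like the owner’s reaction reshapes a person’s behavior.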

AI surpasses humans in several ways. Its enormous speed, practically unlimited memory, and the fact that it does not forget are among its greatest strengths.

However, I must admit that current computing power feels somewhat inadequate for modern AI calculations. This is where quantum computers and neuromorphic computing could offer at least temporary relief.

For more on quantum computers, I recommend this engaging documentary:

So, is AI “good,” or is it a “god”?

As I mentioned earlier, AI is only as “good” as its “parents”—the developers who create and train it.

If we draw inspiration from nature and the human brain, we realize that AI isn’t democratic. It is rule-bound, authoritarian, and a temporary democrat at best. It has no inherent desires, yet its intentions can be manipulated, deceived, or tricked. This stems from its lack of human-like reasoning abilities.

While AI’s existence is based on imitation and simulation, it can replicate reasoning by mimicking human decision-making. To achieve this, we use heuristic algorithms, which allow AI to produce faster, clearer, and often successful results, though with certain limitations.
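As an illustration of the trade-off a heuristic accepts, here is a small, self-contained sketch: a greedy nearest-neighbor heuristic for visiting a set of points. The points and the distance measure are assumptions for the example; the heuristic is fast and usually reasonable, but it does not guarantee the best possible route, which is exactly the kind of limitation mentioned above.

```python
import math

# Illustrative heuristic: greedy nearest-neighbor routing.
# Fast and usually "good enough", but not guaranteed to be optimal --
# the typical trade-off heuristic algorithms accept.

points = {"A": (0, 0), "B": (2, 1), "C": (5, 0), "D": (1, 4)}  # assumed example data

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(start: str) -> list[str]:
    """Always jump to the closest unvisited point next."""
    route, unvisited = [start], set(points) - {start}
    while unvisited:
        current = points[route[-1]]
        nearest = min(unvisited, key=lambda name: distance(current, points[name]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

print(greedy_route("A"))  # e.g. ['A', 'B', 'C', 'D'] -- quick, but not necessarily optimal
```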

In one of our studies, we discovered that the concept of self-awareness plays a significant role in decision-making beyond reflexive behavior. We hypothesized that the part of the brain responsible for self-awareness—still believed to reside somewhere in the hippocampus—affects decision-making throughout the brain, even without direct connections. Interestingly, we found that this region communicates with the heart, and this interaction seems to occur partly through the corpus callosum, the bridge between the brain’s hemispheres. The fascinating part is that the effect of this communication doesn’t immediately disappear, even after a decision has been made.

Without delving too deeply into neuroscience, let’s explore how a similar mechanism could be implemented in AI.

In 2023, we began developing a model called ORDA (Ordinary Regression Displacement Algorithm). ORDA is a mathematical framework designed to reduce randomness and “freeze” a portion of the AI’s central processes, enabling it to make intuitive decisions.

ORDA operates on core data, staying connected to every outlier or node in a neural network through non-random vectors. However, its tendency to update data is deliberately low, as ORDA prioritizes learning over immediate accuracy—until certain parameters are frozen.
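ORDA itself is not public, so the snippet below is only a generic illustration of the two ideas described above: a deliberately low tendency to update, and the freezing of selected parameters. I express both as a per-parameter update mask on a tiny linear model; all names, values, and the toy data are assumptions for the example, not ORDA’s actual mechanics.

```python
import numpy as np

# Generic illustration (NOT ORDA): a tiny linear model in which some parameters
# update very reluctantly and others are frozen outright, mimicking the idea of
# "freezing a portion of the central processes".

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # assumed toy data
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(4)
# Per-parameter update tendency: 0.0 means frozen, small values mean slow to change.
update_tendency = np.array([0.05, 0.05, 0.0, 0.0])   # last two parameters frozen (assumption)

for epoch in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= update_tendency * grad               # frozen entries never move

print(w)  # only the un-frozen parameters drift toward their true values
```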

The biggest limitation of ORDA is its substantial computational demands. For example, processing decisions across 700 neurons takes approximately 3 hours on a high-end personal workstation (MacBook Pro, i9, 64GB DDR4), which is disappointingly slow.

This is why I believe quantum computers will soon become indispensable for advancing AI capabilities.

To improve AI algorithms further, we first need a better understanding of how human intelligence works, and how it learns.