Sat, Jan 3

Is AI intelligent? No. Then what is?

Note from Tom: I put the first version of this post up last week under a different title. Someone asked a good question that made me realize the post could have been a lot better. So I rewrote it. I’d be quite interested in hearing any comments you may have on it.

Is AI intelligent? To answer that question, we need first to answer the question, “What is intelligence?” I’m sure that most people - especially people involved with AI - would provide an answer that references use of language, such as “the ability to understand and create meaningful sentences”.

However, such a definition excludes the possibility of animal intelligence, since humans are the only species that uses symbolic language. Yet, many animals, and even some plants, exhibit behavior that could be called intelligent.

Of course, many people – including many scientists – consider any sign of animal intelligence to be simply pre-existing programming, i.e. “firmware” inherited from the animal’s parents. This was the dogma that I grew up on. I remember one teacher stating with a knowing smile that doctoral candidates in biology often failed oral exams when they spoke about an animal’s actions as if there were intelligence behind them. In those benighted days, it was simply axiomatic that animals didn’t have intelligence; a doctoral candidate who suggested otherwise, even metaphorically, was jeopardizing their career.

It’s impossible to hold such a position today. After all, what about chimpanzees that use tools? What about ravens that go back and dig up a bauble they’ve buried if they notice that a human, or another raven, was watching them when they buried it? What about dogs that find their way home after being lost hundreds of miles away? What about Alex the Grey Parrot, whose last words to his longtime owner/researcher were “You be good. I love you”? We should all end so fittingly.

For that matter, a living creature doesn’t need a brain to exhibit intelligence. A few years ago, Scientific American described how a slime mold – an organism with no brain, no neurons, and no central coordination – can seek out fragrant food even when an unpleasant barrier lies in the way. And it isn’t hard to design cellular automata that exhibit intelligent behavior, even though their electronic “bodies” don’t contain a single carbon atom.
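For readers who like to see the principle in action, here’s a toy sketch in Python of that kind of decentralized, brainless “intelligence”. Everything here (the grid, the wall, the scent mechanics) is invented for illustration – this is not a model of any real slime mold. A chemical gradient diffuses around a barrier, and a creature that senses only its immediate neighbors reaches the food anyway:

```python
from collections import deque

def diffuse_scent(grid, food):
    """BFS from the food cell: the scent weakens with distance and flows
    around barriers, much as a real chemical gradient would."""
    rows, cols = len(grid), len(grid[0])
    dist = {food: 0}
    queue = deque([food])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def follow_gradient(grid, start, food):
    """Each step, move to whichever neighboring cell smells strongest.
    No map, no plan, no knowledge that the wall exists -- local sensing only."""
    dist = diffuse_scent(grid, food)
    pos, path = start, [start]
    while pos != food:
        r, c = pos
        pos = min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                  key=lambda p: dist.get(p, float("inf")))
        path.append(pos)
    return path

# A 5x7 grid with a wall ('#') between the creature and the food.
maze = [".......",
        "...#...",
        "...#...",
        "...#...",
        "......."]
path = follow_gradient(maze, (2, 0), (2, 6))
```

The point of the sketch: nothing in `follow_gradient` plans a route, yet the behavior looks purposeful from the outside.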

What’s most amazing about animal intelligence is that it goes so far beyond what generative AI will ever be able to do, yet does it using a tiny fraction of the energy that AI now consumes to complete far simpler tasks.

For example, think about how complex a task catching a mouse is - yet cats do it all the time. Since there’s no way that this task could be encoded in the cat’s “firmware”, it must be learned, presumably as a kitten watches its mother catch mice.

Think of the huge amount of energy that would be required to train an AI model to catch a mouse (leaving aside the question of how this would be accomplished in the physical world). Then consider the fact that the human brain runs on less energy than it takes to light a light bulb - so the cat’s brain must require just a fraction of that. Surely, an artificial intelligence that is based on how our brain (or even the brain of a cat) operates will be much more powerful and energy efficient than one of today’s AI models.

How can we develop this new AI? Doing it requires paying attention to the intelligent behavior that animals exhibit in their day-to-day activities. Those activities go far beyond the wildest predictions of what AI, as currently conceived, could possibly do. Maybe, if we just pay attention to what animals do and then “reverse engineer” how they do it, we’ll be able to develop truly intelligent systems that will require a tiny fraction of the power consumed by today’s AI (although this isn’t to say that today’s AI isn’t tremendously useful - just that it’s not based on the intelligence that we utilize in our day-to-day lives).

Here’s an example. Let’s say a dog leaves its home one day and wanders seemingly aimlessly around the neighborhood. There’s no way the dog could keep a record of every tree he watered, every driveway he crossed, every rabbit he chased, etc. Yet somehow he shows up at his home by dinner time. What’s most remarkable is that this is so unremarkable: only about 15 percent of dogs (or cats, for that matter) that leave home actually go missing – and in most of those cases, the animal is eventually returned thanks to ID tags, shelters, etc.

How does the dog do this? He doesn’t have a map or GPS. Also, there’s no way he can use logic to find his way home. For example, he can’t say to himself, “When I was crossing this street a few hours ago and was almost hit by a car, I had just set out from my house. When I set out, I walked along this side of the street. Therefore, if I just keep walking along this side of the street, I’ll probably come home.”

Is there some way the dog can utilize a chain of “reasoning” like this, without consciously invoking it like humans do? Perhaps, but how would that result be achieved? It certainly can’t be genetic. Even if the dog was born in the neighborhood and has lived there ever since, there’s no process by which his genome could be altered so that he knows his way around the neighborhood almost from birth.

Could it be training? When the dog was a puppy, did its mother train it to wander around the neighborhood and find its way home? There are certainly animals, like bees and ants, that can find food outside of their “home” (e.g., their beehive) and return to instruct their peers how to find it (some bees do this by performing a dance for their hive mates that encodes the route to a particularly abundant source of pollen), but this isn’t training. No dog conducts training classes for other dogs, even their own progeny. Yet, dogs find their way home, sometimes from great distances. How do they do that?

Of course, I don’t know the answer to this question. However, there are two things I know for sure:

1. The intelligence that got the dog home isn’t due to some mechanism like generative AI, which is essentially a huge statistical algorithm for predicting the next word in a sentence. An AI model (large language model or LLM) is “trained” on a huge dataset of documents and web pages, which essentially provides the coefficients used by the algorithm. The model doesn’t “understand” the words it uses.

2. Dogs don’t understand words, either. Yet the fact that a dog can demonstrate intelligent behavior shows there could be a different type of AI that doesn’t require the boiling-the-ocean approach of LLM training. If we could start utilizing this different type of AI, maybe we could slow down our headlong rush to dedicate most of our electric generating capacity to powering AI data centers. Maybe we could leave a little power for... you know... heating homes and things like that?
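The “huge statistical algorithm” of point 1 can be caricatured in a few lines of Python. This bigram model is many orders of magnitude simpler than a real LLM (which uses a neural network with billions of weights, not raw counts, and predicts tokens rather than words), but the caricature makes the point: the “training” is just counting, and nothing in the result knows what a dog is:

```python
from collections import Counter, defaultdict

# "Training" just counts which word follows which in a (made-up) corpus.
# The counts are the model's only "knowledge" -- there is no meaning anywhere.
corpus = ("the dog walked home the dog ran home "
          "the cat walked home the dog walked away").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]
```

Feed it “dog” and it answers “walked”, because that pairing was most frequent in its training data – not because it has ever seen a dog walk.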

So, what could explain why the dog can find its way home, since it doesn’t have a Gen AI model embedded in its skull? Again, I can’t answer that question for certain, but I can point out that infants don’t have any command of words, yet they seem to be able to “reason” based on symmetry. For example, a baby – even a newborn – can recognize an asymmetrical face, and reacts differently to it than to a symmetrical face.

How does the baby do this? It’s simple: if the baby has a working knowledge of group theory – the mathematical basis of symmetry – it will have no problem determining when a face is asymmetrical. Now, perhaps you wonder how a baby can understand group theory. The answer to that question is also simple: it might be similar to how a Venus flytrap can count to five. That is, it’s part of the baby’s firmware.
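As a side note, the flytrap’s counting really does behave like firmware: two trigger-hair touches in quick succession snap the trap shut, and around five touches start digestion. Here’s a deliberately simplified sketch of that counter. (In the real plant, the electrical “memory” of a touch decays gradually; the fixed 20-second window below is my simplification for illustration.)

```python
WINDOW = 20.0  # seconds a touch is "remembered" (simplifying assumption)

def flytrap_state(touch_times):
    """Given timestamps of trigger-hair touches, report what the trap does."""
    count, last = 0, None
    for t in touch_times:
        if last is not None and t - last > WINDOW:
            count = 0          # the chemical "memory" of old touches has faded
        count += 1
        last = t
    if count >= 5:
        return "digesting"     # ~5 touches: release digestive enzymes
    if count >= 2:
        return "closed"        # 2 quick touches: snap shut
    return "open"              # a single touch could be a false alarm
```

No learning, no reasoning – just a counter wired in at the factory.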

I suggest that most higher-order animals (at least mammals) have an innate sense of symmetry. Those animals also navigate with the help of place cells, which “remember” a particular place and fire again when the animal (or person) returns to it, and grid cells, which track the animal’s position as it moves. The output of these cells is assembled by the animal into a “mental map” that can guide it home.

For example, if a dog wanders around its neighborhood and place cells mark notable locations, these might self-assemble into a mental map, showing the entire trajectory followed by the dog since it left home. The dog could then use symmetry principles to “reflect” the map in a “mirror”, so the dog traverses his earlier course in reverse.
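Here’s a minimal sketch of that reflect-the-map idea, with grid coordinates standing in for place-cell firings. This illustrates the hypothesis in the text, not actual neural circuitry:

```python
def record_trip(start, moves):
    """Lay down a 'place cell' waypoint after every move of the wander."""
    trail = [start]
    for dx, dy in moves:
        x, y = trail[-1]
        trail.append((x + dx, y + dy))
    return trail

def route_home(trail):
    """The 'mirror': traverse the remembered trajectory in reverse."""
    return list(reversed(trail))

# A short wander from home at (0, 0): two blocks east, one north, one northeast.
outbound = record_trip((0, 0), [(1, 0), (1, 0), (0, 1), (1, 1)])
homeward = route_home(outbound)
```

The dog needs no global map and no logic – only the sequence of remembered places, played back in reverse.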

I think something like the above process guides a lot of human reasoning, which amounts to “navigation” through an abstract space in which ideas and memories take the place of houses and roads. For example, suppose you want to develop a computerized model (I won’t call it an AI model, but it does amount to artificial intelligence in the general sense) to decide the best way to finance the purchase of a badly needed new car.

You treat this as a question of the best path to follow between two points in an abstract multidimensional “personal finance space”. The dimensions of this space are savings, monthly income (assumed to be unchanging), monthly expenses, and credit rating (say your goal is to be able to buy a house in three years, so you don’t want to damage your credit rating very much).

The initial point is your current values of all four dimensions, while the ending point isn’t fully defined, other than that it should include having the new car (meaning monthly expenses will be higher, due to car payments and insurance) and should offer the best possible combination of savings, monthly expenses and credit rating. You would give the model your tolerable ranges for each of these items, and it would provide you with all ending points (i.e., values for all four dimensions) that fall within your acceptable ranges.

Each ending point would be accompanied by a “trajectory” that starts with your current location and includes $X of savings drawdown, $Y of monthly expenses (including car loan payments), and a Z-point change to your credit rating (normally, it will fall, of course). If you found one of these ending points to be acceptable, you would follow that trajectory. If you didn’t think any of them were acceptable, you would adjust your acceptable ranges for the ending point coordinates and rerun the analysis.
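To show how little machinery such a model needs, here’s a sketch of the search. Every number in it (the car price, the down-payment and loan-term options, the credit-score penalty rule) is invented for illustration, and interest is ignored for simplicity:

```python
from itertools import product

CAR_PRICE = 30_000
# Starting point in "personal finance space" (made-up figures).
START = {"savings": 20_000, "income": 6_000, "expenses": 4_000, "credit": 720}

def ending_points(acceptable):
    """Enumerate candidate trajectories and keep the ending points that
    fall within the user's acceptable ranges."""
    results = []
    for down, months in product((5_000, 10_000, 15_000), (36, 48, 60)):
        payment = (CAR_PRICE - down) / months           # ignoring interest
        credit_hit = 20 if down < CAR_PRICE / 3 else 10  # assumed penalty rule
        # Income is assumed unchanging, so it drops out of the comparison.
        end = {
            "savings": START["savings"] - down,
            "expenses": START["expenses"] + payment,
            "credit": START["credit"] - credit_hit,
        }
        trajectory = {"drawdown": down,
                      "monthly_payment": round(payment),
                      "credit_change": -credit_hit}
        if all(acceptable[k][0] <= end[k] <= acceptable[k][1] for k in end):
            results.append((end, trajectory))
    return results

# Your "tolerable ranges" for the ending point:
plans = ending_points({
    "savings":  (8_000, 20_000),   # keep at least $8k in the bank
    "expenses": (0, 4_500),        # total monthly spend under $4,500
    "credit":   (700, 850),        # stay mortgage-ready
})
```

The whole search is a handful of arithmetic operations – no training run, no data center.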

My point in describing this example isn’t to provide a guide to making car purchase decisions, but to show that a process akin to the one a dog follows when it navigates its neighborhood can be turned into a general model that can guide a lot of human activities, while consuming a tiny fraction of the energy that would be required to achieve the same purpose (and probably not as well) using generative AI.

If you would like to comment on what you have read here, I would love to hear from you. Please email me at [email protected] or comment on this blog’s Substack community chat.

Tom Alrich’s Blog, too is a reader-supported publication. You can view new posts for one month after they come out by becoming a free subscriber. You can also access my 1300 existing posts dating back to 2013, as well as support my work, by becoming a paid subscriber for $30 for one year (and if you feel so inclined, you can donate more than that!).
