Characteristics of Human-Style Intelligence

In previous posts, I’ve compared various programs to the intelligence of various natural systems.  But that’s been a comparison of magnitude, and as I’ve said before, intelligences differ in both magnitude and kind.  So, I’m here today to ruminate about what kind of intelligence human beings have.

There are creatures on Earth with bigger brains than ours.  Elephants, for example, and several kinds of marine mammal.  And those creatures, make no mistake, are very smart, in the sense I used the word “smart” before.  They respond to their environment with great subtlety and complexity, advancing their interests in that environment more effectively than any simpler actions could.  In other words, they are intelligent in the broad sense because they execute subtle and complex survival strategies.

But their intelligence is different from human intelligence, and doesn’t have the same varieties of application.  Elephant brains give elephants deep memory.  When a drought hits, elephants survive because they remember where the remaining water holes were during the last drought, ten years earlier.  Elephants remember where paths are, remember places, remember other individual animals, and remember previous interactions with those animals – even creatures of different species.  They are far better at memory than we are, and their memory abilities are key to producing complex and highly appropriate behavior for them.  They got the kind of intelligence they needed to address their own survival problems.  But they didn’t get what we value about our own intelligence.

Dolphins have big brains, much of which are devoted to processing hearing.  Lots of animals have better hearing than us, but dolphins and orcas really are on an entirely different plane of performance with respect to acoustics.  Not only can they hear things happening halfway across the planet, they can pick out tiny details in a scene – even details internal to most solid objects – by probing them with sonar.  Once again, that ability allows them to produce highly intelligent (complex and well-adapted) behavior.  But once again, it isn’t like human intelligence and it doesn’t give them the things we value about human intelligence.

So… cognitively speaking, what is it (or what several things is it) that humans do better than other species, that produces our unique kind of intelligence?  What are our brains optimized for that other brains aren’t?

I think it starts with social interaction.  Humans have developed a unique social structure – which I call Individually Adaptive Specialization – that amplifies the benefits we get from having our type of intelligence.  Let me break that down a bit.  What I mean is that an individual can perform some role beneficial to their group, and that these roles can change at need, given the chance to learn how to do something else.  The adaptation to different roles isn’t set at birth the way it is for most species in which individuals are adapted to different roles, nor is it unchangeable after it is acquired.  More remarkably, it isn’t even limited to a choice of one among a very few genetically predetermined roles.  And that is, as far as I know, an utterly unique way for a species to cooperate in dealing with its environment.

And so let’s start thinking there.  If our intelligence is about cooperation, then let’s think about human intelligence – human capabilities – in terms of what this kind of cooperation requires.

Let’s start with language.  Open-ended specialization requires complex communication.  Someone has to be able to say what they’re providing and what they need, because it’s not predetermined.  An ant or a bee may be individually specialized by caste, but it just is what it is – it provides what members of its caste provide and it needs what members of its caste need, and those things are the same for all members of its caste.  Those things don’t have to be communicated, because how to cooperate with entities of a preprogrammed subtype can be built into all the other preprogrammed subtypes.

But a human?  A human can have adaptive specializations in such an enormous variety of things that you can’t just have preprogrammed knowledge of how to interact with that “kind of” human.  A human can provide value by picking melons, digging roots, gathering berries, protecting the kids, or hunting deer for dinner.  A human can even switch between those things on different days.  And then they swap the benefits of this specialization around, by barter.  I’ll give you a bag of berries if you watch today and make sure my kid doesn’t wander off and get hurt.  I’ll swap you some melons for a haunch of meat.  If you want to cook your meat, go talk to her; she knows how.  But she’ll want some of the hide for something she’s making, and a couple of bones to take home for her dog.  And she’ll want a little extra to give to her daughter, who brings her the firewood she cooks with.  And so on.

Language makes it work.  Language makes our adaptations more responsive in an enormous variety of environments.  Language allows behavior to be acquired or changed rapidly at need; hunters may think of themselves as providing meat, but when winter sets in and people need furs, they talk to the hunters and ask for furs, and the hunters’ behavior changes.  They adaptively specialize into providing furs while continuing to provide meat.
Maybe somebody picks up a new skill – a new adaptive specialization – at tanning hides so they stay soft and flexible, and somebody else picks up a new adaptive specialization for making garments and bedrolls out of the tanned hides.  And at every step of the way, this adaptation and specialization happens in a way that’s mediated by language.  People communicate needs, and people barter the value they can provide via their own adaptive specialization to get their needs fulfilled.

But that requires more than just language.  That requires planning.  If you want somebody to watch your kids today and you’re willing to offer them a bag of berries you haven’t picked yet as their reward, you have to be able to make the plan, and that requires thinking about the future.  It means realizing when your kids would be in danger if you just left them while you went off to pick berries, and thinking about who’s a good choice to protect them.  It means thinking about what would be a good way to convince that person to do it – if you plan to pick berries anyway, would sharing them be a suitable reward?  It means making a plan to go get that reward (and enough for yourself, and some to trade to other people) and then going and doing it.  So a skill in imagining the future is important.  Specifically, a skill in imagining different versions of the future in which you do different things and get different results, and then choosing what to do based on which of those imagined futures you like better.  And that is another valued aspect of human intelligence – another thing we do better than any other species.
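The loop in that paragraph – imagine several futures, score each one, pick the action whose future you like best – can be sketched in a few lines.  Everything here (the actions, the toy `simulate` and `preference` functions) is a hypothetical illustration of the idea, not anything from an actual system:

```python
def choose_action(state, actions, simulate, preference):
    """Pick the action whose imagined future scores highest."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        imagined_future = simulate(state, action)   # the "what if?" step
        score = preference(imagined_future)         # how much do we like it?
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage: deciding how to spend the day.
state = {"berries": 0, "kid_safe": False}

def simulate(state, action):
    future = dict(state)
    if action == "pick berries alone":
        future["berries"] = 10        # plenty of berries...
    elif action == "trade berries for childcare":
        future["berries"] = 7         # fewer berries, but...
        future["kid_safe"] = True     # ...the kid is watched
    return future

def preference(future):
    # Berries are nice; the kid's safety dominates everything else.
    return future["berries"] + (100 if future["kid_safe"] else 0)

print(choose_action(state,
                    ["pick berries alone", "trade berries for childcare"],
                    simulate, preference))  # → trade berries for childcare
```

The point of the sketch is only that the chooser never runs either future for real; it compares imagined outcomes and acts on the comparison.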

If the shaping force giving form to our intelligence is cooperation in adaptively specializing to general tasks, then being physically able to do a wide variety of tasks is important too.  That has several general implications.  One of them is that more of our brain would be dedicated to manipulative skills (acting on things held in the hands) than agility skills (movement to, from, and around things) because manipulation gives a greater variety of possible specializations.  Once again, manipulation is something we’re better at than any of our simian and prosimian cousins, and agility is something we (mostly) aren’t as good at.

Even more strongly than that, we amplify the benefit of individual adaptive specialization by having highly flexible skill sets.  With practice, we can become great at things which, for the most part, we are not born with any special ability to do.  We don’t have to be superb at everything, because our troop or tribe will have someone specialized in that role who is superb at it.  But we have to be able to get really really good at whatever it is we do because the whole troop or tribe will be depending on us to do that.

Accordingly, we have a high degree of brain plasticity, and when we’re born we are strongly preprogrammed with only a very few basic skills.  Our interactions with our troop or tribe are open-ended; we have to learn what they need us to do, and learn how to get them to meet our needs by doing it for them.  We can’t be preprogrammed with a role the way a horse is preprogrammed to know how to run, because whatever role we eventually specialize into will be different for different people and will depend on the needs of our troop or tribe.  And yet again, getting better at things by practicing them is something humans excel at.  Other species can learn skills by practice, but compared to us, the benefit they get from practice is much smaller.

So, this is my theory about what characterizes human intelligence as opposed to other kinds.  I think that our intelligence exists specifically to support the way we adopt specialized roles in order to further the survival interests of our social groups.  It reads like a laundry list of what we usually consider intelligence to be, because we’ve usually been trying to articulate what sets us apart from other species.  Here are the things we excel at:

  • We need and value social cooperation.
  • We have excellent manipulative ability.
  • We can learn a nearly endless variety of skills, no matter how arbitrary they seem.
  • We learn to be much better at skills, by practicing them.
  • We have planning and reasoning that allow us to predict futures contingent on our actions and choose actions to maximize the value of those predictions.
  • We have language that allows us to negotiate cooperation – the use of skills on our behalf by others, or by us on behalf of others.

I usually use the word “intelligence” in a general way, meaning the ability to execute a complex and responsive survival strategy, regardless of whether that survival strategy involves language or symbolic thinking. But this particular kind of intelligence is the basic survival strategy of our species; it is what we use our brains for. When people use the word “intelligence” in a much narrower sense, meaning what distinguishes human beings from animals, this is mostly what they’re talking about. Many would claim that things such as manipulative ability and the great benefit we derive from practicing skills belong in a different category. I think they’d be wrong about both. These abilities exist in an important synergy with our kind of intelligence. The physical and mental abilities are not independent categories; if either were missing, the other would be diminished in importance to our survival strategy, and therefore unlikely to have been as refined by evolution.

Now, looking back at that list? For the most part that is not what we’ve been designing our so-called artificial intelligences to do. We should not be surprised then, that they haven’t been doing it.

We’ve been building systems that have no clue what social cooperation is and no need for it, hence no adaptive use for language. We’ve been building systems that can learn to do exactly one task – and usually their network morphology, or however their structure is expressed, is explicitly specialized to do that single task and no other. They have no plasticity, in that their structure does not change in response to the need to learn a new task. The systems we’ve been building, other than simple minimaxing systems for open games, do not choose between different courses of action by comparing extrapolations of the future.
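That minimaxing exception is worth a glance, because it really is future-comparison in miniature: extrapolate every line of play, back the results up the tree, and pick the move with the best guaranteed outcome.  A minimal sketch, using a made-up pick-a-number game (each player adds 1 or 2 to a running total; one side wants the total high, the other low) rather than any real game:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return (value, best_move) by extrapolating futures `depth` plies deep."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None        # leaf: just score this future
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for move in options:
        value, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and value > best) or (not maximizing and value < best):
            best, best_move = value, move   # keep the move whose future wins
    return best, best_move

# Toy game: state is the running total; players alternately add 1 or 2.
moves = lambda s: [1, 2]
apply_move = lambda s, m: s + m
evaluate = lambda s: s                      # maximizer wants the total high

print(minimax(0, 2, True, moves, apply_move, evaluate))  # → (3, 2)
```

The maximizer adds 2 because it has already “imagined” that the minimizer will answer with 1 – exactly the compare-extrapolated-futures behavior most learned systems lack.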

And finally, for the most part they are trained rather than practicing, which is subtly different. Practice is exploratory; training is repetitive. Usually, when people use the word “practice” they mean doing both, but when we train neural networks, we train them to reproduce known results given known examples, and that is training only.
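The contrast can be made concrete with a toy sketch – under the assumption that “training” means nudging a parameter toward fixed, known input/output pairs, while “practice” means exploring nearby variations and keeping whichever performs better.  Nothing here is a real library API; both routines are deliberately minimal:

```python
import random

def train(parameter, examples, steps=100, lr=0.1):
    """Training: repeat known input/output pairs, reduce the known error."""
    for _ in range(steps):
        for x, target in examples:
            error = parameter * x - target
            parameter -= lr * error * x     # pull toward the known answer
    return parameter

def practice(parameter, try_out, attempts=200, step=0.5):
    """Practice: try exploratory variations, keep only what works better."""
    best_score = try_out(parameter)
    for _ in range(attempts):
        candidate = parameter + random.uniform(-step, step)  # explore
        score = try_out(candidate)
        if score > best_score:              # keep the improvement
            parameter, best_score = candidate, score
    return parameter

random.seed(0)  # make the exploratory run repeatable
trained = train(0.0, [(1.0, 3.0), (2.0, 6.0)])        # targets follow y = 3x
practiced = practice(0.0, lambda p: -(p - 3.0) ** 2)  # reward peaks at p = 3
print(trained, practiced)  # both end up near 3, by very different routes
```

The trainer is handed the right answers; the practicer is only told how well each attempt went and must discover the rest by exploring – which is the distinction the paragraph is drawing.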

That laundry list above, I maintain, is vital to the distinction between a genuinely conscious artificial intelligence and a function maximizer.