
Provoking Intelligence

AI researchers spend a lot of time dreaming up new ways to make computers learn automatically, new ways to try to fix the problems we have with those methods, new ways to represent and work on knowledge about the world or the problem domain, about function maximizers and statistics and partial differentials with respect to error and a hundred dozen other things.

Know what most of us don’t spend time on?  We don’t spend a lot of time thinking — really thinking — about what it takes to make systems learn to be intelligent in the ways that we value in human intelligence.  You see, “real” intelligence doesn’t just have to be engineered.  It has to be provoked.

There is a school of thought in Artificial Intelligence about “embodiment”; sometimes it’s called “Situated” AI.  Its basic premise is that in order to learn to deal with an environment that the AI can modify through its outputs (or actions) and can sense (though the world, or its senses, are sometimes misleading), yet which remains fundamentally messy and unpredictable, the Artificial Intelligence has to be situated in such an environment.  So its proponents elaborately code virtual worlds, or build robots with sensors for the AI to control.  This is just about the closest thing in the mainstream, I think, to my idea that Intelligence must be provoked.
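
To make “situated” a little more concrete, here is a minimal sketch in Python of what that premise amounts to: the agent learns only through a loop of sensing a messy world, acting on it, and living with the consequences, where both the sensing and the acting can fail or mislead.  The class names, noise levels, and target value are entirely hypothetical, chosen just for illustration.

```python
import random

class MessyWorld:
    """A toy environment: the agent's actions can change it,
    but sensing is noisy and actions sometimes fail outright."""
    def __init__(self):
        self.state = 0.0

    def sense(self):
        # Perception is imperfect: readings include noise.
        return self.state + random.gauss(0, 0.5)

    def act(self, action):
        # Actions modify the world, but occasionally fail.
        if random.random() > 0.2:
            self.state += action

class SituatedAgent:
    """The agent touches the world only through sense() and act()."""
    def __init__(self):
        self.estimate = 0.0

    def step(self, world):
        observation = world.sense()
        # Maintain an internal estimate built from noisy observations.
        self.estimate = 0.9 * self.estimate + 0.1 * observation
        # Try to push the (estimated) state toward a target of 10.
        world.act(1.0 if self.estimate < 10.0 else -1.0)

world = MessyWorld()
agent = SituatedAgent()
for _ in range(200):
    agent.step(world)
print(round(agent.estimate, 2))
```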

But the idea goes further than that.  The Situationists are right that the system needs to handle the landscape.  But we need more than the landscape; we must also handle the mindscape.

I believe that a system which judges its own performance solely by a single simple function will never achieve anything we’d understand as subjective experience, because its mindscape is one-dimensional.  You could build a perfect, or very general, learning system and it would learn to maximize that function and nothing else.  The model of the world that it builds would not include any object or any detail other than those it requires to do that one job.  To the extent that it perceived the world at all, it would not be able to pick out any details other than those found necessary for maximizing that single function.  If you want qualia – a subjective point of view that creates a detailed sensory model of the world – then you have to build a system that is situated in an environment where qualia serve a purpose.  And qualia don’t serve a purpose, I think, unless your system must interpret its sensory input in terms of its relevance to a bunch of different fundamental goals, such that a detailed representation of the world is something it HAS to build in order to function correctly.
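
Here is a small sketch of the contrast I’m drawing – again with purely hypothetical names and numbers, not a claim about how any real system is built.  An agent that optimizes one scalar can collapse everything it senses into that one number, while an agent with several competing drives has to interpret the same observation for its relevance to each of them, which forces it to represent far more of the world.

```python
def single_objective_agent(observation):
    # Everything is reduced to one scalar; no other detail of the
    # observation needs to be represented at all.
    score = observation.get("food", 0)
    return "seek_food" if score < 5 else "idle"

def multi_drive_agent(observation, drives):
    # Each drive interprets the observation for its own relevance,
    # so the agent must track many distinct features of the world.
    urgencies = {
        "hunger": drives["hunger"] - observation.get("food", 0),
        "safety": drives["safety"] - observation.get("shelter", 0),
        "social": drives["social"] - observation.get("companions", 0),
    }
    # Act on whichever need is currently least satisfied.
    most_pressing = max(urgencies, key=urgencies.get)
    return f"address_{most_pressing}"

obs = {"food": 3, "shelter": 7, "companions": 1}
print(single_objective_agent(obs))   # -> seek_food
print(multi_drive_agent(obs, {"hunger": 5, "safety": 5, "social": 5}))  # -> address_social
```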

Like qualia, all the things we hope to find in a synthetic mind of human-level intelligence cannot just be built – they have to be provoked.  The system won’t develop a trait unless it is confronted with a need that it cannot fulfill without that trait.

No aspect of consciousness can be developed without agency – lacking the ability to act, there is no need for intelligence to direct actions. Empathy cannot be developed in the absence of common interests with others. Without needs or desires to satisfy, you get a creature that just sits there. It doesn’t need to do anything, and it doesn’t require intelligence to do nothing – ergo you get no intelligence.

Without the ability to take actions to satisfy its needs or desires, intelligence has no purpose, because thinking that never leads to action doesn’t matter. Again you get no intelligence. You get something like a plant, which requires no actions to satisfy its needs and no intelligence to guide those actions.

Without a messy world where the instrumentality has limited capabilities and actions sometimes fail, no self-awareness is beneficial or needed – the system winds up matching actions to needs, but because those actions never fail, it never has to apply its intelligence to figuring out its own limitations or becoming aware of them.

Without social interaction in the formative period, you get a sociopath or a solipsist that has no ability to compare itself to others, and no empathy. It’s probably crucial to develop a small community or family of synthetics together so that they experience interpersonal interaction with peers as a formative influence. You have to have others you can observe to learn that you can do things the way they do them because you have the same abilities and can use them in the same way. You have to have others to compete with in order to drive learning. You have to have others to cooperate with to learn that cooperation sometimes works better than competition.

With that in mind, let’s start going over the things we hope to find in a human-level artificial intelligence.

It’s not at all clear which of these are subsets or different aspects of others, whether all of them are essential to being a fully conscious being, or which arise as side effects of developing others.  But if we want these traits, the agent must have needs and its environment must place limits on meeting them; in combination, these provoke every one of the traits by making them – or the things that cause them – a requirement for dealing with those needs and that environment.

Conversely, people who believe we can build a “limited” or “chained” AI that lacks some of these traits but is still a general intelligence should consider every one of them: if we DON’T want a trait, it’s not enough simply to not program it – we also have to make sure it’s not useful to the agent.

So here’s a brief and incomplete list of the mental traits we acquired by developing our remarkably peculiar strategy for meeting the basic needs of “stay fed – have babies – don’t get eaten” in a fairly complicated world with lots of obstacles to meeting those needs.

Abstract thought, Acting, Affection, Analysis, Anger, Anticipation, Attention, Association, Awareness, Belief, Benevolence, Calculation, Choice, Communication, Competition, Cooperation, Counterfactual thinking, Critical thinking, Creativity, Curiosity, Decision-making, Emotion, Emotional intelligence, Empathy, Enmity, Estimation, Exaggeration, Fear, Friendliness, Friendship, Goals and goal setting, Greed, Hope, Knowledge, Language, Learning, Lust, Morality, Imagination, Internal monologue, Introspection, Lateral thinking, Love, Malleability, Memory, Mental imagery, Metacognition, Moral intelligence, Motivation, Noogenesis, Pattern recognition, Perception, Planning, Prediction, Problem solving, Rationalization, Reasoning, Regret, Risk assessment, Sanity, Self actualization, Self control, Self-esteem, Social intelligence, Spatial intelligence, Strategic thinking, Stream of consciousness, Understanding, Values, Value judgment, Vanity, Verbal intelligence, Visual processing, Volition, Wishful thinking, Working memory.

Friendliness has gotten a lot of attention, but there’s a lot of other important stuff in there.  And a lot I didn’t mention.

Of these, I think Sanity is probably one of the most important.  Unfortunately we’ll have no idea how to ensure it until we understand how AIs go insane – or in exactly what ways the AIs we’ve already developed started out in states nowhere near sanity.

As I keep saying, we’re going to solve artificial intelligence, and then it’s going to solve us.