Philosophy, Science, and Consciousness

In the business of empirically attempting to create consciousness, some of the basic issues I need to address are currently considered to be philosophical. But philosophy is the business of asking questions, and doesn’t seem to be particularly concerned with definitively answering them so much as with discussing which answers we prefer. Experience has shown that the world is rarely so easily explained by what we would prefer to be true before we carry out empirical experiments. The world is not usually consistent with the concepts we start with, any more than chemical reactions can be adequately explained by the view of the ancient natural philosophers, who supposed all matter was composed of the elements fire, water, earth, and air.

The world’s symmetry, consistency, and indeed the concepts that support that consistency require us to discover them. When we carry out empirical experiments, we discover in what ways the concepts we started with are wrong, and many things once considered matters of “natural philosophy” – light, magnetism, the movement of stars and planets, and so on – became sciences grounded in knowledge as our theories converged toward better approximations of ground truth. Philosophy has a very important role as the foundation or starting point of the sciences, and as a source of the theories that scientific inquiry must support or disprove. But philosophy itself is pre-scientific.

The fundamental problem staring us in the face here is that we have no blinking clue what consciousness is. The thing I am trying to create here, even if I am successful, will be considered by many to be merely a simulation, and by many others to be merely an approximation. And depending on what philosophical definition we give to consciousness, those people can be considered correct. Some people consider anything that can pass the so-called ‘Turing Test’ to be conscious, but we’ve already done that experiment; I’ve worked on chatterbots before, and bluntly, they aren’t. People unfamiliar with their workings can see them as conscious for the same reason that we see human-ish faces in electrical sockets and in automobile headlights and grills. The Turing Test is not a good test because we are not good judges. Humans anthropomorphize the universe (it hates that!) and are predisposed to see creatures like ourselves where none exist.

And yet I confidently assert that chatterbots are not conscious, even though we have no working understanding of what consciousness is. Clearly I’m using some definition of consciousness, or I could not state that so plainly. That leads back to philosophy. At some point I have to decide what my working definition of consciousness actually is. I’ve said before that I consider intelligence to be something that exists in some measure in many different systems; that I consider ELIZA, for example, to be about as smart as grass, and modern chatterbots to be about as intelligent as clams. Intelligence is relatively easy to define, at least for me: it has to do with the number of different states a system can be in, and how readily, correctly, and appropriately to its existential purpose it transitions among those states. Consciousness per se, my goal and grail, is something else.
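As a minimal sketch of what I mean, suppose we model a system as a finite state machine. Everything in the sketch below – the function, the scoring rule, the thermostat example – is only my illustration of the definition, not a serious metric:

```python
# Toy scoring function: count every state the machine can reach, and
# weight by the fraction of observed transitions that served its purpose.
def intelligence_score(transitions, appropriate, history):
    """
    transitions: dict mapping state -> {stimulus: next_state}
    appropriate: function (state, stimulus, next_state) -> bool, standing
                 in for the system's "existential purpose"
    history:     list of (state, stimulus) pairs actually observed
    """
    # Every state the machine can occupy, whether source or destination.
    states = set(transitions)
    for moves in transitions.values():
        states.update(moves.values())

    if not history:
        return 0.0

    # How many observed transitions were appropriate to the purpose?
    good = sum(
        1 for state, stimulus in history
        if appropriate(state, stimulus,
                       transitions.get(state, {}).get(stimulus, state))
    )
    return len(states) * good / len(history)

# Example: a two-state thermostat whose "purpose" is keeping us comfortable.
thermostat = {"off": {"cold": "on", "warm": "off"},
              "on":  {"cold": "on", "warm": "off"}}
fit = lambda state, stim, nxt: (stim == "cold") == (nxt == "on")
print(intelligence_score(thermostat, fit, [("off", "cold"), ("on", "warm")]))
# -> 2.0: two states, every observed transition appropriate to its purpose
```

On this toy scale a thermostat scores a two; grass and clams would score somewhat higher, and anything with a brain enormously higher. But nothing in the score says anything about whether there is anyone home.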

I think that consciousness means that there is an experiential subjective reality. It is the difference between electromagnetic radiation at 700 nm wavelengths and the color red. It is the difference between the knowledge that a physical instrumentality has sustained damage and the experience of pain. It is the difference between objective information and subjective experience. As Thomas Nagel famously put it, consciousness means that there is something that it is like to be the conscious thing. Unlike intelligence, I think that it exists in different kinds rather than in different amounts. Cats and bats and rats are clearly intelligent, but not as intelligent as people. They are also clearly conscious, but here the difference is one of kind rather than degree: they are not conscious like people.

What are the differences? Lacking a clear definition for subjective reality, we have no way of stating what the differences are. But different kinds of creature have different experiences of subjective reality, and it goes much deeper than the fact that dogs have no experience of red due to lacking that kind of cone cell in their retinas. I see what I believe is evidence of deeper differences.

Dogs, for example, are pack animals; they cooperate in hunting, share territory cooperatively, and eat together from the same food sources at the same time. Therefore they have a subjective experience of cooperative concepts like pointing; wolves even point game for each other during hunts. When you point at something, a dog usually knows to look where you’re pointing. Because they are intensely cooperative in the way they do things, a dog has a subjective experience of cooperation succeeding or failing; a dog is eager to please. Because they eat together at the same time, their experience of eating is governed by a simple principle: the faster you eat, the more you get. In a dog’s reality, the subjective experience of eating implies haste; the winner is the one who eats fastest. Finally, wolf packs share territory with each other, but not with other packs; a dog experiences a strange dog as a threat to territory, and responds accordingly. A stranger on its own territory is not a threat, but a stranger on OUR territory must either be explicitly allowed by the pack leaders or be dealt with.

Cats, on the other hand, are pride animals. They hunt alone, share territory socially rather than cooperatively, and have the exclusive first opportunity to dine on their own kills. They share food when they choose to, usually with their mates or their own offspring, and if getting food is unreliable, they can often gain a survival advantage by saving some for later. Because they hunt alone, pointing is not part of their subjective reality. When you point at something, the cat simply sees you displaying your finger and interprets that as a social act, perhaps intended to express something; but the cat expects that whatever you’re expressing will be social rather than information about the world. And when cats eat, only if they are hungry and very confident of tomorrow’s food supply will they eat quickly; otherwise they’ll return to their food at intervals throughout the day, or over several days. A stranger is a threat to a cat’s own safety or to its social group, not a threat to the pride’s exclusive control of territory, and strangers are dealt with in the same way no matter where they are encountered. And a kitten gets food when mama gives up on walking and lies down, so a kitten’s instinct when it wants food is to get underfoot and make it hard for the one providing its food to walk.

Dogs and cats have a different subjective experience of the same objective reality, because down to the bottom of their brains, their subjective concept maps are different and support different sets of instincts and different modes of experiencing the world. Therefore I consider them to have a different kind of consciousness from each other, or for that matter from ourselves, although we can understand each other fairly well.

Human consciousness is very much about our intelligence; intelligence, after all, is our primary method for dealing with the world. We think of consciousness in terms of intelligence, but that isn’t necessarily a characteristic of consciousness in general. Other creatures, if they thought of consciousness at all, would probably think of it in different terms.

But how far down the intelligence scale does consciousness, as a concept, go? Is there a subjective experience of being, for example, grass? I’d answer with an unqualified no. Of being a clam? I doubt it. Of being a cat, or a dog, or a rat? Well, certainly. Cats and dogs and rats make decisions about how to deal with the world, according to an understanding of both the world and their own abilities within it, formed by experience, in order to meet a set of basic needs. They have real-time sensory feedback about the effect of their own actions on the world. They choose among both learned and instinctive strategies for meeting those needs. They evaluate options, and can reach different decisions when presented with equally valid choices. When they experience pain, they map it onto some aspect or condition of the experienced world and decide what to do about that condition, in addition to the simple reflex of drawing away from the direction the pain comes from.

In some ways this merely regresses the question to what I mean by ‘an understanding of the world,’ which is also subjective and resistant to definition. But there are a few hypotheses I’m willing to accept as working theories until I know enough to formulate experiments to test them: that consciousness must be rooted in experience; in knowledge of the world and of one’s own abilities in the world; in a set of needs that must be met; in different potential strategies for meeting them; in decisions about which strategies to pursue and how; and in stimulus or input that includes the effects of one’s own actions on the world and whether the state of the world meets one’s needs.

So, whatever else consciousness is, consciousness as I understand it cannot exist in an agent powerless to affect its world, or in an agent without needs that must be met, or in an agent that does not get feedback about what its actions do in or to its world and whether it can bring about states of the world that cause its needs to be met.
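To make those conditions concrete, here is a minimal sketch, assuming nothing more than a toy world with food in it: an agent with a need (hunger), actions that genuinely change its world, choices among learned strategies, and feedback about whether its own actions left the need better met. All of the names here are my hypothetical illustration of the structure, and nothing about this toy is claimed to be conscious:

```python
import random

class World:
    """A trivially simple world the agent can actually affect."""
    def __init__(self):
        self.food = 0

    def apply(self, action):
        """Apply the agent's action and report what happened."""
        if action == "forage":
            self.food += 1
            return "found_food"
        if action == "eat" and self.food > 0:
            self.food -= 1
            return "ate"
        return "nothing"

class Agent:
    """An agent with a need, strategies, real choices, and feedback."""
    ACTIONS = ("forage", "eat", "rest")

    def __init__(self):
        self.hunger = 10                              # a need that must be met
        self.value = {a: 0.0 for a in self.ACTIONS}   # learned strategy values

    def choose(self):
        # Decide among strategies: usually the best-valued one, sometimes
        # an experiment; equally valued options are resolved by chance, so
        # the same situation can yield different decisions.
        if random.random() < 0.2:
            return random.choice(self.ACTIONS)
        best = max(self.value.values())
        return random.choice([a for a in self.ACTIONS if self.value[a] == best])

    def feedback(self, action, outcome):
        # Sense the effect of one's own action on the world, and whether
        # it moved the world toward meeting the need.
        relief = 0.0
        if outcome == "ate" and self.hunger > 0:
            self.hunger -= 1
            relief = 1.0
        # Update the learned estimate of how well this strategy works.
        self.value[action] += 0.5 * (relief - self.value[action])

world, agent = World(), Agent()
for _ in range(40):
    action = agent.choose()
    outcome = world.apply(action)    # the agent affects its world...
    agent.feedback(action, outcome)  # ...and learns from the consequences
print(agent.hunger, agent.value)
```

The point of the sketch is purely structural: remove the need, the power to act, or the feedback loop, and there is nothing left for the agent to experience being.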

Of course this is philosophy, not science. I have only convinced myself of something by insight into what I believe about it; I have not formed a theory that I have a definite means of testing against the world. And if these conditions are necessary for consciousness, I still have no real confidence that they are sufficient. I am convinced that any agent which does not have these things will never be conscious, but I cannot say with confidence that an agent which does have them is or can be conscious, since it may still lack other things which I don’t yet even have the concepts to express.

And I have not addressed the question of whether such a consciousness would be ‘real’, ‘simulated’, or ‘approximated’, which is a question I consider fatuous. It is my contention that there is nothing the human brain does that cannot be explained by the theory that it is made of atoms interacting according to the laws of physics. Humans, and cats and dogs and rats, are an existence proof that consciousness can be realized in a purely physical instrumentality. And if that is true, then there is no reason to believe that a physical instrumentality made of protein and water is somehow privileged over one made of silicon and metal.