What Consciousness Is

Anybody who follows this blog (hi! All three of you!) probably knows that two of the questions I have been obsessively pondering are “What the heck is consciousness anyway?” and “Why is human intelligence in particular unique?”

I think that I have now answered these questions, at least to my own satisfaction. Regarding what consciousness is, Wittgenstein was right. Consciousness is found nowhere except in the process of organizing information for planning purposeful action.

However, that’s a necessary but not a sufficient condition. I think that consciousness implies self-consciousness, and that therefore part of the information a conscious thing is organizing has to be its own current state, the plan or purpose in mind, and the effect of each on the other.

This trivializes consciousness to the point where it’s ridiculously simple to create a thing that meets the definition. But once the planning process reaches sufficient complexity, I think that self-referential organization is what elevates something from mere problem solving to consciousness.
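
To show just how low that bar is, here is a deliberately trivial sketch in Python. Every name and number in it is invented for illustration, and nobody would call this thing conscious; it merely meets the letter of the definition, organizing information about its own current state, its options, and the effect of each on the other.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: int    # the effect carrying this action out has on the planner
    payoff: int  # the effect the action has on the world

class SelfAwarePlanner:
    """A planner whose plans must account for the planner itself."""

    def __init__(self, energy):
        self.energy = energy  # the planner's model of its own current state

    def plan(self, actions):
        # A sensible plan considers both the planner's ability to carry
        # an action out and what carrying it out will do to the planner.
        feasible = [a for a in actions if a.cost <= self.energy]
        if not feasible:
            return None
        return max(feasible, key=lambda a: a.payoff - a.cost)

planner = SelfAwarePlanner(energy=5)
best = planner.plan([Action("forage", cost=3, payoff=7),
                     Action("hunt", cost=8, payoff=20)])
print(best.name)  # "forage": hunting pays more but exceeds the planner's energy
```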

Now, on to the second question: what is it that’s special about our specific kind of symbol-using consciousness that gives our intelligence the shape it has? This is an important question because every kind of mammal is conscious (sentient), and some of them have big brains and use them to execute very complex survival strategies that they couldn’t manage otherwise (intelligent), but our intelligence is of a different kind than that of any other terrestrial species.

I had been thinking of consciousness and our kind of symbolic, linguistic intelligence as goals that the system should develop some strategy to achieve, but that was mistaken. These things are not goals; they are strategies. This particular type of intelligence is of enormous benefit for executing our particular survival strategy, and is therefore potentially emergent from that strategy.

So what is special about our own survival strategy that requires our kind of intelligence – our symbolic, language-using intelligence? The intelligence that can manipulate abstract symbols in a way that allows us to do mathematics, to plan in terms of symbols, and to design aircraft?

Language.

Language is a tool for executing our survival strategy, one rooted in our origins as a social species with individually adaptive specialization. We develop such varied specializations that we have to explain to each other what we can do, or what we need others to do, in order to cooperate. When we pass along a skill, we use language to explain it. The better we do at language, the better we do at being a social species with individually adaptive specialization.

What I’d been missing was that language isn’t just another tool in our survival toolbox. Language is something that adds tremendous effectiveness to our survival strategy, and we could not do it as well if we were not symbol-using intelligences. Language requires us to communicate using symbols. Communication using symbols, if we’re to be any good at it, requires thinking using symbols. And that kind of symbol-based thinking is exactly what we value about human-style intelligence. Language isn’t something that a pre-existing symbolic intelligence just invents one day. Language is something that gives an intelligence a good reason to be a symbolic intelligence.

I can’t look back through time at those early troops of hominins and see what went on; this is one of many “just so stories” that plausibly explain how a symbol-using intelligence might have evolved. But whether or not it’s how this happened to us, it’s a way that a symbol-using intelligence similar to ours could evolve. That’s good enough for empirical purposes.

It follows that this method could plausibly produce a symbolically-thinking, conscious, humanlike AI. Define a task that requires self-aware planning, in the merely literal sense that forming a sensible plan requires considering the planner’s ability to carry it out and the effect that carrying it out will have on the planner. Make sure that using language, with all its flexibility and expressiveness, is a good strategy for solving it. Then use machine learning techniques to try to create a solver for it. A system capable of self-aware planning and using language to solve that problem would be expected to be a symbolic intelligence – i.e., it should have intelligence of the specific kind (though not necessarily in the same degree) that we think of as uniquely human.
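
To give a flavor of the very smallest version of such an experiment, here is a toy in the spirit of a Lewis signaling game, a standard emergent-communication setup. This is my own minimal sketch, not the full task described above: it has the “language is a winning strategy” property but no self-aware planning, and every name and number in it is arbitrary. A “speaker” sees a world state and emits a symbol; a “listener” sees only the symbol and must act; both are rewarded only when the action matches the state.

```python
import random
from collections import defaultdict

STATES = SYMBOLS = ACTIONS = range(3)

# Simple reinforcement ("repeat whatever happened to work"): each agent
# keeps a table of weights and samples from it proportionally.
speaker = defaultdict(lambda: [1.0] * len(SYMBOLS))   # state  -> symbol weights
listener = defaultdict(lambda: [1.0] * len(ACTIONS))  # symbol -> action weights

def choose(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(20000):
    state = random.choice(STATES)
    symbol = choose(speaker[state])    # "speak"
    action = choose(listener[symbol])  # "interpret"
    if action == state:                # shared reward for coordination only
        speaker[state][symbol] += 1.0
        listener[symbol][action] += 1.0

# An arbitrary but shared vocabulary usually emerges: the symbols mean
# something only because coordinating on them pays.
for s in STATES:
    sym = max(SYMBOLS, key=lambda m: speaker[s][m])
    act = max(ACTIONS, key=lambda a: listener[sym][a])
    print(f"state {s} -> symbol {sym} -> action {act}")
```

Note the shape of the reward in the toy: nothing pays for emitting symbols as such, only for succeeding at the underlying task.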

The problem can’t be language itself – language is a strategy, not a goal.  If you make the problem language itself, you just get something like a database frontend.  That is to say, a system that emulates some behavior in the absence of any reasons for that behavior to exist.

So what the heck kind of task is that, and how many years of computer time are we talking about? I don’t know, but being able to frame the question this way is a definite improvement. “What is consciousness” and “what is special about human-style symbolic intelligence” were philosophers’ questions, not subject to definite answers. And I’m not a philosopher, because I’m not content to endlessly worry questions that can never be resolved. “What kind of task is that” and “how can I use machine learning techniques to create a solver for it” are empirical questions, subject to experiment and testing. So now I have a way forward, however difficult and unlikely, which I didn’t have a week ago.

2 thoughts on “What Consciousness Is”

  1. anders horn

    It took me a long time to reply because I wanted to give a thoughtful and considered response. But then I threw it away and wrote this one instead.
    I absolutely agree that language is a strategy, and I agree that it is much better to aim at something that speaks because it needs to, instead of aiming at a parody of intelligence that imitates the form of language and fails at understanding. Although this is a fair goal, you may find it much easier if you aim at some suitable sub-goals first. I would expect that the step from non-linguistic intelligence to linguistic intelligence is rather small on the implementation side, even though the effect is drastic. Tomasello suggests that a non-linguistic intelligence that was given the expectation that others are trying to cooperatively communicate, and placed in an environment that rewards communication, would become a symbolic intelligence. (An environment that rewards language, rather than a fixed system of signals, would need to have a large variety of tasks.)

    On the quest for non-linguistic intelligence I would like to briefly argue in favor of non-representation as the default. Representations and causal models are very powerful in the domains they apply to, but they are built out of black boxes that act the way they do because that is the way they act. Furthermore, models of things often conglomerate separate parts into a single one that has just enough freedom to predict the external behavior. For example, if you have something that is not self-conscious, it can still learn to act effectively, because it will interpret its own actions as predictable responses on the part of the environment.
    (The argument for non-representation reminds me of the argument for dynamic typing: by only considering things that actually happen, you waste less on speculation and can deal with a more complex and less systematic world.)

    One interesting side note is that for my first 18 years language felt like something foreign and tacked on. I didn’t think in the same terms I spoke; my thinking was mainly in terms of spatial and continuous things.

    1. bear Post author

      It’s necessary to distinguish between several concepts.

      “Adaptive” is an important part of this, and it means capable of modifying behavior over time as a result of new information. There are one-celled adaptive organisms. Classic neural networks that optimize traffic flow in a major city or maximize ad revenue are adaptive. Adaptiveness can work at the level of base reflex, or at the level of learned or conditioned behavior.

      “Sentience” or “Consciousness”, IMO, is the point of development where it is necessary to represent many aspects of the world, including yourself, your ability to affect it, and the effect it can potentially have on you, in order to continue, and where the thing is motivated to continue. Animals, including us, have rich and detailed perception of themselves and the world around them because it is impossible to know which details will be important. Some of these are to some extent “intelligent” in terms of having sophisticated survival strategies that require doing complex and subtle things with information – but all of them except ours work at a subsymbolic level. All sentient things are necessarily adaptive; there is no point in having a rich perception of the world if there is no learning from the world.

      “Sapience” is the ability to do symbol manipulation in order to perform goal-directed actions or reach conclusions – but this says nothing about whether the sapient thing is motivated to achieve anything or actually cares in any way about the goals or conclusions. Classical GOFAI expert systems were “sapient” in this narrow sense, but not sentient and, in most cases, not adaptive. Even when they mimicked sentience and used the vertical pronoun in their output, they were just engaging in programmed mimicry.

      A “sophont” entity or symbolic intelligence, of which the only example we have is humans, is all three — adaptive, sentient, and sapient.

      You’re absolutely right that symbolic intelligence can only arise in a context where there is pre-existing sentience. You need an adaptive system before sentience makes any damn sense, and for sentience to require sapient function as part of its survival strategy, there must already be a sentient thing that is motivated to pursue a survival strategy.

      So, yes. What I’ve mentioned as a potential strategy above is like the last hundred meters of climbing Mount Everest. It’s necessary to reach the peak, but there’s a whole lot of ground, only some small fraction of which is mapped, to cover before we can even start that part of the operation.
