The Mind

Wow, that’s an ambitious title. The Mind.

It’s also an ambitious idea. I had this brainstorm a few months ago, and I’ve been trying to write something about it that doesn’t seem completely incoherent or impossible to understand. The odds of me being exactly right about this are … well, let’s just say that they’re somewhat more than zero but likely closer to zero than unity. And even if I happen to be right, it’s a long way from idea to implementation, and I’m going to need a lot of more specific knowledge to fill in things that, from a sufficiently great altitude, might appear to be details.

Here is the point: I believe that I may now understand how the mind works. Even if I’m wrong about it being the way biological minds work, there is a possibility that the model I now consider may be a valuable step on the path to so-called “strong” Artificial Intelligence: that is to say, intelligence on the scale of animals, possibly one day including human brains. And it may be a way to “extend” human, or biological, intelligence with fully integrated computer-like faculties that evolution has not seen fit to give us yet.

So, yes, this is computer science. Specifically, Mad Science. I should insert on this page some pics of famous Mad Scientists from history and movies.

Let’s begin with things we know about animal brains. These things are as true of lizards and fish as they are of people.

The brain is organized into small units, or “cortical columns,” of a few thousand to a few tens of thousands of neurons. Within these columns the neurons are tightly connected to one another; between columns there are pervasive connections, but nothing like as complete a map of connections as within a column.

Each column is capable of producing repetitive signals that vary in frequency and in the precise within-column mapping of which neural connections are most active. These signals propagate between columns in what we have been calling “brain waves,” usually with fairly high fidelity, so that the same or a closely related pattern of repetitive signals manifests first in nearby columns and later in increasingly distant ones.

But while columns that are close neighbors tend to have the same responses, carry the same patterns, and propagate patterns to each other with high fidelity, columns that are not so physically close to each other are not such close copies, so the repeating patterns tend to diverge into related patterns as they propagate physically through the columns. This is especially true where columns are simultaneously responding to several different patterns propagated from various other points in the network. And when these new, related patterns arise, they, like all the others, themselves propagate as far as they may across the brain.

We know that different regions of the brain are specialized (to some extent physically, and also by training) to specific tasks. We say that symbolic thought is mostly done in the prefrontal cortex, that the autonomic nervous system is largely controlled by the medulla oblongata, that vision is processed in the visual cortex, that hearing, language understanding, and speech generation are closely intermixed in another region, that sexual impulses and desire are mainly functions of the R-complex, that the primary motor cortex regulates bodily movements, that the cerebellum interprets and alters these signals to maintain coordination and balance, and so on.

On a smaller scale, we speak of the “cortical homunculus” as the mapping of the regions in the primary motor cortex devoted to control of particular parts of the body. But we also know that these allocations of brain regions are plastic (within limits) and highly adaptable to damage or disability or different amounts of training and practice. Modern teenagers have been shown to have, on average, a greater portion of the primary motor cortex devoted to the control of their thumbs, for example, than previous generations, and it’s been theorized that this additional allocation of neurological space is due to thumb keyboards on phones and the modern habit of texting more or less continuously throughout the day.

We know that trainable tasks requiring great concentration or coordination produce more stable patterns of repetitive signals in larger parts of the brain. When someone who is really good at throwing things is throwing a thing, a fairly large part of that person’s brain is involved in very stably reproducing a characteristic pattern of brainwaves that corresponds with throwing a thing. The greater this allocation of brain is (i.e., the more columns are resonating in unison in that characteristic pattern), the more closely the produced motion matches an ideal motion, and the more accurately the resulting pitch hits its target. When someone who is not nearly as good or not nearly as practiced at throwing things is throwing a thing, a similar pattern is produced, but it does not propagate across the brain as far. It is not reproduced with the same fidelity, and it does not come to dominate nearly as much as the corresponding pattern does in the mind of the better pitcher.

We know that babies are not born with what we think of as “sight” — they learn to track and follow things with their eyes only gradually. The visual system, including the visual cortex and its connections to the rest of the brain, must be trained before it can adequately function. We know that many functions of interaction between different systems, such as coordination of vision with motor skills, take even longer to develop.

All the same, I have watched a colt twenty minutes old standing up and nursing, and the same colt had learned to run by the end of its second hour on Earth. That colt did not need to learn how to see, how to stand, and so on; it already knew. The minds of reptiles are even more “pre-formed” than those of horses: snakes barely out of their eggshells act exactly like adult snakes. So it’s clear that some functionality is preprogrammed in the brain — in the brains of snakes and to a lesser degree horses, but apparently not so much in the brains of humans. For snakes and horses, that preprogrammed functionality includes coordination between vision and motor function.

Humans and horses have come down different evolutionary pathways, but both are more complicated than snakes, and it’s a bit hard to understand why something important like that, once mastered in an ancestral species, would be “left out” of the specification for human brains. But there is at least one plausible reason. Human beings have found an evolutionary advantage in thinking in terms of symbols, and the visual cortex is heavily influenced by what we believe we are seeing. Patterns caused by direct perception give rise to related patterns in the parts of our brain that connect them with meanings, and these related patterns, in turn, influence the patterns in the areas of our brains that direct perception drives in the first place. In other words, the symbols by which we understand what we see are part of a feedback loop: our visual cortex has to respond not just to visual stimulus but also to learned symbols.

Where symbols whose meanings can vary are involved, it takes interaction with the world after birth to acquire a meaningful symbol vocabulary that helps us decode what is seen into meaningful categories.

It seems plausible to me, therefore, that the long period of human infant helplessness corresponds with the need to complete a great deal of the neural training necessary for humans’ symbolic, relatively abstract mode of thinking.

And this makes sense. Humans are found in every climate where there is food for them to eat and a way for them to breathe, from the equator to the Arctic, and deal with an astonishing variety of local conditions in an even more astonishing variety of ways. The same external perceptions mean different things in different circumstances for humans; being born with a strong set of reflexes and instincts in place (and therefore with the necessary pre-formed vocabulary of symbols) would interfere with learning the appropriate set of symbols to deal with local conditions, whatever they might be.

Horses, on the other hand, are plains-dwelling grazers found in relatively temperate climates, and their method of interacting with the world, and therefore what certain things must mean to them, is largely predetermined. It suits them to be born with a pre-formed set of symbols, especially since their primary defense is to run across open plains. That colt had to know how to stand up, how to run, and how to interpret the landscape it saw on an open plain. The human baby has to know something far harder: how to learn.

Nevertheless, humans do have some universal symbols and some preprogrammed behavior. I think every culture has nearly identical perceptions of day and night, of anger, of desire, and of fear. I think every culture has some idea of its own organization and of the existence of relationships between self and others, and between others which are not self. I think every culture has some idea that communities are safer than living alone in the wilderness. I think that the desire for continual social interaction among humans is nearly unquenchable, whereas many creatures are naturally solitary except when it is time for them to mate. We have instincts, lots of them. But our instincts are, arguably, more subtle and capable of far more interpretations than those of the horse or the snake.

Now, remember for a moment that I was talking about the organization of the brain. We know that some parts of the brain are organized for us before birth, and are plastic to a lesser extent than the rest of the brain after birth. The patterns that these parts of the brain are optimized to realize and propagate, we call “instincts”.

Finally, we know that neurotransmitters carry signals between neurons within the brain, and that hormones and enzymes released in particular regions of the brain affect the functioning of neurotransmitters, altering the way both in-column and between-column connections work, often on a per-synapse basis, because different synapses are regulated or bound by different neurotransmitters. Predictably, we find that brains function differently in the presence of different medications affecting neurotransmitter release and uptake, and in the presence of different hormones and enzymes that similarly affect the functioning of neurotransmitters.

Okay. What does all this suggest about artificial minds?

In his book How to Create a Mind, Ray Kurzweil concludes that cortical columns can be modeled as hierarchical hidden Markov models (HHMMs). A little light reading for mad scientists!

He’s close, but I think that he’s wrong. HHMMs have many of the required properties, but they don’t train themselves the way neural networks do, they aren’t as versatile at learning large numbers of different patterns, and the more I understand the neuroanatomy involved, the more crucial I think the activation of neighboring cortical columns in response to diverse input is.

The firing pattern with which a particular cortical column responds is mostly a reproduction of the firing patterns going on around it, but in some cases those patterns may be dissimilar, and in some locations it is normal for them to be dissimilar. When that happens, a column starts a pattern as closely related to all of them as it can manage. A given cortical column is also influenced by the connections it may have to the outside world via sensory or motor channels, by its particular makeup (some regions of the brain make columns larger, more complex, or sensitive to different sets of neurotransmitters and hormones than others), and by the “substrate” in which it finds itself: depending on its location it may be subject to a different environment of neurotransmitters and hormones, or to regional signals of other kinds.

My insight here is that these neural groups can be modeled as ART (adaptive resonance theory) networks. ART networks can be trained to learn many thousands of patterns and, given a set of inputs, quickly converge on one of their learned patterns, which they then continue to signal with some particular rhythm (depending on the number of layers of looping neural connections engaged in producing that pattern). ART networks reproduce, or are capable of reproducing, all of the behaviors of cortical columns described above.
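To make that concrete for my fellow geeks, here is a minimal sketch of the simplest member of the family, ART-1, which handles binary patterns. The class name, the parameter values, and the fast-learning rule are my illustrative assumptions, not a claim about exactly how Grossberg’s full theory (or a cortical column) works:

```python
import numpy as np

class ART1:
    """Toy ART-1: binary inputs, winner-take-all categories, fast learning.

    vigilance sets how close an input must be to a stored template before
    the network 'resonates' with it; inputs that match nothing closely
    enough commit a brand-new category instead.
    """

    def __init__(self, n_inputs, vigilance=0.75, alpha=0.001):
        self.n = n_inputs
        self.rho = vigilance
        self.alpha = alpha
        self.templates = []  # one binary template per learned category

    def present(self, pattern):
        I = np.asarray(pattern, dtype=bool)
        # Rank stored categories by the choice function |I & w| / (alpha + |w|).
        scores = [(np.sum(I & w) / (self.alpha + np.sum(w)), j)
                  for j, w in enumerate(self.templates)]
        for _, j in sorted(scores, reverse=True):
            w = self.templates[j]
            if np.sum(I & w) / max(np.sum(I), 1) >= self.rho:
                self.templates[j] = I & w  # resonance: fast learning shrinks the template
                return j
        self.templates.append(I.copy())   # no resonance: commit a new category
        return len(self.templates) - 1

# A unit quickly 'recognizes' inputs near a learned pattern and keeps
# emitting that category -- the stable repetitive signaling described above.
unit = ART1(n_inputs=8, vigilance=0.7)
unit.present([1, 1, 1, 0, 0, 0, 0, 0])  # commits category 0
unit.present([1, 1, 0, 0, 0, 0, 0, 0])  # similar enough: resonates with 0
unit.present([0, 0, 0, 0, 1, 1, 1, 1])  # too different: commits category 1
```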

Assuming you have a few thousand ART subnetworks, with the outputs of each connected not only to its own input but also to the inputs of a few of its neighbors, you get the pattern of brainwaves propagating across the brain as the same pattern is reproduced in an increasingly large part of it. The pattern is transmitted with high fidelity until it reaches one or more networks trained to different but similar patterns.
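Continuing the toy sketch above (and it is only that), a one-dimensional chain of ART1 units is enough to see both effects: a seeded pattern sweeps down the chain with high fidelity, and a unit previously trained on a similar-but-different pattern bends the wave into a related pattern as it passes through. The chain length, vigilance value, and pre-trained pattern are arbitrary choices for illustration:

```python
# Five-unit chain; each tick, every unit re-presents its left neighbor's
# last output, so a pattern seeded at unit 0 spreads rightward.
chain = [ART1(n_inputs=8, vigilance=0.6) for _ in range(5)]

# Pre-train unit 3 on a similar but different pattern, so the wave that
# reaches it comes out subtly changed: the divergence described above.
chain[3].present(np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=bool))

state = [None] * 5
state[0] = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # seed pattern

for tick in range(4):
    nxt = list(state)
    for i in range(1, 5):
        if state[i - 1] is not None:
            j = chain[i].present(state[i - 1])
            nxt[i] = chain[i].templates[j]  # the pattern this unit now signals
    state = nxt
# Units 1-2 end up signaling the seed verbatim; units 3-4 signal a
# closely related pattern instead.
```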

How did the networks become differently trained? By being differently affected by hormones that selectively block some synapses or prevent the backpropagation of pattern learning at particular times when patterns were being learned. How does practice enhance the ability of the system to reproduce desirable patterns over a larger part of the brain? By training a pattern under a variety of different conditions of blood chemistry and hormones, thereby teaching the pattern to the ART networks capable of learning it under each of those sets of conditions and enabling the pattern to transcend such barriers later.
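As a crude model of that chemical gating, one might mask a random subset of “synapses” (input lines) during each training episode, so that the same pattern gets stored several times under different regimes. The mask probability and vigilance here are, again, arbitrary assumptions for illustration:

```python
# Train one pattern under several hypothetical 'hormonal regimes', each of
# which blocks a different random ~20% of the input synapses. With high
# vigilance, the masked variants tend to land in separate categories, so
# the unit can later resonate with the pattern under any of those regimes.
rng = np.random.default_rng(0)
unit = ART1(n_inputs=16, vigilance=0.9)
pattern = rng.random(16) < 0.5

for episode in range(5):
    hormone_mask = rng.random(16) < 0.8  # True = synapse currently active
    unit.present(pattern & hormone_mask)
```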

Other results follow as well. Associative memory, whereby having one thing in our minds is likely to lead us to remember a related thing, and then another and another. Analogy, where we notice a pattern of similarity between two different symbols and the way they interact with other symbols. And even creativity, where the pattern produced is novel, which could result from an epiphenomenon or ‘local minimum’ of the other patterns trained locally to the ART subnetworks that produce it.

At some boundaries between regions of the brain, connections are very thin and/or feedback is differently structured. This means that at those boundaries, the ‘resonant pattern’ of one region of the brain becomes merely ‘input’ to another, in that the pattern may influence what firing pattern is produced in the other region, but that the influence isn’t necessarily geared toward reproducing the pattern.

Different regions of the brain have different connectivity to the sensory and/or motor apparatus, making those regions ‘naturally’ ideal for certain kinds of activity. Motor connections are as important as sensory connections, contrary to the expectations of some: there’s a tight feedback loop in the real world between what you do and what you perceive, and in fact the whole purpose of the brain can be considered as the exercise of control over DOING in order to achieve given states of PERCEIVING. Seen that way, it becomes very obvious that the brain has no purpose in the absence of an ability to act and influence its own perceptions. The neurons that send impulses to my typing fingers are as important a part of a larger system and pattern as the perceptive systems that watch the words appear on the screen, feel the keys under the fingertips, and hear the keyswitches as I type. On a more fundamental level, horses and snakes and people all use their brains to effect actions directed toward the perceptions of a well-fed stomach and the absence of perceived injuries.

But this is an interesting thing. If we are building an artificial intelligence, we don’t need to limit it to the same perceptive and manipulative capabilities we have. We can create specialized “organs” that respond to patterns of firing in some regions of cortical columns by producing other patterns at a separate region, thus acting as motor connections at the point where they pick up patterns and as sensory connections at the point where the new patterns are produced. This kind of “black box” could easily be used to embed, say, mathematical calculators and standard software into the capabilities of an artifice, in such a way that it could use them consciously or unconsciously, as easily as we recognize someone’s face.
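A trivial example of such an organ, under the blunt assumption that numbers are coded as fixed 8-bit firing patterns (a real design would presumably have to learn its own coding), might look like this:

```python
# Hypothetical 'calculator organ': reads firing patterns at a motor site,
# does a conventional computation, and emits the result as a fresh pattern
# at a sensory site elsewhere.
def calculator_organ(motor_a, motor_b):
    a = int("".join("1" if x else "0" for x in motor_a), 2)
    b = int("".join("1" if x else "0" for x in motor_b), 2)
    total = (a + b) % 256  # keep the result within the 8-bit code
    return np.array([c == "1" for c in format(total, "08b")])

five = np.array([c == "1" for c in format(5, "08b")])
three = np.array([c == "1" for c in format(3, "08b")])
result = calculator_organ(five, three)  # the pattern coding 8 'appears' as sensation
```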

We could also directly give an artificial intelligence an internet-protocol box, which responds to a request with a webpage containing new potential requests in the form of links. And finally, because we sort of understand how it works for us, robotic embodiment and visual cameras and speakers and microphones. We know how to construct ART networks that do much of the job of a visual cortex; we can even pre-populate them with symbols, at the risk of limiting what else they can learn. We know how to train ART networks; if we can figure out what training we need to model the other parts of the brain, we can build the individual groups.
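Pre-populating is the easy part, at least in the toy model: commit templates by hand before the system has any experience. The two “innate” patterns below are placeholders for whatever instinctive vocabulary a real design would call for:

```python
# Seed a unit with hard-wired 'instinct' templates before any learning.
instincts = ART1(n_inputs=8, vigilance=0.8)
for innate in ([1, 1, 1, 1, 0, 0, 0, 0],
               [0, 0, 0, 0, 1, 1, 1, 1]):
    instincts.present(np.array(innate, dtype=bool))
# Later experience adds new categories, but the built-ins are already
# there to be resonated with, and they compete for the unit's capacity.
```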

There is a problem with backpropagation beyond about two layers of hidden nodes, usually called the ‘vanishing gradient’ problem. But, using something like the regulatory mechanisms that restrain particular synapses, we can temporarily simplify the network to the point at which gradients are visible again; the patterns learned under those conditions will still influence the whole when conditions change. We also have ‘global’ conditioning techniques available, which reinforce or inhibit repetition of ‘recent’ behavior across a particular ART network when applied; these are slow, and highly imprecise relative to backpropagation, but they do work.
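One simple form of such global conditioning, sketched under the assumption that each connection keeps a decaying ‘eligibility trace’ of its recent activity, is to nudge every recently used weight with a single scalar reward, no per-connection gradient required:

```python
# Reward-modulated layer: a lone reward signal reinforces (or inhibits)
# whatever this layer has done recently, via per-weight eligibility traces.
class RewardModulatedLayer:
    def __init__(self, n_in, n_out, lr=0.01, decay=0.9):
        rng = np.random.default_rng(1)
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))
        self.trace = np.zeros_like(self.W)  # recent input/output co-activity
        self.lr, self.decay = lr, decay

    def forward(self, x):
        y = np.tanh(self.W @ x)
        self.trace = self.decay * self.trace + np.outer(y, x)
        return y

    def reward(self, r):
        self.W += self.lr * r * self.trace  # strengthen recent behavior if r > 0

layer = RewardModulatedLayer(n_in=8, n_out=4)
y = layer.forward(np.ones(8))
layer.reward(+1.0)  # whatever just fired together gets a little stronger
```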

So, I propose this: an artificial mind having about a hundred billion artificial neurons, divided into thousands or hundreds of thousands of ART networks plus connections between those networks, with sensory input and motor output. It would be divided into ‘regions’ having different suitability for particular functions, separated by various kinds of ‘boundary signal barriers’. It would have localized inhibitory/excitatory/reinforcement capabilities whose pattern targets are distinguished by habituation, recency, and frequency, and other localized inhibitory/excitatory/reinforcement capabilities affecting neuron-to-neuron transmissions, ART-node-to-ART-node transmissions, and ‘region barrier’ transmissions. These localized signals would in turn be controlled by a particular section of the very same brain, playing the part of the prefrontal cortex, which is itself rewarded/reinforced when particular arbitrary needs are met or when it predicts the future correctly.
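At the level of a configuration file, the proposal might start out something like the sketch below. Every name and number is a placeholder of my own invention; the point is only that regions, their I/O, and their modulation parameters become ordinary data that a builder would have to choose:

```python
from dataclasses import dataclass, field

@dataclass
class RegionSpec:
    name: str
    n_columns: int      # how many ART subnetworks this region gets
    vigilance: float    # how readily its columns commit new patterns
    io: str = "none"    # "sensory", "motor", "organ", or "none"

@dataclass
class MindSpec:
    regions: list = field(default_factory=lambda: [
        RegionSpec("vision", 40_000, 0.80, io="sensory"),
        RegionSpec("motor", 20_000, 0.85, io="motor"),
        RegionSpec("calculator", 1_000, 0.90, io="organ"),
        RegionSpec("executive", 10_000, 0.95),  # plays the prefrontal role
    ])
```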

This, or something much like it, will be the basis on which strong artificial intelligence is founded. It can have built-in appliances or ‘motor/sensory’ inputs such as internet connectivity, camera connections, a CPU devoted to math and standard software, etc., which that ‘creature’ will be able to use with the same easy facility with which we recognize faces. However, we will need to figure out how much ‘instinct’ or how many predecided things it should enter the world with (and which things should be predecided for it, and how plastic those things will or won’t be to accommodate future learning), and how to train these networks for functionality matching about that degree of ‘instinct’ while leaving unallocated complexity and pattern-learning capability for genuine learning to take place.

I don’t imagine that I’m the first to think of this; the knowledge of how the brain works is out there, and well known to the multidisciplinary guys who are pursuing Artificial Intelligence. But right now I’m thunderstruck by it as an idea new to me, because in replacing Kurzweil’s hierarchical hidden Markov models with ART networks, I’ve finally gotten a good handle on how consciousness and learning and practice give rise to their observed effects.

And while I’m aware that I’ve been speaking geek as far as many of my readers will be concerned, and that the brain has ten billion or so neurons organized into a hundred thousand cortical columns, this really is a fairly simple mechanism where the complexity is all emergent, brought about by the repetition and variation of a component that really isn’t very complex at all.

I’m boggled by the sheer simplicity and generality of the basic structure. Yes, it is complex, but it’s a structure that can self-organize much greater complexity in patterns than are required to merely explain its operation. And it has no obvious barriers to greater complexity still; if the hardware to run it becomes available, then every time the hardware gets ten times better, we may be able to just build a larger version and add a few points to its apparent IQ!

As an AI guy, I’ve built systems myself that were mostly smarter than clams. Almost all of them were smarter than grass. A few were as smart as wasps or mosquitoes. A couple could give the illusion, for a few minutes, of being as smart as a human, but that’s all it was: an illusion, parroting back pre-written lines when a particular stimulus was recognized, the same way wasps perform particular actions when they get a particular stimulus.

People tell me that now hardware has gotten better, they are building things that are as smart as bees. A year or so ago, some people managed to simulate a fairly substantial part of a mouse’s brain in real time, observing the same patterns in the simulation that had actually occurred in that part of the brain in normal mice. And at this point, I think I finally understand a fairly general model of how the brain actually works. And, well, how to build a brain (given access to hardware nobody’s got yet) that has access to fundamentally new capabilities.

So, did I manage to express it in terms that people who aren’t naturally mad scientists can get, or am I too far out of my depth here?

Oh well, that’s one luxury of a blog. I can write this for the people who will get it even if they’re a tiny minority. Or I can be pleasantly surprised and discover that lots of people get it.
