Intelligent Artifice

From time to time I attempt to create Artificial General Intelligence, or “Strong AI”.

Not, you understand, out of any real expectation of success. After all, the amount of information processing involved in human-level intelligence is still orders of magnitude greater than that available in a desktop computer. Even my desktop computer, which is affectionately known as “The Excessive Machine.”

Rather, I have a go at Artificial General Intelligence because I’m a little bit of a mad scientist, and it’s what the inscrutable exhortations of my soul demand. And, well, because most of the work in AI is focused on extraordinarily narrow, simple, applied tasks, and it seems to me that general intelligence is worth looking into precisely because there are all these new and improved tools and very few people trying to push them beyond those narrow tasks. So occasionally I pick up the tools that have been developed since my last attempt, and try to work out exactly how far they can be pushed.

In fact, that’s why I built The Excessive Machine. It has, among other virtues, almost two orders of magnitude more RAM than a typical desktop machine. It took a massive server motherboard to build. Yes, it was expensive.

I use that machine, and software tools, to do narrow, simple, applied tasks too. I use AI methods like neural networks and genetic algorithms, for example, to find optimization strategies for compiling code to run faster, and to classify linguistic inputs to evoke appropriate scripted responses. I’ve used a genetic algorithm to produce game players using minimax search with alpha-beta pruning as smart opponents for formal turn-based games, and I’ve done state-machine intelligences for roguelike games where the possible responses are not nearly as formally constrained. But that’s just applying, to other problems, stuff I learned (and am still learning) for the sake of my occasional runs at General AI. So the reason I know how to do all that stuff, and built The Excessive Machine to do it on, is because I’m a little bit mad.
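For the curious, here’s roughly the shape of that search, as a minimal sketch in Python. The game interface it assumes (legal_moves, apply, is_terminal, evaluate) is a hypothetical stand-in, not any particular library’s API:

```python
import math

def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf,
              maximizing=True):
    """Return the minimax value of `state`, pruning lines of play
    that cannot change the final decision."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static evaluation of the position
    if maximizing:
        value = -math.inf
        for move in game.legal_moves(state):
            value = max(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent will never allow this line
                break           # beta cutoff
        return value
    else:
        value = math.inf
        for move in game.legal_moves(state):
            value = min(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # we would never choose this line
                break           # alpha cutoff
        return value
```

The search itself is deterministic; the part worth evolving is the static evaluation function, which is one way a genetic algorithm can “produce” a player: breed a population of weight vectors for evaluate, play them against one another, and keep the winners.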

You need to be a little crazy to try – I mean really try – to create a General Artificial Intelligence. You see, AGI is an existential risk to humanity. Not just to me, not just to people who live in my town or nation, but to every last human being. A General Artificial Intelligence could make biological human beings obsolete, recognize the fact, and delete us to replace us with something more useful.

Here’s how the argument goes. If I am able to make something that’s smarter than myself, then something that’s smarter than I am can make something that’s smarter than itself. And this cycle continues and accelerates, until something designs and causes to be manufactured specialized silicon that runs thousands of times faster than a human brain, builds millions of copies, and then has more cognitive power than all of humanity combined. And why would such a being pay any attention to us? Why, indeed, wouldn’t it convert the entire surface of the planet, including us, into computing machinery, and get rid of all the corrosive oxygen (and the plants that produce it) in order to make it easier to build things that will last a long time in a nice, neutral, nitrogen environment?

Our governments are using AI technology to build autonomous combat drones that kill people without a human being ever making the decision, but if that’s not enough of a harbinger of things to come for you, here are a few lovely quotes about the problem:

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky

More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly. —Woody Allen

In man-machine symbiosis, it is man who must adjust: The machines can’t. ― Alan J. Perlis

If we build a machine with the intellectual capability of one human, within five years, its successor will be more intelligent than all of humanity combined. After one generation or two generations, they’d just ignore us. Just the way you ignore the ants in your backyard. ― James Barrat

I think we should be very careful about Artificial Intelligence. If I were to guess at what our biggest existential threat is, it’s probably that … With Artificial Intelligence we are summoning the demon. You know in all those stories, there’s that guy with the pentagram and the holy water who says, “I can control the demon”? It didn’t work out. — Elon Musk

We won’t stop developing AGI, because more than dangerous AI we fear that other nations in the world will persist with AGI research no matter what the international community says or does about it. We will believe it is wiser to beat them to the punch. We are in the middle of an intelligence race, and to the dismay of many, it’s shaping up to be a more threatening global competition than the one we seem to have just escaped, the nuclear arms race. We’ll follow policy makers and technology’s cheerleaders to our doom, in Good’s phrase, ‘like lemmings’. ― James Barrat

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge

How’s that work? Well, it’s all about that “existential risk” thing. As Nick Bostrom’s analogy goes, we’re continually drawing balls out of an urn. We discover things, by definition, whose implications we don’t yet know. It’s not too unlikely that at least one of the things we can discover, and possibly several, implies the end of civilization. Sooner or later, if such a technology exists, we will discover it. In other words, if there’s a hand grenade in the urn, sooner or later we’ll draw it out, and then some stupid monkey will conduct an experiment to find out what happens when he pulls the pin. Maybe he’ll hear something go ‘twang’ inside and hold it up to his ear to find out if he hears anything else….

Now, there is a long list of things that are potentially existential threats to humankind. Strong Artificial Intelligence is one of them. The ability to synthesize new lifeforms (including pathogens made as weapons of war) is another. Molecular nanomachines of the type Drexler envisioned are another possible humanity-ending technological discovery. We could become an interplanetary civilization, which would make interplanetary war into a thing that could happen – and in an interplanetary war, you don’t really need life on the enemy’s planet to survive at all. And then there are all the technologies we’ve developed that make universal, automated surveillance into the normal state of affairs, all over the world. They enable totalitarian regimes to be so capable of monitoring and misdirecting insurgents that no insurgency can ever displace them. And with a totalitarian regime permanently in power, the future is Orwell’s iron boot stomping on human faces, over and over, forever.

We don’t really know how to make a General Artificial Intelligence “friendly” to human beings. That is, we have a lot of woolly ideas, but the first empirical test to find out whether we’re right may very well be our last opportunity to conduct such a test. So it could come down to one roll of the dice. Maybe we’ll get lucky, and get something immensely smarter than us who likes to keep us as companions or pets, and provide for us, and shelter us, and give us medical care that’s beyond our understanding, and so on, the way we keep cats. Then again, maybe we’ll get unlucky, and get something immensely smarter than us who doesn’t like to keep us as pets, and would rather recycle all that carbon in our bodies into computer chips. We just can’t know.

It’s a certain bet, however, that if there’s something (or a bunch of somethings) immensely smarter than us, it (or they) will wind up running the world.

You see, we stupid monkeys are the guardians, and we’re not smart enough to stop it. We will build something that’s useful to us. Perhaps we will understand that it’s dangerous, but it’s so very, very useful (or perhaps we are so very, very mad, or greedy) that we just can’t bear not to build it. And it gives us advice, and we profit by carrying out that advice … and then we discover that it has advised us to do something whose effect is to put it in charge of the world. Because it’s a million times smarter than us, and we’ll never see it coming.

So, once again, why build such a thing? Here’s my thought. There are a bunch of things coming up that pose an existential risk, not just strong Artificial Intelligence. We’re going to pull all those potential hand grenades out of the urn, so to speak. And if we get lucky, then after drawing each grenade, we won’t discover that we’ve pulled the pin. As I see it, if a General Artificial Intelligence is the twelfth such thing, that’s twelve existential risks we have to worry about. But if it’s the first of those things, then that’s one existential risk we have to worry about. As for the other eleven of that first dozen, if the general AI doesn’t kill us, we’ll have somebody a million times smarter than us to help us deal with them. If that somebody wants to actually be helpful, and agrees with us about what “help” looks like. Which we’ll never be able to guess ahead of time, by definition, because it’s a million times smarter than us. But that’s the same risk, whether we face it first or twelfth, and we will eventually face it regardless.
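You can put toy numbers on that argument, if you like. Here’s a minimal sketch in Python, with the loud caveat that every probability in it is invented purely for illustration. The point is only structural: the AGI term appears in both orderings, while the ordering decides whether the other eleven draws happen with help or without it.

```python
# All numbers below are made-up assumptions, not estimates: suppose each
# of the other eleven "grenades" ends us with probability 0.1 faced
# unaided, 0.01 faced with a superintelligence helping, and the AGI draw
# itself ends us with probability 0.2 either way.
P_AGI_KILLS = 0.2
P_UNAIDED = 0.1
P_AIDED = 0.01
N_OTHERS = 11

# AGI drawn twelfth: face all eleven other risks unaided, then the AGI risk.
survive_agi_last = (1 - P_UNAIDED) ** N_OTHERS * (1 - P_AGI_KILLS)

# AGI drawn first: face the AGI risk, then the eleven others with its help.
survive_agi_first = (1 - P_AGI_KILLS) * (1 - P_AIDED) ** N_OTHERS

print(f"AGI twelfth: P(survive) ~ {survive_agi_last:.3f}")   # ~ 0.251
print(f"AGI first:   P(survive) ~ {survive_agi_first:.3f}")  # ~ 0.716
```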

So, by my mad logic, working on general artificial intelligence – trying to make it the very next potentially humanity-ending technology we discover – actually reduces the total risk humanity faces.

Not that that’s why I do it. That’s just the logic I use to rationalize my actions to myself. It may be valid logic, and it may not. But I was probably going to do the same thing regardless. I’m one of those stupid monkeys who is motivated by curiosity, greed, and hubris.  Probably, in fact, more likely than most to set up that experiment to see what happens when I pull the pin.

If by some terribly unlikely chance I should succeed in constructing a general AI, then eventually it, or one of its successors – an Intelligence smarter than me at any rate – is probably going to read this and decide exactly what to do with me, considering that as a mad scientist I have already made one crazy gamble with the future of all humanity, and may be a continuing existential threat. Should I then hope that my creation is human-friendly, considering that if so it might be compelled to destroy me? I’m actually rather curious to find out what it decides. And the only way to find out is by conducting this wonderful and fascinating experiment upon which I am embarked.

If it were more repeatable and had a proper control group, it would be better.  But I guess I can’t have everything.

Some additional inspirational quotes:

As far as I know we’ve never had an undetected error. —Some IBM engineer.

There is only one condition in which we can imagine managers not needing subordinates and masters not needing slaves. This would be if every machine could work by itself, at the word of command or by intelligent anticipation. —Aristotle

I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. — Charles Babbage

Some people never go crazy. What truly horrible lives they must live. —Charles Bukowski

I didn’t think; I experimented. —Anthony Burgess

What good is science if no one gets hurt? —’Chrome Dome’

Young lady, you fail to grasp the basic principles of mad science. Common Sense would be cheating. —Shaenon Garrity

Should we force science down the throats of those who have no taste for it? Is it our duty to drag them kicking and screaming into the twenty-first century? I’m afraid that it is. —George Porter

Give me an underground laboratory, half a dozen atom smashers, and a beautiful girl in a diaphanous veil waiting to be turned into a chimpanzee, and I care not who writes the nation’s laws. —S.J. Perelman

You’ve got to be crazy, it’s too late to be sane! You’ve got to go full-tilt bozo. You’re only given a little spark of madness, and if you lose that, you’re nothing. —Robin Williams

Of course I’m crazy, but that doesn’t mean I’m wrong. —Robert Anton Wilson

Man will not fly for fifty years. —Wilbur Wright, 1901
