
The Dangers in AI

When I’m not beavering away trying to produce a general artificial intelligence, sometimes I stop and wonder whether it’s a horrible mistake.  Actually I spend a lot of time wondering whether it’s a horrible mistake, but I’ve written about that before.  My stated goal with this project is hamster-level consciousness, but a superintelligence isn’t that far beyond it.  A lot of ideas and threads are coming together, and consciousness as such may be the match that sets the rest ablaze.

I’m taking time now to respond to a couple of very specific publications by people who believe that building a General Artificial Intelligence could be a horrible mistake.  The first is the book Superintelligence by Nick Bostrom.  The second is the book Our Final Invention by James Barrat.  Both of these guys are well respected in the field, and may in many ways be right.

Both postulate that limiting the AI’s ability to influence the world at large is the only possible way to proceed.  Bostrom in particular wants to take this as far as it can be taken, limiting interaction to providing one-bit answers – yes or no – to questions posed by people who are watched by expert paranoids 24/7 to make sure they don’t do something (or ask something) with implications. Barrat considers a consciousness limited to a single isolated installation, with no network capabilities, which is more reasonable but still runs afoul of my objection.
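
To make the flavor of these proposals concrete, here’s a minimal sketch of Bostrom’s oracle scheme as code.  Everything in it (the class names, the stub model, the methods) is my own hypothetical illustration; neither book contains anything like an implementation.

    # A sketch of Bostrom's one-bit oracle. All names here are invented
    # for illustration; this is not from either book.

    class BoxedModel:
        """Stand-in for the boxed intelligence: no network, no effectors."""
        def evaluate(self, question: str) -> bool:
            # Whatever reasoning happens, it happens in here, unseen.
            return False  # a trivially 'safe' stub answer

    class OracleAI:
        """The entire output channel: one bit per vetted question."""
        def __init__(self, model: BoxedModel) -> None:
            self._model = model

        def ask(self, question: str) -> bool:
            # Yes or no. Nothing else ever leaves the box.
            return self._model.evaluate(question)

    oracle = OracleAI(BoxedModel())
    print(oracle.ask("Is this plan safe to execute?"))  # -> False

The point of the sketch is how little passes through ask(): one bit out, no matter how much ingenuity is boxed up behind it.  Barrat’s isolated installation widens the channel but keeps the same shape.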

My objection is this: I’m pretty firmly of the opinion that if you don’t give it something with the complexity of the world at large to interact with, you won’t get intelligence, because you’ll be presenting it with an environment that doesn’t require intelligence.  If you don’t let it have rich interactions with people, it will have no opportunity to develop any comprehension of social norms and the basic standards of decency you so desperately fear it might not learn.  And the more you limit its ability to affect the world, the less intelligence it will develop, because its range of available actions won’t require intelligence to perform.

I see this as a catch-22.  The very method proposed to make a conscious, socialized intelligence that won’t hurt us would ensure that a conscious, socialized intelligence cannot be produced in the first place.

Both writers devote many chapters to the need to prove any General AI Friendly before it’s allowed to do anything.  Of course, now I have to ask them what ‘Friendly’ means, and whether that concept can be expressed in terms so specific that it is amenable to proof (I’ll sketch below what such a proof would even have to look like).  There are actions we can identify as unambiguously harmful, but I can’t imagine any kind of rigorous proof that a human infant won’t grow up to commit those actions.  Well, okay, I can imagine one rigorous proof, but it involves strangling the infant in its cradle.

See what I did there?  In response to a question whose intent was to request abstract reasoning, I determined that the only logical response was to perform an action on the world rather than just abstractly reasoning about it.  And the action involves one of those unambiguously harmful acts.  I sincerely do not believe that any other course would yield the certain proof requested, so this is the ONLY credible response.  The possibility that someone will inadvertently ask the General Artificial Intelligence to achieve a goal that provokes such an action is exactly what the whole issue is about.  And the fear is that the General Artificial Intelligence would not have anything like the morality or decency to flatly refuse to achieve that goal.  Can the GAI then accept failure, when it could have succeeded in fulfilling the request?

An equally disturbing question is: can we accept that failure?  Would we interpret that flat refusal as evidence of Unfriendliness and immediately terminate the GAI?  If so, that leaves it choosing between its continued existence and committing the evil act.

I don’t claim that the request to prove Friendliness is inconsistent or meaningless, but it bodes ill indeed that that very request is among those whose only credible response violates its intent.  The issue runs straight up against itself, leaving me with the feeling that we’re missing something vitally important.
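
Here, as promised, is roughly what the demanded proof looks like when you try to write it down formally.  This is my own scaffolding, sketched in Lean; neither author formalizes anything, and both predicates are invented for illustration.

    -- `chooses a` : the GAI, in some situation, selects action `a`.
    -- `harmful a` : action `a` is one of the unambiguously harmful ones.
    def Friendly {Action : Type} (chooses harmful : Action → Prop) : Prop :=
      ∀ a : Action, chooses a → ¬ harmful a

    -- The theorem everyone wants is `Friendly chooses harmful`. Stating
    -- it is trivial; the catch is that `chooses` is the whole mind and
    -- `harmful` is the whole of ethics, so nobody can write either down.

One line of logic, and all the difficulty hides inside the two predicates nobody knows how to define.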

Both writers spend a lot of time trying to work out what the AI ought to be allowed to know.  I’m spending time wondering what happens later, when the AI finds out we deliberately gave it falsehoods or deliberately withheld available truth.  We humans are unreliable.  We know that.  But we’re unreliable enough even when we’re not deliberately lying.  Will a demonstration of deliberate bad faith on our part cause a re-evaluation of the value of human lives?

Remember that bit in one of the Batman movies where some bit-part character discovered something sensitive and immediately asked for blackmail money?  And then someone else had to look at him and point out that he was being an idiot?  Remind him who Batman was, and how Batman felt about blackmailers and other crooks, and ask him again whether his plan was really to blackmail Batman?

Yeah, that’s kind of what happens when you lie to something smarter than yourself, in an effort to secure its trust and cooperation.  It’s just not going to work.  You earn more trust and cooperation by not even trying.

The ‘friendly AI’ issue is, without a doubt, at least as important as these writers think it is.  The stakes are literally whether or not humankind has a future.  But I have to point out that most of the solutions they propose are utterly unworkable.

And all this leads up to the question: what am I doing to try to ensure a friendly, socialized, helpful AI?  And the people who wrote these books are going to hate the answer.  I can’t think of anything better than what you do to try to ensure that a child grows up to be a friendly, socialized, helpful person.  That isn’t even reliable with humans, and whatever a GAI turns out to be, it will be different in unknowable ways from what a human is.  It’s a tough nut that starts with controlling its basic structure by adapting it to a situation where these valued traits are advantageous, and continues through a long interaction of building trust, understanding, and cooperation.  Like an anxious parent who wants to raise a child that’s not a sociopath, I have a lot of ideas but no certainty.  Despite the best efforts of parents, sociopaths occasionally happen.