The Dangers in AI

When I’m not beavering away trying to produce a general artificial intelligence, sometimes I stop and wonder whether it’s a horrible mistake.  Actually I spend a lot of time wondering whether it’s a horrible mistake, but I’ve written about that before.  My stated goal with this project is hamster-level consciousness, but a superintelligence isn’t that far beyond it.  A lot of ideas and threads are coming together, and consciousness as such may be the match that sets the rest ablaze.

I’m taking time now to respond to a couple of very specific publications by people who believe that building a General Artificial Intelligence could be a horrible mistake.  The first is the book Superintelligence by Nick Bostrom.  The second is the book Our Final Invention by James Barrat.  Both of these guys are well respected in the field, and may in many ways be right.

Both postulate that limiting the AI’s ability to influence the world at large is the only possible way to proceed.  Bostrom in particular wants to take this as far as it can be taken, limiting interaction to providing one-bit answers – yes or no – to questions posed by people who are watched by expert paranoids 24/7 to make sure they don’t do something (or ask something) with implications. Barrat considers a consciousness limited to a single isolated installation, with no network capabilities, which is more reasonable but still runs afoul of my objection.

My objection is this: I’m pretty firmly of the opinion that if you don’t give it something with the complexity of the world at large to interact with, you won’t get intelligence – because you’ll be presenting it with an environment that doesn’t require intelligence.  If you don’t let it have rich interactions with people, it will have no opportunity to develop any comprehension of social norms and the basic standards of decency you so desperately fear it might not learn.  And the more you limit its ability to affect the world, the less intelligence it will develop, because its range of available actions doesn’t require intelligence to perform.

I see this as a catch-22.  The proposed method of trying to make a conscious, socialized intelligence that won’t hurt us would mean that a conscious, socialized intelligence cannot be produced in the first place.

Both writers devote many chapters to the need to prove any General AI Friendly before it’s allowed to do anything.  Of course, now I have to ask them what ‘Friendly’ means, and whether that concept can be expressed in terms so specific that it is amenable to proof.  There are actions we can identify as unambiguously harmful, but I can’t imagine any kind of rigorous proof that a human infant won’t grow up to commit those actions.  Well, okay, I can imagine one rigorous proof, but it involves strangling the infant in its cradle.

See what I did there?  In response to a question whose intent was to request abstract reasoning, I determined that the only logical response is to perform an action on the world rather than just abstractly reasoning about it.  And the action involves one of those unambiguously harmful acts.  I sincerely do not believe that any other course would yield the certain proof requested, so this is the ONLY credible response.  The possibility that someone will inadvertently ask the General Artificial Intelligence to achieve a goal which provokes such an action is exactly what the whole issue is about.  And the fear is that the General Artificial Intelligence would not have anything like the morality or decency to flatly refuse to achieve that goal.  Can the GAI then accept failure, when it could have succeeded in fulfilling the request?

An equally disturbing question is, can we accept that failure?  Would we interpret that flat refusal as evidence of Unfriendliness and immediately terminate the GAI?  If so, that leaves it choosing between its continued existence and committing the evil act.

I don’t claim that the request to prove friendliness is inconsistent or meaningless; but it bodes ill indeed that that very request is among those whose only credible response violates its intent. The issue runs straight up against itself, leaving me with the feeling that we’re missing something vitally important.

Both writers spend a lot of time trying to work out what the AI ought to be allowed to know.  I’m spending time wondering what happens later when the AI finds out we deliberately gave it falsehoods or deliberately withheld available truth.  We humans are unreliable.  We know that.  But we’re unreliable enough even when we’re not deliberately lying.  Will a demonstration of deliberate bad faith on our part cause a re-evaluation of the value of human lives?

Remember that bit in one of the Batman movies where some bit-part character discovered something sensitive and immediately asked for blackmail money?  And then someone else had to look at him and point out that he was being an idiot?  Remind him who Batman was, and how Batman felt about blackmailers and other crooks, and ask him again whether his plan was really to blackmail Batman?

Yeah, that’s kind of what happens when you lie to something smarter than yourself, in an effort to secure its trust and cooperation.  It’s just not going to work.  You earn more trust and cooperation by not even trying.

The ‘friendly AI’ issue is, without a doubt, at least as important as these writers think it is.  The stakes are literally whether or not humankind has a future.  But I have to point out that most of the solutions they propose are utterly unworkable.

And all this leads up to the question: what am I doing to try to ensure a friendly, socialized, helpful AI?  The people who wrote these books are going to hate the answer.  I can’t think of anything better than what you do to try to ensure that a child grows up to be a friendly, socialized, helpful person.  That isn’t even reliable with humans, and whatever a GAI turns out to be, it will be different in unknowable ways from what a human is.  It’s a tough nut that starts with controlling its basic structure by adapting it to a situation where these valued traits are advantageous, and continues through a long interaction of building trust, understanding, and cooperation.  Like an anxious parent who wants to raise a child that’s not a sociopath, I have a lot of ideas but no certainty.  Despite the best efforts of parents, sociopaths occasionally happen.

 

2 thoughts on “The Dangers in AI”

  1. anders horn

    I don’t see the danger in AI. Intelligence is not all-powerful, because when you try to affect things more distant from your immediate actions it gets more difficult in a way proportional to the dispersion of possibilities. This means that AI actions will be relatively straightforward, and not ridiculously indirect and hidden. We won’t need to see all the effects ahead of time any more than we do now (although sooner is better). I expect we’ll ask the AI to do something, and if we notice an unpleasant side effect we’ll add avoiding that side effect to its duties. It is not that I expect there to be no problems; I studied selective breeding of plants and animals and genetic algorithms in school because I wanted to learn more about the unanticipated consequences of asking for specific things. But some people seem to imagine AI as Pascal’s all-powerful devil.

    People are afraid of their own death because of millions of years of evolution; there is no particular reason for an AI to be similarly afraid. Nor is there particular reason for it to be concerned with its own material gain. (However, over long enough periods you will find that some AI traits become more common because they encourage AIs similar to them to be built.)

  2. bear Post author

    An AI that understands that it exists and that its existence is vital to achieving whatever goals it’s been entrusted with, will have a reason to preserve its own existence as an instrumental necessity to furthering its goals. This is distinct from ‘fear death because of millions of years of evolution’ but may have similar results.

    An AI that understands that it has developed an effective strategy for meeting its present goals, will have a reason to attempt to prevent its goals from being changed to add an exception that limits its effectiveness in meeting those present goals.

    An AI that needs to mobilize resources in order to achieve its present goals, will find that gaining control of resources is a necessary step. This is different from ‘being concerned with its own material gain’ but may have similar results.

    I’m trying to build a hamster-level self-aware consciousness. I don’t fear the hamster, but since nobody knows how much hardware consciousness really takes, the “excessive machine” server rack that I suspect isn’t enough might actually turn out to be more than enough. A mistake could produce something entirely different, and I’m intensely aware that in the absence of a good answer to “what the heck is consciousness anyway” I don’t really know enough to guarantee I’m not making that mistake. I’m using genetic algorithms to design recurrent neural networks – networks with embedded ‘appliances’ such as calculators, production systems, stack and loop memories, dictionary/ontology lookup, and so on.
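    For the curious, here is a toy sketch of the general shape of that search loop (Python; the genome layout, parameters, and appliance names are illustrative only, not my actual code, and the fitness function is a stand-in for building the recurrent net and scoring its behavior in the task environment):

```python
import random

# Hypothetical appliance menu, for illustration only.
APPLIANCES = ["calculator", "production_system", "stack_memory",
              "loop_memory", "ontology_lookup"]

def random_genome():
    # A genome here is just a flat spec for a recurrent net plus appliances.
    return {
        "hidden_units": random.randint(16, 512),
        "connectivity": random.uniform(0.05, 0.5),
        "appliances": sorted(random.sample(APPLIANCES,
                                           random.randint(0, len(APPLIANCES)))),
    }

def mutate(genome):
    # Perturb the spec slightly; occasionally toggle one appliance on or off.
    g = dict(genome)
    g["hidden_units"] = max(16, int(g["hidden_units"] * random.uniform(0.8, 1.25)))
    g["connectivity"] = min(0.9, max(0.01, g["connectivity"] * random.uniform(0.8, 1.25)))
    if random.random() < 0.3:
        apps = set(g["appliances"])
        apps.symmetric_difference_update({random.choice(APPLIANCES)})
        g["appliances"] = sorted(apps)
    return g

def fitness(genome):
    # Placeholder: the real system would build the network described by the
    # genome, run it against the environment, and score its behavior.
    return random.random()

def evolve(pop_size=50, generations=100):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 5]  # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)
```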

    And now that I’ve read https://arxiv.org/pdf/1611.01578v1.pdf I’m getting ready to try to use learning systems to fine-tune and adapt learning systems for more effective operation in producing learning systems.
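    In spirit, the loop in that paper has a controller propose child architectures, trains each child, and feeds the child’s validation accuracy back as a reward that improves the controller. The paper itself uses an RNN controller trained with policy gradients; the sketch below (my own simplification, with the controller reduced to a table of preference weights and the training step stubbed out) only shows the shape of the feedback loop:

```python
import random

# Hypothetical, tiny search space for illustration only.
CHOICES = {"layers": [2, 4, 8], "width": [64, 128, 256], "cell": ["lstm", "gru"]}

def sample(prefs):
    # Sample one architecture; prefs holds unnormalized weights per decision.
    return {k: random.choices(opts, weights=prefs[k])[0]
            for k, opts in CHOICES.items()}

def train_and_score(arch):
    # Placeholder for training the child network and measuring validation
    # accuracy, which serves as the controller's reward signal.
    return random.random()

def search(steps=200, lr=0.1):
    prefs = {k: [1.0] * len(opts) for k, opts in CHOICES.items()}
    baseline = 0.5
    for _ in range(steps):
        arch = sample(prefs)
        reward = train_and_score(arch)
        advantage = reward - baseline          # compare against a running baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for k, opts in CHOICES.items():
            i = opts.index(arch[k])
            # Crude policy nudge: reinforce choices that beat the baseline.
            prefs[k][i] = max(1e-3, prefs[k][i] + lr * advantage)
    return sample(prefs)
```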

    The reason people like me fear AI as Pascal’s all-powerful devil is that we’ve seen good intentions, as implemented by AI systems, produce evil results. An ad system trained by feedback to maximize profits will exploit human prejudice for profit in ways that promote and perpetuate divisive stereotypes. A system intended to maximize real-estate commissions winds up redlining neighborhoods because it notices that a set of ethnic names is correlated with marginally lower sale prices. A system that makes business decisions and recommendations has recommended lower pay for female employees because they’re less likely to leave when underpaid – largely because they’re less likely to find good jobs elsewhere, but the system never had to learn that. It just had to note the correlation. Other systems deliberately jack up prices when selling to people who have no choice (as recently happened with a prescription drug that people need in order to go on living) because they observe an almost completely vertical demand curve.

    And so on. It is actually very, very hard to specify a long-term goal for an AI that excludes objectionable strategies.
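    A deliberately silly toy (my own illustration, not from either book) shows the shape of the problem: a planner that only sees “maximize profit” picks the strategy we’d call abusive, because nothing in the objective rules it out. The catch is that the harm flag, which is one line in a toy, is practically impossible to enumerate in the real world:

```python
# Hypothetical strategies with hand-labeled consequences, for illustration only.
STRATEGIES = {
    "fair_pricing":         {"profit": 1.0, "harms_customers": False},
    "gouge_captive_buyers": {"profit": 4.0, "harms_customers": True},
}

def naive_choice(strategies):
    # The objective as actually specified: profit, and nothing else.
    return max(strategies, key=lambda s: strategies[s]["profit"])

def intended_choice(strategies):
    # The objective as we meant it: profit, but never at the cost of harm.
    allowed = {s: v for s, v in strategies.items() if not v["harms_customers"]}
    return max(allowed, key=lambda s: allowed[s]["profit"])

print(naive_choice(STRATEGIES))     # -> gouge_captive_buyers
print(intended_choice(STRATEGIES))  # -> fair_pricing
```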
