The Turing Option
Author: Harry Harrison
Current page: 16 (of 28)

Dusty smirked and stretched, touched his knuckle to his mustache. “We made believe that we knew each other because that was part of the deal. But I’ll tell you something, the old fart had forgotten, but I had seen him once before. And I even remember his name because one of the guys afterwards was bullshitting my ear off about what a hotshot this old guy had been in the old days.”

“You know his real name?”

“Yup. But we got to make a deal…”

Ben’s chair crashed to the floor and he strode forward into the camera’s view, seized the pilot by the collar and dragged him to his feet. “Listen you miserable piece of crap – the only deal I make is to send you to jail for life if you don’t shout that name out loud – now!”

“You can’t.”

“I can – and I will!” The pilot’s toes were dragging on the floor as Ben shook him like a great rag doll. “The name.”

“Let me go – I’ll help. A screwball foreign name, that’s what it was. Sounded like Doth – or Both.”

Ben dropped him slowly back into the chair, leaned forward until their faces were almost touching. Spoke with quiet menace.

“Could it have been Toth?”

“Yes – that’s it! Do you know the guy? Toth. A funny name.”

The tape ended, and when his recorded voice died away Benicoff spoke aloud.

“Toth. Arpad Toth was head of security here at Megalobe when the events occurred. I checked the Pentagon records at once.

“It appears that he has a brother, by the name of Alex Toth. A helicopter pilot who flew in Vietnam.”

24
February 22, 2024

“This is my responsibility now,” General Schorcht said, a glint of grim determination in his eye, a touch of cold anger in his voice. “Toth. Alex Toth. An army pilot!”

“That is a very good idea,” Ben agreed. “This is on your patch and you have the organization to do it. We will of course keep the investigation going at this end. I suggest that Colonel Davis and I liaise at least once a day, oftener if there are any dramatic developments. We must keep each other fully informed about our mutual progress. Is that satisfactory, General?”

“Satisfactory. Company dismissed.”

The two Army officers jumped to their feet, stood at attention, followed the General out.

“And you have a good day too, General,” Brian said to the stiff, vanishing backs. “Were you ever in the Army, Ben?”

“Happily, no.”

“Do you understand the military mind?”

“Unhappily, yes. But I don’t want to be rude in the presence of a serving officer.” Ben saw Shelly’s grim expression and softened his words with a smile. “A joke, Shelly, that’s all. Probably in the worst possible taste – so I apologize.”

“No need,” she said, returning a slight smile. “I don’t know why I should be defensive about the military. I joined rotsee to pay for college. Then I enlisted in the Air Force as the only way to get through graduate school. My parents had a vegetable stand in Farmers Market in L.A. Which for anyone else would have been a gold mine. My father is a great Talmudic scholar but a really lousy businessman. The Air Force enabled me to do the only thing I wanted to do.”

“Which leads inexorably to the next generation,” Brian said. “Where does the investigation go from here?”

“I’m going to follow up all the leads that the copter development opened,” Ben said. “As to the Expert Program, our wizard detective Dick Tracy – that is up to you, Shelly. What’s next?”

She poured herself a glass of water from the carafe on the conference table; gave herself a moment to think.

“I’m still running the Dick Tracy program. But I don’t expect it to find anything new until we get more data for it.”

“Which leaves you with free time – and that means you can work full time on the AI with me,” Brian said. “Because the work we do will eventually be fed into the Dick Tracy program.”

Ben looked puzzled. “Say again.”

“Think about it for a moment. Right now you are approaching the investigation from only the single point of view of the crime that was committed. Well and good – and I hope you succeed before they reach me again. Otherwise I’m for the knackers. But we should also be taking a second approach. Have you thought about just what it was that they stole?”

“Obviously, your AI machine.”

“No – it was more than that. They tried to kill everyone who had any knowledge of the AI, to steal or destroy every existing record. And they are still trying to kill me. That makes one thing very clear.”

“Of course!” Ben said. “I should have realized that. They not only wanted the AI – but they wanted a world monopoly on it. They might possibly be trying to market it now. They will want to use it commercially to turn a profit. But they have committed murder and theft and certainly don’t want to be found out. They have to conceal the fact that they’re using it, so they must exploit it in such a way that the AI cannot be traced back to them.”

“I see what you mean,” Shelly said. “Once they get it working, the stolen AI could be used for almost any purpose. To control mechanical processes, maybe to write software, follow new lines of research, aid product development – it could be used for almost anything.”

Benicoff nodded solemn agreement. “And that makes it rather hard to catch them out. We have to be on the lookout, not for anything very specific, but for virtually any type of program or machine that seems peculiarly advanced.”

“That’s much too general for my program to be able to deal with,” Shelly said. “Dick Tracy can only work with carefully structured data bases. It just doesn’t have enough knowledge or common sense to help with a problem as broad as this.”

“Then we will have to improve it,” Brian said. “This is exactly what I’m driving at. It is now perfectly clear what we have to do. First we have to make Dick Tracy smarter, to equip it with more general knowledge.”

“You mean to make it into a better AI?” Benicoff asked. “And then use it to find the other AIs. Like setting a thief to stop a thief.”

“That’s half of it. The other half is what I’m doing with the robot Robin. Making it more like the AI in the notes. If I can do that, then we’ll know more about what the stolen machine is capable of. And that will help narrow the search.”

“Especially if we can upload those same capabilities into Dick Tracy,” Shelly said. “Then it could really know what to search for!”

They all looked at one another, but there seemed to be little more to say. It was clear what each of them had to do.

Ben stopped them as they rose to leave. “One last and important matter to discuss. Shelly’s living quarters.”

“I’m sorry you mentioned that,” she said. “I thought I was getting a lovely little apartment. But at the very last moment the whole deal fell through.”

Ben looked uncomfortable. “I’m sorry but, well, that was my doing. I have been thinking about the attacks on Brian’s life and I realized that you must also be a target now. Once you start developing AI, the murderous power out there will… it’s not easy to say, will want to kill you as well as Brian. Do you agree?”

Shelly nodded a reluctant yes.

“Which means you will have to live with the same degree of security as Brian. Here in Megalobe.”

“I’ll get suicidal if I have to live in the businessmen’s flophouse where I am staying now.”

“No question of that! I speak with feeling because I have spent many a miserable night there myself. Now can I make a suggestion? There are WAC quarters in the barracks here with provision for female Army personnel. If we knocked a couple of rooms together and fitted them up as a small apartment – would you mind staying there?”

“I’ll want a say in the decorating.”

“You pick it out – we’ll pick up the tab. Electronic kitchen, Jacuzzi bath – anything you want. The army engineers will install it.”

“Offer accepted. When do I get the catalogs?”

“I have them in my office right now.”

“Ben – you’re terrible. How did you know I would go along with this plan?”

“I didn’t know – just hoped. And when you look at it from all sides it really turns out to be the only safe thing to do.”

“Can I see the catalogs now?”

“Of course. In this building, room 412. I’ll call my assistant and have her dig them out.”

Shelly started for the door – then spun about. “I’m sorry, Brian. I should have asked you first if you need me.”

“I think it’s a great idea. In any case I have some other things to do today away from the lab. What say we meet there at nine a.m. tomorrow?”

“Right.”

Brian waited until the door had closed before he turned to Ben, chewed his lip in silence before he managed to speak. “I still haven’t told her about the CPU implant in my brain. And she hasn’t asked me about that session where it produced the clue about the theft. Has she mentioned it to you?”

“No – and I don’t think she will. Shelly is a very private person and I think she extends the same privacy to others. Is it important?”

“Only to me. What I told you before about feeling like a freak—”

“You’re not, and you know it. I doubt if the topic will come up again.”

“I’ll tell her about it, someday. Just not now. Particularly since I have arranged some lengthy sessions with Dr. Snaresbrook.” He glanced at his watch. “The first one will be starting soon. The main reason I am doing this is that I am determined to speed up the AI work.”

“How?”

“I want to improve my approach to the research. Right now all that I am doing is going through the material from the backup data bank we brought back from Mexico. But these are mostly notes and questions about work in progress. What I need to do is locate the real memories and the results of the research based upon them. At the present time it has been slow and infuriating work.”

“In what way?”

“I was, am, are…” Brian smiled wryly. “I guess there is no correct syntax to express it. What I mean is the me that made those notes was a sloppy note maker. You know how, when you write a note to yourself, you mostly scribble a couple of words that will remind you of the whole idea. But that particular me no longer exists, so my old notes don’t remind me of anything. So I’ve started working with Dr. Snaresbrook to see if we can use the CPU implant to link the notes to additional disconnected memories that are still in my brain. It took me ten years to develop AI the first time – and I’m afraid it will take that long again if I don’t have some help. I must get those lost memories back.”

“Are there any results of your accessing these memories?”

“Early days yet. We are still trying to find a way to make connections that I can reliably activate at will. The CPU is a machine – and I’m not – and we interface badly at the best of times. It is like a bad phone connection at other times. You know, both people talking at once and nothing coming across. Or I just simply cannot make sense of what is getting through. Have to stop all input and go back to square A. Frustrating, I can tell you. But I’m going to lick it. It can only improve. I hope.”

Ben walked Brian over to the Megalobe clinic and left him outside Dr. Snaresbrook’s office. He watched him enter, stood there for some time, deep in thought. There was plenty to think about.

The session went well. Brian could access the CPU at will now, use it to extract specific memories. The system was functioning better – although sometimes he would retrieve fragments of knowledge that were hard to comprehend. It was as though they came as suggestions from someone else rather than from his own memories. Occasionally, when he accessed a memory of his earlier, adult self, he would find himself losing track of his own thoughts. When he regained control he found it hard to recall how it had felt. How strange, he thought to himself. Am I maintaining two personalities? Can a single mind have room for two personalities at once – one old, the other new?

The probing certainly was saving a great deal of time in his research and, as the novelty began to wear off, Brian’s thoughts returned to the most serious problems that still beset him on the AI. All the different bugs that led to failures – to breakdowns in which the machine would end up at one extreme of behavior or another.

“Brian – are you there?”

“What – ?”

“Welcome back. I asked you the same question three times. You were wandering, weren’t you?”

“Sorry. It just seems so intractable and there is nothing in the notes to help me out. What I need is to have a part of my mind that is watching itself without the rest of the mind knowing what is happening. Something that would help keep the system’s control circuitry in balance. That’s not particularly hard when the system itself is stable, not changing or learning very much – but nothing seems to work when the system learns new ways to learn. What I need is some system, some sort of separate submind that can maintain a measure of control.”

“Sounds very Freudian.”

“I beg your pardon?”

“Like the theories of Sigmund Freud.”

“I don’t recall anyone with that name in any AI research.”

“Easy enough to see why. He was a psychiatrist working in the 1890s, before there were any computers. When he first proposed his theories – about how the mind is made of a number of different agencies – he gave them names like id, ego, superego, censor and so on. It is understood that every normal person is constantly dealing, unconsciously, with all sorts of conflicts, contradictions, and incompatible goals. That’s why I thought you might get some feedback if you were to study Freud’s theories of mind.”

“Sounds fine to me. Let’s do it now, download all the Freudian theories into my memory banks.”

Snaresbrook was concerned. As a scientist, she still regarded the use of the implant computer as an experimental study – but Brian had already absorbed it as a natural part of his lifestyle. No more poring over printed texts for him. Get it all into memory in an instant, then deal with it later.

He did not go back to his room, but paced the floor, while in his mind he dipped first into one part of the text, then another, making links and changing them – then gasped out loud.

“This has to be it – really it! A theory that fits my problem perfectly. The superego appears to be a sort of goal-learning mechanism that probably evolved on top of the imprinting mechanisms that evolved earlier. You know, the systems discovered by Konrad Lorenz, that are used to hold many infant animals within a safe sphere of nurture and protection. These produce a relatively permanent, stable goal system in the child. Once a child introjects a mother or father image, that structure can remain there for the rest of that child’s life. But how can we provide my AI with a superego? Consider this – we should be able to download a functioning superego for my AI if we can find some way of downloading enough of the details of my own unconscious value structure. And why not? Activate each of my K-lines and nemes, sense and record the emotional values associated with them. Use that data to first build a representation of my conscious self-image. Then add my self-ideal – what the superego says I ought to be. If we can download that, we might be much further on the way toward being able to stabilize and regulate our machine intelligence.”

“Let’s do it,” Snaresbrook said. “Even if no one has proven yet that the thing exists. We’ll simply assume that you do indeed have a perfectly fine one inside your head. And we are perhaps the first people ever to be in a position to find it. Look at what we have been doing for months now, searching out and downloading your matrix of memories and thought processes. Now we may as well push a little further – only backward instead of forward in time. We can try to do more backtracking toward your infancy, and see if we can find some nemes and attached memories that might correspond to your earliest value systems.”

“And you think that you can do this?”

“I don’t see any reason why not – unless what we’re seeking just doesn’t exist. In any case the search will probably involve locating another few hundred thousand old K-lines and nemes. But cautiously. There might be some serious dangers here, in giving you access to such deeply buried activities. I’ll first want to work up a way to do this by using an external computer, while disabling your own internal connection machine for a while. That way, we’ll have a record of the structures we discover in external form, which might be used in improving Robin. This will prevent the experiments from affecting you until we’re more sure of ourselves.”

“Well, then – let’s give it a try.”

25
May 31, 2024

“Brian Delaney – have you been working here all night? You promised it would just be a few minutes more when I left you here last night. And that was at ten o’clock.” Shelly stamped into the lab radiating displeasure.

Brian rubbed his fingers over rough revelatory whiskers, blinked through red-rimmed guilty eyes. Equivocated.

“What makes you think that?”

Shelly flared her nostrils. “Well, just looking at you reveals more than enough evidence. You look terrible. In addition to that I tried to phone you and there was no answer. As you can imagine I was more than a little concerned.”

Brian grabbed at his belt where he kept his phone – it was gone. “I must have put it down somewhere, didn’t hear it ring.”

She took out her own phone and hit the memory key to dial his number. There was a distant buzzing. She tracked it down beside the coffeemaker. Returned it to him in stony silence.

“Thanks.”

“It should be near you at all times. I had to go looking for your bodyguards – they told me you were still here.”

“Traitors,” he muttered.

“They’re as concerned as I am. Nothing is so important that you have to ruin your health for it.”

“Something is, Shelly, that’s just the point. You remember when you left last night, the trouble we were having with the new manager program? No matter what we did yesterday the system would simply curl up and die. So then I started it out with a very simple program of sorting out colored blocks, then complicated it with blocks of different shapes as well as colors. The next time I looked, the manager program was still running – but all the other parts of the program seemed to have shut down. So I recorded what happened when I tried it again, and this time installed a natural language trace program to record all the manager’s commands to the other subunits. This slowed things down enough for me to discover what was going on. Let’s look at what happened.”

He turned on the recording he had made during the night. The screen showed the AI rapidly sorting colored blocks, then slowing – then barely moving until it finally stopped completely. The deep bass voice of Robin 3 poured rapidly from the speaker.

“…K-line 8997, response needed to input 10983 – you are too slow – respond immediately – inhibiting. Selecting subproblem 384. Response accepted from K-4093, inhibiting slower responses from K-3724 and K-2314. Selecting subproblem 385. Responses from K-2615 and K-1488 are in conflict – inhibiting both. Selecting…”

Brian switched it off. “Did you understand that?”

“Not really. Except that the program was busy inhibiting things –”

“Yes, and that was its problem. It was supposed to learn from experience, by rewarding successful subunits and inhibiting the ones that failed. But the manager’s threshold for success had been set so high that it would accept only perfect and instant compliance. So it was rewarding only the units that responded quickly, and disconnecting the slower ones – even if what they were trying to do might have been better in the end.”

“I see. And that started a domino effect because as each subunit was inhibited, that weakened other units’ connection to it?”

“Exactly. And then the responses of those other units became slower until they got inhibited in turn. Before long the manager program had killed them all off.”

“What a horrible thought! You are saying, really, that it committed suicide.”

“Not at all.” His voice was hoarse, fatigue abraded his temper. “When you say that, you are just being anthropomorphic. A machine is not a person. What on earth is horrible about one circuit disconnecting another circuit? Christ – there’s nothing here but a bunch of electronic components and software. Since there are no human beings involved nothing horrible can possibly occur, that’s pretty obvious—”

“Don’t speak to me that way or use that tone of voice!”

Brian’s face reddened with anger, then he dropped his eyes. “I’m sorry, I take that back. I’m a little tired, I think.”

“You think – I know. Apology accepted. And I agree, I was being anthropomorphic. It wasn’t what you said to me – it was how you said it. Now let’s stop snapping at each other and get some fresh air. And get you to bed.”

“All right – but let me look at this first.”

Brian went straight to the terminal and proceeded to retrace the robot’s internal computations. Chart after chart appeared on the screen. Eventually he nodded gloomily. “Another bug of course. It only showed up after I fixed the last one. You remember, I set things up to suppress excessive inhibition, so that the robot would not spontaneously shut itself down. But now it goes to the opposite extreme. It doesn’t know when it ought to stop.

“This AI seems to be pretty good at answering straightforward questions, but only when the answer can be found with a little shallow reasoning. But you saw what happened when it didn’t know the answer. It began random searching, lost its way, didn’t know when to stop. You might say that it didn’t know what it didn’t know.”

“It seemed to me that it simply went mad.”

“Yes, you could say that. We have lots of words for human-mind bugs – paranoias, catatonias, phobias, neuroses, irrationalities. I suppose we’ll need new sets of words for all the new bugs that our robots will have. And we have no reason to expect that any new version should work the first time it’s turned on. In this case, what happened was that it tried to use all of its Expert Systems together on the same problem. The manager wasn’t strong enough to suppress the inappropriate ones. All those jumbles of words showed that it was grasping at any and every association that might conceivably have guided it toward the problem it needed to solve – no matter how unlikely on the face of it. It also showed that when one approach failed, the thing didn’t know when to give up. Even if this AI worked there is no rule that it had to be sane on our terms.”

Brian rubbed his bristly jaw and looked at the now silent machine. “Let’s look more closely here.” He pointed to the chart on the machine. “You can see right here what happened this time. In Rob-3.1 there was too much inhibition, so everything shut down. So I changed these parameters and now there’s not enough inhibition.”

“So what’s the solution?”

“The answer is that there is no answer. No, I don’t mean anything mystical. I mean that the manager here has to have more knowledge. Precisely because there’s no magic, no general answer. There’s no simple fix that will work in all cases – because all cases are different. And once you recognize that, everything is much clearer! This manager must be knowledge-based. And then it can learn what to do!”

“Then you’re saying that we must make a manager to learn which strategy to use in each situation, by remembering what worked in the past?”

“Exactly. Instead of trying to find a fixed formula that always works, let’s make it learn from experience, case by case. Because we want a machine that’s intelligent on its own, so that we don’t have to hang around forever, fixing it whenever anything goes wrong. Instead we must give it some ways to learn to fix new bugs as soon as they come up. By itself, without our help.”

“So now I know just what to do. Remember when it seemed stuck in a loop, repeating the same things about the color red? It was easy for us to see that it wasn’t making any progress. It couldn’t see that it was stuck, precisely because of being stuck. It couldn’t jump out of that loop to see what it was doing on a larger scale. We can fix that by adding a recorder to remember the history of what it has been doing recently. And also a clock that interrupts the program frequently, so that it can look at that recording to see if it has been repeating itself.”

“Or even better we could add a second processor that is always running at the same time, looking at the first one. A B-brain watching an A-brain.”

“And perhaps even a C-brain to see if the B-brain has got stuck. Damn! I just remembered that one of my old notes said, ‘Use the B-brain here to suppress looping.’ I certainly wish I had written clearer notes the first time around. I better get started on designing that B-brain.”

“But you’d better not do it now! In your present state, you’ll just make it worse.”

“You’re right. Bedtime. I’ll get there, don’t worry – but I want to get something to eat first.”

“I’ll go with you, have a coffee.”

Brian let them out and blinked at the bright sunshine. “That sounds as though you don’t trust me.”

“I don’t. Not after last night!”

Shelly sipped at her coffee while Brian worked his way through a Texas breakfast – steak, eggs and flapjacks. He couldn’t quite finish it all, sighed and pushed the plate away. Except for two guards just off duty, sitting at a table on the far wall, they were alone in the mess hall.

“I’m feeling slightly less inhuman,” he said. “More coffee?”

“I’ve had more than enough, thank you. Do you think that you can fix your screw-loose AI?”

“No. I was getting so annoyed at the thing that I’ve wiped its memory. We will have to rewrite some of the program before we load it again. Which will take a couple of hours. Even LAMA-5’s assembler takes a long time on a system this large. And this time I’m going to make a backup copy before we run the new version.”

“A backup means a duplicate. When you do get a functioning humanoid artificial intelligence – do you think that you will be able to copy it as well?”

“Of course. Whatever it does – it will still just be a program. Every copy of a program is absolutely identical. Why do you ask?”

“It’s a matter of identity, I guess. Will the second AI be the same as the first?”

“Yes – but only at the instant it is copied. As soon as it begins to run, to think for itself, it will start changing. Remember, we are our memories. When we forget something, or learn something new, we produce a new thought or make a new connection – we change. We are someone different. The same will apply to an AI.”

“Can you be sure of that?” she asked doubtfully.

“Positive. Because that is how mind functions. Which means I have a lot of work to do in weighting memory. It’s the same reason why so many earlier versions of Robin failed. The credit assignment problem that we talked about before. It is really not enough to learn just by short-term stimulus-response-reward methods – because this will solve only simple, short-term problems. Instead, there must be a larger scale reflective analysis, in which you think over your performance on a longer scale, to recognize which strategies really worked, and which of them led to sidetracks, moves that seemed to make progress but eventually led to dead ends.”

“You make the mind sound like – well – an onion!”

“It is.” He smiled at the thought. “A good analogy. Layer within layer and all interconnected. Human memory is not merely associative, connecting situations, responses and rewards. It is also prospective and reflective. The connections made must also be involved with long-range goals and plans. That is why there is this important separation between short-term and long-term memory. Why does it take about an hour to long-term memorize anything? Because there must be a buffer period to decide which behaviors actually were beneficial enough to record.”

Sudden fatigue hit him. The coffee was cold; his head was beginning to ache; depression was closing in. Shelly saw this, lightly touched his hand.

“Time to retire,” she said. He nodded sluggish agreement and struggled to push back the chair.
