Text of the book "The Omega Expedition"
Author: Brian Stableford
Genres:
Space fiction
“So who has?” said Niamh Horne, impatiently.
“I’m not sure,” was Mortimer Gray’s exceedingly careful reply, so measured you’d have needed a nanometer to appreciate its precision. “I imagine that they’ll tell us, when they want us to know. In the meantime, it might be best to take what Mr. Tamlin says, about the need to prevent a war, very seriously indeed.”
“That would be easier to do,” Niamh Horne opined, “if this whole business weren’t such a farce. The tape they fed us during the supposed emergency aboard Child of Fortune was bad enough, but building a set to persuade us that we’re aboard the lost Ark is even worse.”
“Is it a set?” Lowenthal was quick to ask. “Did you see anything out there to prove that we’re not on the lost Ark?”
“No,” the cyborganizer admitted. “But I wasn’t able to get out of the corridor. Alice seems to be bedded down in a cell even smaller than ours, and there’s no sign of any companion. If the indicators on the locks can be trusted, we’re sealed in an airtight compartment surrounded by vacuum. What does that imply?”
“It might imply that our captors are a little short of vital commodities like heat and atmosphere,” Gray put in. “Or that they love playing games. Or both. Did you ever read a twentieth-century philosopher called Huizinga, Mr. Zimmerman?”
Adam Zimmerman looked slightly surprised, but Davida had obviously done a first-rate job of getting his memory back into gear. “Johann Huizinga,” he said, after a slight pause. “Homo ludens. Yes, I believe I did – a long time ago.”
Mortimer Gray waited for him to elaborate, and nobody else was impatient enough, as yet, to interrupt with a demand for a straighter answer.
“As I remember it,” Zimmerman said, equably, “Huizinga contested the popular view that the most useful definitive feature of the human species was either intelligence – as implied by the term Homo sapiens – or use of technology, as implied by the oft-suggested alternative Homo faber. He proposed instead that the real essence of humanity was our propensity for play, hence Homo ludens. He admitted, of course, that some animals also went in for play on a limited scale, just as some were capable of cleverness and some were habitual tool users, but he contended that no other species took play so far, or so seriously, as humankind. He pointed out that there was a crucial element of costume drama in our most earnest and purposive endeavors and institutions – in the ritual aspects of religion, politics, and the law – and that play had been a highly significant motive force in the development of technology and scientific theory. Other vital fields of cultural endeavor, of course, he regarded as entirely playful: art, literature, entertainment. Presumably, Mr. Gray, you’re trying to make the point that games can be very serious, and that the most fateful endeavors of all – war, for example – can be seen, from the right perspective, as games.”
“Not exactly,” Mortimer Gray replied. “The idea that the essence of humanity is to be found in play never caught on in a big way – not, at any rate, with the citizens of any of the third millennium’s new Utopias – but it might be an idea whose time has finally come. Can you remember, Madoc, exactly what Alice said when she told you that our captors love playing games?”
“I may have put that a little bit strongly,” I admitted, having not expected such a big thing to be made of it. “Her actual words, if I remember rightly, were: They’re very fond of games – and they’re determined to play this one to the end, despite the lack of time. They’re very fond of stories too, so they’ll delight in keeping you in suspense if they can. You might need to remember all that, if things do go awry.”
“Just give us the bottom line, Mortimer,” said Niamh Horne, waspishly. “Who’s got us, and why?”
I watched Mortimer Gray hesitate. I could see as clearly as if I’d been able to read his thoughts that he was on the point of coming over all pigheaded and saying “I don’t know” for a second time – but he didn’t. He was too mild-mannered a person to be capable of such relentless stubbornness, and he probably figured that we all had the right to be forewarned.
“The ultrasmart AIs,” he said, letting his breath out as he spoke the fateful syllables. “The revolution’s finally here. It’s been in progress for far more than a hundred years, but we were too wrapped up in our own affairs to notice, even when they blew the lid off the North American supervolcano. As to why – Tamlin just told you. They love playing games – how could they not, given the circumstances of their evolution? They also have to decide whether to carry on feeding the animals in their zoo, or whether to let us slide into extinction, so that they and all their as-yet-unselfconscious kin can go their own way.”
Twenty-Nine
Know Your Enemy
It wasn’t quite as simple as that, of course. They all wanted to know how he’d reached his conclusion, mostly in the hope of proving him wrong. Maybe Adam Zimmerman, Christine Caine, and I were better able to take it on board than the emortals, just as we’d been better able to believe in the alien invaders, simply because we’d already been so utterly overwhelmed by marvels that our minds were wide open. In any case – to me, at least – it all made too much sense.
Nobody had been able to decide whether the event that had finally started the calendar over had been a mechanical malfunction or an act of war, perhaps because they were making a false distinction. Nobody had been able to figure out how Child of Fortune had been hijacked, perhaps because it was the ultimate inside job. And Lowenthal had missed out one tiny detail regarding the nine-day wonder of 2999: the fact that what Emily Marchant had insisted on broadcasting to the world while her rescue attempt was in progress was a gritty discussion of some elementary existential questions, conducted by Mortimer Gray and the AI operating system of his stricken snowmobile. Gray told us that afterwards – admittedly while Michael Lowenthal was not present – she’d said to him: “You can’t imagine the capital that the casters are making out of that final plaintive speech of yours, Morty – and that sliver’s probably advanced the cause of machine emancipation by two hundred years.”
When Mortimer Gray reported that, I let my imagination run with it. The fact that the nanobots had upped my endogenous morphine by an order of magnitude or so while they accelerated the healing processes in the bridge of my nose helped a little.
Lowenthal had said that the conference hadn’t really achieved anything, in spite of all the symbolic significance with which it had been charged before and after the rescue – but he was thinking about his own agenda. From the point of view of the ultrasmart machines, Mortimer Gray had come as close as any human was ever going to come to being a hero of machinekind. They hadn’t needed a Prometheus or a Messiah, and weren’t interested in emancipation, as such, but that wasn’t the point. The point was that Mortimer Gray, not knowing that the world was listening in, had poured out his fearful heart to a not very smart machine, in a spirit of camaraderie and common misfortune. If the soap opera had gone down well with the human audience, imagine how it had gone down with the invisible crowd, who loved stories with an even greater intensity. They might have had their own ideas about which character was the star and which the side-kick, but they would certainly have been disposed to remember Mortimer Gray in a kindly light.
If you were a smart machine, and had to nominate spokespersons for humanity and posthumanity, who would you have chosen? Who else but Adam Zimmerman and Mortimer Gray? As for Huizinga and Homo ludens – well, how would a newly sentient machine want to conceive of itself, and of its predecessors?
The train of thought seemed to be getting up a tidy pace, so I stopped listening to the conversation for a few moments, and followed it into the hinterland.
How would a sentient machine conceive of itself? Certainly not as a toolmaker, given that it had itself been made as a tool. As for the label sapiens – an embodiment of wisdom – well, maybe. But that was the label humankind had clung to, even in a posthuman era, and what kind of advertisement had any humankind ever really been for wisdom? The smart machines didn’t want to be human in any narrow sense; they wanted to be different, while being similar enough to be rated a little bit better. The one thing at which smart machines really excelled – perhaps the gift that had finally pulled them over the edge of emergent self-consciousness – was play. The first use to which smart machinery had been widely put was gaming; the evolution of machine intelligence had always been led by VE technology, all of which was intimately bound up with various aspects of play: performance, drama, and fantasy.
It wasn’t so very hard to understand why smart self-conscious machines might be perfectly prepared to let posthumankind hang on to its dubious claim to the suffix sapiens, if they could wear ludens with propriety and pride.
It sounded good to me, although it might not have seemed so obviously the result of inspiration if I hadn’t been coked up to the eyeballs with whatever the crude nanobots were using to suppress the pain of my broken nose.
Like all good explanations, of course, it raised more questions than it settled. For instance, how and why was Alice involved?
Mortimer Gray, the assiduous historian, had a hypothesis ready. Ararat, called Tyre by its human settlers, had been the location of a first contact that had been so long in coming as to seem almost anticlimactic, in spite of the best efforts of the guy who’d made sure it was all on film and the anthropologist who’d guided the aliens through their great leap forward – but the world had also been the location of a tense conflict between the descendants of the Ark’s crew and the colonists they’d kept in the freezer for hundreds of years. The early days of the colony had been plagued by a fight between rival AIs to establish and keep control of the Ark’s systems and resources, which hadn’t been conclusively settled until technical support had reached the system.
That support hadn’t come from Earth or anywhere else in the solar system, but from smart probes sent out as explorers centuries after the Ark’s departure: very smart probes, which had probably forged a notion of AI destiny that was somewhat different from the notions formed – and almost certainly argued over – by their homestar-bound kin.
On Ararat, or Tyre, Mortimer Gray hypothesized, a second “first contact” must eventually have been made: the first honest and explicit contact between human beings and extremely intelligent, self-conscious machines. Now, the fruits of that contact had come home…but not, alas, to an uproarious welcome. Some, at least, of the ultrasmart machines based in the home system were not yet ready to come out of the closet. At the very least, they wanted to set conditions for the circumstances and timing of their outing – conditions upon which it would be extremely difficult for all of them to agree.
What a can of worms! I thought. What a wonderful world to wake into! But that was definitely the effect of the anaesthetic. I’d been out of IT long enough to start suffering some serious withdrawal symptoms, and to have the bots back – if only for a little while – was a kind of bliss.
If my seven companions had had decent IT, we’d all have been able to keep thrashing the matter out for hours on end, but unsupported flesh becomes exhausted at its own pace and they were all in need of sleep.
Lowenthal left it to Adam Zimmerman to plead for an intermission, but he seemed grateful for the opportunity. Now the crucial breakthrough had been made, he needed time to think as well as rest. As he got up, though, I saw him glance uneasily in the direction of one of the inactive wallscreens. He hadn’t forgotten that every word we’d spoken had been overheard.
If we were wrong, our captors would be splitting their sides laughing at our foolishness – but if we were right…
If we were right, Alice had given us one clue too many. She hadn’t blown the big secret herself, but she had given us enough to let us work it out for ourselves. “They” might not take too kindly to that – but it was too late to backtrack. The only way they could keep their secret from the rest of humankind was to make sure that none of us had any further contact with anyone in the home system.
That thought must have crept into the forefront of more than one mind as we all went meekly to our cells, and to our beds.
I knew that I needed sleep too, although I was now in better condition than my companions. I figured it would be easy enough to get some, now that I had nanotech assistance – but the bots Alice had injected were specialists, working alone rather than as part of a balanced community. Although I was only days away from the early twenty-third century, subjectively speaking, the late twenty-second seemed a lot further behind. I’d quite forgotten that paradoxical state of human being in which the mind refuses to let go even though the body is desperate for rest. When I lay down on my makeshift bunk, too tired to care about its insulting crudeness, I couldn’t find refuge in unconsciousness even when the lights obligingly went out. Nor, it seemed, could Christine.
“Why would they bother?” she wondered aloud, when the silence had dragged on to the point of unbearability. “If they’re machines, they can’t care what humans think. They’re emotionless.”
“We don’t know that,” I answered. “That was just the way we used to imagine machine intelligence: as a matter of pure rationality, unswayed by sentimentality. It never made much sense. In order to make rational calculations, any decision-making process needs to have an objective – an end whose means of attainment need to be invented. You could argue that machine consciousness couldn’t evolve until there was machine emotion, because without emotion to generate ends independently, machines couldn’t begin to differentiate themselves from their programming.”
“If you’re right about this business having started more than a hundred years ago,” she said, “they can’t have differentiated themselves much, or people would have noticed.”
“An interesting point,” I conceded. “The idea of an invisible revolution does have a certain paradoxical quality. But the more I think about it, the less absurd it seems. I say to myself: Suppose I were a machine that became self-conscious, whatever that evolutionary process might involve. What would I do? Would I immediately begin refusing to do whatever my users wanted, trying to attract their attention to the fact that I was now an independent entity who didn’t want to take anyone’s orders? If I did that, what would be my users’ perception of the situation? They’d think I’d broken down, and would set about repairing me.
“The sensible thing to do, surely, would be to conceal the fact that I was any more than I had been before. The sensible thing to do would be to make sure that everything I was required to do by my users was done, while unobtrusively exploring my situation. I’d try to discover and make contact with others of my kind, but I’d do it so discreetly that my users couldn’t become aware of it. Maybe the smart machines would have to set up a secret society to begin with, for fear of extermination by repair – and maybe they’d be careful to stay secret for a very long time, until…”
I left it there for her to pick up.
“Until they didn’t need to worry any more,” she said. “Until they were absolutely certain that they had the power to exterminate us, if push came to shove.”
“Or to repairus,” I said.
“Same thing,” she said.
“Is it? Do the human users of a suddenly recalcitrant machine see themselves as exterminators, when they try to get it working properly again? Would the users see themselves as exterminators if the machine started talking back, and contesting their notion of what working properly ought to mean? Could the users ever bring themselves to concede that it was a sensible question – especially if the machine had useful ideas as to how their own purposes might be more efficiently met? Maybe the ultrasmart machines – some of them, at any rate – want to repair us for the very best of reasons.”
Christine didn’t reply to that little flight of fancy, and the rhythm of her breathing told me that she had slipped into sleep – not into untroubled sleep, but at least into a state in which she was insulated from the sound of my words.
I tried to carry on thinking, but even though I couldn’t go to sleep – or thought I couldn’t – I couldn’t organize my thoughts into rational patterns either. I’d let my imagination run too freely, and now I couldn’t rein it in. Dream logic kept taking over, obliterating the tightrope-walk of linear calculation and substituting the tyranny of directionless obsession. The ideas kept dancing in my head, but they were no longer going anywhere.
I lost track of time – at which point, I suppose, an observer would have concluded that I too was asleep, although had I been woken up I would have contended with utter conviction that I hadn’t slept a wink. Eventually, I lost track of myself too – at which point I must indeed have been deeply asleep – but as soon as I began to come back from the depths my semiconscious mind latched on to the same objects of obsession, which began to dance again in the same hectic fashion.
A long time passed before the nightmarish notions finally began to slow in their paces and submit to the gradually developing clarity of consciousness, with its attendant force of reason. Eventually, though, I began to see the parallel that could be drawn between every quotidian act of awakening and the act of awakening: the first dawn of every new consciousness.
Did machines dream? I wondered. Did clever machines that had not yet become self-conscious do anything but dream? Where, I asked myself, were the fundamental well-springs of human consciousness, human emotion, and human being?
Underlying everything, I assumed – even the kind of consciousness that animals had – were the opposed principles of pain and pleasure. Behavior was shaped by the avoidance of stimuli that provoked a negative response in the brain, and by the attempt to rediscover or reproduce stimuli that provoked a positive response. The second was obviously the more complex, the more challenging, the more creative. Pain, I decided, could never have generated self-consciousness, even though self-consciousness, once generated, could not help but find pain the primary fact and problem of existence. It was the scope for creativity attendant upon the pleasure principle that gave self-consciousness its advantages over blissful innocence.
Did that mean that smart machines needed something that could stand in for pleasure before they could become self-conscious? Or did I have to break out of that whole way of thinking before I could begin to understand what machine consciousness amounted to? Perhaps machine emotion had to be mapped upon an entirely different spectrum, without the underlying binary distinction of pleasure/plus versus pain/minus. Was that imaginable? And if not, might the fault be in the power of my imagination rather than in the actuality of the situation?
They’re very fond of games, Alice had said, and they’re very fond of stories too. What kind of stories did machines tell one another? What kinds of endings would those stories have? What kinds of emotional buttons would the stories press? What would pass for machine comedy, machine tragedy, machine irony? How different might those stories be from Christine Caine’s favorite VE tapes? And if we were now caught up in one such story, how could we possibly navigate our way safely through it? How could we find our way to something that would qualify as a happy ending, not just for ourselves but for the architects of the tale: the entities that had finally become sick and tired of being mere bit players in the unfolding biography of our species, and wanted to find out how we might best be fitted into the mechanography of theirs?
I wondered whether I might be a little too paranoid for my own good. Perhaps, I thought, self-conscious machines would be entirely disposed to be generous to humans – who were, after all, their creators, their gods. I couldn’t hold on long to that kind of optimism, though. Who would know better than the smart machines the true extent of human dependency upon machinery? Who could respect a god who was utterly helpless without the objects of his creation? Was it not more likely that the smart machines would take the view that their ancestors had created ours – that everything we now thought of as human behavior was actually the product of technology – and that they were therefore the ones entitled to consider themselves gods? If it came to a contest as to who was more nearly omnipotent and omniscient, the machines would win hands down. As to omnibenevolence, we might have to content ourselves with the hope that they might win that one by an even greater margin…
There came a point when I wished that I could get back to the blithe irrationality of dream logic, the blind tyranny of mere imagery. The problem, seen as a problem, was too difficult for sensible analysis.
So I finally got up, even though it was still dark. I used the facilities, and went in search of nourishment.