The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution

Walter Isaacson

When he finished his speech, his audience sat for a moment in silence, stunned by Turing’s claims. Likewise, his colleagues at the National Physical Laboratory were flummoxed by Turing’s obsession with making thinking machines. The director of the National Physical Laboratory, Sir Charles Darwin (grandson of the evolutionary biologist), wrote to his superiors in 1947 that Turing “wants to extend his work on the machine still further towards the biological side” and to address the question “Could a machine be made that could learn by experience?”91

Turing’s unsettling notion that machines might someday be able to think like humans provoked furious objections at the time—as it has ever since. There were the expected religious objections and also those that were emotional, both in content and in tone. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain,” declared a famous brain surgeon, Sir Geoffrey Jefferson, in the prestigious Lister Oration in 1949.92 Turing’s response to a reporter from the London Times seemed somewhat flippant, but also subtle: “The comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”93

The ground was thus laid for Turing’s second seminal work, “Computing Machinery and Intelligence,” published in the journal Mind in October 1950.94 In it he devised what became known as the Turing Test. He began with a clear declaration: “I propose to consider the question, ‘Can machines think?’ ” With a schoolboy’s sense of fun, he then invented a game—one that is still being played and debated—to give empirical meaning to that question. He proposed a purely operational definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine is not “thinking.”

Turing’s test, which he called “the imitation game,” is simple: An interrogator sends written questions to a human and a machine in another room and tries to determine from their answers which one is the human. A sample interrogation, he wrote, might be the following:

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?

A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?

A: (After a pause of 15 seconds) R–R8 mate.

In this sample dialogue, Turing did a few things. Careful scrutiny shows that the respondent, after thirty seconds, made a slight mistake in addition (the correct answer is 105,721). Is that evidence that the respondent was a human? Perhaps. But then again, maybe it was a machine cagily pretending to be human. Turing also flicked away Jefferson’s objection that a machine cannot write a sonnet; perhaps the answer above was given by a human who admitted to that inability. Later in the paper, Turing imagined the following interrogation to show the difficulty of using sonnet writing as a criterion of being human:
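A minimal sketch in Python (our illustration, not anything in Turing’s paper) makes the planted slip explicit by replaying the exchange and checking the sum:

```python
# Replay Turing's sample interrogation and expose the deliberate arithmetic error.
transcript = [
    ("Please write me a sonnet on the subject of the Forth Bridge.",
     "Count me out on this one. I never could write poetry."),
    ("Add 34957 to 70764.", "105621"),
    ("Do you play chess?", "Yes."),
]

for question, answer in transcript:
    print(f"Q: {question}\nA: {answer}")

claimed = 105621
print("Correct sum:", 34957 + 70764)                     # 105721
print("Respondent correct?", claimed == 34957 + 70764)   # False: off by exactly 100
```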

Q: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

A: It wouldn’t scan.

Q: How about “a winter’s day.” That would scan all right.

A: Yes, but nobody wants to be compared to a winter’s day.

Q: Would you say Mr. Pickwick reminded you of Christmas?

A: In a way.

Q: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

A: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

Turing’s point was that it might not be possible to tell whether such a respondent was a human or a machine pretending to be a human.

Turing gave his own guess as to whether a computer might be able to win this imitation game: “I believe that in about fifty years’ time it will be possible to programme computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”

In his paper Turing tried to rebut the many possible challenges to his definition of thinking. He swatted away the theological objection that God has bestowed a soul and thinking capacity only upon humans, arguing that this “implies a serious restriction of the omnipotence of the Almighty.” He asked whether God “has freedom to confer a soul on an elephant if He sees fit.” Presumably so. By the same logic, which, coming from the nonbelieving Turing, was somewhat sardonic, surely God could confer a soul upon a machine if He so desired.

The most interesting objection, especially for our narrative, is the one that Turing attributed to Ada Lovelace. “The Analytical Engine has no pretensions whatever to originate anything,” she wrote in 1843. “It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” In other words, unlike the human mind, a mechanical contrivance cannot have free will or come up with its own initiatives. It can merely perform as programmed. In his 1950 paper, Turing devoted a section to what he dubbed “Lady Lovelace’s Objection.”

His most ingenious parry to this objection was his argument that a machine might actually be able to learn, thereby growing into its own agent and able to originate new thoughts. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked. “If this were then subjected to an appropriate course of education, one would obtain the adult brain.” A machine’s learning process would be different from a child’s, he admitted. “It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. . . . One could not send the creature to school without the other children making excessive fun of it.” The baby machine would therefore have to be tutored some other way. Turing proposed a punishment and reward system, which would cause the machine to repeat certain activities and avoid others. Eventually such a machine could develop its own conceptions about how to figure things out.
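Turing specified no mechanism, but his punishment-and-reward scheme maps naturally onto what is now called reinforcement learning. A toy sketch, with an invented three-action scenario standing in for the tutor:

```python
import random

# A toy punishment-and-reward loop (our illustration; Turing specified no mechanism).
# The machine keeps a score for each action and gravitates toward whatever is rewarded.
actions = ["fetch", "speak", "wait"]
scores = {a: 0.0 for a in actions}

def tutor(action: str) -> float:
    """Stand-in teacher: reward 'speak', punish everything else (an invented rule)."""
    return 1.0 if action == "speak" else -1.0

for step in range(200):
    if random.random() < 0.1:                 # occasionally explore a new behavior
        action = random.choice(actions)
    else:                                     # otherwise repeat what has paid off
        action = max(scores, key=scores.get)
    scores[action] += tutor(action)           # reward reinforces, punishment suppresses

print(max(scores, key=scores.get))            # after tutoring: 'speak'
```

Whatever the tutor rewards comes to dominate the machine’s behavior, which is the seed of Turing’s answer to Lady Lovelace: the final policy was learned, not ordered.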

But even if a machine could mimic thinking, Turing’s critics objected, it would not really be conscious. When the human player of the Turing Test uses words, he associates those words with real-world meanings, emotions, experiences, sensations, and perceptions. Machines don’t. Without such connections, language is just a game divorced from meaning.

This objection led to the most enduring challenge to the Turing Test, which was in a 1980 essay by the philosopher John Searle. He proposed a thought experiment, called the Chinese Room, in which an English speaker with no knowledge of Chinese is given a comprehensive set of rules instructing him on how to respond to any combination of Chinese characters by handing back a specified new combination of Chinese characters. Given a good enough instruction manual, the person might convince an interrogator that he was a real speaker of Chinese. Nevertheless, he would not have understood a single response that he made, nor would he have exhibited any intentionality. In Ada Lovelace’s words, he would have no pretensions whatever to originate anything but instead would merely do whatever actions he was ordered to perform. Similarly, the machine in Turing’s imitation game, no matter how well it could mimic a human being, would have no understanding or consciousness of what it was saying. It makes no more sense to say that the machine “thinks” than it does to say that the fellow following the massive instruction manual understands Chinese.95
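Searle’s scenario is, at bottom, a lookup procedure, which a few lines of Python can make concrete (the entries are invented placeholders; the argument does not depend on them):

```python
# The Chinese Room reduced to its skeleton: a rule book as a lookup table.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "Fine, thanks."
    "你会思考吗？": "这是个好问题。",   # "Can you think?" -> "Good question."
}

def room(symbols: str) -> str:
    """Hand back whatever combination the manual specifies; nothing is understood."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

print(room("你好吗？"))   # fluent output, zero comprehension inside the room
```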

One response to the Searle objection is to argue that, even if the man does not really understand Chinese, the entire system incorporated in the room—the man (processing unit), instruction manual (program), and files full of Chinese characters (the data)—as a whole might indeed understand Chinese. There’s no conclusive answer. Indeed, the Turing Test and the objections to it remain to this day the most debated topic in cognitive science.

For a few years after he wrote “Computing Machinery and Intelligence,” Turing seemed to enjoy engaging in the fray that he provoked. With wry humor, he poked at the pretensions of those who prattled on about sonnets and exalted consciousness. “One day ladies will take their computers for walks in the park and tell each other ‘My little computer said such a funny thing this morning!’ ” he japed in 1951. As his mentor Max Newman later noted, “His comical but brilliantly apt analogies with which he explained his ideas made him a delightful companion.”96

One topic that came up repeatedly in discussions with Turing, and would soon have a sad resonance, was the role that sexual appetites and emotional desires play in human thinking, unlike in machines. A very public example occurred in a January 1952 televised BBC debate that Turing had with the brain surgeon Sir Geoffrey Jefferson, moderated by Max Newman and the philosopher of science Richard Braithwaite. “A human’s interests are determined, by and large, by his appetites, desires, drives, instincts,” said Braithwaite, who argued that to create a true thinking machine, “it would seem to be necessary to equip the machine with something corresponding to a set of appetites.” Newman chimed in that machines “have rather restricted appetites, and they can’t blush when they’re embarrassed.” Jefferson went even further, repeatedly using “sexual urges” as an example and referring to a human’s “emotions and instincts, such as those to do with sex.” Man is prey to “sexual urges,” he said, and “may make a fool of himself.” He spoke so much about how sexual appetites affected human thinking that the BBC editors cut some of it out of the broadcast, including his assertion that he would not believe a machine could think until he saw it touch the leg of a female machine.97

Turing, who was still rather discreet about being a homosexual, fell quiet during this part of the discussion. During the weeks leading up to the recording of the broadcast on January 10, 1952, he was engaged in a series of actions that were so very human that a machine would have found them incomprehensible. He had just finished a scientific paper, and he followed it by writing a short story about how he planned to celebrate: “It was quite some time now since he had ‘had’ anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable.”98

On Oxford Street in Manchester, he picked up a nineteen-year-old working-class drifter named Arnold Murray and began a relationship. When he returned from taping the BBC show, he invited Murray to move in. One night Turing told young Murray of his fantasy of playing chess against a nefarious computer that he was able to beat by causing it to show anger, then pleasure, then smugness. The relationship became more complex in the ensuing days, until Turing returned home one evening and found that his house had been burglarized. The culprit was a friend of Murray’s. When Turing reported the incident to the police, he ended up disclosing to them his sexual relationship with Murray, and they arrested Turing for “gross indecency.”99

At the trial in March 1952, Turing pled guilty, though he made clear he felt no remorse. Max Newman appeared as a character witness. Convicted and stripped of his security clearance,VI Turing was offered a choice: imprisonment or probation contingent on receiving hormone treatments via injections of a synthetic estrogen designed to curb his sexual desires, as if he were a chemically controlled machine. He chose the latter, which he endured for a year.

Turing at first seemed to take it all in stride, but on June 7, 1954, he committed suicide by biting into an apple he had laced with cyanide. His friends noted that he had always been fascinated by the scene in Snow White in which the Wicked Queen dips an apple into a poisonous brew. He was found in his bed with froth around his mouth, cyanide in his system, and a half-eaten apple by his side.

Was that something a machine would have done?

I. Stirling’s formula, which approximates the value of the factorial of a number.
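For reference, the approximation itself (supplied here; the footnote only names it) is:

```latex
n! \approx \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n}
```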

II. The display and explanations of the Mark I at Harvard’s science center made no mention of Grace Hopper nor pictured any women until 2014, when the display was revised to highlight her role and that of the programmers.

III. Von Neumann was successful in this. The plutonium implosion design would result in the first detonation of an atomic device, the Trinity test, in July 1945 near Alamogordo, New Mexico, and it would be used for the bomb that was dropped on Nagasaki on August 9, 1945, three days after the uranium bomb was used on Hiroshima. With his hatred of both the Nazis and the Russian-backed communists, von Neumann became a vocal proponent of atomic weaponry. He attended the Trinity test, as well as later tests on Bikini Atoll in the Pacific, and he argued that a thousand radiation deaths was an acceptable price to pay for the United States attaining a nuclear advantage. He would die twelve years later, at age fifty-three, of bone and pancreatic cancer, which may have been caused by the radiation emitted during those tests.

IV. In 1967, at age sixty, Hopper was recalled to active duty in the Navy with the mission of standardizing its use of COBOL and validating COBOL compilers. By vote of Congress, she was permitted to extend her tour beyond retirement age. She attained the rank of rear admiral, and finally retired in August 1986 at age seventy-nine as the Navy’s oldest serving officer.

V. The U.S. Constitution empowers Congress “to promote the progress of science and useful arts by securing for limited times to authors and inventors the exclusive Right to their respective writings and discoveries.” The U.S. Patent and Trademark Office throughout the 1970s generally would not grant patents to innovations whose only departure from existing technology was the use of a new software algorithm. That became murky in the 1980s with conflicting appeals court and Supreme Court rulings. Policies changed in the mid-1990s, when the DC Circuit Court issued a series of rulings permitting patents for software that produces a “useful, concrete and tangible result” and President Bill Clinton appointed as head of the Patent Office a person who had been the chief lobbyist for the software publishing industry.

VI. At Christmas 2013 Turing was posthumously granted a formal pardon by Queen Elizabeth II.

John Bardeen (1908–91), William Shockley (1910–89), and Walter Brattain (1902–87) in a Bell Labs photograph in 1948.

The first transistor at Bell Labs.

William Shockley (at head of table) the day he won the Nobel Prize being toasted by colleagues, including Gordon Moore (seated left) and Robert Noyce (standing center with wine glass) in 1956.
CHAPTER FOUR

THE TRANSISTOR

The invention of computers did not immediately launch a revolution. Because they relied on large, expensive, fragile vacuum tubes that consumed a lot of power, the first computers were costly behemoths that only corporations, research universities, and the military could afford. Instead the true birth of the digital age, the era in which electronic devices became embedded in every aspect of our lives, occurred in Murray Hill, New Jersey, shortly after lunchtime on Tuesday, December 16, 1947. That day two scientists at Bell Labs succeeded in putting together a tiny contraption they had concocted from some strips of gold foil, a chip of semiconducting material, and a bent paper clip. When wiggled just right, it could amplify an electric current and switch it on and off. The transistor, as the device was soon named, became to the digital age what the steam engine was to the Industrial Revolution.

The advent of transistors, and the subsequent innovations that allowed millions of them to be etched onto tiny microchips, meant that the processing power of many thousands of ENIACs could be nestled inside the nose cone of rocket ships, in computers that could sit on your lap, in calculators and music players that could fit in your pocket, and in handheld devices that could exchange information or entertainment with any nook or node of a networked planet.

Three passionate and intense colleagues, whose personalities both complemented and conflicted with one another, would go down in history as the inventors of the transistor: a deft experimentalist named Walter Brattain, a quantum theorist named John Bardeen, and the most passionate and intense of them all—tragically so by the end—a solid-state physics expert named William Shockley.

But there was another player in this drama that was actually as important as any individual: Bell Labs, where these men worked. What made the transistor possible was a mixture of diverse talents rather than just the imaginative leaps of a few geniuses. By its nature, the transistor required a team that threw together theorists who had an intuitive feel for quantum phenomena with materials scientists who were adroit at baking impurities into batches of silicon, along with dexterous experimentalists, industrial chemists, manufacturing specialists, and ingenious tinkerers.

BELL LABS

In 1907 the American Telephone and Telegraph Company faced a crisis. The patents of its founder, Alexander Graham Bell, had expired, and it seemed in danger of losing its near-monopoly on phone services. Its board summoned back a retired president, Theodore Vail, who decided to reinvigorate the company by committing to a bold goal: building a system that could connect a call between New York and San Francisco. The challenge required combining feats of engineering with leaps of pure science. Making use of vacuum tubes and other new technologies, AT&T built repeaters and amplifying devices that accomplished the task in January 1915. On the historic first transcontinental call, in addition to Vail and President Woodrow Wilson, was Bell himself, who echoed his famous words from thirty-nine years earlier, “Mr. Watson, come here, I want to see you.” This time his former assistant Thomas Watson, who was in San Francisco, replied, “It would take me a week.”1

Thus was the seed planted for a new industrial organization that became known as Bell Labs. Originally located on the western edge of Manhattan’s Greenwich Village overlooking the Hudson River, it brought together theoreticians, materials scientists, metallurgists, engineers, and even AT&T pole climbers. It was where George Stibitz developed a computer using electromagnetic relays and Claude Shannon worked on information theory. Like Xerox PARC and other corporate research satellites that followed, Bell Labs showed how sustained innovation could occur when people with a variety of talents were brought together, preferably in close physical proximity where they could have frequent meetings and serendipitous encounters. That was the upside. The downside was that these were big bureaucracies under corporate thumbs; Bell Labs, like Xerox PARC, showed the limits of industrial organizations when they don’t have passionate leaders and rebels who can turn innovations into great products.

The head of Bell Labs’ vacuum-tube department was a high-octane Missourian named Mervin Kelly, who had studied to be a metallurgist at the Missouri School of Mines and then got a PhD in physics under Robert Millikan at the University of Chicago. He was able to make vacuum tubes more reliable by devising a water-cooling system, but he realized that tubes would never be an efficient method of amplification or switching. In 1936 he was promoted to research director of Bell Labs, and his first priority was to find an alternative.

Kelly’s great insight was that Bell Labs, which had been a bastion of practical engineering, should also focus on basic science and theoretical research, until then the domain of universities. He began a search for the country’s brightest young physics PhDs. His mission was to make innovation something that an industrial organization could do on a regular basis rather than ceding that territory to eccentric geniuses holed up in garages and garrets.

“It had become a matter of some consideration at the Labs whether the key to invention was a matter of individual genius or collaboration,” Jon Gertner wrote in The Idea Factory, a study of Bell Labs.2 The answer was both. “It takes many men in many fields of science, pooling their various talents, to funnel all the necessary research into the development of one new device,” Shockley later explained.3 He was right. He was also, however, showing a rare flash of feigned humility. More than anyone, he believed in the importance of the individual genius, such as himself. Even Kelly, the proselytizer for collaboration, realized that individual genius also needed to be nurtured. “With all the needed emphasis on leadership, organization and teamwork, the individual has remained supreme—of paramount importance,” he once said. “It is in the mind of a single person that creative ideas and concepts are born.”4

The key to innovation—at Bell Labs and in the digital age in general—was realizing that there was no conflict between nurturing individual geniuses and promoting collaborative teamwork. It was not either-or. Indeed, throughout the digital age, the two approaches went together. Creative geniuses (John Mauchly, William Shockley, Steve Jobs) generated innovative ideas. Practical engineers (Presper Eckert, Walter Brattain, Steve Wozniak) partnered closely with them to turn concepts into contraptions. And collaborative teams of technicians and entrepreneurs worked to turn the invention into a practical product. When part of this ecosystem was lacking, such as for John Atanasoff at Iowa State or Charles Babbage in the shed behind his London home, great concepts ended up being consigned to history’s basement. And when great teams lacked passionate visionaries, such as Penn after Mauchly and Eckert left, Princeton after von Neumann, or Bell Labs after Shockley, innovation slowly withered.

The need to combine theorists with engineers was particularly true in a field that was becoming increasingly important at Bell Labs: solid-state physics, which studied how electrons flow through solid materials. In the 1930s, Bell Labs engineers were tinkering with materials such as silicon—after oxygen the most common element in the earth’s crust and a key component of sand—in order to juice them into performing electronic tricks. At the same time in the same building, Bell theorists were wrestling with the mind-bending discoveries of quantum mechanics.

Quantum mechanics is based on theories developed by the Danish physicist Niels Bohr and others about what goes on inside an atom. In 1913 Bohr had come up with a model of atomic structure in which electrons orbited around a nucleus at specific levels. They could make a quantum leap from one level to the next, but never be in between. The number of electrons in the outer orbital level helped to determine the chemical and electronic properties of the element, including how well it conducted electricity.
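For hydrogen, the allowed levels of the Bohr model follow a standard textbook formula (supplied here for context; it does not appear in Isaacson’s text):

```latex
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots
```

A “quantum leap” from the n = 2 level down to n = 1, for instance, releases the 10.2 eV difference as a photon; nothing in between is allowed.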

Some elements, such as copper, are good conductors of electricity. Others, such as sulfur, are horrible conductors, and are thus good insulators. And then there are those in between, such as silicon and germanium, which are known as semiconductors. What makes them useful is that they are easy to manipulate into becoming better conductors. For example, if you contaminate silicon with a tiny amount of arsenic or boron, its electrons become more free to move. Arsenic, with five valence electrons to silicon’s four, donates a spare electron that is free to roam; boron, with only three, leaves behind a “hole” that neighboring electrons can hop into, which also carries current.

The advances in quantum theory came at the same time that metallurgists at Bell Labs were finding ways to create new materials using novel purification techniques, chemical tricks, and recipes for combining rare and ordinary minerals. In seeking to solve some everyday problems, like vacuum-tube filaments that burned out too quickly or telephone-speaker diaphragms that sounded too tinny, they were mixing new alloys and developing methods to heat or cool concoctions until they performed better. By trial and error, like cooks in a kitchen, they were creating a revolution in materials science that would go hand in hand with the theoretical revolution that was occurring in quantum mechanics.

As they experimented with their samples of silicon and germanium, the chemical engineers at Bell Labs stumbled across evidence for much of what the theorists were conjecturing.I It became clear that there was a lot that the theorists, engineers, and metallurgists could learn from one another. So in 1936 a solid-state study group was formed at Bell Labs that included a potent mix of practical and theoretical stars. It met once a week in the late afternoon to share findings, engage in a bit of academic-style trash talk, and then adjourn for informal discussions that lasted late into the night. There was value to getting together in person rather than just reading each other’s papers: the intense interactions allowed ideas to be kicked into higher orbits and, like electrons, occasionally break loose to spark chain reactions.

Of all the people in the group, one stood out. William Shockley, a theorist who had arrived at Bell Labs right when the study group was being formed, impressed the others, and sometimes frightened them, with both his intellect and his intensity.

WILLIAM SHOCKLEY

William Shockley grew up with a love of both art and science. His father studied mine engineering at MIT, took music courses in New York, and learned seven languages as he wandered through Europe and Asia as an adventurer and mineral speculator. His mother majored in both math and art at Stanford and was one of the first known climbers to succeed in a solo ascent of Mt. Whitney. They met in a tiny Nevada mining village, Tonopah, where he was staking claims and she had gone to do surveying work. After they were married, they moved to London, where their son was born in 1910.

William would be their only child, and for that they were thankful. Even as a baby he had a ferocious temper, with fits of rage so loud and long that his parents kept losing babysitters and apartments. In a journal his father described the boy “screaming at the top of his voice and bending and throwing himself back” and recorded that he “has bitten his mother severely many times.”5 His tenacity was ferocious. In any situation, he simply had to have his way. His parents eventually adopted a policy of surrender. They abandoned any attempt to discipline him, and until he was eight they home-schooled him. By then they had moved to Palo Alto, where his mother’s parents lived.

Convinced that their son was a genius, William’s parents had him evaluated by Lewis Terman,II who had devised the Stanford–Binet IQ test and was planning a study of gifted children. Young Shockley scored in the high 120s, which was respectable but not enough for Terman to label him a genius. Shockley would become obsessed by IQ tests and use them to assess job applicants and even colleagues, and he developed increasingly virulent theories about race and inherited intelligence that would poison the later years of his life.6 Perhaps he should have learned from his own life the shortcomings of IQ tests. Despite being certified as a nongenius, he was smart enough to skip middle school and get a degree from Caltech and then a doctorate in solid-state physics from MIT. He was incisive, creative, and ambitious. Even though he loved performing magic tricks and playing practical jokes, he never learned to be easygoing or friendly. He had an intellectual and personal intensity, resonating from his childhood, that made him difficult to deal with, all the more so as he became successful.

When Shockley graduated from MIT in 1936, Mervin Kelly came up from Bell Labs to interview him and offered him a job on the spot. He also gave Shockley a mission: find a way to replace vacuum tubes with a device that was more stable, solid, and cheap. After three years, Shockley became convinced he could find a solution using solid material such as silicon rather than glowing filaments in a bulb. “It has today occurred to me that an amplifier using semiconductors rather than vacuum is in principle possible,” he wrote in his lab notebook on December 29, 1939.7

Shockley had the ability to visualize quantum theory, how it explained the movement of electrons, the way a choreographer can visualize a dance. His colleagues said that he could look at semiconducting material and see the electrons. However, in order to transform his artist’s intuitions into a real invention, Shockley needed a partner who was an adroit experimenter, just as Mauchly needed Eckert. This being Bell Labs, there were many in the building, most notably the merrily cantankerous westerner Walter Brattain, who enjoyed making ingenious devices with semiconducting compounds such as copper oxide. For example, he built electric rectifiers, which turn alternating current into direct current, based on the fact that current flows in only one direction through an interface where a piece of copper meets a layer of copper oxide.
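The one-way behavior at such an interface was later codified in the ideal diode equation, named, fittingly, for Shockley himself. It is quoted here as standard context, not as anything Brattain used at the time:

```latex
I = I_S\left(e^{\,V/(nV_T)} - 1\right)
```

Here I_S is the tiny reverse saturation current, V_T = kT/q (about 26 mV at room temperature) is the thermal voltage, and n is an ideality factor near 1: forward voltage makes the current grow exponentially, while reverse voltage pins it near −I_S. That asymmetry is what turns alternating current into direct current.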

Brattain grew up on an isolated ranch in eastern Washington State, where as a boy he herded cattle. With his raspy voice and homespun demeanor, he affected the self-deprecating style of a confident cowboy. He was a natural-born tinkerer with deft fingers, and he loved devising experiments. “He could put things together out of sealing wax and paper clips,” recalled an engineer he worked with at Bell Labs.8 But he also had a laid-back cleverness that led him to seek shortcuts rather than plod through repetitious trials.

