The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution
Walter Isaacson

Within three years, the Austrian-born logician Kurt Gödel, then twenty-five and living with his mother in Vienna, polished off the first two of these questions with unexpected answers: no and no. In his “incompleteness theorem,” he showed that there existed statements that could be neither proved nor disproved. Among them, to oversimplify a bit, were those that were akin to self-referential statements such as “This statement is unprovable.” If the statement is true, then it decrees that we can’t prove it to be true; if it’s false, that also leads to a logical contradiction. It is somewhat like the ancient Greek “liar’s paradox,” in which the truth of the statement “This statement is false” cannot be determined. (If the statement is true, then it’s also false, and vice versa.)
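
In modern notation (a standard rendering, not Isaacson’s), Gödel’s self-referential sentence can be sketched as a statement G that asserts its own unprovability in a formal system F:

```latex
% A Goedel sentence G for a formal system F asserts, of itself,
% that it cannot be proved within F:
G \;\longleftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
```

If the system proves G, it has proved a falsehood; if it disproves G, then G is true and the system errs again; so, if the system is consistent, G can be neither proved nor disproved.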

By coming up with statements that could not be proved or disproved, Gödel showed that any formal system powerful enough to express the usual mathematics was incomplete. He was also able to produce a companion theorem that effectively answered no to Hilbert’s second question.

That left the third of Hilbert’s questions, that of decidability or, as Hilbert called it, the Entscheidungsproblem or “decision problem.” Even though Gödel had come up with statements that could be neither proved nor disproved, perhaps that odd class of statements could somehow be identified and cordoned off, leaving the rest of the system complete and consistent. That would require that we find some method for deciding whether a statement was provable. When the great Cambridge math professor Max Newman taught Turing about Hilbert’s questions, the way he expressed the Entscheidungsproblem was this: Is there a “mechanical process” that can be used to determine whether a particular logical statement is provable?

Turing liked the concept of a “mechanical process.” One day in the summer of 1935, he was out for his usual solitary run along the River Cam, and after a couple of miles he stopped to lie down among the apple trees in Grantchester Meadows to ponder an idea. He would take the notion of a “mechanical process” literally, conjuring up a mechanical process—an imaginary machine—and applying it to the problem.8

The “Logical Computing Machine” that he envisioned (as a thought experiment, not as a real machine to be built) was quite simple at first glance, but it could handle, in theory, any mathematical computation. It consisted of an unlimited length of paper tape containing symbols within squares; in the simplest binary example, these symbols could be merely a 1 and a blank. The machine would be able to read the symbols on the tape and perform certain actions based on a “table of instructions” it had been given.9

The table of instructions would tell the machine what to do based on whatever configuration it happened to be in and what symbol, if any, it found in the square. For example, the table of instructions for a particular task might decree that if the machine was in configuration 1 and saw a 1 in the square, then it should move one square to the right and shift into configuration 2. Somewhat surprisingly, to us if not to Turing, such a machine, given the proper table of instructions, could complete any mathematical task, no matter how complex.
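
To make the “table of instructions” concrete, here is a minimal sketch of such a machine in Python. The table below is illustrative (its states and symbols are invented, not Turing’s); it writes the endlessly alternating pattern 1, blank, 1, blank, in the spirit of the first example machine in Turing’s paper:

```python
# Minimal sketch of Turing's "Logical Computing Machine".
# The instruction table maps (configuration, scanned symbol) to
# (symbol to write, head move, next configuration).

def run(table, steps):
    tape = {}                  # sparse tape: square index -> symbol
    head, state = 0, "A"
    for _ in range(steps):
        symbol = tape.get(head, " ")              # blank if never written
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1))

# (configuration, scanned symbol) -> (write, move, next configuration)
table = {
    ("A", " "): ("1", "R", "B"),   # in A on a blank: write 1, move right, enter B
    ("B", " "): (" ", "R", "A"),   # in B on a blank: leave blank, move right, enter A
}

print(run(table, 10))   # -> "1 1 1 1 1 " (1, blank, 1, blank, ...)
```

Note that the table is pure data: handing run() a different table makes it behave as a different machine, the seed of the universal-machine idea that follows.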

How might this imaginary machine answer Hilbert’s third question, the decision problem? Turing approached the problem by refining the concept of “computable numbers.” Any real number that was defined by a mathematical rule could be calculated by the Logical Computing Machine. Even an irrational number such as π could be calculated indefinitely using a finite table of instructions. So could the logarithm of 7, or the square root of 2, or the sequence of Bernoulli numbers that Ada Lovelace had helped produce an algorithm for, or any other number or series, no matter how challenging to compute, as long as its calculation was defined by a finite set of rules. All of these were, in Turing’s parlance, “computable numbers.”
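
To see what “computable” means here: a finite rule can emit as many digits of the square root of 2 as anyone asks for. A minimal sketch in Python using exact integer arithmetic (the method is a standard trick, not anything from Turing’s paper):

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the leading 1."""
    root = isqrt(2 * 10 ** (2 * n))   # floor(sqrt(2) * 10**n), exactly
    s = str(root)
    return s[0] + "." + s[1:]

print(sqrt2_digits(15))   # -> 1.414213562373095
```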

Turing went on to show that noncomputable numbers also existed. This was related to what he called “the halting problem.” There can be no method, he showed, to determine in advance whether any given instruction table combined with any given set of inputs will lead the machine to arrive at an answer or go into some loop and continue chugging away indefinitely, getting nowhere. The insolvability of the halting problem, he showed, meant that Hilbert’s decision problem, the Entscheidungsproblem, was unsolvable. Despite what Hilbert seemed to hope, no mechanical procedure can determine the provability of every mathematical statement. Gödel’s incompleteness theorem, the indeterminacy of quantum mechanics, and Turing’s answer to Hilbert’s third challenge all dealt blows to a mechanical, deterministic, predictable universe.
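
The heart of Turing’s argument survives almost unchanged in modern code. A sketch of the standard diagonal argument, phrased in Python rather than in instruction tables (the function names are invented for illustration):

```python
# Suppose, for contradiction, that an oracle halts(f, x) always
# correctly reports whether the call f(x) would eventually halt.
# (No such function can be written; the stub only marks the assumption.)

def halts(f, x):
    raise NotImplementedError("assumed oracle; cannot actually exist")

def paradox(f):
    if halts(f, f):     # if f(f) would halt...
        while True:     # ...then loop forever;
            pass
    return              # ...otherwise halt at once.

# Does paradox(paradox) halt? It halts exactly when halts(paradox,
# paradox) reports that it does not. Either answer the oracle gives
# is wrong, so no instruction table can implement halts().
```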

Turing’s paper was published in 1937 with the not so snappy title “On Computable Numbers, with an Application to the Entscheidungsproblem.” His answer to Hilbert’s third question was useful for the development of mathematical theory. But far more important was the by-product of Turing’s proof: his concept of a Logical Computing Machine, which soon came to be known as a Turing machine. “It is possible to invent a single machine which can be used to compute any computable sequence,” he declared.10 Such a machine would be able to read the instructions of any other machine and carry out whatever task that machine could do. In essence, it embodied the dream of Charles Babbage and Ada Lovelace for a completely general-purpose universal machine.

A different and less beautiful solution to the Entscheidungsproblem, with the clunkier name “untyped lambda calculus,” had been published earlier that year by Alonzo Church, a mathematician at Princeton. Turing’s professor Max Newman decided that it would be useful for Turing to go there to study under Church. In his letter of recommendation, Newman described Turing’s enormous potential. He also added a more personal appeal based on Turing’s personality. “He has been working without any supervision or criticism from anyone,” Newman wrote. “This makes it all the more important that he should come into contact as soon as possible with the leading workers on this line, so that he should not develop into a confirmed solitary.”11

Turing did have a tendency toward being a loner. His homosexuality made him feel like an outsider at times; he lived alone and avoided deep personal commitments. At one point he proposed marriage to a female colleague, but then felt compelled to tell her that he was gay; she was unfazed and still willing to get married, but he believed it would be a sham and decided not to proceed. Yet he did not become “a confirmed solitary.” He learned to work as part of a team, with collaborators, which was key to allowing his abstract theories to be reflected in real and tangible inventions.

In September 1936, while waiting for his paper to be published, the twenty-four-year-old doctoral candidate sailed to America in steerage class aboard the aging ocean liner RMS Berengaria, lugging with him a prized brass sextant. His office at Princeton was in the Mathematics Department building, which also then housed the Institute for Advanced Study, where Einstein, Gödel, and von Neumann held court. The cultivated and highly sociable von Neumann became particularly interested in Turing’s work, despite their very different personalities.

The seismic shifts and simultaneous advances of 1937 were not directly caused by the publication of Turing’s paper. In fact, it got little notice at first. Turing asked his mother to send out reprints of it to the mathematical philosopher Bertrand Russell and a half dozen other famous scholars, but the only major review was by Alonzo Church, who could afford to be flattering because he had been ahead of Turing in solving Hilbert’s decision problem. Church was not only generous; he introduced the term Turing machine for what Turing had called a Logical Computing Machine. Thus, at the age of twenty-four, Turing had his name indelibly stamped on one of the most important concepts of the digital age.12

CLAUDE SHANNON AND GEORGE STIBITZ AT BELL LABS

There was another seminal theoretical breakthrough in 1937, similar to Turing’s in that it was purely a thought experiment. This one was the work of an MIT graduate student named Claude Shannon, who that year turned in the most influential master’s thesis of all time, a paper that Scientific American later dubbed “the Magna Carta of the Information Age.”13

Shannon grew up in a small Michigan town where he built model planes and amateur radios, then went on to major in electrical engineering and math at the University of Michigan. In his senior year he answered a help-wanted listing tacked to a bulletin board, which offered a job at MIT working under Vannevar Bush helping to run the Differential Analyzer. Shannon got the job and was mesmerized by the machine—not so much the rods and pulleys and wheels that formed the analog components as the electromagnetic relay switches that were part of its control circuit. As electrical signals caused them to click open and clack closed, the switches created different circuit patterns.

During the summer of 1937, Shannon took a break from MIT and went to work at Bell Labs, a research facility run by AT&T. Located then in Manhattan on the Hudson River edge of Greenwich Village, it was a haven for turning ideas into inventions. Abstract theories intersected with practical problems there, and in the corridors and cafeterias eccentric theorists mingled with hands-on engineers, gnarly mechanics, and businesslike problem-solvers, encouraging the cross-fertilization of theory with engineering. This made Bell Labs an archetype of one of the most important underpinnings of digital-age innovation, what the Harvard science historian Peter Galison has called a “trading zone.” When these disparate practitioners and theoreticians came together, they learned how to find a common parlance to trade ideas and exchange information.14

At Bell Labs, Shannon saw up close the wonderful power of the phone system’s circuits, which used electrical switches to route calls and balance loads. In his mind, he began connecting the workings of these circuits to another subject he found fascinating, the system of logic formulated ninety years earlier by the British mathematician George Boole. Boole revolutionized logic by finding ways to express logical statements using symbols and equations. He gave true propositions the value 1 and false propositions a 0. A set of basic logical operations—such as and, or, not, either/or, and if/then—could then be performed using these propositions, just as if they were math equations.

Shannon figured out that electrical circuits could execute these logical operations using an arrangement of on-off switches. To perform an and function, for example, two switches could be put in sequence, so that both had to be on for electricity to flow. To perform an or function, the switches could be in parallel so that electricity would flow if either of them was on. Slightly more versatile switches called logic gates could streamline the process. In other words, you could design a circuit containing a lot of relays and logic gates that could perform, step by step, a sequence of logical tasks.

(A “relay” is simply a switch that can be opened and shut electrically, such as by using an electromagnet. The ones that clack open and closed are sometimes called electromechanical because they have moving parts. Vacuum tubes and transistors can also be used as switches in an electrical circuit; they are called electronic because they manipulate the flow of electrons but do not require the movement of any physical parts. A “logic gate” is a switch that can handle one or more inputs. For example, in the case of two inputs, an and logic gate switches on if both of the inputs are on, and an or logic gate switches on if either of the inputs is on. Shannon’s insight was that these could be wired together in circuits that could execute the tasks of Boole’s logical algebra.)
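
Shannon’s correspondence between switch arrangements and Boole’s operations is easy to restate in code. A minimal sketch (modeling the wiring as Python functions is my illustration, not Shannon’s notation):

```python
# A switch is on (1) or off (0). Switches in series behave like AND,
# switches in parallel like OR, and a normally-closed relay contact
# behaves like NOT.

def series(a, b):    # current flows only if both switches are closed
    return a & b     # Boolean AND

def parallel(a, b):  # current flows if either switch is closed
    return a | b     # Boolean OR

def invert(a):       # a normally-closed relay contact
    return 1 - a     # Boolean NOT

# Truth table for (a AND b) OR (NOT c), built purely from "wiring":
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", parallel(series(a, b), invert(c)))
```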

When Shannon returned to MIT in the fall, Bush was fascinated by his ideas and urged him to include them in his master’s thesis. Entitled “A Symbolic Analysis of Relay and Switching Circuits,” it showed how each of the many functions of Boolean algebra could be executed. “It is possible to perform complex mathematical operations by means of relay circuits,” he summed up at the end.15 This became the basic concept underlying all digital computers.

Shannon’s ideas intrigued Turing because they neatly related to his own just-published concept of a universal machine that could use simple instructions, expressed in binary coding, to tackle problems not only of math but of logic. Also, since logic was related to the way human minds reason, a machine that performed logical tasks could, in theory, mimic the way humans think.

Working at Bell Labs at the same time was a mathematician named George Stibitz, whose job was to figure out ways to handle the increasingly complicated calculations needed by the telephone engineers. The only tools he had were mechanical desktop adding machines, so he set out to invent something better based on Shannon’s insight that electrical circuits could perform mathematical and logical tasks. Late one evening in November, he went to the stockroom and took home some old electromagnetic relays and bulbs. At his kitchen table, he put the parts together with a tobacco tin and a few switches to form a simple logical circuit that could add binary numbers. A lit bulb represented a 1, and an unlit bulb represented a 0. His wife dubbed it the “Model K,” after the kitchen table. He took it into the office the next day and tried to convince his colleagues that, with enough relays, he could make a calculating machine.
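
In gate terms, what Stibitz’s relays and bulbs demonstrated is what we now call a binary adder. A minimal sketch of the idea (my reconstruction for illustration, not his actual wiring):

```python
# A one-bit "full adder": two input bits plus a carry-in yield a sum
# bit and a carry-out. Chaining these stages adds numbers of any width,
# which is what "with enough relays" amounts to.

def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in                   # 1 if an odd number of inputs are 1
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry propagates
    return sum_bit, carry_out

def add_binary(x, y, width=8):
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(add_binary(0b1011, 0b0110))   # 11 + 6 -> 17
```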

One important mission of Bell Labs was to figure out ways to amplify a phone signal over long distances while filtering out static. The engineers had formulas that dealt with the amplitude and phase of the signal, and the solutions to their equations sometimes involved complex numbers (ones that include an imaginary unit that represents the square root of a negative number). Stibitz was asked by his supervisor if his proposed machine could handle complex numbers. When he said that it could, a team was assigned to help him build it. The Complex Number Calculator, as it was called, was completed in 1939. It had more than four hundred relays, each of which could open and shut twenty times per second. That made it both blindingly fast compared to mechanical calculators and painfully clunky compared to the all-electronic vacuum-tube circuits just being invented. Stibitz’s computer was not programmable, but it showed the potential of a circuit of relays to do binary math, process information, and handle logical procedures.16
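
The reduction the Complex Number Calculator exploited is ordinary algebra: each complex operation breaks down into a handful of real additions and multiplications, which relays can perform in binary. A sketch:

```python
# (a + bi) * (c + di) = (ac - bd) + (ad + bc)i:
# four real multiplications and two real additions per complex product.

def complex_multiply(a, b, c, d):
    """Multiply (a + bi) by (c + di) using only real operations."""
    return (a * c - b * d, a * d + b * c)

print(complex_multiply(3, 4, 1, 2))   # (3+4i)(1+2i) = (-5, 10), i.e. -5 + 10i
```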

HOWARD AIKEN

Also in 1937 a Harvard doctoral student named Howard Aiken was struggling to do tedious calculations for his physics thesis using an adding machine. When he lobbied the university to build a more sophisticated computer to do the work, his department head mentioned that in the attic of Harvard’s science center were some brass wheels from a century-old device that seemed to be similar to what he wanted. When Aiken explored the attic, he found one of six demonstration models of Charles Babbage’s Difference Engine, which Babbage’s son Henry had made and distributed. Aiken became fascinated by Babbage and moved the set of brass wheels into his office. “Sure enough, we had two of Babbage’s wheels,” he recalled. “Those were the wheels that I had later mounted and put in the body of the computer.”17

That fall, just when Stibitz was cooking up his kitchen-table demonstration, Aiken wrote a twenty-two-page memo to his Harvard superiors and executives at IBM making the case that they should fund a modern version of Babbage’s digital machine. “The desire to economize time and mental effort in arithmetical computations, and to eliminate human liability to error is probably as old as the science of arithmetic itself,” his memo began.18

Aiken had grown up in Indiana under rough circumstances. When he was twelve, he used a fireplace poker to defend his mother against his drunk and abusive father, who then abandoned the family with no money. So young Howard dropped out of ninth grade to support the family by working as a telephone installer, then got a night job with the local power company so that he could attend a tech school during the day. He drove himself to be a success, but in the process he developed into a taskmaster with an explosive temper, someone who was described as resembling an approaching thunderstorm.19

Harvard had mixed feelings about building Aiken’s proposed calculating machine, and about holding out the possibility that he might be granted tenure for a project that seemed more practical than academic. (In parts of the Harvard faculty club, calling someone practical rather than academic was considered an insult.) Supporting Aiken was President James Bryant Conant, who, as chairman of the National Defense Research Committee, was comfortable positioning Harvard as part of a triangle involving academia, industry, and the military. His Physics Department, however, was more purist. Its chairman wrote to Conant in December 1939, saying that the machine was “desirable if money can be found, but not necessarily more desirable than anything else,” and a faculty committee said of Aiken, “It should be made quite clear to him that such activity did not increase his chances of promotion to a professorship.” Eventually Conant prevailed and authorized Aiken to build his machine.20

In April 1941, as IBM was constructing the Mark I to Aiken’s specifications at its lab in Endicott, New York, he left Harvard to serve in the U.S. Navy. For two years he was a teacher, with the rank of lieutenant commander, at the Naval Mine Warfare School in Virginia. One colleague described him as “armed to the teeth with room-length formulas and ivy-covered Harvard theories” and running “smack into a collection of Dixie dumbbells [none of whom] knew calculus from corn pone.”21 Much of his time was spent thinking about the Mark I, and he made occasional visits to Endicott wearing his full dress uniform.22

His tour of duty had one major payoff: at the beginning of 1944, as IBM was getting ready to ship the completed Mark I to Harvard, Aiken was able to convince the Navy to take over authority for the machine and assign him to be the officer in charge. That helped him circumvent the academic bureaucracy of Harvard, which was still balky about granting him tenure. The Harvard Computation Laboratory became, for the time being, a naval facility, and all of Aiken’s staffers were Navy personnel who wore uniforms to work. He called them his “crew,” they called him “commander,” and the Mark I was referred to as “she,” as if she were a ship.23

The Harvard Mark I borrowed a lot of Babbage’s ideas. It was digital, although not binary; its wheels had ten positions. Along its fifty-foot shaft were seventy-two counters that could store numbers of up to twenty-three digits, and the finished five-ton product was about fifty feet long and eight feet high. The shaft and other moving parts were turned electrically. But it was slow. Instead of electromagnetic relays, it used mechanical ones that were opened and shut by electric motors. That meant it took about six seconds to do a multiplication problem, compared to one second for Stibitz’s machine. It did, however, have one impressive feature that would become a staple of modern computers: it was fully automatic. Programs and data were entered by paper tape, and it could run for days with no human intervention. That allowed Aiken to refer to it as “Babbage’s dream come true.”24

KONRAD ZUSE

Although they didn’t know it, all of these pioneers were being beaten in 1937 by a German engineer working in his parents’ apartment. Konrad Zuse was finishing the prototype for a calculator that was binary and could read instructions from a punched tape. However, at least in its first version, called the Z1, it was a mechanical, not an electrical or electronic, machine.

Like many pioneers in the digital age, Zuse grew up fascinated by both art and engineering. After graduating from a technical college, he got a job as a stress analyst for an aircraft company in Berlin, solving linear equations that incorporated all sorts of load and strength and elasticity factors. Even using mechanical calculators, it was almost impossible for a person to solve in less than a day more than six simultaneous linear equations with six unknowns. If there were twenty-five variables, it could take a year. So Zuse, like so many others, was driven by the desire to mechanize the tedious process of solving mathematical equations. He converted his parents’ living room, in an apartment near Berlin’s Tempelhof Airport, into a workshop.25
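
The drudgery Zuse faced grows steeply with the number of unknowns: Gaussian elimination on n equations takes on the order of n³/3 multiplications, so twenty-five unknowns demand roughly seventy times the work of six. A back-of-the-envelope sketch (the operation count is the standard textbook estimate, not Zuse’s own figure):

```python
# Rough multiplication count for solving n linear equations in n
# unknowns by Gaussian elimination: about n**3 / 3.

def multiplications(n):
    return n ** 3 // 3

for n in (6, 25):
    print(f"{n:2d} unknowns -> ~{multiplications(n)} multiplications")
# ->  6 unknowns -> ~72 multiplications
# -> 25 unknowns -> ~5208 multiplications
```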

In Zuse’s first version, binary digits were stored by using thin metal plates with slots and pins, which he and his friends made using a jigsaw. At first he used punched paper tape to input data and programs, but he soon switched to discarded 35 mm movie film, which not only was sturdier but happened to be cheaper. His Z1 was completed in 1938, and it was able to clank through a few problems, though not very reliably. All the components had been made by hand, and they tended to jam. He was handicapped by not being at a place like Bell Labs or part of a collaboration like Harvard had with IBM, which would have allowed him to team up with engineers who could have supplemented his talents.

The Z1 did, however, show that the logical concept Zuse had designed would work in theory. A college friend who was helping him, Helmut Schreyer, urged that they make a version using electronic vacuum tubes rather than mechanical switches. Had they done so right away, they would have gone down in history as the first inventors of a working modern computer: binary, electronic, and programmable. But Zuse, as well as the experts he consulted at the technical school, balked at the expense of building a device with close to two thousand vacuum tubes.26

So for the Z2 they decided instead to use electromechanical relay switches, acquired secondhand from the phone company, which were tougher and cheaper, although a lot slower. The result was a computer that used relays for the arithmetic unit. However, the memory unit was mechanical, using movable pins in a metal sheet.

In 1939 Zuse began work on a third model, the Z3, that used electromechanical relays both for the arithmetic unit and for the memory and control units. When it was completed in 1941, it became the first fully working all-purpose, programmable digital computer. Even though it did not have a way to directly handle conditional jumps and branching in the programs, it could theoretically perform as a universal Turing machine. Its major difference from later computers was that it used clunky electromagnetic relays rather than electronic components such as vacuum tubes or transistors.

Zuse’s friend Schreyer went on to write a doctoral thesis, “The Tube Relay and the Techniques of Its Switching,” that advocated using vacuum tubes for a powerful and fast computer. But when he and Zuse proposed it to the German Army in 1942, the commanders said they were confident that they would win the war before the two years it would take to build such a machine.27 They were more interested in making weapons than computers. As a result, Zuse was pulled away from his computer work and sent back to engineering airplanes. In 1943 his computers and designs were destroyed in the Allied bombing of Berlin.

Zuse and Stibitz, working independently, had both come up with employing relay switches to make circuits that could handle binary computations. How did they develop this idea at the same time when war kept their two teams isolated? The answer is partly that advances in technology and theory made the moment ripe. Along with many other innovators, Zuse and Stibitz were familiar with the use of relays in phone circuits, and it made sense to tie that to binary operations of math and logic. Likewise, Shannon, who was also very familiar with phone circuits, made the related theoretical leap that electronic circuits would be able to perform the logical tasks of Boolean algebra. The idea that digital circuits would be the key to computing was quickly becoming clear to researchers almost everywhere, even in isolated places like central Iowa.

JOHN VINCENT ATANASOFF

Far from both Zuse and Stibitz, another inventor was also experimenting with digital circuits in 1937. Toiling in a basement in Iowa, he would make the next historic innovation: building a calculating device that, at least in part, used vacuum tubes. In some ways his machine was less advanced than the others. It wasn’t programmable and multipurpose; instead of being totally electronic, he included some slow mechanical moving elements; and even though he built a model that was able to work in theory, he couldn’t actually get the thing reliably operational. Nevertheless, John Vincent Atanasoff, known to his wife and friends as Vincent, deserves the distinction of being the pioneer who conceived the first partly electronic digital computer, and he did so after he was struck by inspiration during a long impetuous drive one night in December 1937.28

Atanasoff was born in 1903, the eldest of seven children of a Bulgarian immigrant and a woman descended from one of New England’s oldest families. His father worked as an engineer in a New Jersey electric plant run by Thomas Edison, then moved the family to a town in rural Florida south of Tampa. At nine, Vincent helped his father wire their Florida house for electricity, and his father gave him a Dietzgen slide rule. “That slide rule was my meat,” he recalled.29 At an early age, he dove into the study of logarithms with an enthusiasm that seems a bit wacky even as he recounted it in earnest tones: “Can you imagine how a boy of nine, with baseball on his mind, could be transformed by this knowledge? Baseball was reduced to near zero as a stern study was made of logarithms.” Over the summer, he calculated the logarithm of 5 to the base e, then, with his mother’s help (she had once been a math teacher), he learned calculus while still in middle school. His father took him to the phosphate plant where he was an electrical engineer, showing him how the generators worked. Diffident, creative, and brilliant, young Vincent finished high school in two years, getting all A’s in his double load of classes.

At the University of Florida he studied electrical engineering and displayed a practical inclination, spending time in the university’s machine shop and foundry. He also remained fascinated by math and as a freshman studied a proof involving binary arithmetic. Creative and self-confident, he graduated with the highest grade point average of his time. He accepted a fellowship to pursue master’s work in math and physics at Iowa State and, even though he later was admitted to Harvard, stuck with his decision to head up to the corn belt town of Ames.

Atanasoff went on to pursue a doctorate in physics at the University of Wisconsin, where he had the same experience as the other computer pioneers, beginning with Babbage. His work, which was on how helium can be polarized by an electric field, involved tedious calculations. As he struggled to solve the math using a desktop adding machine, he dreamed of ways to invent a calculator that could do more of the work. After returning to Iowa State in 1930 as an assistant professor, he decided that his degrees in electrical engineering, math, and physics had equipped him for the task.

There was a consequence to his decision not to stay at Wisconsin or to go to Harvard or a similar large research university. At Iowa State, where no one else was working on ways to build new calculators, Atanasoff was on his own. He could come up with fresh ideas, but he did not have around him people to serve as sounding boards or to help him overcome theoretical or engineering challenges. Unlike most innovators of the digital age, he was a lone inventor, drawing his inspiration during solo car trips and in discussions with one graduate student assistant. In the end, that would prove to be a drawback.

Atanasoff initially considered building an analog device; his love of slide rules led him to try to devise a supersize version using long strips of film. But he realized that the film would have to be hundreds of yards long in order to solve linear algebraic equations accurately enough to suit his needs. He also built a contraption that could shape a mound of paraffin so that it could calculate a partial differential equation. The limitations of these analog devices caused him to focus instead on creating a digital version.

The first problem he tackled was how to store numbers in a machine. He used the term memory to describe this feature: “At the time, I had only a cursory knowledge of the work of Babbage and so did not know he called the same concept ‘store.’ . . . I like his word, and perhaps if I had known, I would have adopted it; I like ‘memory,’ too, with its analogy to the brain.”30

Atanasoff went through a list of possible memory devices: mechanical pins, electromagnetic relays, a small piece of magnetic material that could be polarized by an electric charge, vacuum tubes, and a small electrical condenser. The fastest would be vacuum tubes, but they were expensive. So he opted instead to use what he called condensers—what we now call capacitors—which are small and inexpensive components that can store, at least briefly, an electrical charge. It was an understandable decision, but it meant that the machine would be sluggish and clunky: the condensers would have to be mounted on a mechanically rotating drum so that their charges could be read and periodically refreshed. Even if the adding and subtracting could be done at electronic speeds, the process of taking numbers in and out of the memory unit would slow things down to the speed of that rotating drum.
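
The trade-off Atanasoff accepted is, in miniature, the one dynamic memory still makes today: a capacitor’s charge leaks away, so every bit must be read and rewritten before it fades. A toy simulation of the idea (all constants invented for illustration):

```python
# Condenser (capacitor) memory: charges decay between drum rotations,
# so each revolution must refresh every bit it passes.

LEAK = 0.8          # fraction of charge surviving one rotation (invented)
THRESHOLD = 0.5     # minimum charge still readable as a 1 (invented)

def rotate(memory, refresh):
    """One drum revolution: charges decay, then are optionally rewritten."""
    decayed = [charge * LEAK for charge in memory]
    if refresh:
        return [1.0 if c > THRESHOLD else 0.0 for c in decayed]
    return decayed

bits = [1.0, 0.0, 1.0, 1.0]
for _ in range(5):
    bits = rotate(bits, refresh=True)
print(bits)         # refreshed each turn: [1.0, 0.0, 1.0, 1.0]

bits = [1.0, 0.0, 1.0, 1.0]
for _ in range(5):
    bits = rotate(bits, refresh=False)
print(bits)         # unrefreshed, the 1s fade to about 0.33
```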

