The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution
Walter Isaacson

Page and Brin pushed to make Google better in two ways. First, they deployed far more bandwidth, processing power, and storage capacity to the task than any rival, revving up their Web crawler so that it was indexing a hundred pages per second. In addition, they were fanatic in studying user behavior so that they could constantly tweak their algorithms. If users clicked on the top result and then didn’t return to the results list, it meant they had gotten what they wanted. But if they did a search and returned right away to revise their query, it meant that they were dissatisfied and the engineers should learn, by looking at the refined search query, what they had been seeking in the first place. Anytime users scrolled to the second or third page of the search results, it was a sign that they were unhappy with the order of results they received. As the journalist Steven Levy pointed out, this feedback loop helped Google learn that when users typed in dogs they also were looking for puppies, and when they typed in boiling they might also be referring to hot water, and eventually Google also learned that when they typed in hot dog they were not looking for boiling puppies.158
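
The logic of that feedback loop can be made concrete with a small sketch. The Python fragment below is purely illustrative and assumes a made-up click-log format rather than anything Google actually used: it labels a search a success when the top result ends the session, treats a quick follow-up query as a hint about what the user wanted in the first place, and flags deep paging as a ranking miss.

```python
# A toy sketch (not Google's pipeline) of the satisfaction signals described
# above. All field names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchEvent:
    query: str
    clicked_rank: Optional[int]        # rank of the clicked result, None if no click
    returned_seconds: Optional[float]  # seconds until the user came back, None if they never did
    deepest_page: int                  # farthest results page the user viewed

def interpret(event: SearchEvent, refined_query: Optional[str]) -> str:
    """Classify one search interaction into a coarse feedback signal."""
    if event.clicked_rank == 1 and event.returned_seconds is None:
        return "satisfied: the top result answered the query"
    if refined_query is not None and (event.returned_seconds or 0) < 30:
        # The refined query hints at what the user was really after the first time.
        return f"dissatisfied: learn that '{event.query}' often means '{refined_query}'"
    if event.deepest_page >= 2:
        return "ranking miss: a good answer was buried past page one"
    return "ambiguous"

print(interpret(SearchEvent("hot dog", clicked_rank=1, returned_seconds=None, deepest_page=1),
                refined_query=None))
```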

One other person came up with a link-based scheme very similar to PageRank: a Chinese engineer named Yanhong (Robin) Li, who studied at SUNY Buffalo and then joined a division of Dow Jones based in New Jersey. In the spring of 1996, just as Page and Brin were creating PageRank, Li came up with an algorithm he dubbed RankDex that determined the value of search results by the number of inbound links to a page and the content of the text that anchored those links. He bought a self-help book on how to patent the idea, and then did so with the help of Dow Jones. But the company did not pursue the idea, so Li moved west to work for Infoseek and then back to China. There he cofounded Baidu, which became that country’s largest search engine and one of Google’s most powerful global competitors.

By early 1998 Page and Brin’s database contained maps of close to 518 million hyperlinks, out of approximately 3 billion by then on the Web. Page was eager that Google not remain just an academic project but would also become a popular product. “It was like Nikola Tesla’s problem,” he said. “You make an invention you think is great, and so you want it to be used by many people as soon as possible.”159

The desire to turn their dissertation topic into a business made Page and Brin reluctant to publish or give formal presentations on what they had done. But their academic advisors kept pushing them to publish something, so in the spring of 1998 they produced a twenty-page paper that managed to explain the academic theories behind PageRank and Google without opening their kimono so wide that it revealed too many secrets to competitors. Titled “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” it was delivered at a conference in Australia in April 1998.

“In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext,” they began.160 By mapping more than a half billion of the Web’s 3 billion links, they were able to calculate a PageRank for at least 25 million Web pages, which “corresponds well with people’s subjective idea of importance.” They detailed the “simple iterative algorithm” that produced PageRanks for every page. “Academic citation literature has been applied to the web, largely by counting citations or backlinks to a given page. This gives some approximation of a page’s importance or quality. PageRank extends this idea by not counting links from all pages equally.”
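
The "simple iterative algorithm" the paper refers to can be sketched in a few lines of Python. This is a toy illustration rather than the production system: the four-page graph is invented, though the damping factor of 0.85 matches the value the paper suggests as typical. Each pass lets every page hand a share of its current rank along its outbound links, which is exactly the sense in which PageRank does "not count links from all pages equally."

```python
# A toy sketch of PageRank's power iteration, not Google's production code.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                         # a dangling page spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:              # each link passes on a share of its page's rank
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```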

The paper included many technical details about ranking, crawling, indexing, and iterating the algorithms. There were also a few paragraphs about useful directions for future research. But by the end, it was clear this was not an academic exercise or purely scholarly pursuit. They were engaged in what would clearly become a commercial enterprise. “Google is designed to be a scalable search engine,” they declared in conclusion. “The primary goal is to provide high quality search results.”

This may have been a problem at universities where research was supposed to be pursued primarily for scholarly purposes, not commercial applications. But Stanford not only permitted students to work on commercial endeavors, it encouraged and facilitated it. There was even an office to assist with the patenting process and licensing arrangements. “We have an environment at Stanford that promotes entrepreneurship and risk-taking research,” President John Hennessy declared. “People really understand here that sometimes the biggest way to deliver an effect to the world is not by writing a paper but by taking technology you believe in and making something of it.”161

Page and Brin began by trying to license their software to other companies, and they met with the CEOs of Yahoo!, Excite, and AltaVista. They asked for a $1 million fee, which was not exorbitant since it would include the rights to their patents as well as the personal services of the two of them. “Those companies were worth hundreds of millions or more at the time,” Page later said. “It wasn’t that significant of an expense to them. But it was a lack of insight at the leadership level. A lot of them told us, ‘Search is not that important.’ ”162

As a result, Page and Brin decided to start a company of their own. It helped that within a few miles of the campus there were successful entrepreneurs to act as angel investors, as well as eager venture capitalists just up Sand Hill Road to provide working capital. David Cheriton, one of their professors at Stanford, had founded an Ethernet product company with one such investor, Andy Bechtolsheim, which they had sold to Cisco Systems. In August 1998 Cheriton suggested to Page and Brin that they meet with Bechtolsheim, who had also cofounded Sun Microsystems. Late one night, Brin sent him an email. He got an instant reply, and early the next morning they all met on Cheriton’s Palo Alto porch.

Even at that unholy hour for students, Page and Brin were able to give a compelling demo of their search engine, showing that they could download, index, and page-rank much of the Web on racks of minicomputers. It was a comfortable meeting at the height of the dotcom boom, and Bechtolsheim’s questions were encouraging. Unlike the scores of pitches that came to him each week, this was not a PowerPoint presentation of some vaporware that didn’t yet exist. He could actually type in queries, and answers popped up instantly that were far better than what AltaVista produced. Plus the two founders were whip smart and intense, the type of entrepreneurs he liked to bet on. Bechtolsheim appreciated that they were not throwing large amounts of money—or any money, for that matter—at marketing. They knew that Google was good enough to spread by word of mouth, so every penny they had went to components for the computers they were assembling themselves. “Other Web sites took a good chunk of venture funding and spent it on advertising,” Bechtolsheim said. “This was the opposite approach. Build something of value and deliver a service compelling enough that people would just use it.”163

Even though Brin and Page were averse to accepting advertising, Bechtolsheim knew that it would be simple—and not corrupting—to put clearly labeled display ads on the search results page. That meant there was an obvious revenue stream waiting to be tapped. “This is the single best idea I have heard in years,” he told them. They talked about valuation for a minute, and Bechtolsheim said they were setting their price too low. “Well, I don’t want to waste time,” he concluded, since he had to get to work. “I’m sure it’ll help you guys if I just write a check.” He went to the car to get his checkbook and wrote one made out to Google Inc. for $100,000. “We don’t have a bank account yet,” Brin told him. “Deposit it when you get one,” Bechtolsheim replied. Then he rode off in his Porsche.

Brin and Page went to Burger King to celebrate. “We thought we should get something that tasted really good, though it was really unhealthy,” Page said. “And it was cheap. It seemed like the right combination of ways to celebrate the funding.”164

Bechtolsheim’s check made out to Google Inc. provided a spur to get themselves incorporated. “We had to quickly get a lawyer,” Brin said.165 Page recalled, “It was like, wow, maybe we really should start a company now.”166 Because of Bechtolsheim’s reputation—and because of the impressive nature of Google’s product—other funders came in, including Amazon’s Jeff Bezos. “I just fell in love with Larry and Sergey,” Bezos declared. “They had a vision. It was a customer-focused point of view.”167 The favorable buzz around Google grew so loud that, a few months later, it was able to pull off the rare feat of getting investments from both of the valley’s rival top venture capital firms, Sequoia Capital and Kleiner Perkins.

Silicon Valley had one other ingredient, in addition to a helpful university and eager mentors and venture capitalists: a lot of garages, like the ones in which Hewlett and Packard designed their first products and Jobs and Wozniak assembled the first Apple I boards. When Page and Brin realized that it was time to put aside plans for dissertations and leave the Stanford nest, they found a garage—a two-car garage, which came with a hot tub and a couple of spare rooms inside the house—that they could rent for $1,700 a month at the Menlo Park house of a Stanford friend, Susan Wojcicki, who soon joined Google. In September 1998, one month after they met with Bechtolsheim, Page and Brin incorporated their company, opened a bank account, and cashed his check. On the wall of the garage they put up a whiteboard emblazoned “Google Worldwide Headquarters.”

In addition to making all of the World Wide Web’s information accessible, Google represented a climactic leap in the relationship between humans and machines—the “man-computer symbiosis” that Licklider had envisioned four decades earlier. Yahoo! had attempted a more primitive version of this symbiosis by using both electronic searches and human-compiled directories. The approach that Page and Brin took might appear, at first glance, to be a way of removing human hands from this formula by having the searches performed by Web crawlers and computer algorithms only. But a deeper look reveals that their approach was in fact a melding of machine and human intelligence. Their algorithm relied on the billions of human judgments made by people when they created links from their own websites. It was an automated way to tap into the wisdom of humans—in other words, a higher form of human-computer symbiosis. “The process might seem completely automated,” Brin explained, “but in terms of how much human input goes into the final product, there are millions of people who spend time designing their webpages, determining who to link to and how, and that human element goes into it.”168

In his seminal 1945 essay “As We May Think,” Vannevar Bush had set forth the challenge: “The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.” In the paper they submitted to Stanford just before they left to launch their company, Brin and Page made the same point: “The number of documents in the indices has been increasing by many orders of magnitude, but the user’s ability to look at documents has not.” Their words were less eloquent than Bush’s, but they had succeeded in fulfilling his dream of a human-machine collaboration to deal with information overload. In doing so, Google became the culmination of a sixty-year process to create a world in which humans, computers, and networks were intimately linked. Anyone could share with people anywhere and, as the Victorian-era almanac promised, enquire within upon everything.

I. Like the Web’s HTTP, Gopher was an Internet (TCP/IP) application layer protocol. It primarily facilitated a menu-based navigation for finding and distributing documents (usually text-based) online. The links were done by the servers rather than embedded in the documents. It was named after the university’s mascot and was also a pun on “go for.”

II. A year later, Andreessen would join with the serially successful entrepreneur Jim Clark to launch a company called Netscape that produced a commercial version of the Mosaic browser.

III. Bitcoin and other cryptocurrencies incorporate mathematically coded encryption techniques and other principles of cryptography to create a secure currency that is not centrally controlled.

IV. In March 2003 blog as both a noun and a verb was admitted into the Oxford English Dictionary.

V. Tellingly, and laudably, Wikipedia’s entries on its own history and the roles of Wales and Sanger have turned out, after much fighting on the discussion boards, to be balanced and objective.

VI. Created by the Byte Shop’s owner Paul Terrell, who had launched the Apple I by ordering the first fifty for his store.

VII. The one written by Bill Gates.

VIII. Gates donated to computer buildings at Harvard, Stanford, MIT, and Carnegie Mellon. The one at Harvard, cofunded with Steve Ballmer, was named Maxwell Dworkin, after their mothers.

IX. The Oxford English Dictionary added google as a verb in 2006.













CHAPTER TWELVE

ADA FOREVER

LADY LOVELACE’S OBJECTION

Ada Lovelace would have been pleased. To the extent that we are permitted to surmise the thoughts of someone who’s been dead for more than 150 years, we can imagine her writing a proud letter boasting about her intuition that calculating devices would someday become general-purpose computers, beautiful machines that can not only manipulate numbers but make music and process words and “combine together general symbols in successions of unlimited variety.”

Machines such as these emerged in the 1950s, and during the subsequent thirty years there were two historic innovations that caused them to revolutionize how we live: microchips allowed computers to become small enough to be personal appliances, and packet-switched networks allowed them to be connected as nodes on a web. This merger of the personal computer and the Internet allowed digital creativity, content sharing, community formation, and social networking to blossom on a mass scale. It made real what Ada called “poetical science,” in which creativity and technology were the warp and woof, like a tapestry from Jacquard’s loom.

Ada might also be justified in boasting that she was correct, at least thus far, in her more controversial contention: that no computer, no matter how powerful, would ever truly be a “thinking” machine. A century after she died, Alan Turing dubbed this “Lady Lovelace’s Objection” and tried to dismiss it by providing an operational definition of a thinking machine—that a person submitting questions could not distinguish the machine from a human—and predicting that a computer would pass this test within a few decades. But it’s now been more than sixty years, and the machines that attempt to fool people on the test are at best engaging in lame conversation tricks rather than actual thinking. Certainly none has cleared Ada’s higher bar of being able to “originate” any thoughts of its own.

Ever since Mary Shelley conceived her Frankenstein tale during a vacation with Ada’s father, Lord Byron, the prospect that a man-made contraption might originate its own thoughts has unnerved generations. The Frankenstein motif became a staple of science fiction. A vivid example was Stanley Kubrick’s 1968 movie, 2001: A Space Odyssey, featuring the frighteningly intelligent computer HAL. With its calm voice, HAL exhibits attributes of a human: the ability to speak, reason, recognize faces, appreciate beauty, show emotion, and (of course) play chess. When HAL appears to malfunction, the human astronauts decide to shut it down. HAL becomes aware of the plan and kills all but one of them. After a lot of heroic struggle, the remaining astronaut gains access to HAL’s cognitive circuits and disconnects them one by one. HAL regresses until, at the end, it intones “Daisy Bell”—an homage to the first computer-generated song, sung by an IBM 704 at Bell Labs in 1961.

Artificial intelligence enthusiasts have long been promising, or threatening, that machines like HAL would soon emerge and prove Ada wrong. Such was the premise of the 1956 conference at Dartmouth organized by John McCarthy and Marvin Minsky, where the field of artificial intelligence was launched. The conferees concluded that a breakthrough was about twenty years away. It wasn’t. Decade after decade, new waves of experts have claimed that artificial intelligence was on the visible horizon, perhaps only twenty years away. Yet it has remained a mirage, always about twenty years away.

John von Neumann was working on the challenge of artificial intelligence shortly before he died in 1957. Having helped devise the architecture of modern digital computers, he realized that the architecture of the human brain is fundamentally different. Digital computers deal in precise units, whereas the brain, to the extent we understand it, is also partly an analog system, which deals with a continuum of possibilities. In other words, a human’s mental process includes many signal pulses and analog waves from different nerves that flow together to produce not just binary yes-no data but also answers such as “maybe” and “probably” and infinite other nuances, including occasional bafflement. Von Neumann suggested that the future of intelligent computing might require abandoning the purely digital approach and creating “mixed procedures” that include a combination of digital and analog methods. “Logic will have to undergo a pseudomorphosis to neurology,” he declared, which, roughly translated, meant that computers were going to have to become more like the human brain.1

In 1958 a Cornell professor, Frank Rosenblatt, attempted to do this by devising a mathematical approach for creating an artificial neural network like that of the brain, which he called a Perceptron. Using weighted statistical inputs, it could, in theory, process visual data. When the Navy, which was funding the work, unveiled the system, it drew the type of press hype that has accompanied many subsequent artificial intelligence claims. “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence,” the New York Times reported. The New Yorker was equally enthusiastic: “The Perceptron, . . . as its name implies, is capable of what amounts to original thought. . . . It strikes us as the first serious rival to the human brain ever devised.”2
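
Stripped of its custom 1950s hardware, Rosenblatt's core idea survives as a few lines of code: a weighted sum, a hard threshold, and weight adjustments made only when the machine guesses wrong. The sketch below is a minimal modern rendering of that rule, trained on the logical OR function as a stand-in for the visual patterns the original Perceptron was meant to classify; the learning rate and number of passes are arbitrary choices, not Rosenblatt's.

```python
# A minimal sketch of the perceptron learning rule: weighted sum, hard
# threshold, and weight updates only on mistakes.
def train_perceptron(samples, labels, learning_rate=0.1, epochs=20):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction          # nonzero only when the guess is wrong
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]                            # logical OR, a linearly separable task
w, b = train_perceptron(samples, labels)
print([(1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) for x in samples])
```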

That was almost sixty years ago. The Perceptron still does not exist.3 Nevertheless, almost every year since then there have been breathless reports about some marvel on the horizon that would replicate and surpass the human brain, many of them using almost the exact same phrases as the 1958 stories about the Perceptron.

Discussion about artificial intelligence flared up a bit, at least in the popular press, after IBM’s Deep Blue, a chess-playing machine, beat the world champion Garry Kasparov in 1997 and then Watson, its natural-language question-answering computer, won at Jeopardy! against champions Brad Rutter and Ken Jennings in 2011. “I think it awakened the entire artificial intelligence community,” said IBM CEO Ginni Rometty.4 But as she was the first to admit, these were not true breakthroughs of humanlike artificial intelligence. Deep Blue won its chess match by brute force; it could evaluate 200 million positions per second and match them against 700,000 past grandmaster games. Deep Blue’s calculations were fundamentally different, most of us would agree, from what we mean by real thinking. “Deep Blue was only intelligent the way your programmable alarm clock is intelligent,” Kasparov said. “Not that losing to a $10 million alarm clock made me feel any better.”5
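
What "brute force" means here can be shown with a toy example. The sketch below is obviously not Deep Blue's code; it exhaustively searches a trivial take-away game (players alternately remove one, two, or three tokens, and whoever takes the last token wins) to decide whether the player to move can force a win. Deep Blue applied the same exhaustive-search idea to chess, with specialized hardware and a far richer evaluation of each position.

```python
# A bare-bones illustration of brute-force game search: explore every future
# position and score it. Nothing here is specific to Deep Blue or to chess.
from functools import lru_cache

@lru_cache(maxsize=None)
def outcome(tokens_left):
    """+1 if the player to move can force a win, -1 if the opponent can."""
    if tokens_left == 0:
        return -1                      # the previous player took the last token and won
    # Try every legal move; if any leaves the opponent in a losing position, we win.
    return max(-outcome(tokens_left - take) for take in (1, 2, 3) if take <= tokens_left)

for n in range(1, 9):
    print(n, "win for the player to move" if outcome(n) == 1 else "loss")
```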

Likewise, Watson won at Jeopardy! by using megadoses of computing power: it had 200 million pages of information in its four terabytes of storage, of which the entire Wikipedia accounted for merely 0.2 percent. It could search the equivalent of a million books per second. It was also rather good at processing colloquial English. Still, no one who watched would bet on its passing the Turing Test. In fact, the IBM team leaders were afraid that the show’s writers might try to turn the game into a Turing Test by composing questions designed to trick a machine, so they insisted that only old questions from unaired contests be used. Nevertheless, the machine tripped up in ways that showed it wasn’t human. For example, one question was about the “anatomical oddity” of the former Olympic gymnast George Eyser. Watson answered, “What is a leg?” The correct answer was that Eyser was missing a leg. The problem was understanding oddity, explained David Ferrucci, who ran the Watson project at IBM. “The computer wouldn’t know that a missing leg is odder than anything else.”6

John Searle, the Berkeley philosophy professor who devised the “Chinese room” rebuttal to the Turing Test, scoffed at the notion that Watson represented even a glimmer of artificial intelligence. “Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything,” Searle contended. “IBM’s computer was not and could not have been designed to understand. Rather, it was designed to simulate understanding, to act as if it understood.”7

Even the IBM folks agreed with that. They never held Watson out to be an “intelligent” machine. “Computers today are brilliant idiots,” said the company’s director of research, John E. Kelly III, after the Deep Blue and Watson victories. “They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans.”8

Rather than demonstrating that machines are getting close to artificial intelligence, Deep Blue and Watson actually indicated the contrary. “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence,” argued Professor Tomaso Poggio, director of the Center for Brains, Minds, and Machines at MIT. “We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”9

Douglas Hofstadter, a professor at Indiana University, combined the arts and sciences in his unexpected 1979 best seller, Gödel, Escher, Bach. He believed that the only way to achieve meaningful artificial intelligence was to understand how human imagination worked. His approach was pretty much abandoned in the 1990s, when researchers found it more cost-effective to tackle complex tasks by throwing massive processing power at huge amounts of data, the way Deep Blue played chess.10

This approach produced a peculiarity: computers can do some of the toughest tasks in the world (assessing billions of possible chess positions, finding correlations in hundreds of Wikipedia-size information repositories), but they cannot perform some of the tasks that seem most simple to us mere humans. Ask Google a hard question like “What is the depth of the Red Sea?” and it will instantly respond, “7,254 feet,” something even your smartest friends don’t know. Ask it an easy one like “Can a crocodile play basketball?” and it will have no clue, even though a toddler could tell you, after a bit of giggling.11

At Applied Minds near Los Angeles, you can get an exciting look at how a robot is being programmed to maneuver, but it soon becomes apparent that it still has trouble navigating an unfamiliar room, picking up a crayon, and writing its name. A visit to Nuance Communications near Boston shows the wondrous advances in speech-recognition technologies that underpin Siri and other systems, but it’s also apparent to anyone using Siri that you still can’t have a truly meaningful conversation with a computer, except in a fantasy movie. At the Computer Science and Artificial Intelligence Laboratory of MIT, interesting work is being done on getting computers to perceive objects visually, but even though the machine can discern pictures of a girl with a cup, a boy at a water fountain, and a cat lapping up cream, it cannot do the simple abstract thinking required to figure out that they are all engaged in the same activity: drinking. A visit to the New York City police command system in Manhattan reveals how computers scan thousands of feeds from surveillance cameras as part of a Domain Awareness System, but the system still cannot reliably identify your mother’s face in a crowd.

All of these tasks have one thing in common: even a four-year-old can do them. “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard,” according to Steven Pinker, the Harvard cognitive scientist.12 As the futurist Hans Moravec and others have noted, this paradox stems from the fact that the computational resources needed to recognize a visual or verbal pattern are huge.

Moravec’s paradox reinforces von Neumann’s observations from a half century ago about how the carbon-based chemistry of the human brain works differently from the silicon-based binary logic circuits of a computer. Wetware is different from hardware. The human brain not only combines analog and digital processes, it also is a distributed system, like the Internet, rather than a centralized one, like a computer. A computer’s central processing unit can execute instructions much faster than a brain’s neuron can fire. “Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs,” according to Stuart Russell and Peter Norvig, authors of the foremost textbook on artificial intelligence.13

So why not make a computer that mimics the processes of the human brain? “Eventually we’ll be able to sequence the human genome and replicate how nature did intelligence in a carbon-based system,” Bill Gates speculates. “It’s like reverse-engineering someone else’s product in order to solve a challenge.”14 That won’t be easy. It took scientists forty years to map the neurological activity of the one-millimeter-long roundworm, which has 302 neurons and 8,000 synapses.I The human brain has 86 billion neurons and up to 150 trillion synapses.15

At the end of 2013, the New York Times reported on “a development that is about to turn the digital world on its head” and “make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control.” The phrases were reminiscent of those used in its 1958 story on the Perceptron (“will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”). Once again, the strategy was to replicate the way the human brain’s neural networks operate. As the Times explained, “the new computing approach is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information.”16 IBM and Qualcomm each disclosed plans to build “neuromorphic,” or brainlike, computer processors, and a European research consortium called the Human Brain Project announced that it had built a neuromorphic microchip that incorporated “fifty million plastic synapses and 200,000 biologically realistic neuron models on a single 8-inch silicon wafer.”17

Perhaps this latest round of reports does in fact mean that, in a few more decades, there will be machines that think like humans. “We are continually looking at the list of things machines cannot do—play chess, drive a car, translate language—and then checking them off the list when machines become capable of these things,” said Tim Berners-Lee. “Someday we will get to the end of the list.”18

These latest advances may even lead to the singularity, a term that von Neumann coined and the futurist Ray Kurzweil and the science fiction writer Vernor Vinge popularized, which is sometimes used to describe the moment when computers are not only smarter than humans but also can design themselves to be even supersmarter, and will thus no longer need us mortals. Vinge says this will occur by 2030.19

On the other hand, these latest stories might turn out to be like the similarly phrased ones from the 1950s, glimpses of a receding mirage. True artificial intelligence may take a few more generations or even a few more centuries. We can leave that debate to the futurists. Indeed, depending on your definition of consciousness, it may never happen. We can leave that debate to the philosophers and theologians. “Human ingenuity,” wrote Leonardo da Vinci, whose Vitruvian Man became the ultimate symbol of the intersection of art and science, “will never devise any inventions more beautiful, nor more simple, nor more to the purpose than Nature does.”

There is, however, yet another possibility, one that Ada Lovelace would like, which is based on the half century of computer development in the tradition of Vannevar Bush, J. C. R. Licklider, and Doug Engelbart.

HUMAN-COMPUTER SYMBIOSIS: “WATSON, COME HERE”

“The Analytical Engine has no pretensions whatever to originate anything,” Ada Lovelace declared. “It can do whatever we know how to order it to perform.” In her mind, machines would not replace humans but instead become their partners. What humans would bring to this relationship, she said, was originality and creativity.

