The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution
Author: Walter Isaacson




This was the idea behind an alternative to the quest for pure artificial intelligence: pursuing instead the augmented intelligence that occurs when machines become partners with people. The strategy of combining computer and human capabilities, of creating a human-computer symbiosis, turned out to be more fruitful than the pursuit of machines that could think on their own.

Licklider helped chart that course back in 1960 in his paper “Man-Computer Symbiosis,” which proclaimed: “Human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”20 His ideas built on the memex personal computer that Vannevar Bush had imagined in his 1945 essay, “As We May Think.” Licklider also drew on his work designing the SAGE air defense system, which required an intimate collaboration between humans and machines.

The Bush-Licklider approach was given a friendly interface by Engelbart, who in 1968 demonstrated a networked computer system with an intuitive graphical display and a mouse. In a manifesto titled “Augmenting Human Intellect,” he echoed Licklider. The goal, Engelbart wrote, should be to create “an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with . . . high-powered electronic aids.” Richard Brautigan, in his poem “All Watched Over by Machines of Loving Grace,” expressed that dream a bit more lyrically: “a cybernetic meadow / where mammals and computers / live together in mutually / programming harmony.”

The teams that built Deep Blue and Watson have adopted this symbiosis approach rather than pursue the objective of the artificial intelligence purists. “The goal is not to replicate human brains,” says John Kelly, the director of IBM Research. Echoing Licklider, he adds, “This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership.”21

An example of the power of this human-computer symbiosis arose from a realization that struck Kasparov after he was beaten by Deep Blue. Even in a rule-defined game such as chess, he came to believe, “what computers are good at is where humans are weak, and vice versa.” That gave him an idea for an experiment: “What if instead of human versus machine we played as partners?” When he and another grandmaster tried that, it created the symbiosis that Licklider had envisioned. “We could concentrate on strategic planning instead of spending so much time on calculations,” Kasparov said. “Human creativity was even more paramount under these conditions.”

A tournament along these lines was held in 2005. Players could work in teams with computers of their choice. Many grandmasters entered the fray, as did the most advanced computers. But neither the best grandmaster nor the most powerful computer won. Symbiosis did. “The teams of human plus machine dominated even the strongest computers,” Kasparov noted. “Human strategic guidance combined with the tactical acuity of a computer was overwhelming.” The final winner was neither a grandmaster nor a state-of-the-art computer, nor even a combination of both, but two American amateurs who used three computers at the same time and knew how to manage the process of collaborating with their machines. “Their skill at manipulating and coaching their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants,” according to Kasparov.22

In other words, the future might belong to people who can best partner and collaborate with computers.

In a similar fashion, IBM decided that the best use of Watson, the Jeopardy!-playing computer, would be for it to collaborate with humans rather than try to top them. One project involved using the machine to work in partnership with doctors on cancer treatment plans. “The Jeopardy! challenge pitted man against machine,” said IBM’s Kelly. “With Watson and medicine, man and machine are taking on a challenge together—and going beyond what either could do on its own.”23 The Watson system was fed more than 2 million pages from medical journals and 600,000 pieces of clinical evidence, and could search up to 1.5 million patient records. When a doctor put in a patient’s symptoms and vital information, the computer provided a list of recommendations ranked in order of its confidence.24

In order to be useful, the IBM team realized, the machine needed to interact with human doctors in a manner that made collaboration pleasant. David McQueeney, the vice president of software at IBM Research, described programming a pretense of humility into the machine: “Our early experience was with wary physicians who resisted by saying, ‘I’m licensed to practice medicine, and I’m not going to have a computer tell me what to do.’ So we reprogrammed our system to come across as humble and say, ‘Here’s the percentage likelihood that this is useful to you, and here you can look for yourself.’ ” Doctors were delighted, saying that it felt like a conversation with a knowledgeable colleague. “We aim to combine human talents, such as our intuition, with the strengths of a machine, such as its infinite breadth,” said McQueeney. “That combination is magic, because each offers a piece that the other one doesn’t have.”25
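
To make that workflow concrete, here is a minimal, purely illustrative sketch of a confidence-ranked recommendation list of the kind described above, presented in the “humble, look for yourself” spirit the IBM team describes. Every name, score, and data structure in it is an assumption invented for illustration; this is not IBM’s Watson API or its actual oncology system.

    # Illustrative sketch only: invented data model and scores, not IBM's Watson API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Recommendation:
        treatment: str     # hypothetical treatment name
        confidence: float  # estimated likelihood of usefulness, 0.0 to 1.0
        evidence: List[str] = field(default_factory=list)  # citations the doctor can inspect

    def rank_by_confidence(candidates: List[Recommendation]) -> List[Recommendation]:
        """Sort candidate treatments from highest to lowest confidence."""
        return sorted(candidates, key=lambda r: r.confidence, reverse=True)

    candidates = [
        Recommendation("Regimen A", 0.62, ["Journal article X (hypothetical)"]),
        Recommendation("Regimen B", 0.87, ["Trial Y (hypothetical)", "Guideline Z (hypothetical)"]),
    ]
    for rec in rank_by_confidence(candidates):
        # Present a percentage likelihood and the supporting evidence,
        # leaving the judgment call to the physician.
        print(f"{rec.treatment}: {rec.confidence:.0%} likely to be useful; see {rec.evidence}")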

That was one of the aspects of Watson that impressed Ginni Rometty, an engineer with a background in artificial intelligence who took over as CEO of IBM at the beginning of 2012. “I watched Watson interact in a collegial way with the doctors,” she said. “It was the clearest testament of how machines can truly be partners with humans rather than try to replace them. I feel strongly about that.”26 She was so impressed that she decided to launch a new IBM division based on Watson. It was given a $1 billion investment and a new headquarters in the Silicon Alley area near Manhattan’s Greenwich Village. Its mission was to commercialize “cognitive computing,” meaning computing systems that can take data analysis to the next level by teaching themselves to complement the thinking skills of the human brain. Instead of giving the new division a technical name, Rometty simply called it Watson. It was in honor of Thomas Watson Sr., the IBM founder who ran the company for more than forty years, but it also evoked Sherlock Holmes’s companion Dr. John (“Elementary, my dear”) Watson and Alexander Graham Bell’s assistant Thomas (“Come here, I want to see you”) Watson. Thus the name helped to convey that Watson the computer should be seen as a collaborator and companion, not a threat like 2001’s HAL.

Watson was a harbinger of a third wave of computing, one that blurred the line between augmented human intelligence and artificial intelligence. “The first generation of computers were machines that counted and tabulated,” Rometty says, harking back to IBM’s roots in Herman Hollerith’s punch-card tabulators used for the 1890 census. “The second generation involved programmable machines that used the von Neumann architecture. You had to tell them what to do.” Beginning with Ada Lovelace, people wrote algorithms that instructed these computers, step by step, how to perform tasks. “Because of the proliferation of data,” Rometty adds, “there is no choice but to have a third generation, which are systems that are not programmed, they learn.”27

But even as this occurs, the process could remain one of partnership and symbiosis with humans rather than one designed to relegate humans to the dustbin of history. Larry Norton, a breast cancer specialist at New York’s Memorial Sloan-Kettering Cancer Center, was part of the team that worked with Watson. “Computer science is going to evolve rapidly, and medicine will evolve with it,” he said. “This is coevolution. We’ll help each other.”28

This belief that machines and humans will get smarter together rests on a process that Doug Engelbart called “bootstrapping” and “coevolution.”29 It raises an interesting prospect: perhaps no matter how fast computers progress, artificial intelligence may never outstrip the intelligence of the human-machine partnership.

Let us assume, for example, that a machine someday exhibits all of the mental capabilities of a human: giving the outward appearance of recognizing patterns, perceiving emotions, appreciating beauty, creating art, having desires, forming moral values, and pursuing goals. Such a machine might be able to pass a Turing Test. It might even pass what we could call the Ada Test, which is that it could appear to “originate” its own thoughts that go beyond what we humans program it to do.

There would, however, be still another hurdle before we could say that artificial intelligence has triumphed over augmented intelligence. We can call it the Licklider Test. It would go beyond asking whether a machine could replicate all the components of human intelligence to ask whether the machine accomplishes these tasks better when whirring away completely on its own or when working in conjunction with humans. In other words, is it possible that humans and machines working in partnership will be indefinitely more powerful than an artificial intelligence machine working alone?

If so, then “man-computer symbiosis,” as Licklider called it, will remain triumphant. Artificial intelligence need not be the holy grail of computing. The goal instead could be to find ways to optimize the collaboration between human and machine capabilities—to forge a partnership in which we let the machines do what they do best, and they let us do what we do best.

SOME LESSONS FROM THE JOURNEY

Like all historical narratives, the story of the innovations that created the digital age has many strands. So what lessons, in addition to the power of human-machine symbiosis just discussed, might be drawn from the tale?

First and foremost is that creativity is a collaborative process. Innovation comes from teams more often than from the lightbulb moments of lone geniuses. This was true of every era of creative ferment. The Scientific Revolution, the Enlightenment, and the Industrial Revolution all had their institutions for collaborative work and their networks for sharing ideas. But to an even greater extent, this has been true of the digital age. As brilliant as the many inventors of the Internet and computer were, they achieved most of their advances through teamwork. Like Robert Noyce, some of the best of them tended to resemble Congregational ministers rather than lonely prophets, madrigal singers rather than soloists.

Twitter, for example, was invented by a team of people who were collaborative but also quite contentious. When one of the cofounders, Jack Dorsey, started taking a lot of the credit in media interviews, another cofounder, Evan Williams, a serial entrepreneur who had previously created Blogger, told him to chill out, according to Nick Bilton of the New York Times. “But I invented Twitter,” Dorsey said.

“No, you didn’t invent Twitter,” Williams replied. “I didn’t invent Twitter either. Neither did Biz [Stone, another cofounder]. People don’t invent things on the Internet. They simply expand on an idea that already exists.”30

Therein lies another lesson: the digital age may seem revolutionary, but it was based on expanding the ideas handed down from previous generations. The collaboration was not merely among contemporaries, but also between generations. The best innovators were those who understood the trajectory of technological change and took the baton from innovators who preceded them. Steve Jobs built on the work of Alan Kay, who built on Doug Engelbart, who built on J. C. R. Licklider and Vannevar Bush. When Howard Aiken was devising his digital computer at Harvard, he was inspired by a fragment of Charles Babbage’s Difference Engine that he found, and he made his crew members read Ada Lovelace’s “Notes.”

The most productive teams were those that brought together people with a wide array of specialties. Bell Labs was a classic example. In its long corridors in suburban New Jersey, there were theoretical physicists, experimentalists, materials scientists, engineers, a few businessmen, and even some telephone-pole climbers with grease under their fingernails. Walter Brattain, an experimentalist, and John Bardeen, a theorist, shared a workspace, like a librettist and a composer sharing a piano bench, so they could perform a call-and-response all day about how to make what became the first transistor.

Even though the Internet provided a tool for virtual and distant collaborations, another lesson of digital-age innovation is that, now as in the past, physical proximity is beneficial. There is something special, as evidenced at Bell Labs, about meetings in the flesh, which cannot be replicated digitally. The founders of Intel created a sprawling, team-oriented open workspace where employees from Noyce on down all rubbed against one another. It was a model that became common in Silicon Valley. Predictions that digital tools would allow workers to telecommute were never fully realized. One of Marissa Mayer’s first acts as CEO of Yahoo! was to discourage the practice of working from home, rightly pointing out that “people are more collaborative and innovative when they’re together.” When Steve Jobs designed a new headquarters for Pixar, he obsessed over ways to structure the atrium, and even where to locate the bathrooms, so that serendipitous personal encounters would occur. Among his last creations was the plan for Apple’s new signature headquarters, a circle with rings of open workspaces surrounding a central courtyard.

Throughout history the best leadership has come from teams that combined people with complementary styles. That was the case with the founding of the United States. The leaders included an icon of rectitude, George Washington; brilliant thinkers such as Thomas Jefferson and James Madison; men of vision and passion, including Samuel and John Adams; and a sage conciliator, Benjamin Franklin. Likewise, the founders of the ARPANET included visionaries such as Licklider, crisp decision-making engineers such as Larry Roberts, politically adroit people handlers such as Bob Taylor, and collaborative oarsmen such as Steve Crocker and Vint Cerf.

Another key to fielding a great team is pairing visionaries, who can generate ideas, with operating managers, who can execute them. Visions without execution are hallucinations.31 Robert Noyce and Gordon Moore were both visionaries, which is why it was important that their first hire at Intel was Andy Grove, who knew how to impose crisp management procedures, force people to focus, and get things done.

Visionaries who lack such teams around them often go down in history as merely footnotes. There is a lingering historical debate over who most deserves to be dubbed the inventor of the electronic digital computer: John Atanasoff, a professor who worked almost alone at Iowa State, or the team led by John Mauchly and Presper Eckert at the University of Pennsylvania. In this book I give more credit to members of the latter group, partly because they were able to get their machine, ENIAC, up and running and solving problems. They did so with the help of dozens of engineers and mechanics plus a cadre of women who handled programming duties. Atanasoff’s machine, by contrast, never fully worked, partly because there was no team to help him figure out how to make his punch-card burner operate. It ended up being consigned to a basement, then discarded when no one could remember exactly what it was.

Like the computer, the ARPANET and Internet were designed by collaborative teams. Decisions were made through a process, begun by a deferential graduate student, of sending around proposals as “Requests for Comments.” That led to a weblike packet-switched network, with no central authority or hubs, in which power was fully distributed to every one of the nodes, each having the ability to create and share content and route around attempts to impose controls. A collaborative process thus produced a system designed to facilitate collaboration. The Internet was imprinted with the DNA of its creators.

The Internet facilitated collaboration not only within teams but also among crowds of people who didn’t know each other. This is the advance that is closest to being revolutionary. Networks for collaboration have existed ever since the Persians and Assyrians invented postal systems. But never before has it been easy to solicit and collate contributions from thousands or millions of unknown collaborators. This led to innovative systems—Google page ranks, Wikipedia entries, the Firefox browser, the GNU/Linux software—based on the collective wisdom of crowds.

There were three ways that teams were put together in the digital age. The first was through government funding and coordination. That’s how the groups that built the original computers (Colossus, ENIAC) and networks (ARPANET) were organized. This reflected the consensus, which was stronger back in the 1950s under President Eisenhower, that the government should undertake projects, such as the space program and interstate highway system, that benefited the common good. It often did so in collaboration with universities and private contractors as part of a government-academic-industrial triangle that Vannevar Bush and others fostered. Talented federal bureaucrats (not always an oxymoron), such as Licklider, Taylor, and Roberts, oversaw the programs and allocated public funds.

Private enterprise was another way that collaborative teams were formed. This happened at the research centers of big companies, such as Bell Labs and Xerox PARC, and at entrepreneurial new companies, such as Texas Instruments and Intel, Atari and Google, Microsoft and Apple. A key driver was profits, both as a reward for the players and as a way to attract investors. That required a proprietary attitude to innovation that led to patents and intellectual property protections. Digital theorists and hackers often disparaged this approach, but a private enterprise system that financially rewarded invention was a component of a system that led to breathtaking innovation in transistors, chips, computers, phones, devices, and Web services.

Throughout history, there has been a third way, in addition to government and private enterprises, that collaborative creativity has been organized: through peers freely sharing ideas and making contributions as part of a voluntary common endeavor. Many of the advances that created the Internet and its services occurred in this fashion, which the Harvard scholar Yochai Benkler has labeled “commons-based peer production.”32 The Internet allowed this form of collaboration to be practiced on a much larger scale than before. The building of Wikipedia and the Web were good examples, along with the creation of free and open-source software such as Linux and GNU, OpenOffice and Firefox. As the technology journalist Steven Johnson has noted, “their open architecture allows others to build more easily on top of existing ideas, just as Berners-Lee built the Web on top of the Internet.”33 This commons-based production by peer networks was driven not by financial incentives but by other forms of reward and satisfaction.

The values of commons-based sharing and of private enterprise often conflict, most notably over the extent to which innovations should be patent-protected. The commons crowd had its roots in the hacker ethic that emanated from the MIT Tech Model Railroad Club and the Homebrew Computer Club. Steve Wozniak was an exemplar. He went to Homebrew meetings to show off the computer circuit he built, and he handed out freely the schematics so that others could use and improve it. But his neighborhood pal Steve Jobs, who began accompanying him to the meetings, convinced him that they should quit sharing the invention and instead build and sell it. Thus Apple was born, and for the subsequent forty years it has been at the forefront of aggressively patenting and profiting from its innovations. The instincts of both Steves were useful in creating the digital age. Innovation is most vibrant in the realms where open-source systems compete with proprietary ones.

Sometimes people advocate one of these modes of production over the others based on ideological sentiments. They prefer a greater government role, or exalt private enterprise, or romanticize peer sharing. In the 2012 election, President Barack Obama stirred up controversy by saying to people who owned businesses, “You didn’t build that.” His critics saw it as a denigration of the role of private enterprise. Obama’s point was that any business benefits from government and peer-based community support: “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges.” It was not the most elegant way for him to dispel the fantasy that he was a closet socialist, but it did point to a lesson of modern economics that applies to digital-age innovation: that a combination of all of these ways of organizing production—governmental, market, and peer sharing—is stronger than favoring any one of them.

None of this is new. Babbage got most of his funding from the British government, which was generous in financing research that could strengthen its economy and empire. He adopted ideas from private industry, most notably the punch cards that had been developed by the textile firms for automated looms. He and his friends were founders of a handful of new peer-network clubs, including the British Association for the Advancement of Science, and though it may seem a stretch to view that august group as a fancy-dress forerunner to the Homebrew Computer Club, both existed to facilitate commons-based peer collaboration and the sharing of ideas.

The most successful endeavors in the digital age were those run by leaders who fostered collaboration while also providing a clear vision. Too often these are seen as conflicting traits: a leader is either very inclusive or a passionate visionary. But the best leaders could be both. Robert Noyce was a good example. He and Gordon Moore drove Intel forward based on a sharp vision of where semiconductor technology was heading, and they both were collegial and nonauthoritarian to a fault. Even Steve Jobs and Bill Gates, with all of their prickly intensity, knew how to build strong teams around them and inspire loyalty.

Brilliant individuals who could not collaborate tended to fail. Shockley Semiconductor disintegrated. Similarly, collaborative groups that lacked passionate and willful visionaries also failed. After inventing the transistor, Bell Labs went adrift. So did Apple after Jobs was ousted in 1985.

Most of the successful innovators and entrepreneurs in this book had one thing in common: they were product people. They cared about, and deeply understood, the engineering and design. They were not primarily marketers or salesmen or financial types; when such folks took over companies, it was often to the detriment of sustained innovation. “When the sales guys run the company, the product guys don’t matter so much, and a lot of them just turn off,” Jobs said. Larry Page felt the same: “The best leaders are those with the deepest understanding of the engineering and product design.”34

Another lesson of the digital age is as old as Aristotle: “Man is a social animal.” What else could explain CB and ham radios or their successors, such as WhatsApp and Twitter? Almost every digital tool, whether designed for it or not, was commandeered by humans for a social purpose: to create communities, facilitate communication, collaborate on projects, and enable social networking. Even the personal computer, which was originally embraced as a tool for individual creativity, inevitably led to the rise of modems, online services, and eventually Facebook, Flickr, and Foursquare.

Machines, by contrast, are not social animals. They don’t join Facebook of their own volition nor seek companionship for its own sake. When Alan Turing asserted that machines would someday behave like humans, his critics countered that they would never be able to show affection or crave intimacy. To indulge Turing, perhaps we could program a machine to feign affection and pretend to seek intimacy, just as humans sometimes do. But Turing, more than almost anyone, would probably know the difference.

According to the second part of Aristotle’s quote, the nonsocial nature of computers suggests that they are “either a beast or a god.” Actually, they are neither. Despite all of the proclamations of artificial intelligence engineers and Internet sociologists, digital tools have no personalities, intentions, or desires. They are what we make of them.

ADA’S LASTING LESSON: POETICAL SCIENCE

That leads to a final lesson, one that takes us back to Ada Lovelace. As she pointed out, in our symbiosis with machines we humans have brought one crucial element to the partnership: creativity. The history of the digital age—from Bush to Licklider to Engelbart to Jobs, from SAGE to Google to Wikipedia to Watson—has reinforced this idea. And as long as we remain a creative species, this is likely to hold true. “The machines will be more rational and analytic,” IBM’s research director John Kelly says. “People will provide judgment, intuition, empathy, a moral compass, and human creativity.”35

We humans can remain relevant in an era of cognitive computing because we are able to think different, something that an algorithm, almost by definition, can’t master. We possess an imagination that, as Ada said, “brings together things, facts, ideas, conceptions in new, original, endless, ever-varying combinations.” We discern patterns and appreciate their beauty. We weave information into narratives. We are storytelling as well as social animals.

Human creativity involves values, intentions, aesthetic judgments, emotions, personal consciousness, and a moral sense. These are what the arts and humanities teach us—and why those realms are as valuable a part of education as science, technology, engineering, and math. If we mortals are to uphold our end of the human-computer symbiosis, if we are to retain a role as the creative partners of our machines, we must continue to nurture the wellsprings of our imagination and originality and humanity. That is what we bring to the party.

At his product launches, Steve Jobs would conclude with a slide, projected on the screen behind him, of street signs showing the intersection of the Liberal Arts and Technology. At his last such appearance, for the iPad 2 in 2011, he stood in front of that image and declared, “It’s in Apple’s DNA that technology alone is not enough—that it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.” That’s what made him the most creative technology innovator of our era.

The converse to this paean to the humanities, however, is also true. People who love the arts and humanities should endeavor to appreciate the beauties of math and physics, just as Ada did. Otherwise, they will be left as bystanders at the intersection of arts and science, where most digital-age creativity will occur. They will surrender control of that territory to the engineers.

Many people who celebrate the arts and the humanities, who applaud vigorously the tributes to their importance in our schools, will proclaim without shame (and sometimes even joke) that they don’t understand math or physics. They extoll the virtues of learning Latin, but they are clueless about how to write an algorithm or tell BASIC from C++, Python from Pascal. They consider people who don’t know Hamlet from Macbeth to be Philistines, yet they might merrily admit that they don’t know the difference between a gene and a chromosome, or a transistor and a capacitor, or an integral and a differential equation. These concepts may seem difficult. Yes, but so, too, is Hamlet. And like Hamlet, each of these concepts is beautiful. Like an elegant mathematical equation, they are expressions of the glories of the universe.

C. P. Snow was right about the need to respect both of “the two cultures,” science and the humanities. But even more important today is understanding how they intersect. Those who helped lead the technology revolution were people in the tradition of Ada, who could combine science and the humanities. From her father came a poetic streak and from her mother a mathematical one, and together they instilled in her a love for what she called “poetical science.” Her father defended the Luddites who smashed mechanical looms, but Ada loved how punch cards instructed those looms to weave beautiful patterns, and she envisioned how this wondrous combination of art and technology could be manifest in computers.

The next phase of the Digital Revolution will bring even more new methods of marrying technology with the creative industries, such as media, fashion, music, entertainment, education, literature, and the arts. Much of the first round of innovation involved pouring old wine—books, newspapers, opinion pieces, journals, songs, television shows, movies—into new digital bottles. But new platforms, services, and social networks are increasingly enabling fresh opportunities for individual imagination and collaborative creativity. Role-playing games and interactive plays are merging with collaborative forms of storytelling and augmented realities. This interplay between technology and the arts will eventually result in completely new forms of expression and formats of media.

