The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution
by Walter Isaacson
“Greed is never good,” Torvalds declared. His approach helped turn him into a folk hero, suitable for veneration at conferences and on magazine covers as the anti-Gates. Charmingly, he was self-aware enough to know that he relished such acclaim and that this made him a little bit more egotistical than his admirers realized. “I’ve never been the selfless, ego-free, techno-lovechild the hallucinating press insists I am,” he admitted.135
Torvalds decided to use the GNU General Public License, not because he fully embraced the free-sharing ideology of Stallman (or for that matter his own parents) but because he thought that letting hackers around the world get their hands on the source code would lead to an open collaborative effort that would make it a truly awesome piece of software. “My reasons for putting Linux out there were pretty selfish,” he said. “I didn’t want the headache of trying to deal with parts of the operating system that I saw as the crap work. I wanted help.”136
His instinct was right. His release of his Linux kernel led to a tsunami of peer-to-peer volunteer collaboration that became a model of the shared production that propelled digital-age innovation.137 By the fall of 1992, a year after its release, Linux’s newsgroup on the Internet had tens of thousands of users. Selfless collaborators added improvements such as a Windows-like graphical interface and tools to facilitate the networking of computers. Whenever there was a bug, someone somewhere stepped in to fix it. In his book The Cathedral and the Bazaar, Eric Raymond, one of the seminal theorists of the open software movement, propounded what he called “Linus’s Law”: “Given enough eyeballs, all bugs are shallow.”138
Peer-to-peer sharing and commons-based collaboration were nothing new. An entire field of evolutionary biology has arisen around the question of why humans, and members of some other species, cooperate in what seem to be altruistic ways. The tradition of forming voluntary associations, found in all societies, was especially strong in early America, evidenced in cooperative ventures ranging from quilting bees to barn raisings. “In no country in the world has the principle of association been more successfully used, or more unsparingly applied to a multitude of different objects, than in America,” Alexis de Tocqueville wrote.139 Benjamin Franklin in his Autobiography propounded an entire civic creed, with the motto “To pour forth benefits for the common good is divine,” to explain his formation of voluntary associations to create a hospital, militia, street-sweeping corps, fire brigade, lending library, night-watch patrol, and many other community endeavors.
The hacker corps that grew up around GNU and Linux showed that emotional incentives, beyond financial rewards, can motivate voluntary collaboration. “Money is not the greatest of motivators,” Torvalds said. “Folks do their best work when they are driven by passion. When they are having fun. This is as true for playwrights and sculptors and entrepreneurs as it is for software engineers.” There is also, intended or not, some self-interest involved. “Hackers are also motivated, in large part, by the esteem they can gain in the eyes of their peers by making solid contributions. . . . Everybody wants to impress their peers, improve their reputation, elevate their social status. Open source development gives programmers the chance.”
Gates’s “Letter to Hobbyists,” complaining about the unauthorized sharing of Microsoft BASIC, asked in a chiding way, “Who can afford to do professional work for nothing?” Torvalds found that an odd outlook. He and Gates were from two very different cultures, the communist-tinged radical academia of Helsinki versus the corporate elite of Seattle. Gates may have ended up with the bigger house, but Torvalds reaped antiestablishment adulation. “Journalists seemed to love the fact that, while Gates lived in a high-tech lakeside mansion, I was tripping over my daughter’s playthings in a three-bedroom ranch house with bad plumbing in boring Santa Clara,” he said with ironic self-awareness. “And that I drove a boring Pontiac. And answered my own phone. Who wouldn’t love me?”
Torvalds was able to master the digital-age art of being an accepted leader of a massive, decentralized, nonhierarchical collaboration, something that Jimmy Wales at Wikipedia was doing at around the same time. The first rule for such a situation is to make decisions like an engineer, based on technical merit rather than personal considerations. “It was a way of getting people to trust me,” Torvalds explained. “When people trust you, they take your advice.” He also realized that leaders in a voluntary collaborative have to encourage others to follow their passion, not boss them around. “The best and most effective way to lead is by letting people do things because they want to do them, not because you want them to.” Such a leader knows how to empower groups to self-organize. When it’s done right, a governance structure by consensus naturally emerges, as happened both with Linux and Wikipedia. “What astonishes so many people is that the open source model actually works,” Torvalds said. “People know who has been active and who they can trust, and it just happens. No voting. No orders. No recounts.”140
The combination of GNU with Linux represented, at least in concept, the triumph of Richard Stallman’s crusade. But moral prophets rarely indulge in victory celebrations. Stallman was a purist. Torvalds wasn’t. The Linux kernel he eventually distributed contained some binary blobs with proprietary features. That could be remedied; indeed Stallman’s Free Software Foundation created a version that was completely free and nonproprietary. But there was a deeper and more emotional issue for Stallman. He complained that referring to the operating system as “Linux,” which almost everybody did, was misleading. Linux was the name of the kernel. The system as a whole should be called GNU/Linux, he insisted, sometimes angrily. One person who was at a software expo recounted how Stallman had reacted when a nervous fourteen-year-old boy asked him about Linux. “You ripped into that boy and tore him a brand new asshole, and I watched as his face fell and his devotion to you and our cause crumpled in a heap,” the onlooker later berated Stallman.141
Stallman also insisted that the goal should be to create what he called free software, a phrase that reflected a moral imperative to share. He objected to the phrase that Torvalds and Eric Raymond began to use, open-source software, which emphasized the pragmatic goal of getting people to collaborate in order to create software more effectively. In practice, most free software is also open-source and vice versa; they are usually thrown together under the rubric of free and open-source software. But to Stallman it mattered not only how you made your software but also your motivations. Otherwise the movement might be susceptible to compromise and corruption.
The disputes went beyond mere substance and became, in some ways, ideological. Stallman was possessed by a moral clarity and unyielding aura, and he lamented that “anyone encouraging idealism today faces a great obstacle: the prevailing ideology encourages people to dismiss idealism as ‘impractical.’ ”142 Torvalds, on the contrary, was unabashedly practical, like an engineer. “I led the pragmatists,” he said. “I have always thought that idealistic people are interesting, but kind of boring and scary.”143
Torvalds admitted to “not exactly being a huge fan” of Stallman, explaining, “I don’t like single-issue people, nor do I think that people who turn the world into black and white are very nice or ultimately very useful. The fact is, there aren’t just two sides to any issue, there’s almost always a range of responses, and ‘it depends’ is almost always the right answer in any big question.”144 He also believed that it should be permissible to make money from open-source software. “Open source is about letting everybody play. Why should business, which fuels so much of society’s technological advancement, be excluded?”145 Software may want to be free, but the people who write it may want to feed their kids and reward their investors.
These disputes should not overshadow the astonishing accomplishment that Stallman and Torvalds and their thousands of collaborators wrought. The combination of GNU and Linux created an operating system that has been ported to more hardware platforms, ranging from the world’s ten biggest supercomputers to embedded systems in mobile phones, than any other operating system. “Linux is subversive,” wrote Eric Raymond. “Who would have thought that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?”146 Not only did it become a great operating system; it became a model for commons-based peer production in other realms, from Mozilla’s Firefox browser to Wikipedia’s content.
By the 1990s there were many models for software development. There was the Apple approach, in which the hardware and the operating system software were tightly bundled, as with the Macintosh and iPhone and every iProduct in between. It made for a seamless user experience. There was the Microsoft approach, in which the operating system was unbundled from the hardware. That allowed more user choices. In addition, there were the free and open-source approaches, which allowed the software to be completely unfettered and modifiable by any user. Each model had its advantages, each had its incentives for creativity, and each had its prophets and disciples. But the approach that worked best was having all three models coexisting, along with various combinations of open and closed, bundled and unbundled, proprietary and free. Windows and Mac, UNIX and Linux, iOS and Android: a variety of approaches competed over the decades, spurring each other on—and providing a check against any one model becoming so dominant that it stifled innovation.
I. After they became successful, Gates and Allen donated a new science building to Lakeside and named its auditorium after Kent Evans.
II. Steve Wozniak’s unwillingness to tackle this tedious task when he wrote BASIC for the Apple II would later force Apple to have to license BASIC from Allen and Gates.
III. Reading a draft version of this book online, Steve Wozniak said that Dan Sokol made only eight copies, because they were hard and time-consuming to make. But John Markoff, who reported this incident in What the Dormouse Said, shared with me (and Woz and Felsenstein) the transcript of his interview with Dan Sokol, who said he used a PDP-11 with a high-speed tape reader and punch. Every night he would make copies, and he estimated he made seventy-five in all.
IV. The lawyers were right to be worried. Microsoft later was involved in a protracted antitrust suit brought by the Justice Department, which charged that it had improperly leveraged its dominance of the operating system market to seek advantage in browsers and other products. The case was eventually settled after Microsoft agreed to modify some of its practices.
V. By 2009 the Debian version 5.0 of GNU/Linux had 324 million source lines of code, and one study estimated that it would have cost about $8 billion to develop by conventional means (http://gsyc.es/~frivas/paper.pdf).
Larry Brilliant (1944– ) and Stewart Brand on Brand’s houseboat in 2010.
William von Meister (1942–1995).
Steve Case (1958– ).
CHAPTER TEN
ONLINE
The Internet and the personal computer were both born in the 1970s, but they grew up apart from one another. This was odd, and all the more so when they continued to develop on separate tracks for more than a decade. This was partly because there was a difference in mind-set between those who embraced the joys of networking and those who got giddy at the thought of a personal computer of their very own. Unlike the utopians of the Community Memory project who loved forming virtual communities, many early fans of personal computers wanted to geek out alone on their own machines, at least initially.
There was also a more tangible reason that personal computers arose in a way that was disconnected from the rise of networks. The ARPANET of the 1970s was not open to ordinary folks. In 1981 Lawrence Landweber at the University of Wisconsin pulled together a consortium of universities that were not connected to the ARPANET to create another network based on TCP/IP protocols, which was called CSNET. “Networking was available only to a small fraction of the U.S. computer research community at the time,” he said.1 CSNET became the forerunner of a network funded by the National Science Foundation, NSFNET. But even after these were all woven together into the Internet in the early 1980s, it was hard for an average person using a personal computer at home to get access. You generally had to be affiliated with a university or research institution to jack in.
So for almost fifteen years, beginning in the early 1970s, the growth of the Internet and the boom in home computers proceeded in parallel. They didn’t intertwine until the late 1980s, when it became possible for ordinary people at home or in the office to dial up and go online. This would launch a new phase of the Digital Revolution, one that would fulfill the vision of Bush, Licklider, and Engelbart that computers would augment human intelligence by being tools both for personal creativity and for collaborating.
EMAIL AND BULLETIN BOARDS
“The street finds its own uses for things,” William Gibson wrote in “Burning Chrome,” his 1982 cyberpunk story. Thus it was that the researchers who had access to the ARPANET found their own use for it. It was supposed to be a network for time-sharing computer resources. In that it was a modest failure. Instead, like many technologies, it shot to success by becoming a medium for communications and social networking. One truth about the digital age is that the desire to communicate, connect, collaborate, and form community tends to create killer apps. And in 1972 the ARPANET got its first. It was email.
Electronic mail was already used by researchers who were on the same time-sharing computer. A program called SNDMSG allowed a user of a big central computer to send a message to the personal folder of another user who was sharing the same computer. In late 1971 Ray Tomlinson, an MIT engineer working at BBN, decided to concoct a cool hack that would allow such messages to be sent to folders on other mainframes. He did it by combining SNDMSG with an experimental file transfer program called CPYNET, which could exchange files between distant computers on the ARPANET. Then he came up with something that was even more ingenious: in order to instruct a message to go to the file folder of a user at a different site, he used the @ sign on his keyboard to create the addressing system that we all use now, username@hostname. Thus Tomlinson created not only email but the iconic symbol of the connected world.2
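Tomlinson's username@hostname convention is still how every mail system routes a message: the part before the @ names the user's mailbox, the part after names the machine. A minimal sketch of parsing such an address (the helper name is my own, for illustration only):

```python
def parse_address(address: str) -> tuple[str, str]:
    """Split an email address into (username, hostname).

    Tomlinson's scheme: everything before the last '@' names the
    user's mailbox; everything after it names the host machine.
    """
    username, sep, hostname = address.rpartition("@")
    if not sep or not username or not hostname:
        raise ValueError(f"not a valid user@host address: {address!r}")
    return username, hostname

print(parse_address("tomlinson@bbn-tenexa"))  # ('tomlinson', 'bbn-tenexa')
```

Using `rpartition` rather than a plain split means the rightmost @ wins, which matches how mail systems resolve the host portion.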
The ARPANET allowed researchers at one center to tap into the computing resources somewhere else, but that rarely happened. Instead email became the main method for collaborating. ARPA’s director, Stephen Lukasik, became one of the first email addicts, thus causing all researchers who needed to deal with him to follow suit. He commissioned a study in 1973 which found that, less than two years after it was invented, email accounted for 75 percent of the traffic on the ARPANET. “The largest single surprise of the ARPANET program has been the incredible popularity and success of network mail,” a BBN report concluded a few years later. It should not have been a surprise. The desire to socially network not only drives innovations, it co-opts them.
Email did more than facilitate the exchange of messages between two computer users. It led to the creation of virtual communities, ones that, as predicted in 1968 by Licklider and Taylor, were “selected more by commonality of interests and goals than by accidents of proximity.”
The earliest virtual communities began with email chains that were distributed to large self-selected groups of subscribers. They became known as mailing lists. The first major list, in 1975, was SF-Lovers, for science fiction fans. The ARPA managers initially wanted to shut it down out of fear that some senator might not be amused by the use of military money to support a sci-fi virtual hangout, but the moderators of the group successfully argued that it was a valuable training exercise in juggling large information exchanges.
Soon other methods of forming online communities arose. Some used the backbone of the Internet; others were more jury-rigged. In February 1978 two members of the Chicago Area Computer Hobbyists’ Exchange, Ward Christensen and Randy Suess, found themselves snowed in by a huge blizzard. They spent the time developing the first computer Bulletin Board System, which allowed hackers and hobbyists and self-appointed “sysops” (system operators) to set up their own online forums and offer files, pirated software, information, and message posting. Anyone who had a way to get online could join in. The following year, students at Duke University and the University of North Carolina, which were not yet connected to the Internet, developed another system, hosted on personal computers, which featured threaded message-and-reply discussion forums. It became known as “Usenet,” and the categories of postings on it were called “newsgroups.” By 1984 there were close to a thousand Usenet terminals at colleges and institutes around the country.
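The threaded message-and-reply structure that bulletin boards and Usenet newsgroups pioneered can be modeled as a simple tree, where each post holds its own replies. This is a toy sketch with hypothetical names, not the actual BBS or Usenet implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    replies: list["Post"] = field(default_factory=list)

    def reply(self, author: str, body: str) -> "Post":
        """Attach a reply beneath this post and return it."""
        child = Post(author, body)
        self.replies.append(child)
        return child

    def render(self, depth: int = 0) -> str:
        """Render the thread, indenting each level of reply nesting."""
        lines = ["  " * depth + f"{self.author}: {self.body}"]
        for r in self.replies:
            lines.append(r.render(depth + 1))
        return "\n".join(lines)

root = Post("ward", "Anyone else snowed in?")
r1 = root.reply("randy", "Yes -- let's build a bulletin board.")
r1.reply("ward", "I'll write the messaging code.")
print(root.render())
```

Running this prints the conversation as an indented thread, with each reply nested one level deeper than its parent.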
Even with these new bulletin boards and newsgroups, most average PC owners could not easily join virtual communities. Users needed a way to connect, which wasn’t easy from home or even most offices. But then, in the early 1980s, an innovation came along, part technological and part legal, that seemed small but had a huge impact.
MODEMS
The little device that finally created a connection between home computers and global networks was called a modem. It could modulate and demodulate (hence the name) an analog signal, like that carried by a telephone circuit, in order to transmit and receive digital information. It thus allowed ordinary people to connect their computers to others online by using phone lines. The online revolution could now begin.
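The modulation the passage describes can be illustrated with frequency-shift keying, the scheme early 300-bits-per-second modems such as the Bell 103 used: each bit becomes a short burst of one of two audio tones. This is a toy sketch, not a working modem; the tone frequencies match the Bell 103 originate side, but everything else is simplified:

```python
import math

# Bell 103 originate-side tones: 1070 Hz for a 0 ("space"), 1270 Hz for a 1 ("mark")
SPACE_HZ, MARK_HZ = 1070.0, 1270.0
SAMPLE_RATE = 8000           # audio samples per second
BAUD = 300                   # bits per second
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD

def modulate(bits):
    """Turn a bit sequence into audio samples: one tone burst per bit."""
    samples = []
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by checking which tone correlates best with each burst."""
    bits = []
    for i in range(0, len(samples) - SAMPLES_PER_BIT + 1, SAMPLES_PER_BIT):
        burst = samples[i:i + SAMPLES_PER_BIT]
        def energy(freq):
            c = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
                    for n, s in enumerate(burst))
            d = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                    for n, s in enumerate(burst))
            return c * c + d * d
        bits.append(1 if energy(MARK_HZ) > energy(SPACE_HZ) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(message)) == message
```

The round trip recovers the original bits: the "modulate" half is what the modem did on the way out to the phone line, and the "demodulate" half is what the modem on the other end did to turn the staticky screech back into data.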
It was slow in coming because AT&T had a near-monopoly over the nation’s phone system, even controlling the equipment you could use in your home. You couldn’t connect anything to your phone line, or even to your phone, unless Ma Bell leased it to you or approved it. Although AT&T offered some modems in the 1950s, they were clunky and costly and designed mainly for industrial or military use, rather than being conducive to homebrew hobbyists creating virtual communities.
Then came the Hush-A-Phone case. It involved a simple plastic mouthpiece that could be snapped onto a phone to amplify your voice while making it harder for those nearby to overhear you. It had been around for twenty years, causing no harm, but then an AT&T lawyer spotted one in a shopwindow, and the company decided to sue on the absurd ground that any external device, including a little plastic cone, could damage its network. It showed how far the company would go to protect its monopoly.
Fortunately, AT&T’s effort backfired. A federal appeals court dismissed the company’s claim, and the barriers to jacking into its network began to crumble. It was still illegal to connect a modem into the phone system electronically, but you could do so mechanically, such as by taking your phone’s handset and cradling it into the suction cups of an acoustical coupler. By the early 1970s there were a few modems of this type, including the Pennywhistle, designed for the hobbyist crowd by Lee Felsenstein, that could send and receive digital signals at three hundred bits per second.I
The next step came when a headstrong Texas cowboy won, after a twelve-year legal battle he financed by selling off his cattle, the right for his customers to use a radio-enabled extension phone he had invented. It took a few years for all of the regulations to be worked out, but by 1975 the Federal Communications Commission opened the way for consumers to attach electronic devices to the network.
The rules were stringent, due to AT&T lobbying, so electronic modems were initially expensive. But in 1981 the Hayes Smartmodem came on the market. It could be plugged directly into a phone line and connected to a computer, with no need for a clunky acoustic coupler. Pioneering hobbyists and cyberpunks, along with ordinary home computer users, could type in the phone number of an online service provider, hold their breath while they waited for the staticky screech that indicated a data connection had been made, and then tap into the virtual communities that formed around bulletin boards, newsgroups, mailing lists, and other online hangouts.
THE WELL
In almost every decade of the Digital Revolution, the amused and amusing Stewart Brand found a way to stand at the locus where technology overlapped with community and the counterculture. He had produced the techno-psychedelic show at Ken Kesey’s Trips Festival, reported on Spacewar and Xerox PARC for Rolling Stone, aided and abetted Doug Engelbart’s Mother of All Demos, and founded the Whole Earth Catalog. So in the fall of 1984, just as modems were becoming easily available and personal computers were becoming user-friendly, it was not surprising that Brand helped to conjure up the idea for the prototypic online community, The WELL.
It began when Brand was visited by another of the playfully earnest and creative denizens of the idealistic techno-counterculture, Larry Brilliant. A physician and epidemiologist, Brilliant had a compulsion to change the world and have fun while doing so. He had served as the doctor for an American Indian occupation of Alcatraz, sought enlightenment at a Himalayan ashram with the famed guru Neem Karoli Baba (where he first crossed paths with Steve Jobs), enlisted in the World Health Organization’s campaign to eliminate smallpox, and with support from Jobs and the counterculture luminaries Ram Dass and Wavy Gravy founded the Seva Foundation, which focused on curing blindness in poor communities around the world.
When one of the helicopters used by the Seva Foundation in Nepal had mechanical problems, Brilliant used a computer conferencing system and an Apple II that Jobs had donated to organize a repair mission online. The potential power of online discussion groups impressed him. When he went to teach at the University of Michigan, he helped to build a company around a computer conferencing system that had been created on the university’s network. Known as PicoSpan, it allowed users to post comments on different topics and strung them into threads for all to read. Brilliant’s idealism, techno-utopianism, and entrepreneurialism flowed together. He used the conferencing system to bring medical expertise to Asian hamlets and organize missions when something went wrong.
When Brilliant went to a conference in San Diego, he called his old friend Stewart Brand for lunch. They met at a beachside restaurant near where Brand planned to spend the day skinny-dipping. Brilliant had two interwoven goals: to popularize the PicoSpan conferencing software and to create an online intellectual commune. He pitched Brand on a partnership in which Brilliant would put up $200,000 in capital, buy a computer, and provide the software. “Stewart would then manage the system and extend it throughout his network of smart, interesting people,” Brilliant explained.3 “My idea was to use this new technology as a way of discussing everything in the Whole Earth Catalog. There can be a social network around Swiss Army knives or solar stoves or anything.”4
Brand turned the idea into something grander: creating the world’s most stimulating online community where people could discuss anything they wanted. “Let’s just have a conversation and get the smartest people in the world,” he suggested, “and let them figure out whatever they want to talk about.”5 Brand came up with a name, The WELL, and reverse-engineered an acronym for it: the Whole Earth ’Lectronic Link. A playful apostrophe, he later said, was “always worth having in a name.”6
Brand championed a concept, abandoned by many later virtual communities, that was critical to making The WELL a seminal service. The participants could not be totally anonymous; they could use a handle or pseudonym, but they had to provide their real name when they joined, and other members could know who they were. Brand’s credo, which popped up on the opening screen, was “You own your own words.” You were accountable for what you posted.
Like the Internet itself, The WELL became a system designed by its users. By 1987 the topics of its online forums, known as conferences, ranged from the Grateful Dead (the most popular) to UNIX programming, from art to parenting, aliens to software design. There was minimal hierarchy or control, so it evolved in a collaborative way. That made it both an addictive experience and a fascinating social experiment. Whole books were written about it, including ones by the influential tech chroniclers Howard Rheingold and Katie Hafner. “Just being on The Well, talking with people you might not consider befriending in any other context, was its own seduction,” Hafner wrote.7 In his book Rheingold explained, “It’s like having the corner bar, complete with old buddies and delightful newcomers and new tools waiting to take home and fresh graffiti and letters, except instead of putting on my coat, shutting down the computer, and walking down to the corner, I just invoke my telecom program and there they are.”8 When Rheingold discovered that his two-year-old daughter had a tick in her scalp, he found out how to treat it from a doctor on The WELL before his own physician had called him back.
Online conversations could be intense. A discussion leader named Tom Mandel, who became a central character in Hafner’s book and also helped me and my colleagues at Time manage our online forums, regularly engaged in fiery exchanges, known as flame wars, with other members. “I expressed opinions about everything,” he recalled. “I even started an altercation that dragged half of West Coast cyberspace into an electronic brawl and got myself banished from the WELL.”9 But when he revealed he was dying of cancer, they rallied around him emotionally. “I’m sad, terribly sad, I cannot tell you how sad and grief stricken I am that I cannot stay to play and argue with you much longer,” he wrote in one of his last posts.10
The WELL was a model of the type of intimate, thoughtful community that the Internet used to feature. It still remains, after three decades, a tight-knit community, but it was long ago overtaken in popularity by more commercial online services and then by less communal discussion venues. The widespread retreat into anonymity online has undermined Brand’s creed that people should be accountable for what they say, thus making many online comments less thoughtful and discussions less intimate. As the Internet goes through different cycles—it has been a platform for time-sharing, community, publishing, blogging, and social networking—there may come a time when the natural yearning that humans have for forging trusted communities, akin to corner bars, will reassert itself, and The WELL or startups that replicate its spirit will become the next hot innovation. Sometimes innovation involves recovering what has been lost.
AMERICA ONLINE
William von Meister was an early example of the new frontiersmen who would drive digital innovation beginning in the late 1970s. Like Ed Roberts of Altair, von Meister was a supercharged serial entrepreneur. Fueled by the proliferation of venture capitalists, this breed of innovators threw off ideas like sparks, got an adrenaline rush from risk taking, and touted new technologies with the zeal of evangelists. Von Meister was both an exemplar and a caricature. Unlike Noyce and Gates and Jobs, he did not set out to build companies but instead to launch them and see where they landed. Rather than being afraid of failure, he was energized by it, and his ilk made forgiving failure a feature of the Internet age. A magnificent rogue, he started nine companies in ten years, most of which either crashed or ejected him. But through his serial failures, he helped to define the archetype of the Internet entrepreneur and, in the process, invent the online business.11
Von Meister’s mother was an Austrian countess and his father, a godson of Kaiser Wilhelm II, ran the U.S. division of the German zeppelin company that operated the Hindenburg until its 1937 explosion, and then ran a division of a chemical company until he was indicted for fraud. His style rubbed off on young Bill, born in 1942, who seemed hell-bent on matching his father’s flameouts in flamboyance if not in severity. Growing up in a whitewashed brick mansion known as Blue Chimneys on a twenty-eight-acre estate in New Jersey, he loved escaping to the attic to operate his ham radio and build electronic gadgets. Among the devices he made was a radio transmitter that his father kept in his car and used to signal when he was nearing home from work so that the household staff could prepare his tea.
After a desultory academic career that consisted of dropping into and out of colleges in Washington, DC, von Meister joined Western Union. He made money with a bunch of side ventures, including salvaging some of the company’s discarded equipment, and then launched a service that allowed people to dictate important letters to call centers for overnight delivery. It was successful, but in what became a pattern, von Meister was forced out for spending wildly and not paying any attention to operations.II