Ellen Pao: This Is How Sexism Works in Silicon Valley



My lawsuit failed. Others won’t.
By Ellen Pao
Photograph by Dan Winters
August 20, 2017

In December 2010, Sheryl Sandberg gave a talk about women’s leadership in which she mentioned “sitting at the table.” Women, she said, have to pull up a chair and sit at the conference-room table rather than clinging to the edges of the room, “because no one gets to the corner office by sitting on the side.”

Less than a year later, I would take those words to heart. I had been working for six years at the Silicon Valley firm Kleiner Perkins Caufield & Byers as a junior partner and chief of staff for managing partner John Doerr. Kleiner was then one of the three most powerful venture-capital firms in the world. One day, I was part of a small group flying from San Francisco to New York on the private jet of another managing partner, Ted Schlein.

I was the first to arrive at Hayward Airport. The main cabin of the plane was set up with four chairs in pairs facing each other. Usually the most powerful seat faces forward, looking at the TV screen, with the second most powerful next to it. Then came the seats facing backward. I was sure the white men booked on the flight (Ted, senior partner Matt Murphy, a tech CEO, and a tech investor) would be taking those four seats and I would end up on the couch in back. But Sheryl’s words echoed in my mind, and I moved to one of the power seats — the fourth, backward-facing seat, but at the table nonetheless. The rest of the folks filed in one by one. Ted sat across from me, the CEO next to him, and the tech investor next to me on my right. Matt ended up with what would have been my original seat on the couch.
Once we were airborne, the CEO, who’d brought along a few bottles of wine, started bragging about meeting Jenna Jameson, talking about her career as the world’s greatest porn star and how he had taken a photo with her at the Playboy Mansion. He asked if I knew who she was and then proceeded to describe her pay-per-view series (Jenna’s American Sex Star), on which women competed for porn-movie contracts by performing sex acts before a live audience. “Nope,” I said. “Not a show I’m familiar with.”

Then the CEO switched topics. To sex workers. He asked Ted what kind of “girls” he liked. Ted said that he preferred white girls — Eastern European, to be specific.

Eventually we all moved to the couch for a working session to help the tech CEO; he was trying to recruit a woman to his all-male board. I suggested Marissa Mayer, but the CEO looked at me and dismissively said, “Nah, too controversial.” Then he grinned at Ted and added, “Though I would let her join the board because she’s hot.”

Somehow, I got the distinct vibe that the group couldn’t wait to ditch me. And once we landed at Teterboro, the guys made plans to go to a club, while I headed into Manhattan alone. Taking your seat at the table doesn’t work so well, I thought, when no one wants you there. (When Sandberg’s book Lean In came out, that same Jenna Jameson–obsessed CEO became a vocal spokesperson for it.)

Seven months later, I would sue Kleiner Perkins for sexual harassment and discrimination in a widely publicized case in which I was often cast as the villain — incompetent, greedy, aggressive, and cold. My husband and I were both dragged through the mud, our privacy destroyed. For a long time I didn’t challenge those stories, because I wasn’t ready to talk about my experience in detail. Now I am.
When I first got the three pages of specs for a chief-of-staff position at Kleiner Perkins in 2005, it was almost as if someone had copied my résumé. The list of requirements was comically long: an engineering degree (only in computer science or electrical engineering), a law degree and a business degree (only from top schools), management-consulting experience (only at Booz Allen or Bain), start-up experience (only at a top start-up), enterprise-software-company experience (only at a big established player known for training employees) … oh, and fluency in Mandarin. John Doerr wanted his new chief of staff to “leverage his time,” which he valued at $200,000 per hour.

I liked John. People sometimes compare him to Woody Allen, because he has that strange mixture of nervous energy, nerdy charm, and awkwardness, though John was also an unapologetic salesman. His pitch to me: I would be senior to others in this role; Kleiner Perkins was one of the few VC firms with women, and he wanted to bring even more onboard; diversity was important to him.

In retrospect, there were some early warning signs, like when John declared that he’d specifically requested an Asian woman for my position. He liked the idea of a “Tiger Mom–raised” woman. He usually had two chiefs of staff at a time, one of each gender, but the male one seemed to focus mostly on investing and the female one did more of the grunt work and traveled with him. “There are certain things I am just more comfortable asking a woman to do,” John once told me matter-of-factly.

Still, my new job felt thrilling. At Kleiner, we focused on the really big ideas: the companies that were trying to transform an industry or revolutionize daily life. And John was king of kingmakers. He had some early hits: Genentech, Intuit, Amazon. He was one of the first to invest in the internet, with Netscape, and cemented his online reputation later with Google.
In venture capital, a ton of power is concentrated in just a few people who all know one another. Tips and information are exchanged at all-male dinners, outings to Vegas, and sports events. Networks are important inside a VC firm, too.

One secret of the venture-capital world is that many firms run on vote trading. A person might offer to vote in favor of investing in another partner’s investment so that partner will support his upcoming investment. Many firms, including Kleiner, also had a veto rule: Any one person could veto another member’s investment. No one ever exercised a veto while I was there, but fear of it motivated us to practice the California art of superficial collegiality, where everything seems tan and shiny on the outside but behind closed doors, people would trash your investment, block it, or send you on unending “rock fetches” — time-consuming, unproductive tasks to stall you until you gave up.

Venture capital’s underbelly of competitiveness exists in part because the more I invest, the less money for you, my partner, to make your investments. And we’re all trying to make as many investments as possible because chances are low that any one investment is going to be successful. Partners can increase their own odds by excluding all of your investments.

And as a junior partner you faced another dilemma: Your investments could be poached by senior partners. You wanted to pitch your venture so it would be supported but not so much that it would be stolen. Once a senior partner laid claim to a venture you were driving, you were better off just keeping quiet. Otherwise, you could be branded as having sharp elbows and not being a good team player. But this was true, I noticed, only for women. Junior men could sometimes even take ventures from senior partners. Predicting who will succeed is an imperfect art, but also, sometimes, a self-fulfilling prophecy.
When venture capitalists say — and they do say — “We think it’s young white men, ideally Ivy League dropouts, who are the safest bets,” then invest only in young white men with Ivy League backgrounds, of course young white men with Ivy League backgrounds are the only ones who make money for them. They’re also the only ones who lose money for them.

Sometimes the whole world felt like a nerdy frat house. People in the venture world spoke fondly about the early shenanigans at big companies. A friend told me how he sublet office space to Facebook, only to find people having sex there on the floor of the main public area. They wanted to see if the Reactrix — an interactive floor display hooked up to light sensors — would enhance their experience.

At VC meetings, male partners frequently spoke over female colleagues or repeated what the women said and took the credit. Women were admonished when they “raised their voices” yet chastised when they couldn’t “own the room.” When I was still relatively new, a male partner made a big show of passing a plate of cookies around the table — but curiously ignored me and the woman next to him. Part of me thought, They’re just cookies. But after everyone left, my co-worker turned to me and shrugged. “It’s like we don’t exist,” she said.

Then, in 2006, I took a fateful business trip. Ajit Nazre, a fellow partner, had asked me to go with him on a tour of German green-tech start-ups. Every time we were alone in the evening, he would tell me that he and his wife had a terrible relationship, that he was desperately lonely, and that he and I would be good together. Honestly, I might have considered dating him had he been less arrogant and less married. I was awfully lonely too. After our last set of meetings, Ajit asked for my room number. Since he and I were leaving the next morning, I figured he might want to coordinate our departure for the airport. So I told him the number. But I must have subconsciously given him the wrong one.
The next morning at checkout, he was livid. It seemed he’d gone to what he thought was my room and I wasn’t there. He stormed off to the airport by himself.

After the trip, I tried to placate Ajit by sending a couple of friendly emails. He slowly became friendlier; then we worked on a project together, and he was actually helpful. I tried to keep the relationship professional, but he soon started saying that he and his wife were having problems again. I kept up my mantra: “You should do counseling.” Until, one day, he said, “I wanted you to know that my wife and I have separated. We’re getting divorced. I want to be with you.” He’d never said anything like that before. I felt a dash of hope that this could be a real thing.

We started seeing each other and had what eventually amounted to a short-lived, sporadic fling. It was fun bonding over work. Ajit told me the history of the firm and gave me the scoop on departed partners, and I felt like I was at last being let in on company secrets. Finally I had someone who was willing to talk about the dysfunction we saw in our workplace, and to be honest about how decisions were really made.

Then one day in a meeting, one of the managing partners, oblivious to my relationship with Ajit, said, “Can you believe my weekend? I was in a suite at the Ritz-Carlton at Half Moon Bay, standing on the balcony in my bathrobe, and who did I see? Ajit and his wife walking across the lawn!”

I broke it off with Ajit, but I was hopeful we could move past it personally and professionally.

As it turned out, I would soon meet and fall in love with the man I would marry, Buddy Fletcher. He was a financial arbitrageur who’d helped fund the first professorship for LGBTQ studies at Harvard.
During our first date in New York, he told me during hours of conversation that he had previously been with men, something I never had a problem with but which would later be used in the press as evidence that our marriage was a sham. We got engaged just six weeks after we met.

My newly joyous home life was a liability in one sense, though: It made me recognize how very uncomfortable my work situation had become. Ajit had grown increasingly hostile toward me, excluding me from information and meetings. Even with other managers, I often got ignored or interrupted. At one point, John had a suggestion for how I could get more airtime. He wanted me to go to school — to learn to be a stand-up comic.

Partners had become so aggressive about pursuing ventures I was working on that CEOs started to point it out. One CEO I had been working with, Mike McCue, called me to relate how John and another managing partner, Bing Gordon, had met with him and asked to invest more money in his start-up Flipboard. Just a few months earlier, I had lobbied hard for the firm to go all in and invest a large amount in Flipboard, but had been pushed to take a smaller investment. “They offered to pay $15 million for another 5 percent,” Mike told me. That price worked out to 20 percent higher than in the latest round. When Mike told them he was done raising, they upped their offer to $25 million for 5 percent. I gasped inwardly. We’d had a chance to buy the same number of shares for only $10 million just a month ago. “Then,” he continued, “Bing asked for a board seat for himself or John. I said no. You know I don’t want anyone but you on my board right now. What is going on?”

Now I understood why they hadn’t invited me to the meeting or even told me about it. I had made the initial investment and was a board member, so standard practice would have been for me to be part of any discussions about Flipboard. After the call, I confronted the two partners with Mike’s account.
John seemed sheepish and blamed the gambit on Bing. Bing just looked alarmed. I don’t think he ever expected to fail in his bid or be held accountable for his bad manners. I didn’t get an apology, but the look of shock on his face was almost enough to make me feel better. And I could console myself with the knowledge that at least I had relationships worth trying to steal.

Of the few women I encountered at or above my professional level, almost none had young families. One partner told me that when she happily announced her third pregnancy, a male senior partner responded, “I don’t know any professional working woman who has three kids.”

When I gave birth to my first child, some partners at work treated my taking maternity leave as the equivalent of abandoning a ship in the middle of a typhoon to get a manicure. Juliet de Baubigny, one of the partners who had helped recruit me, had warned me that taking time off would put my companies at risk of being commandeered by another partner. I knew two other women who had board seats taken away during their maternity leaves. Juliet coached me on how to keep at least one company by leading their search for a CEO even though technically I was on leave. I’d arranged to take four months off, but after three I felt pressure to return.

Back at Kleiner, I continued to have a huge problem with Ajit. Not only was he blocking my work, he had been promoted to a position of even greater responsibility and was giving me negative reviews. I started to lodge formal complaints about him. In response, the firm suggested I transfer to the China office.

It wasn’t until the spring of 2011 that I finally told a few colleagues about my harassment by Ajit. One instructed me never to mention it again. But when I told fellow junior partner Trae Vassallo, she grew uncharacteristically quiet. Then she said something I never expected: She had been harassed by Ajit, too.
He’d asked her out for drinks to talk shop, and in the course of the evening he started touching her with his leg under the table. Then I said something I still feel bad about. I recommended that she not report it. I had, and had been paying the price ever since. Fortunately, Trae didn’t take my advice. She reported Ajit’s behavior soon after, when she found out he was about to do her review. She was promised that the firm would keep an eye on it, but no other action was taken.

In that round of summer reviews, Kleiner had six junior partners who had worked there for four or more years. The women had twice as many years at Kleiner, but only the men got promoted.

Around the end of November, Ajit persuaded Trae to go to New York with him for an important work trip. He said they’d be having dinner with a CEO who might be able to help one of Trae’s companies. But when they arrived, Trae saw that the table was set for two. The trip was just her and Ajit, in a hotel together for the weekend. Later that night he came to her room in his bathrobe, asking to be let in. She eventually had to push him out the door. Later, when she told one of the managing partners about the fake trip, he said, “You should feel flattered.”

It was now clear to me that the firm was unwilling to take the difficult actions needed to fix its problems. On January 4, 2012, I sent an email to the managing partners presenting all the facts as clearly as I could and asking for substantive changes and either protection from further ostracism or help with an exit. After more than a month, the company put Ajit on leave. Two tense months after that, he finally left.

When I spoke to the COO, he asked how much I wanted in order to quietly leave. “I want no less than what Ajit gets,” I said — which I suspected was around $10 million. The COO gasped.

Life at Kleiner got progressively worse. At one point I found out the partners had taken some CEOs and founders on an all-male ski trip.
They spent $50,000 on the private jet to and from Vail. I was later told that they didn’t invite any women because women probably wouldn’t want to share a condo with men.

Finally, an outside independent investigator looked into Trae’s complaint and the issues I’d raised in my memo. Almost all of the women came to me after their interviews with him and said the same thing: “He really didn’t ask questions. He asked if we had ever seen porn in the office.” He didn’t seem interested in finding out about actual discrimination, bias, or harassment.

In my own interview, when I mentioned that my colleagues had talked about a porn star when we were on a plane together, the investigator asked if it was Sasha Grey. I said no. He pressed the point, saying that Sasha Grey was crossing over into legitimate acting.

At another point, the investigator asked, in a “gotcha” tone, “Well, if they look down on women so much, if they block you from opportunities, they don’t include you at their events, why do they even keep you around in the first place?” I hadn’t thought about it before. I replied slowly as the answer crystallized in my mind: If you had the opportunity to have workers who were overeducated, underpaid, and highly experienced, whom you could dump all the menial tasks you didn’t want to do on, whom you could get to clean up all the problems, and whom you could create a second class out of, wouldn’t you want them to stay? I noticed he didn’t write that down in his notebook.

Among the other things the investigator did not write down: that there was no sexual-harassment training, not even a line in the hiring paperwork saying: Hey, be appropriate. Don’t do things that make people feel uncomfortable. Don’t touch people. Kleiner’s managing partners flouted hiring rules, too, asking inappropriate questions in interviews like: Are you married? Do you have kids? How old are you? Are you thinking about having kids? What does your husband do? What did your ex-husband do?
It was noted at some point that such questions created a giant legal risk, and the response was, effectively, Well, who’s going to sue us?

[Courtroom drawings from the 2015 Pao v. Kleiner Perkins trial. Clockwise: Pao on the stand; Doerr, Pao’s former boss, testifying before the jury; Pao hugging her lawyer after the closing arguments; the court clerk reading the verdict. Illustration: Vicki Behringer]

Apparently, me. My claim — 12 pages covering everything that had happened to me over seven years at Kleiner — specified gender discrimination in promotion and pay, and retaliation against me after I reported the harassment. I asked for damages to cover the lost pay and to prevent them from doing it again. Meanwhile, Kleiner had notified me that its investigation was done: The finding was that there had been no retaliation or discrimination.

In response to my suit, Kleiner hired a powerful crisis-management PR firm, Brunswick. On their website, they bragged about having troll farms — “integrated networks of influence,” used in part for “reputation management” — and I believe they enlisted one to defame me online. Dozens, then thousands, of messages a day derided me as bad at my job, crazy, an embarrassment. Repeatedly, Kleiner called me a “poor performer.” A Vanity Fair story implied that Buddy was gay, a fraud, and a fake husband.

My lawyer said my case would be stronger if I continued to work at Kleiner. The general partners sometimes had long meetings to discuss the lawsuit; the ten of them would file into one of the large, windowed conference rooms. I could see them hunched around the table looking annoyed as a team of lawyers blared over the speakerphone. If I walked down the hall, the room would fall silent and their eyes would follow me until I was out of sight.

I tried to stay focused on my personal life. After a lot of trying, I was pregnant with our second child. Still, the negativity wore me down.
Then in June 2012, during a regular checkup, my doctor looked at the sonogram and I could tell something was wrong. I’d had a miscarriage. “When I saw all the horrible things being said about you,” he said, “I was worried about you and the baby. Stress can be a factor.” He was referring to an article in the New York Times in which an expert was quoted saying he was skeptical about my claims because he hadn’t ever heard of mistreatment of women in Silicon Valley. I felt, in that moment, that Kleiner had taken everything from me.

Then I had my summer review. Ted and Matt told me that CEOs had complained about my board performance. When I asked which ones, Matt just said, “All of them.” A few months later, Matt asked me to leave, claiming I hadn’t improved since my last review. I was told to be out of the office by the end of the day. On the drive home, I wept. Some of it was sorrow. Most of it was relief.

The trial would last five weeks. Lynne Hermle of Orrick, Kleiner’s defense attorney, had once been so intense during another case that she’d made an opponent vomit in the courtroom, and she wasted no time in painting a picture of me as talentless, stupid, and greedy. “She did not have the necessary skills for the job,” she said. “She did not even come close.”

At other times, the trial was almost gratifying. The CFO, who was also still at Kleiner, admitted that until 2012 the company didn’t even have an HR department and didn’t provide employees with an anti-discrimination policy; they hadn’t had one until Trae and I formally raised our concerns. When Hermle tried to show that women did rise at Kleiner, we pointed out that nearly all those promotions had happened since I brought up these issues. At the start of 2012, when Trae and I had lodged our complaint, only one woman in the firm’s 40-year history had ever been promoted to senior partner.

And anomalies in reviews were finally cleared up: It turned out that Ted had set up a process designed to make me look bad.
He started with the standard procedure: I was asked to list people I had worked with; our outside consultant asked everyone on my list to review me; she organized their feedback and sent it to Ted. Ted then solicited negative feedback from phantom reviewers — people I had not worked closely with, who were not on my list, and whom he didn’t list as reviewers in the final document. The everyone-hates-you feedback Ted had delivered was in fact from a board-member crony of his and another venture capitalist I had not worked with much at all.

I’d be lying if I said it wasn’t a thrill to hear Ted questioned by one of my lawyers, Alan Exelrod, about the positive feedback he’d hidden from me and excluded from my review. Had we not gone to trial, it would never have surfaced.

Alan: He said that “She was very engaged and proactive,” correct?
Ted: Yes.
Alan: And that “She tries very hard to be helpful”?
Ted: Correct.
Alan: “100 percent behind the company”?
Ted: Yes.
Alan: “She is one of the three people I call from the board for her advice”?
Ted: Correct.

Such satisfaction was short-lived. The verdict came down on March 27, 2015: I had lost on all four counts.

Before suing, I’d consulted other women who had sued big, powerful companies over harassment and discrimination, and they all gave me pretty much the same advice: “Don’t do it.” One woman told me, “It’s a complete mismatch of resources. They don’t fight fair. Even if you win, it will destroy your reputation.” Renée Fassbender Amochaev, an investment adviser, told me she’d been miserable from the moment she filed her lawsuit. She became an outcast and a target. Her co-workers started a petition to have her leave. Every morning, she would get to the parking lot and throw up. “You have to prepare for it to be harder than you can even imagine,” she said. “Do you regret it?” I asked. There was a pause. “No,” she said.

Losing my suit hurt, but I didn’t have regrets either.
I could have received millions from Kleiner if I would just have signed a non-disparagement contract; I turned it down so I could finally share my story, which I have been doing by speaking at events across the country and through Project Include — a nonprofit I co-founded to give everyone a fair shot to succeed in tech. I started it with an impressive group of women from the tech industry, many of whom shared similarly painful experiences.

We channeled our frustration with the tepid “diversity solutions” prevalent in the industry, ones focused on PR-oriented initiatives that spend more time outlining the problems than implementing solutions. To become truly inclusive, companies needed solutions that included all people, covered everything a company does, and used detailed metrics to hold leaders accountable. So we decided to give CEOs and start-ups just that. We launched on May 3, 2016, with 87 recommendations. Since then, more than 1,500 people have signed up in support of our efforts, including 100 tech CEOs.

Soon after, we partnered with ten start-up CEOs who were far along in their understanding of diversity and inclusion to help them address these issues in their own companies. We’ve had to reassure some of them that we aren’t out to shame them; we just offer a starting point and a supportive community. We’ve also partnered with 16 tech-focused investment firms; through them, we’ll be collecting industrywide diversity data to help set benchmarks across the tech sector.

It was a huge relief to be past the explain-define-and-prove-the-problem-exists conversations my co-founders and I had each gotten dragged into too many times. Over the past year, despite the ongoing public exposure of the ways both the president and tech companies like Uber discourage diversity and inclusion, we’ve seen results that give us hope. More personally, I’ve come out of the experience with great friends and supporters.
We’ve changed jobs, started companies, taken time off, moved across the country, and switched careers. I’ve watched as each of us — myself included — has become more vocal, more open, and more courageous in advocating for change in tech.

In the wake of my suit, I often heard people say that my case was a matter of “right issues, wrong plaintiff,” or that the reason I lost was that I wasn’t a “perfect victim.” I’ll grant that only someone a little bit masochistic would sign up for the onslaught of personal attacks that comes with a high-profile case, but I reject the argument that I wasn’t the right person to bring suit. I was one of the only people who had the resources and the position to do so. I believed I had an obligation to speak out about what I’d seen.

Since the trial, I have had time to think of all the things I wished I’d done differently. I might have had better luck with public opinion, for instance, if I’d spent more time with the press and prepared a few pages of talking points every day, like Kleiner had. But Kleiner also had tremendous resources that I couldn’t match, and it made a difference. For example, I didn’t have time to go through all my emails to figure out which ones to give Kleiner, so during the discovery process we gave them practically everything, some 700,000 emails — most of which we could have legally withheld. Kleiner meanwhile handed over just 5,000 emails, claiming they didn’t have the resources to search for anything other than emails that we specifically requested. They did have the resources to pick over my emails, though — I heard they hired a team in India to read and sort through every single one. Their work would show: During depositions, they brought up everything from my nanny’s contract to an exercise I’d done in therapy where I listed resentments. Emails to friends, emails to my husband, emails to other family members, even emails to my lawyers.
In retrospect, the most painful part of the trial was being cross-examined by Kleiner’s lawyer. At one point, she claimed I’d never invested in a woman’s company. “You’ve never done anything for women, have you?” she said snidely. I’d been instructed by my lawyers not to respond to comments like that, because it might open me up to more criticism — jurors could find me difficult or aggressive, the very things Kleiner was trying so hard to portray me as in court. I ended up coming across as distant, even a bit robotic, as I tried to keep my answers noncombative. But it hurt to leave that one unchallenged. It was patently false. At Kleiner, I helped drive investments in six women founders. A few months after I was fired by Kleiner, I invested in ten companies with my own money; five had women CEOs. But I didn’t say any of that. I just sat there.

Before my suit was over, though, other women had begun to sue tech companies with public filings. One of my lawyers represented a Taiwanese woman who sued Facebook for discrimination; her suit alleged that she was given menial tasks like serving drinks to the men on the team. Another lawyer at the firm represented Whitney Wolfe, one of the co-founders at Tinder, who sued for sexual harassment. Both of those suits settled, but others, against Microsoft and Twitter, are ongoing. Some reporters even came up with a name for the phenomenon of women or minorities in tech suing or speaking up. They called it the “Pao effect.”

Excerpted from the book Reset, by Ellen Pao. Copyright © 2017 by Ellen K. Pao. Published by Spiegel & Grau, an imprint of Random House, a division of Penguin Random House LLC.

*This article appears in the August 21, 2017, issue of New York Magazine.

I invented the web. Here are three things we need to change to save it | Tim Berners-Lee | Technology | The Guardian


 — excerpt below —

Today marks 28 years since I submitted my original proposal for the world wide web. I imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities, and collaborate across geographic and cultural boundaries. In many ways, the web has lived up to this vision, though it has been a recurring battle to keep it open. But over the past 12 months, I’ve become increasingly worried about three new trends, which I believe we must tackle in order for the web to fulfill its true potential as a tool that serves all of humanity.

1) We’ve lost control of our personal data

The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share – especially with third parties – the T&Cs are all or nothing.

This widespread data collection by companies also has other impacts. Through collaboration with – or coercion of – companies, governments are also increasingly watching our every move online and passing extreme laws that trample on our rights to privacy. In repressive regimes, it’s easy to see the harm that can be caused – bloggers can be arrested or killed, and political opponents can be monitored. But even in countries where we believe governments have citizens’ best interests at heart, watching everyone all the time is simply going too far. It creates a chilling effect on free speech and stops the web from being used as a space to explore important topics, such as sensitive health issues, sexuality or religion.

2) It’s too easy for misinformation to spread on the web

Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And they choose what to show us based on algorithms that learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain.

3) Political advertising online needs transparency and understanding

Political advertising online has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data mean that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 US election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?

[Video: Sir Tim Berners-Lee: how the web went from idea to reality]

These are complex problems, and the solutions will not be simple. But a few broad paths to progress are already clear. We must work together with web companies to strike a balance that puts a fair level of data control back in the hands of people, including the development of new technology such as personal “data pods” if needed and exploring alternative revenue models such as subscriptions and micropayments. We must fight against government overreach in surveillance laws, including through the courts if necessary. We must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while avoiding the creation of any central bodies to decide what is “true” or not. We need more algorithmic transparency to understand how important decisions that affect our lives are being made, and perhaps a set of common principles to be followed. We urgently need to close the “internet blind spot” in the regulation of political campaigning.

Our team at the Web Foundation will be working on many of these issues as part of our new five-year strategy – researching the problems in more detail, coming up with proactive policy solutions and bringing together coalitions to drive progress towards a web that gives equal power and opportunity to all.

I may have invented the web, but all of you have helped to create what it is today. All the blogs, posts, tweets, photos, videos, applications, web pages and more represent the contributions of millions of you around the world building our online community. All kinds of people have helped, from politicians fighting to keep the web open, standards organisations like W3C enhancing the power, accessibility and security of the technology, and people who have protested in the streets. In the past year, we have seen Nigerians stand up to a social media bill that would have hampered free expression online, popular outcry and protests at regional internet shutdowns in Cameroon and great public support for net neutrality in both India and the European Union.

It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone.

The Web Foundation is at the forefront of the fight to advance and protect the web for everyone. We believe doing so is essential to reverse growing inequality and empower citizens. You can follow our work by signing up to our newsletter, and find a local digital rights organisation to support on this list. Additions to the list are welcome and may be sent to contact@webfoundation.org


Google’s Driverless Car – The New Yorker

The Google car knows every turn. It never gets drowsy or distracted, or wonders who has the right-of-way. But not everyone finds the technology appealing.

Source: Google’s Driverless Car – The New Yorker

— excerpt below —

Human beings make terrible drivers. They talk on the phone and run red lights, signal to the left and turn to the right. They drink too much beer and plow into trees or veer into traffic as they swat at their kids. They have blind spots, leg cramps, seizures, and heart attacks. They rubberneck, hotdog, and take pity on turtles, cause fender benders, pileups, and head-on collisions. They nod off at the wheel, wrestle with maps, fiddle with knobs, have marital spats, take the curve too late, take the curve too hard, spill coffee in their laps, and flip over their cars. Of the ten million accidents that Americans are in every year, nine and a half million are their own damn fault.

A case in point: The driver in the lane to my right. He’s twisted halfway around in his seat, taking a picture of the Lexus that I’m riding in with an engineer named Anthony Levandowski. Both cars are heading south on Highway 880 in Oakland, going more than seventy miles an hour, yet the man takes his time. He holds his phone up to the window with both hands until the car is framed just so. Then he snaps the picture, checks it onscreen, and taps out a lengthy text message with his thumbs. By the time he puts his hands back on the wheel and glances up at the road, half a minute has passed.

Levandowski shakes his head. He’s used to this sort of thing. His Lexus is what you might call a custom model. It’s surmounted by a spinning laser turret and knobbed with cameras, radar, antennas, and G.P.S. It looks a little like an ice-cream truck, lightly weaponized for inner-city work. Levandowski used to tell people that the car was designed to chase tornadoes or to track mosquitoes, or that he belonged to an élite team of ghost hunters. But nowadays the vehicle is clearly marked: “Self-Driving Car.”

Every week for the past year and a half, Levandowski has taken the Lexus on the same slightly surreal commute. He leaves his house in Berkeley at around eight o’clock, waves goodbye to his fiancée and their son, and drives to his office in Mountain View, forty-three miles away. The ride takes him over surface streets and freeways, old salt flats and pine-green foothills, across the gusty blue of San Francisco Bay, and down into the heart of Silicon Valley. In rush-hour traffic, it can take two hours, but Levandowski doesn’t mind. He thinks of it as research. While other drivers are gawking at him, he is observing them: recording their maneuvers in his car’s sensor logs, analyzing traffic flow, and flagging any problems for future review. The only tiresome part is when there’s roadwork or an accident ahead and the Lexus insists that he take the wheel. A chime sounds, pleasant yet insistent, then a warning appears on his dashboard screen: “In one mile, prepare to resume manual control.”

Levandowski is an engineer at Google X, the company’s semi-secret lab for experimental technology. He turned thirty-three last March but still has the spindly build and nerdy good nature of the kids in my high-school science club. He wears black frame glasses and oversized neon sneakers, has a long, loping stride—he’s six feet seven—and is given to excitable talk on fantastical themes. Cybernetic dolphins! Self-harvesting farms! Like a lot of his colleagues in Mountain View, Levandowski is equal parts idealist and voracious capitalist. He wants to fix the world and make a fortune doing it. He comes by these impulses honestly: his mother is a French diplomat, his father an American businessman. Although Levandowski spent most of his childhood in Brussels, his English has no accent aside from a certain absence of inflection—the bright, electric chatter of a processor in overdrive. “My fiancée is a dancer in her soul,” he told me. “I’m a robot.”

What separates Levandowski from the nerds I knew is this: his wacky ideas tend to come true. “I only do cool shit,” he says. As a freshman at Berkeley, he launched an intranet service out of his basement that earned him fifty thousand dollars a year. As a sophomore, he won a national robotics competition with a machine made out of Legos that could sort Monopoly money—a fair analogy for what he’s been doing for Google lately. He was one of the principal architects of Street View and the Google Maps database, but those were just warmups. “The Wright Brothers era is over,” Levandowski assured me, as the Lexus took us across the Dumbarton Bridge. “This is more like Charles Lindbergh’s plane. And we’re trying to make it as robust and reliable as a 747.”

Not everyone finds this prospect appealing. As a commercial for the Dodge Charger put it two years ago, “Hands-free driving, cars that park themselves, an unmanned car driven by a search-engine company? We’ve seen that movie. It ends with robots harvesting our bodies for energy.” Levandowski understands the sentiment. He just has more faith in robots than most of us do. “People think that we’re going to pry the steering wheel from their cold, dead hands,” he told me, but they have it exactly wrong. Someday soon, he believes, a self-driving car will save your life.

The Google car is an old-fashioned sort of science fiction: this year’s model of last century’s make. It belongs to the gleaming, chrome-plated age of jet packs and rocket ships, transporter beams and cities beneath the sea, of a predicted future still well beyond our technology. In 1939, at the World’s Fair in New York, visitors stood in lines up to two miles long to see the General Motors Futurama exhibit. Inside, a conveyor belt carried them high above a miniature landscape, spread out beneath a glass dome. Its suburbs and skyscrapers were laced together by superhighways full of radio-guided cars. “Does it seem strange? Unbelievable?” the announcer asked. “Remember, this is the world of 1960.”

Not quite. Skyscrapers and superhighways made the deadline, but driverless cars still putter along in prototype. Human beings, as it turns out, aren’t easy to improve upon. For every accident they cause, they avoid a thousand others. They can weave through tight traffic and anticipate danger, gauge distance, direction, pace, and momentum. Americans drive nearly three trillion miles a year, I was told by Ron Medford, a former deputy administrator of the National Highway Traffic Safety Administration who now works for Google. It’s no wonder that we have thirty-two thousand fatalities along the way, he said. It’s a wonder the number is so low.

Levandowski keeps a collection of vintage illustrations and newsreels on his laptop, just to remind him of all the failed schemes and fizzled technologies of the past. When he showed them to me one night at his house, his face wore a crooked grin, like a father watching his son strike out in Little League. From 1957: A sedan cruises down a highway, guided by circuits in the road, while a family plays dominoes inside. “No traffic jam . . . no collisions . . . no driver fatigue.” From 1977: Engineers huddle around a driverless Ford on a test track. “Cars like this one may be on the nation’s roads by the year 2000!” Levandowski shook his head. “We didn’t come up with this idea,” he said. “We just got lucky that the computers and sensors were ready for us.”

Almost from the beginning, the field divided into two rival camps: smart roads and smart cars. General Motors pioneered the first approach in the late nineteen-fifties. Its Firebird III concept car—shaped like a jet fighter, with titanium tail fins and a glass-bubble cockpit—was designed to run on a test track embedded with an electrical cable, like the slot on a toy speedway. As the car passed over the cable, a receiver in its front end picked up a radio signal and followed it around the curve. Engineers at Berkeley later went a step further: they spiked the track with magnets, alternating their polarity in binary patterns to send messages to the car—“Slow down, sharp curve ahead.” Systems like these were fairly simple and reliable, but they had a chicken-and-egg problem. To be useful, they had to be built on a large scale; to be built on a large scale, they had to be useful. “We don’t have the money to fix potholes,” Levandowski says. “Why would we invest in putting wires in the road?”
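The magnet scheme the Berkeley engineers used can be sketched as a simple lookup. Everything below is invented for illustration: the 4-bit word length, the specific codes, and the messages are assumptions, since the excerpt only says polarities alternated in binary patterns to send messages like "Slow down, sharp curve ahead."

```python
# Hypothetical sketch of the Berkeley "smart road" idea: magnets embedded in
# the track alternate polarity (here 1 = north up, 0 = south up), and the car
# groups them into fixed-width words and looks up each word's meaning.
# The 4-bit codes and messages are invented; the real encoding isn't described.
CODES = {
    (1, 0, 1, 1): "slow down, sharp curve ahead",
    (1, 1, 0, 0): "resume speed",
}

def read_track(polarities):
    """Group magnet polarities into 4-bit words and decode each message."""
    words = [tuple(polarities[i:i + 4]) for i in range(0, len(polarities), 4)]
    return [CODES.get(word, "unknown") for word in words]

print(read_track([1, 0, 1, 1, 1, 1, 0, 0]))
```

The appeal of the approach is visible even in the toy version: the car needs no perception at all, only a sensor and a codebook, which is exactly why it founders on the chicken-and-egg economics the article describes.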

Smart cars were more flexible but also more complex. They needed sensors to guide them, computers to steer them, digital maps to follow. In the nineteen-eighties, a German engineer named Ernst Dickmanns, at the Bundeswehr University in Munich, equipped a Mercedes van with video cameras and processors, then programmed it to follow lane lines. Soon it was steering itself around a track. By 1995, Dickmanns’s car was able to drive on the Autobahn from Munich to Odense, Denmark, going up to a hundred miles at a stretch without assistance. Surely the driverless age was at hand! Not yet. Smart cars were just clever enough to get drivers into trouble. The highways and test tracks they navigated were strictly controlled environments. The instant more variables were added—a pedestrian, say, or a traffic cop—their programming faltered. Ninety-eight per cent of driving is just following the dotted line. It’s the other two per cent that matters.

“There was no way, before 2000, to make something interesting,” the roboticist Sebastian Thrun told me. “The sensors weren’t there, the computers weren’t there, and the mapping wasn’t there. Radar was a device on a hilltop that cost two hundred million dollars. It wasn’t something you could buy at Radio Shack.” Thrun, who is forty-six, is the founder of the Google Car project. A wunderkind from the west German city of Solingen, he programmed his first driving simulator at the age of twelve. Slender and tan, with clear blue eyes and a smooth, seemingly boneless gait, he looks as if he just stepped off a dance floor in Ibiza. And yet, like Levandowski, he has a gift for seeing things through a machine’s eyes—for intuiting the logic by which it might apprehend the world.

When Thrun first arrived in the United States, in 1995, he took a job at the country’s leading center for driverless-car research: Carnegie Mellon University. He went on to build robots that explored mines in Virginia, guided visitors through the Smithsonian, and chatted with patients at a nursing home. What he didn’t build was driverless cars. Funding for private research in the field had dried up by then. And though Congress had set a goal that a third of all ground combat vehicles be autonomous by 2015, little had come of the effort. Every so often, Thrun recalls, military contractors, funded by the Defense Advanced Research Projects Agency, would roll out their latest prototype. “The demonstrations I saw mostly ended in crashes and breakdowns in the first half mile,” he told me. “DARPA was funding people who weren’t solving the problem. But they couldn’t tell if it was the technology or the people. So they did this crazy thing, which was really visionary.”

They held a race.

The first DARPA Grand Challenge took place in the Mojave Desert on March 13, 2004. It offered a million-dollar prize for what seemed like a simple task: build a car that can drive a hundred and forty-two miles without human intervention. Ernst Dickmanns’s car had gone similar distances on the Autobahn, but always with a driver in the seat to take over in the tricky stretches. The cars in the Grand Challenge would be empty, and the road would be rough: from Barstow, California, to Primm, Nevada. Instead of smooth curves and long straightaways, it had rocky climbs and hairpin turns; instead of road signs and lane lines, G.P.S. waypoints. “Today, we could do it in a few hours,” Thrun told me. “But at the time it felt like going to the moon in sneakers.”

Levandowski first heard about it from his mother. She’d seen a notice for the race when it was announced online, in 2002, and recalled that her son used to play with remote-control cars as a boy, crashing them into things on his bedroom floor. Was this so different? Levandowski was now a student at Berkeley, in the industrial-engineering department. When he wasn’t studying or rowing crew or winning Lego competitions, he was casting about for cool new shit to build—for a profit, if possible. “If he’s making money, it’s his confirmation that he’s creating value,” his friend Randy Miller told me. “I remember, when we were in college, we were at his house one day, and he told me that he’d rented out his bedroom. He’d put up a wall in his living room and was sleeping on a couch in one half, next to a big server tower that he’d built. I said, ‘Anthony, what the hell are you doing? You’ve got plenty of money. Why don’t you get your own place?’ And he said, ‘No. Until I can move to a stateroom on a 747, I want to live like this.’ “


DARPA’s rules were vague on the subject of vehicles: anything that could drive itself would do. So Levandowski made a bold decision. He would build the world’s first autonomous motorcycle. This seemed like a stroke of genius at the time. (Miller says that it came to them in a hot tub in Tahoe, which sounds about right.) Good engineering is all about gaming the system, Levandowski says—about sidestepping obstacles rather than trying to run over them. His favorite example is from a robotics contest at M.I.T. in 1991. Tasked with building a machine that could shoot the most Ping-Pong balls into a tube, the students came up with dozens of ingenious contraptions. The winner, though, was infuriatingly simple: it had a mechanical arm reach over, drop a ball into the tube, then cover it up so that no others could get in. It won the contest in a single move. The motorcycle could be like that, Levandowski thought: quicker off the mark than a car and more maneuverable. It could slip through tighter barriers and drive just as fast. Also, it was a good way to get back at his mother, who’d never let him ride motorcycles as a kid. “Fine,” he thought. “I’ll just make one that rides itself.”

The flaw in this plan was obvious: a motorcycle can’t stand up on its own. It needs a rider to balance it—or else a complex, computer-controlled system of shafts and motors to adjust its position every hundredth of a second. “Before you can drive ten feet you have to do a year of engineering,” Levandowski says. The other racers had no such problem. They also had substantial academic and corporate backing: the Carnegie Mellon team was working with General Motors, Caltech with Northrop Grumman, Ohio State with Oshkosh trucking. When Levandowski went to the Berkeley faculty with his idea, the reaction was, at best, bemused disbelief. His adviser, Ken Goldberg, told him frankly that he had no chance of winning. “Anthony is probably the most creative undergraduate I’ve encountered in twenty years,” he told me. “But this was a very great stretch.”

Levandowski was unfazed. Over the next two years, he made more than two hundred cold calls to potential sponsors. He gradually scraped together thirty thousand dollars from Raytheon, Advanced Micro Devices, and others. (No motorcycle company was willing to put its name on the project.) Then he added a hundred thousand dollars of his own. In the meantime, he went about poaching the faculty’s graduate students. “He paid us in burritos,” Charles Smart, now a professor of mathematics at M.I.T., told me. “Always the same burritos. But I remember thinking, I hope he likes me and lets me work on this.” Levandowski had that effect on people. His mad enthusiasm for the project was matched only by his technical grasp of its challenges—and his willingness to go to any lengths to meet them. At one point, he offered Smart’s girlfriend and future wife five thousand dollars to break up with him until the project was done. “He was fairly serious,” Smart told me. “She hated the motorcycle project.”

There came a day when Goldberg realized that half his Ph.D. students had been working for Levandowski. They’d begun with a Yamaha dirt bike, made for a child, and stripped it down to its skeleton. They added cameras, gyros, G.P.S. modules, computers, roll bars, and an electric motor to turn the wheel. They wrote tens of thousands of lines of code. The videos of their early test runs, edited together, play like a jittery reel from “The Benny Hill Show”: bike takes off, engineers jump up and down, bike falls over—more than six hundred times in a row. “We built the bike and rebuilt the bike, just sort of groping in the dark,” Smart told me. “It’s like one of my colleagues once said: ‘You don’t understand, Charlie, this is robotics. Nothing actually works.’ “

Finally, a year into the project, a Russian engineer named Alex Krasnov cracked the code. They’d thought that stability was a complex, nonlinear problem, but it turned out to be fairly simple. When the bike tipped to one side, Krasnov had it steer ever so slightly in the same direction. This created centrifugal acceleration that pulled the bike upright again. By doing this over and over, tracing tiny S-curves as it went, the motorcycle could hold to a straight line. On the video clip from that day, the bike wobbles a little at first, like a baby giraffe finding its legs, then suddenly, confidently circles the field—as if guided by an invisible hand. They called it the Ghost Rider.
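Krasnov's correction can be sketched as a small feedback loop. The model below is hypothetical: the Ghost Rider's actual controller and constants are not given in the excerpt, so the inverted-pendulum dynamics, gains, and time step here are invented for illustration. The key line is the one that steers slightly into the lean, so the resulting centrifugal push rights the bike.

```python
def balance_step(lean, lean_rate, dt=0.01, gain=3.0, damp=0.5, g=9.81, height=0.6):
    """One step of a crude inverted-pendulum bike model (semi-implicit Euler).

    All constants are made up for illustration; `lean` is the tilt in radians.
    """
    # Steer slightly INTO the lean (Krasnov's trick), plus a little damping.
    steer = gain * lean + damp * lean_rate
    # Gravity tips the bike further over; the steering-induced centrifugal
    # term counters it and pulls the bike back upright.
    lean_accel = (g / height) * (lean - steer)
    lean_rate += lean_accel * dt
    lean += lean_rate * dt
    return lean, lean_rate

lean, lean_rate = 0.1, 0.0   # start tipped about six degrees
for _ in range(2000):        # simulate twenty seconds of corrections
    lean, lean_rate = balance_step(lean, lean_rate)
print(f"final lean: {lean:.6f} rad")
```

The tiny S-curves the article describes are just this loop running over and over: each correction overshoots slightly, the next one corrects back, and the path traced on the ground wiggles while the bike itself stays upright.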

The Grand Challenge proved to be one of the more humbling events in automotive history. Its sole consolation lay in shared misery. None of the fifteen finalists made it past the first ten miles; seven broke down within a mile. Ohio State’s six-wheel, thirty-thousand-pound TerraMax was brought up short by some bushes; Caltech’s Chevy Tahoe crashed into a fence. Even the winner, Carnegie Mellon, earned at best a Pyrrhic victory. Its robotic Humvee, Sandstorm, drove just seven and a half miles before careering off course. A helicopter later found it beached on an embankment, wreathed in smoke, its back wheels spinning so furiously that they’d burst into flame.

As for the Ghost Rider, it managed to beat out more than ninety cars in the qualifying round—a mile-and-a-half obstacle course on the California Speedway in Fontana. But that was its high-water mark. On the day of the Grand Challenge, standing at the starting line in Barstow, half delirious with adrenaline and fatigue, Levandowski forgot to turn on the stability program. When the gun went off, the bike sputtered forward, rolled three feet, and fell over.

“That was a dark day,” Levandowski says. It took him a while to get over it—at least by his hyperactive standards. “I think I took, like, four days off,” he told me. “And then I was like, Hey, I’m not done yet! I need to go fix this!” DARPA apparently had the same thought. Three months later, the agency announced a second Grand Challenge for the following October, doubling the prize money to two million dollars. To win, the teams would have to address a daunting list of failures and shortcomings, from fried hard drives to faulty satellite equipment. But the underlying issue was always the same: as Joshua Davis later wrote in Wired, the robots just weren’t smart enough. In the wrong light, they couldn’t tell a bush from a boulder, a shadow from a solid object. They reduced the world to a giant marble maze, then got caught in the thickets between holes. They needed to raise their I.Q.

In the early nineties, Dean Pomerleau, a roboticist at Carnegie Mellon, had hit upon an unusually efficient way to do this: he let his car teach itself. Pomerleau equipped the computer in his minivan with artificial neural networks, modelled on those in the brain. As he drove around Pittsburgh, they kept track of his driving decisions, gathering statistics and formulating their own rules of the road. “When we started, the car was going about two to four miles an hour along a path through a park—you could ride a tricycle faster,” Pomerleau told me. “By the end, it was going fifty-five miles per hour on highways.” In 1996, the car steered itself from Washington, D.C., to San Diego with only minimal intervention—nearly four times as far as Ernst Dickmanns’s cars had gone a year earlier. “No Hands Across America,” Pomerleau called it.

Machine learning is an idea nearly as old as computer science—Alan Turing, one of the fathers of the field, considered it the essence of artificial intelligence. It’s often the fastest way for a computer to learn a complex behavior, but it has its drawbacks. A self-taught car can come to some strange conclusions. It may confuse the shadow of a tree for the edge of the road, or reflected headlights for lane markers. It may decide that a bag floating across a road is a solid object and swerve to avoid it. It’s like a baby in a stroller, deducing the world from the faces and storefronts that flicker by. It’s hard to know what it knows. “Neural networks are like black boxes,” Pomerleau says. “That makes people nervous, particularly when they’re controlling a two-ton vehicle.”

Computers, like children, are more often taught by rote. They’re given thousands of rules and bits of data to memorize—If X happens, do Y; avoid big rocks—then sent out to test them by trial and error. This is slow, painstaking work, but it’s easier to predict and refine than machine learning. The trick, as in any educational system, is to combine the two in proper measure. Too much rote learning can make for a plodding machine. Too much experiential learning can make for blind spots and caprice. The roughest roads in the Grand Challenge were often the easiest to navigate, because they had clear paths and well-defined shoulders. It was on the open, sandy trails that the cars tended to go crazy. “Put too much intelligence into a car and it becomes creative,” Sebastian Thrun told me.

The second Grand Challenge put these two approaches to the test. Nearly two hundred teams signed up for the race, but the top contenders were clear from the start: Carnegie Mellon and Stanford. The C.M.U. team was led by the legendary roboticist William (Red) Whittaker. (Pomerleau had left the university by then to start his own firm.) A burly, mortar-headed ex-marine, Whittaker specialized in machines for remote and dangerous locations. His robots had crawled over Antarctic ice fields and active volcanoes, and inspected the damaged nuclear reactors at Three Mile Island and Chernobyl. Seconded by a brilliant young engineer named Chris Urmson, Whittaker approached the race as a military operation, best won by overwhelming force. His team spent twenty-eight days laser-scanning the Mojave to create a computer model of its topography; then they combined those scans with satellite data to help identify obstacles. “People don’t count those who died trying,” he later told me.

The Stanford team was led by Thrun. He hadn’t taken part in the first race, when he was still just a junior faculty member at C.M.U. But by the following summer he had accepted an endowed professorship in Palo Alto. When DARPA announced the second race, he heard about it from one of his Ph.D. students, Mike Montemerlo. “His assessment of whether we should do it was no, but his body and his eyes and everything about him said yes,” Thrun recalls. “So he dragged me into it.” The contest would be a study in opposites: Thrun the suave cosmopolitan; Whittaker the blustering field marshal. Carnegie Mellon with its two military vehicles, Sandstorm and Highlander; Stanford with its puny Volkswagen Touareg, nicknamed Stanley.

It was an even match. Both teams used similar sensors and software, but Thrun and Montemerlo concentrated more heavily on machine learning. “It was our secret weapon,” Thrun told me. Rather than program the car with models of the rocks and bushes it should avoid, Thrun and Montemerlo simply drove it down the middle of a desert road. The lasers on the roof scanned the area around the car, while the camera looked farther ahead. By analyzing this data, the computer learned to identify the flat parts as road and the bumpy parts as shoulders. It also compared its camera images with its laser scans, so that it could tell what flat terrain looked like from a distance—and therefore drive a lot faster. “Every day it was the same,” Thrun recalls. “We would go out, drive for twenty minutes, realize there was some software bug, then sit there for four hours reprogramming and try again. We did that for four months.” When they started, one out of every eight pixels that the computer labelled as an obstacle was nothing of the sort. By the time they were done, the error rate had dropped to one in fifty thousand.
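The training loop Thrun describes, drive, label, reprogram, drive again, can be caricatured as fitting a single roughness threshold to patches of ground recorded while a human drove the road. This is a deliberately simplified, hypothetical sketch; Stanley's actual pipeline was far more elaborate.

```python
# Hypothetical, simplified sketch of learning "flat means road,
# bumpy means shoulder" from drives down the middle of a known road.

def roughness(heights):
    """Spread of laser height readings within one patch of ground."""
    mean = sum(heights) / len(heights)
    return max(abs(h - mean) for h in heights)

def classify(heights, threshold=0.05):
    # Flat patches read as road; bumpy ones as shoulder.
    return "road" if roughness(heights) < threshold else "shoulder"

def fit_threshold(labeled_patches, candidates):
    """Mimic the training loop: keep the threshold that mislabels
    the fewest patches from the recorded drives."""
    def errors(t):
        return sum(classify(h, t) != label for h, label in labeled_patches)
    return min(candidates, key=errors)
```

Driving the error rate from one in eight to one in fifty thousand amounted to months of exactly this kind of tuning, at vastly larger scale.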

On the day of the race, two hours before start time, DARPA sent out the G.P.S. coördinates for the course. It was even harder than the first time: more turns, narrower lanes, three tunnels, and a mountain pass. Carnegie Mellon, with two cars to Stanford’s one, decided to play it safe. They had Highlander run at a fast clip—more than twenty miles an hour on average—while Sandstorm hung back a little. The difference was enough to cost them the race. When Highlander began to lose power because of a pinched fuel line, Stanley moved ahead. By the time it crossed the finish line, six hours and fifty-three minutes after it started, it was more than ten minutes ahead of Sandstorm and more than twenty minutes ahead of Highlander.

It was a triumph of the underdog, of brain over brawn. But less for Stanford than for the field as a whole. Five cars finished the hundred-and-thirty-two-mile course; more than twenty cars went farther than the winner had in 2004. In one year, they’d made more progress than DARPA’s contractors had in twenty. “You had these crazy people who didn’t know how hard it was,” Thrun told me. “They said, ‘Look, I have a car, I have a computer, and I need a million bucks.’ So they were doing things in their home shops, putting something together that had never been done in robotics before, and some were insanely impressive.” A team of students from Palos Verdes High School in California, led by a seventeen-year-old named Chris Seide, built a self-driving “Doom Buggy” that, Thrun recalls, could change lanes and stop at stop signs. A Ford S.U.V. programmed by some insurance-company employees from Louisiana finished just thirty-seven minutes behind Stanley. Their lead programmer had lifted his preliminary algorithms from textbooks on video-game design.

“When you look back at that first Grand Challenge, we were in the Stone Age compared to where we are now,” Levandowski told me. His motorcycle embodied that evolution. Although it never made it out of the semifinals of the second race—tripped up by some wooden boards—the Ghost Rider had become, in its way, a marvel of engineering, beating out seventy-eight four-wheeled competitors. Two years later, the Smithsonian added the motorcycle to its collection; a year after that, it added Stanley as well. By then, Thrun and Levandowski were both working for Google.


The driverless-car project occupies a lofty, garagelike space in suburban Mountain View. It’s part of a sprawling campus built by Silicon Graphics in the early nineties and repurposed by Google, the conquering army, a decade later. Like a lot of high-tech offices, it’s a mixture of the whimsical and the workaholic—candy-colored sheet metal over a sprung-steel chassis. There’s a Foosball table in the lobby, exercise balls in the sitting room, and a row of what look like clown bicycles parked out front, free for the taking. When you walk in, the first things you notice are the wacky tchotchkes on the desks: Smurfs, “Star Wars” toys, Rube Goldberg devices. The next things you notice are the desks: row after row after row, each with someone staring hard at a screen.

It had taken me two years to gain access to this place, and then only with a staff member shadowing my every step. Google guards its secrets more jealously than most. At the gourmet cafeterias that dot the campus, signs warn against “tailgaters”—corporate spies who might slink in behind an employee before the door swings shut. Once inside, though, the atmosphere shifts from vigilance to an almost missionary zeal. “We want to fundamentally change the world with this,” Sergey Brin, the co-founder of Google, told me.

Brin was dressed in a charcoal hoodie, baggy pants, and sneakers. His scruffy beard and flat, piercing gaze gave him a Rasputinish quality, dulled somewhat by his Google Glass eyewear. At one point, he asked if I’d like to try the glasses on. When I’d positioned the miniature projector in front of my right eye, a single line of text floated poignantly into view: “3:51 p.m. It’s okay.”

“As you look outside, and walk through parking lots and past multilane roads, the transportation infrastructure dominates,” Brin said. “It’s a huge tax on the land.” Most cars are used only for an hour or two a day, he said. The rest of the time, they’re parked on the street or in driveways and garages. But if cars could drive themselves, there would be no need for most people to own them. A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently, waiting at parking lots between calls. They’d be cheaper and more efficient than taxis—by some calculations, they’d use half the fuel and a fifth the road space of ordinary cars—and far more flexible than buses or subways. Streets would clear, highways shrink, parking lots turn to parkland. “We’re not trying to fit into an existing business model,” Brin said. “We are just on such a different planet.”

When Thrun and Levandowski first came to Google, in 2007, they were given a simpler task: to create a virtual map of the country. The idea came from Larry Page, the company’s other co-founder. Five years earlier, Page had strapped a video camera on his car and taken several hours of footage around the Bay Area. He’d then sent it to Marc Levoy, a computer-graphics expert at Stanford, who created a program that could paste such footage together to show an entire streetscape. Google engineers went on to jury-rig some vans with G.P.S. and rooftop cameras that could shoot in every direction. Eventually, they were able to launch a system that could show three-hundred-and-sixty-degree panoramas for any address. But the equipment was unreliable. When Thrun and Levandowski came on board, they helped the team retool and reprogram. Then they equipped a hundred cars and sent them all over the United States.

Google Street View has since spread to more than a hundred countries. It’s both a practical tool and a kind of magic trick—a spyglass onto distant worlds. To Levandowski, though, it was just a start. The same data, he argued, could be used to make digital maps more accurate than those based on G.P.S. data, which Google had been leasing from companies like NAVTEQ. The street and exit names could be drawn straight from photographs, for instance, rather than faulty government records. This sounded simple enough but proved to be fiendishly complicated. Street View mostly covered urban areas, but Google Maps had to be comprehensive: every logging road logged on a computer, every gravel drive driven down. Over the next two years, Levandowski shuttled back and forth to Hyderabad, India, to train more than two thousand data processors to create new maps and fix old ones. When Apple’s new mapping software failed so spectacularly a year ago, he knew exactly why. By then, his team had spent five years entering several million corrections a day.

Street View and Maps were logical extensions of a Google search. They showed you where to locate the things you’d found. What was missing was a way to get there. Thrun, despite his victory in the second Grand Challenge, didn’t think that driverless cars could work on surface streets—there were just too many variables. “I would have told you then that there is no way on earth we can drive safely,” he says. “All of us were in denial that this could be done.” Then, in February of 2008, Levandowski got a call from a producer of “Prototype This!,” a series on the Discovery Channel. Would he be interested in building a self-driving pizza delivery car? Within five weeks, he and a team of fellow Berkeley graduates and other engineers had retrofitted a Prius for the purpose. They patched together a guidance system and persuaded the California Highway Patrol to let the car cross the Bay Bridge—from San Francisco to Treasure Island. It would be the first time an unmanned car had driven legally on American streets.

On the day of the filming, the city looked as if it were under martial law. The lower level of the bridge was closed to regular traffic, and eight police cruisers and eight motorcycle cops were assigned to accompany the Prius over it. “Obama was there the week before and he had a smaller escort,” Levandowski recalls. The car made its way through downtown and crossed the bridge in fine form, only to wedge itself against a concrete wall on the far side. Still, it gave Google the nudge that it needed. Within a few months, Page and Brin had called Thrun to green-light a driverless-car project. “They didn’t even talk about budget,” Thrun says. “They just asked how many people I needed and how to find them. I said, ‘I know exactly who they are.’ ”

Every Monday at eleven-thirty, the lead engineers for the Google car project meet for a status update. They mostly cleave to a familiar Silicon Valley demographic—white, male, thirty to forty years old—but they come from all over the world. I counted members from Belgium, Holland, Canada, New Zealand, France, Germany, China, and Russia at one sitting. Thrun began by cherry-picking the top talent from the Grand Challenges: Chris Urmson was hired to develop the software, Levandowski the hardware, Mike Montemerlo the digital maps. (Urmson now directs the project, while Thrun has shifted his attention to Udacity, an online education company that he co-founded two years ago.) Then they branched out to prodigies of other sorts: lawyers, laser designers, interface gurus—anyone, at first, except automotive engineers. “We hired a new breed,” Thrun told me. People at Google X had a habit of saying that So-and-So on the team was the smartest person they’d ever met, till the virtuous circle closed and almost everyone had been singled out by someone else. As Levandowski said of Thrun, “He thinks at a hundred miles an hour. I like to think at ninety.”


When I walked in one morning, the team was slouched around a conference table in T-shirts and jeans, discussing the difference between the Gregorian and the Julian calendar. The subtext, as usual, was time. Google’s goal isn’t to create a glorified concept car—a flashy idea that will never make it to the street—but a polished commercial product. That means real deadlines and continual tests and redesigns. The main topic for much of that morning was the user interface. How aggressive should the warning sounds be? How many pedestrians should the screen show? In one version, a jaywalker appeared as a red dot outlined in white. “I really don’t like that,” Urmson said. “It looks like a real-estate sign.” The Dutch designer nodded and promised an alternative for the next round. Every week, several dozen Google volunteers test-drive the cars and fill out user surveys. “In God we trust,” the company faithful like to say. “Everyone else, bring data.”

In the beginning, Brin and Page presented Thrun’s team with a series of DARPA-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the trip was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”

They started the project with Levandowski’s pizza car and Stanford’s open-source software. But they soon found that they had to rebuild from scratch: the car’s sensors were already outdated, the software just glitchy enough to be useless. The DARPA cars hadn’t concerned themselves with passenger comfort. They just went from point A to point B as efficiently as possible. To smooth out the ride, Thrun and Urmson had to make a deep study of the physics of driving. How does the plane of a road change as it goes around a curve? How do tire drag and deformation affect steering? Braking for a light seems simple enough, but good drivers don’t apply steady pressure, as a computer might. They build it gradually, hold it for a moment, then back off again.

For complicated moves like that, Thrun’s team often started with machine learning, then reinforced it with rule-based programming—a superego to control the id. They had the car teach itself to read street signs, for instance, but they underscored that knowledge with specific instructions: “stop” means stop. If the car still had trouble, they’d download the sensor data, replay it on the computer, and fine-tune the response. Other times, they’d run simulations based on accidents documented by the National Highway Traffic Safety Administration. A mattress falls from the back of a truck. Should the car swerve to avoid it or plow ahead? How much advance warning does it need? What if a cat runs into the road? A deer? A child? These were moral questions as well as mechanical ones, and engineers had never had to answer them before. The DARPA cars didn’t even bother to distinguish between road signs and pedestrians—or “organics,” as engineers sometimes call them. They still thought like machines.
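The superego-over-id pattern has a simple shape in code: a learned component proposes, and a layer of hard rules disposes. The sketch below is illustrative only; the function names and confidence numbers are invented, not Google's.

```python
# Hedged sketch of rule-based programming reinforcing machine learning.
# A stand-in "model" reads a sign; hard-coded rules override it.

def learned_sign_reader(sign: str) -> dict:
    """Stand-in for a trained classifier, which may be uncertain."""
    return {"label": sign, "confidence": 0.7}

# The rule-based layer: "stop" means stop, no exceptions.
HARD_RULES = {"stop": "halt"}

def decide(sign: str) -> str:
    reading = learned_sign_reader(sign)
    # Specific instructions underscore whatever the model learned.
    if reading["label"] in HARD_RULES:
        return HARD_RULES[reading["label"]]
    return "proceed" if reading["confidence"] > 0.5 else "slow"
```

However confident or confused the learned layer is, the rule table wins whenever it has an entry, which is exactly the point of the hybrid.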

Four-way stops were a good example. Most drivers don’t just sit and wait their turn. They nose into the intersection, nudging ahead while the previous car is still passing through. The Google car didn’t do that. Being a law-abiding robot, it waited until the crossing was completely clear—and promptly lost its place in line. “The nudging is a kind of communication,” Thrun told me. “It tells people that it’s your turn. The same thing with lane changes: if you start to pull into a gap and the driver in that lane moves forward, he’s giving you a clear no. If he pulls back, it’s a yes. The car has to learn that language.”

It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips. The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.

These days, Levandowski and the other engineers divide their time between two models: the Prius, which is used to test new sensors and software; and the Lexus, which offers a more refined but limited ride. (The Prius can drive on surface streets; the Lexus only on highways.) As the cars have evolved, they’ve sprouted appendages and lost them again, like vat-grown creatures in a science-fiction movie. The cameras and radar are now tucked behind sheet metal and glass, the laser turret reduced from a highway cone to a sand pail. Everything is smaller, sleeker, and more powerful than before, but there’s still no mistaking the cars. When Levandowski picked me up or dropped me off near the Berkeley campus on his commute, students would look up from their laptops and squeal, then run over to take snapshots of the car with their phones. It was their version of the Oscar Mayer Wienermobile.


Still, my first thought on settling into the Lexus was how ordinary things looked. Google’s experiments had left no scars, no signs of cybernetic alteration. The interior could have passed for that of any luxury car: burl-wood and leather, brushed metal and Bose speakers. There was a screen in the center of the dashboard for digital maps; another above it for messages from the computer. The steering wheel had an On button to the left and an Off button to the right, lit a soft, fibre-optic green and red. But there was nothing to betray their exotic purpose. The only jarring element was the big red knob between the seats. “That’s the master kill switch,” Levandowski said. “We’ve never actually used it.”

Levandowski kept a laptop open beside him as we rode. Its screen showed a graphic view of the data flowing in from the sensors: a Tron-like world of neon objects drifting and darting on a wireframe nightscape. Each sensor offered a different perspective on the world. The laser provided three-dimensional depth: its sixty-four beams spun around ten times per second, scanning 1.3 million points in concentric waves that began eight feet from the car. It could spot a fourteen-inch object a hundred and sixty feet away. The radar had twice that range but nowhere near the precision. The camera was good at identifying road signs, turn signals, colors, and lights. All three views were combined and color-coded by a computer in the trunk, then overlaid by the digital maps and Street Views that Google had already collected. The result was a road atlas like no other: a simulacrum of the world.
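The division of labor among the sensors suggests a simple fusion rule: trust the laser inside its range, let the radar extend coverage beyond it, and let the camera supply labels. The sketch below is a rough caricature of that idea, using the ranges quoted above; the data structures are invented for the example.

```python
# Illustrative sensor-fusion sketch, not Google's code. Ranges follow
# the article: the laser is precise out to about 160 ft, the radar
# reaches roughly twice as far with less precision.

LASER_RANGE_FT = 160
RADAR_RANGE_FT = 320

def fuse(detections):
    """detections: list of (sensor, distance_ft, label) tuples.
    Prefer the precise laser inside its range; let radar fill in
    only what the laser cannot see."""
    world = {}
    for sensor, distance, label in detections:
        if sensor == "laser" and distance <= LASER_RANGE_FT:
            world[label] = ("laser", distance)
        elif sensor == "radar" and distance <= RADAR_RANGE_FT:
            # setdefault: radar never overwrites a laser reading.
            world.setdefault(label, ("radar", distance))
    return world
```

The real system combines and color-codes everything in the trunk computer, then overlays the maps and Street View imagery; this sketch captures only the precedence between sensors.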

I was thinking about all this as the Lexus headed south from Berkeley down Highway 24. What I wasn’t thinking about was my safety. At first, it was a little alarming to see the steering wheel turn by itself, but that soon passed. The car clearly knew what it was doing. When the driver beside us drifted into our lane, the Lexus drifted the other way, keeping its distance. When the driver ahead hit his brakes, the Lexus was already slowing down. Its sensors could see so far in every direction that it saw traffic patterns long before we did. The effect was almost courtly: drawing back to let others pass, gliding into gaps, keeping pace without strain, like a dancer in a quadrille.

The Prius was an even more capable car, but also a rougher ride. When I rode in it with Dmitri Dolgov, the team’s lead programmer, it had an occasional lapse in judgment: tailgating a truck as it came down an exit ramp; rushing late through a yellow light. In those cases, Dolgov made a note on his laptop. By that night, he’d have adjusted the algorithm and run simulations till the computer got it right.

The Google car has now driven more than half a million miles without causing an accident—about twice as far as the average American driver goes before crashing. Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces. (The first drops call forth a small icon of a cloud onscreen and a voice warning that auto-drive will soon disengage.) It can’t tell wet concrete from dry or fresh asphalt from firm. It can’t hear a traffic cop’s whistle or follow hand signals.

And yet, for each of its failings, the car has a corresponding strength. It never gets drowsy or distracted, never wonders who has the right-of-way. It knows every turn, tree, and streetlight ahead in precise, three-dimensional detail. Dolgov was riding through a wooded area one night when the car suddenly slowed to a crawl. “I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.” The car, unlike its riders, could see in the dark. Within a year, Thrun added, it should be safe for a hundred thousand miles.

The real question is who will build it. Google is a software firm, not a car company. It would rather sell its programs and sensors to Ford or GM than build its own cars. The companies could then repackage the system as their own, as they do with G.P.S. units from NAVTEQ or TomTom. The difference is that the car companies have never bothered to make their own maps, but they’ve spent decades working on driverless cars. General Motors sponsored Carnegie Mellon’s DARPA races and has a large testing facility for driverless cars outside of Detroit. Toyota opened a nine-acre laboratory and “simulated urban environment” for self-driving cars last November, at the foot of Mt. Fuji. But aside from Nissan, which recently announced that it would sell fully autonomous cars by 2020, the manufacturers are much more pessimistic about the technology. “It’ll happen, but it’s a long way out,” John Capp, General Motors’ director of electrical, controls, and active safety research, told me. “It’s one thing to do a demonstration—‘Look, Ma, no hands!’ But I’m talking about real production variance and systems we’re confident in. Not some circus vehicle.”

When I went to visit the most recent International Auto Show in New York, the exhibits were notably silent about autonomous driving. That’s not to say that it wasn’t on display. Outside the convention center, Jeep had set up an obstacle course for its new Wrangler, including a row of logs to drive over and a miniature hill to climb. When I went down the hill with a Jeep sales rep, he kept telling me to take my foot off the brake. The car was equipped with “descent control,” he explained, but, like the other exhibitors, he avoided terms like “self-driving.” “We don’t even include it in our vocabulary,” Alan Hall, a communications manager at Ford, told me. “Our view of the future is that the driver remains in control of the vehicle. He is the captain of the ship.”

This was a little disingenuous—necessity passing as principle. The car companies can’t do full autonomy yet, so they do it piece by piece. Every decade or so, they introduce another bit of automation, another task gently lifted from the captain’s hands: power steering in the nineteen-fifties, cruise control as a standard feature in the seventies, antilock brakes in the eighties, electronic stability control in the nineties, the first self-parking cars in the two-thousands. The latest models can detect lane lines and steer themselves to stay within them. They can keep a steady distance from the car ahead, braking to a stop if necessary. They have night vision, blind-spot detection, and stereo cameras that can identify pedestrians. Yet the over-all approach hasn’t changed. As Levandowski puts it, “They want to make cars that make drivers better. We want to make cars that are better than drivers.”

Along with Nissan, Toyota and Mercedes are probably closest to developing systems like Google’s. Yet they hesitate to introduce them for different reasons. Toyota’s customers are a conservative bunch, less concerned with style than with comfort. “They tend to have a fairly long adoption curve,” Jim Pisz, the corporate manager of Toyota’s North American business strategy, told me. “It was only five years ago that we eliminated cassette players.” The company has been too far ahead of the curve before. In 2005, when Toyota introduced the world’s first self-parking car, it was finicky and slow to maneuver, as well as expensive. “We need to build incremental levels of trust,” Pisz said.

Mercedes has a knottier problem. It has a reputation for fancy electronics and a long history of innovation. Its newest experimental car can maneuver in traffic, drive on surface streets, and track obstacles with cameras and radar much as Google’s do. But Mercedes builds cars for people who love to drive, and who pay a stiff premium for the privilege. Taking the steering wheel out of their hands would seem to defeat the purpose—as would sticking a laser turret on a sculpted chassis. “Apart from the reliability factor, which can easily become a nightmare, it is not nice to look at,” Ralf Herrtwich, Mercedes’s director of driver assistance and chassis systems, told me. “One of my designers said, ‘Ralf, if you ever suggest building such a thing on top of one of our cars, I’ll throw you out of this company.’ ”

Even if the new components could be made invisible, Herrtwich says, he worries about separating people from the driving process. The Google engineers like to compare driverless cars to airplanes on autopilot, but pilots are trained to stay alert and take over in case the computer fails. Who will do the same for drivers? “This one-shot, winner-take-all approach, it’s perhaps not a wise thing to do,” Herrtwich says. Then again, alert, fully engaged drivers are already becoming a thing of the past. More than half of all eighteen-to-twenty-four-year-olds admit to texting while driving, and more than eighty per cent drive while on the phone. Hands-free driving should seem like second nature to them: they’ve been doing it all along.

One afternoon, not long after the car show, I got an unsettling demonstration of this from engineers at Volvo. I was sitting behind the wheel of one of their S60 sedans in the parking lot of the company’s American headquarters in Rockleigh, New Jersey. About a hundred yards ahead, they’d placed a life-size figure of a boy. He was wearing khaki pants and a white T-shirt and looked to be about six years old. My job was to try to run him over.

Volvo has less faith in drivers than most companies do. Since the seventies, it has kept a full-time forensics team on call at its Swedish headquarters, in Gothenburg. Whenever a Volvo gets into an accident within a sixty-mile radius, the team races to the scene with local police to assess the wreckage and injuries. Four decades of such research have given Volvo engineers a visceral sense of all that can go wrong in a car, and a database of more than forty thousand accidents to draw on for their designs. As a result, the chances of getting hurt in a Volvo have dropped from more than ten per cent to less than three per cent over the life of a car. The company says this is just a start. “Our vision is that no one is killed or injured in a Volvo by 2020,” it declared three years ago. “Ultimately, that means designing cars that do not crash.”

Most accidents are caused by what Volvo calls the four D’s: distraction, drowsiness, drunkenness, and driver error. The company’s newest safety systems try to address each of these. To keep the driver alert, they use cameras, lasers, and radar to monitor the car’s progress. If the car crosses a lane line without a signal from the blinker, a chime sounds. If a pattern emerges, the dashboard flashes a steaming coffee cup and the words “Time for a break.” To instill better habits, the car rates the driver’s attentiveness as it goes, with bars like those on a cell phone. (Mercedes goes a step further: its advanced cruise control won’t work unless at least one of the driver’s hands is on the wheel.) In Europe, some Volvos even come with Breathalyzer systems, to discourage drunken driving. When all else fails, the cars take preëmptive action: tightening the seat belts, charging the brakes for maximum traction, and, at the last moment, stopping the car.

This was the system that I was putting to the test in the parking lot. Adam Kopstein, the manager of Volvo’s automotive safety and compliance office, was a man of crisp statistics and nearly Scandinavian scruples. So it was a little unnerving to hear him urge me to go faster. I’d spent the first fifteen minutes trying to crash into an inflatable car, keeping to a sedate twenty miles an hour. Three-quarters of all accidents occur at this speed, and the Volvo handled it with ease. But Kopstein was looking for a sterner challenge. “Go ahead and hit the gas,” he said. “You’re not going to hurt anyone.”

I did as instructed. The boy was just a mannequin, after all, stuffed with reflective material to simulate the water in a human body. First a camera behind the windshield would identify him as a pedestrian. Then radar from behind the grille would bounce off his reflective innards and deduce the distance to impact. “Some people scream,” Kopstein said. “Others just can’t do it. It’s so unnatural.” As the car sped up—fifteen, twenty, thirty-five miles an hour—the warning chime sounded, but I kept my foot off the brake. Then, suddenly, the car ground to a halt, juddering toward the boy with a final double lurch. It came to a stop with about five inches to spare.

Since 2010, Volvos equipped with a safety system have had twenty-seven per cent fewer property-damage claims than those without it, according to a study by the Insurance Institute for Highway Safety. The system goes out of its way to leave the driver in charge, braking only in extreme circumstances and ceding control at the tap of a pedal or a turn of the wheel. Still, the car sometimes gets confused. Later that afternoon, I took the Volvo out for a test drive on the Palisades Parkway. I contented myself with steering, while the car took care of braking and acceleration. Like Levandowski’s Lexus, it quickly earned my trust: keeping pace with highway traffic, braking smoothly at lights. Then something strange happened. I’d circled back to the Volvo headquarters and was about to turn into the parking lot when the car suddenly surged forward, accelerating into the curve.


The incident lasted only a moment—when I hit the brakes, the system disengaged—but it was a little alarming. Kopstein later guessed that the car thought it was still on the highway, in cruise control. For most of the drive, I’d been following Kopstein’s Volvo, but when that car turned into the parking lot, my car saw a clear road ahead. That’s when it sped up, toward what it thought was the speed limit: fifty miles an hour.

To some drivers, this may sound worse than the four D’s. Distraction and drowsiness we can control, but a peculiar horror attaches to the thought of death by computer. The screen freezes or power fails; the sensors jam or misread a sign; the car squeals to a stop on the highway or plows headlong into oncoming traffic. “We’re all fairly tolerant of cell phones and laptops not working,” GM’s John Capp told me. “But you’re not relying on your cell phone or laptop to keep you alive.”

Toyota got a taste of such calamities in 2009, when some drivers began to complain that their cars would accelerate of their own accord—sometimes up to a hundred miles an hour. The news caused panic among Toyota owners: the cars were accused of causing thirty-nine deaths. But this proved to be largely fictional. A ten-month study by NASA and the National Highway Traffic Safety Administration found that most of the incidents were caused by driver error or roving floor mats, and only a few by sticky gas pedals. By then, Toyota had recalled some ten million cars and paid more than a billion dollars in legal settlements. “Frankly, that was an indicator that we need to go slow,” Jim Pisz told me. “Deliberately slow.”

An automated highway could also be a prime target for cyberterrorism. Last year, DARPA funded a pair of well-known hackers, Charlie Miller and Chris Valasek, to see how vulnerable existing cars might be. In August, Miller presented some of their findings at the annual Defcon hackers conference in Las Vegas. By sending commands from their laptop, they’d been able to make a Toyota Prius blast its horn, jerk the wheel from the driver’s hands, and brake violently at eighty miles an hour. True, Miller and Valasek had to use a cable to patch into the car’s maintenance port. But a team at the University of California, San Diego, led by the computer scientist Stefan Savage, has shown that similar instructions could be sent wirelessly, through systems as innocuous as a Bluetooth receiver. “Existing technology is not as robust as we think it is,” Levandowski told me.

Google claims to have answers to all these threats. Its engineers know that a driverless car will have to be nearly perfect to be allowed on the road. “You have to get to what the industry calls the ‘six sigma’ level—three defects per million,” Ken Goldberg, the industrial engineer at Berkeley, told me. “Ninety-five per cent just isn’t good enough.” Aside from its test drives and simulations, Google has encircled its software with firewalls, backup systems, and redundant power supplies. Its diagnostic programs run thousands of internal checks per second, searching for system errors and anomalies, monitoring its engine and brakes, and continually recalculating its route and lane position. Computers, unlike people, never tire of self-assessment. “We want it to fail gracefully,” Dolgov told me. “When it shuts down, we want it to do something reasonable, like slow down and go on the shoulder and turn on the blinkers.”

Still, sooner or later, a driverless car will kill someone. A circuit will fail, a firewall collapse, and that one defect in three hundred thousand will send a car plunging across a lane or into a tree. “There will be crashes and lawsuits,” Dean Pomerleau said. “And because the car companies have deep pockets they will be targets, regardless of whether they’re at fault or not. It doesn’t take many fifty- or hundred-million-dollar jury decisions to put a big damper on this technology.” Even an invention as benign as the air bag took decades to make it into American cars, Pomerleau points out. “I used to say that autonomous vehicles are fifteen or twenty years out. That was twenty years ago. We still don’t have them, and I still think they’re ten years out.”

If driverless cars were once held back by their technology, then by ideas, the limiting factor now is the law. Strictly speaking, the Google car is already legal: drivers must have licenses; no one said anything about computers. But the company knows that won’t hold up in court. It wants the cars to be regulated just like human drivers. For the past two years, Levandowski has spent a good deal of his time flying around the country lobbying legislatures to support the technology. First Nevada, then Florida, California, and the District of Columbia have legalized driverless cars, provided that they’re safe and fully insured. But other states have approached the issue more skeptically. The bills proposed by Michigan and Wisconsin, for instance, both treat driverless cars as experimental technology, legal only within narrow limits.

Much remains to be defined. How should the cars be tested? What’s their proper speed and spacing? How much warning do drivers need before taking the wheel? Who’s responsible when things go wrong? Google wants to leave the specifics to motor-vehicle departments and insurers. (Since premiums are based on statistical risk, they should go down for driverless cars.) But the car companies argue that this leaves them too vulnerable. “Their original position was ‘We shouldn’t rush this. It’s not ready for prime time. It shouldn’t be legalized,’” Alex Padilla, the state senator who sponsored the California bill, told me. But their real goal, he believes, was just to buy time to catch up. “It became clear to me that the interest here was a race to the market. And everybody’s in the race.” The question is how fast they should go.

At the tech meeting I attended, Levandowski showed the team a video of Google’s newest laser, slated to be installed within the year. It had more than twice the range of previous models—eleven hundred feet instead of two hundred and sixty—and thirty times the resolution. At three hundred feet, it could spot a metal plate less than two inches thick. The laser would be about the size of a coffee mug, he told me, and cost around ten thousand dollars—seventy thousand less than the current model.

“Cost is the least of my worries,” Sergey Brin had told me earlier. “Driving the price of technology down is like”—he snapped his fingers. “You just wait a month. It’s not fundamentally expensive.” Brin and his engineers are motivated by more personal concerns: Brin’s parents are in their late seventies and starting to get unsteady behind the wheel. Thrun lost his best friend to a car accident, and Urmson has children just a few years shy of driving age. Like everyone else at Google, they know the statistics: worldwide, car accidents kill 1.24 million people a year, and injure another fifty million.

For Levandowski, the stakes first became clear three years ago. His fiancée, Stefanie Olsen, was nine months pregnant at the time. One afternoon, she had just crossed the Golden Gate Bridge on her way to visit a friend in Marin County when the car ahead of her abruptly stopped. Olsen slammed on her brakes and skidded to a halt, but the driver behind her wasn’t so quick. He crashed into her Prius at more than thirty miles an hour, pile-driving it into the car ahead. “It was like a tin can,” Olsen told me. “The car was totalled and I was accordioned in there.” Thanks to her seat belt, she escaped unharmed, as did her baby. But when Alex was born he had a small patch of white hair on the back of his head.

“That accident never should have happened,” Levandowski told me. If the car behind Olsen had been self-driving, it would have seen the obstruction three cars ahead. It would have calculated the distance to impact, scanned the neighboring lanes, realized it was boxed in, and hit the brakes, all within a tenth of a second. The Google car drives more defensively than people do: it tailgates five times less, rarely coming within two seconds of the car ahead. Under the circumstances, Levandowski says, our fear of driverless cars is increasingly irrational. “Once you make the car better than the driver, it’s almost irresponsible to have him there,” he says. “Every year that we delay this, more people die.”

After a long day in Mountain View, the drive home to Berkeley can be a challenge. Levandowski’s mind, accustomed to pinwheeling in half a dozen directions, can have trouble focussing on the two-ton hunks of metal hurtling around him. “People should be happy when I’m on automatic mode,” he told me, as we headed home one night. He leaned back in his seat and put his hands behind his head, as if taking in the seaside sun. He looked like the vintage illustrations of driverless cars on his laptop: “Highways made safe by electricity!”

The reality was so close that he could envision each step: The first cars coming to market in five to ten years. Their numbers few at first—strange beasts on a new continent—relying on sensors to get the lay of the land, mapping territory street by street. Then spreading, multiplying, sharing maps and road conditions, accident alerts and traffic updates; moving in packs, drafting off one another to save fuel, dropping off passengers and picking them up, just as Brin had imagined. For once it didn’t seem like a fantasy. “If you look at my track record, I usually do something for two years and then I want to leave,” Levandowski said. “I’m a first-mile kind of guy—the guy who rushes the beach at Normandy, then lets other people fortify it. But I want to see this through. What we’ve done so far is cool; it’s scientifically interesting; but it hasn’t changed people’s lives.”

When we arrived at his house, his family was waiting. “I’m a bull!” his three-year-old, Alex, roared as he ran up to greet us. We acted suitably impressed, then wondered why a bull would have long whiskers and a red nose. “He was a kitten a little while ago,” his mother whispered. A former freelance reporter for the Times and CNET, Olsen was writing a techno-thriller set in Silicon Valley. She worked from home now, and had been cautious about driving since the accident. Still, two weeks earlier, Levandowski had taken her and Alex on their first ride in the Google car. She was a little nervous at first, she admitted, but Alex had wondered what all the fuss was about. “He thinks everything’s a robot,” Levandowski said.

While Olsen set the table, Levandowski gave me a brief tour of their place: an Arts and Crafts house from 1909, once home to a hippie commune led by Tom Hayden. “You can still see the burn marks on the living-room floor,” he said. For a registered Republican and a millionaire many times over, it was a quirky, modest choice. Levandowski probably could have afforded that stateroom in a 747 by now, and made good use of it. Last year alone, he flew more than a hundred thousand miles in his lobbying efforts. There was just one problem, he said. It was irrational, he knew. It went against all good sense and a raft of statistics, but he couldn’t help it. He was afraid of flying. 

A Beautiful Mind | The Huffington Post

— excerpt —

TECH 08/13/2012 02:42 pm ET | Updated Oct 19, 2012
By Bianca Bosker

“Let’s see if I can get us killed,” Sebastian Thrun advises me in a Germanic baritone as we shoot south onto the 101 in his silver Nissan Leaf.

Thrun, who pioneered the self-driving car, cuts across two lanes of traffic, then jerks into a third, threading the car into a sliver of space between an eighteen-wheeler and a sedan. Thrun seems determined to halve the normally eleven minute commute from the Palo Alto headquarters of Udacity, the online university he oversees, to Google X, the secretive Google research lab he co-founded and leads.

He’s also keen to demonstrate the urgency of replacing human drivers with the autonomous automobiles he’s engineered.

“Would a self-driving car let us do this?” I ask, as mounting G-forces press me back into my seat.

“No,” Thrun answers. “A self-driving car would be much more careful.”

Thrun, 45, is tall, tanned and toned from weekends biking and skiing at his Lake Tahoe home. More surfer than scientist, he smiles frequently and radiates serenity—until he slams on his brakes at the sight of a cop idling in a speed trap at the side of the highway. Something heavy thumps against the seat behind us and when Thrun opens the trunk moments later, he discovers that three sheets of glass he’s been shuttling around have shattered.

Once we reach Google X, he regains his stride, leaving me trotting by his side as he racewalks to his office. Motion is a constant in his life. A pair of black roller skates sits by his desk. Twelve years ago, he borrowed his wife’s sneakers to run the Pittsburgh Marathon, without bothering to train for the race. He got his son on skis before most other kids his age got out of diapers.

When Thrun finds something he wants to do or, better yet, something that is “broken,” it drives him “nuts” and, he says, he becomes “obsessed” with fixing it.

Over the last 17 years, Thrun has been the author of, or a pivotal force behind, a list of solutions to an entire roster of “broken” things, making him a folk hero of sorts among Silicon Valley innovators, though hardly a household name elsewhere. While he’s in a hurry in almost every other aspect of his life, he embraces a slow-cooking approach to invention and product-building that sets him apart from many of the create-it-fund-it-and-flip-it whiz kids and veterans who populate the Valley.

Thrun’s resume is populated with seismic efforts, either those already set in motion or others just around the corner. There are various robotic self-navigating vehicles that guide tourists through museums, explore abandoned mines, and assist the elderly. There is the utopian self-driving car that promises to relieve humanity from the tedium of commuting while helping reduce emissions, gridlock, and deaths caused by driver error. There are the “magic” Google Glasses that allow wearers to instantly share what they see, as they are seeing it, with anyone anywhere in the world—with the blink of an eye. And there is the free online university Udacity, a potentially game-changing educational effort that, if Thrun has his way, will level the playing field for learners of all stripes.

“While everyone is running around saying ‘I’m going to do a better mobile photo thing so I can defeat Facebook and suck out more of their market cap to me,’ Sebastian is going around saying, ‘I think driving is totally screwed up and there should be autonomous cars,’” says venture capitalist George Zachary, an investor in Udacity. “He thinks much more boldly about the problems.”

Other observers say all of this is firmly in the tradition of the best sort of innovators.

“What’s unique about Sebastian, and all innovators, perhaps, is that they don’t start with the current situation and try to make incrementally better based on what’s been done in the past. They look out and say, ‘Given the current state of technology, what can I do radically differently to make a discontinuity—not an incremental change, but put us in a different place?’” says Dean Kamen, the inventor of the Segway. “He is a true innovator…And he has a fantastic vision.”

Many Silicon Valley standouts have succeeded by making radical improvements to products that already exist. Facebook, for example, did social networking better than any of its predecessors. Smartphones were around well before the iPhone, but Apple came up with a gadget far slicker than the competition.

Thrun likes creating new things from scratch and invents for a world that should be, for an audience that may not yet be out there, for conditions that may never be met. “I have a strong disrespect for authority and for rules,” he says. “Including gravity. Gravity sucks.”

To that end, and for all of his bravado, Thrun also says that he distrusts even his own beliefs and theories, calling them “traps” that might ensnare him in a solution based more on his own ego than logic.

“Every time I act on a fear, I feel disappointed in myself.  I have a lot of fear.  If I can quit all fear in my life and all guilt, then I tend to be much, much more living up to my standards,” Thrun says. “I’ve never seen a person fail if they didn’t fear failure.”

Thrun imagines a future where cars fly, news articles are tailored to the time you have to read them, and teachers are as famous and well-paid as Hollywood celebrities. He grouses that we don’t wear devices to monitor our health twenty-four-seven instead of relying on symptoms to diagnose what ails us. He can spot inefficiencies everywhere he turns, and in most cases, sees technology as the magic bullet.

When he talks about his mission to “look for areas that are just intolerably broken where even small amounts of technology can yield a fundamental sea change,” Thrun makes it clear that his goal isn’t to make us high-tech, but to make us high-human.

“I have a really deep belief that we create technologies to empower ourselves. We’ve invented a lot of technology that just makes us all faster and better and I’m generally a big fan of this,” Thrun says. “I just want to make sure that this technology stays subservient to people. People are the number one entity there is on this planet.”

Simple and Streamlined

Though Thrun says his adult life revolves around trying to find ways that technology can help people, his childhood and adolescence were mainly about self-help.

The youngest of three children, Thrun was born in 1967 in Solingen, Germany. His parents, devout Catholics, told him he was an unplanned baby. Thrun recalls having little contact with his parents, and especially his father. His siblings “required a lot of attention and there was almost no attention left for me,” he says.

His father was a construction company executive, and more often than not his first order of business was disciplining Sebastian or one of his siblings with a beating, at the request of his wife. Thrun says his stay-at-home mom was “heavy into punishing people and sins and all that stuff.”

Thrun responded by retreating into a solo world of calculators, computers and code.

“I reacted a lot by just insulating myself from this and so mentally, emotionally I wasn’t that connected,” he says. “I learned to basically pull my own weight, just do my own thing. I spent a lot of time alone and I loved it. It was actually really great because to the present day I love spending time alone. I go bicycling alone, go climbing alone and I just love being with myself and observing myself and learning something.”

Thrun befriended an inventor in his neighborhood who gave him spare parts and a soldering iron, then let him tinker. As an eight-year-old, he’d come home from school, shut himself up in his room, turn on Pink Floyd, AC/DC, Mozart, or Bach, and spend hours sitting on his bed programming his Texas Instruments TI-57 calculator to solve math problems and play games. (These days you can find him blasting a mix of classical concertos and Rihanna.)

The calculator had no memory, of course, so every time he switched it off, he lost all his code. Eventually, he graduated from his calculator to a display model computer at the local department store, but basically, he was still dealing with the same problem: after four or five hours building games on the store machine, he’d be kicked out and all his work vanished. He took this inconvenience as a challenge to perfect his code so that he could re-enter it in the fewest possible steps. This fastidious dedication to simple, streamlined programming stayed with him, and he would later require his students to write straightforward, elegant code.

When not sitting at a screen, Thrun sang in a five-person choir with Petra Dierkes, a girl two years his junior who would become his girlfriend when he was 18, and, eventually, his wife and colleague at Stanford University. He also played the piano, improvising his own songs as a way to study and express his emotions.

Thrun was a gifted student and terrible pupil, with a self-imposed homework ban that lasted from seventh grade through high school graduation. In college, the unprecedented freedom to choose his own coursework sparked a newfound passion for his academic work. He combined a major in computer science with an unorthodox double minor in medicine and economics, a combination that would eventually help him design a “nursebot” to assist elderly patients. When he graduated from the University of Bonn with a Ph.D. in computer science and statistics in 1995, he leaped at the chance to join the faculty at Carnegie Mellon University—what then seemed like “paradise” to Thrun—and spent eight years there before moving to Stanford, where he became a computer science guru.

Out in the Valley, Thrun struck up an acquaintanceship with Google co-founder Larry Page, who asked him to see a robot Page had built in his spare time. The two men met for dinner at a casual Japanese restaurant in Palo Alto and Thrun returned to Page’s house to see his creation. The robot’s hardware was in decent shape, but Page “got stuck on the software side of it,” according to Thrun’s diagnosis. He borrowed the robot, flew in a few friends, and returned Page’s bot within a day after giving it the ability to localize itself. After another two or three days of work, the robot could navigate. Thrun said Page was “blown away.”

In 2005, Thrun’s engineering team at the Stanford Artificial Intelligence Laboratory built a driverless car, a blue Volkswagen Touareg SUV named Stanley, that managed to navigate 132 miles of desert terrain on its own, becoming the first self-driving car in history to win the Defense Advanced Research Projects Agency (DARPA) Grand Challenge — a race through the sands of Nevada organized by the United States Department of Defense. The previous year, not a single one of the 15 entries from some of the most powerful robotics engineers in the world had managed to complete more than eight miles of the course. Thrun won the first year he competed, just 15 months after deciding to enter the race.

Page, who professes self-driving cars have “been a passion of mine for years,” watched Stanley’s triumph in the Mojave desert. Soon after, Google hired Thrun to sire the sons of Stanley. In 2010, Thrun helped Page and Sergey Brin, Google’s other co-founder, launch Google X, a top-secret and closely-guarded lab that the search giant tasked with making the impossible possible. The following year, Thrun relinquished his tenure at Stanford.

Xtreme engineering

Google X’s engineers are housed in a low structure covered in squares of dark, mirrored glass that offer a mercury-tinted reflection of the parking lot, bikes and trees that surround it. There are jails less secure than this research lab. Employees need a key card to unlock the entrance, and then are admitted to a small waiting area furnished with two chairs and a foosball table. From there, employees must swipe their badges again to enter any of the labs within, each door plastered with signs warning Googlers to stay vigilant of “tailgaters.”

For a visitor, it’s like stepping into the labs of a mad, hipster scientist. Floors are made of concrete, wires hang from the ceiling, tubing covered in foil gleams from the rafters, and row after row of black metal desks fill the wide-open space. Thrun’s desk stands at the end of a long row of identical workstations. His is tidy and spare, save for a nametag, an unopened cardboard box, a DVD about the DARPA Grand Challenge, a white Japanese humanoid robot, and The Idea Factory, Jon Gertner’s history of Bell Labs—AT&T’s legendary innovations incubator that won seven Nobel prizes and helped usher in the information age.

Thrun says he rarely reads books (they’re “too long”), but Gertner’s tome is particularly fitting in a place that aspires to be the heir to the Bell Labs throne. Its mission, according to Thrun, is to work on areas of innovation that have “hard scientific challenges” and “can influence society in a massive way.” Thrun had considered working with the government to deploy self-driving technology to help soldiers in the field, but the military’s stipulation that he not publish his results killed the collaboration. He instead brought his autonomous vehicles to Google, where they provided the inspiration for Google X and, in Thrun’s view, would get the support they needed to “impact large, large numbers of people.”

Thrun crouches down to strap on his roller skates, but is distracted by a Google X-branded skateboard produced by a colleague. He grabs the board and starts wheeling around the room.

“Sergey fell on this? Awesome,” Thrun remarks with a smile on his second lap. The cavernous area, nearly empty at 9 a.m., echoes with his chirps — “Aah!” “Whee!”— as he loops the room, narrowly missing the edges of the desks, bookcases and fridges stocked with free food.

“Don’t fall, we need you,” a Googler shouts at Thrun.

A fascination with images as a facilitator of human relationships infused Thrun’s work on Google Street View, which allows people to digitally meander the streets of Mumbai, trace a nature hike in Yosemite, or tour New York’s Times Square—all from the comfort of their homes. In 2007, Google acquired mapping technology that Thrun’s team at Stanford had developed to train Stanley—technology Thrun nearly used to start his own company, Vutool.

Page tasked Thrun with applying the software to scaling Google Street View as quickly as possible.

“I always felt that if countries knew each other better, there would be less war,” says Thrun. “Often conflict goes with demonizing other countries and cultures. I figured if we could bridge the gap between cultures with images, that would not be a bad thing to do.”

Two years ago, Thrun assembled a team of Google X engineers and tasked them with another assignment, one also rooted in the future: to reinvent the computer.

The result is Project Glass, a.k.a. Google Glasses, an endeavor Thrun makes a point of asking me to note is now being led by his colleague Babak Parviz. Thrun hands me a pair of the “glasses,” which will be available for $1,500 to a limited group of tech industry insiders in early 2013. Worn like a pair of lens-less spectacles, the device suspends a glass cube around half an inch wide just far enough to the right of my retina that I can still make direct eye contact with Thrun, who all but hovers with excitement in the chair across from mine.

A video of fireworks begins to play on the cube and the screen glows purple, pink and blue, both from my vantage point and Thrun’s. A faint soundtrack of the explosions hums from a speaker just above my ear. The image on the glass shifts as I tilt my chin and move my gaze, and without realizing it, I snap a picture of Thrun. A small row of icons appears with the option to share it.

Google Glasses’ creators have taken pains to design a device that won’t isolate people from their surroundings. For example, the speaker sits above the wearer’s ear, not in it, and the cube rests above the eye, not in front of it. The suspended square of glass lights up from both sides, so a person speaking to someone wearing Google Glasses can tell if the wearer has the device switched on.

Thrun’s deep investment in the project seems to come from a personal aversion to the madly proliferating gadgets that stand between people and the world around them. The inspiration is to “get technology out of your way” so people “spend less time on technology and more time on the real world,” he says.

And for someone who hopes to see us endowed with an all-seeing electronic third eye, Thrun is remarkably hostile to his devices. Cellphones are a distraction that make us socially “cut off” from an environment, he gripes. He’ll finish a two-hour meal without once glancing at his phone. To him, phone calls are a “super negative” experience because they interrupt what he’s doing.

“I once saw a family of five children and two parents in a Lake Tahoe restaurant, where every single person was just looking at their phone while they were having dinner together. That made me so sad, because they have this brief moment of time with their family and they should just enjoy each other,” Thrun recalls. “I can’t tell if Google Glass has succeeded, but it’s a really big emotional thing for me: having the technology that we love and connections that bring us to other people. Technology is synonymous for connection with other people.”


A cellphone can slip into a pocket and be temporarily out of sight. Google Glasses are at eye level and constantly in your face, or on someone else’s face. Making it easier to snap and share photos all but guarantees we’ll take more of them and share more of them, thus connecting ourselves more directly to the people who aren’t present. Surveillance—and documentation—will become more pervasive as well in a world full of Google Glasses.

Does Thrun worry that omnipresent Google Glasses will make us more likely to disconnect from people around us?

“All the time,” he says, explaining that he and other Google X engineers have been wearing the device as much as possible to see what dinner table conversation is like once the novelty of the gadget has worn off. “Maybe the outcome will be socially not that acceptable, we don’t know.”

So far, he’s felt “amazingly empowered” by the ability to take pictures, share pictures, and bring people into what he’s doing at that very moment. To Thrun, Google Glasses’ primary appeal is as a camera. He predicts we’ll share ten times as many photos as we do now and that the images we share will be “uglier” — more personal, more authentic, and more of the moment. These intimate images of what we’re seeing right this instant — a baby’s face, the steak we’re about to bite into — will allow a kind of elementary teleportation that lets us each bring everyone along for the ride.

Your mind can be closer than ever to mine.

If Google Glasses embody Thrun’s vision for a device that brings people together, the house he’s building near Palo Alto is a wish for a home that does the same.

The frame of the house tops a golden, grassy hill on a $5.9 million, nine-acre plot of land in Los Altos Hills. Seen from afar, it might be mistaken for a red flying saucer that has descended on Silicon Valley. Designed by Eli Attia, former chief of design for Philip Johnson, the building is a squat, single-story cylinder with exterior walls made entirely of floor-to-ceiling glass. A glass cone protrudes from the roof at the center of the circle, and directly below it, a spiral staircase leads to a garage. Thrun says with a touch of pride that at 5,000 square feet, the three-bedroom home is a fraction of the size of its neighboring mansions. There are also no corridors or load-bearing walls in the floor plan, and much of the eco-friendly home is given over to common areas.

“It’s really compact,” Thrun says. “The idea to make as compact as possible so family stays as close together as possible.”

During the tour, a neighbor stops by to ask if Thrun will join him at this year’s Bohemian Club retreat. Like Thrun, he’s a member of this elite society where men—and only men—with big checkbooks and big roles to play in life get together to schmooze, booze, sing and pee in the woods, according to accounts. Thrun says it isn’t likely. Later, he tells me he wouldn’t want to go on vacation without his wife and son.

The Laws Of Motion

Even as Thrun seeks to get gadgets out of our way, his vision suggests an effort to make humans a bit more like computers: more rational and less inclined to give in to foolish fears. Thrun sees a very real and important place for technology that advances clarity, eliminates obfuscation, and gives people all the help they need to solve problems on their own.

Thrun approaches problems armed with facts and cool hard logic, and seems troubled by people who do otherwise. He has an impressive number of statistics at his fingertips: the energy efficiency of planes versus trains, the fraction of materials shipped to a construction site that go to waste, the number of years required to fly to Mars and the percentage of Americans who don’t believe in evolution (a number irksomely large, in his view). He imagines a device more instantaneous, personalized and melded with our minds than a smartphone, one that would elevate conversations by allowing users to more easily research and surface facts during a discussion. No more messy speculation or faulty memories.

“We’ve stopped thinking. We’ve really stopped thinking,” he says. “We don’t look at problems logically, we look at them emotionally. We look at them through the guts. We look at them as if we’re doing a high school problem, like what is beautiful, what makes me recognized among my peers. We don’t go and think about things. We as a society don’t wish to engage in rational thought.”

Thrun blames the sorry state of our minds on an education system that raises students “like robots” and trains them to “follow rules.” Thrun’s pedagogy, at Carnegie Mellon, Stanford and now Udacity, leans heavily on learning by doing. He advises that I take up snowboarding so I can understand the laws of motion by living them rather than memorizing them in a classroom.
Thrun also believes that connectivity is fundamental to learning. It’s through interactions with as many good minds as possible that good ideas take hold.

Conversations with people like Dean Kamen, Elon Musk and Google’s co-founders are crucial to Thrun’s problem solving process. He listens, debates and tests ideas out on people to see how they react. Being around Page and Brin makes Thrun feel “stupid,” like “a schoolboy,” and he says he can’t get enough of it.

“For me these are the high points of my life: When I go in and somebody just shows me how dumb I am and how little I know. That’s what I live for. Just to learn something new,” he says.

On a recent afternoon, Thrun is at Udacity’s headquarters in Palo Alto, just blocks from Stanford’s campus, rallying the troops. He has called an all-hands meeting, and the company’s 30-odd employees, mostly 20-somethings in jeans, are gathered in a semicircle around him, leaning on desks, squished onto couches, or sitting cross-legged on the floor. His two co-founders, David Stavens and Mike Sokolsky, former members of Stanford’s self-driving car team, have also joined.

“The purpose of this week has been for me to think about where the focus is and I know all of you have been asking me for this and it’s obviously something I’ve been slacking to do and not doing really well, so score me on the performance review and make sure that you put a check mark on ‘Sebastian is not particularly fast,’” he tells his staff.

Since Udacity launched in 2011, first under the name Know Labs, over 730,000 students have enrolled in classes — including the 160,000 who registered for Thrun’s first online course, Introduction to Artificial Intelligence — and 150,000 of them are actively taking Udacity courses. Enrollment is down, Thrun acknowledges, though he doesn’t say by how much.

But Thrun is undaunted.

“If we do a really good job here, then we’re going to shape society, together with our partners and other entities in the space, to really, really redefine education,” he says. “That’s pretty cool for a mission. That’s much better than being Instagram.”

Thrun predicts education will radically transform in the next ten years. Like blockbuster films, blockbuster online classes will command huge audiences and cost millions of dollars to produce. Many alma maters will shutter their doors as low-cost, high-quality online courses put second-tier schools out of business. Learning won’t stop the moment careers begin but will instead co-exist with work throughout life. He hopes to see teens start working earlier. Books will play a reduced role in teaching, and short-but-comprehensive, quiz-intensive lessons will replace them.

Udacity marks Thrun’s effort to make all of the above come true. He’s after an audience of people from 18 to 80 years old, from Sacramento to Shanghai, from novice to knowledgeable. Thrun calls Udacity the “Twitter of education,” in keeping with his vision that universities “will go from mammoth degrees to 140-character education.”

Shorter, more digestible units created by professors concerned with teaching, not tenure, will seamlessly “fit” in students’ lives. Udacity’s lessons — YouTube videos split into segments three to five minutes in length — feature a professor narrating principles or equations as they are sketched out by a disembodied hand.

Each lesson ends with a quiz, followed by an explanation of how to properly answer the problem.

Unlike traditional universities, Udacity plans to turn a profit. For a fee, the company will provide official certification to students who pass course exams at an in-person testing center. Udacity also plans to play matchmaker between students and companies looking to hire them, and, like LinkedIn, will charge firms to browse its database of resumes.

Upsetting the status quo in lecture halls around the country has become big business and Udacity faces a growing number of competitors, most of which have, unlike Udacity, partnered with existing universities to produce their courses. Coursera, a company that’s the brainchild of two Stanford professors, boasts a dozen partners from Princeton to Penn. EdX is a not-for-profit initiative founded by Harvard and the Massachusetts Institute of Technology to provide instruction online. And 2tor is working with a growing roster of universities to offer online graduate degrees in business, law and nursing, among other fields.
Thrun says he welcomes these rivals because more choice is the best thing that could happen to students.

Udacity looks to be a breeding ground for cultivating the talents of the young Thruns of the world: motivated individuals who want to learn, know what subjects they care about, seek a brainiac community and are determined to teach themselves, no matter what. It’s the experience Thrun didn’t have growing up, but would have wanted. Classes are structured around solving a problem — building a search engine, programming a robotic car — rather than mastering theory or reviewing a canon. The thirteen courses offered so far cover programming, physics, math, statistics and artificial intelligence.

“It’s opening up the chances for other people to also become innovators,” Zachary, the venture capitalist, says of Udacity and Thrun. “It is passing forward his spirit of innovation.”

Thrun considers Udacity his most important undertaking, and it will perhaps prove his most challenging one. Regardless, he doesn’t think about his legacy, and he doesn’t imagine he’ll be remembered in a generation. After all, he’s only human.

“I screw up every day,” he says. “I have a broken piece of glass in my car. I almost got a ticket this morning.” In the meantime, he plans to keep aiming high.

“Question every assumption and go towards the problem, like the way they flew to the moon,” he says. “We should have more moon shots and flights to the moon in areas of societal importance.”

This story originally appeared in Huffington, available in the iTunes App Store.