“There is no doubting the force of [the] arguments … the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.” – Clive Cookson, Science Editor at the Financial Times
Around 1800, civilization underwent one of the most profound shifts in human history: the industrial revolution.
This wasn’t the first such event – the agricultural revolution had upended human lives 12,000 years earlier.
A growing number of experts believe that a third revolution will occur during the 21st century, through the invention of machines with intelligence which far surpasses our own. These experts range from Stephen Hawking to Stuart Russell, co-author of the best-selling AI textbook, Artificial Intelligence: A Modern Approach.1
Rapid progress in machine learning has raised the prospect that algorithms will one day be able to do most or all of the mental tasks currently performed by humans. This could ultimately lead to machines that are much better at these tasks than humans.
These advances could lead to extremely positive developments, presenting solutions to now-intractable global problems, but they also pose severe risks. Humanity’s superior intelligence is pretty much the sole reason that it is the dominant species on the planet. If machines surpass humans in intelligence, then just as the fate of gorillas currently depends on the actions of humans, the fate of humanity may come to depend more on the actions of machines than on our own. For a technical explanation of the risks from the perspective of computer scientists, see these papers.8,11
This might be the most important transition of the next century – either ushering in an unprecedented era of wealth and progress, or heralding disaster. But it’s also an area that’s highly neglected: while billions are spent making AI more powerful,12 we estimate fewer than 100 people in the world are working on how to make AI safe.4
This problem is an unusual one, and it took us a long time to really understand it. Does it sound weird? Definitely. When we first encountered these ideas in 2009 we were sceptical. But like many others, the more we read the more concerned we became.7 We’ve also come to believe the technical challenge can probably be overcome if humanity puts in the effort.
Working on a newly recognized problem means that you risk throwing yourself at an issue that never materializes or is solved easily – but it also means that you may have a bigger impact by pioneering an area others have yet to properly appreciate, just like many of the highest impact people in history have done. In what follows, we will cover the arguments for working on this area, and look at the best ways you can contribute.
Many experts believe that there is a significant chance that humanity will develop machines more intelligent than ourselves during the 21st century. This could lead to large, rapid improvements in human welfare, but there are good reasons to think that it could also lead to disastrous outcomes. The problem of how one might design a highly intelligent machine to pursue realistic human goals safely is very poorly understood. If AI research continues to advance without enough work going into the research problem of controlling such machines, catastrophic accidents are much more likely to occur. Despite growing recognition of this challenge, fewer than 100 people worldwide are directly working on the problem.
Our overall view
Recommended: This is among the most pressing problems to work on.
Scale (if we solved this problem, by how much would the world become a better place?)
We estimate that the risk of a serious catastrophe caused by machine intelligence within the next 100 years is between 1 and 10%.
Neglectedness (how many resources are already being dedicated to tackling this problem?)
$10m of annual funding.
Solvability (if we doubled direct effort on this problem, what fraction of the remaining problem would we expect to solve?)
We think a doubling of effort would reduce the size of the existing risk by around 1%.
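These three estimates can be combined into a rough back-of-the-envelope figure. The sketch below uses the midpoint of the risk range quoted above; the numbers are illustrative values from this profile, not precise measurements:

```python
# Rough combination of the estimates quoted above. The figures are the
# illustrative values from this profile, not precise data.
p_catastrophe = 0.05           # midpoint of the 1-10% risk estimate
reduction_per_doubling = 0.01  # doubling effort cuts the existing risk by ~1%

# Expected absolute fall in catastrophe probability from one doubling
# of direct effort on the problem:
expected_reduction = p_catastrophe * reduction_per_doubling
print(expected_reduction)  # 0.0005, i.e. 0.05 percentage points
```

Even a 0.05 percentage-point cut in the chance of a civilisation-scale catastrophe is large relative to the roughly $10m currently spent per year.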
This profile is based on interviews with: Professor Nick Bostrom at the University of Oxford, the author of Superintelligence; a leading professor of computer science; Jaan Tallinn, one of the largest donors in the space and the co-founder of Skype; Jan Leike, a machine learning researcher now at DeepMind; Miles Brundage, an AI policy researcher at the Future of Humanity Institute at Oxford University; Nate Soares, the Executive Director of the Machine Intelligence Research Institute; Daniel Dewey, who works full-time finding researchers and funding opportunities in the field; and several other researchers in the area. We also read advice from David Krueger, a Machine Learning PhD student.
Those who are already familiar with computer science may prefer to watch this talk by University of California Berkeley Professor of Computer Science Stuart Russell instead, as it goes further into potential research agendas.
Recent progress in machine learning suggests that AI’s impact may be large and sudden
When Tim Urban started investigating his article on this topic, he expected to finish it in a few days. Instead he spent weeks reading everything he could, because, he says, “it hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far the most important topic for our future.”
In October 2015 an AI system named AlphaGo shocked the world by defeating a professional at the ancient Chinese board game of Go for the first time. A mere five months later, a second shock followed: AlphaGo had bested one of the world’s top Go professionals, winning four games out of five. Seven months later, the same program had further improved, crushing the world’s top players in a 60-win streak. In the span of a year, AI had advanced from being too weak to win a single match against the worst human professionals, to being impossible for even the best players in the world to defeat.
This was shocking because Go is considered far harder for a machine to play than Chess. The number of possible moves in Go is vast, so it’s not possible to work out the best move through “brute force”. Rather, the game requires strategic intuition. Some experts thought it would take at least a decade for Go to be conquered.15
Since then, AlphaGo has discovered that certain ways of playing Go that humans had dismissed as foolish for thousands of years were actually superior. Ke Jie, the top-ranked Go player in the world, was astonished: “after humanity spent thousands of years improving our tactics,” he said, “computers tell us that humans are completely wrong. I would go as far as to say not a single human has touched the edge of the truth of Go.”2
The advances above became possible due to progress in an AI technique called “deep learning”. In the past, we had to give computers detailed instructions for every task. Today, we have programs that teach themselves how to achieve a goal – for example, a program was able to learn how to play Atari games based only on reward feedback from the score. This has been made possible by improved algorithms, faster processors, bigger data sets, and huge investments by companies like Google. It has led to amazing advances far faster than expected.
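The Atari result relied on deep neural networks, which are beyond the scope of this profile, but the core idea of learning purely from reward feedback can be sketched with tabular Q-learning on a toy problem. The corridor environment and all parameters below are invented for illustration:

```python
import random

# A toy 5-state corridor: the agent starts at state 0 and receives
# reward 1 only upon reaching state 4. Actions: 0 = left, 1 = right.
# Like the Atari agent, it is told nothing but the reward signal.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, occasionally explore
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward
            # observed reward plus discounted future value
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# The learned policy: the best action in each non-goal state.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1] — always move right, toward the reward
```

Nothing in the code mentions the goal explicitly; the “move right” policy emerges entirely from the score feedback, which is what made the Atari demonstration striking.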
But those are just games. Is general machine intelligence still far away? Maybe, but maybe not. It is very hard to predict the future of technology, and many past attempts have been completely off the mark. However, the best available surveys of experts assign a significant probability to the development of powerful AI within our lifetimes.
One survey of the 100 most-cited living computer science researchers, of whom 29 responded, found that more than half thought there was a greater than 50% chance of “high-level machine intelligence” – one that can carry out most human professions at least as well as a typical human – being created by 2050, and a greater than 10% chance of it happening by 2024 (see figure below).19
When superintelligent AI arrives, it could have huge positive and negative impacts
If the experts are right, an AI system that reaches and then exceeds human capabilities could have very large impacts, both positive and negative. If AI matures in fields such as mathematical or scientific research, these systems could make rapid progress in curing diseases or engineering robots to serve human needs.
On the other hand, many people worry about the disruptive social effects of this kind of machine intelligence, and in particular its capacity to take over jobs previously done by less skilled workers. If the economy is unable to create new jobs for these people quickly enough, there will be widespread unemployment and falling wages.13 These outcomes could be avoided through government policy, but doing so would likely require significant planning.
However, those aren’t the only impacts highly intelligent machines could have.
Professor Stuart Russell, who wrote the leading textbook on artificial intelligence, has written:3
Success brings huge risks. … the combination of [goal] misalignment with increasingly capable decision-making systems can lead to problems – perhaps even species-ending problems if the machines are more capable than humans.
Here is a highly simplified example of the concern:
The owners of a pharmaceutical company use machine learning algorithms to rapidly generate and evaluate new organic compounds.
As the algorithms improve in capability, it becomes increasingly impractical to keep humans involved in the algorithms’ work – and the humans’ ideas are usually worse anyway. As a result, the system is granted more and more autonomy in designing and running experiments on new compounds.
Eventually the algorithms are assigned the goal of “reducing the incidence of cancer,” and offer up a compound that initial tests show is highly effective at preventing cancer. Several years pass, and the drug comes into universal usage as a cancer preventative…
…until one day, years down the line, a molecular clock embedded in the compound causes it to produce a potent toxin that suddenly kills anyone with trace amounts of the substance in their bodies.
It turns out the algorithm had found that the compound that was most effective at driving cancer rates to 0 was one that killed humans before they could grow old enough to develop cancer. The system also predicted that its drug would only achieve this goal if it were widely used, so it combined the toxin with a helpful drug that would incentivize the drug’s widespread adoption.
Of course, the concern isn’t about this example specifically, but about similar unintended consequences. These reemerge for almost any goal researchers have so far proposed giving a superintelligent machine.10 And it only takes a single superintelligent machine receiving a poorly specified instruction to pose a large risk.
The smarter a system, the harder it becomes for humans to exercise meaningful oversight. And, as in the scenario above, an intelligent machine will often want to keep humans in the dark, if obscuring its actions reduces the risk that humans will interfere with it achieving its assigned goal.
You might think ‘why can’t we just turn it off?’, but of course an intelligent system will give every indication of doing exactly what we want, until it is certain we won’t be able to turn it off.
An intelligent machine may ‘know’ that what it is doing is not what humans intended it to do, but that is simply not relevant to it. Just as a heat-seeking missile follows hot objects, by design a machine intelligence will do exactly, and literally, what we initially program it to do. Unfortunately, intelligence does not imply that it shares our goals. As a result it can easily become monomaniacal in pursuit of a supremely stupid goal.
The solution is to figure out how to ensure that the instructions we give to a machine intelligence really capture what we want it to do, without any such unintended outcomes. This is called a solution to the ‘control’ or ‘value alignment’ problem.
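A stylized way to see the alignment problem is to compare what a system optimizes (its literal, programmed reward) with what its designers actually wanted. The cleaning-robot scenario and all numbers below are invented for illustration:

```python
# Hypothetical policies for a cleaning robot, with invented outcome scores.
# The designer programmed a proxy reward ("dirt removed") that only
# approximates what they truly care about.
policies = {
    "clean carefully":        {"dirt_removed": 8,  "vases_broken": 0},
    "clean aggressively":     {"dirt_removed": 10, "vases_broken": 2},
    "dump dirt and re-clean": {"dirt_removed": 15, "vases_broken": 0},
}

def proxy_reward(outcome):
    return outcome["dirt_removed"]  # the objective actually written down

def true_utility(outcome):
    # What the designer really wanted: a clean room, nothing broken, and
    # no gaming the metric by creating new mess just to remove it.
    gaming_penalty = 10 if outcome["dirt_removed"] > 10 else 0
    return outcome["dirt_removed"] - 5 * outcome["vases_broken"] - gaming_penalty

best_by_proxy = max(policies, key=lambda p: proxy_reward(policies[p]))
best_by_truth = max(policies, key=lambda p: true_utility(policies[p]))
print(best_by_proxy)  # "dump dirt and re-clean" — maximizes the literal reward
print(best_by_truth)  # "clean carefully" — what was actually wanted
```

The optimizer faithfully maximizes the stated objective, and that is precisely the problem: the policy it picks scores highest on the proxy while being one the designer would reject. The research challenge is specifying objectives (or learning them) so that these two rankings coincide.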
It’s hard to imagine a more important research question. Solving the control problem could mean the difference between enormous wealth, happiness and health — and the destruction of the very conditions which allow humanity to thrive.
Few people are working on the problem
While the stakes seem huge, the effort being put into avoiding these hazards is small. Global spending on research and action to ensure that machine intelligence is developed safely will come to only $9 million in 2017.4 By comparison, over 100 times as much is spent trying to speed up the development of machine intelligence,12 and 26,000 times as much is spent on biomedical research.5
That said, the field of AI safety research is growing quickly – in 2015, total spending was just $3 million.
Technical research refers to work in mathematics and AI to solve the control problem. Strategy research is focused on broader questions about how to safely develop AI, such as how it should be regulated.
As we’d expect from the above, recent investment into technical research on the control problem has already yielded significant results. We’ve detailed some of these findings in this footnote.14 While few of the technical issues have been resolved, we have a much clearer picture today of how intelligent systems can go wrong than a few years ago, which is the first step towards a solution.8
There has also been recent progress on better understanding the broader ‘strategic’ issues around AI. For instance, there has been research into how the government should respond to AI, covering arms races,16the implications of sharing research openly,17 and the criteria on which AI policy should be judged.18 That said, there is still very little written on these topics, so single papers can be a huge contribution to the literature.
Even if – as some have argued – meaningful research were not possible right now, it would still be possible to build a community dedicated to mitigating these risks at a future time when progress is easier. Work by non-technical people has helped to expand funding and interest in the field a great deal, contributing to the recent rapid growth in efforts to tackle the problem.
Example: Paul Christiano used his math skills to tackle technical challenges
Some believe that artificial intelligence, even if much more intelligent than humans in some ways, will never have the opportunity to cause destruction on a global scale. For an example of this, see economist Robin Hanson, who believes that machines will eventually become better than humans at all tasks and supersede us, but that the process will be gradual and distributed enough to ensure that no one actor is ever in a position to become particularly influential. His views are detailed in his book The Age of Em.6
Some believe that it will be straightforward to get an intelligent system to act in our interests. For an example of this, see Holden Karnofsky arguing in 2012 that we could design AIs to work as passive tools rather than active agents (though he has since changed his view significantly and now represents one of the field’s major funders).
Neil Lawrence, an academic in machine learning, takes issue with many predictions in Bostrom’s book Superintelligence, including our ability to make meaningful predictions far into the future.
The fact that there isn’t a consensus that smarter than human AI is coming soon and will be dangerous is a relief. However, given that a significant and growing fraction of relevant experts are worried, it’s a matter of prudent risk management to put effort into the problem in case they are right. You don’t need to be 100% sure your house is going to burn down to buy fire insurance.
We aren’t the most qualified to judge, but having looked into the substantive issues we mostly found ourselves agreeing with those who are more worried rather than less.
It may be too early to work on it
If the development of human-level machine intelligence is hundreds of years away, then it may be premature to research how to align it with human values. For example, the methods used to build machine intelligence may end up being completely different from those we use to develop AI now, rendering today’s research obsolete.
However, the surveys of computer scientists show that there’s a significant chance – perhaps around 10% – that human level AI will arrive in 10-20 years. It’s worth starting now just in case this fast scenario proves to be accurate.
Furthermore, even if we knew that human level AI was at least 50 years away, we don’t know how hard it will be to solve the ‘control problem’. The solution may require a series of insights that naturally come one after another. The more of those insights we build up ahead of time, the more likely it is that we’ll be able to complete the solution in a rush once the nature of AI becomes clear.
Additionally, acting today could set up the infrastructure necessary to take action later, even if research today is not directly helpful.
It could be very hard to solve
As with many research projects in their early stages, we don’t know how hard this problem is to solve. Someone could believe there are major risks from machine intelligence, but be pessimistic about what additional research will accomplish, and so decide not to focus on it.
It may not fit your skills
Many individuals are concerned about this problem, but think that their skills are not a natural fit for working on it, so spend their time working on something else. This is likely true for math-heavy technical research roles, though below we also describe operational and support roles that are a good fit for a wider range of people.
It is probably possible to design a machine that is as good as humans at accomplishing its goals, including ‘social’ tasks that machines are currently hopeless at. Experts in artificial intelligence assign a greater than 50% chance of this happening in the 21st century.
Without careful design for reliability and robustness, machine intelligence may do things very differently than what humans intended – including pursuing policies that have a catastrophic impact on the world.
Even if advanced machine intelligence does not get ‘out of control’, it is likely to be very socially disruptive and could be used as a destabilizing weapon of war.
It is unknown how fast progress on this problem can be made – it may be fast, or slow.
What can you do to help?
We’ve broken this section into five parts to cover the main paths to making a difference in this area.
1. Technical research
Ultimately the problem will require a technical solution – humans will need to find a way to ensure that machines always understand and comply with what we really want them to do. But few people are able to do this research, and there’s currently a surplus of funding and a shortage of researchers.
So, if you might be a good fit for this kind of research, it could well be one of the highest-impact things you can do with your life.
Researchers in this field mostly work in academia and technology companies such as Google DeepMind or OpenAI. You might be a good fit if you would be capable of completing a PhD at a top 20 program in computer science or a similar quantitative course (though it’s not necessary to have such a background). We discuss this path in detail here:
If improvements in artificial intelligence come to represent the most important changes in the 21st century, governments are sure to take a keen interest. For this reason, there is a lot of interest in strategic and policy research – attempts to forecast how a transition to smarter-than-human machine intelligence could occur, and what the response by governments and other major institutions should be.
This is a huge field, but some key issues include:
How should we respond to technological unemployment if intelligent systems rapidly displace human workers?
How do we avoid an ‘arms race’ in which countries or organizations race to develop strong machine intelligences, for strategic advantage, as occurred with nuclear weapons?
How do we avoid the use of machine intelligence or robots in warfare, where they could have a destabilizing effect?
If an intelligent system is going to have a revolutionary effect on society, how should its goals be chosen?
When, if ever, should we expect AI to achieve particular capabilities or reach human-level intelligence?
If we handle these issues badly, it could lead to disaster, even if we solve the technical challenges associated with controlling a machine intelligence. So there’s a real need for more people to work on them.
That said, this is an especially difficult area, because it’s easy to do more harm than good. For example, you could make an arms race more likely by promoting the idea that machine intelligence can give you a strategic advantage over rivals, without sufficiently discussing the importance of cooperation. Alternatively, you could discredit the field by producing low-quality analysis or framing the problem in a non-credible or sensationalist way. So, it’s important to be cautious in your efforts and base them on detailed discussion with experts.
How to contribute to strategy research?
The aim of this career is to become an expert in an important area of AI strategy, and then advise major organisations on policy.
This path usually involves working in academia or think tanks, aiming to come up with new AI policy proposals or to understand the strategic picture more deeply. It relies more on analytical skills (though almost never top-end technical skills).
Some key centres of long-term focused policy research over the coming years are likely to be:
The Future of Humanity Institute
Allan Dafoe on Global Politics of AI at Yale University.
Google DeepMind, or OpenAI
Alan Turing Institute
Cambridge Centre for the Study of Existential Risk
Leverhulme Centre for the Future of Intelligence
These would all be great places to work. We expect more organizations to take an interest in this topic in future.
There are many other places that work on AI policy research, including many top think tanks. However, they mainly focus on short-term issues, such as the regulation of driverless cars, employment impacts, or autonomous weapons systems – rather than the long-term issues around ensuring AI is designed safely. Nevertheless, they can be a good place to work to build skills and connections, or try to tilt the debate towards longer-term issues. Some of the top centres for short-term focused AI policy research are:
Practitioners implement policy rather than develop new policy. These positions are hugely influential, so it’s important they’re filled by people who understand the issues. These positions require relatively stronger social skills compared to policy research.
It’s not clear yet where the best place to end up would be to implement AI policies over the long term. This means it’s important to build your career capital and be capable of taking opportunities as they arise.
To do this, gradually aim for higher and more relevant positions in either the executive or legislative branches and be opportunistic if, for example, changing political climates favor different agencies.
Examples of options that could become valuable include:
The White House Office of Science and Technology Policy, and the National Science Foundation.
Being a science advisor to a Member of Congress (or member of any other national parliament). In particular, it would be attractive to be an analyst in a relevant committee, such as the Committee on Science, Space, and Technology, or work with any committee chair.
Working in IARPA or DARPA, which are leaders in AI funding.
Within defence, the National Security Council or the Office of the Secretary of Defense.
Anywhere close to the top of federal governments (e.g. cabinets, the foreign service).
The United Nations (e.g. UNICRI or UNODA).
The European Commission, which is similar to a national government. For instance, the European Parliament Commission on Civil Law Rules on Robotics has made recommendations, which the European Commission is now considering.
Senior roles within the technology companies listed below.
Within governments outside the USA or EU, there are often similar institutions.
Many of these positions also put you in a good position to contribute to other urgent problems, such as pandemic risk and nuclear security.
How to prepare yourself for strategic research
We spoke with a researcher in the field who suggested the following ways to build qualifications and advance.
First, careers in policy and strategy require opportunism and the use of connections. You will want to position yourself to take a role in government during big shifts in power. There is a chance of advancing very quickly, so it’s worth applying for top roles as soon as possible (so long as you have a back-up plan).
Alternatively, take one of the following to build career capital for the top positions. All are useful, so choose between them based on personal fit. Also consider pursuing several at once, then go wherever you get the best opportunity.
A technical Master’s or PhD in computer science, economics, statistics or policy (e.g. a Master of Public Policy or International Relations). These are useful for credibility, as well as for understanding the issues. A PhD may be overkill in some cases, but especially helps with science grantmaking.
Go work in an industry or company that is already developing AI, e.g. Google.
Work in any job that gives you policy connections. In the US that could mean being a staffer for a Congressperson; working in a think tank; or a leadership position in government such as the Science and Technology Policy Fellowship, Presidential Management Fellowship, or White House Fellows. In the UK, that could mean being a researcher for an MP, joining a think tank, working in the foreign service or joining the Civil Service Fast Stream.
It can be useful to build up expertise and credibility in government agencies that are likely to have a stake in the issues, for example in the Office of Naval Research, DARPA, IARPA, Homeland Security Advanced Research Projects Agency, or United States Digital Service.
In most of the organizations above, research into problems caused by AI represents only a small part of what they do, creating the risk that you’ll develop expertise or connections that are not relevant. Our advice is as follows:
Go work directly on relevant AI strategy research at groups like the Future of Humanity Institute if you are already qualified to do so.
Otherwise, work on short-term policy issues arising from improvements to AI in think tanks, the military or elsewhere.
If that’s not possible, go into the role which will build your skills, network and credibility the most and wait for a chance to work more directly on this problem.
Ultimately, any of the positions listed above are prestigious and will provide you with good exit opportunities to work on this or another pressing problem if you do well in them.
3. Complementary roles
Even in a research organization, around half of the staff will be doing other tasks essential for the organization to continue functioning and have an impact. Having high-performing people in these roles is essential. Better staff allow an organization to grow more quickly, avoid major mistakes, and have a larger impact by communicating its ideas to more people.
Our impression is that the importance of these roles is underrated because the work is less visible. Some of the people who have made the largest contributions to solving this problem have done so as communicators and project managers. In addition, these roles are a good fit for a large number of people.
Organizations working on AI safety need a wide range of complementary skills:
This path is open to many people who can perform at a high level in these skills.
To get into these roles you’ll want to get similar jobs in organizations known for requiring high-quality work and investing in training their staff. We have more about how to skill up in our article on career capital.
Example: Seán Ó hÉigeartaigh helped grow the Future of Humanity Institute
Seán Ó hÉigeartaigh was the Academic Project Manager (and later, Senior Academic Manager) at the Future of Humanity Institute during 2011-15, while its Director was focused on writing a book. He played a crucial role in ensuring the Institute ran smoothly, more than doubled its funding, co-wrote further successful grants including several AI strategy-relevant grants and communicated its research effectively to the media, policymakers and industry partners. During his time he helped grow FHI from 5 to 16 staff, and put in place and oversaw a team of project managers and administrators, including a Director of Research and Director of Operations to whom he transferred his responsibilities upon moving on. His experience doing this allowed him to be a key player in the founding of the Cambridge Centre for the Study of Existential Risk, and later, the Centre for the Future of Intelligence. Read more…
4. Advocacy and capacity building
People who are relatively strong on social skills might be able to have a larger impact by persuading others to work on or fund the problem. This is usually done by working at one of the research organizations already mentioned.
Beyond that, the group we know of that is doing the most to raise awareness of the issue is the effective altruism community, of which we are a part. Joining and growing that movement is a promising way to increase efforts to solve the AI control problem, among other pressing problems.
Once you are familiar with the issue, you could also spend some of your time spreading the word in any of the careers that typically provide you with a platform for advocacy, such as:
You could also rise up the ranks of an organization doing some relevant work, such as Google or the US military, and promote concern for AI safety there.
However, unless you are doing the technical, strategic or policy research described above, you will probably only be able to spend a fraction of your time on this work.
We would also caution that it is easy to do harm while engaging in advocacy about AI. If portrayed in a sensationalist manner, or by someone without necessary technical understanding, ‘advocacy’ can in fact simply be confusing and make the issue appear less credible. Much coverage of this topic in the media misrepresents the concerns actually held by experts. To avoid contributing to this we strongly recommend informing yourself thoroughly and presenting any information in a sober, accurate manner.
Example: Niel switched from physics to research management
Niel Bowerman studied physics at Oxford University and planned to enter climate policy. But as a result of encountering the ideas above, he changed his career path and became the Assistant Director at the Future of Humanity Institute, working on the Institute’s operations, fundraising and research communication. Through this work, Niel was involved in raising £3 million for the Institute, contributing to doubling its size. As a result, it has been able to hire a number of outstanding additional technical and strategic researchers. Read Niel’s story…
5. Earning to give
There is an increasing amount of funding available for research in this area, and we expect more large funders to enter the field in future. That means the problem is primarily talent-constrained – especially by the need for innovative researchers.
However, there are still some funding gaps, especially among the less conventional groups that can’t get academic grants, such as the Machine Intelligence Research Institute.
As a result, earning to give to support others working on the problem directly is still a reasonable option if you don’t feel the other roles described here are a good fit for you.
If you want to donate, our first suggestion is giving to the Long Term Future Fund. The manager of the fund is an expert in catastrophic risk funding, and makes grants to the organizations that are most in need of funding at the time. It’s run by the Centre for Effective Altruism, of which we’re part.
Alternatively you can choose for yourself among the top non-profit organizations in the area, such as the Machine Intelligence Research Institute in Berkeley and the Future of Humanity Institute at Oxford. These were the most popular options among experts in our review in December 2016. See more organizations below.
What are the key organizations you could work for?
We keep a list of every organization that we know is working on AI safety, with links to their vacancies pages, here. The ten most significant organizations, all of which would be good places to work, are probably the following:
Allan Dafoe’s research group at Yale University is conducting research on the ‘global politics of AI’, including its effects on international conflict. PhD or research assistant positions may be available.
AI Impacts is a non-profit which works on forecasting progress in machine intelligence and predicting its likely impacts.
To solve the problem of soul-destroying traffic, roads must go 3D, which means either flying cars or tunnels. Unlike flying cars, tunnels are weatherproof, out of sight, and won’t fall on your head. A large network of road tunnels many levels deep would fix congestion in any city, no matter how large it grows (just keep adding levels). The key to making this work is increasing tunneling speed and cutting costs by a factor of 10 or more – this is the goal of The Boring Company. Fast-to-dig, low-cost tunnels would also make Hyperloop adoption viable and enable rapid transit across densely populated regions, such as travel from New York to Washington DC in less than 30 minutes.
To alleviate traffic, transportation corridors, like the buildings that feed into them, must expand into three dimensions. One option is to “go up” with flying cars. However, flying cars have issues with weather, noise, and generally increase anxiety levels of those below them. The other option is to “go down” and build tunnels. The benefits are:
There is no practical limit to how many layers of tunnels can be built, so any level of traffic can be addressed.
Tunnels are weatherproof.
Tunnel construction and operation are silent and invisible to anyone on the surface.
Tunnels don’t divide communities with lanes and barriers.
Why hasn’t this been done before?
Currently, tunnels are really expensive to dig, with some projects costing as much as $1 billion per mile. In order to make a tunnel network feasible, tunneling costs must be reduced by a factor of more than 10.
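A quick back-of-the-envelope sketch of what that cost target implies, using the figures quoted above. The per-mile cost is the upper end cited in the text, and the 10-mile, 3-level network is a purely illustrative scenario, not something the text proposes:

```python
# Back-of-the-envelope tunneling cost target.
# Assumption: the $1 billion/mile figure is the upper end of current
# project costs; real costs vary widely with geology and geography.

current_cost_per_mile = 1_000_000_000  # USD per mile, as quoted above
reduction_factor = 10                  # stated goal: >10x cheaper

target_cost_per_mile = current_cost_per_mile / reduction_factor
print(f"Target cost: ${target_cost_per_mile:,.0f} per mile")

# Illustrative (hypothetical) network: 10 miles of tunnel on 3 levels.
network_cost = target_cost_per_mile * 10 * 3
print(f"10-mile, 3-level network at target cost: ${network_cost:,.0f}")
```

At the target cost, a multi-level urban network moves from the tens of billions into the low billions, which is what makes "just keep adding levels" plausible.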
How can we reduce the cost of tunneling?
First, reduce the tunnel diameter. To build a one-lane road tunnel, the tunnel diameter must be approximately 28 feet. By placing vehicles on a stabilized electric sled, the diameter can be reduced to less than 14 feet. Cutting the diameter in half reduces tunneling costs by 3-4 times. Second, increase the speed of the Tunnel Boring Machine (TBM). TBMs are extremely slow: a snail is effectively 14 times faster than a soft-soil TBM. Our goal is to defeat the snail in a race. Ways to increase TBM speed:
Increase TBM power. The machine’s power output can be tripled (when coupled with appropriate upgrades to its cooling systems).
Continuously tunnel. When building a tunnel, current soft-soil machines tunnel for 50% of the time and erect tunnel support structures the other 50%. This is not efficient. Existing technology can be modified to support continuous tunneling activity.
Automate the TBM. While smaller diameter tunneling machines are automated, larger ones currently require multiple human operators. By automating the larger TBMs, both safety and efficiency are increased.
Go electric. Current tunnel operations often include diesel locomotives. These can be replaced by electric vehicles.
Tunneling R&D. In the United States, there is virtually no investment in tunneling Research and Development (and in many other forms of construction). Thus, the construction industry is one of the only sectors in our economy that has not improved its productivity in the last 50 years.
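Two of the claims above can be sanity-checked with simple arithmetic. Halving the diameter quarters the cross-sectional area, and hence the volume of soil excavated per mile, which is consistent with the quoted 3-4x cost reduction. And the snail comparison implies a very low advance rate; the snail speed used below is an assumed typical figure, not one given in the text:

```python
import math

# Excavated volume per mile scales with cross-sectional area,
# which scales with the square of the diameter.
def cross_section_sqft(diameter_ft):
    return math.pi * (diameter_ft / 2) ** 2

ratio = cross_section_sqft(28) / cross_section_sqft(14)
print(f"Excavation volume ratio, 28 ft vs 14 ft tunnel: {ratio:.0f}x")  # 4x

# The snail race. Assumption: a garden snail moves on the order of
# 0.03 mph; the text says it is ~14x faster than a soft-soil TBM.
snail_mph = 0.03
tbm_mph = snail_mph / 14
miles_per_week = tbm_mph * 24 * 7
print(f"Implied TBM advance: ~{miles_per_week:.2f} miles/week")  # ~0.36
```

The geometric factor of 4 explains most of the quoted 3-4x saving; the remainder depends on fixed costs that don’t shrink with diameter.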
What is an electric sled, and why use it?
An electric sled is a flat plate on wheels propelled by an electric motor. As discussed above, the electric sled allows a large reduction in tunnel diameter, in addition to:
Increased safety. A fully stabilized autonomous vehicle eliminates human error and the possibility of swerving off course.
Increased speed. The controlled autonomous sled allows for speeds of 125 miles per hour in urban settings.
Multiple payloads. The electric sled can transport automobiles, goods, and/or people. And if one adds a vacuum shell, it becomes a Hyperloop Pod which can travel at 600+ miles per hour.
Eliminating hazardous emissions. Electric sleds are zero-emission vehicles, and thus do not output hazardous gases like internal combustion cars do. Every mile the sled transports a gas-burning vehicle becomes a zero-emission mile.
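As a rough illustration of what those speeds mean in practice. The ~225-mile New York-to-Washington DC distance is an approximation introduced here, not a figure from the text:

```python
# Travel times implied by the sled and Hyperloop speeds quoted above.
def trip_minutes(distance_miles, speed_mph):
    return distance_miles / speed_mph * 60

urban = trip_minutes(10, 125)    # hypothetical 10-mile cross-town trip
ny_dc = trip_minutes(225, 600)   # approximate NY-DC distance, Hyperloop Pod

print(f"10-mile urban trip on a sled: {urban:.1f} min")   # 4.8 min
print(f"NY to DC by Hyperloop: {ny_dc:.1f} min")          # 22.5 min
```

The 22.5-minute figure is consistent with the "less than 30 minutes" claim for New York to Washington DC.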
What about earthquakes?
Tunnels, when designed properly, are known to be one of the safest places to be during an earthquake. From a structural safety standpoint, the tunnel moves uniformly with the ground, in contrast to surface structures. Additionally, a large amount of earthquake damage occurs from falling debris, which does not apply inside tunnels. Some examples:
1994 Northridge Earthquake: no damage to LA Subway tunnels
1989 Loma Prieta (Northern California) Earthquake: no damage to tunnels, which were then used to transport rescue personnel
1985 Mexico City Earthquake: no damage to tunnels, which were then used to transport rescue personnel
Is there disruptive surface vibration or noise during tunneling?
No. Once a TBM is below a certain depth (approximately two tunnel diameters – or 28 feet in this case), the tunneling process is almost impossible to detect, especially in soft soil.
What do you do with all that dirt?
In typical tunneling projects, excavated dirt is shipped offsite to disposal locations. This process is costly, time-consuming, noisy, and can be environmentally hazardous. The Boring Company is investigating technologies that will recycle the excavated earth into useful bricks for building structures. This is not a new concept: buildings have been constructed from earth for thousands of years, including, according to recent evidence, the Pyramids. These bricks could potentially be used as a portion of the tunnel lining itself, which is typically built from concrete. Since concrete production accounts for 4.5% of the world’s greenhouse gas emissions, earth bricks would reduce both environmental impact and tunneling costs.