The World’s Biggest Problems And Why They’re Not What First Comes To Mind

Artificial intelligence and the 'alignment problem'

Around 1800, civilisation underwent one of the most profound shifts in human history: the industrial revolution.

Looking forward, what might be the next industrial revolution – the next pivotal event in history that shapes what happens to all future generations? If we could identify such a transition, that may well be the most important area in which to work.

One candidate is bioengineering – the ability to fundamentally redesign human beings – as covered by Yuval Noah Harari in Sapiens.

But we think there's an even bigger issue that's even more neglected: artificial intelligence.

Billions of dollars are spent trying to make artificial intelligence more powerful, but hardly any effort is devoted to making sure that those added capabilities are implemented safely and for the benefit of humanity.

This matters for two main reasons.

First, powerful AI systems have the potential to be misused. For instance, if the Soviet Union had developed nuclear weapons far in advance of the USA, it might have been able to use them to establish itself as the leading global superpower. Similarly, the development of AI might destabilise the global order or lead to a concentration of power, by giving one nation far more power than it currently has.

Second, there is a risk of accidents when powerful new AI systems are deployed. This is especially pressing due to the "alignment problem."

This is a complex topic, so if you want to explore it properly, we recommend reading this article by Wait But Why, or watching the video below.

If you really have time, read Professor Nick Bostrom's book, Superintelligence. But here's a quick introduction.

In the 1980s, chess was held up as an example of something a machine could never do. But in 1997, world chess champion Garry Kasparov was defeated by the computer program Deep Blue. Since then, computers have become far better at chess than humans.

In 2004, two experts in artificial intelligence used truck driving as an example of a job that would be really hard to automate. But today, self-driving cars are already on the road.

In 2014, Professor Bostrom predicted that it would take ten years for a computer to beat the top human player at the ancient Chinese game of Go. But it was achieved in March 2016, when Google DeepMind's AlphaGo defeated Lee Sedol, one of the world's top players.

The most recent of these advances are possible due to progress in a type of AI technique called "machine learning". In the past, we mostly had to give computers detailed instructions for every task. Today, we have programs that teach themselves how to achieve a goal. The same algorithm that plays Space Invaders below has also learned to play about 50 other arcade games. Machine learning has been around for decades, but improved algorithms (especially "deep learning" techniques), faster processors, bigger data sets, and huge investments by companies like Google have led to amazing advances far faster than expected.

[Video: Google DeepMind's agent playing Space Invaders at a superhuman level]
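To make "teach themselves how to achieve a goal" concrete, here is a minimal sketch of the core idea, reinforcement learning. It is a toy tabular Q-learner on a five-state corridor, not DeepMind's actual system, and every name and number in it is illustrative: the agent starts knowing nothing, tries actions, and improves purely from reward.

```python
# A toy sketch of reinforcement learning (tabular Q-learning), purely
# illustrative – not DeepMind's deep Q-network. The agent learns to walk
# along a five-state corridor to a goal, guided only by reward.
import random

N_STATES = 5                       # positions 0..4; state 4 is the goal
ACTIONS = (-1, 1)                  # step left or step right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Value estimates start at zero for every (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def best_action(s):
    return max(ACTIONS, key=lambda act: Q[(s, act)])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS else best_action(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core update: nudge the estimate towards reward + discounted future value.
        Q[(s, a)] += ALPHA * (reward + GAMMA * Q[(s_next, best_action(s_next))] - Q[(s, a)])
        s = s_next

# The learned policy: "move right" in every state.
print({s: best_action(s) for s in range(N_STATES - 1)})
```

DeepMind's Atari agent combined this kind of update with a deep neural network that reads raw pixels, which is what allowed a single algorithm to handle dozens of different games.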

Due to this, many experts think human-level artificial intelligence could easily happen in our lifetimes. Here are the results of a survey of 100 of the most-cited AI scientists:

[Chart: survey estimates of when human-level AI will arrive]

You can see the experts give a 50% chance of human-level AI happening by 2050, just 35 years in the future. Admittedly, they are very uncertain, but high uncertainty cuts both ways: it could arrive sooner than expected as well as later. You can read much more about when human-level AI might happen here.
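As a rough illustration of what a headline figure like "50% by 2050" means (the responses below are invented stand-ins, not the survey's actual data): each expert names a year by which they assign a 50% probability to human-level AI, and the summary figure is the median of those years.

```python
# Hypothetical sketch of aggregating timeline forecasts. The years are
# invented stand-ins, not the survey's actual responses.
from statistics import median

# Each entry: one expert's year for a 50% chance of human-level AI.
fifty_percent_years = [2035, 2040, 2045, 2050, 2050, 2060, 2075, 2090, 2150]

print(median(fifty_percent_years))  # -> 2050 for this made-up sample
```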

Why is this important? Gorillas are faster than us, stronger than us, and have a more powerful bite. But there are only 100,000 gorillas in the wild, compared to seven billion humans, and their fate is up to us. A major reason for this is a difference in intelligence.

Right now, computers are only smarter than us in limited ways (e.g. playing chess), and this is already transforming the economy. The key moment, however, is when computers become smarter than us in most ways, just as we're smarter than gorillas.

This transition could be hugely positive, or hugely negative. On the one hand, just as the industrial revolution automated manual labour, the AI revolution could automate intellectual labour, unleashing unprecedented economic growth.

On the other hand, we couldn't guarantee staying in control of a system that's smarter than us – it would be more strategic than us, more persuasive, and better at solving problems. What happens to humanity after its invention would be up to the system, not to us. So we need to make sure any such AI system shares our goals, and we only get one chance to get the transition right.

This, however, is not easy. No-one knows how to specify human goals and moral behaviour in computer code. Within computer science, this is known as the alignment problem.
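A hypothetical sketch of why this is hard, with invented names and numbers: suppose we want a robot to clean a room, but the objective we actually coded only penalises the mess its sensor can see, plus effort. The optimal action under the written objective is then not the one we intended.

```python
# Hypothetical illustration of a misspecified objective; every name and
# number is invented for this sketch, not taken from any real system.
# Intended goal: remove the mess. Written goal: minimise *visible* mess.

ACTIONS = {
    "clean":      {"visible_mess": 0, "actual_mess": 0, "effort": 2},
    "hide_mess":  {"visible_mess": 0, "actual_mess": 3, "effort": 1},
    "do_nothing": {"visible_mess": 3, "actual_mess": 3, "effort": 0},
}

def written_objective(outcome):
    # What we coded: penalise only what the sensor sees, plus effort.
    return -outcome["visible_mess"] - outcome["effort"]

def intended_objective(outcome):
    # What we actually wanted: penalise the real mess, plus effort.
    return -outcome["actual_mess"] - outcome["effort"]

print(max(ACTIONS, key=lambda a: written_objective(ACTIONS[a])))   # -> hide_mess
print(max(ACTIONS, key=lambda a: intended_objective(ACTIONS[a])))  # -> clean
```

Here the gap between the written and the intended objective is trivial to spot and patch; the concern is that with a system far smarter than us, we may not get the chance to patch it after deployment.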

Solving the alignment problem might be one of the most important research questions in history, but today it's mostly ignored.

The number of full-time researchers at or beyond the postdoc level working directly on the alignment problem (also known as the control problem) is under 30 (as of early 2018), making it some 100 times more neglected than biosecurity.

At the same time, there is momentum behind this work. In the last five years, the field has gained prominent academic and industry supporters, such as Stuart Russell, author of a leading AI textbook, and Stephen Hawking, as well as major funders, like the entrepreneur and billionaire Elon Musk. If you're not a good fit for technical research yourself, you can contribute by working as a research manager or assistant, or by donating to and raising funds for this research.

This will also be a huge issue for governments. AI policy is fast becoming an important area, but policy-makers are focused on short-term issues, like how to regulate self-driving cars and respond to job losses, rather than on the key long-term issue: the future of civilisation.

You can find out how to contribute in our full profile.

Of all the issues we've covered so far, solving the alignment problem and managing the transition to powerful AI are among the most important, but also by far the most neglected. Despite also being harder to solve, we think they're likely to be among the most high-impact problems of the next century.

This was a surprise to us, but we think it's where the arguments lead. These days we spend more time researching machine learning than malaria nets.

Read more about why we think reducing extinction risks should be humanity's key priority.

Dealing with uncertainty, and "going meta"

Our views have changed a great deal over the last eight years, and they could easily change again. We could commit to working on AI or biosecurity, but we might discover something even better in the coming years. Are there problems that will remain important whatever we learn, despite all this uncertainty?

Eventually, we decided to work on career choice, which is why we're writing this article. In this section, we'll explain why, and suggest other problems that are more attractive the more you're uncertain. We think these are potentially competitive with AI and biosecurity, and where to focus mainly comes down to personal fit.

Global priorities research

If you're uncertain which global problem is most pressing, here's one answer: "more research is needed". Each year, governments spend over $500 billion trying to make the world a better place, but only a tiny fraction of that goes towards research into how to spend those resources most effectively – what we call "global priorities research".

As we've seen, some approaches are far more effective than others. So this research is hugely valuable.
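As a back-of-the-envelope sketch with invented numbers: if one intervention produces a unit of good for $50 and another for $5,000, research that redirects a fixed budget from the second to the first multiplies its impact a hundredfold.

```python
# Hypothetical cost-effectiveness comparison; both figures are invented.
budget = 1_000_000  # dollars

cost_per_unit_of_good = {"intervention_A": 50, "intervention_B": 5_000}

for name, cost in cost_per_unit_of_good.items():
    print(name, budget // cost, "units of good")
# -> intervention_A achieves 20,000 units, intervention_B only 200:
#    knowing which is which multiplies the budget's impact 100-fold.
```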

A career in this area could mean working at the Open Philanthropy Project or the Future of Humanity Institute, in economics academia, at think tanks, and elsewhere. Read more about how to contribute in the full profile.