Physicist Max Tegmark explains why AI will help humanity flourish

"There's no law of physics that says we can't build machines more intelligent than we are"

Produced in partnership with Science at Pioneer Works

Max Tegmark warns us to think quickly and openly about the big problems we will face in the near future, when artificial intelligence may move us in directions we are unprepared for as a species. Tegmark is a cosmologist and currently a professor of physics at MIT in Cambridge, MA.

Massive co-founder Nadja Oertelt sat down with physicist Max Tegmark to discuss his new book and his optimistic take on how artificial intelligence will impact the world.


Nadja Oertelt: Can you tell me about your path from physics to writing about artificial intelligence in Life 3.0?

Max Tegmark: I've always felt that the two greatest mysteries of science were our universe out there and our universe in our heads. After spending many years studying the cosmos, I've become increasingly fascinated with the other great mystery, and in recent years I've been doing AI research, where I feel that we physicists actually have a lot to bring to the table.

As a physicist, I love to study systems that do interesting things, and I think there's nothing more interesting a system can do than be intelligent. I like to look at a blob of stuff and ask myself, "How can these particles moving around actually remember, compute, learn, and maybe even experience?" I look at it as a physicist, so I look both at biological systems like our human brains and at the artificial intelligence systems that we build.

How would you describe intelligence, more generally, as a physicist?

We've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics that says that we can't build machines more intelligent than [we are] in all ways.

This suggests to me that we've only seen the tip of the intelligence iceberg, and that there's this amazing potential latent in nature that we can unlock and use to help humanity flourish.

I take a broad view, a very inclusive view, of intelligence in [my] book. I define it simply as the ability to accomplish complex goals, which means that it's not something one-dimensional you can measure with a single number like an IQ. It's a spectrum. How good are you at multiplying numbers? How good are you at remembering things? How good are you at driving cars? How good are you at empathy, and so on?
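
To make this multi-dimensional view concrete, here is a minimal illustrative sketch in Python. It is not from Tegmark's book; the task names and scores are invented for illustration. It represents an agent's intelligence as a profile of per-task abilities, so comparisons only make sense task by task rather than through a single IQ-like scalar:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityProfile:
    """Intelligence as a spectrum of per-task abilities, not one number."""
    # Maps a task name to an illustrative skill score in [0.0, 1.0].
    skills: dict = field(default_factory=dict)

    def better_at(self, other: "CapabilityProfile", task: str) -> bool:
        # Comparisons are meaningful per task; there is no single
        # scalar that ranks two whole profiles against each other.
        return self.skills.get(task, 0.0) > other.skills.get(task, 0.0)

# Invented numbers, loosely echoing the interview's examples.
human = CapabilityProfile({"arithmetic": 0.3, "chess": 0.6, "empathy": 0.9})
machine = CapabilityProfile({"arithmetic": 1.0, "chess": 0.95, "empathy": 0.1})

print(machine.better_at(human, "arithmetic"))  # True: narrow superiority
print(machine.better_at(human, "empathy"))     # False: humans still ahead
```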

What we've seen so far is that machines are already better than humans at a lot of narrow tasks, like arithmetic, playing chess, and soon, driving cars. But the holy grail among AI researchers is to build machines that also have broad intelligence, that can learn any task at the human level and then ultimately beyond.

If that ever happens, that's going to be the biggest change in the history of life on Earth, because then, for the first time, machines will even be able to perform the task of building smarter machines. From then on, all the technologies that get invented will be invented by machines, not humans, because they can do it faster and cheaper. We might find that a lot of the things we thought were going to take thousands of years and feel very sci-fi might actually happen in our lifetime.

There's been lots of talk about AI disrupting the job market and enabling new weapons. But very few scientists talk seriously about the elephant in the room: what will happen once machines outsmart us at all tasks? That's why I wrote this book.

Instead of shying away from this question, I feel we need people to start joining this conversation now. It might take decades to get the answers we need, so now is the time to start working on it.

When we imagine possible futures inhabited by artificial intelligence, they are usually described in apocalyptic and dark terms. How do we prevent our worst fears from being actualized?

I'm optimistic that we can create an awesome future with artificial intelligence if we do our homework and get the answers we need by the time we need them. But it's not going to be automatic.

You can be optimistic that the sun is going to rise tomorrow morning, regardless of what we do. But you cannot be optimistic that you're going to pass that exam if you can't be bothered to study for it. Similarly with artificial intelligence, it's not going to be fine by default. But if we really have a clear vision of where we want to go and really work hard on making sure that we build something that is actually beneficial, there is a huge opportunity to help humanity flourish.

I feel that every single way in which 2017 is better than the Stone Age is because of technology. Everything I love about civilization is the product of intelligence. So if we can amplify human intelligence with machine intelligence, we have the potential to solve all these thorny problems that are plaguing us today.

So there are manifold possibilities for how AI can impact the future: economically, socially, ethically. Where and how do we start taking action?

Well, I mean, we humans evolved to flourish picking bananas and throwing rocks at each other, you know, with that level of technology. So it's actually remarkable that we're succeeding as well as we are, living in these concrete boxes and driving around in these little metal boxes with wheels on them, and doing things that evolution hasn't prepared us for at all.

But as we create a more and more complex world, we also shouldn't be surprised if sometimes we fail to understand its complexities, and elect politicians who totally don't get it and get us into big problems. So what we need is a real laser focus on the key questions that have to be answered, and to get all of humanity really focused on this, not just watching reruns of Keeping Up with the Kardashians.

I feel this is the most important conversation of our time, and everybody needs to join it, not just tech nerds like myself working on the important technical questions.

Economists have to figure out how one can distribute the wealth that's produced by AIs so that everybody gets a reasonable share. And psychologists and social scientists have to think about how people can find purpose and meaning in their lives even if they're unemployable.

And everybody has to think about what sort of future we ultimately want to create with artificial intelligence. Because if we have no idea what we want, we're unlikely to get it.

Can you talk a bit about how your views on the future of AI differ from those of Yann LeCun at Facebook, who thinks generalized intelligence won't happen for a long time?

I've had a lot of conversations with Yann about this, most recently at the Asilomar conference we held in January. My feeling is that Yann has a longer timeline than, for example, the CEO of Google DeepMind, [as do] a number of other AI researchers, whom I respect greatly.

If you think that general artificial intelligence beyond the human level isn't going to happen for hundreds of years, then of course, there's not much point in thinking about it today, right? Whereas if you think it might happen in 30 years ... then this should be the number-one challenge of our time to really think through.

I totally respect the point of view of Yann and others who have a longer timeline. They might very well be right. But they might also be wrong. If there's a significant chance that the biggest event ever in human history is going to happen soon, it's better to be prepared a little too early than a little too late, right?

Another difference of opinion among AI researchers concerns what happens if we get there, where machines can do everything we can. Then what? There are some people who are very gloomy and think it's pretty much guaranteed to be a disaster. But there are other people who are quite convinced ... that everything is going to be fine, who think that machines can't have goals and are just going to be our tools, much like today's computers, so there's not much to worry about.

I respect Yann's point of view greatly, but I feel he is more on the optimistic end of this spectrum, in that he doesn't seem especially concerned that anything bad is likely to happen. I'm more in the middle... I'm optimistic that this can be a great thing, but I think there's a very real risk, also, that it'll be the worst thing ever to happen to humanity.

I don't think we should panic, but I do think we should put serious effort into planning ahead to just make sure it becomes a good thing, not a bad thing. There are very concrete scenarios where things go wrong, and we want to just make sure we avoid them. The better we think them through in advance, the easier it is to avoid them, right?

If you're flying your rocket through the asteroid belt, it's good to find out where the asteroids are in advance. Then it becomes quite easy to not hit them. You don't want to just drift into it, without any math or without any plan.

What is of value in the future when AI can do everything better than humans?

I actually talked about this quite a bit in chapter six of the book. Since I'm a physicist, I couldn't resist writing a lot about the ultimate limits of what's possible in intelligence based on the laws of physics.

So if you think big, and life spreads throughout our universe and develops the technology to rearrange particles in whatever way we want, what is the new currency going to be, really? You know, if you can rearrange carbon into gold or rearrange a star into a giant computer, just like that, then it's probably going to be information that is the commodity that really counts.

And it's going to be certain forms of information: answers to certain very hard questions. For example, a future civilization might want to know how to prevent dark energy from ruining all their communications plans for the distant future. And so if some other civilization has figured that out, it's very valuable for them to know.

You might imagine future civilizations trading sophisticated mathematical theorems with each other. Or maybe something like blockchain on steroids, which for some reason turns out to be really valuable in that day and age.

Answers to certain problems might be very, very hard to come by, and therefore very, very valuable. So those might be the bitcoins of the future, the bitcoins of the future cosmos.


Tegmark's new book, Life 3.0: Being Human in the Age of Artificial Intelligence, is available now.

Parts of this interview have been edited and condensed for clarity.