Terence Tao snorts and waves his hands dismissively when he hears that he is the most intelligent human being on the planet, according to a number of online rankings, including a recent one conducted by the BBC. He is, however, indisputably one of the best mathematicians in history. When he was two, his parents saw him teaching a five-year-old boy to count.
Q. So do you think that artificial intelligence can become better than you at an activity as creative as mathematical research?
A. I think they’ll be very useful assistants. They are getting good at solving problems for which there is a lot of previous data about similar problems. The thing is that mathematicians usually only publish our success stories; we don’t share the things we try that don’t work. And one of the reasons why human mathematicians become good at their job is that they make a lot of mistakes and learn what doesn’t work. AIs don’t have this data.
Q. So?
A. All modern AI systems are based on huge amounts of data. If you want to teach an AI what a glass of water looks like, it needs millions of example images of a glass of water. If I pour a glass of water and show it to you, you say, “Okay, I get it.” There needs to be a breakthrough in teaching AIs to learn from very small amounts of data. And we don’t know how to do this at all. If we can figure it out, then maybe AI can become as good as humans at really creative tasks.
Q. What do you think about artificial intelligence systems being in the hands of the ultra-rich, like Elon Musk?
A. There are some open-source AI models out there, although they are two or three years behind the big commercial models. It’s not good for something as important as AI to be a monopoly controlled by one or two companies, but the basic technology behind these AIs is fairly public. In principle, anyone can build an AI. The problem is that it requires a lot of hardware, a lot of data, a lot of training. It takes hundreds of millions of dollars to make one of these really large models, but the cost will come down over time. There will be lots of open alternatives to the commercial models in the future. I do think there will be some need to regulate certain aspects of AI. The ability of AI to generate deepfakes can be quite damaging; deepfakes could influence elections.
Q. Some of these businessmen are also a bit eccentric.
A. When these AI models came out, there was some concern that they would be used to generate propaganda, that there would be a conservative ChatGPT, a liberal ChatGPT, a Chinese Communist Party ChatGPT that would only give party-approved answers about Taiwan or whatever. This hasn’t happened. We’re going to need some regulation, but so far it hasn’t been as damaging as we had feared. What will happen soon is that we will lose trust. Before, people would see a video of an event and believe that it had actually happened. There was no way to fake a video of a plane crashing into the World Trade Center. Now, with AI, it is possible. The result will be that even when something is genuine, people won’t believe it. People won’t believe photos and videos anymore. How do we convince someone that something happened if everything can be faked? That is a problem. We have to find new ways to verify facts.
posted by f.sheikh