Why AI will be a threat to humanity – 为什么人工智能必将威胁我们的文明


My background is in astrophysics; as a scientist who worked at the Observatory for over ten years, I have some understanding of science, though I do not myself do research on artificial intelligence. Where artificial intelligence is concerned, I am entirely opposed to it. I believe humanity is currently playing with fire on two or three fronts, and artificial intelligence is one of the most dangerous, yet most people don't seem to realise it.

We can consider the threats of artificial intelligence over the short term, the mid-term and the long term.

In the short term, the main concerns are large-scale unemployment and large-scale military applications. In the mid-term, the risk is that artificial intelligence gets out of control and rebels. And many people are not even thinking of the ultimate long-term threat.

Around each of these dangers there is a great deal of misperception. The most common goes like this: artificial intelligence is still very weak, so aren't we looking too far ahead when we worry about rebellion and loss of control? This is absurd reasoning: if we were talking about a tiger, would 'the tiger is still small' strike you as a good reason not to worry?

Let's look at the short-term dangers first. Here, artificial intelligence experts have offered some very naive reasoning. They promise us that, in the foreseeable future, 95% or more of jobs could be taken over by artificial intelligence. If that is truly so, the majority of people in our society will be unemployed, with only a small minority still working to feed everyone else. Such a social structure has never been seen before in human history.

When the unemployment rate today reaches 10 to 15%, we call it high and consider it a dangerous social pattern. If we reach a point where 85 or 95% of people don't need to work, what will they do with themselves? You might say we won't call this unemployment: if we let the robots do the work, they can also look after us, and we can just stay home to eat, drink and be merry. But be careful here: besides eating, drinking and being merry, people want one more thing: revolution. And if robots can take over 95% of jobs, we can imagine the job of making revolution being handed to robots as well. Once we start picturing a robot revolution, as in many sci-fi films, such as the ending of 'I, Robot' (released in China as 'Mechanical Enemy'), where the whole public square fills up with robots, this kind of society becomes a very dangerous one.

Artificial intelligence experts reply that, in human history, every major technological revolution has destroyed jobs but also created new ones. This reasoning has a logical flaw: does it still hold once robots can take over 95% of jobs?

In fact, the substitution of artificial intelligence for human intelligence leads to a situation completely unlike the earlier ones. We used to be able to say that, after the rise of the car, it didn't matter that the horse-cart driver lost his job, because he could become a car driver; but now the car itself is becoming intelligent and no longer needs a person to drive it, so mass unemployment is a very real threat. I believe we shouldn't turn to robots merely because they cost less. If you switch to robots as part of an industrial upgrade, that may be worth considering, even though it is not good in the long run. But if you put large numbers of people out of work simply to cut costs, the society that results will not be a stable one.

Putting robots to military use will certainly be very effective. We all know Asimov's three laws of robotics, and any military use violates them directly; but then, the three laws are not really laws, just the invention of a novelist. Beyond that, the military use of robots raises further ethical issues. If we decide to drop bombs somewhere today, that is a decision made by people; but with intelligent unmanned aerial vehicles and the like, which must decide on the spot whether or not to attack a target, the decision to kill a person becomes the robot's, and this, from an ethical perspective, is a radical departure from all previous forms of war.

So if we put artificial intelligence to military use today, I think we can only adopt the attitude the Americans had when they launched the Manhattan Project: our opponents are working on it too, we cannot afford to fall behind, and so we have no choice but to press ahead. That is why Tesla's CEO, Elon Musk, proposed that all countries negotiate an agreement banning the use of artificial intelligence for military purposes.

The mid-term issue is the rebellion of artificial intelligence. Here, industry experts like to say that you can always unplug the robots. But in fact, once artificial intelligence is connected to the Internet, it no longer needs any physical form. Before the Internet, an individual artificial intelligence had limitations: even if it were built entirely of chips, its storage and computing capacity had limits. But once it connects to the Internet, it needs no physical structure of its own at all; as long as it can use the network to command all kinds of services, it can thoroughly wreck society. By that time there is no way to pull the plug, because there is no plug to pull.

On this question, experts often say: we are the ones who write what goes onto the robots' chips, so we won't let them go bad. The simplest reply is that you cannot even guarantee that your own children won't go bad, so how will you guarantee it for artificial intelligence? To this day, nobody has been able to offer a credible guarantee against a rebellion of artificial intelligence.

But the most dangerous threat of all is the long-term one; it is the ultimate threat. Even if artificial intelligence shows no trace of rebellion and serves us competently and loyally, once everything is accomplished by artificial intelligence, what is left for us to do? We might become idle husks; the human species would rapidly degenerate, declining in both physical capacity and intelligence. Such a future holds no meaning for humanity: we would be digging our own grave.

So we should treat artificial intelligence as a weapon of mass destruction. The best outcome would be for the world's great powers to sit down and negotiate limits on it. We might choose to preserve a measure of low-level artificial intelligence, but we must find ways to prevent its further evolution.

In fact, we already use artificial intelligence widely in daily life: when we buy a train or plane ticket on the Internet, for instance, the system behind the purchase is a form of artificial intelligence. As long as this kind of artificial intelligence does not evolve, it poses no great problem. But artificial intelligence may evolve very fast, and that evolution is hard to control, so all we can do is try to prevent it; to what extent we can succeed is, of course, hard to say.

What we can do for sure today is hold back the scientific efforts to develop artificial intelligence; what we cannot be sure about is whether artificial intelligence will end up destroying us.

Source: 南方周末 (Southern Weekly)
Image source: http://www.gstv.com.cn/folder10/folder65/2016-12-27/336932.html
