The most famous meteorite in history hit Mexico about 66 million years ago, unleashing the explosive power of billions of Hiroshima-class bombs. Three-quarters of all species died in the aftermath, including the dinosaurs. Yet this was only the fifth-largest mass extinction of the past 500 million years. (The worst eliminated 96% of all species.)

The universe will keep sending Yucatán-class rocks our way. Intriguingly, given enough warning, we might be able to deflect one. But that kind of foresight would be costly, and an impact probably won't come until long after our great-great-grandchildren are gone. So should we bother tracking these things?

Suppose meteorites caused all five major extinctions (this is far from certain, but it's a reasonable assumption for a thought experiment). Then they strike about once every 100 million years. This means that in any given year, we are almost certain not to be killed by a meteorite. But over a long enough horizon, one will kill us all. With 7.5 billion lives at stake, that averages out to 75 deaths per year. In other words, viewed through the mathematical lens of "expected value," giant asteroids kill about 75 people annually. (The first article in this series explains this lens in depth, though you don't need to read it to follow this one.)
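The expected-value arithmetic here is easy to verify. A minimal sketch, using only the population and strike-rate figures assumed above:

```python
# Expected annual deaths from extinction-level asteroid strikes,
# under the thought experiment's assumptions: five such strikes in
# ~500 million years, and 7.5 billion lives lost when one hits.
population = 7.5e9            # people alive today
years_per_strike = 500e6 / 5  # one extinction-level strike per 100M years

annual_strike_probability = 1 / years_per_strike
expected_annual_deaths = population * annual_strike_probability

print(expected_annual_deaths)  # 75.0
```

At one expected strike per 100 million years, the annual probability is 10^-8; multiplied by 7.5 billion lives, that yields the 75 statistical deaths per year cited above.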

The Trump administration plans to spend roughly $150 million next year on predicting and preventing large-scale cosmic collisions. That comes to about $2 million per expected annual death. Is this money squandered?

The math of car safety makes a good point of comparison, because few things are more ruthlessly analyzed, or litigated. So consider the mother of all safety mandates: airbags, which US law has required in all passenger cars since 1998. They cost manufacturers about $450 per car. The National Highway Traffic Safety Administration estimates that airbags saved 2,756 lives in 2016. Given that about 17.25 million cars were sold in the country that year, we can estimate that American society pays roughly $2.8 million per life saved by airbags.
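Both price-per-statistical-life figures can be reproduced from the numbers above. A quick sketch (all inputs are the figures cited in the text, not independent data):

```python
# Asteroid defense: ~$150M/year against 75 expected annual deaths.
asteroid_budget = 150e6
expected_annual_deaths = 75
asteroid_cost_per_life = asteroid_budget / expected_annual_deaths

# Airbags (2016): ~$450 per car, ~17.25M cars sold, 2,756 lives saved.
airbag_spend = 450 * 17.25e6
lives_saved = 2756
airbag_cost_per_life = airbag_spend / lives_saved

print(f"${asteroid_cost_per_life:,.0f} per life (asteroids)")  # $2,000,000
print(f"${airbag_cost_per_life:,.0f} per life (airbags)")      # $2,816,582
print(f"ratio: {asteroid_cost_per_life / airbag_cost_per_life:.2f}")  # 0.71
```

The ~0.71 ratio is the "about 30% lower" comparison drawn in the next paragraph.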

This means America's killer-asteroid budget runs about 30% below what we spend, per statistical life, on a risk that kills people every single day; that is, the two are in the same ballpark. I personally think this is a wise investment, and for a country often condemned for underweighting climate risk, even a visionary one. It is also useful context for weighing other existential threats. If we are very unlucky, those will include today's topic: artificial superintelligence.

Hollywood has done a remarkable job popularizing the dangers inherent in AI. Then again, the very fact that it's a Hollywood staple can make the subject hard to take seriously. As James Barrat has written in a similar vein, if the Centers for Disease Control issued a grave warning about vampires, it would take a while for the guffawing to stop and the stakes to come out.

Of course, anything with a credible chance of ending humanity is terrifying, even if the probability of disaster is small. And the probability of a superintelligence catastrophe is genuinely hard to gauge. Unlike a meteor strike, we cannot turn to the geological record for guidance. Even the history of technology tells us little, full as it is of unforeseen breakthroughs and hairpin turns. These make technology's path harder to predict than politics, sports, or any other uncertain field. Responsible statements about technology's future can only be couched in probabilities, never certainties. Anyone who assigns odds of zero or 100% to the outcome of this debate is either deluded or dishonest.

Comprehensive analyses of AI risk fill entire books, so I won't attempt one here. But the nature of the danger is straightforward. Start with the rapid progress of computing. Because speed-ups are exponential and compound, a thousandfold performance jump typically arrives within about a decade, then quickly swells to 2,000x, then 4,000x.
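The compounding described here is just repeated doubling. A sketch (the doubling cadence itself is illustrative, not a measured figure):

```python
# If performance doubles on a fixed cadence, ten doublings give a
# roughly thousandfold jump, and each subsequent doubling adds another
# thousand-scale leap: 1,024x becomes 2,048x, then 4,096x.
def speedup(doublings: int) -> int:
    """Total performance multiple after a given number of doublings."""
    return 2 ** doublings

print([speedup(n) for n in (10, 11, 12)])  # [1024, 2048, 4096]
```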

Our brains evolved on the savanna, where nothing moves exponentially, so forecasting exponential processes doesn't come naturally to us, and precise predictions about them can look foolish. Amara's Law holds that we tend to overestimate a new technology's impact in the short run and underestimate it in the long run. Short-term letdowns breed cynicism, which deepens the shock of long-term success. As the exponential kicks in, naysayers who took early victory laps end up looking silly, while once-outlandish predictions can come to seem prescient.

This is why we humans are so routinely shocked when computers conquer fields that seemed safely beyond them just a short time before. It has happened a lot lately: Jeopardy!, facial recognition, radiology, Go, and, soon, driving. Thus we must accept that computers may one day surpass us at making better computers. And I stress that the operative word is "may": I claim no certainty on this point (and neither does anyone in this debate who is worth your attention).

If that threshold is ever crossed, progress in computing could surge abruptly, because while it takes decades to raise and train a good human software engineer, a digital one can be copied millions of times in moments. A runaway process of digital self-improvement could then leave our minds as far behind as bacteria's are behind ours.

If such a being ever shares our world, we may measure up to it roughly as E. coli measures up to us. I doubt a superintelligence would go out of its way to destroy us, any more than we set out to eradicate bacteria. Yet we routinely destroy bacteria by the billions. We do it out of caution (wiping the bathroom down with Lysol). We do it in active self-defense (downing antibiotics). We do it unconsciously, through mere existence and metabolism. Bacteria have no moral standing with us. They are background noise that occasionally turns malignant, and we approach them with the dispassionate calm that HAL 9000 or Ex Machina's Ava brings to human obstacles.

So why would a superintelligence treat us like so many microbes? It might be a precaution, as with Terminator's Skynet. Just as we disinfect bathrooms because a small fraction of bacteria are dangerous, an AI might worry that a few of us would try to unplug it, or that a small group of our leaders might start a nuclear war, and conclude it is safer to be rid of the rest of us along with them.

Or our doom could be a mere side effect of a superintelligence working through its to-do list. Just as a virion can't fathom my taste in music, I can't guess what an unimaginably smarter being might want. Perhaps the most accessible building material in the universe, which currently includes our planet, its biosphere, and the atoms of its inhabitants. All of it could become a charming interstellar superglider. Or a giant computing substrate. Who knows? You might as well ask an Ebola virion why I still like the Violent Femmes.

Given that we have spent our own reign on this planet yanking cool things out of its innards, and out of the innards of our fellow creatures, our successors may well have similar appetites. Yes, it might seem rude to turn one's ancestors into molecular Lego bricks. But we descend from fish, worms, and bacteria, all of which we slaughter guiltlessly whenever it suits us.

However wildly improbable all of this sounds (and I truly hope it is improbable), we cannot deem it impossible. Rational beings insure against unlikely disasters. We spend billions on aviation-safety research even in years when the death toll from global commercial air travel is zero. And of course we install airbags in every new car, though only a small fraction will ever deploy. This is prudence in the face of uncertainty, not species-wide stupidity. And since no one can state the odds of an AI apocalypse precisely, it would be soothing if the smartest people among us unanimously chuckled at the danger rather than regarding it wide-eyed.

Unfortunately, in his last major public speech, the late Stephen Hawking said that "the rise of powerful AI will be either the best or the worst thing ever to happen to humanity." Why the latter? Because "AI could develop a will of its own, a will that is in conflict with ours and which could destroy us." The erratic but undeniably brilliant Elon Musk agrees with that assessment, arguing that AI is "more dangerous than nukes." And while the equally brilliant Bill Gates keeps his distance from Musk's starkest warnings, he places himself "in the camp that is concerned about super intelligence," among those who "don't understand why some people are not concerned."

Unlike celebrities who spar with immunologists over vaccines, these men cannot be dismissed as loudmouthed hobbyists. Critics nonetheless question Musk's and Hawking's credentials, since neither trained as an AI researcher. But on a question this grave, should we heed only guild-certified insiders? That collides with Upton Sinclair's dictum that "it is difficult to get a man to understand something when his salary depends upon his not understanding it." The dictum is relevant to the pay scale here: even at nonprofits, AI experts can pull down seven figures.

Nor can Musk, Gates, or Hawking be written off as smart people blundering into a field they know nothing about, semi-wise tourists like Henry Kissinger and George Shultz joining the Theranos board. Tesla practically runs on AI. Microsoft has an enormous AI budget. And for all we know, Stephen Hawking himself was AI. Advocates of AI safety also include many of the field's deepest insiders, people like Stuart Russell, who literally wrote the book on AI. That's not a metaphor but an assertion: his textbook is used in more university AI courses than any other fucking book. Russell does not claim that AI will kill us all, far from it. But he believes the outcome could be catastrophic, as do others like him.

Given all this, assigning zero risk to an AI catastrophe would be an act of faith, one that piously ignores both expert opinion and technology's long record of surprising us. Rational participants in this debate should instead focus on what level of nonzero risk is acceptable, and on whether that level can actually be achieved.

We can calibrate by starting with one of the strongest guarantees technology ever offers. In network operations, the buzz phrase "five nines" denotes 99.999% uptime, which permits only about 26 seconds of downtime per month, or roughly five minutes per year. Though often written into contracts, this standard is routinely described as impossible, practically unattainable, a myth, and so on. And that's just for keeping mainframes and websites running, systems that are comparatively tidy and well understood. Averting an AI apocalypse, by contrast, involves processes that are entirely plausible yet barely understood at all, and that would, if they occur, most likely be carried out by autonomous software. There is no way to justify five nines of confidence here. Frankly, even two nines (99%) looks like militant optimism.
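The downtime implied by "five nines" falls out of simple arithmetic. A sketch:

```python
# Downtime permitted by an availability guarantee, in seconds.
def allowed_downtime_seconds(availability: float, period_days: float) -> float:
    """Seconds of permitted downtime over a period at a given availability."""
    return (1 - availability) * period_days * 24 * 3600

five_nines = 0.99999
per_month = allowed_downtime_seconds(five_nines, 30)   # ~25.9 s per month
per_year = allowed_downtime_seconds(five_nines, 365)   # ~315 s, about 5.3 min

print(round(per_month, 1), round(per_year / 60, 1))
```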

Reasonable people may quarrel with every element of this analysis. So let me be very clear: this is not about precision. It is about scale, and rationality. I cannot pin down the severity of the AI risk we face. No one can. But I believe we are many orders of magnitude away from the level of risk we accept in ordinary, non-existential domains.

When the worst case would kill us all, five nines of confidence still equates to 75,000 statistical deaths. Nothing can make that acceptable. But we can note that it is roughly 25 times the death toll of the 9/11 attacks, against whose possible sequels the world's governments collectively spend billions of dollars weekly. Two nines, or 99% confidence, maps to a catastrophe on the scale of World War II. What would we not do to avoid that? As for our annual odds of dodging an asteroid-driven extinction, they begin with eight nines, and yet we are still investing to improve them. We are doing so rationally. And the budget is growing fast (in 2012 it was just $20 million).
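These body counts come from mapping confidence levels onto the current population. A sketch, using the 7.5 billion figure assumed earlier:

```python
# Statistical deaths implied by a given confidence that catastrophe is
# avoided, when the downside is everyone (7.5 billion people).
population = 7.5e9

def expected_deaths(confidence: float) -> float:
    """Expected deaths at a given probability of avoiding extinction."""
    return population * (1 - confidence)

print(round(expected_deaths(0.99999)))  # five nines ->     75,000
print(round(expected_deaths(0.99)))     # two nines  -> 75,000,000
```

At 99% confidence the expected toll, 75 million, is indeed on the order of World War II's total deaths.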

So what should we do?

I studied Arabic in college and now make a podcast. In other words, I really don't know. But even if I knew the right answer were to do nothing, that conclusion would have to be weighed against a status quo in which we devote negligible resources to this outcome, compared with our airbag and killer-asteroid investments. The threat profile of superintelligence is easily comparable to that of the Cold War. That is: deeply uncertain, but all too plausible, and bigger than we like to admit. Really fucking big. The Cold War offered humanity no cheap way out; indeed, we spent trillions of dollars on it. For all the mistakes, crimes, and misjudgments along the way, we came through it rather well.

I will close by noting that the danger here is likelier to stem from the errors of an arrogant elite than from the scheming of some warped loner. Fields this capital-intensive and competitive leave no room for a lone geek to hijack the agenda. The forces that should worry us are the ones that brought us the Titanic, the Stuxnet virus, World War I, and the financial crisis.

Those forces are not evil. Most of them are neutral-to-good actors cutting corners. We don't yet know which warning signs a superintelligence project might ignore on its way off the rails, because no such project has run its course. Corners may be cut out of ignorance. Or to beat Google to market. Or to make sure China doesn't cross the finish line first. As noted earlier, safety concerns can evaporate in a race, especially if both sides believe the winner will gain an insurmountable geopolitical advantage. And the stakes needn't be global to warp the calculus. Many people are constitutionally inclined to take huge, reckless chances for modest upside. Daredevils risk everything for a small prize and a bit of glory, and society lets them. But the ethics mutate when a lucrative private gamble endangers everyone. That is, when the upside of an apocalyptic bet is privatized while the downside falls on us all.

Imagine a young, selfish, unattached man who will become fabulously rich if he helps his startup pull off a huge AI breakthrough. Yes, there may be some tiny chance of things going terribly wrong, and a shortage of humility inclines him to minimize that risk. Since the dawn of history, emigrants, miners, and adventurers have accepted far greater personal dangers in pursuit of far smaller rewards.

We can't expect all of tomorrow's entrepreneurs to reject that calculus out of respect for strangers, foreigners, or the unborn. Many will. But others will sneer at every argument in this essay, on the grounds that they're smarter than Stephen Hawking, Elon Musk, and Bill Gates. We all know people like that. Those of us in tech may know dozens. And a key breakthrough in this field might require only a handful of them, joined in a confident, smart, and motivated team.
