How AI will rewrite our lives

Fears about how robots might transform our lives have been a staple of science fiction for decades. In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, Isaac Asimov proposed his famous Three Laws of Robotics, which were meant to keep robots from harming us.

The first one-"A robot may not harm humans, or cause humans to be harmed by doing nothing." From the perspective of robots' understanding of humans through direct interaction, whether for good or bad. Think about classic science fiction: C-3PO and R2-D2 team up with the Rebel Alliance, inStar WarsFrustrate the empire, such as2001 years HAL 9000  : FromEx Machina's space roamingAnd Ava plan to murder their surface masters. But these imaginations are not focused on the broader and potentially more important aspects of artificial intelligence.社会Impact-Artificial intelligence may affect the way we humans interact.

Of course, radical innovations have changed the way humans live together before. The advent of cities, beginning some 5,000 to 10,000 years ago, meant a less nomadic existence and a higher population density. We adapted both individually and collectively (for instance, we may have evolved resistance to the infections that became more likely in these new conditions). More recently, inventions including the printing press, the telephone, and the internet have revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that make up what I call the "social suite": a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching. The basic contours of these traits remain remarkably consistent throughout the world, whether a population is urban or rural, and whether or not it uses modern technology.

Adding artificial intelligence to our midst, however, could be far more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are, not just in our direct interactions with the machines in question, but in our interactions with one another.


My laboratory is at Yale University, where my colleagues and I have been exploring how such effects might play out in a series of experiments. In one, we directed small groups of people to work with humanoid robots to lay railroad tracks in a virtual world. Each group consisted of three people and a little blue-and-white robot sitting around a square table, working on tablets. The robot was programmed to make occasional errors and to acknowledge them: "Sorry, guys, I made the mistake this round," it declared perkily. "I know it may be hard to believe, but robots make mistakes too."

As it turned out, this clumsy, confessional robot helped the groups perform better, by improving communication among the humans. They became more relaxed and conversational, consoling group members who stumbled and laughing together more often. Groups with a confessional robot were better able to collaborate than groups whose robot made only bland statements.

In another, virtual experiment, we divided 4,000 human subjects into groups of about 20 and assigned each person "friends" within the group; these friendships formed a social network. The groups were then given a task: each person had to choose one of three colors, but no individual's color could match that of his or her assigned friends in the social network. Unbeknownst to the subjects, some groups contained bots that were programmed to make occasional errors. Humans who were directly connected to these bots grew more flexible and tended to avoid getting stuck in a solution that might work for one person but not for the group as a whole. More important, the resulting flexibility spread throughout the network, reaching even people who had no direct contact with the bots. As a consequence, the bots helped the humans help themselves.
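To make the mechanism concrete, here is a minimal simulation sketch of a color-coordination game on a network with a few noise-injecting bots. It is only an illustration of the general idea, not the study's actual design: the ring-shaped network, group size, error rate, number of rounds, and greedy update rule are all assumptions made for the sketch.

```python
import random

# Toy color-coordination game: players must avoid matching their network
# "friends," and a few bots occasionally pick colors at random.
# All parameters here are illustrative assumptions, not the study's values.

COLORS = ["red", "green", "blue"]

def make_ring_network(n):
    """Each player is 'friends' with two neighbors on a ring (a toy network)."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def conflicts(colors, network, node):
    """Number of friends who currently share this player's color (lower is better)."""
    return sum(colors[node] == colors[nbr] for nbr in network[node])

def play(n_players=20, bots=(0, 7, 14), bot_error_rate=0.1, rounds=200, seed=1):
    random.seed(seed)
    network = make_ring_network(n_players)
    colors = {i: random.choice(COLORS) for i in range(n_players)}
    bot_set = set(bots)
    for _ in range(rounds):
        node = random.randrange(n_players)
        if node in bot_set and random.random() < bot_error_rate:
            # A "noisy" bot sometimes chooses at random, which can shake the
            # group out of locally convenient but globally bad arrangements.
            colors[node] = random.choice(COLORS)
        else:
            # Otherwise, pick the color that conflicts least with friends' colors.
            colors[node] = min(
                COLORS, key=lambda c: sum(c == colors[nbr] for nbr in network[node])
            )
    # Each conflicting friendship is counted from both sides, so divide by two.
    return sum(conflicts(colors, network, i) for i in range(n_players)) // 2

print("remaining color conflicts:", play())
```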

Both of these studies demonstrate that in what I call "hybrid systems," where people and robots interact socially, the right kind of AI can improve the way humans relate to one another. Other findings reinforce the point. For instance, the political scientist Kevin Munger directed certain kinds of bots to intervene after people sent racist invective to others online. He showed that, under some circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person's use of racist speech to decline for more than a month.

But adding AI to our social environment can also make us behave less productively and less ethically. In another experiment, this one designed to explore how AI might affect the "tragedy of the commons" (the notion that individuals' self-centered actions may collectively damage their common interests), we gave several thousand subjects money to use over multiple rounds of an online game. In each round, subjects were told that they could either keep their money or donate some or all of it to their neighbors. If they made a donation, we would match it, doubling the money their neighbors received. Early in the game, two-thirds of players acted altruistically. After all, they realized that being generous to their neighbors in one round might prompt their neighbors to be generous to them in the next, establishing a norm of reciprocity. From a selfish, short-term point of view, however, the best outcome would be to keep your own money and collect money from your neighbors. In this experiment, we found that by adding just a few bots (posing as human players) that behaved in that selfish, free-riding way, we could drive the group to behave similarly. Eventually, the human players stopped cooperating altogether. The bots thus converted a group of generous people into selfish jerks.
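As a rough illustration of how a handful of programmed free-riders can unravel cooperation, here is a toy simulation of a donation game of this general kind. The payoff numbers, ring network, imitation rule, and bot count are assumptions made for the sketch, not the experiment's actual parameters.

```python
import random

# Toy donation game: each round, players either keep one unit or donate it to
# their neighbors; donations are matched (doubled) by the experimenter.
# A few bots always free-ride. Humans imitate better-earning neighbors.

def simulate(n_players=20, n_bots=3, rounds=30, seed=7):
    random.seed(seed)
    n = n_players + n_bots
    # True = cooperate (donate this round). Bots (the last n_bots slots) never do.
    strategy = [random.random() < 2 / 3 for _ in range(n_players)] + [False] * n_bots
    coop_rates = []
    for _ in range(rounds):
        payoff = [0.0] * n
        for i in range(n):
            left, right = (i - 1) % n, (i + 1) % n
            if strategy[i]:
                # A donor splits 1 unit between neighbors; matching doubles it,
                # so each neighbor receives a full unit.
                payoff[left] += 1.0
                payoff[right] += 1.0
            else:
                payoff[i] += 1.0  # free-riders keep their endowment
        # Humans copy a random neighbor who earned more; bots never change.
        new_strategy = strategy[:]
        for i in range(n_players):
            j = random.choice([(i - 1) % n, (i + 1) % n])
            if payoff[j] > payoff[i]:
                new_strategy[i] = strategy[j]
        strategy = new_strategy
        coop_rates.append(sum(strategy[:n_players]) / n_players)
    return coop_rates

rates = simulate()
print(f"human cooperation rate: round 1 = {rates[0]:.2f}, final = {rates[-1]:.2f}")
```

Under these assumed rules, defecting earns slightly more than cooperating in any given neighborhood, so the always-selfish bots tend to be imitated and free-riding spreads through the human players as the rounds go on.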

Let's pause to consider the implications of this finding. Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers. The fact that AI might meaningfully reduce our ability to work together is deeply worrying.


Real-world examples of how AI can corrupt human relations exist outside the laboratory as well. A study of 5.7 million Twitter users in the run-up to the 2016 U.S. presidential election found that trolling and malicious Russian accounts, including accounts operated by bots, were regularly retweeted in much the same way as non-malicious accounts, influencing conservative users particularly strongly. By exploiting humans' cooperative nature and our interest in teaching one another, both features of the social suite, the bots affected even people with whom they did not interact directly, helping to polarize the country's voters.

Other social effects of simple kinds of AI play out around us every day. Parents, watching their children bark rude commands at digital assistants such as Alexa or Siri, have begun to worry that this rudeness will leach into the way kids treat people, or that kids' relationships with artificially intelligent machines will interfere with, or even preempt, human relationships. Children who grow up relating to AI in place of people might not acquire "the equipment for empathic connection," Sherry Turkle, the MIT expert on technology and society, told The Atlantic's Alexis C. Madrigal not long ago, after he bought a toy robot for his son.

As digital assistants become ubiquitous, we are growing accustomed to talking to them as though they were sentient; writing in these pages last year, Judith Shulevitz described how some of us are starting to treat them as confidants, even as friends and therapists. Shulevitz herself says she confesses things to Google Assistant that she wouldn't tell her husband. If we grow more comfortable talking intimately to our devices, what happens to our human marriages and friendships? Thanks to commercial imperatives, designers and programmers typically create devices whose responses make us feel better, but that may not help us be self-reflective or contemplate painful truths. As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections.

All of this could end up transforming human society in unintended ways, ways that we, as a body politic, may need to confront. Do we want machines to affect whether and how children are kind? Do we want machines to affect how adults have sex?

Kathleen Richardson, an anthropologist at De Montfort University in the United Kingdom, worries a great deal about the latter question. As the director of the Campaign Against Sex Robots (yes, sex robots are enough of an incipient phenomenon that a campaign against them is not wholly premature), she warns that they will be dehumanizing and could lead users to retreat from real intimacy. We might even come to treat other people as we would treat such robots: as mere instruments of sexual gratification. Other observers have suggested that robots could radically improve sex between humans. In his 2007 book, Love and Sex With Robots, the iconoclastic chess master turned businessman David Levy considers the positive implications of "romantically attractive and sexually desirable robots." He suggests that some people will come to prefer robot mates to human ones (a prediction seemingly borne out by the Japanese man who "married" an artificially intelligent hologram last year). Sex robots won't be susceptible to sexually transmitted diseases or unwanted pregnancies. And they could provide opportunities for shame-free experimentation and practice, thereby helping humans become "virtuoso lovers." For these and other reasons, Levy believes that sex with robots will come to be seen as ethical, and in some circumstances perhaps even expected.

Long before most of us confront such intimate dilemmas with AI, we will wrestle with more everyday challenges. The age of driverless cars, after all, is upon us. These vehicles promise to substantially reduce the fatigue and distraction that bedevil human drivers, thereby preventing accidents. But what other effects might they have on people? Driving is a very modern kind of social interaction, requiring high levels of cooperation and social coordination. I worry that driverless cars, by depriving us of occasions to exercise these abilities, could contribute to their atrophy.

Not only will these vehicles be programmed to take over driving duties, and hence to usurp from humans the power to make moral judgments (for example, about which pedestrian to hit when a collision is unavoidable); they will also affect humans with whom they have had no direct contact. For instance, drivers who have traveled for a while alongside an autonomous vehicle moving at a steady, invariant speed might pay less attention, increasing their likelihood of an accident once they move to a stretch of highway occupied only by human drivers. Alternatively, experience may reveal that driving alongside autonomous vehicles that fully comply with traffic laws actually improves human performance.

Either way, we would be reckless to unleash new forms of AI without first taking such social spillovers, or externalities as they are often called, into account. We must apply the same effort and ingenuity that we apply to the hardware and software that make self-driving cars possible to managing AI's potential ripple effects on those outside the car. After all, we mandate brake lights on the back of your car not just, or even primarily, for your benefit, but for the sake of the people behind you.


In 1985, some four decades after introducing his laws of robotics, Isaac Asimov added another to the list: a robot should never do anything that could harm humanity. But he struggled with how to assess such harm. "A human being is a concrete object," he later wrote. "Injury to a person can be estimated and judged. Humanity is an abstraction."

Focusing on social spillover effects could help. Spillover effects in other arenas have led to rules, laws, and demands for democratic oversight. Whether we are talking about a corporation polluting the water supply or an individual spreading secondhand smoke through an office building, society may step in as soon as some people's actions begin to affect others. Because AI's effects on human-to-human interaction stand to be intense and far-reaching, and its advances rapid and broad, we must systematically investigate the second-order effects that might emerge and discuss how to regulate them on behalf of the common good.

Already, a wide-ranging group of researchers and practitioners, computer scientists, engineers, zoologists, and social scientists among them, has come together to develop the field of "machine behavior," in hopes of putting our understanding of AI on a sounder theoretical and technical foundation. This field does not see robots merely as human-made objects, but as a new class of social actors.

The inquiry is urgent. In the not-so-distant future, AI-endowed machines may, by virtue of either programming or independent learning (a capacity we will have given them), come to exhibit forms of intelligence and behavior that seem strange compared with our own. We will need to quickly distinguish the behaviors that are merely bizarre from the ones that truly threaten us. The aspects of AI that should concern us most are the ones that affect the core aspects of human social life, the traits that have enabled our species to survive for hundreds of thousands of years.

The Enlightenment philosopher Thomas Hobbes argued that humans needed a collective agreement to keep us from descending into chaos. He was wrong. Long before we formed governments, evolution equipped humanity with a social suite that allowed us to live together peacefully and effectively. In the pre-AI world, the genetically inherited capacities for love, friendship, cooperation, and teaching continued to help us live communally.

Unfortunately, humans have not had time to evolve comparable innate capacities for living with robots. We must therefore take steps to ensure that robots can live with us nondestructively. As AI insinuates itself more deeply into our lives, we may even need a new social contract, one with machines rather than with other humans.
