Question Details
Reading Comprehension – Multiple Choice · Moderate (difficulty 0.65)

As Artificial Intelligence (AI) becomes increasingly sophisticated (先进的,尖端的), there are growing concerns that robots could become a threat (威胁). This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.

Russell argues that as robots take on more complicated tasks, it's necessary to translate our morals (道德) into AI language.

For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. "You would want that robot preloaded with a good set of values," said Russell.

Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.

It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.

Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.

The biggest concern with robots going against human values is that human beings fail to do sufficient testing and end up producing a system that will break some kind of taboo (禁忌).

One simple check would be to program a robot to check the correct course of action with a human when faced with an unusual situation.

If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps (嘟嘟声), and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.
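The fallback described above — stopping, beeping, and deferring to a human when uncertain — can be sketched as a simple confidence check. This is a minimal illustration only; the `ask_human` helper and the 0.9 threshold are assumptions, not part of the passage:

```python
def ask_human(action):
    # Hypothetical stand-in for a real human-in-the-loop prompt:
    # the robot stops, beeps, and waits for a person's directions.
    return f"BEEP: is '{action}' okay? Awaiting human directions."

def decide(action, confidence, threshold=0.9):
    """Carry out the action only when confident; otherwise defer to a human."""
    if confidence >= threshold:
        return action
    return ask_human(action)

print(decide("warm the soup", 0.95))      # confident: proceeds with the action
print(decide("microwave the cat", 0.10))  # unsure: stops and asks a human
```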

The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical (道德伦理的) rules. But if we come up with an answer, robots could be good for humanity.

Question 1: How do robots learn human values?
A.By interacting with humans in everyday life situations.
B.By picking up patterns from massive data on human behavior.
C.By following the daily routines of civilized human beings.
D.By imitating the behavior of properly brought-up human beings.
Question 2: What will a well-programmed robot do when facing an unusual situation?
A.Keep a distance from possible dangers.
B.Do enough testing before taking action.
C.Start its built-in alarm system at once.
D.Stop to get advice from a human being.
Question 3: What is most difficult to do when we turn human values into a programmable code?
A.Determine what is moral and ethical.
B.Design some large-scale experiments.
C.Set rules for man-machine interaction.
D.Develop a more sophisticated program.
2019–20 Senior One, first term · Nantong, Jiangsu · Stage Test
Knowledge points: popular science; science and technology
Similar Questions

How much time do you spend doing research before you make a decision? There are people who go over every detail exhaustively before making a choice. (1) Psychologists call this way of thinking a cognitive bias (偏见), a tendency toward a specific mental mistake.

To study “jumping”, we examined decision-making patterns among more than 600 people from the general population. We found that jumpers made more errors than non-jumpers on problems that require thoughtful analysis. (2) In a quiz about US civics, they overestimated the chance that their answers were right much more than other participants did – even when their answers were wrong.

So what is behind “jumping”? Psychological researchers commonly distinguish between two pathways of thought: the automatic system, which reflects ideas that come to mind easily, spontaneously and without effort, and the controlled system, which involves conscious and effortful reasoning. Jumpers and non-jumpers are equally influenced by automatic thoughts. (3) It is the controlled system that helps people avoid mental biases introduced by the automatic system. As a result, jumpers were more likely to accept the conclusions made at first blush without further questioning. A lack of controlled thinking is also more broadly connected to their problematic beliefs and faulty reasoning.

(4) A method called metacognitive training can be used to target their biases, which can help people think more deliberatively. In this training, participants are confronted with their own biases. They can learn about their missteps and other ways of thinking through the problem at hand. This helps to chip away at participants’ overconfidence.

In everyday life, the question of whether we should think things through or instead go with our guts is a frequent and important one. (5) Sometimes the most important decision we make can be to take some more time before making a choice.

A.Happily, there may be some hope for jumpers.
B.Also, jumpers had problems with overconfidence.
C.But a fair number of individuals are quick to jump to conclusions.
D.It is certainly possible for them to overthink things to take a decision.
E.We plan to continue the work to trace other problems introduced by jumping.
F.The jumpers, however, did not engage in controlled reasoning to the same degree as non-jumpers.
G.Recent studies show that even gathering just a little bit more evidence may help us avoid a major mistake.

As newer, more advanced technologies come out, huge amounts of electronics (电子产品) are thrown away, instead of being reused. These goods often end up in landfills, where the chemicals inside them may be a danger to the environment. Electronics can contain harmful materials. If these materials get into the ground or water, the pollution can cause serious problems. Most electronics require metals. These metals must be mined from the Earth. Often the mining process creates serious pollution.

A group known as Waste Electrical and Electronic Equipment (WEEE) Forum is trying to make people more aware of the problems of e-waste. Recently, the WEEE Forum asked researchers from the United Nations (UN) to study a kind of e-waste that’s often not noticed because people don’t consider the goods to be electronics. The WEEE Forum calls this kind “unable-to-be-seen” e-waste.

The UN study shows that about 1/6 of all e-waste is “unable-to-be-seen”. Though it’s “unable-to-be-seen”, it’s certainly not a small amount. The “unable-to-be-seen” e-waste weighs about 9 billion kilograms. The WEEE Forum says that if this e-waste were put into 40-ton trucks and the trucks were then lined up, the line of trucks would be about 5,630 kilometers long.
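The truck figures in the paragraph above can be checked with a quick back-of-the-envelope calculation. This sketch assumes a 40-ton truck carries 40,000 kg and that each truck occupies roughly 25 m of road; neither assumption is stated in the passage:

```python
total_kg = 9_000_000_000        # ~9 billion kg of "unable-to-be-seen" e-waste
truck_capacity_kg = 40 * 1000   # one 40-ton (metric) truck
trucks = total_kg // truck_capacity_kg
line_km = trucks * 25 / 1000    # ~25 m of road per truck, converted to km

print(trucks)    # 225000 trucks
print(line_km)   # 5625.0 km, close to the 5,630 km quoted
```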

The surprising kind leading the “unable-to-be-seen” e-waste group was toys. Worldwide, roughly 7.3 billion electronic toys are thrown away each year. These include goods like car racing sets, electric trains, and musical toys. They also include toys with electronic parts, like dolls that speak or games with electronic timers. In all, toys make up about 35% of “unable-to-be-seen” e-waste. But the problem is far larger than just toys. The report also shows that other everyday goods like home alarms, smoke alarms, power tools, and computer cables (电缆) are also big sources of “unable-to-be-seen” e-waste.

The WEEE Forum is hoping that as more people and governments become aware of e-waste, they will make a much greater effort to make sure electronics get reused.

Question 1: What is paragraph 1 mainly about?
A.The amount of electronics.B.The development of electronics.
C.The ways of reusing electronics.D.The pollution of electronics.
Question 2: What causes some e-waste to go unnoticed?
A.People’s interest in electronics’ character.
B.People’s impression of electronics’ packaging.
C.People’s misunderstanding of electronics.
D.People’s struggle to adapt to electronics.
Question 3: How does the author support his viewpoint in paragraph 3?
A.By showing numbers.B.By providing examples.
C.By making a summary.D.By making a comparison.
Question 4: Which of the following is the WEEE Forum’s solution to e-waste?
A.Designing advanced electronics.B.Making electronics get reused.
C.Stopping giving away electronics.D.Reducing electronics’ production.

As Artificial Intelligence becomes increasingly complicated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.

Russell argues that as robots take on more complicated tasks, it’s necessary to translate our morals into AI language.

For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. "You would want that robot preloaded with a good set of values," said Russell.

Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously, there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.

It will be possible to create more complex moral machines, if only we can find a way to set out human values as clear rules.

Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.


The biggest concern with robots going against human values is that human beings fail to do enough testing and end up producing a system that will break some kind of taboo (禁忌).

One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation.

If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps, and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.

The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.

Question 1: What does the author say about the threat of robots?
A.It may be a challenge to computer programmers.
B.It accompanies all machinery involving high technology.
C.It can be avoided if human values are translated into their language.
D.It has become an inevitable danger as technology gets more sophisticated.
Question 2: How do robots learn human values?
A.By interacting with humans in everyday life situations.
B.By picking up patterns from massive data on human behavior.
C.By following the daily routines of civilized human beings.
D.By imitating the behavior of properly brought-up human beings.
Question 3: What will a well-programmed robot do when facing an unusual situation?
A.Keep a distance from possible dangers.
B.Do enough testing before taking action.
C.Set off its built-in alarm system at once.
D.Stop to seek advice from a human being.
Question 4: What is most difficult to do when we turn human values into a programmable code?
A.Determine what is moral and ethical.
B.Design some large-scale experiments.
C.Set rules for man-machine interaction.
D.Develop a more sophisticated program.
