"People really don't like to make decisions in an emergency situation"
Publication date: 30-01-2019

Professor Mikhail Goubko on why the ethics of decision-making robots cannot be universal

[Ch.]: What is an automatic control system? Does the word "automatic" change our understanding of responsibility for decision-making, and if so, to what extent?

[MG]: The control systems that our Institute designs and implements are, at bottom, decision-making algorithms. An automated control system (ACS) is a system that, yes, to varying degrees removes the burden of decision-making. But this raises the key question that greatly hinders the adoption of such systems in practice: who is responsible for the decisions the ACS takes over? If we are dealing with an operator at a plant who flips switches while watching indicator lights, the answer is clear. But if an algorithm sets the system's parameters, who is responsible when it makes a wrong decision? The developer? The programmer? The operator? Or the manager of the station where it is installed? The same ethical and legal problems are now being discussed in connection with the wide deployment of robots. Do we treat a robot as an object, a subject, a mechanism? If a robot causes someone harm, who is responsible: the owner, the developer, or is the responsibility somehow shared?

Exactly the same problem arises in the famous trolley problem, or at Google, for example, which is introducing driverless cars. Here it is not a matter of error; it is still a matter of responsibility. When the car is moving at high speed and a woman with a stroller runs onto the crosswalk, should it hit them, or swerve into a pole so that all the passengers die?

And the problem here is not even how to calculate probabilities. The problem is that we do not fully understand what our morals and our ethics are. Which is better: that one person is killed, or that a hundred people are sick for the rest of their lives? And if it is a thousand? In everyday life this question may never arise, but to automate it, to explain it to a computer, you try to understand precisely how it works and stumble on the fact that the answers have never really been formulated. And what if a person does not die from the actions of the automation but suffers damage to their health? Here too a gradation is needed: it is one thing if they get a runny nose, and quite another if they have to walk on one leg for the rest of their life. Questions of this kind arise very often when resources have to be allocated in medicine. Resources are always limited, and if we commit them to finding treatments for rare, complex diseases, we reduce people's access to truly mass-market drugs. This is the same problem that faces Google's robot driver when it decides whom to run over.

[Ch.]: So what is needed is a set of common rules that computers could follow?

[MG]: The fact is that this is hardly possible. Take, for example, one notion within ethics: the concept of justice, which is closely linked to the question of financing medicine. The idea of justice presupposes that everyone is equal. That is, if there is a person who is profoundly unlucky from the start, say because of a congenital disease, then we should all invest a great deal to bring his level of happiness up to the average of the rest of society, and no amount of money should be begrudged.

This idea, however, contradicts the idea of efficiency. Economic efficiency often runs counter to the concept of justice: a fair distribution of benefits and an efficient one are two completely different things. Here our morality comes into conflict with our rationality. The balance of justice and rationality is another problem that arises in our practice in control theory.
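
To make the divergence concrete, here is a toy numerical sketch of the claim that a fair distribution and an efficient one are two different things; the budget and the conversion rates are illustrative assumptions, not figures from the interview:

```python
# Toy contrast between an "efficient" and a "fair" split of a fixed
# budget between two people who convert money into well-being at
# different rates. All numbers are illustrative assumptions.
BUDGET = 100.0
RATE_A = 1.0   # well-being per unit of money for person A
RATE_B = 0.2   # person B is "unlucky": the same money buys less well-being

# Efficient (utilitarian) allocation: maximize total well-being.
# Every unit goes to A, the more efficient converter.
efficient = (BUDGET, 0.0)

# Fair (egalitarian) allocation: equalize well-being levels,
# i.e. solve RATE_A * x = RATE_B * (BUDGET - x) for x.
x = RATE_B * BUDGET / (RATE_A + RATE_B)
fair = (x, BUDGET - x)

for name, (a, b) in [("efficient", efficient), ("fair", fair)]:
    print(f"{name}: A gets {a:.1f}, B gets {b:.1f}, "
          f"total well-being {RATE_A * a + RATE_B * b:.1f}")
```

The efficient split produces the greatest total well-being but leaves the unlucky person with nothing; the fair split equalizes the two levels but sacrifices most of the total. No allocation satisfies both criteria at once, which is exactly the tension described above.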

Furthermore, morality is not absolute or uniform. Nor is it, of course, a purely subjective characteristic of a person: for a morality to exist, it must be shared by a certain group of people, but that group is not all of humanity. Several well-developed ethical systems exist in the world today, and they can be divided into two large groups: secular ethics and religious ethics. The driving force in secular ethics is the relationship of people with one another. Religious morality, whether Christian or Muslim, is concerned not with the interaction of people but with their relationship with God, and that relationship imposes certain restrictions on the interactions between people. In religious morality you must think first of all about your soul, not about how to build relationships between members of society, or about whom to save, the old woman or the passengers. People who follow a religious moral code think differently; their behaviour bears a strong imprint of fatalism and of the idea of eternal life.

[Ch.]: What does this mean?

[MG]: For example, if you surveyed such people about the same Google dilemma, a big role would be played by how properly the person on the pedestrian crossing is behaving. Is the old lady crossing against a red light? Are the children misbehaving on the road? Within secular morality, born of the Enlightenment, the central role belongs to the ideas of equality, fraternity and humanism, so an arithmetic approach takes over. How many people are crossing the road? How many people are in the car? The decision will be made in favour of preserving the larger number of human lives, which are treated as the absolute value.
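
Purely to illustrate how differently the two value systems would have to be encoded, here is a minimal hypothetical sketch; the function names and rule details are assumptions for illustration, not anything an actual vehicle uses:

```python
# The secular "arithmetic" rule: compare head counts and preserve
# the larger number of lives, regardless of anyone's conduct.
def secular_choice(pedestrians: int, passengers: int) -> str:
    return "swerve" if pedestrians > passengers else "stay_on_course"

# A conduct-sensitive rule needs extra inputs that the arithmetic
# rule simply ignores, e.g. whether the pedestrians are crossing
# legally (a hypothetical weighting).
def conduct_sensitive_choice(pedestrians: int, passengers: int,
                             crossing_legally: bool) -> str:
    if not crossing_legally:
        return "stay_on_course"  # rule-breaking lowers priority
    return secular_choice(pedestrians, passengers)
```

The point is not the code itself but the inputs: the arithmetic rule needs only counts, while the other cannot even be evaluated without a judgment about conduct.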

[Ch.]: And what place does scientific ethics occupy in this system? Might it not resolve the contradiction?

[MG]: The notion of "scientific ethics" is very similar to the notion of "religious ethics". Ironically, scientists in this sense resemble religious zealots more than they do the average person. When a scientist acts in the specific role of a researcher, what matters is not the relationship between himself and the other members of society, but the relationship between himself and truth, eternity, the absolute. People give it different names, but in essence it is the same imagined counterpart as in the construction "man and God". To a scientist, loyalty to the truth matters more than his relationships with the other members of society. And this is where the accusations enter the scene, for example, over the creation of the atomic bomb. The more research ethics and loyalty to the truth prevail, the more immoral the scientist becomes, generally speaking, from the point of view of both secular and religious ethics.

[Ch.]: So, in general, robots can be assigned different systems of ethics.

[MG]: Yes, the "behavior" of an automatic system will depend on which system of values its creator instilled in it. Let me give an amusing example: in Israel there is Shabbat, when nothing is to be done on Saturday. So a robot must not push you to do anything on Saturday. Even its question, "Master, I'm ready to start cleaning. Press to confirm!", would be improper. That is, in this case the design in Israel will take into account things that would never occur to us.

[Ch.]: Well, suppose we have programmed robots in accordance with our views. Should we then trust them more than ourselves?

[MG]: And that is a matter of taste. When we design automation for a nuclear power plant, say, what stops us from handing over complete control is the fear that it could be wrong. But this ignores the fact that the operator makes mistakes too! So we bury our heads in the sand and shift responsibility for errors onto the operator, even though we know that, on average, automated systems are far more reliable than a person. So here the decision rests with the person who signs the document approving the deployment of the automated system: he is declaring that this machine is better than a human and that we trust it more. The operator will no longer press the button; the system will do it for him, and it will be an irreversible decision.

During the Cold War there was a system that was supposed to decide on a retaliatory strike on its own. And the treaties on the non-proliferation of nuclear weapons and on the elimination of short- and intermediate-range missiles were concluded by the Soviet leadership under the influence of the danger of automatic systems triggering unintentionally, because there had been a number of unpleasant incidents in which automatic early-warning systems went off falsely. The head of state would be told: the system has been triggered, missiles are flying toward us, press the button. The machine supplied the information, but a human was needed (and is still needed) to make the crucial decision: whether or not to hand further decision-making over to the machine. And the responsibility does not go anywhere, whether you keep it or pass it on to an ACS. People make decisions; machines can only simulate the process.

[Ch.]: And how are these ideas implemented in modern non-military systems, for example in the energy sector?

[MG]: Speaking of ourselves, our Institute became involved in the problems of control in nuclear power after the Chernobyl accident. It was necessary, first, to analyze the causes of that accident and, second, to create a new generation of systems for nuclear power plants that took this negative experience into account and was built on a systems approach. A systems approach in this case means forgetting about no danger whatsoever, and that is a strength of our Institute and of the science we are developing. The systems approach made it possible to develop new control systems that operate at modern nuclear power plants and provide for all possible scenarios. And in all the time these systems have been in service there have been no serious accidents, neither in our country nor abroad: we supply control systems for nuclear power plants in India and Iran.

Hydroelectric plant control systems, for example, have also proven their reliability. Everything was provided for at the Sayano-Shushenskaya hydroelectric plant as well: the automation and the equipment worked properly, but the operating rules were violated there. Technically it was a technical accident, but the problems lay in the organizational loop. The warning systems showed everything; the problem was bureaucracy and the desire to earn more money. The immediate cause of the accident was the destruction of a turbine, but that resulted from systematic violation of the operating rules: keeping the turbine in a suboptimal regime. This regime followed from the turbine's design features, and the accident could have been avoided had those features been respected. The turbine should not have been driven through the forbidden regime every day; it should have been run in a gentler mode. In practice it was passed through the forbidden zone very often in an attempt to maximize profit. That regime is not meant for continuous use, but to move the turbine from one stable operating mode to another it has to pass through this forbidden zone. Doing so once, twice, three times is acceptable, but when the turbine is chased through the forbidden zone several times a day, fatigue effects naturally appear. In other words, this accident was the result of the human factor.

[Ch.]: And do automated systems try to keep people from making mistakes?

[MG]: In general, the whole theory of automation is about this. When people first began to look at this field scientifically, they realized very early that, one way or another, a system has to be assembled from unreliable elements. This was first understood in communication systems, and books appeared on how to build reliable systems from unreliable elements. The concept of redundancy was introduced, duplication upon duplication, along with mathematical methods for calculating the reliability of a probabilistic scheme. All this mathematics, reliability theory, was invented in the first half of the twentieth century. It is still evolving, but in part it has already become everyday engineering practice. Here, however, we are only talking about calculating the probability of adverse events, and no ethical issues arise. They arise at the stage when we decide whether the achieved probability is sufficient, or whether a few more levels of redundancy are needed so that the probability of failure drops by another order of magnitude. That is a purely ethical decision.
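
A worked toy version of the kind of calculation reliability theory formalizes, with illustrative numbers that are assumptions rather than plant data: if each copy of a component fails independently with probability p, a parallel block of n copies fails only when all of them fail at once.

```python
# Parallel redundancy: n independent copies of a component that each
# fail with probability p. The block fails only if every copy fails,
# so its failure probability is p ** n.
def parallel_failure_probability(p: float, n: int) -> float:
    return p ** n

# A component that fails 1% of the time, duplicated at increasing depth:
for n in range(1, 5):
    print(n, parallel_failure_probability(0.01, n))
# prints roughly 1e-2, 1e-4, 1e-6, 1e-8
```

Each added level buys two orders of magnitude here. The mathematics ends there; deciding whether 1e-6 is "small enough" or one more level is needed is exactly the ethical decision mentioned above.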

[Ch.]: Now let's get back to the situation of choosing the lesser of two evils. Suppose, for example, we face the grave choice of whom to flood, or whom to cut off from power, in the event of an accident at a hydroelectric plant. How can automation help here?

[MG]: As I said, it is an ethical choice that, in one form or another, people will have to make. And people really do not like making decisions in such situations, so a great deal of effort goes into avoiding them. This is not about shifting responsibility; it is about real efforts to keep such situations from arising at all. Measures are taken at the design stage of complex systems. For example, hospitals and other facilities in the first hazard class are required to have an autonomous power source. This means that if the lights go out, the doctors will still be able to finish an operation by switching to the standalone source. There is also a regulatory load-shedding scheme with several stages: several consumer groups are disconnected in sequence to reduce the load on the grid. First street lighting is switched off, then transport, then industrial enterprises (excluding critical facilities), then the utility sector and, finally, hospitals, kindergartens and so on. But remember that all critical facilities are equipped with autonomous power supplies. People try to build in "cushions" so that no decision has to be made in an emergency.
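
The staged scheme just described is essentially a fixed priority list that is walked until the load fits the remaining capacity. A minimal sketch, assuming made-up group names and load figures (the actual regulations and numbers differ):

```python
# Staged load shedding: consumer groups are disconnected in a fixed
# order until the remaining load fits the grid's capacity. Group
# names and load figures (arbitrary units) are assumptions.
SHED_ORDER = [
    ("street_lighting", 5.0),
    ("transport", 10.0),
    ("industry_non_critical", 40.0),
    ("utilities", 20.0),
    ("hospitals_kindergartens", 15.0),  # last resort; these sites
]                                       # also have autonomous backup

def shed_load(total_load: float, capacity: float) -> list[str]:
    """Return the groups to disconnect, in order, until the load fits."""
    shed = []
    for group, load in SHED_ORDER:
        if total_load <= capacity:
            break
        shed.append(group)
        total_load -= load
    return shed

print(shed_load(total_load=90.0, capacity=50.0))
# ['street_lighting', 'transport', 'industry_non_critical']
```

The ordering itself encodes the ethical ranking in advance, at the design stage, which is precisely the "cushion" being described: by the time an emergency happens, no one has to decide who goes dark first.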

[Ch.]: But what if such a situation does arise?

[MG]: Then there is nothing to be done: all the preventive measures have proved ineffective in that case. The decision is made by the operator under enormous time pressure, and the decision he makes is usually not the optimal one. And then he suffers from remorse for the rest of his life.

Still, that is a rarity, a problem of the extraordinary and the unusual. Most of the time we are far from the boundary of the region where non-trivial decisions have to be made. In everyday life our decisions do not demand such efforts of will from us precisely because everything has been foreseen and designed in advance so that we steer clear of problematic situations.

Source: https://chrdk.ru/sci/lyudi_ochen_ne_lyubyat_prinimat_resheniya_v_ekstrennoi_situacii

