Center for Strategic Assessment and Forecasts

Drones and unmanned vehicles as weapons: why we should fear hackers
Publication date: 09-03-2018
If artificial intelligence gets into the wrong hands, the civilized world could plunge into chaos.

No one can deny that artificial intelligence can take our lives to a new level. AI is able to solve many problems that are beyond human capability.

However, many believe that a superintelligence will want to destroy us, like Skynet, or will conduct experiments on people, like GLaDOS from the game Portal. The irony is that only people can make artificial intelligence good or bad.

So that it does not take over the world after this

Researchers from Yale, Oxford, Cambridge and the company OpenAI have published a report on the malicious use of artificial intelligence. It argues that the real danger comes from hackers: with malicious code, they can disrupt automated systems that run under AI control.


The researchers fear that technology created with good intentions will be used to do harm. For example, surveillance tools can be applied not only to catch terrorists but also to spy on ordinary citizens. The researchers are also concerned about commercial food-delivery drones, which are easy to intercept and fit with explosives.

Another scenario is the destructive use of AI-driven unmanned vehicles. Changing just a few lines of code is enough to make a car ignore its safety rules, as the sketch below illustrates.
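
The report itself contains no code; the short Python sketch below is purely illustrative and assumes a made-up speed-limiting function in a car's control software. It shows how a single altered line could silently disable a safety check while everything else keeps working.

# Illustrative sketch only (not from the report): a toy control loop
# with one safety check that a single malicious edit could disable.

MAX_SPEED_KMH = 50  # limit enforced by the hypothetical safety layer

def safe_speed(requested_kmh: float) -> float:
    """Clamp the requested speed to the permitted maximum."""
    # Replacing this line with "return requested_kmh" would silently
    # remove the limit while the rest of the system runs as before.
    return min(requested_kmh, MAX_SPEED_KMH)

def control_loop(requested_kmh: float) -> None:
    target = safe_speed(requested_kmh)
    print(f"requested {requested_kmh} km/h -> driving at {target} km/h")

control_loop(80)  # with the check intact, the car is capped at 50 km/h

A real vehicle's safety layer is far more complex, but the principle the researchers warn about is the same: the attack surface is the code itself.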

In the game Watch Dogs, the hacker had control over all of the city's systems, and this is what came of it

Scientists believe that the threat can be digital, physical and political.

  • Artificial intelligence is already being used to probe software for vulnerabilities. In the future, hackers could build bots capable of bypassing almost any protection.
  • With AI, people can automate many processes: for example, controlling a swarm of drones or a group of vehicles.
  • Technologies such as DeepFake can be used to influence a state's political life by spreading false information about world leaders through bots on the Internet.

These frightening examples still exist only as hypotheses. The authors do not call for abandoning the technology altogether. Instead, they believe that governments and large companies should address security now, while the artificial intelligence industry is still in its infancy.

Policymakers should study the technology and work with experts in the field to effectively regulate the creation and use of artificial intelligence.

Developers, in turn, must assess the dangers of the technology, foresee the worst consequences and warn world leaders about them. The report calls on AI developers to join forces with safety experts in other fields and to find out whether the principles used to secure those technologies can also be applied to protect artificial intelligence.

The full report describes the problem in more detail, but the bottom line is that AI is a powerful tool. All stakeholders need to study the new technology and make sure it is not used for criminal purposes.


Tags: Russia, security

