Center for Strategic Assessments and Forecasts

Autonomous non-profit organization

The Gartner Hype Cycle 2019: what do all those fancy words mean?
Publication date: 06-12-2019
For anyone who works in technology, the Gartner chart is like a high-fashion show: one look at it tells you which words are in vogue this season and what you will hear at all the upcoming conferences.

We have decoded what lies behind the beautiful words on this chart, so that you can speak this language too.

To start, a few words about what stands behind the chart. Every August, the consulting agency Gartner publishes a report – the Gartner Hype Curve. In Russian it is the "hype curve", or simply put, hype. Thirty years ago the rappers of Public Enemy sang "Don't believe the hype". Whether to believe it is a personal question, but you should at least know these keywords if you work in technology and want to follow world trends.

The chart plots public expectations for a technology. According to Gartner, in the ideal case a technology passes through five stages in sequence: the technology trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment, and the plateau of productivity. But it also happens that a technology drowns in the "trough of disillusionment" – and getting there is all too easy. Take Bitcoin: having initially hit the peak as "the future of money", it quickly rolled downhill once the technology's shortcomings became apparent, above all the limits on the number of transactions and the enormous amount of electricity required to mine bitcoins (which is already causing environmental problems). And of course, we must not forget that the Gartner chart is just a prediction: here, for example, you can read a detailed article on its most vivid unfulfilled predictions.

So, let's walk through the new Gartner chart. The technologies are divided into five major thematic groups:

  1. Advanced AI and analytics (Advanced AI and Analytics)
  2. Post-classical computing and communication (Postclassical Compute and Comms)
  3. Sensory and mobility (Sensing and Mobility)
  4. "Augmented" people (Augmented Human)
  5. Digital ecosystems (Digital Ecosystems)


1. Advanced AI and analytics (Advanced AI and Analytics)

For the last 10 years we have been living in the era of deep learning (Deep Learning). Deep networks are genuinely effective at what they do. In 2018, Yann LeCun, Geoffrey Hinton and Yoshua Bengio received the Turing Award for their discoveries in this field – the most prestigious award in computer science, the equivalent of a "Nobel Prize". So, the main trends in this area placed on the chart are:

1.1. Transfer learning (Transfer Learning)

You do not train a neural network from scratch; instead, you take an already trained one and give it a different goal. Sometimes you need to retrain part of the network, but not the whole network, which is much faster. For example, by taking a ready-made ResNet50 network trained on the ImageNet1000 dataset, you get an algorithm that can classify images of a great many different objects at a very deep level (1000 classes, with features worked out across 50 layers of the network). And you do not need to train the whole network yourself, which would have taken months.

Samsung's online course "Neural Networks and Computer Vision", for example, illustrates this approach in its final Kaggle problem of classifying plates as clean or dirty: in five minutes you get a deep neural network, built on the architecture described above, that can tell dirty dishes from clean ones. The source network had never seen these dishes at all; it had only learned to tell, say, birds from dogs (see ImageNet).

Source: Samsung's online course "Neural Networks and Computer Vision"

For transfer learning, you need to know which approaches work and which ready-made base architectures exist. Overall, this greatly accelerates the appearance of practical applications of machine learning.
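The mechanics can be sketched without any deep-learning framework. In the toy example below, a fixed random projection stands in for the frozen pretrained backbone (an assumption made for brevity; in practice it would be something like ResNet50), and only a small new head is trained on the new task:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed random projection standing in for
# a frozen backbone such as ResNet50. It is never updated during training.
W_backbone = rng.normal(size=(4, 16))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)   # frozen ReLU features

# New task: toy binary classification (think "clean" vs "dirty" plates).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the small new "head" is trained — this is the transfer-learning idea.
Phi = features(X)                            # frozen features, computed once
w_head = np.zeros(16)
b_head = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w_head + b_head)))   # sigmoid
    grad = p - y                                         # dLoss/dz (log-loss)
    w_head -= lr * Phi.T @ grad / len(X)
    b_head -= lr * grad.mean()

acc = ((Phi @ w_head + b_head > 0).astype(float) == y).mean()
print(f"train accuracy with a frozen backbone: {acc:.2f}")
```

The point is that `W_backbone` is never updated: all the training effort goes into the 16 head weights, which is why transfer learning is so much faster than training from scratch.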

1.2. Generative adversarial networks (Generative Adversarial Networks, GAN)

These are for cases where the training objective is very hard to formulate. The closer a task is to real life, the clearer it is to us ("bring the table") – but the harder it is to state as a technical specification. GANs try to rid us of exactly this problem.

There are two networks: one is the generator (Generative), the other the discriminator (Adversarial). One network learns to do useful work (classify images, recognize sounds, draw cartoons). The other network learns to teach the first one: it has real examples, and it learns to find an unknown, complex rule for comparing the creations of the generative network with real-world objects (the training samples) along genuinely deep, important characteristics: the number of eyes, closeness to the Miyazaki style, correct English pronunciation.

Sample output of a network trained to generate anime characters. Source

There is, of course, a catch: such architectures are difficult to build. It is not enough simply to throw neurons together; they must be carefully prepared, and training takes weeks. My colleagues at the Samsung AI Center work on GANs; they are one of its key research topics. For example, here is one development: using generative networks to synthesize realistic images of people in variable poses – say, for a virtual fitting room – or to synthesize faces, which may make it possible to reduce the amount of information that has to be stored or transferred for high-quality video broadcasting, or to protect personal data.
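The adversarial game can be sketched with nothing beyond NumPy: a linear generator tries to mimic samples from N(3, 1), while a logistic discriminator tries to tell real from fake. All hyperparameters here are illustrative, and the linear models keep the gradients hand-derivable:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b must mimic real samples from N(3, 1);
# discriminator d(x) = sigmoid(w*x + c) learns to tell real from fake.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, decay = 0.05, 0.01   # small steps + weight decay keep D from blowing up

for step in range(2000):
    z = rng.normal(size=64)
    real = 3.0 + rng.normal(size=64)
    fake = a * z + b

    # Discriminator step: maximize log d(real) + log(1 - d(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * ((-(1 - d_real) * real + d_fake * fake).mean() + decay * w)
    c -= lr * (-(1 - d_real) + d_fake).mean()

    # Generator step: maximize log d(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    a -= lr * (-(1 - d_fake) * w * z).mean()
    b -= lr * (-(1 - d_fake) * w).mean()

fake_mean = (a * rng.normal(size=1000) + b).mean()
print(f"generated mean ~ {fake_mean:.2f} (real mean is 3.0)")
```

Even in this toy setting the characteristic GAN dynamic appears: the discriminator's feedback is the only training signal the generator ever sees.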


1.3. Explainable AI (Explainable AI)

On some rare tasks, progress in deep architectures has suddenly brought deep neural networks level with humans. Now the battle is to widen the range of such tasks. A robot vacuum cleaner, for example, can easily tell a cat from a dog in a head-on encounter, but in most situations it will fail to spot a cat sleeping among the linen or the furniture (much as we would, in most cases...).

What is the reason for the success of deep neural networks? They solve tasks based not only on information "visible to the naked eye" (pixels, photos, jumps in sound volume...) but on the features obtained after this information has been preprocessed by hundreds of layers of the network. Unfortunately, the relationships they learn can also be meaningless, contradictory, or bear the traces of imperfections in the original dataset. For an example of where the thoughtless application of AI in recruiting can lead, there is a small computer game, Survival of the Best Fit.

An image-labeling system labeled a man who is cooking as a woman, although the picture actually shows a man (Source). This was noticed by researchers at the University of Virginia.

To analyze these complex, deep relationships, which we often cannot articulate, we need Explainable AI methods. They organize the features of a deep neural network after training, so that we can analyze the internal representation the network has learned rather than simply rely on its decisions.
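One of the simplest such methods is occlusion analysis: hide one input feature at a time and see how much the model's output changes. A stdlib-only sketch with a made-up "black-box" model (the weights are invented, and feature 2 is deliberately the dominant one):

```python
import math

# A stand-in "black-box" model: any callable scoring a feature vector.
def model(x):
    return 1.0 / (1.0 + math.exp(-(3.0 * x[2] + 0.2 * x[0])))

x = [0.5, -1.0, 2.0, 0.1]
baseline = model(x)

# Occlusion: zero out one feature at a time, watch how much the output moves.
importance = []
for i in range(len(x)):
    occluded = list(x)
    occluded[i] = 0.0
    importance.append(abs(baseline - model(occluded)))

print(importance.index(max(importance)))   # feature 2 moves the output most
```

The same idea scales up to images, where sliding an occluding patch over the picture produces a heatmap of the pixels the network actually relied on.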

1.4. Edge analytics / AI (Edge Analytics / AI)

Anything with the word "Edge" means the following: moving algorithms from the cloud or server onto the end device or gateway. The algorithm then works faster and does not require a connection to a central server to operate. If you are familiar with the "thin client" abstraction: here the client gets a little thicker.
This can be important for the Internet of Things. For example, if a machine has overheated and needs to cool down, it makes sense to signal this immediately, at the factory-floor level, without waiting for the data to travel to the cloud and from there to the shift foreman. Another example: self-driving cars can negotiate traffic on their own, without turning to a central server.


Another example of why this matters from a security standpoint: when you type texts on your phone, it remembers the words typical for you so that the on-screen keyboard is convenient for you – this is called predictive text input. Sending everything you type on the keyboard off to a data center somewhere would be a violation of your privacy and simply unsafe. That is why the keyboard learns only on your device itself.
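The overheating example above reduces to a simple pattern: filter the raw stream on the device and ship only the rare, meaningful events upstream. A hypothetical sketch (the threshold and readings are made-up values):

```python
# Minimal sketch of the edge idea: raw sensor readings are processed on the
# device, and only rare, meaningful events leave for the cloud.
THRESHOLD = 90.0  # assumed overheating threshold, in degrees

def edge_filter(readings, threshold=THRESHOLD):
    """Runs on the device: returns only the events worth sending upstream."""
    return [(i, t) for i, t in enumerate(readings) if t > threshold]

readings = [71.2, 85.0, 93.5, 88.1, 97.4]   # temperatures from one machine
alerts = edge_filter(readings)
print(alerts)   # two alerts instead of streaming all five readings
```

The design choice is bandwidth versus completeness: the cloud never sees the normal readings, which is exactly what makes edge processing fast and private.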

1.5. AI platform as a service (AI PaaS)

PaaS – Platform as a Service – is a business model in which we get access to an integrated platform, including cloud storage and ready-made procedures. We can thereby free ourselves from infrastructure problems and concentrate on producing something useful. Examples of PaaS platforms for AI tasks: IBM Cloud, Microsoft Azure, Amazon Machine Learning, Google AI Platform.

1.6. Adaptive machine learning (Adaptive ML)

What if we allow artificial intelligence to adapt... How so, you may ask – doesn't it already adapt to the task? Here is the problem: we painstakingly prepare each such task before setting artificial intelligence loose on it. The answer is that this chain can be simplified.

Conventional machine learning works as an open-loop system: you prepare the data, devise a neural network (or something else), train it, then look at a few metrics, and if you are satisfied, you can ship the network to smartphones to solve users' problems. But in applications where there is a great deal of data and its character gradually changes, other methods are needed. Systems that adapt and retrain themselves are organized into closed learning loops (closed-loop), and these need to run smoothly.

Applications include streaming analytics (Stream Analytics), on the basis of which many businesses make decisions, and adaptive production control. Given the scale of modern applications and a better understanding of the risks to people, the methods that make up the solution to this problem are collected under the title Adaptive AI.
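A closed loop in its simplest form is online learning: the model updates on every new sample instead of being trained once and frozen. In this toy NumPy sketch (all numbers are invented), the data-generating process drifts mid-stream, and a single weight trained by streaming SGD tracks the change:

```python
import numpy as np

rng = np.random.default_rng(1)

# Closed-loop sketch: the model keeps updating as the data distribution
# drifts, instead of being trained once and then frozen.
w = 0.0
lr = 0.05
for t in range(2000):
    true_w = 1.0 if t < 1000 else 3.0   # the world changes mid-stream
    x = rng.normal()
    y = true_w * x + 0.1 * rng.normal()
    err = w * x - y
    w -= lr * err * x                   # adapt on every new sample
print(f"final weight ~ {w:.2f} (tracked the drift from 1.0 to 3.0)")
```

An open-loop model trained on the first half of the stream would keep predicting with `w ~ 1.0` forever; the closed loop quietly follows the world as it moves.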


Looking at this picture, it is hard to shake the feeling that futurists would like nothing better than to teach a robot to breathe...

2. Post-classical computing and communication (Postclassical Compute and Comms)


2.1. Mobile communication of the fifth generation (5G)

This is such an interesting topic that we immediately refer you to our article on it. Here is a quick summary. By increasing the carrier frequency, 5G will make mobile Internet fast. But short waves have more difficulty passing through obstacles, so the network layout will be completely different: about 500 times more base stations will be needed.

Along with speed we get new phenomena: real-time games with augmented reality, complex tasks (such as surgery) performed via telepresence, prevention of accidents and difficult road situations through car-to-car communication. More prosaically: mobile Internet will finally stop dropping out at mass events such as a match at a stadium.

Image sources: Reuters, Niantic

2.2. Next-generation memory (Next-Generation Memory)

Here we are talking about the fifth generation of RAM, DDR5. Samsung has announced that products based on DDR5 will appear by the end of 2019. The new memory is expected to be twice as fast and twice as dense while preserving the form factor; that is, we will be able to get memory dies with a capacity of up to 32 GB. In the future this will be especially important for smartphones (a low-power version of the new memory) and for laptops (where the number of slots is limited). And machine learning requires large amounts of RAM.

2.3. Low-Earth-orbit satellite systems (Low-Earth-Orbit Satellite Systems)

The idea of replacing heavy, expensive, powerful satellites with a swarm of small, cheap ones is not new; it appeared back in the 1990s. These days everyone and their dog writes that "Elon Musk will soon hand out satellite Internet to all". The most famous company here is Iridium, which went bankrupt in the late 90s but was rescued by the U.S. Department of Defense (not to be confused with iRidium, the Russian smart-home system). Elon Musk's project (Starlink) is not the only one – the race also involves Richard Branson (OneWeb, with a planned 1,440 satellites), Boeing (3,000 satellites), Samsung (4,600 satellites) and others.

For how things stand in this area and what the economics look like, read the review. And we look forward to the first tests of these systems by the first users, to be held next year.

2.4. 3D printing on the nanoscale (Nanoscale 3D Printing)

Although 3D printing has not entered every person's life (in the form of the promised individual home plastic factory), it has long since ceased to be a niche technology for geeks. You can judge by the fact that every schoolkid knows 3D pens exist, and many dream of buying a box of rails and an extruder... "just because" (or already have).

Stereolithography (laser 3D printing) makes it possible to print with individual photons: new polymers are being explored that solidify only when struck by two photons at once. This will make it possible to create entirely new filters, fasteners, springs, capillaries, lenses outside laboratory conditions – add your own options in the comments! Close to light curing: only this technology allows one to "print" processors and computing circuits. In addition, a technology for printing three-dimensional graphene structures at 500 nm has existed for a year now, though without radical development.


3. Sensory and mobility (Sensing and Mobility)


3.1. Unmanned vehicles, level 4 and 5 (Autonomous Driving Level 4 & 5)

To avoid terminological confusion, we need to understand what levels of autonomy are distinguished (taken from a detailed article, to which we refer all who are interested):

Level 1: Cruise control: driver assistance in very limited situations (e.g. keeping the car at a preset speed after the driver takes a foot off the pedal).
Level 2: Limited assistance with steering and braking. The driver must be ready to take over almost instantly, hands on the wheel, eyes on the road. This already exists in Tesla and General Motors cars.
Level 3: The driver no longer needs to monitor the road constantly, but must stay alert and be ready to take over. No commercially available car offers this; everything on the market today is Level 1-2.
Level 4: A real autopilot, but with restrictions: it drives only in a certain area that has been carefully mapped and is well known to the system, and only under certain conditions, such as the absence of snow. Waymo and General Motors have such prototypes and plan to launch and test them in several cities in real conditions. Yandex has test zones for its driverless taxi in Skolkovo and Innopolis: trips take place under the supervision of an engineer in the passenger seat, and by the end of the year the company plans to expand the fleet to 100 driverless vehicles.
Level 5: Fully automatic driving, a complete replacement for a live driver. Such systems do not exist, and they are unlikely to appear in the coming years.

How realistic is it to see all of this in the foreseeable future? Here I would like to redirect the reader to the article "Why robotaxis cannot launch by 2020, as Tesla promises". This is partly due to the lack of 5G coverage: available 4G speeds are not enough. It is partly due to the very high cost of autonomous cars: they are still unprofitable, with a shaky business model. In short, it's complicated, and it is no coincidence that Gartner forecasts mass adoption of Levels 4 and 5 no earlier than 10 years from now.

3.2. Camera with 3D vision (3D Sensing Cameras)

Eight years ago, Microsoft's Kinect game controller created a sensation by offering an accessible and relatively inexpensive solution for 3D vision. Since then, Kinect sports and dancing games have had their brief rise and fall, but 3D cameras have found use in industrial robots, driverless vehicles, and mobile phones for face recognition. The technology has become cheaper, more compact, and more affordable.

The Samsung Galaxy S10 has a ToF (Time-of-Flight) camera, which measures the distance to an object to simplify focusing. Source

If you are interested in this topic, we refer you to a very good, thorough overview of depth cameras: part 1, part 2.
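The ToF principle itself fits in a few lines: the camera times a light pulse's round trip to the object, and the distance is the speed of light times that time, divided by two:

```python
# Time-of-flight principle: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0   # speed of light, m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A pulse that returns after ~6.67 nanoseconds came from about one metre away.
print(f"{tof_distance(6.67e-9):.3f} m")
```

The nanosecond timescales in this example are exactly why ToF sensors need fast, specialized electronics rather than an ordinary camera shutter.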

3.3. Drones for delivery of small cargo (Light Cargo Delivery Drones)

This year, Amazon created a sensation when it showed a new flying drone at an exhibition, capable of carrying small loads of up to 2 kg. For a city, with its traffic jams, this seems like the perfect solution. Let's see how these drones perform in the very near future. Perhaps some cautious skepticism is in order here: there are many problems, starting with how easily a drone could be stolen and ending with legal restrictions on UAVs. Amazon Prime Air is now six years old but is still in the testing phase.

The new Amazon drone shown this spring. There is something of "Star Wars" in it. Source

Besides Amazon, there are other players in this market (here is a detailed overview), but no one has a finished product: everything is at the stage of tests and marketing campaigns. Worth a separate mention are some rather interesting specialized medical projects in Africa: the delivery of donated blood in Ghana (14,000 deliveries, by the company Zipline) and Rwanda (the company Matternet).

3.4. Autonomous flying vehicles (Autonomous Flying Vehicles)

It is difficult to say anything definite here. According to Gartner, these will appear no earlier than in 10 years. In general, all the same problems as with manned and unmanned ground vehicles apply, but they take on a new dimension – the vertical one. Porsche, Boeing and Uber have all declared ambitions to build a flying taxi.

3.5. The cloud augmented reality (AR Cloud)

A permanent digital copy of the real world, allowing a new layer of reality to be created that is common to all users. In more technical language, the question is how to make an open cloud platform into which developers can integrate their AR applications. The monetization model is clear: an analogue of Steam. The idea has taken root so firmly that some now believe AR without the cloud is simply useless.

How this might look in the future is drawn in a short video. It looks like another episode of "Black Mirror":

More can be read in the review article.

4. "Augmented" people (Augmented Human)


4.1. Emotion AI (Emotion AI)

How do you measure, model, and respond to human emotions? Some of the customers here are companies producing voice assistants like Amazon Alexa. They can truly get into the home if they learn to recognize mood: to understand the cause of a user's dissatisfaction and try to correct the situation. In general, the context carries much more information than the message itself – and the context includes facial expression, intonation, and nonverbal behavior.

Other practical applications: analyzing emotions during job interviews (for video interviews), evaluating responses to commercials or other video content (smiles, laughter), and assistance in teaching (for example, independent practice in the art of public speaking).

It is hard to speak about this subject better than the author of the 6-minute short film Stealing Ur Feelings. This witty, tastefully made clip shows how our emotions are measured for marketing purposes, and how the immediate reactions of your face reveal whether you love pizza, dogs, or Kanye West, and even estimate your income level and IQ. By going to the film's website at the link above, you can take part in an interactive video using your laptop's built-in camera. The film has been shown at several film festivals.


There is an interesting study on recognizing sarcasm in text. The authors took tweets with the hashtag #sarcasm and built a training sample of 25,000 sarcastic tweets and 100,000 ordinary tweets about everything. They used the TensorFlow library, trained the system, and here is the result:


So now, if you are unsure whether a colleague or friend said something to you seriously or sarcastically, you can take advantage of an already trained neural network!
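The tweet experiment can be caricatured in a few lines of stdlib Python: a Laplace-smoothed naive Bayes classifier over word counts. The six hand-written "tweets" below are purely illustrative, not the real 125,000-tweet dataset, and a real system would use a neural model as the study did:

```python
from collections import Counter
import math

# Toy stand-in for the tweet experiment: word-count naive Bayes.
# Label 1 = sarcastic, 0 = ordinary; the examples are invented.
train = [
    ("oh great another monday", 1),
    ("wow i just love being stuck in traffic", 1),
    ("what a fantastic way to lose my keys", 1),
    ("the weather is nice today", 0),
    ("i enjoyed the concert", 0),
    ("lunch with friends was fun", 0),
]

counts = {0: Counter(), 1: Counter()}
totals = {0: 0, 1: 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def predict(text, vocab_size=100):
    scores = {}
    for label in (0, 1):
        score = math.log(0.5)                 # equal class priors
        for word in text.split():             # Laplace-smoothed likelihoods
            score += math.log(
                (counts[label][word] + 1) / (totals[label] + vocab_size))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("oh wow what a great monday"))   # leans sarcastic here
```

With a corpus this tiny the classifier only echoes its training words, but the structure (counts, smoothing, log-probabilities) is the same one that scales to the real dataset.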

4.2. Augmented intelligence (Augmented Intelligence)

Automation of intellectual labor using machine learning methods. Nothing new, it would seem. But the wording matters here, especially because the abbreviation coincides with that of Artificial Intelligence. This brings us to the debate about "strong" and "weak" AI.

Strong AI is the artificial intelligence of science-fiction films, fully equivalent to the human mind and aware of itself as a person. This does not yet exist, and it is unclear whether it ever will.

Weak AI is not an independent person but a human's assistant. It does not pretend to human-like thinking; it simply knows how to solve information-processing problems, such as determining what is in a picture or translating text.


In this sense, Augmented Intelligence is pure "weak AI", and the wording is perfect: it leaves no confusion and no temptation to see here the "strong AI" that everyone dreams of (or fears, if you recall the numerous arguments about the "rise of the machines"). Using the expression Augmented Intelligence, we immediately become like the heroes of different films: from science fiction (like Asimov's "I, Robot") we find ourselves in cyberpunk (where "augments" is the genre's name for all sorts of implants that expand a person's capabilities).

As Erik Brynjolfsson and Andrew McAfee put it: "Over the next 10 years, here is what will happen. AI will not replace managers; rather, managers who use AI will replace those who do not."


  • Medicine: Stanford University developed an algorithm that copes with recognizing abnormalities on chest X-rays on average about as successfully as most doctors
  • Education: assistance to student and teacher, analysis of students' responses to materials, construction of individual learning paths
  • Business analytics: data preprocessing, according to statistics, takes 80% of a researcher's time, leaving only 20% for the experiment itself


4.3. Biochips (Biochips)

This is a favorite theme of all cyberpunk films and books. Chipping pets is by no means a new practice, but now such chips have increasingly begun to be implanted in people too.

In this case, the hype is most likely connected with the sensational case of the American company Three Square Market, where the employer began offering to implant chips under employees' skin in exchange for a fee. The chip lets you open doors, log into computers, and buy snacks from a vending machine – a kind of universal membership card. The chip serves as an identification card; it has no GPS module, so it cannot be used to track anyone. And if a person wants the chip removed from their hand, a doctor can do it in five minutes.

Chips are usually implanted between the thumb and index finger. Source

Read the detailed article about the state of affairs with chipping around the world.

4.4. Immersive working space (Immersive Workspace)

"Immersive" is another new word there is simply no getting away from. It is everywhere: immersive theatre, immersive exhibitions, immersive film. What does it mean? Immersiveness is the creation of an effect of immersion, in which a person loses the boundary between author and audience, between the virtual and the real world. Applied to the workplace, this presumably means erasing the boundary between performer and initiator, and nudging employees toward a more active position by reformatting their environment.

Since Agile is now everywhere, with its flexibility and close collaboration, workplaces should be easily reconfigurable and should encourage group work. The economy dictates its own terms: there are more and more temporary employees, the cost of renting office space is growing, and in a competitive labor market IT companies try to increase employees' satisfaction with their work by creating recreation areas and other perks. All of this is reflected in workplace design.

From the report Knoll

4.5. Personification (Personification)

We all know what personalization in advertising is. It is when today you discuss with a colleague that the air in the room is dry and you should buy an office humidifier, and the next day you see an ad on a social network: "buy a humidifier" (a real case that happened to me).


Personification, by Gartner's definition, is a response to users' growing concern about the use of their personal data for advertising purposes. The goal is to develop an approach in which we are shown what fits the context we are in, rather than what fits us personally. For example, our location, device type, time of day, and the weather are things that do not violate our personal data and do not give us the unpleasant feeling of being "watched".

On the difference between these two concepts, read the note in Andrew Frank's blog on the Gartner website. The difference is subtle and the words are so similar that, not knowing it, you risk arguing with someone without realizing that, broadly speaking, you are both right (this, too, is a real incident that happened to the author).

4.6. Biotech – Artificial tissue (Biotech – Cultured or Artificial Tissue)

This is, first and foremost, the idea of growing artificial meat. Multiple teams around the world are developing laboratory "Meat 2.0" – the expectation is that it will become cheaper than the usual kind, and that fast-food restaurants and then shops will switch to it. Investors in this technology include Bill Gates, Sergey Brin, Richard Branson and others.


The reasons everyone is so interested in artificial meat:

  1. Global warming: methane emissions from farms account for 18% of the global volume of climate-affecting gases.
  2. Population growth. Demand for meat is growing, and feeding everyone with organic meat will not work – it is simply too expensive.
  3. Lack of space. 70% of the deforested Amazon has been cleared for cattle grazing.
  4. Ethical considerations. There are those for whom this matters. The animal-rights organization PETA has already offered a prize of one million dollars to the scientist who brings artificial chicken meat to market.

Substituting soy for meat is only a partial solution, because people feel the difference in taste and texture and are unlikely to give up steak in favor of soybeans. So it has to be real meat, grown as actual tissue. For now, unfortunately, artificial meat is too expensive – $12 per kilo – owing to the complex process of growing it. Read more about all this in the article.

As for other uses of cultured tissue – in medicine – there is the interesting topic of artificial organs: for example, a "band-aid" for the heart muscle printed on a special 3D printer. There are famous stories like the artificially grown rat heart, but in general this has not yet gone beyond clinical trials. So we are unlikely to see a Frankenstein in the coming years.

Here Gartner is very careful in its estimates, probably keeping in mind its failed 2015 prediction that by 2019, 10% of the population in developed countries would have a 3D-printed medical implant. It therefore puts the time to reach the plateau of productivity at no less than 10 years.

5. Digital ecosystems (Digital Ecosystems)


5.1. Decentralized Web (Decentralized Web)

This concept is closely linked with the name of the inventor of the Web, Turing Award laureate Sir Tim Berners-Lee. Ethical questions in computer science have always been important to him, as has the collective nature of the Internet: in laying the foundations of hypertext, he was convinced that the network should work like a spider's web, not a hierarchy. That was in the early stages of the network's development. As the Internet grew, however, its structure began to centralize for a number of reasons. It turned out that access to the network for an entire country can easily be shut off through just a few providers, and that user data has become a source of power and income for Internet companies.

"The Internet is decentralized," says Berners-Lee. "The problem is that one search engine dominates, one big social network, one microblogging platform. We have no technological problems, but we do have social ones."

In his open letter for the 30th anniversary of the World Wide Web, its creator identified three main problems of the Internet:

  1. Deliberate harm, such as state-sponsored hacking, crime, and online harassment
  2. The design of the system itself, which to the user's detriment creates perverse mechanisms: revenue models that reward clickbait and the viral spread of misinformation
  3. Unintended consequences of system design, which lead to conflicts and reduce the quality of online discussion

And Tim Berners-Lee already has an answer about the principles on which an Internet free of problem number 2 could be built: "For many users, the only model of interaction with the web remains advertising revenue. Even if people are frightened by what happens to their data, they are willing to make a deal with the marketing machine for the opportunity to get content for free. Imagine a world in which paying for goods and services is easy and enjoyable for both sides." Among the options for how this could be arranged: musicians could sell their recordings without intermediaries like iTunes, and news sites could use a system of micropayments for reading a single article instead of earning from advertising.

As an experimental prototype of such a new Internet, Tim Berners-Lee launched the Solid project, the essence of which is that you store your data in a "pod" – a repository of information – and can grant this information to third-party applications, while in principle remaining the owner of your data. All of this is closely connected with the concept of peer-to-peer networks: your computer not only requests services but also provides them, so you do not rely on a single server as a single channel.


5.2. Decentralized autonomous organizations (Decentralized Autonomous Organizations)

This is an organization governed by rules written in the form of a computer program, with its financial activity based on a blockchain. The aim of such organizations is to remove the state from the role of facilitator and to create a shared trusted environment that no one owns personally but everyone maintains together. In theory, if the idea takes root, it should abolish notaries and other familiar institutions of verification.
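The "trusted environment held together by everyone" rests on a hash chain: each block commits to the previous one, so history cannot be edited unnoticed. A minimal Python sketch (the payloads are invented, and real chains add consensus and signatures on top of this):

```python
import hashlib
import json

def block_hash(prev_hash, payload):
    """Hash commits to both the payload and the previous block's hash."""
    return hashlib.sha256(
        json.dumps({"prev": prev_hash, "payload": payload},
                   sort_keys=True).encode()
    ).hexdigest()

def make_block(prev_hash, payload):
    return {"prev": prev_hash, "payload": payload,
            "hash": block_hash(prev_hash, payload)}

def chain_valid(chain):
    """Each block must point at its predecessor, whose hash must recompute."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"] != block_hash(prev["prev"],
                                                     prev["payload"]):
            return False
        if prev["hash"] != block_hash(prev["prev"], prev["payload"]):
            return False
    return True

chain = [make_block("0" * 64, {"rule": "funds move only by member vote"})]
chain.append(make_block(chain[-1]["hash"], {"tx": "member A deposits 10"}))
chain.append(make_block(chain[-1]["hash"], {"tx": "project X funded with 5"}))

print(chain_valid(chain))                              # True
chain[1]["payload"]["tx"] = "member A deposits 1000"   # someone edits history
print(chain_valid(chain))                              # False: tampering shows
```

The DAO rollback described below amounted to abandoning exactly this tamper-evidence: returning the money required rewriting a history that the structure is designed to make unrewritable.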

The most famous example of such an organization was the venture fund The DAO, which in 2016 collected $150 million – of which 50 was immediately stolen through a legal "hole" in the rules. A difficult dilemma followed: either roll back and return the money, or admit that the withdrawal was legal, since it did not violate the platform's rules in any way. In the end, to return investors' money, the creators had to destroy The DAO by rewriting the blockchain, violating its basic principle of immutability.

A comic about Ethereum (left) and The DAO (right). Source

The whole story tarnished the reputation of the DAO idea. The project was built on the Ethereum cryptocurrency platform; version 2.0 of Ethereum is expected next year – perhaps its authors (including the well-known Vitalik Buterin) will take the mistakes into account and show something new. Perhaps that is why Gartner placed DAO on the rising part of the curve.

5.3. Synthetic data (Synthetic Data)

Training neural networks requires large amounts of data, and labeling data by hand is an enormous job that only a human can do. Therefore, artificial datasets can be created instead – for example, the well-known online collection of generated human faces. They are created using GANs, the algorithms mentioned above.

These faces do not belong to real people. Source

A big plus of such data is that there are no legal difficulties in using it: no one needs to be asked for consent to the processing of personal data.
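In the simplest case, synthetic data does not even need a GAN: sampling from known distributions already yields a labeled dataset for free. A toy NumPy sketch (the class means and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal sketch: instead of labeling real data by hand, sample a labeled
# dataset from known distributions (a toy stand-in for GAN-generated data).
def make_synthetic(n_per_class=100):
    class_a = rng.normal(loc=[0, 0], scale=1.0, size=(n_per_class, 2))
    class_b = rng.normal(loc=[4, 4], scale=1.0, size=(n_per_class, 2))
    X = np.vstack([class_a, class_b])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X, y = make_synthetic()
print(X.shape, y.shape)   # the labels come for free with the samples
```

The generated faces work the same way at a grander scale: the generator that produced an image also "knows" the attributes it was conditioned on, so no human annotator is ever involved.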

5.4. Digital Ops

The suffix "Ops" has been extremely fashionable ever since DevOps caught on in our speech. DigitalOps is simply a generalization of DevOps, DesignOps, MarketingOps... Bored yet? In short, it is the transfer of the approach adopted in DevOps from the realm of software to all other aspects of business: marketing, design, and so on.


The idea of DevOps was to remove the barriers between Development and Operations by creating joint teams of programmers, testers, security specialists, and administrators, and by introducing specific practices: continuous integration, infrastructure as code, and shorter, stronger feedback loops. The goal was to get products to market faster. If you think this sounds like Agile, you thought correctly. Now mentally transfer this approach from software development to development in general, and you understand what DigitalOps is.

5.5. Knowledge graphs (Knowledge Graphs)

A programmatic way to model a domain of knowledge, including with machine learning algorithms. A knowledge graph is built on top of existing databases to link together all the information, both structured (a list of events or persons) and unstructured (text).

The simplest example is the card you see in Google search results. If you search for a specific person or institution, you will see a card on the right:

Note that "Upcoming events" is not information copied from Google Maps but an integration with a schedule from Yandex.Afisha: you can easily see this if you click on the events. That is, multiple data sources are combined together.

If you query a list – for example, "famous directors" – you will be shown a "carousel":
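At its core, a knowledge graph is a set of (subject, relation, object) triples that can come from different sources and be queried uniformly. A toy sketch (the facts are illustrative, not pulled from any real graph):

```python
# Toy knowledge graph: (subject, relation, object) triples, the way a
# search-result card links structured data from several sources.
triples = [
    ("Christopher Nolan", "directed", "Inception"),
    ("Christopher Nolan", "directed", "Dunkirk"),
    ("Inception", "released_in", "2010"),
    ("Christopher Nolan", "born_in", "London"),
]

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(query("Christopher Nolan", "directed"))   # ['Inception', 'Dunkirk']
```

A search card is essentially the result of many such queries merged into one view; production systems store the triples in a graph database and infer missing links with machine learning.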

Bonus for those who read to the end

And now that we have clarified the meaning of each item, you can look at the same chart, but in Russian:

Feel free to share it on social networks!
