Google's Mountain View developers have recently presented new features that should make search results more precise thanks to Artificial Intelligence. These implementations in many ways extend the effectiveness of the keyword-based system the platform has used until now.
At the technical level, the latest improvements can be summarized by the acronym BERT (Bidirectional Encoder Representations from Transformers), a solution running on TPUs (Tensor Processing Units), AI accelerators that enable it to recognize the meaning of terms and the characteristics of objects from the context in which they are spoken, written or represented.
In the case of a common keyword, for example, the engine powered by BERT should be able to understand exactly what a user is looking for by taking into account the words that precede and follow it. The advantages of such an approach are obvious, especially for the predictive capabilities of the algorithm.
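The intuition behind this bidirectional reading of a query can be illustrated with a toy sketch. This is not BERT: a real model learns contextual associations from data via attention layers, whereas the hypothetical cue-word lists and scoring below are hand-crafted purely to show how words both before and after an ambiguous term can resolve its meaning.

```python
# Toy illustration of bidirectional context: the sense of an ambiguous
# query term is inferred from words that precede AND follow it.
# The cue-word lists are invented for this example, not taken from any model.
SENSES = {
    "bank": {
        "financial institution": {"money", "loan", "account", "deposit"},
        "river edge": {"river", "water", "fishing", "shore"},
    }
}

def disambiguate(query: str, target: str) -> str:
    """Pick the sense of `target` whose cue words best match the query."""
    words = set(query.lower().split())
    senses = SENSES[target]
    # Score each sense by how many cue words appear anywhere in the query,
    # i.e. on either side of the target term (bidirectional context).
    return max(senses, key=lambda s: len(senses[s] & words))

print(disambiguate("open a bank account to deposit money", "bank"))
# financial institution
print(disambiguate("fishing from the river bank", "bank"))
# river edge
```

A keyword-only engine would treat both queries identically; the context-aware scoring separates them, which is the effect BERT achieves at scale with learned representations.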
As pointed out by Jeff Dean, head of Big G's artificial intelligence division, in the future Google will have to become increasingly able to recognize the relationships between the words used to formulate queries, bringing its capacity for understanding context to levels similar to (if not higher than) the human one.
For the Californian company, continued investment in search is fundamental: Google is the dominant player in this market (with a share of more than 90%), and 80% of its turnover derives from advertising, which in the business model of Sundar Pichai's group is directly driven by search activity and the quality of its results.