Last week, Google held its “Search On” event, outlining a range of improvements it will be making to Google Search. These improvements largely centre on utilising new machine learning techniques and artificial intelligence (AI) to provide users with better search results.
One of the principal improvements the search engine giant announced was a new spellcheck tool designed to decipher even the most garbled queries. Prabhakar Raghavan, Head of Search at Google, stated during the event that 15% of its daily search queries are ones the search engine has never encountered before – partly because of incorrectly spelt queries. According to Cathy Edwards, Google’s VP of Engineering, 10% of queries are misspelt.
Since around November 2008, the “did you mean” feature has been in place, with the search engine suggesting alternative queries when it thinks a query has been misspelt. It’s a well-known feature across the world, and has even inspired a whole series of memes dedicated to the tool. However, Google has now said that by the end of October, a huge update will be rolled out to the feature, introducing a new spelling algorithm powered by a 680-million-parameter neural net (a means of machine learning loosely modelled on the human brain). After a user enters a search query, the algorithm will run in under three milliseconds, and Google claims it will offer better suggestions than the existing “did you mean” feature.
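To give a feel for what spelling suggestion involves, here is a toy sketch in the classic edit-distance style. This is emphatically not Google’s neural approach – the vocabulary, function names and tie-breaking rule are all invented for illustration – but it shows the basic idea of mapping a misspelt query onto the closest known term.

```python
def edits1(word):
    """All strings one edit (delete, swap, replace or insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def suggest(query, vocabulary):
    """Return the query if it is known, otherwise a one-edit correction."""
    if query in vocabulary:
        return query
    candidates = edits1(query) & vocabulary
    # Deterministic pick for the sketch; a real system would rank by likelihood.
    return min(candidates, default=query)

vocab = {"engine", "oil", "football", "search"}
print(suggest("serch", vocab))  # -> "search"
```

A production spellchecker would rank candidates by how probable each correction is given the rest of the query – which is exactly the kind of context a large neural net can capture and a simple edit-distance lookup cannot.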
Another significant change Google announced is that when a user makes a query, it will be able to index and surface specific passages from a webpage, rather than treating the page as a whole. For example, if a user searches for “how do I check my engine oil?”, Google would be able to pull up a single passage from a car forum to produce an answer. Edwards stated that when this new algorithm is rolled out, it will improve 7% of search queries across all languages.
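The intuition behind passage ranking can be sketched very simply: split a page into passages and score each against the query. The snippet below uses plain term overlap as the score – Google’s actual system is a learned ranker, and the example page text and function name here are invented – but it illustrates why one passage can answer a query far better than the page as a whole.

```python
def best_passage(query, page_text):
    """Return the passage (paragraph) sharing the most terms with the query."""
    terms = set(query.lower().split())
    passages = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    # Score each passage by how many query terms it contains.
    return max(passages, key=lambda p: len(terms & set(p.lower().split())))

page = (
    "Welcome to our car forum, where members discuss all things motoring.\n\n"
    "To check your engine oil, park on level ground, wait for the engine "
    "to cool, then pull out the dipstick and wipe it clean."
)
print(best_passage("how do I check my engine oil", page))
```

Run on the toy page above, the scorer picks out the dipstick paragraph rather than the forum’s introduction, because it shares more terms with the query.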
Google also revealed that it is using AI to split broad searches into subtopics to help produce better results, and that it is beginning to use speech recognition and computer vision to tag and divide videos into segments automatically. A football match, for instance, can be automatically divided into ‘chapters’, which can then appear in search – a handy feature if you just want to catch up on the last half of a game. Video creators can already add these chapters by hand, but the automation will make the process effortless.
These changes are all set to be implemented over the next few weeks and months. As Google continuously improves its search results to help users, so too should website owners continually improve their websites to ensure they are easily found and engaging. If you need help in maximising your web presence, contact us at Engage Web today.