46 | Natural Language Understanding & BERT with Dawn Anderson


Dawn Anderson has been in SEO since 2007. She’s a consultant and owns a boutique agency called Bertey. Dawn is also a lecturer in digital and search strategy at Manchester Metropolitan University. A big thanks to Kevin Gibbons, who recommended the presentation after seeing Dawn speak at Pubcon.

When Dawn was trying to learn SEO she was in a completely different industry. She was trying to build a website and was fortunate to meet an information architect working for himself. He taught her about ontologies, natural language, knowledge graphs, linguistics and more. From around 2012, Dawn was learning about co-occurrence, connected words, contextualisation and so on.

“Natural Language Understanding (NLU) is the filler for structured data.” – Dawn Anderson

Google introduced Bidirectional Encoder Representations from Transformers (BERT) around 12 months ago, which has been a whole game changer. More on this later…

What Stage Are We at With Ambiguity of Language?

It’s better but not perfect. Content is still written by and for humans, and humans bring common sense to language. Search engines don’t have this, although there’s research into trying to create systems that can approximate it.

Dawn also mentions homophones. ‘Faux candles’ and ‘four candles’ sound the same but are different words with different meanings, and plays on words twist those meanings even further. Then there are words with multiple meanings: ‘bass’, for example, has about seven different senses, and the word ‘like’ can be a verb, an adjective, a noun and more. So, as a sentence develops, so does the meaning of a word. Language is evolving all the time.

“The word’s meaning is its context” –  Ludwig Wittgenstein

Prior to BERT, Dawn says, natural language models were uni-directional. They could only look at the words that came before a given word in a sentence, like a sliding context window, so they couldn’t look in both directions at once. BERT has been trained on question answering, sentiment analysis and lots of other natural language understanding tasks. It beats human baselines on some of these, whereas linguists will argue forever about what a word means… It’s a model pre-trained on around 2,500 million words. It’s open source too, perfect for research purposes, which means a lot of follow-on research is escalating pretty quickly.
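As an aside (not from the talk itself), the bidirectional idea is easy to see with the open-source model: BERT’s fill-in-the-blank objective predicts a masked word from the context on both sides of it. A minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint:

```python
# Minimal sketch: BERT's fill-mask head uses context on BOTH sides of a gap,
# unlike an older left-to-right language model.
# Assumes: pip install transformers torch
from transformers import pipeline

# Load a pre-trained BERT checkpoint with a masked-language-model head.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The best word for [MASK] depends on words that come AFTER it ("fishing on
# the lake"), which a purely left-to-right model could not see.
for prediction in fill_mask("He caught a huge [MASK] while fishing on the lake."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Swapping the end of the sentence (say, “…while playing in the band”) shifts the predictions, which is the intuition behind “the word’s meaning is its context”.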

The ‘Transformers’ part of Bidirectional Encoder Representations from Transformers is interesting, says Dawn, as it helps with pronouns. These are always problematic in NLU. He, she, they: when these appear in a sentence about an entity, a search engine can lose track of who they refer to, especially if more than one person is being spoken about.

BERT is fine-tuned on question-and-answer data sets and was also tested on real queries. Have a look at MS MARCO, a data set built from real search queries that helped fine-tune the system.
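For illustration only, here is roughly what a BERT model fine-tuned for extractive question answering looks like in use. This assumes the Hugging Face transformers library and a publicly available SQuAD-fine-tuned checkpoint; Google’s production set-up and the MS MARCO data itself are not exposed like this:

```python
# Sketch of extractive question answering with a fine-tuned BERT checkpoint.
# Assumes: pip install transformers torch
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT was open-sourced by Google in 2018 and has been fine-tuned on "
    "question answering, sentiment analysis and other language tasks."
)

# The model picks the span of the context most likely to answer the question.
result = qa(question="When was BERT open-sourced?", context=context)
print(result["answer"], round(result["score"], 3))
```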

Knowledge Panels

Structured data and NLU are not one and the same. The knowledge graph is built from structured data first and then populated with natural language later, almost filling in the gaps.

Dawn saw Enrique Alfonseca speak about how Google populates voice search answers. There is a lot of correlation between knowledge panels and device assistants: structured data is the first port of call, and semi-structured data is looked at thereafter. Ordered lists get pulled in, which is why they are so important.

What Content to Create for Natural Language Processing?

Dawn says that a web page is just a bag of words; it’s about trying to add context to them, for example with semantic headings that help with disambiguation. Also think about accessibility, because these factors help turn content into something more structured. Use H3s and H4s too; Wikipedia uses the full range from H1 to H6, so it’s well structured.
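As a practical aside (not from the talk), you could audit a page’s heading outline programmatically to see whether its sections are clearly labelled. A rough sketch, assuming the requests and beautifulsoup4 libraries, with a Wikipedia URL used purely as a placeholder:

```python
# Pull the h1-h6 outline out of a page, in document order, to check whether
# the sections are labelled with semantic headings.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def heading_outline(url: str) -> list[tuple[str, str]]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Keep headings in document order so the outline mirrors the page.
    return [
        (tag.name, tag.get_text(strip=True))
        for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])
    ]

for level, text in heading_outline("https://en.wikipedia.org/wiki/Bass"):
    print(level, text)
```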

Treat a website like a library and organise your ‘filing system’. Dawn recommends sitemaps too, to help users and crawlers find their way around the site. Use them like signposts to help disambiguate your site sections. She does, however, think that blogs are a ‘random mess’…

Personalisation & Intent with Natural Language

You would think that phones make personalisation easier, but it’s actually harder to detect intent.

There is a 2017 paper on the categorisation of queries into navigational, transactional and informational, but since then queries have been categorised further, into types such as spoken and action queries. Searching while travelling on a motorway can be problematic, especially getting results for your destination rather than the location you’ve just passed. There’s time-sensitive intent too: Dawn gives the example of ‘dresses’, where users searching during the royal wedding wanted to see Meghan Markle’s wedding dress rather than dresses in general. There are real-time issues, and new events create new queries as well. So, it’s only increasing in complexity rather than becoming easier.

There’s a layering of factors involved. BERT has layers too: modules on top of algorithms on top of algorithms. BERT is called a black-box algorithm because of these levels of complexity, and, like a vicious circle, that could be a problem if we’re unable to see why it makes the choices it makes. Dawn reveals that there is a lot of research into explaining why these algorithms make their decisions. This can help surface potential bias in these programs, especially when they are learning on their own.

Dawn did a talk in Paris called ‘The user is the query’, looking at Google Discover alongside Microsoft research on personalisation. Search engines are becoming recommendation assistants: they look at groups of people similar to you and predict content you’d like to see, deciphering the next step in your journey. Google Discover is just that, as it integrates with other Google products such as Gmail, Maps, etc.

LinkedIn | Twitter: @BeBertey @dawnieando | Site: Bertey.com


Music credit: I Dunno (Grapes of Wrath Mix) by spinningmerkaba (c) copyright 2017 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/jlbrock44/56346 Ft: Jlang, 4nsic, grapes.
