6.1 Latent Dirichlet Allocation

Latent Dirichlet allocation (LDA) is a particularly popular method for fitting a topic model. It treats each document as a mixture of topics, and each topic as a mixture of words. This allows documents to “overlap” each other in terms of content, rather than being separated into discrete groups, in a way that mirrors typical use of natural language.

The LDA model is guided by two principles:

  • Each document is a mixture of topics. In a three-topic model we could assert that a document is 70% about topic A, 30% about topic B, and 0% about topic C.

  • Every topic is a mixture of words. A topic is a probability distribution over words.

In particular, LDA posits an imagined generative process, illustrated in the plate notation below:

Figure 6.1: Source: Lee et al. (2018)

  • \(M\) denotes the number of documents
  • \(N\) is the number of words in a given document (document \(i\) has \(N_i\) words)
  • \(\vec{\theta_m}\) is the expected topic proportion of document \(m\), which is generated by a Dirichlet distribution parameterized by \(\vec{\alpha}\) (e.g., in a two topic model \(\theta_m = [0.3, 0.7]\) means document \(m\) is expected to have 30% topic 1 and 70% topic 2)
  • \(\vec{\phi_k}\) is the word distribution of topic \(k\), which is generated by a Dirichlet distribution parameterized by \(\vec{\beta}\)
  • \(z_{m, n}\) is the topic for the \(n\)th word in document \(m\); each word position is assigned to exactly one topic
  • \(w_{m, n}\) is the word at the \(n\)th position of document \(m\)

The only observed variable in this graphical probabilistic model is \(w_{m, n}\); all the other variables are unobserved, which is why the model is called “latent”.

To actually infer the topics in a corpus, we first imagine the generative process. LDA assumes the following generative process for a corpus \(D\) consisting of \(M\) documents, each of length \(N_i\) (a minimal simulation sketch follows the steps below):

  1. Generate \(\vec{\theta_i} \sim \text{Dir}(\vec{\alpha})\), where \(i \in \{1, 2, ..., M\}\). \(\text{Dir}(\vec{\alpha})\) is a Dirichlet distribution with symmetric parameter \(\vec{\alpha}\) where \(\vec{\alpha}\) is often sparse.

  2. Generate \(\vec{\phi_k} \sim \text{Dir}(\vec{\beta})\), where \(k \in \{1, 2, ..., K\}\) and \(\vec{\beta}\) is typically sparse

  3. For the \(n\)th position in document \(m\), where \(n \in \{1, 2, ..., N_m\}\) and \(m \in \{1, 2, ..., M\}\):

    1. Choose a topic \(z_{m, n} \sim \text{Multinomial}(\vec{\theta_m})\) for that position
    2. Fill in that position with word \(w_{m, n} \sim \text{Multinomial}(\vec{\phi}_{z_{m, n}})\), drawn from the word distribution of the topic picked in the previous step
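
To make the generative story concrete, here is a minimal base-R simulation of these three steps. The vocabulary, the number of topics and documents, and the small helper that draws from a Dirichlet by normalizing gamma variates are all illustrative assumptions, not part of any fitted model.

```r
set.seed(123)

vocab <- c("ball", "game", "score", "vote", "party", "election")
K <- 2            # number of topics
M <- 3            # number of documents
N <- c(8, 10, 6)  # words per document

# draw from a Dirichlet by normalizing independent gamma variates
rdirichlet <- function(alpha) { x <- rgamma(length(alpha), alpha); x / sum(x) }

phi   <- t(replicate(K, rdirichlet(rep(0.1, length(vocab)))))  # step 2: topic-word distributions
theta <- t(replicate(M, rdirichlet(rep(0.5, K))))              # step 1: document-topic proportions

docs <- lapply(seq_len(M), function(m) {
  sapply(seq_len(N[m]), function(n) {
    z <- sample(K, 1, prob = theta[m, ])   # step 3.1: pick a topic for this position
    sample(vocab, 1, prob = phi[z, ])      # step 3.2: pick a word from that topic
  })
})
docs
```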

6.1.1 Example: Associated Press

We turn to the AssociatedPress document-term matrix (the required data structure for the modeling function) and fit a two-topic LDA model with stm::stm() (stm stands for structural topic modeling). stm() takes as its input a document-term matrix, either as a sparse matrix (using cast_sparse) or a dfm from quanteda (using cast_dfm). Here we specify a two-topic model by setting \(K = 2\) for demonstration purposes; in Section 6.3 we will see how to choose \(K\) with metrics such as semantic coherence.
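
A minimal sketch of that fitting step, assuming the AssociatedPress data shipped with the topicmodels package and a version of stm that accepts a sparse matrix directly; the object names ap_sparse and ap_lda are our own.

```r
library(topicmodels)   # provides the AssociatedPress document-term matrix
library(tidytext)
library(dplyr)
library(stm)

data("AssociatedPress", package = "topicmodels")

# tidy the DocumentTermMatrix, then cast it to the sparse matrix stm() expects
ap_sparse <- tidy(AssociatedPress) %>%
  cast_sparse(document, term, count)

ap_lda <- stm(ap_sparse, K = 2, verbose = FALSE)
```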

stm objects have a summary() method for displaying the words with the highest probability in each topic. But we want to go back to data frames to take advantage of dplyr and ggplot2. For tidying model objects, tidy(model_object, matrix = "beta") (the default) accesses the topic-word probability vector (which we denote with \(\vec{\phi_k}\)).
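
Continuing from the ap_lda object fit above (a sketch; the column names follow tidytext's tidiers):

```r
library(tidytext)

# one row per topic-term pair; beta holds the per-topic word probabilities (our phi_k)
td_beta <- tidy(ap_lda, matrix = "beta")
td_beta
```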

Which words have a relatively higher probability of appearing in each topic?
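
One way to look at this, sketched with dplyr and ggplot2 (the choice of 10 terms per topic is arbitrary):

```r
library(dplyr)
library(ggplot2)
library(tidytext)

td_beta %>%
  group_by(topic) %>%
  slice_max(beta, n = 10) %>%
  ungroup() %>%
  mutate(term = reorder_within(term, beta, topic)) %>%
  ggplot(aes(beta, term, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free_y") +
  scale_y_reordered()
```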

As an alternative, we could consider the terms that have the greatest difference in \(\vec{\phi_k}\) between topic 1 and topic 2. This can be estimated based on the log ratio of the two: \(\log_2(\frac{\phi_{1n}}{\phi_{2n}})\), with \(\phi_{1n} / \phi_{2n}\) being the probability ratio of the same word \(n\) in the two topics (a log ratio is useful because it makes the difference symmetrical).
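
A sketch of that computation, spreading td_beta to one column per topic; the 0.001 filter (keeping only words that are reasonably common in at least one topic) is an arbitrary assumption:

```r
library(dplyr)
library(tidyr)

beta_wide <- td_beta %>%
  mutate(topic = paste0("topic", topic)) %>%
  pivot_wider(names_from = topic, values_from = beta) %>%
  filter(topic1 > 0.001 | topic2 > 0.001) %>%   # assumed threshold for "common enough" words
  mutate(log_ratio = log2(topic1 / topic2))

beta_wide %>% arrange(desc(abs(log_ratio)))
```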

This can answer a question like: which word is most representative of a topic?

To extract the per-document topic proportion vector \(\vec{\theta_m}\) for document \(m\), use matrix = "gamma" in tidy().
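
Again a short sketch, continuing from ap_lda:

```r
library(tidytext)

# one row per document-topic pair; gamma holds the per-document topic proportions (our theta_m)
td_gamma <- tidy(ap_lda, matrix = "gamma")
td_gamma
```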

With this data frame, we want to know: which document is most characteristic of each topic?
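
One possible answer, plotting the documents with the highest gamma within each topic (the choice of 5 documents per topic is arbitrary):

```r
library(dplyr)
library(ggplot2)
library(tidytext)

td_gamma %>%
  group_by(topic) %>%
  slice_max(gamma, n = 5) %>%
  ungroup() %>%
  mutate(document = reorder_within(as.character(document), gamma, topic)) %>%
  ggplot(aes(gamma, document, fill = factor(topic))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ topic, scales = "free_y") +
  scale_y_reordered()
```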

This plot would definitely be more insightful if we had document titles rather than just IDs.

To sum up, the topic modeling workflow involves:
- use tidy tools like dplyr, tidyr, and ggplot2 for initial data exploration and preparation.
- cast to a non-tidy structure to run some machine learning algorithm.
- tidy the modeling results to use tidy tools again (exploring, visualizing, etc.)