3.1 tf-idf

The logic of tf-idf is that the words containing the greatest information about a particular document are the words that appear many times in that document, but in relatively few others. Calculating tf-idf attempts to find the words that are important (i.e., common) in a text, but not too common. It is widely used in document search and information retrieval tasks. To the extent tf-idf reliably captures what is distinctive about a particular document, it could be interpreted as a feature evaluation technique.

Let \(w = 1, 2, ..., W\) index words and \(\boldsymbol{y}\) denote the W-vector of word counts in the corpus. Let \(i \in I\) index documents in the corpus, \(\boldsymbol{y}^i\) denote the W-vector of word counts of document \(i\), \(y_w^i\) the count of word \(w\) in document \(i\), and \(n^i\) the total count of words in document \(i\).

Term frequency (tf) of a word \(w\) in document \(i\) is defined as its proportion

\[ f_{w}^{i} = \frac{y_{w}^{i}}{n^{i}} \]

We can see that \(f_w^i\) is simply the term count \(y_w^i\) scaled by the document length \(n^i\), so that the metric is not biased toward lengthy documents.

Inverse document frequency (idf) of word \(w\) in the corpus is defined as

\[ \text{idf}_w = \log{\frac{|D|}{|\{i \in I : y_w^i > 0\}|}} \] where \(|D|\) is the number of documents in the corpus, and \(|\{i \in I : y_w^i > 0\}|\) the number of documents containing word \(w\). Letting \(df_w\) denote the fraction of documents that contain word \(w\) at least once, idf can be stated as

\[ \text{idf}_w = \log{\frac{1}{df_{w}}} \] and tf-idf, the product of term frequency and inverse document frequency, as

\[ \text{tf-idf}_w^i = f_w^i \log{\frac{1}{df_{w}}} \]
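To make these definitions concrete, here is a minimal sketch that computes tf, idf, and tf-idf by hand on a made-up three-document corpus (the documents and counts are purely illustrative):

```r
library(dplyr)
library(tibble)

# Toy word counts y_w^i for three hypothetical documents
counts <- tribble(
  ~doc, ~word,   ~n,
  "d1", "the",   10,
  "d1", "whale",  4,
  "d2", "the",   12,
  "d2", "ship",   3,
  "d3", "the",    8,
  "d3", "whale",  1
)

n_docs <- n_distinct(counts$doc)

counts %>%
  group_by(doc) %>%
  mutate(tf = n / sum(n)) %>%              # f_w^i = y_w^i / n^i
  group_by(word) %>%
  mutate(df = n_distinct(doc) / n_docs,    # fraction of documents containing w
         idf = log(1 / df),                # idf_w = log(1 / df_w)
         tf_idf = tf * idf) %>%            # tf-idf_w^i = f_w^i * idf_w
  ungroup()
```

Note that "the" appears in all three toy documents, so its idf, and hence its tf-idf, is zero.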

3.1.1 Term frequency in Jane Austen’s novels

There is one row in this book_words data frame for each word-book combination; n is the number of times that word is used in that book and total_words is the total words in that book. The usual suspects are here with the highest n, “the”, “and”, “to”, and so forth.
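The construction of book_words is a standard tidytext pipeline; a minimal sketch, assuming the janeaustenr and tidytext packages (column names follow the prose):

```r
library(dplyr)
library(janeaustenr)
library(tidytext)

# One row per word-book combination; n is the word's count in that
# book, total_words the total word count of that book
book_words <- austen_books() %>%
  unnest_tokens(word, text) %>%
  count(book, word, sort = TRUE) %>%
  group_by(book) %>%
  mutate(total_words = sum(n)) %>%
  ungroup()
```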

Let's look at the distribution of n / total_words for each novel, which is exactly the term frequency defined above:
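A sketch of how this distribution could be plotted; the x-axis cutoff is an assumed value to keep the extreme long tail from dominating the panels:

```r
library(ggplot2)

ggplot(book_words, aes(n / total_words, fill = book)) +
  geom_histogram(show.legend = FALSE, bins = 30) +
  xlim(NA, 0.0009) +   # assumed cutoff; drops a handful of extremely common words
  facet_wrap(~book, ncol = 2, scales = "free_y")
```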

Figure 3.1: Term Frequency Distribution in Jane Austen's Novels

3.1.2 Zipf’s law

In Figure 3.1 we see the characteristic long-tailed distribution of term frequency. In fact, such long-tailed distributions are so common in any given corpus of natural language (like a book, or a lot of text from a website, or spoken words) that the relationship between a word's frequency and its rank has long been a subject of study. A classic version of this relationship is called Zipf's law, after George Zipf, a 20th-century American linguist. It can be stated as

\[ \text{word rank} \times \text{term frequency} = c \]

where \(c\) is a constant.

Zipf’s law is often visualized by plotting rank on the x-axis and term frequency on the y-axis, on logarithmic scales. Plotting this way, an inversely proportional relationship will have a constant, negative slope.

\[ \log{(\text{term frequency})} = -\log{(\text{word rank})} + \log{c} \]
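A sketch of this plot, reusing book_words from above; since book_words is already sorted by descending count, a word's row position within its book is its rank:

```r
library(dplyr)
library(ggplot2)

freq_by_rank <- book_words %>%
  group_by(book) %>%
  mutate(rank = row_number(),
         term_frequency = n / total_words) %>%
  ungroup()

# Log-log axes: an inverse proportionality shows up as a straight
# line with slope -1
ggplot(freq_by_rank, aes(rank, term_frequency, color = book)) +
  geom_line() +
  scale_x_log10() +
  scale_y_log10()
```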

The slope is not quite constant, though; perhaps we could view this as a broken power law with, say, three sections. Let’s see what the exponent of the power law is for the middle section of the rank range.
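One way to fit that exponent; the rank cutoffs marking off the middle section are assumptions:

```r
# Assumed middle section of the rank range
rank_subset <- freq_by_rank %>%
  filter(rank < 500, rank > 10)

# The slope of the fit is the estimated power law exponent
lm(log10(term_frequency) ~ log10(rank), data = rank_subset)
```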

The \(R^2\) is approximately \(1\), so we can take the relationship between log word rank and log term frequency to be \(\log{(\text{tf})} = -1.11 \log{(\text{rank})} - 0.62\).

We have found a result close to the classic version of Zipf’s law for the corpus of Jane Austen’s novels. The deviations we see here at high rank are not uncommon for many kinds of language; a corpus of language often contains fewer rare words than predicted by a single power law. The deviations at low rank are more unusual. Jane Austen uses a lower percentage of the most common words than many collections of language. This kind of analysis could be extended to compare authors, or to compare any other collections of text; it can be implemented simply using tidy data principles.

3.1.3 Word rank slope chart

Emil Hvitfeldt has a great blog post on how to make a word rank slope chart. This plot is designed to visualize the difference in word rank between a set of paired words. If a writer is more comfortable using masculine words, we would expect "he" to have a lower word rank than "she" (words are ranked in descending order based on counts, as in book_words).

In Jane Austen's novels, suppose we decide to compare word ranks on a set of words related to gender.
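For illustration, an assumed set of paired words, loosely following Hvitfeldt's post:

```r
library(tibble)

# An assumed set of gender-paired words
gender_words <- tribble(
  ~masculine, ~feminine,
  "he",       "she",
  "him",      "her",
  "himself",  "herself",
  "man",      "woman",
  "men",      "women",
  "boy",      "girl"
)
```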

We unnest the six books into separate words as usual, and pull() the words out as a vector ordered by descending count.
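Roughly as follows; a word's position in the resulting vector is its word rank:

```r
library(dplyr)
library(janeaustenr)
library(tidytext)

ordered_words <- austen_books() %>%
  unnest_tokens(word, text) %>%
  count(word, sort = TRUE) %>%   # most frequent word first
  pull(word)
```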

We then use match() to match individual words to their word ranks. The trick is to use the logged rank rather than the rank itself, otherwise the y scale would be heavily stretched by large word ranks. scale_y_log10() is not the best option in this case, since we need scale_y_reverse() to put the most frequent words at the top of our plot; the labels on the y axis can then be fixed by passing a function to labels.
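Putting the pieces together, continuing from the two sketches above (the plotting details are one plausible rendering, not Hvitfeldt's exact code):

```r
library(dplyr)
library(tidyr)
library(ggplot2)

slope_data <- gender_words %>%
  mutate(pair = row_number()) %>%
  pivot_longer(c(masculine, feminine),
               names_to = "side", values_to = "word") %>%
  # match() returns each word's position in ordered_words, i.e. its rank
  mutate(log_rank = log10(match(word, ordered_words)))

# Reverse the logged y axis so the most frequent (lowest-rank) words sit
# at the top; the labels function maps logged breaks back to plain ranks
ggplot(slope_data, aes(side, log_rank, group = pair)) +
  geom_line() +
  geom_text(aes(label = word), nudge_x = 0.05) +
  scale_y_reverse(labels = function(x) round(10 ^ x))
```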

3.1.4 The bind_tf_idf() function

The bind_tf_idf() function in the tidytext package takes a tidy text dataset as input with one row per token (term), per document. One column (word here) contains the terms/tokens, one column contains the documents (book in this case), and the last necessary column contains the counts, how many times each document contains each term (n in this example).
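With book_words from earlier, the call looks like this:

```r
library(tidytext)

book_tf_idf <- book_words %>%
  bind_tf_idf(word, book, n)

book_tf_idf
```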

Notice that idf and thus tf-idf are zero for these extremely common words. These are all words that appear in all six of Jane Austen’s novels, so the idf term (which will then be the natural log of \(1\)) is zero.

Although it is often not necessary to remove stop words when extracting tf-idf, on the grounds that stop words will generally have zero idf, it is good practice in most cases to focus on non-stop words only (I do not anti_join() here because I want to compare the results on common words between tf-idf and the weighted log odds ratio). There are circumstances under which some stop words carry a meaning worth capturing (e.g., "her" in abortion debates), and tf-idf is not a good option in such cases; see Section 3.2.

Let’s look at terms with high tf-idf in Jane Austen’s works.
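One way to surface them, assuming book_tf_idf from above; the cutoff of 10 words per book is arbitrary:

```r
library(dplyr)

book_tf_idf %>%
  group_by(book) %>%
  slice_max(tf_idf, n = 10) %>%
  ungroup() %>%
  arrange(book, desc(tf_idf))
```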

Proper nouns are often favoured by tf-idf; in this case, names of important characters in each novel will generally have high tf-idf values. None of them occur in all of the novels, and they are important, characteristic words for each text within the corpus of Jane Austen's novels.