Chapter 7 The truth table

7.1 General considerations

The truth table is the main analytical tool for the minimization process, a procedure invented in engineering and adapted by Charles Ragin for the social sciences. It is well known that the electrical engineering procedure was pioneered by Claude Shannon (1940) at MIT, after he had been exposed to the work of George Boole during his studies at the University of Michigan. It is less known, however, that something very similar was developed even earlier by the American sociologist Paul Lazarsfeld (1937), who introduced the concept of “attribute space”, which later became what is known today as the “property space”.

Lazarsfeld advanced a social research method based on typologies, arranging the observed empirical information into various combinations of properties or attributes. On page 10 of his 1937 paper, he presents a table containing 8 combinations of three binary attributes to study discrimination: “To have (+) or not to have (-) a college degree, to be of white (+) or colored (-) race, and to be native (+) or foreign born (-) in America”:

Table 7.1: Lazarsfeld’s attribute space
Combination Number   College Degree   White   Native Born
                 1                +       +             +
                 2                +       +             -
                 3                +       -             +
                 4                +       -             -
                 5                -       +             +
                 6                -       +             -
                 7                -       -             +
                 8                -       -             -

This is almost exactly the description of a modern truth table, only with the rows in reverse order. What Lazarsfeld advanced, and what is probably less obvious to the untrained eye, is that a truth table is a generalization of a regular crosstable to multiple attributes (causal conditions, in QCA).

For two conditions, keeping things simple by restricting each to only two values, a crosstable and its truth table counterpart are easy to construct. Figure 7.1 shows such an example, with the familiar 2 \(\times\) 2 crosstable on the left side and the truth table on the right side, plus the corresponding cells.

Figure 7.1: Crosstable and truth table for two conditions

The only difference is the arrangement of the individual combinations of values: in the crosstable they follow the familiar cross pattern, while in the truth table they are arranged row-wise, one below the other.

A three-way crosstable, although not very difficult to imagine, is less obvious and clearly more complicated. On the other hand, as seen in figure 7.2, the truth table looks similar to the one from the previous example.

Figure 7.2: Crosstable and truth table for three conditions

Crosstables get even more complicated for 4 conditions, and more still for 5, until they reach a practical limit. They are useful when kept simple, and lose their practical applicability once they become very complicated. Truth tables, on the other hand, look just as simple for any number of conditions, merely doubling their number of rows with each new condition.

The structure of a truth table is therefore a matrix with k columns, where k is the number of causal conditions included in the analysis. The number of rows is often presented as \(2^k\); in the example above, for 3 causal conditions there are \(2^3 = 8\) rows. This is only partially correct, because it applies to binary crisp sets alone.

The universal equation, which applies to both binary and multi-value crisp sets, is the product of the number of levels \(l\) (values, categories) of each causal condition, from 1 to \(k\):

\[\begin{equation} \prod_{i = 1}^{k}l_{i} = l_1 \times l_2 \times \dots \times l_k \tag{7.1} \end{equation}\]

Incidentally, \(2^3 = 2 \times 2 \times 2 = 8\), but if one of the causal conditions had three values (levels) instead of two, the structure of the truth table changes to:

createMatrix(noflevels = c(3, 2, 2))
      [,1] [,2] [,3]
 [1,]    0    0    0
 [2,]    0    0    1
 [3,]    0    1    0
 [4,]    0    1    1
 [5,]    1    0    0
 [6,]    1    0    1
 [7,]    1    1    0
 [8,]    1    1    1
 [9,]    2    0    0
[10,]    2    0    1
[11,]    2    1    0
[12,]    2    1    1

In the above command, the argument noflevels refers to the number of levels for the three causal conditions (the output shows three columns as well); however, the number of rows is not 8, but 3 \(\times\) 2 \(\times\) 2 = 12. Equation (7.1), like the function createMatrix() above, gives all possible combinations of properties for any given number of causal conditions \(k\), with any number of levels for each, describing the entire property space mentioned by Lazarsfeld.
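
As a quick check, the row count given by equation (7.1) can be verified directly in R, with the base function prod():

prod(c(3, 2, 2))
[1] 12
nrow(createMatrix(noflevels = c(3, 2, 2)))
[1] 12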

The next step in the truth table procedure is to allocate individual cases to their corresponding truth table rows. This is similar to constructing a table of frequencies, since the calibrated values in the input data have the same structure as the combinations from the truth table.

A good example here is the binary crisp version of the Lipset data, examining the first 6 rows:

data(LC) # if not already loaded
head(LC)
   DEV URB LIT IND STB SURV
AU   1   0   1   1   0    0
BE   1   1   1   1   1    1
CZ   1   1   1   1   1    1
EE   0   0   1   0   1    0
FI   1   0   1   0   1    1
FR   1   0   1   1   1    1

Very often, multiple cases display the same configuration. In the Lipset data, BE and CZ have positive values for all conditions from DEV to STB, and even the same value for the outcome SURV. These two cases are allocated to the same truth table combination, 11111, which therefore contains at least 2 cases:

truthTable(LC, outcome = SURV, complete = TRUE, show.cases = TRUE)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI   cases      
 1    0   0   0   0   0     0     3  0.000 0.000 GR,PT,ES   
 2    0   0   0   0   1     0     2  0.000 0.000 IT,RO      
 3    0   0   0   1   0     ?     0    -     -              
 4    0   0   0   1   1     ?     0    -     -              
 5    0   0   1   0   0     0     2  0.000 0.000 HU,PL      
 6    0   0   1   0   1     0     1  0.000 0.000 EE         
 7    0   0   1   1   0     ?     0    -     -              
 8    0   0   1   1   1     ?     0    -     -              
 9    0   1   0   0   0     ?     0    -     -              
10    0   1   0   0   1     ?     0    -     -              
11    0   1   0   1   0     ?     0    -     -              
12    0   1   0   1   1     ?     0    -     -              
13    0   1   1   0   0     ?     0    -     -              
14    0   1   1   0   1     ?     0    -     -              
15    0   1   1   1   0     ?     0    -     -              
16    0   1   1   1   1     ?     0    -     -              
17    1   0   0   0   0     ?     0    -     -              
18    1   0   0   0   1     ?     0    -     -              
19    1   0   0   1   0     ?     0    -     -              
20    1   0   0   1   1     ?     0    -     -              
21    1   0   1   0   0     ?     0    -     -              
22    1   0   1   0   1     1     2  1.000 1.000 FI,IE      
23    1   0   1   1   0     0     1  0.000 0.000 AU         
24    1   0   1   1   1     1     2  1.000 1.000 FR,SE      
25    1   1   0   0   0     ?     0    -     -              
26    1   1   0   0   1     ?     0    -     -              
27    1   1   0   1   0     ?     0    -     -              
28    1   1   0   1   1     ?     0    -     -              
29    1   1   1   0   0     ?     0    -     -              
30    1   1   1   0   1     ?     0    -     -              
31    1   1   1   1   0     0     1  0.000 0.000 DE         
32    1   1   1   1   1     1     4  1.000 1.000 BE,CZ,NL,UK

There are actually 4 cases with that particular combination (last row in the truth table), namely BE, CZ, NL and UK. The same number of 4 cases is displayed in the truth table for that configuration, under the column n.
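
This allocation can be verified with a small base R sketch (not part of the truthTable() workflow itself), collapsing the calibrated values of each case into a configuration string:

configs <- apply(LC[, 1:5], 1, paste, collapse = "")
configs[c("BE", "CZ", "NL", "UK")]
     BE      CZ      NL      UK 
"11111" "11111" "11111" "11111" 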

The other two columns, incl and PRI, should be self-explanatory. Inclusion, for crisp sets, shows how consistently the cases from a given causal configuration display the same value in the outcome SURV. In this truth table, all observed configurations are perfectly consistent, and the column OUT is allocated a value of 1 where the inclusion is 1, and 0 otherwise.

In the past, before fuzzy sets were introduced, a single case with a different value for the outcome would render the causal combination a contradiction. The modern approach is to allocate a value of 1 in the OUT column if the inclusion score is at least equal to a certain inclusion cut-off, something which will become more evident when dealing with fuzzy sets. The same applies to the PRI score, which is always equal to the inclusion score for crisp sets, and will play a greater role for fuzzy sets.

7.2 Command line and GUI dialog

At this point, some more explanations are needed about the written command, its arguments, and the output. A first thing to notice is the missing argument conditions: when it is not specified, all columns in the dataset except the specified outcome are automatically considered causal conditions. The same output can be produced with the following, equivalent command:

truthTable(LC, outcome = SURV, conditions = "DEV, URB, LIT, IND, STB", 
    complete = TRUE, show.cases = TRUE)

The complete structure of the truthTable() function contains some other arguments, such as incl.cut, n.cut, sort.by, use.letters and inf.test. They will be described throughout this chapter, but for the time being it is important to remember that not specifying an argument doesn’t mean it has no influence over the output. Quite the contrary: each has a default value that is automatically employed, if not otherwise specified.

Before going through the rest of the arguments, it is also important to remember that the package QCA has a graphical user interface for the most important functions, including the creation of a truth table. With the web user interface started (see section 2.4.2), the dialog is opened by choosing the menu:

Analyse / Truth table

The "Truth table" dialog

Figure 7.3: The “Truth table” dialog

Figure 7.3 is an accurate match of the command used to produce the earlier truth table:

  • it has the dataset LC, the binary crisp version of the Lipset data, selected
  • in the Outcome space the column SURV is selected
  • no columns are selected in the Conditions space (all others are automatically considered causal conditions)
  • the checkboxes complete and show cases are selected

In addition, the truth table is assigned to an object called ttLC; otherwise it would just be printed on the screen and then disappear from memory. Assigning truth tables to objects, in the graphical user interface, has an additional role, to be presented later.

The checkbox complete (an argument) is not mandatory; it is used here for demonstrative purposes, to create the entire truth table containing all possible configurations. It can be seen that most of them are empty, with zero empirical information in the column n, and the column OUT coded as a question mark. These configurations are the so-called remainders in QCA, and they become counterfactuals when included in the minimization process.

In the middle part of the options space in the dialog, there are three possible ways to sort the truth table: by the output score in the column OUT, by the inclusion score, and by the number of cases (frequency) in column n. All of these belong to the argument sort.by of the written function.

By default, there is no sorting and the truth table is presented in the natural order of the configurations, from the absence of all (00000 on line 1) to the presence of all (11111 on line 32). Activating any of these options (by clicking in the interface) opens up another possibility: sorting in decreasing (default) or increasing order.

Figure 7.4: Different possibilities to sort the truth table

Figure 7.4 shows two different possibilities to sort the truth table, in the interface. The left side activates sorting by inclusion (by default in decreasing order) and by frequency (in increasing order, having unchecked the corresponding checkbox under Decreasing).

On the right side, with all of these options activated, the figure demonstrates dragging the frequency option to the top of the list, to establish the sorting priority.

Unchecking the complete option results in the following command, visible in the command constructor dialog:

truthTable(LC, outcome = SURV, show.cases = TRUE, sort.by = "incl, n+")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI   cases      
22    1   0   1   0   1     1     2  1.000 1.000 FI,IE      
24    1   0   1   1   1     1     2  1.000 1.000 FR,SE      
32    1   1   1   1   1     1     4  1.000 1.000 BE,CZ,NL,UK
 6    0   0   1   0   1     0     1  0.000 0.000 EE         
23    1   0   1   1   0     0     1  0.000 0.000 AU         
31    1   1   1   1   0     0     1  0.000 0.000 DE         
 2    0   0   0   0   1     0     2  0.000 0.000 IT,RO      
 5    0   0   1   0   0     0     2  0.000 0.000 HU,PL      
 1    0   0   0   0   0     0     3  0.000 0.000 GR,PT,ES   

The truth table is now sorted according to the above criteria, and the row numbers reflect this change: the first row is number 22, which is also the first where the inclusion score has the value of 1, and the first where the column OUT was allocated a value of 1 for a positive output.

The graphical user interface presents only three sorting options, but the written command is more flexible and allows any truth table column (including the causal conditions) to be used for sorting. The sort.by argument accepts a string with the columns of interest separated by commas, and signals increasing order by adding a “+” sign after a specific column.

In the past, there was an additional logical argument called decreasing, which applied to all sorting criteria. It has become obsolete with the new structure of the sort.by argument; in the example above, "incl, n+" indicates sorting first by inclusion (by default in decreasing order), then by frequency in increasing order.

7.3 From fuzzy sets to crisp truth tables

To better explain the cut-off values for inclusion and frequency, this is the right time to talk about constructing a truth table from a fuzzy dataset. Crisp datasets are straightforward, cases being directly allocated to their corresponding truth table rows. With fuzzy sets this is not as easy, because fuzzy scores are not simply equal to 0 or 1 but lie anywhere in between, which means that a case has partial memberships in all truth table configurations.

Ragin (2000) describes fuzzy sets in terms of a multidimensional vector space with a number of corners equal to \(2^k\), where each corner of the vector space is a unique configuration of (crisp) causal conditions. The number of corners \(2^k\) is accurate here, because fuzzy sets don’t have multiple “levels” (or categories) like crisp sets, but only two extreme limits, represented by the numbers 0 (full exclusion) and 1 (full inclusion).

The next section will introduce some notions of Boolean minimization, a procedure that requires crisp scores. Since fuzzy scores obviously cannot be used for this algorithm, they need to be transformed into crisp scores in order to allocate cases to one or another configuration of the truth table.

Basically, there is no “pure” fuzzy minimization procedure (given the infinite number of possible combinations of fuzzy scores across multiple dimensions); therefore, fuzzy sets first need to be transformed into a crisp truth table before the minimization.

This process requires a further detailed explanation of the relation between the different contingency tables (crosstabulations) and the process of constructing typologies. The simplest example is a 2 \(\times\) 2 contingency table with 4 corners. As fuzzy sets are continuous, the squared vector space is also continuous and a case has fuzzy coordinates anywhere in this space.

Figure 7.5: Bidimensional vector space

The vector space is continuous, but the corners are crisp, and they define all \(2^2 = 4\) possible configurations of presence / absence for the two causal conditions. The fuzzy coordinates determine how close a certain point (a case) is to one of the four corners. In figure 7.5 things are very clear, the point being closest to the corner (10); but when the coordinates are closer to the center of the vector space, it becomes more difficult to decide to which corner the case should be allocated.

Each causal condition acts like a geometrical dimension, defining the shape of the vector space. For one causal condition it is a line with two ends, for two causal conditions it is a Cartesian square with 4 corners, for three causal conditions the vector space is three-dimensional with eight corners, and so on. All dimensions are assumed to be orthogonal to each other, measuring separate and distinct properties.

Figure 7.6: Three dimensional vector space

Fuzzy sets almost never reach a corner with perfect scores; instead, the coordinates in the vector space tend to get closer to one corner or another. The perfect scores from the corners are crisp, and they are similar to the Weberian concept of the “ideal type”: since no scientific theory could ever account for the full diversity of all possible particular manifestations of a phenomenon in the real world, it needs to be abstracted into some kind of tool that is not a perfect match for every possible real situation, but a close enough match to decide that observed manifestations are similar to certain theoretical models.

The ideal type is such an abstract model: it cannot be measured directly, but its manifestations in the real world can be observed through various behavioral indicators. The complex social reality is nothing but the manifestation of an unobserved ideal type, an important feature that allows categorizing the infinite combinations of manifestations into a finite collection of ideal types, the corners of the multidimensional vector space (Lazarsfeld’s property space). This way, a very complex reality can be located near a single such corner of the vector space.

As logical as it may seem, this operation is not trivial, because cases are similar to multiple corners, especially when they are positioned close to the center of the vector space. It is again Ragin (2005, 2008b) who developed the transformation technique from fuzzy sets to the crisp corners of the truth table, based on the fundamental property that:

“…each case can have only a single membership score greater than 0.5 in the logically possible combinations formed from a given set of causal conditions.”

Ragin demonstrates this process using a simple dataset, which can be found in the package QCA, along with a dedicated help file describing each column. The outcome is “W” (weak class voting), and the causal conditions are “A” (affluent countries), “I” (substantial levels of income inequality), “M” (high percentage of workers employed in manufacturing), and “U” (countries with strong unions).

data(NF)
NF
     A   I   M   U   W
AU 0.9 0.7 0.3 0.7 0.7
BE 0.7 0.1 0.1 0.9 0.7
DK 0.7 0.3 0.1 0.9 0.1
FR 0.7 0.9 0.1 0.1 0.9
DE 0.7 0.9 0.3 0.3 0.6
IE 0.1 0.7 0.9 0.7 0.9
IT 0.3 0.9 0.1 0.7 0.6
NL 0.7 0.3 0.1 0.3 0.9
NO 0.7 0.3 0.7 0.9 0.1
SE 0.9 0.3 0.9 1.0 0.0
UK 0.7 0.7 0.9 0.7 0.3
US 1.0 0.9 0.3 0.1 1.0

Particular fuzzy scores make the coordinates of each case close to multiple corners, but only one corner is the closest. Cases have membership scores for each corner, and Ragin observed that only one corner has a membership score greater than 0.5, provided that none of the individual fuzzy scores is equal to exactly 0.5 (maximum ambiguity). It is important, especially in the calibration phase, to make sure that no score is equal to 0.5, otherwise the transformation to crisp truth tables becomes impossible.
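
Given the data above, a quick base R check confirms that none of the condition scores is equal to 0.5:

any(NF[, 1:4] == 0.5)
[1] FALSE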

The first step of the procedure is to create a matrix with all observed cases on the rows and all possible combinations from the truth table on the columns. The next step is to calculate inclusion scores for each case in each truth table configuration, to decide which of the truth table rows has an inclusion score above 0.5. There are \(2^4 = 16\) possible configurations in the truth table, and as many columns in the matrix.

For reasons of space, the columns are labeled with their corresponding row numbers from the truth table; in full notation, number 1 means 0000, number 2 means 0001, number 3 means 0010, and so on.

Table 7.2: Ragin’s matrix
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
AU 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.3 0.3 0.3 0.3 0.3 0.7 0.3 0.3
BE 0.1 0.3 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.7 0.1 0.1 0.1 0.1 0.1 0.1
DK 0.1 0.3 0.1 0.1 0.1 0.3 0.1 0.1 0.1 0.7 0.1 0.1 0.1 0.3 0.1 0.1
FR 0.1 0.1 0.1 0.1 0.3 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.7 0.1 0.1 0.1
DE 0.1 0.1 0.1 0.1 0.3 0.3 0.3 0.3 0.1 0.1 0.1 0.1 0.7 0.3 0.3 0.3
IE 0.1 0.1 0.3 0.3 0.1 0.1 0.3 0.7 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
IT 0.1 0.1 0.1 0.1 0.3 0.7 0.1 0.1 0.1 0.1 0.1 0.1 0.3 0.3 0.1 0.1
NL 0.3 0.3 0.1 0.1 0.3 0.3 0.1 0.1 0.7 0.3 0.1 0.1 0.3 0.3 0.1 0.1
NO 0.1 0.3 0.1 0.3 0.1 0.3 0.1 0.3 0.1 0.3 0.1 0.7 0.1 0.3 0.1 0.3
SE 0.0 0.1 0.0 0.1 0.0 0.1 0.0 0.1 0.0 0.1 0.0 0.7 0.0 0.1 0.0 0.3
UK 0.1 0.1 0.3 0.3 0.1 0.1 0.3 0.3 0.1 0.1 0.3 0.3 0.1 0.1 0.3 0.7
US 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.1 0.1 0.1 0.7 0.1 0.3 0.1

Calculating membership scores in truth table configurations is rather straightforward. As an example, the first configuration in the truth table (the first column in table 7.2) is 0000, which in product notation is \({\sim}\)A\(\cdot{\sim}\)I\(\cdot{\sim}\)M\(\cdot{\sim}\)U.

The fuzzy scores for the first country (Austria) are:

NF[1, 1:4]  # the same can be produced with NF["AU", 1:4]
     A   I   M   U
AU 0.9 0.7 0.3 0.7

Calculating its inclusion score in the first truth table configuration (which should be equal to 0.1) involves a simple negation of each score, then taking the minimum, using the familiar fuzzyand() intersection function, or perhaps even more simply with:

compute(~A~I~M~U, data = NF[1, ])
[1] 0.1
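
The same score can be obtained with fuzzyand(), explicitly negating each condition (a sketch, assuming fuzzyand() accepts any number of membership vectors):

using(NF[1, ], fuzzyand(1 - A, 1 - I, 1 - M, 1 - U))
[1] 0.1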

The only truth table configuration where Austria has a membership score above 0.5 is number 14, which is 1101. When calculating the fuzzy intersection for this configuration, only the third causal condition M is negated:

compute(AI~MU, data = NF[1, ])
[1] 0.7
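
Repeating this intersection for every case and every configuration reproduces the full matrix from table 7.2. A compact base R sketch (not the package’s internal implementation), negating each condition wherever the configuration contains a 0:

tt <- createMatrix(noflevels = rep(2, 4))
mem <- apply(tt, 1, function(row) {
    apply(NF[, 1:4], 1, function(x) min(ifelse(row == 1, x, 1 - x)))
})
dim(mem) # 12 cases on the rows, 16 configurations on the columns
[1] 12 16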

Ragin’s procedure, as simple and intuitive as it may seem, involves calculating all possible membership scores for all cases in all truth table configurations. When the number of causal conditions is small, this is easy for any decent computer, but the more causal conditions are added to the model, the more difficult it becomes. That happens because the number of rows in the truth table increases exponentially, with the powers of 2, quickly reaching a physical limit even for strong computers.

The required memory more than doubles for each new condition (the truth table grows not only on the rows, but also on the columns), and the calculation time becomes prohibitively slow, very soon grinding to a halt. But looking at table 7.2, it is clear that not all columns have membership scores above 0.5. Especially for large(r) truth tables, most configurations lack any empirical information, due to the issue of limited diversity.

For as few as 10 causal conditions, a truth table has \(2^{10} = 1024\) rows, and more than 1000 of them would be empty (no empirical information from the observed cases). For 20 causal conditions, more than 1 million rows are empty, and they require not only a lot of memory but also pointless calculations that consume a lot of time.

Ragin’s procedure can be greatly improved by further taking note that a case can have only one membership score above 0.5. To find the configurations of interest for each case, a simple dichotomization does the job:

ttrows <- apply(NF[, 1:4], 2, function(x) as.numeric(x > 0.5))
rownames(ttrows) <- rownames(NF)
ttrows
   A I M U
AU 1 1 0 1
BE 1 0 0 1
DK 1 0 0 1
FR 1 1 0 0
DE 1 1 0 0
IE 0 1 1 1
IT 0 1 0 1
NL 1 0 0 0
NO 1 0 1 1
SE 1 0 1 1
UK 1 1 1 1
US 1 1 0 0

These are exactly the truth table configurations where the cases have membership scores above 0.5. The code below transforms their binary representation into decimal notation, using the matrix multiplication operator %*%:

drop(ttrows %*% c(8, 4, 2, 1)) + 1
AU BE DK FR DE IE IT NL NO SE UK US 
14 10 10 13 13  8  6  9 12 12 16 13 
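
The weight vector c(8, 4, 2, 1) is simply the decreasing powers of 2, so the same conversion generalizes to any number of binary conditions:

k <- ncol(ttrows)
drop(ttrows %*% 2^((k - 1):0)) + 1 # identical to the result above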

Ragin (2008b, 138) warns against a mechanical dichotomization of fuzzy sets, and with good reason. While dichotomization should not be used to convert fuzzy data to crisp sets, it can be used to quickly identify the truth table configurations where the cases belong, with a dramatic improvement in performance.

By not verifying all possible configurations from the truth table, Ragin’s procedure becomes much faster, making it possible to generate truth tables for virtually any number of causal conditions, including the parameters of fit (something impossible in its original version).

7.4 Calculating consistency scores

Each configuration in the truth table has a consistency score with the outcome (an inclusion in the outcome set). Table 7.2 presents 16 columns, one for each configuration, and it is easy to see that each such column has as many values as the number of cases in the dataset, which is also the number of values in the outcome.

The inclusion score of a certain configuration in the outcome is calculated using the usual formula from equation (6.3), where X is replaced by the inclusion scores of the cases in that particular configuration (the columns from table 7.2). The truth table for the NF data is presented below:

ttNF <- truthTable(NF, outcome = W, incl.cut = 0.8, show.cases = TRUE)
ttNF

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     A  I  M  U    OUT    n  incl  PRI   cases   
 6   0  1  0  1     0     1  0.760 0.400 IT      
 8   0  1  1  1     1     1  0.870 0.667 IE      
 9   1  0  0  0     1     1  1.000 1.000 NL      
10   1  0  0  1     0     2  0.700 0.438 BE,DK   
12   1  0  1  1     0     2  0.536 0.071 NO,SE   
13   1  1  0  0     1     3  0.971 0.944 FR,DE,US
14   1  1  0  1     1     1  0.821 0.583 AU      
16   1  1  1  1     0     1  0.654 0.100 UK      

What is printed on the screen is only a fraction of the total information contained in the object produced by the function truthTable(). This is essentially a list object with several other components, among which the useful component minmat.

ttNF$minmat
     6   8   9  10  12  13  14  16
AU 0.1 0.1 0.3 0.3 0.3 0.3 0.7 0.3
BE 0.1 0.1 0.1 0.7 0.1 0.1 0.1 0.1
DK 0.3 0.1 0.1 0.7 0.1 0.1 0.3 0.1
FR 0.1 0.1 0.1 0.1 0.1 0.7 0.1 0.1
DE 0.3 0.3 0.1 0.1 0.1 0.7 0.3 0.3
IE 0.1 0.7 0.1 0.1 0.1 0.1 0.1 0.1
IT 0.7 0.1 0.1 0.1 0.1 0.3 0.3 0.1
NL 0.3 0.1 0.7 0.3 0.1 0.3 0.3 0.1
NO 0.3 0.3 0.1 0.3 0.7 0.1 0.3 0.3
SE 0.1 0.1 0.0 0.1 0.7 0.0 0.1 0.3
UK 0.1 0.3 0.1 0.1 0.3 0.1 0.1 0.7
US 0.0 0.0 0.1 0.1 0.1 0.7 0.1 0.1

This is precisely the relevant part of table 7.2, where the column numbers correspond to the row numbers of the empirically observed truth table configurations above. As shown in the previous section, the matrix represents the inclusion of all cases in each configuration, and these scores are used to calculate the overall consistency of each configuration.

To demonstrate, consider the first column, number 6, which corresponds to configuration 0101, the first row displayed in the truth table above. In product notation it is written as “\(\sim\)A\(\cdot\)I\({\cdot}{\sim}\)M\(\cdot\)U”, but since the conditions are single letters it can also be written more simply as “aImU”.

In table 7.3 below, the sum of the scores for the intersection with the outcome W is 1.9, while the sum of the scores in column “aImU” is 2.5, making the ratio between them equal to 0.760, which is the inclusion score from the first row of the truth table object ttNF above.

Table 7.3: Calculating the inclusion of configuration aImU in outcome W
     aImU     W   min(aImU, W)
AU    0.1   0.7            0.1
BE    0.1   0.7            0.1
DK    0.3   0.1            0.1
FR    0.1   0.9            0.1
DE    0.3   0.6            0.3
IE    0.1   0.9            0.1
IT    0.7   0.6            0.6
NL    0.3   0.9            0.3
NO    0.3   0.1            0.1
SE    0.1   0.0            0.0
UK    0.1   0.3            0.1
US    0.0   1.0            0.0

The same value can be obtained with the familiar parameters of fit pof() command:

pof("~AI~MU -> W", data = NF)

           inclS   PRI   covS   covU  
------------------------------------- 
1  ~AI~MU  0.760  0.400  0.279    -   
------------------------------------- 

And even with the fuzzyand() function:

aImU <- ttNF$minmat[, 1] # [, "6"] would also work
using(NF, sum(fuzzyand(aImU, W)) / sum(aImU))
[1] 0.76

The minmat component is thus more than just informative. It can be used for various purposes, including the identification of the so-called deviant cases consistency: cases that have an inclusion in the configuration higher than 0.5, but an inclusion in the outcome lower than 0.5.
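
Before introducing the dedicated argument, such cases can be spotted manually with a base R sketch: a case is deviant when its membership in an observed configuration exceeds 0.5 while its membership in the outcome W stays below 0.5:

rownames(NF)[apply(ttNF$minmat > 0.5, 1, any) & NF$W < 0.5]
[1] "DK" "NO" "SE" "UK"

The dedicated argument dcc automates this: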

truthTable(NF, outcome = W, incl.cut = 0.8, show.cases = TRUE,
           dcc = TRUE)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency
  DCC: deviant cases consistency

     A  I  M  U    OUT    n  incl  PRI   DCC  
 6   0  1  0  1     0     1  0.760 0.400      
 8   0  1  1  1     1     1  0.870 0.667      
 9   1  0  0  0     1     1  1.000 1.000      
10   1  0  0  1     0     2  0.700 0.438 DK   
12   1  0  1  1     0     2  0.536 0.071 NO,SE
13   1  1  0  0     1     3  0.971 0.944      
14   1  1  0  1     1     1  0.821 0.583      
16   1  1  1  1     0     1  0.654 0.100 UK   

The dcc argument is a new one, and complements the argument show.cases. It prints the cases that are deviant for consistency, and it does so if and only if the argument show.cases is activated. When both are activated, the printing method chooses which cases to print: the deviant ones instead of the normal ones.

The graphical user interface has a correspondent checkbox called deviant cases, that is activated only when the checkbox show cases is checked.

7.5 The OUTput value

In a situation where all cases from a given configuration are perfectly consistent with the outcome, the column OUT will be assigned the value from the data.

More difficult are the situations where the consistency is not perfect, having values below 1. Back in the csQCA days, a single case from the same configuration having a different output was problematic, since the requirement was a perfect inclusion (no cases with the presence of X outside Y).

When the evidence is small (one contradictory case out of, say, 5), that single case can be meaningful, but often the evidence is large enough to justify a natural question: what is the weight of a single contradictory case out of 100? Is that case enough to dismiss a close to perfect (but not full) consistency?

This kind of question, appearing at the same time as fuzzy sets QCA, marked the beginning of what is today the inclusion cut-off. The actual consistency score is compared against a certain threshold (specified by the researcher), and the column OUT is attributed a value of 1 only if the consistency score is higher. Even if not perfectly consistent, an inclusion score of 0.91 is still high, in any case higher than a plausible cut-off value of 0.9. Configurations with a consistency score below this value are attributed a value of 0 in the OUT column.

In the fuzzy version of the Lipset data, no truth table configuration has a perfect inclusion score; therefore, in order to have at least some configurations assigned a positive output, the argument incl.cut needs to be lowered:

data(LF) # if not already loaded
truthTable(LF, outcome = SURV, incl.cut = 0.8)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     0     3  0.216 0.000
 2    0   0   0   0   1     0     2  0.278 0.000
 5    0   0   1   0   0     0     2  0.521 0.113
 6    0   0   1   0   1     0     1  0.529 0.228
22    1   0   1   0   1     1     2  0.804 0.719
23    1   0   1   1   0     0     1  0.378 0.040
24    1   0   1   1   1     0     2  0.709 0.634
31    1   1   1   1   0     0     1  0.445 0.050
32    1   1   1   1   1     1     4  0.904 0.886

It should be clear that the output value from the truth table is not the same thing as the outcome column from the original data, even for binary crisp data. Not only because the outcome might have fuzzy values while the output must be binary, but most importantly because the outcome is given, while the truth table output is assigned, sometimes against evidence from the data.

So far, the inclusion cut-off argument incl.cut was successfully used to produce the required two values of the output, 0 and 1. But in the former crisp sets procedure there was another possibility for the output value, namely the contradiction. That happens when the evidence is not enough for a positive output, but neither is it for a negative one.

To allow for such situations, the argument incl.cut accepts two values (two thresholds): the first, above which the output is assigned a value of 1, and the second, below which the output is assigned a value of 0. Such a length 2 numerical vector has the form c(ic1, ic0), where ic1 is the inclusion cut-off for a positive output, and ic0 the inclusion cut-off for a negative output. If not otherwise specified, the value of ic0 is set equal to the value of ic1.

As an example, let us have a look at the truth table for the negation of the outcome (note the tilde in front of the outcome’s name \(\sim\)SURV). In the graphical user interface, the checkbox negate outcome does the same thing:

truthTable(LF, outcome = ~SURV, incl.cut = 0.8)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     1     3  1.000 1.000
 2    0   0   0   0   1     1     2  0.982 0.975
 5    0   0   1   0   0     1     2  0.855 0.732
 6    0   0   1   0   1     1     1  0.861 0.772
22    1   0   1   0   1     0     2  0.498 0.281
23    1   0   1   1   0     1     1  0.974 0.960
24    1   0   1   1   1     0     2  0.495 0.366
31    1   1   1   1   0     1     1  0.971 0.950
32    1   1   1   1   1     0     4  0.250 0.106

As expected, the configurations which were previously coded with a positive output of 1 are now coded with a negative output of 0, with one exception. Configuration number 24 (10111) has a value of 0 here, but previously (for the normal outcome) it was also coded with 0, due to its low consistency of 0.709 (below the inclusion cut-off of 0.8). For the negation of the outcome, its inclusion score of 0.495 is also very low.

Something happens with this configuration, since there is not enough evidence for either a positive output or a negative one.

It looks like a contradiction, but a single inclusion cut-off does not solve the puzzle. This is the purpose of the second value in the incl.cut argument, with the following decision process:

  • if the inclusion score is greater than ic1, the output value is coded to 1
  • if the inclusion score is lower than ic0, the output value is coded to 0
  • when the inclusion score is between ic0 and ic1 (greater than ic0 but lower than ic1), the output value is coded as a contradiction, with the letter C:
truthTable(LF, outcome = SURV, incl.cut = c(0.8, 0.6))

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     0     3  0.216 0.000
 2    0   0   0   0   1     0     2  0.278 0.000
 5    0   0   1   0   0     0     2  0.521 0.113
 6    0   0   1   0   1     0     1  0.529 0.228
22    1   0   1   0   1     1     2  0.804 0.719
23    1   0   1   1   0     0     1  0.378 0.040
24    1   0   1   1   1     C     2  0.709 0.634
31    1   1   1   1   0     0     1  0.445 0.050
32    1   1   1   1   1     1     4  0.904 0.886

All of these arguments might seem to provide everything needed to create a truth table. There is, however, one more situation not covered by any of them, stemming from the steady expansion of the QCA methodology from the small-N world to large-N situations, with hundreds of cases in the dataset.

In quantitative methodology, a certain value is never simply smaller or greater than another, because of the inherent uncertainty generated by random sampling. Each dataset is unique, drawn from a very large population, and some findings can be sample specific: the same value can be smaller in a different random sample. The usual strategy is to bring sufficient statistical evidence to declare one value significantly greater than another, entering the realm of inferential statistics.

To dismiss possible suspicions that inclusion scores (greater than a certain cut-off) are sample specific, the truth table function has another argument called inf.test, which assigns values in the OUT column based on an inferential test. It is a binomial test, and it only works with binary crisp data (no fuzzy sets, no multi-values).

The argument is specified as a vector of length 2 or a single string containing both, with the type of test (currently only "binom") and the critical significance level.

truthTable(LC, outcome = SURV, incl.cut = 0.8, inf.test = "binom, 0.05")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency
pval1: p-value for alternative hypothesis inclusion > 0.8

     DEV URB LIT IND STB   OUT    n  incl  PRI   pval1
 1    0   0   0   0   0     0     3  0.000 0.000 1.000
 2    0   0   0   0   1     0     2  0.000 0.000 1.000
 5    0   0   1   0   0     0     2  0.000 0.000 1.000
 6    0   0   1   0   1     0     1  0.000 0.000 1.000
22    1   0   1   0   1     0     2  1.000 1.000 0.640
23    1   0   1   1   0     0     1  0.000 0.000 1.000
24    1   0   1   1   1     0     2  1.000 1.000 0.640
31    1   1   1   1   0     0     1  0.000 0.000 1.000
32    1   1   1   1   1     0     4  1.000 1.000 0.410

It seems that all output values have been coded to zero.
Suggestion: lower the inclusion score for the presence of the outcome,
the relevant argument is "incl.cut" which now has a value of 0.8.

This command performs a binomial statistical test, using a 5% significance level. Only the configurations with an inclusion score significantly greater than 0.8 would be coded with a positive output of 1.

The example is provided for demonstration purposes only. Due to the very small number of cases for each configuration (from 1 to at most 4), the binomial test obviously does not produce any significant results. The probability of error pval1 is too high (100% in many cases); therefore, the null hypothesis that the true (population) inclusion score is not greater than 0.8 cannot be rejected. This kind of statistical test requires large(r) samples to draw meaningful conclusions, but if such data were available, specifying inf.test should be done as in the example above.
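
Incidentally, the reported p-values correspond to one-sided binomial tests and can be reproduced with the base R function binom.test(). For configuration 32, with 4 cases all consistent with the outcome:

binom.test(4, 4, p = 0.8, alternative = "greater")$p.value
[1] 0.4096

which is the rounded value of 0.410 displayed in the column pval1.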

A more important argument, albeit rarely used, is the frequency cut-off n.cut: the minimum number of cases under which a configuration is declared a remainder (the output value is assigned a “?” sign), even if it has a consistency score above the inclusion cut-off. As QCA data belongs to the so-called “small-N” world, n.cut has a default value of 1, but in situations where very large data are available it might make sense to raise the frequency cut-off to more than 1 case per configuration, as sketched below.
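
For illustration, with the binary crisp Lipset data a frequency cut-off of 2 would turn the single-case configurations from the earlier truth table (rows 6, 23 and 31) into remainders (output omitted):

truthTable(LC, outcome = SURV, n.cut = 2)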

The final argument that needs introducing is a logical one called use.letters. When the names of the causal conditions are too long, they can be replaced by alphabetical letters. Setting this argument to TRUE automatically replaces each causal condition’s name with an upper case letter: A for the first column, B for the second, and so on (naturally, there should be no more than 26 causal conditions in the dataset).
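
A minimal illustration (output omitted), where the conditions DEV, URB, LIT, IND and STB would appear relabeled as the letters A to E:

truthTable(LC, outcome = SURV, use.letters = TRUE)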

7.6 Excluding configurations

We’ve seen, in the previous section, that the output column in the truth table is automatically coded with mainly three values: positive (1) and negative (0) for those configurations where empirical evidence exists, and unknown (?) for those remainder configurations lacking any empirical evidence.

This is the result of the so-called limited diversity phenomenon: the empirical evidence clusters in very few rows of an exponentially large truth table, and most configurational rows lack the empirical evidence that would allow researchers to allocate a value of 1 or 0.

Chapter 8 will provide more details about the procedural aspects of the truth table minimization process. For the moment, suffice it to say that these unknown remainder configurations are sometimes included in the minimization process alongside the positive configurations.

But not all remainders are equal, and almost the entire body of QCA literature discusses various ways to include some (but not all) remainders, and block others from the minimization process. Section 8.6 will present just how diverse these remainders are, and the reasons why some of them have to be blocked, excluded from the minimization.

Previous versions of the package QCA placed the exclusion process in the minimization phase itself, as an argument called exclude in function minimize(). Starting with version 3.7, this argument has been moved to function truthTable(), where it more naturally belongs.

The classical minimization process concentrates exclusively on the positive and remainder rows, and the only way to exclude some configurations from the process is to pre-allocate them a negative output value of 0. This signals that the outcome phenomenon will not be produced in those particular configurations of causal conditions.

In this situation are, for instance, the so-called impossible remainders: combinations of causal factors that can never appear, such as a pregnant male, or a person living simultaneously in two different places. Since it is absolutely impossible for such a causal configuration to appear, it is also impossible for the outcome to be instantiated under those conditions (no effect without a cause).

In such situations, it is not the empirical evidence that drives the allocation of that value, but human judgement and logic. Naturally, such allocations should be clearly spelled out and duly justified in the methodological part of any research; otherwise the exclusion process would be nothing but a solution tweaking tool.

The argument exclude expects a numeric vector of row numbers from the truth table; a minimal sketch follows below, and a practical example of how to use this argument is presented in section 8.6.
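
A minimal sketch of its usage (the row numbers below are purely illustrative, not a substantive judgement about the Lipset data):

ttLCx <- truthTable(LC, outcome = SURV, exclude = c(29, 30))

In the resulting truth table, the remainder rows 29 and 30 would be pre-allocated an output value of 0 instead of the question mark.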

7.7 Consistency using mixed sets

Theory makes a distinction between crisp and fuzzy sets, and a further distinction between binary crisp and multi-value (crisp) sets. For some reason, probably dating back to when some QCA packages created separate truth tables for crisp, multi-value and fuzzy sets, many researchers remain under the impression that the analyses themselves must be carried out separately for each of these types.

In the package QCA, there are no separate truth table functions for different types of sets. The same function truthTable() can be used for crisp, multi-value and fuzzy data. However, this does not seem to be obvious enough, since there are still situations where researchers believe the whole dataset (with all its conditions) should be calibrated either as binary, or multi-value, or fuzzy.

This misconception should be clarified, as the function truthTable() can easily accommodate a dataset containing all types of calibrations at once, for both the causal conditions and the outcome.

set.seed(12345)
dfm <- data.frame(cs = rbinom(n = 20, size = 1, prob = 0.5),
                  fs = runif(n = 20, min = 0, max = 1),
                  mv = sample(0:2, size = 20, replace = TRUE),
                  outmv = sample(0:2, size = 20, replace = TRUE))
dfm
   cs         fs mv outmv
1   1 0.45372807  1     2
2   1 0.32675241  1     2
3   1 0.96541532  2     0
4   1 0.70748188  0     1
5   0 0.64454264  0     2
6   0 0.38982848  2     0
7   0 0.69854364  1     2
8   1 0.54405786  0     1
9   1 0.22646718  0     2
10  1 0.48455776  2     0
11  0 0.79300717  2     1
12  0 0.00598763  0     2
13  1 0.18771245  2     2
14  0 0.68183362  0     1
15  0 0.37010412  0     2
16  0 0.36162557  1     2
17  0 0.86879490  1     2
18  0 0.90415467  2     1
19  0 0.61742457  2     2
20  1 0.13403163  0     2

Using such a dataset containing all kinds of mixed calibrations, the truth table command is still the same:

truthTable(dfm, outcome = outmv[2], sort.by = "incl")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     cs fs mv   OUT    n  incl  PRI  
 2   0  0  1     1     1  1.000 1.000
 5   0  1  1     1     2  1.000 1.000
 8   1  0  1     1     2  1.000 1.000
 1   0  0  0     0     2  0.862 0.862
 7   1  0  0     0     2  0.687 0.687
 4   0  1  0     0     2  0.600 0.600
 9   1  0  2     0     2  0.596 0.596
 3   0  0  2     0     1  0.295 0.295
 6   0  1  2     0     3  0.228 0.228
10   1  1  0     0     2  0.224 0.224
12   1  1  2     0     1  0.115 0.115

For instance, let us calculate the consistency of the configuration “0 0 0” from row number 1: that is, the negation of the crisp set “cs”, the negation of the fuzzy set “fs” and the value 0 from the multi-value condition “mv”. This has a consistency of 0.862 in the inclusion column.

It should now be clear, using the information from section 3.3.1, that the negation of both crisp and fuzzy sets is trivially done using the “1 -” notation, subtracting the calibrated scores from 1:

using(dfm, 1 - cs)
 [1] 0 0 0 0 1 1 1 0 0 0 1 1 0 1 1 1 1 1 1 0
using(dfm, 1 - fs)
 [1] 0.54627193 0.67324759 0.03458468 0.29251812 0.35545736 0.61017152
 [7] 0.30145636 0.45594214 0.77353282 0.51544224 0.20699283 0.99401237
[13] 0.81228755 0.31816638 0.62989588 0.63837443 0.13120510 0.09584533
[19] 0.38257543 0.86596837

Multi-value sets cannot be negated using this method. In fact, there is no such thing as the negation of a multi-value condition; instead, specific values are recoded to a binary crisp notation:

using(dfm, recode(mv, "0 = 1; else = 0"))
 [1] 0 0 0 1 1 0 0 1 1 0 0 1 0 1 1 0 0 0 0 1

Such a recodification can be achieved in various ways, for instance:

using(dfm, as.numeric(mv == 0))
 [1] 0 0 0 1 1 0 0 1 1 0 0 1 0 1 1 0 0 0 0 1

Since binary crisp sets are only a particular instance of a multi-value set, the very same recoding / inversion tactics can be used for crisp sets as well. The consistency of all rows from the dataset, in this particular truth table configuration, can be calculated as:

using(dfm, pmin(1 - cs, 1 - fs, as.numeric(mv == 0)))
 [1] 0.0000000 0.0000000 0.0000000 0.0000000 0.3554574 0.0000000
 [7] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.9940124
[13] 0.0000000 0.3181664 0.6298959 0.0000000 0.0000000 0.0000000
[19] 0.0000000 0.0000000

All these procedures are presented for a thorough understanding of the inner process behind the truth table construction using mixed sets, but the same result is obtained using the simpler function compute() from package admisc:

c1 <- compute(cs[0]*fs[0]*mv[0], data = dfm)
c1
 [1] 0.0000000 0.0000000 0.0000000 0.0000000 0.3554574 0.0000000
 [7] 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.9940124
[13] 0.0000000 0.3181664 0.6298959 0.0000000 0.0000000 0.0000000
[19] 0.0000000 0.0000000

The multi-value outcome gets recoded to a binary crisp form, using the same recodification process:

outrec <- using(dfm, as.numeric(outmv == 2))
outrec
 [1] 1 1 0 0 1 0 1 0 1 0 0 1 1 0 1 1 1 0 1 1

And finally, the consistency score for this configuration is calculated as usual:

sum(pmin(c1, outrec)) / sum(c1)
[1] 0.8615182
pof(cs[0]*fs[0]*mv[0] -> outmv[2], data = dfm) # the same result

                      inclS   PRI   covS   covU  
------------------------------------------------ 
1  cs[0]*fs[0]*mv[0]  0.862  0.862  0.165    -   
------------------------------------------------ 

7.8 Additional information in the truth table

One detail that is probably less known, and especially useful for replication purposes, is that a truth table object contains all the information necessary to trace its creation. It contains a component named call, storing the exact command used to produce it, and also the original dataset used as input for the function truthTable(), in the component initial.data.

Researchers can thus exchange either the initial data together with the exact set of commands used to produce the truth table, or, perhaps more simply, the truth table itself, since it already contains all relevant information:

ttLF <- truthTable(LF, outcome = SURV, incl.cut = c(0.8, 0.6))
names(ttLF)
 [1] "tt"           "indexes"      "noflevels"    "initial.data"
 [5] "recoded.data" "cases"        "DCC"          "minmat"      
 [9] "categories"   "multivalue"   "options"      "fs"          
[13] "call"        

References

Lazarsfeld, Paul. 1937. “Some Remarks on Typological Procedures in Social Research.” Zeitschrift für Sozialforschung 6: 1–24.
Ragin, Charles C. 2000. Fuzzy-Set Social Science. Chicago; London: University of Chicago Press.
———. 2005. “From Fuzzy Sets to Crisp Truth Tables.” http://www.compasss.org/wpseries/Ragin2004.pdf.
———. 2008b. Redesigning Social Inquiry. Fuzzy Sets and Beyond. Chicago; London: University of Chicago Press.
Shannon, Claude Elwood. 1940. “A Symbolic Analysis of Relay and Switching Circuits.” Master’s thesis, Massachusetts Institute of Technology, Dept. of Electrical Engineering.