Chapter 8 The Logical Minimization

The logical, or Boolean, minimization process is the core of the QCA methodology: it seeks the simplest possible expression associated with the explained value of an output. The term expression is used here as a synonym for a sum of products, a union of intersections, or a disjunction of conjunctions (of causal conditions). It will also be used as a synonym for a causal configuration, since a configuration is a conjunction (a product) of causal conditions.

McCluskey (1956) was the first to introduce this procedure, building on previous work by Quine (1952, 1955), leading to the algorithm known today as “Quine-McCluskey” (from here on, QMC). Their work was dedicated to switching circuits in electrical engineering (open and closed gates, i.e. Boolean states), where for each combination of such gates in the circuit design the output is either present or absent.

But instead of designing a circuit for every possible combination where the output occurs, McCluskey observed that it is much cheaper to specify the simplest possible equivalent expression, which also improves the overall performance of the circuit. The idea is quite simple, and it is based on the following Boolean algebra theorem:

\[\begin{equation} \mbox{A}\cdot\mbox{B} + \mbox{A}\cdot{\sim}\mbox{B} = \mbox{A}(\mbox{B} + {\sim}\mbox{B}) = \mbox{A} \tag{8.1} \end{equation}\]
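Equation (8.1) can be checked mechanically over all truth value combinations; the following base R lines are an illustration only:

A <- c(0, 0, 1, 1)            # all four combinations of A and B
B <- c(0, 1, 0, 1)
lhs <- (A & B) | (A & !B)     # A*B + A*~B
all(lhs == A)                 # TRUE: the expression reduces to A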

Given any two expressions, if they differ by exactly one literal, that literal can be eliminated. Figure 8.1 presents a comparison of two expressions with three causal conditions each. Since the condition B is the only one that differs, it can be eliminated and the initial two expressions can be replaced by A\(\cdot\)C.

Figure 8.1: Boolean minimization

This is the simplest possible example of Boolean minimization, and the QMC algorithm extends this operation to all possible pairs of expressions. The first part of the original algorithm consists of successive iterations of the following operations (a minimal sketch of one iteration follows the list below):

  • compile a list of all possible pairs of expressions
  • check which pairs differ by exactly one literal, and minimize them
  • the minimized pairs, plus the surviving unminimized expressions, enter the next iteration
  • repeat these operations until nothing can be further minimized
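To make the pairwise comparison more concrete, here is a minimal, self-contained sketch of one such iteration, written in base R for illustration only (the package itself uses a far more efficient implementation); the conditions A, B, C and the three starting expressions are hypothetical.

exprs <- rbind(c(A = 1, B = 1, C = 1),   # A*B*C
               c(A = 1, B = 0, C = 1),   # A*~B*C
               c(A = 0, B = 1, C = 0))   # ~A*B*~C

one_iteration <- function(exprs) {
    pairs <- combn(nrow(exprs), 2, simplify = FALSE)
    minimized <- list()
    survived <- rep(TRUE, nrow(exprs))
    for (p in pairs) {
        different <- which(exprs[p[1], ] != exprs[p[2], ])
        if (length(different) == 1) {        # the pair differs by exactly one literal
            implicant <- exprs[p[1], ]
            implicant[different] <- NA       # NA marks the eliminated condition
            minimized[[length(minimized) + 1]] <- implicant
            survived[p] <- FALSE             # both parent expressions are absorbed
        }
    }
    rbind(do.call(rbind, minimized), exprs[survived, , drop = FALSE])
}

one_iteration(exprs)   # rows 1 and 2 collapse to A*C (B is eliminated), row 3 survives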

The final surviving, minimal expressions are called prime implicants (PIs). The second part of the QMC algorithm creates the so-called prime implicants chart: a matrix with the prime implicants on the rows and the initial expressions on the columns. The PI chart from Ragin’s original book (1987, 97) is a very good example:

Figure 8.2: Ragin’s prime implicants chart

To solve the PI chart, the task is to find all possible combinations of rows (PIs) that cover all columns (initial expressions). In this example, there is a single combination, of the first and the third PI, that together cover all initial expressions. The final solution is: A\(\cdot\)C + B\(\cdot\)c.

This is obviously a very simple PI chart, and the solution is straightforward. More complicated examples have more rows, and it is possible to obtain multiple solutions from multiple combinations of prime implicants that cover all columns. The jargon is quite technical (the columns are sometimes called primitive expressions), but it simply means that a solution is valid when the combination of prime implicants accounts for all observed empirical evidence.

Each prime implicant is a sufficient expression in itself, but it does not always account for all primitive expressions. In such a situation, it needs (an)other sufficient prime implicant(s) accounting for the other columns, so that together they disjunctively (union, logical OR) create a larger sufficient expression accounting for all observed empirical evidence.
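As an illustration of how a PI chart is solved, the following base R sketch searches, by brute force, for the smallest combinations of rows covering all columns. The chart itself is hypothetical (abstract labels PI1 to PI4 and p1 to p4), not Ragin’s; it is built so that two alternative minimal solutions exist.

chart <- matrix(c(TRUE,  TRUE,  FALSE, FALSE,    # PI1 covers p1 and p2
                  FALSE, TRUE,  TRUE,  FALSE,    # PI2 covers p2 and p3
                  FALSE, FALSE, TRUE,  TRUE,     # PI3 covers p3 and p4
                  TRUE,  FALSE, FALSE, TRUE),    # PI4 covers p1 and p4
                nrow = 4, byrow = TRUE,
                dimnames = list(paste0("PI", 1:4), paste0("p", 1:4)))

solveBrute <- function(chart) {
    for (k in seq_len(nrow(chart))) {            # try 1 row, then 2 rows, ...
        combos <- combn(nrow(chart), k, simplify = FALSE)
        covering <- Filter(
            function(rows) all(colSums(chart[rows, , drop = FALSE]) > 0),
            combos
        )
        if (length(covering) > 0) {              # stop at the smallest size that covers everything
            return(lapply(covering, function(rows) rownames(chart)[rows]))
        }
    }
}

solveBrute(chart)   # two minimal solutions: PI1 + PI3, and PI2 + PI4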

The QMC minimization procedure is a two-level algorithm, the first phase identifying the prime implicants and the second solving the PI chart. Both are quite technical and could be described in a separate annex.

8.1 Command Line and GUI Dialog

Version 3.0 of package QCA brought important changes to the structure of the main minimize() function. As with all previous updates of the package, a lot of effort has been invested to make these changes backwards compatible, so that old code still works in the new version.

The new structure is simpler, clearer and, most of all, more logical. The previous one had been perpetuated from version to version, for historical reasons, since the first version(s) of the package. In the beginning, the minimize() function took a dataset as input, for which an outcome and some conditions were specified, and it followed the normal procedure of constructing a truth table and performing the minimization. Later, a separate truth table function was created, which minimize() was calling behind the scenes.

But the natural input for the minimization procedure is a truth table, not a dataset. The function was designed to detect whether the input is a dataset or a truth table and, if a dataset, to call the truthTable() function to create the actual input for the minimization. It amounts to the same thing, just less visible to the user.

Most of the arguments in the minimize() function had nothing to do with the minimization process itself; they were simply passed on to the truthTable() function. The only benefit of directly using a dataset instead of a truth table is the specification of multiple outcomes to mimic CNA (details in section 10.2).

Since they did not belong to the minimization function per se, arguments such as outcome, conditions, incl.cut, n.cut and show.cases have been removed as formal arguments of the function minimize(). The change is backwards compatible: it is still possible to specify these arguments, but they are passed to the function truthTable(). The current function has the following form:

minimize(input, include = "", dir.exp = NULL, details = FALSE, pi.cons = 0,
         sol.cons = 0, all.sol = FALSE, row.dom = FALSE, first.min = FALSE,
         max.comb = 0, method = "CCubes", ...)

The current formal arguments are specific to the minimization process, as should logically be the case. The other, previously formal arguments are now detected via the three dots “...” argument and dealt with according to the situation. Truth table construction and minimization are two separate activities, and the new structure makes this more obvious.
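A brief illustration of the backwards compatibility described above, using the binary crisp version of the Lipset data shipped with the package; the two minimize() calls below should be equivalent, the second silently forwarding the truth table related arguments to truthTable() via the three dots argument.

data(LC)   # binary crisp version of the Lipset data
minimize(truthTable(LC, outcome = SURV, incl.cut = 0.9))   # truth table input (recommended)
minimize(LC, outcome = SURV, incl.cut = 0.9)               # dataset input, arguments forwarded to truthTable()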

Among other changes, the first argument is now called input instead of data, driven by the same idea that the normal input is not a dataset but a truth table. The same change is also reflected in the graphical user interface dialog, which can be opened with the following menu:

Analyse / Minimization

Figure 8.3: The starting minimization dialog

This is one of the most complex dialogs in the entire graphical user interface, and it makes full use of the newly added feature to display and use multiple objects at the same time. Many of the options will be presented in the next sections; for the moment, we will cover the absolute basics.

First of all, in the upper left part of the dialog there is a radio button with two options, Dataset and TT, with the default on TT, meaning the truth table. Figure 8.3 displays nothing in the space underneath that radio button because no truth tables have been assigned to particular objects yet. Truth tables can be created for visualization purposes only, but once the user is certain a truth table is ready for minimization, it should be assigned to an object.

Before creating one to see the difference, note that since all versions of the Lipset data (binary crisp, multi-value and fuzzy) are already loaded from the previous sections, a click on the Dataset option of the radio button displays those datasets, along with the columns of the selected dataset, as in figure 8.4 below.

Figure 8.4: Dataset option in the minimization dialog

The greyed out options from the left will only become available when the Dataset radio option is selected. As the input is not a truth table but a regular dataset, these options will become editable, just like their exact counterparts in the truth table dialog.

Back to the main dialog in figure 8.3, another important change in the user interface is a slight re-arrangement of the check-boxes and textbox inputs. This rearrangement reflects the change in the written command arguments, but more importantly separates the options specific to the truth table from the options specific to the minimization process.

The space to the right of the Include option is empty for the moment, but it will be filled when using directional expectations, with some more examples in section 8.7.

8.2 The Conservative (Complex) Solution

The conservative solution is obtained with the classical QMC algorithm presented at the beginning of this chapter. It takes the configurations where the output is positive and performs the necessary minimizations, in successive iterations, until the simplest prime implicants are generated.

Much like in electrical engineering, the conservative solution is focused on the expressions associated with the presence of the outcome. Back in McCluskey’s days, the purpose of the minimization procedure was (and still is) to create a cheaper circuit that does something. It would be pointless to create an elaborate algorithm for something that does not produce any result.

Rather, the purpose of the classical QMC algorithm is to find the shortest, simplest expression that is equivalent to the initial positive configurations. This is the reason why, despite empirical evidence for configurations where the output is not present, the classical QMC algorithm ignores that evidence and focuses strictly on the configurations with a positive output.

Actually, the QMC algorithm is agnostic not only about the observed empirical evidence for the negative output, but also about configurations with an unknown output (where there is too little or no empirical evidence to assess the value of the output) and about contradictions. All of these are ignored as if they were unknown, and the only empirical evidence used as input for the classical QMC procedure is the set of positive output configurations.

Unless a configuration has its output value changed to positive, it plays no role whatsoever in the minimization process and therefore does not contribute to the final result.

This is the very reason why the solution is called “conservative”: it does not make any assumptions about any configurations other than those with a positive output (most notably, it makes no assumptions about the remainders). It is conservative when compared with the parsimonious solution (presented in the next section), which includes the remainders, hence making counterfactual assumptions for which little or no empirical evidence exists.

It is also called “complex” because, although the solution is a simpler expression than the initial configurations, the parsimonious solution is even simpler (it contains fewer literals). The conservative solution is complex relative to the parsimonious one, and the parsimonious is simple relative to the conservative, but both are equivalent to (and simpler than) the initial configurations.

To have an example of conservative minimization, a truth table has to exist in the workspace (assigned to an object) for function minimize() to find it. Using the crisp version of the Lipset data, it is assigned to object ttLC:

ttLC <- truthTable(LC, outcome = SURV, incl.cut = 0.9, show.cases = TRUE)

Once this command is run, or its equivalent clicks in the truth table dialog from the graphical user interface, the options in the minimization dialog are automatically changed, as shown in figure 8.5.

Figure 8.5: The minimization dialog using an existing truth table

The options above the separator, as well as the outcome and the conditions (all specific to the selected truth table), are still greyed out and cannot be modified. That is because the truth table already exists, so it does not make sense to modify the settings of an already created object. Modifying a truth table, or creating a new one with different options, should be done from the truth table dialog itself, or by modifying the equivalent written command (the new object will immediately become visible in the dialog).

However, the truth table options displayed in the minimization dialog are informative, especially if there are multiple truth tables to choose from. Selecting any existing truth table object automatically refreshes the options used at the time it was created. In this example, we used an inclusion cut-off equal to 0.9 and clicked on the option show cases.

Assigning the result of the minimization to the object consLC is optional. If it is not assigned, the result is simply printed on the screen (for the graphical user interface, in the web R console); otherwise, it is always a good idea to assign the result to an object, for the simple reason that it contains a lot more information than what is printed on the screen.

The minimization dialog, as presented on the screen, amounts to the following automatically generated command, in the command constructor dialog:

Figure 8.6: The minimization command, auto-constructed from the dialog

There are three positive configurations in the object ttLC (numbers 22, 24 and 32; they can be found in the equivalent truth table from section 7.1):
DEV\(\cdot{\sim}\)URB\(\cdot\)LIT\(\cdot{\sim}\)IND\(\cdot\)STB + DEV\(\cdot{\sim}\)URB\(\cdot\)LIT\(\cdot\)IND\(\cdot\)STB + DEV\(\cdot\)URB\(\cdot\)LIT\(\cdot\)IND\(\cdot\)STB

The minimization procedure produces two prime implicants, in a disjunctive expression that is not only simpler than, and equivalent to, the initial three configurations, but most of all is both necessary and sufficient for the outcome:
DEV\(\cdot{\sim}\)URB\(\cdot\)LIT\(\cdot\)STB + DEV\(\cdot\)LIT\(\cdot\)IND\(\cdot\)STB

consLC <- minimize(ttLC, details = TRUE)
consLC

M1: DEV*~URB*LIT*STB + DEV*LIT*IND*STB <-> SURV

                     inclS   PRI   covS   covU   cases 
------------------------------------------------------------------- 
1  DEV*~URB*LIT*STB  1.000  1.000  0.500  0.250  FI,IE; FR,SE 
2   DEV*LIT*IND*STB  1.000  1.000  0.750  0.500  FR,SE; BE,CZ,NL,UK 
------------------------------------------------------------------- 
                 M1  1.000  1.000  1.000 

It may not seem like much, but with only three positive configurations the minimized solution cannot be expected to be more parsimonious than that. As a rule of thumb, the more positive configurations enter the minimization process, the more parsimonious the final solution(s) will be.

Notice that the cases were printed in the parameters of fit table, even though not specifically requested in the call to minimize(). This is possible because the option is inherited from the input truth table ttLC.

Also note that the relation <-> signals both necessity and sufficiency. Since this is a sufficiency solution, all prime implicants are sufficient; the necessity relation appears when the coverage of the solution (in this case 1.000) is at least as high as the inclusion cut-off.

The object consLC is also structured as an R list with many other components, and some of them are going to play a pivotal role in the next sections:

names(consLC)
 [1] "tt"         "options"    "negatives"  "initials"   "PIchart"   
 [6] "primes"     "solution"   "essential"  "inputcases" "pims"      
[11] "IC"         "numbers"    "SA"         "complex"    "call"      

The interesting one for this section is the prime implicants chart:

consLC$PIchart

                   22 24 32
DEV*~URB*LIT*STB   x  x  - 
DEV*LIT*IND*STB    -  x  x 

As expected, the chart shows the three initial primitive expressions, with both prime implicants needed to cover them.
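Two other components are worth a quick look here; their content follows from the names listed above, although the exact structure may differ slightly between package versions.

consLC$solution    # the solution model(s), as vectors of prime implicants
head(consLC$pims)  # prime implicant membership scores, one column per prime implicant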

8.3 What is explained

Before approaching the parsimonious solution, this is the right moment to concentrate on the now defunct argument explain of the function minimize(). The function used to have "1" (the positive output) as the default explained value, which seemed redundant, for what could possibly be explained other than the positive output? Remainders cannot be explained, as they are unobserved; perhaps contradictions, but they are few and seldom encountered.

More plausible is the negative output, and that generates huge confusion. The negative output is often mistaken for the negation of the outcome, but they are very different things. Consider the binary crisp version of the Lipset data:

truthTable(LC, outcome = SURV)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     0     3  0.000 0.000
 2    0   0   0   0   1     0     2  0.000 0.000
 5    0   0   1   0   0     0     2  0.000 0.000
 6    0   0   1   0   1     0     1  0.000 0.000
22    1   0   1   0   1     1     2  1.000 1.000
23    1   0   1   1   0     0     1  0.000 0.000
24    1   0   1   1   1     1     2  1.000 1.000
31    1   1   1   1   0     0     1  0.000 0.000
32    1   1   1   1   1     1     4  1.000 1.000

When negating the outcome, notice how the output values are simply inverted:

truthTable(LC, outcome = ~SURV)

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     1     3  1.000 1.000
 2    0   0   0   0   1     1     2  1.000 1.000
 5    0   0   1   0   0     1     2  1.000 1.000
 6    0   0   1   0   1     1     1  1.000 1.000
22    1   0   1   0   1     0     2  0.000 0.000
23    1   0   1   1   0     1     1  1.000 1.000
24    1   0   1   1   1     0     2  0.000 0.000
31    1   1   1   1   0     1     1  1.000 1.000
32    1   1   1   1   1     0     4  0.000 0.000

This is a situation with perfect consistency scores (either 1 or 0), and indeed explaining the negative output and negating the outcome both lead to exactly the same solutions, hence it is understandable how matters can be confused. But truth tables do not always display perfect consistencies, as in the following two examples:

truthTable(LC, outcome = SURV, conditions = "DEV, URB, LIT, IND")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND   OUT    n  incl  PRI  
 1    0   0   0   0     0     5  0.000 0.000
 3    0   0   1   0     0     3  0.000 0.000
11    1   0   1   0     1     2  1.000 1.000
12    1   0   1   1     0     3  0.667 0.667
16    1   1   1   1     0     5  0.800 0.800

While the upper three configurations have perfect consistencies, notice that the bottom two configurations have less than perfect scores and their output values remain negative after negating the outcome:

truthTable(LC, outcome = ~SURV, conditions = "DEV, URB, LIT, IND")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND   OUT    n  incl  PRI  
 1    0   0   0   0     1     5  1.000 1.000
 3    0   0   1   0     1     3  1.000 1.000
11    1   0   1   0     0     2  0.000 0.000
12    1   0   1   1     0     3  0.333 0.333
16    1   1   1   1     0     5  0.200 0.200

Explaining the negative output for the presence of the outcome (rows "1", "3", "12" and "16" in the first truth table) would definitely not give the same solutions as explaining the positive output for the negation of the outcome (rows "1" and "3" in the second), since the configurations entering the minimization process differ from their complementary set of configurations in the first truth table.

Hopefully, this digression makes it clear that explaining the negative output does not make any sense: it is not the same thing as negating the outcome, and it is meaningless to explain configurations with consistencies below the inclusion cut-off. All of this becomes much clearer when refraining from directly minimizing the data and producing truth tables beforehand.

Conversely, as is most likely the case, users who are not aware of the difference simply trust that the software knows what it is doing. That is always a danger.

For this reason, starting with version 3.0, the package QCA throws an error when trying to explain or include negative outputs. Setting the remainders aside, since by definition they cannot be explained, the only other output value left to discuss is represented by the contradictions.

The difference between explaining and including contradictions is found in the construction of PI charts, where the primitive expressions in the columns are represented by the explained configurations:

  • explaining contradictions (in addition to the positive outputs) results in adding more columns to the prime implicants chart;
  • including the contradictions treats them similarly to the remainders: they contribute to the minimization process but do not affect the PI chart;
  • if neither explained nor included, the contradictions are by default treated as negative output configurations.

As an example, we will again reproduce the truth table for the fuzzy version of the Lipset data, tweaking the inclusion cut-offs to produce more contradictions:

ttLF1 <- truthTable(LF, outcome = SURV, incl.cut = c(0.8, 0.5))
ttLF1

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI  
 1    0   0   0   0   0     0     3  0.216 0.000
 2    0   0   0   0   1     0     2  0.278 0.000
 5    0   0   1   0   0     C     2  0.521 0.113
 6    0   0   1   0   1     C     1  0.529 0.228
22    1   0   1   0   1     1     2  0.804 0.719
23    1   0   1   1   0     0     1  0.378 0.040
24    1   0   1   1   1     C     2  0.709 0.634
31    1   1   1   1   0     0     1  0.445 0.050
32    1   1   1   1   1     1     4  0.904 0.886

First inspect the complex solution and the associated PI chart when explaining the contradictions:

minimize(ttLF1, explain = "1, C")

M1: ~DEV*~URB*LIT*~IND + DEV*LIT*IND*STB + (DEV*~URB*LIT*STB) -> SURV 
M2: ~DEV*~URB*LIT*~IND + DEV*LIT*IND*STB + (~URB*LIT*~IND*STB)
    -> SURV 
minimize(ttLF1, explain = "1, C")$PIchart

                     5  6  22 24 32
~DEV*~URB*LIT*~IND   x  x  -  -  - 
DEV*~URB*LIT*STB     -  -  x  x  - 
DEV*LIT*IND*STB      -  -  -  x  x 
~URB*LIT*~IND*STB    -  x  x  -  - 

The same solution, with the same prime implicants and the same PI chart can be produced by lowering the inclusion cut-off until all contradictions become explained:

ttLF2 <- truthTable(LF, outcome = SURV, incl.cut = 0.5)
minimize(ttLF2)

M1: ~DEV*~URB*LIT*~IND + DEV*LIT*IND*STB + (DEV*~URB*LIT*STB) -> SURV 
M2: ~DEV*~URB*LIT*~IND + DEV*LIT*IND*STB + (~URB*LIT*~IND*STB)
    -> SURV 
minimize(ttLF2)$PIchart

                     5  6  22 24 32
~DEV*~URB*LIT*~IND   x  x  -  -  - 
DEV*~URB*LIT*STB     -  -  x  x  - 
DEV*LIT*IND*STB      -  -  -  x  x 
~URB*LIT*~IND*STB    -  x  x  -  - 

As can be seen, choosing to add the contradictions to the explain argument (along with the positive output) or lowering the inclusion cut-off until the contradictions become explained (they are then allocated a positive output themselves) leads to the same complex solutions.
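For completeness, the second option from the list in this section (including, rather than explaining, the contradictions) would be written as follows; its output is not reproduced here.

minimize(ttLF1, include = "C")   # contradictions treated like remainders, not added to the PI chart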

A similar phenomenon happens when including the remainders:

minimize(ttLF1, explain = "1, C", include = "?")

M1: ~DEV*LIT + DEV*STB -> SURV 
M2: ~DEV*LIT + LIT*STB -> SURV 
M3: DEV*STB + LIT*~IND -> SURV 
M4: LIT*~IND + LIT*STB -> SURV 
M5: LIT*~IND + IND*STB -> SURV 
minimize(ttLF2, include = "?")

M1: ~DEV*LIT + DEV*STB -> SURV 
M2: ~DEV*LIT + LIT*STB -> SURV 
M3: DEV*STB + LIT*~IND -> SURV 
M4: LIT*~IND + LIT*STB -> SURV 
M5: LIT*~IND + IND*STB -> SURV 

The solution models are again identical, which is to be expected since the PI charts are also identical:

minimize(ttLF1, explain = "1, C", include = "?")$PIchart

           5  6  22 24 32
~DEV*LIT   x  x  -  -  - 
DEV*~IND   -  -  x  -  - 
DEV*STB    -  -  x  x  x 
URB*STB    -  -  -  -  x 
LIT*~IND   x  x  x  -  - 
LIT*STB    -  x  x  x  x 
IND*STB    -  -  -  x  x 
minimize(ttLF2, include = "?")$PIchart

           5  6  22 24 32
~DEV*LIT   x  x  -  -  - 
DEV*~IND   -  -  x  -  - 
DEV*STB    -  -  x  x  x 
URB*STB    -  -  -  -  x 
LIT*~IND   x  x  x  -  - 
LIT*STB    -  x  x  x  x 
IND*STB    -  -  -  x  x 

This proves that explaining contradictions is meaningless, because it leads to exactly the same solutions as lowering the inclusion cut-off. It is an important observation, leaving the argument explain with only one logically possible value (positive output configurations), which makes it redundant.

The argument survived through different versions of the software, since the beginnings of crisp set QCA, when fuzzy sets and consistency scores had not yet been introduced. But once consistency scores made their way into the truth table, this argument should have been abandoned, for, as it turns out, no output other than the positive one makes any logical sense to explain. And since "1" is the default value, the argument is not even explicitly mentioned among the command’s formals.

Figure 8.7: The “Include” argument in the minimization dialog

In the graphical user interface, the argument include is located in the middle part of the minimization dialog. Although specific to the function minimize(), it is placed immediately to the right of the (greyed out) truth table options, which seems logical since it refers to the output values in the truth table.

8.4 The Parsimonious Solution

A parsimonious solution is a more simplified but equivalent expression, compared to the complex solution. It is obtained by employing a less conservative approach to the empirical evidence and including the remainders in the minimization process. Before delving into the technical details of this solution, some preliminary considerations need to be made about the problem of limited diversity and the different strategies to deal with it in social research methodology.

Conventional statistical analysis manages to produce rather accurate predictions for the values of the dependent variable, even in the absence of empirical information. It does so by assuming that the information drawn from the existing evidence can be extrapolated to empty areas where no such evidence exists. The simplest example is the linear regression model, where all predicted values are deterministically located on the regression line.

Things are rather clear for a scatterplot of the relation between an independent and the dependent variable, and even for a 3D scatterplot with two independent variables versus the dependent one, the prediction being made on the regression (hyper)plane. It is easy to visualize where the points (the empirical data) are, and where there are no points on which to base predictions.

Once the number of independent variables grows, visualizing empty areas becomes much more difficult, and the inference is usually accepted as a natural expansion from the simple 2D or 3D examples to a \(k\)-dimensional cube (vector space), with some strong assumptions such as multivariate normality.

But it is a fact, as Ragin and Sonnett (2008) eloquently point out, that social reality is very limited in its diversity. This should not be confused with a limited number of cases (the small-N or medium-N research situation); it actually refers to a limited number of empirically observed configurations in the truth table. It does not matter how large the data is (how many cases it has), if all of them cluster in a few such configurations.

Thompson (2011) analyzed the 1970 British Cohort Study (BCS70) in the UK, a large study with no fewer than 17,000 cases, and still ran into the problem of limited diversity. Whatever we empirically observe seems to cover a very small area of the entire vector space, and for the biggest part of this space researchers either make strong assumptions to extrapolate the model through statistical regularities, or engage in counterfactual analysis.

This aspect of limited diversity is well addressed in the QCA methodology, due to the very structure of the truth table, which not only guides researchers to think in terms of configurations but, more importantly, shows exactly how much (or how little) empirical evidence exists in the property space. Unlike the quantitative methodology, where areas empty of empirical evidence are covered by an estimated underlying distribution, QCA uses exclusively the existing evidence to map the property space and specifically exposes all unknown configurations.

The normal scientific way to deal with a lack of evidence (non-existent or incomplete data) is to perform experiments and produce the data needed to formulate theories. Science is driven by curiosity, and where such experiments are possible researchers engage in an active process of discovery, tweaking various input parameters and observing whether (and how much) the outcome changes.

But the social sciences are not experimental, researchers being unable to tweak the input parameters. Left with the observed empirical information at hand, they conduct thought experiments involving counterfactual analysis.

In the absence of direct evidence for critical cases to compare against, researchers often resort to the question “What if…?” and begin wondering what reality would have been like if things in the past had been different. Such counterfactual thinking can be traced back to Hume (1999, 146), whose definition of causation contains the following statement:

“… if the first object had not been, the second never had existed”.

This is a clear counterfactual statement, and the social sciences (especially in the qualitative research tradition) abound in such statements, particularly when the events being studied are rare, such as state revolutions, where the complex reality is reduced to an abstract concept (the ideal type) that does not exist in reality and for which there is no empirical evidence to claim its existence.

It seems that counterfactual thinking is not only desirable, but actually indispensable to advance social research. In QCA, all remainders are potential counterfactual statements that can be used to further minimize the observed configurations. The decision to include remainders in the minimization makes the implicit assumption that, should it be possible to empirically observe these configurations, they would have a positive output.

But this is a very serious assumption(!) that raises immediate question marks, mainly about how impossible configurations can contribute to a meaningful minimal solution, or how remainders that contradict our theory can contribute to a solution that confirms the theory. Such issues will be dealt with in the next sections.

Existing software makes it extremely easy to add remainders to the minimization process, and most likely (depending on how large the truth table is and how many empirical configurations exist) the solutions will be less complex.

There is a strong temptation to rush and find the simplest possible (most parsimonious) solution, indiscriminately throwing everything into the minimization routine, somewhat like grabbing all available ingredients from a kitchen and lumping everything into the oven to make food. But not all ingredients go well together, and not all of them can be cooked in the oven.

Fortunately, the “food” can be inspected post hoc, after the minimization, but for the moment it is good to at least be aware of the situation. Either before the minimization, or after it (or, even better, both before and after), the structure of the remainders should be constantly monitored.

For the pure parsimonious solution, the argument of interest in the function minimize() has already been introduced: it is include, specifying which of the other output values ("?" and "C") are included in the minimization process. The Quine-McCluskey algorithm treats the remainders as having the same output value as the explained configurations, but they are not used to construct the PI chart, since remainders are not primitive expressions.

Having a truth table object available, the simplest form of the minimization command is:

ttLF <- truthTable(LF, outcome = SURV, incl.cut = 0.8, show.cases = TRUE)
minimize(ttLF, include = "?")

M1: DEV*~IND + URB*STB -> SURV

This command outputs just the most parsimonious solution, without any other details about the parameters of fit. By default, negated (absent) conditions are signaled with a tilde, as in ~IND. Choosing to also print the details, the command and the output change to:

minimize(ttLF, include = "?", details = TRUE)

M1: DEV*~IND + URB*STB -> SURV

             inclS   PRI   covS   covU   cases 
---------------------------------------------------- 
1  DEV*~IND  0.815  0.721  0.284  0.194  FI,IE 
2   URB*STB  0.874  0.845  0.520  0.430  BE,CZ,NL,UK 
---------------------------------------------------- 
         M1  0.850  0.819  0.714 

In the table containing the parameters of fit, there are two sufficient conjunctions, each covering part of the empirically observed positive configurations. They are displayed with a logical OR relation, either one being sufficient for some of the positive configurations, but both are needed to cover all of them.

Both are highly consistent sufficient expressions but their cumulated unique coverage is lower, suggesting there is some more space in the outcome set that remains unexplained.

Figure 8.8: Including remainders using the minimization GUI dialog

In the dialog from the graphical user interface, the options being selected accurately reflect the written minimization command: the remainders "?" are selected, and the checkboxes show details and use tilde are checked. A notable aspect is the display of the cases in the parameters of fit table, this option being inherited from the truth table used as input for the minimization.

Below the separating line there are some other options. The depth option has two sets of counters: one for the maximum number of conditions in conjunctive prime implicants, and the other for the maximum number of prime implicants in disjunctive solutions. At the default value of 0, it signals an exhaustive search for all possible PIs and all possible solutions. The solution depth is useful only when maximal solutions is checked, and/or when the consistency threshold for the solution is lower than 1.

The other two options on the left side are specific to solving the prime implicants chart. The row dominance option eliminates irrelevant prime implicants from the PI chart, before finding the minimal solutions, if they are covered by other prime implicants. Finally, maximal solutions finds all possible non-redundant disjunctions of prime implicants that cover all initial, positive truth table configurations, even if not minimal. It is mainly used to mimic CNA, discussed in section 10.2.
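A sketch of how the last two dialog options map onto the written command, using argument names taken from the minimize() signature shown in section 8.1 (output omitted):

minimize(ttLF, include = "?", row.dom = TRUE, all.sol = TRUE, details = TRUE)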

8.5 A Note on Complexity

Describing the Standard Analysis procedure, C. Schneider and Wagemann (2012, 161) introduce three dimensions to classify the types of solutions obtained using different logical remainders:

  1. set relation
  2. complexity
  3. type of counterfactuals

Having analyzed all possible combinations of remainders from their truth table to include in the minimization, they meticulously show that the parsimonious solution is the least complex one, and that it is a superset not only of the complex solution but also of all other solutions using various remainders.

Defining the complexity of a solution “… by the number of conditions and the logical operators AND and OR that it involves …”, they advise refraining from using the alternative name of “complex” solution for the conservative one, because some of the solutions involving remainders are even more complex than the complex one (which does not make any assumptions about, and does not include, any remainders).

While certainly valid from a logical point of view, Schneider & Wagemann’s conclusions are not quite right from an objective point of view. Their truth table can be replicated with the following commands:

SW <- data.frame(A = c(0, 0, 0, 1, 1), B = c(0, 1, 1, 0, 1),
                 C = c(1, 0, 1, 1, 0), Y = c(1, 0, 0, 1, 1))
ttSW <- truthTable(SW, outcome = Y, complete = TRUE)
ttSW

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

    A  B  C    OUT    n  incl  PRI  
1   0  0  0     ?     0    -     -  
2   0  0  1     1     1  1.000 1.000
3   0  1  0     0     1  0.000 0.000
4   0  1  1     0     1  0.000 0.000
5   1  0  0     ?     0    -     -  
6   1  0  1     1     1  1.000 1.000
7   1  1  0     1     1  1.000 1.000
8   1  1  1     ?     0    -     -  

The row numbers are not the same, their ordering being a bit different from that in package QCA, where they represent the conversion from base 2 to base 10, but all configurations are identical: the three positive ones (001, 101, 110), the two negative ones (010, 011) and the three remainders (000, 100, 111).

For three remainders, there are eight possible ways to use them in the minimization process, from none (the equivalent of the conservative solution) to all (leading to the parsimonious solution). Schneider & Wagemann then show that their solution (e), which includes only the remainder 000, leads to the most complex solution: \(\sim\)A\(\cdot{\sim}\)B + \(\sim\)B\(\cdot\)C + A\(\cdot\)B\(\cdot{\sim}\)C.

However, the actual solution using that remainder does not seem to entirely coincide:

ttSW <- truthTable(SW, outcome = Y, exclude = c(5,8), complete = TRUE)
pSW <- minimize(ttSW, include = "?")
pSW

M1: ~B*C + A*B*~C <-> Y

The truth table has been altered to hardcode remainders 5 and 8 to an output value of 0 (forcing their exclusion from the minimization), which means only one remainder is left to include, namely row number 1, or 000.

But it is obvious that this solution is not the same as Schneider & Wagemann’s. This most likely happened because they tried to explain the remainder (which does not make sense), instead of just including it in the minimization. That affected the PI chart, which is supposed to have only as many columns as the initial number of primitive expressions (in this situation, three).

Consequently, their PI chart contains an additional column:

        1  2  6  7 
~A*~B   x  x  -  - 
~B*C    -  x  x  - 
A*B*~C  -  -  -  x 

while the correct PI chart renders \(\sim\)A\(\cdot{\sim}\)B as irrelevant (it is dominated):

pSW$PIchart

         2  6  7 
~A*~B    x  -  - 
~B*C     x  x  - 
A*B*~C   -  -  x 

The purpose of this demonstration is less to point the finger at a reasonable and easy to make error, and more to restate that, as it turns out, no other solution can be more complex than the conservative one. At best, other solutions can be equally complex; therefore, the word “complex” does not always identify the conservative solution. I would also advocate using the word “conservative”, for the same stated, and more important, reason that it does not make any assumptions about the remainders.

8.6 Types of counterfactuals

This is one of the most information-intensive sections of the entire analysis of sufficiency, and at the same time the pinnacle of all the information presented so far. Everything that has been introduced up to this point, plus much more, is extensively used in what is about to follow.

While things like constructing a truth table or performing the logical minimization are relatively clear, this section contains a whole host of additional concepts, many of which overlap in both meaning and dimensionality. Users who possess at least a minimal understanding of what QCA is used for have undoubtedly heard about remainders, directional expectations, different types of counterfactuals (easy and difficult, impossible, implausible, incoherent), simplifying assumptions, contradictory simplifying assumptions, tenable and untenable assumptions, all within the context of Standard Analysis, Enhanced Standard Analysis, Theory-Guided Enhanced Standard Analysis etc.

Without a deep understanding of each such concept, they tend to gravitate around in a spinning motion that can provoke a serious headache. It is therefore mandatory to have a thorough introduction to these concepts before using the R code in package QCA. Most of them are already discussed and properly introduced in multiple other places, see Ragin and Sonnett (2005), Ragin (2008b), C. Schneider and Wagemann (2012), and C. Q. Schneider and Wagemann (2013), to name the most important ones. Some of these concepts are going to be deepened in chapter 10, therefore the beginning of this section is not meant to replace all of those important readings but mainly to facilitate their understanding.

It should be clear, however, that all of these concepts are connected to the remainders and to how they are included, filtered or even excluded from the minimization process to obtain a certain solution.

In this book, the terms “counterfactuals” and “remainders” are used interchangeably. They are synonyms and refer to one and the same thing: causal configurations that, due to the issue of limited diversity in social phenomena, have no empirical evidence. They are unobserved configurations which, if by any chance they were observed, could contribute in the minimization process to a more parsimonious solution.

In the previous section 8.5, I showed that the parsimonious solution is the least complex one and that no other solution is more complex than the conservative one, leading to the following Complexity rule:

Any remainder included in the minimization process can only make the solution simpler. Never more complex, always more and more parsimonious.

In the limit, when all remainders are included in the minimization, the final solution is “the” most parsimonious solution.

Figure 8.9: The truth table composition

Figure 8.9 is yet another attempt to classify the remainders based on the current theory, using a linear approach and various vertical separators. The first delimitation is made between the empirically observed configurations and the remainders (counterfactuals). Among those empirically observed, some configurations have a positive output, while others have a negative output (for simplicity, the figure assumes there are no contradictions).

Among the remainders on the right hand side of the figure:

  • some are simplifying assumptions and some are not (simplifying meaning that they contribute to the logical minimization)
  • the simplifying assumptions form a set composed of the easy counterfactuals and the difficult counterfactuals
  • remainders which are not simplifying assumptions can be simply good counterfactuals (raising no logical difficulties, but not contributing to the logical minimization), or untenable assumptions
  • easy counterfactuals are also good counterfactuals, but only some are also part of the tenable assumptions
  • some of the difficult counterfactuals are also tenable, while others are untenable, etc.

The meaning of all of these is going to be gradually unfolded. They are all connected to the moment when QCA theoreticians realized that not all remainders can be used as counterfactuals.

A very simple example is the relation between the presence of the air (as a trivial necessary condition) and the existence of a big fire in a city. Undoubtedly, the air is not a cause for a fire, but on the other hand a fire without air is impossible in normal situations.

There are numerous non-trivial causes for a fire, and a researcher might select 5 or 6 most commonly observed conditions (among which the necessary air). Due to the limited diversity, many of the 64 logically possible configurations are not empirically observed, and they could be used as counterfactuals for the parsimonious solution.

The problem is that many of the remainders also contain the absence of air.

According to the Complexity rule, any remainder has the potential to make the solution less and less complex, including those where the air is absent. But such remainders are difficult counterfactuals, despite making the solution simpler, since they contradict all our knowledge about how a fire is maintained.

For any given phenomenon (outcome) of interest, there is some established theoretical corpus that guides the research and offers a number of hints about how causal conditions should contribute to the presence of that outcome. These can be called directional expectations.

We expect air to be present when a fire is produced, therefore all remainders that do not conform to this expectation are difficult. To put it differently, it would be very difficult to explain how we arrived at a minimal explanation (solution) about the presence of fire by involving, in the minimization, remainders associated with the absence of air.

For this reason, out of all possible remainders in the truth table, a sensible decision would be to filter out those which contradict our theory. By contrast, those remainders which are accepted in the minimization process are called easy counterfactuals, because they are in line with our current knowledge.

With a good part of the remainders left out of the minimization, it is clear that the final solution will not be the most parsimonious one. It will be less complex compared with the conservative solution, but more complex than the parsimonious one. This is a key idea presented by Ragin and Sonnett (2005), who introduced the concepts of easy and difficult counterfactuals in QCA.

Later, Ragin (2008b) wrapped up the whole procedure into what is today known as the Standard Analysis, which produces three types of solutions: conservative, intermediate and parsimonious.

To reach the parsimonious solution, all remainders are included in the minimization process. But not all of them really contribute to the minimization: some of the pairs being compared differ by more than one literal, hence they do not produce implicants for the next iterations. Consequently, part of the remainders are never used in the minimization. Those remainders that do help produce prime implicants are called Simplifying Assumptions and, as shown in figure 8.9, they are composed of the Easy Counterfactuals plus the Difficult Counterfactuals:

\[SA = EC + DC\]

This is as far as the Standard Analysis goes, separating the simplifying assumptions from all remainders, and differentiating between easy and difficult counterfactuals. This model would be extremely effective were it not for the red area located right in the middle of the easy counterfactuals segment. It is labeled UA, part of a category identified by C. Q. Schneider and Wagemann (2013) and dubbed the Untenable Assumptions.

Many things can be Untenable Assumptions:

  • logical impossibilities (the famous example of the pregnant man);
  • contradictory simplifying assumptions that end up being sufficient for both the outcome and its negation (Yamasaki and Rihoux 2009);
  • a most interesting category that combines the analysis of sufficiency with the analysis of necessity.

The latter category is part of the incoherent counterfactuals, and makes full use of the mirror effect between necessity and sufficiency: when a condition is identified as necessary for an outcome Y, it is a superset of the outcome. The negation of the condition, therefore, cannot be assumed to be sufficient for the outcome, for it would need to be a subset of the outcome (to be sufficient).

But since the condition itself is bigger than the outcome set, what is outside the condition (\(\sim\)X) is by definition outside the outcome set: it has no inclusion in the outcome (it would be illogical to think otherwise), as it can be seen in figure 8.10 below.

Figure 8.10: \(\sim\)X \(\not\rightarrow\) Y (left, \(\sim\)X not a subset of Y), while X \(\rightarrow\) Y (right, X is a subset of Y)

This is one of the key aspects put forward by C. Q. Schneider and Wagemann (2013), and wrongly contested by some QCA users (Thiem 2016), as shown in section 5.5.

Indeed, when a condition (more generally a phenomenon) is necessary for an outcome, its negation cannot be at the same time sufficient for the same outcome, therefore any counterfactual containing the negation of a necessary condition should be eliminated from the minimization process.

This is the starting point of the Enhanced Standard Analysis (ESA), an extension of the Standard Analysis that further eliminates the untenable assumptions from the entire minimization process: not only when creating intermediate solutions but, more importantly, when deriving the most parsimonious solution(s) as well.

But C. Q. Schneider and Wagemann (2013) did not stop at ESA and went even further, formulating what is now known as TESA (Theory-Guided Enhanced Standard Analysis), which expands the inclusion of remainders by formulating so-called conjunctural directional expectations (see section 8.8).

The difference from simple directional expectations is that such expectations are formulated not only for atomic conditions, but for entire conjunctions of conditions. Something like: we expect a fire to appear when there is air AND a sparkle AND inflammable materials around (A\(\cdot\)S\(\cdot\)I \(\rightarrow\) F).

There is still a need to prove the usefulness of using conjunctural directional expectations, since a similar result can be obtained by formulating atomic directional expectations, not necessarily in conjunction:

  • we expect a fire when there is air (A \(\rightarrow\) F)
  • we expect a fire when there is a sparkle (S \(\rightarrow\) F)
  • we expect a fire when there are inflammable materials around (I \(\rightarrow\) F)

The example provided by Schneider and Wagemann, which is going to be replicated in the next section, suffers from the same benign (but still real) error of explaining these remainders, thereby changing the PI chart in the process. Contrary to their findings, the actual solution is equal to the conservative one; therefore, in the absence of a more concrete example, it is unclear what TESA brings to the table.

On the other hand, ESA is a welcome addition to the QCA methodological field, and I fully agree with its creators that implausible counterfactuals (especially those which contain the negation of a necessary condition) should be removed from the minimization process.

Finally, this section should not be closed without discussing yet another type of truth table row that can be excluded from the minimization. These are not remainders but empirically observed configurations, and they do not belong to the counterfactual analysis. The property based on which we might eliminate such observed rows is called simultaneous subset relations.

Some of these observed configurations might pass the sufficiency threshold for both the presence and the absence of the outcome, which renders them incoherent. Although they are empirically observed, one possible decision is to remove them too from the minimization. This will of course change the final solution, but at least it will be based on firmly and coherently consistent observed configurations.
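Assuming the type argument of the findRows() function (used in section 8.7 with type = 2 for contradictory simplifying assumptions) also reserves a value for simultaneous subset relations, such rows could presumably be identified along these lines:

findRows(obj = ttLF, type = 3)   # assumed: type = 3 flags simultaneous subset relations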

There are also the true logical contradictory cases (C. Schneider and Wagemann 2012, 127), which have an inclusion in the configuration higher than 0.5 but an inclusion in the outcome lower than 0.5. More recently, C. Q. Schneider and Rohlfing (2013, 585) prefer to call these deviant cases consistency in kind, so the QCA terminology is not only expanding but also overlapping. A bit confusing, but hopefully this section has made some helpful clarifications.

8.7 Intermediate solutions: SA and ESA

No fewer than three arguments are available when referring to the remainders: include, exclude and dir.exp. They should be self-explanatory: the first specifies what to include in the minimization, the second what to exclude from it, and the third the directional expectations.

Assuming the reader didn’t go through all previous code examples where multiple truth tables have been created, we will again use the fuzzy version of the Lipset data. The very first step is to create and inspect the truth table:

data(LF)
ttLF <- truthTable(LF, SURV, incl.cut = 0.8, show.cases = TRUE,
                   sort.by = "OUT, n")

The function minimize() allows initiating a direct minimization process on the dataset, bypassing the truth table, and there is a strong temptation to do so, but I would warmly recommend producing a truth table first.

The truth table looks like this:

ttLF

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency

     DEV URB LIT IND STB   OUT    n  incl  PRI   cases      
32    1   1   1   1   1     1     4  0.904 0.886 BE,CZ,NL,UK
22    1   0   1   0   1     1     2  0.804 0.719 FI,IE      
 1    0   0   0   0   0     0     3  0.216 0.000 GR,PT,ES   
 2    0   0   0   0   1     0     2  0.278 0.000 IT,RO      
 5    0   0   1   0   0     0     2  0.521 0.113 HU,PL      
24    1   0   1   1   1     0     2  0.709 0.634 FR,SE      
 6    0   0   1   0   1     0     1  0.529 0.228 EE         
23    1   0   1   1   0     0     1  0.378 0.040 AU         
31    1   1   1   1   0     0     1  0.445 0.050 DE         

With 5 causal conditions, there are 32 rows in the truth table, out of which 2 are positive output configurations, 7 are negative output configurations, and 23 are remainders. The truth table is sorted first by the values of the OUT column (in descending order), then by the values of the frequency column n. The consistency scores are left unsorted, but this structure emulates C. Q. Schneider and Wagemann (2013).

Note the row numbers are also changed, due to the sorting choices. But each configuration, remainders included, has a unique row number and that is going to prove very useful when deciding which to include and which to remove from the minimization process.

Before proceeding to the minimization, it is a good idea to check if there are deviant cases consistency in kind, in the truth table, using the argument dcc:

truthTable(LF, SURV, incl.cut = 0.8, show.cases = TRUE, dcc = TRUE, 
           sort.by = "OUT, n")

  OUT: output value
    n: number of cases in configuration
 incl: sufficiency inclusion score
  PRI: proportional reduction in inconsistency
  DCC: deviant cases consistency

     DEV URB LIT IND STB   OUT    n  incl  PRI   DCC     
32    1   1   1   1   1     1     4  0.904 0.886         
22    1   0   1   0   1     1     2  0.804 0.719         
 1    0   0   0   0   0     0     3  0.216 0.000 GR,PT,ES
 2    0   0   0   0   1     0     2  0.278 0.000 IT,RO   
 5    0   0   1   0   0     0     2  0.521 0.113 HU,PL   
24    1   0   1   1   1     0     2  0.709 0.634         
 6    0   0   1   0   1     0     1  0.529 0.228 EE      
23    1   0   1   1   0     0     1  0.378 0.040 AU      
31    1   1   1   1   0     0     1  0.445 0.050 DE      

The DCC cases are all associated with the negative output configurations, which is good. As long as they are not associated with the positive output configurations, everything seems to be alright.

The “pure” parsimonious solution is produced, as always, by specifying the question mark ("?") in the include argument:

pLF <- minimize(ttLF, include = "?", details = TRUE, show.cases = TRUE)
pLF

M1: DEV*~IND + URB*STB -> SURV

             inclS   PRI   covS   covU   cases 
---------------------------------------------------- 
1  DEV*~IND  0.815  0.721  0.284  0.194  FI,IE 
2   URB*STB  0.874  0.845  0.520  0.430  BE,CZ,NL,UK 
---------------------------------------------------- 
         M1  0.850  0.819  0.714 

For this solution, the assumption is that all remainders contribute equally to parsimony, despite the fact that not all of them have been used by the minimization algorithm. The simplifying assumptions (the set of remainders actually contributing to the minimization process) can be seen by inspecting the component SA of the newly created object:

pLF$SA
$M1
   DEV URB LIT IND STB
10   0   1   0   0   1
12   0   1   0   1   1
14   0   1   1   0   1
16   0   1   1   1   1
17   1   0   0   0   0
18   1   0   0   0   1
21   1   0   1   0   0
25   1   1   0   0   0
26   1   1   0   0   1
28   1   1   0   1   1
29   1   1   1   0   0
30   1   1   1   0   1

Out of the 23 remainders, only 12 have been used; the rest did not contribute to the minimization at all. These SAs might contain both easy and difficult counterfactuals, as will be seen later. For the moment, the recommended procedure is to continue preparing for the intermediate solution by verifying the existence of contradictory simplifying assumptions.

This can be done in two ways. The first and most intuitive is to produce another minimization, this time on a truth table for the negation of the outcome (using the same inclusion cut-off), assign it to an object pLFn, and check its simplifying assumptions component:

ttLFn <- truthTable(LF, outcome = ~SURV, incl.cut = 0.8)
pLFn <- minimize(ttLFn, include = "?")
pLFn$SA
$M1
   DEV URB LIT IND STB
3    0   0   0   1   0
4    0   0   0   1   1
7    0   0   1   1   0
8    0   0   1   1   1
9    0   1   0   0   0
10   0   1   0   0   1
11   0   1   0   1   0
12   0   1   0   1   1
13   0   1   1   0   0
14   0   1   1   0   1
15   0   1   1   1   0
16   0   1   1   1   1
17   1   0   0   0   0
19   1   0   0   1   0
21   1   0   1   0   0
25   1   1   0   0   0
27   1   1   0   1   0
29   1   1   1   0   0

Since some of the rows are present in both matrices, this proves there are indeed contradictory simplifying assumptions. To identify exactly which ones appear in both:

intersect(rownames(pLF$SA$M1), rownames(pLFn$SA$M1))
[1] "10" "12" "14" "16" "17" "21" "25" "29"

According to the Enhanced Standard Analysis, these CSAs should be avoided and not included in the minimization process. This identification method works when there is a single solution for both the presence and the absence of the outcome. If multiple solutions exist, the component SA contains the simplifying assumptions for each solution, some of them duplicated since they contribute to more than one solution; in that case the row names have to be collected across all models before intersecting, as shown below.
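A minimal sketch of that collection, assuming the objects pLF and pLFn created above (rowsLF and rowsLFn are helper objects introduced here purely for illustration):

# gather the SA row names from every model, then intersect
rowsLF  <- unique(unlist(lapply(pLF$SA,  rownames)))
rowsLFn <- unique(unlist(lapply(pLFn$SA, rownames)))
intersect(rowsLF, rowsLFn)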

A second and more straightforward way to check for contradictory simplifying assumptions is to use the built-in function findRows() which takes care of all these details. The input for this function is either one of the two truth tables, usually the one for the presence of the outcome:

findRows(obj = ttLF, type = 2)
[1] 10 12 14 16 17 21 25 29

The conclusion is the same, this function being more integrated and saving users from typing additional commands. What to do with these CSAs will be shown later; for the time being, the next step in the Standard Analysis is to specify directional expectations in order to construct intermediate solutions.

For the sake of simplicity, we will assume it is the presence of each causal condition that leads to the survival of democracy (the presence of the outcome). These expectations are specified using the argument dir.exp:

iLF <- minimize(ttLF, include = "?", dir.exp = "DEV, URB, LIT, IND, STB")

Older versions of the package QCA used a different specification, in the form dir.exp = "1,1,1,1,1". This is still supported for backwards compatibility, but the SOP (sum of products) notation is now recommended because it allows not only simple but also conjunctural directional expectations, a subject discussed in section 8.8.

Directional expectations can also employ so-called “don’t cares”, that is, causal conditions for which it does not matter whether they contribute through their presence or their absence, as long as they help with the minimization process.

Such don’t cares are simply left out of the SOP expression. For instance, if there is no directional expectation for the condition LIT, the argument would be specified as dir.exp = "DEV, URB, IND, STB".
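Assuming the same truth table ttLF as above, such a call would look like the following (shown only to illustrate the syntax; the resulting solution is not reproduced here):

# no directional expectation for LIT (treated as a "don't care")
minimize(ttLF, include = "?", dir.exp = "DEV, URB, IND, STB")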

Using SOP expressions is also very useful for multi-value sets, where directional expectations can take multiple values. The Lipset data has a multi-value version, in which the first causal condition has 3 values. For an assumption that both values 1 and 2 of DEV lead to the presence of the outcome, the directional expectations would be specified as dir.exp = "DEV[1,2], URB[1], LIT[1], IND[1], STB[1]".
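A hedged sketch of a complete call, assuming the multi-value version of the Lipset data is available as LM, as in the package’s companion datasets (the dataset name and the 0.8 inclusion cut-off are assumptions carried over from the fuzzy-set examples):

# truth table and intermediate solution for multi-value conditions
ttLM <- truthTable(LM, outcome = SURV, incl.cut = 0.8)
minimize(ttLM, include = "?",
         dir.exp = "DEV[1,2], URB[1], LIT[1], IND[1], STB[1]")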

Specifying directional expectations using SOP notation

Figure 8.11: Specifying directional expectations using SOP notation

In the graphical user interface, the directional expectations become available as soon as the button ? is activated, for each causal condition selected when the truth table was produced. Returning to the intermediate solution, it is:

iLF

From C1P1: 

M1:    DEV*URB*LIT*STB + DEV*LIT*~IND*STB -> SURV 

This is a subset of the most parsimonious solution, and a superset of the conservative solution. The term DEV\(\cdot\)URB\(\cdot\)LIT\(\cdot\)STB is a subset of URB\(\cdot\)STB, and DEV\(\cdot\)LIT\(\cdot\)\(\sim\)IND\(\cdot\)STB is a subset of DEV\(\cdot\)\(\sim\)IND. At the same time, both terms are supersets of the corresponding terms of the conservative solution:

minimize(ttLF)

M1: DEV*URB*LIT*IND*STB + DEV*~URB*LIT*~IND*STB -> SURV

To print the parameters of fit, the argument details = TRUE should be specified in the call to minimize(); alternatively, since the object has a dedicated printing function, a command such as print(iLF, details = TRUE) is also possible.

The intermediate solution is always in the middle between the conservative and the parsimonious solutions, both in terms of complexity (it is less complex than the conservative solution, but more complex compared to the parsimonious one) and also in terms of set relations.

Ragin and Sonnett (2005) explain in detail how to derive these intermediate solutions, based on the comparison between the conservative (complex) and the parsimonious solutions. Their procedure is implemented in the package QCA using the prime implicant matrices of both solutions, combining them according to the directional expectations in order to select those which ultimately generate the intermediate solutions.

The intermediate solution sits halfway between the conservative and the parsimonious ones because fewer remainders end up being used in the minimization process, as a result of the filtering by directional expectations. Those which do contribute to producing the intermediate solution are the easy counterfactuals; the others are the difficult counterfactuals.

The resulting object from the minimization function is a list with many components. The one referring to the intermediate solutions is called i.sol:

names(iLF)
 [1] "tt"         "options"    "negatives"  "initials"   "PIchart"   
 [6] "primes"     "solution"   "essential"  "inputcases" "pims"      
[11] "IC"         "numbers"    "SA"         "i.sol"      "complex"   
[16] "call"      

The i.sol component is also a list with multiple possible subcomponents:

names(iLF$i.sol)
[1] "C1P1"

In this example, it contains only one subcomponent, the pair of the (first) conservative and the (first) parsimonious solutions. Sometimes the minimization produces multiple conservative and multiple parsimonious solutions, in which case there will be more subcomponents: C1P2, C2P1, C2P2 etc.

For each pair of conservative and parsimonious solutions, one or more intermediate solutions are produced within the corresponding subcomponent, which is yet another object itself containing the following sub-sub-components:

names(iLF$i.sol$C1P1)
 [1] "EC"        "DC"        "solution"  "essential" "primes"   
 [6] "PIchart"   "c.sol"     "p.sol"     "IC"        "pims"     

This is where all the relevant information is located: the conservative solution c.sol, the parsimonious solution p.sol, a dedicated PI chart in the subcomponent PIchart, as well as what we are most interested in: which of the simplifying assumptions are easy (EC) and which are difficult (DC).
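Before turning to the counterfactuals, the dedicated PI chart and the embedded parsimonious solution for this pair can be inspected directly (output omitted here):

# components of the first conservative / parsimonious pair
iLF$i.sol$C1P1$PIchart
iLF$i.sol$C1P1$p.sol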

iLF$i.sol$C1P1$EC
   DEV URB LIT IND STB
30   1   1   1   0   1
iLF$i.sol$C1P1$DC
   DEV URB LIT IND STB
10   0   1   0   0   1
12   0   1   0   1   1
14   0   1   1   0   1
16   0   1   1   1   1
17   1   0   0   0   0
18   1   0   0   0   1
21   1   0   1   0   0
25   1   1   0   0   0
26   1   1   0   0   1
28   1   1   0   1   1
29   1   1   1   0   0

Out of the 12 simplifying assumptions, only one is consistent with the specified directional expectations (the easy counterfactual on row number 30); the rest have been filtered out, being identified as difficult.

This is as far as the Standard Analysis goes: with respect to the intermediate solutions, it is only a matter of specifying directional expectations. From here on, it is all about the Enhanced Standard Analysis, which states that many of the remainders (sometimes including some of the easy counterfactuals from the intermediate solution) are incoherent, implausible or even impossible, hence any assumption using such remainders is untenable.

Applying directional expectations performs a comparable operation, filtering out some of the remainders (the difficult counterfactuals). But filtering based on directional expectations is a mechanical procedure that cannot possibly identify incoherent counterfactuals.

What is called “incoherent” makes sense only from a human, interpretive perspective; for example, it is impossible for an algorithm to detect an impossible counterfactual such as the “pregnant man”, unless it is an artificial intelligence (and we are not there just yet).

Identifying untenable assumptions is therefore a human activity, and it is a task for the researcher to tell the minimization algorithm not to use those remainders identified as untenable. Excluding this kind of remainders from the minimization process ensures a set of solutions that are not only minimally complex, but also logically and theoretically coherent.

What is about to be described next is highly important: given that some of the remainders (the untenable ones) have been removed from the minimization, it is obvious the minimal solution will not be as parsimonious as the “most parsimonious” solution including all possible remainders. It will be a bit more complex, but still less complex than the conservative one.

It is called the enhanced parsimonious solution (EPS) to differentiate it from the “most” parsimonious solution which is almost always useless, being derived using not only difficult counterfactuals but also untenable assumptions.

Sometimes the enhanced parsimonious solution can be identical to the intermediate solution, which is why many users confuse the two. But this is a special situation, occurring when directional expectations happen to filter out the untenable remainders along with the difficult counterfactuals; since this is not guaranteed, the two should not be confused.

The result of the minimization process using a truth table that excludes the untenable assumptions is the enhanced parsimonious solution, which can subsequently be used to derive yet another intermediate solution (this time based on ESA), called the enhanced intermediate solution (EIS).

It goes without saying that all remainders which are tenable remain included in the minimization to obtain the EPS; no matter how many remainders are excluded, the result will always be a superset of the conservative solution.

A few examples will illustrate this. Earlier, some contradictory simplifying assumptions were identified using the function findRows(). Since these are incoherent counterfactuals, they should be excluded from the minimization process.

CSA <- findRows(obj = ttLF, type = 2)
ttLFe <- truthTable(LF, SURV, incl.cut = 0.8, exclude = CSA)
minimize(ttLFe, include = "?")

M1: DEV*URB*STB + DEV*~IND*STB -> SURV

This is the EPS, the most parsimonious solution that can be obtained from the remainders still available after excluding the CSAs. The argument which makes it possible is exclude, the third argument referring to the remainders, after include and dir.exp.

This solution is also a superset of the conservative solution. In fact, all other solutions are supersets of the conservative one or, the other way around, the conservative solution is a subset of all other possible solutions. At the same time, this EPS is a superset of all possible enhanced intermediate solutions (EISs) based on the easy counterfactuals remaining after excluding the CSAs. The complexity ordering also holds, from the conservative solution all the way down to the EPS.

Applying directional expectations leads to the EIS, which in this case is identical to the normal intermediate solution:

eiLF <- minimize(ttLFe, include = "?", dir.exp = "DEV, URB, LIT, IND, STB")
eiLF

From C1P1: 

M1:    DEV*URB*LIT*STB + DEV*LIT*~IND*STB -> SURV 

It is the same intermediate solution because it uses the same easy counterfactual:

eiLF$i.sol$C1P1$EC
   DEV URB LIT IND STB
30   1   1   1   0   1

The procedure is the same, regardless of which untenable assumptions have been identified. They are simply supplied via the argument exclude, and the result is one EPS or another, depending on which untenable assumptions have been excluded.

For example, other types of untenable assumptions are related to negated necessary conditions. In the analysis of necessity in chapter 5, the following necessary conditions were identified:

superSubset(LF, outcome = SURV, incl.cut = 0.9, ron.cut = 0.6)

                    inclN   RoN   covN  
--------------------------------------- 
1  STB              0.920  0.680  0.707 
2  LIT*STB          0.915  0.800  0.793 
3  DEV + URB + IND  0.903  0.704  0.716 
--------------------------------------- 

This replicates the example presented by C. Q. Schneider and Wagemann (2013), just using different column names. They identified LIT (literacy) and STB (government stability) as necessary conditions. Here, only STB appears on its own because LIT has a lower relevance score, but that does not matter: the conjunction LIT\(\cdot\)STB is necessary, which implies the atomic conditions are also necessary, since if the outcome is a subset of an intersection it is certainly a subset of each of the intersected sets.

The argument made by C. Q. Schneider and Wagemann (2013) was covered in the previous section and graphically illustrated in figure 8.10: when a condition is identified as necessary for the outcome, it is logically impossible for its negation to be sufficient for the outcome. In this case, the conjunction LIT\(\cdot\)STB is necessary and its negation (by De Morgan's law) is \(\sim\)LIT + \(\sim\)STB, which means that any remainder containing the negation of LIT or the negation of STB is incoherent.
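The negation itself can be obtained with the package’s negate() function, which works on SOP expressions:

# De Morgan negation of the necessary conjunction
negate("LIT*STB")    # yields the disjunction ~LIT + ~STB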

The GUI dialog to find incoherent configurations

Figure 8.12: The GUI dialog to find incoherent configurations

It is possible to identify all these incoherent counterfactuals, employing the function findRows() which has a dedicated dialog in the graphical user interface (figure 8.12), that is opened by selecting the menu:

Analyse / Incoherent configurations

It uses the default type = 1 to identify the remainders that are subsets of a given expression:

INCOH <- findRows("~LIT + ~STB + ~DEV*~URB*~IND", ttLF)
ttLFi <- truthTable(LF, SURV, incl.cut = 0.8, exclude = INCOH)
minimize(ttLFi, include = "?")

M1: URB*LIT*STB + DEV*LIT*~IND*STB -> SURV

This is the same solution presented by Schneider and Wagemann, and irrespective of what further directional expectations would be defined, the enhanced intermediate solution is the same because the incoherent counterfactuals have already been excluded from the minimization.

These two examples demonstrate how to use the argument exclude in order to perform ESA, the Enhanced Standard Analysis. How exactly this is made possible will be presented in the next section, but it is hopefully clear by now that ESA works by excluding any number of untenable assumptions via the argument exclude: all truth table rows that do not seem logical, coherent or tenable can be excluded, to obtain a parsimonious or an intermediate solution free of such assumptions.

In the graphical user interface, specifying the exclusion vector is trivially done by inserting the name of the object containing the row numbers.

Specifying the excluded configurations in the truth table dialog

Figure 8.13: Specifying the excluded configurations in the truth table dialog

One final type of incoherent configuration worth presenting refers to simultaneous subset relations. Just as with the remainders, the function findRows() can be used to identify this category of rows in the truth table. This time, however, it is not about remainders but about empirically observed configurations.

Sometimes an observed configuration has a consistency score above the threshold for both the presence and the absence of the outcome. Such a configuration is simultaneously a subset of the presence and of the absence of the outcome, something that is quite possible with fuzzy sets.

findRows(obj = ttLF, type = 3)
numeric(0)

For this particular dataset, there are no such simultaneous subset relations (type = 3 in the command above). Just as untenable remainders can be excluded from the minimization, so can observed configurations: the minimization can be run while excluding the rows with simultaneous subset relations. The minimization does not care whether a row is an observed configuration or a remainder; everything supplied via the argument exclude is equally excluded.
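Had any such rows been found, they could be excluded in exactly the same way; a hedged sketch, left commented out because findRows() returns an empty vector for this dataset (the object names SSR and ttLFs are only illustrative):

# SSR <- findRows(obj = ttLF, type = 3)
# ttLFs <- truthTable(LF, SURV, incl.cut = 0.8, exclude = SSR)
# minimize(ttLFs, include = "?")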

Before ending this section, note that the argument type has an additional possible value, 0, which finds all types of incoherent or untenable configurations at once, provided an expression is supplied for type 1:

ALL <- findRows("~LIT + ~STB", ttLF, type = 0)
ttLFa <- truthTable(LF, SURV, incl.cut = 0.8, exclude = ALL)
minimize(ttLFa, include = "?")

M1: DEV*URB*LIT*STB + DEV*LIT*~IND*STB -> SURV

The solution above is the enhanced parsimonious solution excluding all those untenable configurations. Interestingly, it is the same as the enhanced intermediate solution after excluding the contradictory simplifying assumptions and using directional expectations, so one way or another the solutions seem to converge.

Any other untenable assumptions that researchers identify through methods not covered by the function findRows() can be added to the numeric vector supplied via the argument exclude, using the usual function c(). The final enhanced parsimonious and intermediate solutions are then derived from whatever remainders are left.
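A hedged sketch of such a combination, where OTHER stands for additional row numbers identified through the researcher’s substantive knowledge (the row number used below is purely hypothetical, chosen only to illustrate the syntax):

OTHER <- 8    # hypothetical extra row, not the result of any analysis here
ttLFx <- truthTable(LF, SURV, incl.cut = 0.8, exclude = c(ALL, OTHER))
minimize(ttLFx, include = "?")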

8.8 Conjunctural directional expectations

8.9 Theory evaluation

Theory evaluation is vaguely similar to what quantitative research calls hypothesis testing: having empirical data available, it is possible to test a hypothesized relation between (say) an independent and a dependent variable or, more often, between a control and an experimental group.

QCA is also focused on the empirical side, although less on the actual (raw) data and more on the solutions obtained through the minimization process. Using Boolean logic, Ragin (1987, 118) showed how to intersect a minimization solution with a theoretical statement (expectation) about how an outcome is produced, and C. Schneider and Wagemann (2012) extended this approach using consistency and coverage scores.

In the QCA package, there are two main functions which can be used to create such intersections, both of which natively handle various negations. The first, intersection(), works exclusively on sum of products expressions (including those contained in minimization objects); the second, modelFit(), makes heavier use of the minimization objects and also calculates parameters of fit.

It is perhaps important to mark the difference and not confuse the function intersection() with the base R function called intersect() which performs set intersection but has nothing to do with QCA.

As a first example, it is possible to re-use some of the objects created in the previous section, but to make these examples self-contained they are going to be re-created here:

data(LF)
ttLF <- truthTable(LF, outcome = SURV, incl.cut = 0.8)
iLF <- minimize(ttLF, include = "?", dir.exp = "DEV, URB, LIT, IND, STB")
iLF

From C1P1: 

M1:    DEV*URB*LIT*STB + DEV*LIT*~IND*STB -> SURV 

Now suppose we have a strong theoretical expectation that democracy survives where a country is both developed and has a stable government. Such a hypothesis can be written as the expression DEV\(\cdot\)STB.

Using the function intersection(), we can see how this expectation is covered by the empirical minimization solution:

intersection(iLF, "DEV*STB")

E1-C1P1-1: (DEV*URB*LIT*STB + DEV*LIT*~IND*STB)*DEV*STB
  I1-C1P1-1: DEV*URB*LIT*STB + DEV*LIT*~IND*STB

In this intersection, it could be argued that the intermediate solution perfectly overlaps with our theory, since the conjunction DEV\(\cdot\)STB is found in both solution terms of the model.

In a similar fashion, it is also very interesting to see how the empirical solution overlaps with the negation of the theory or, as below, how the two negations intersect:

intersection(negate(iLF), negate("DEV*STB"))

E1: (~DEV + ~LIT + ~STB + ~URB*IND)(~DEV + ~STB)
  I1: ~DEV + ~STB

Note how both arguments passed to the function intersection() can be natively and intuitively negated, the function negate() automatically detecting whether its input is a character expression or a minimization object.

The intersection of the two negated inputs is an even closer match with the (negated) theoretical expectation. Things can be taken further still, not only by automatically calculating all possible intersections between the expressions and/or their negations, but also by including parameters of fit:

modelFit(model = iLF, theory = "DEV*STB")

M-C1P1
model:          DEV*URB*LIT*STB + DEV*LIT*~IND*STB
theory:         DEV*STB
model*theory:   DEV*URB*LIT*STB + DEV*LIT*~IND*STB
model*~theory:  -
~model*theory:  DEV*~LIT*STB + DEV*~URB*IND*STB
~model*~theory: ~DEV + ~STB

                     inclS   PRI   covS  
---------------------------------------- 
1   DEV*URB*LIT*STB  0.901  0.879  0.468 
2  DEV*LIT*~IND*STB  0.814  0.721  0.282 
3             model  0.866  0.839  0.660 
4            theory  0.869  0.848  0.824 
5      model*theory  0.866  0.839  0.660 
6     model*~theory    -      -      -   
7     ~model*theory  0.713  0.634  0.242 
8    ~model*~theory  0.253  0.091  0.295 
---------------------------------------- 

Just as in the case of QCA solutions, where multiple sufficient paths can be found (each potentially having multiple disjunctive solution terms), there can be alternative theories about a certain outcome. All such theories can be formulated disjunctively using the + operator; for example, if another theory about democratic survival states that industrialization alone ensures the survival of democracy, the two theories can be combined as DEV\(\cdot\)STB + IND.

modelFit(iLF, "DEV*STB + IND")

M-C1P1
model:          DEV*URB*LIT*STB + DEV*LIT*~IND*STB
theory:         DEV*STB + IND
model*theory:   DEV*URB*LIT*STB + DEV*LIT*~IND*STB
model*~theory:  -
~model*theory:  ~DEV*IND + ~URB*IND + ~LIT*IND + IND*~STB + DEV*~LIT*STB
~model*~theory: ~DEV*~IND + ~IND*~STB

                     inclS   PRI   covS  
---------------------------------------- 
1   DEV*URB*LIT*STB  0.901  0.879  0.468 
2  DEV*LIT*~IND*STB  0.814  0.721  0.282 
3             model  0.866  0.839  0.660 
4            theory  0.733  0.698  0.871 
5      model*theory  0.866  0.839  0.660 
6     model*~theory    -      -      -   
7     ~model*theory  0.533  0.438  0.302 
8    ~model*~theory  0.272  0.070  0.251 
---------------------------------------- 

The output of the modelFit() function is a list with as many components as there are models in the minimization object. An artificial example generating two models could be:

iLF2 <- minimize(ttLF, include = "?",
                 dir.exp = "DEV, ~URB, ~LIT, ~IND, ~STB")
iLF2

From C1P1: 

M1:    DEV*URB*STB + (DEV*~URB*~IND) -> SURV 
M2:    DEV*URB*STB + (DEV*~IND*STB) -> SURV 

For each minimization model (solution), a model fit is generated:

mfLF2 <- modelFit(iLF2, "DEV*STB")
length(mfLF2)
[1] 2

Any of the contained model fits, for instance the second, can be accessed easily using the regular [[ operator for indexing lists:

mfLF2[[2]]

References

Hume, David. 1999. An Enquiry concerning Human Understanding (Oxford Philosophical Texts, edited by Tom Beauchamp). Oxford: Oxford University Press.
McCluskey, Edward J. 1956. “Minimization of Boolean Functions.” The Bell System Technical Journal 35: 1417–44.
Quine, Willard Van Orman. 1952. “The Problem of Simplifying Truth Functions.” The American Mathematical Monthly 59 (8): 521–31.
———. 1955. “A Way to Simplify Truth Functions.” The American Mathematical Monthly 62 (9): 627–31.
Ragin, Charles. 1987. The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies. Berkeley, Los Angeles & London: University Of California Press.
———. 2008b. Redesigning Social Inquiry. Fuzzy Sets and Beyond. Chicago; London: University of Chicago Press.
Ragin, Charles, and John Sonnett. 2005. “Between Complexity and Parsimony: Limited Diversity, Counterfactual Cases, and Comparative Analysis.” In Vergleichen in Der Politikwissenschaft, edited by Sabine Kropp and Michael Minkenberg, 180–97. Wiesbaden: VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-322-80441-9.
———. 2008. “Limited Diversity and Counterfactual Cases.” In Redesigning Social Inquiry. Fuzzy Sets and Beyond, edited by Charles Ragin, 147–59. Chicago; London: University of Chicago Press.
Schneider, Carsten Q., and Ingo Rohlfing. 2013. “Combining QCA and Process Tracing in Set-Theoretic Multi-Method Research.” Sociological Methods and Research 42 (4): 559–97. https://doi.org/10.1177/0049124113481341.
Schneider, Carsten Q., and Claudius Wagemann. 2013. “Doing Justice to Logical Remainders in QCA: Moving Beyond the Standard Analysis.” Political Research Quarterly 66 (1): 211–20. https://doi.org/10.1177/1065912912468269.
Schneider, Carsten, and Claudius Wagemann. 2012. Set-Theoretic Methods for the Social Sciences. A Guide to Qualitative Comparative Analysis. Cambridge: Cambridge University Press.
———. 2016. “Standards of Good Practice and the Methodology of Necessary Conditions in Qualitative Comparative Analysis.” Political Analysis 24 (4): 478–84.
Thompson, Stephanie Louisa. 2011. “The Problem of Limited Diversity in Qualitative Comparative Analysis: A Discussion of Two Proposed Solutions.” International Journal of Multiple Research Approaches 5 (2): 254–68. https://doi.org/10.5172/mra.2011.5.2.254.
Yamasaki, Sakura, and Benoît Rihoux. 2009. “A Commented Review of Applications.” In Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques, edited by Benoît Rihoux and Charles Ragin, 123–45. London: Sage Publications.