# Chapter 3 Data

NCA is versatile regarding the data that can be used for the analysis. In quantitative studies NCA uses quantitative data (as in regression analysis), in qualitative studies NCA uses qualitative data (letters, words or numbers without a quantitative meaning), and in set-theoretic studies (like Qualitative Comparative Analysis, QCA) NCA uses set membership scores. NCA can use any of these. NCA does not pose new requirements on collecting data: like any method, NCA needs a proper research design, a good sample, and valid and reliable scores. However, a few aspects can be different. This chapter discusses several aspects of data collection that are relevant for NCA: the necessity experiment (section 3.1), sample size (section 3.2), different data types such as quantitative data (section 3.3), qualitative data (section 3.4), longitudinal data (section 3.5), and set membership scores (section 3.6), and how to handle outliers (section 3.7).

## 3.1 Necessity experiment

Whereas in a common experiment \(X\) is manipulated to produce the *presence* of \(Y\), in NCA \(X\) is manipulated to produce the *absence* of \(Y\). This is done as follows: the research starts with cases *with* the outcome and the condition, then the condition is *removed or reduced*, and it is observed whether the outcome has *disappeared or reduced*. This contrasts with the sufficiency experiment, in which the research starts with cases *without* the outcome and the condition, then the condition is *added or increased*, and it is observed whether the outcome has *appeared or increased*.

## 3.2 Sample size

A frequently asked question is “What is the required sample size for NCA?” The short answer is: “The more the better, but at least N = 1.” This is explained below.

### 3.2.1 Small N studies

In hypothesis testing research a sample is used to test whether the hypothesis can be falsified (rejected) or not. After repeated non-rejections with different samples, a hypothesis may be considered ‘confirmed.’

When \(X\) and \(Y\) are dichotomous variables (e.g., having only two values: ‘absent’ or ‘present,’ ‘0’ or ‘1,’ low or high) it is possible to falsify a necessary condition with even a single case (N = 1). The single case is selected by *purposive sampling* (not random sampling). The selected case must have the outcome (present, 1, high), and it is observed whether this case has the condition or not. If not (absent, 0, low), the hypothesis is rejected. If yes (present, 1, high), the hypothesis is not rejected. In the deterministic view on necessity, where no exceptions exist, a single falsifying case allows the researcher to conclude that the hypothesis does not hold. For stronger conclusions, replication with other cases must be done.

An example of this approach is a study by Harding et al. (2002), who studied rampage school shootings in the USA. They first analysed two shooting cases and identified five necessary conditions: gun availability, a cultural script (a model of why the shooting solves a problem), perceived marginal social position, personal trauma, and failure of the social support system. Next, they tested these conditions with two other shooting cases, and concluded that their ‘pure necessity theory’ (see section 2.3) was supported.

In another example, Fujita & Kusano (2020) published an article in the *Journal of East Asian Studies* studying why some Japanese prime ministers (PMs) have attempted to deny Japan’s violent past through highly controversial visits to the Yasukuni Shrine, while others have not. They propose three necessary conditions for a Japanese PM’s decision to visit: a conservative ruling party, a government enjoying high popularity, and Japan’s perception of a Chinese threat. They tested these conditions by considering the 5 successful cases (PMs who visited the shrine) from all 22 cabinets between 1986 and 2014. In these successful cases the three necessary conditions were satisfied, which means that the hypotheses were supported. They additionally tested the contrapositive (the absence of at least one necessary condition is sufficient for the absence of a visit), which was indeed supported by the data.

The editor of the journal commented on the study as follows: https://www.youtube.com/watch?v=zHlVzsQg8Ek

### 3.2.2 Large N studies

In large N studies cases are sampled from a population in the theoretical domain. The goal of inferential statistics is to make an inference (statistical generalization) from the sample to the population. The ideal sample is a *probability sample*, where all cases of the population have an equal chance of becoming part of the sample. This contrasts with a *convenience sample*, where cases are selected from the population for the convenience of the researcher, for example because the researcher has easy access to the cases. NCA puts no new requirements on sampling, and common techniques that are used with other data analysis methods can also be applied for NCA, with the same possibilities and limitations.

## 3.3 Quantitative data

Quantitative data are data expressed by numbers, where the numbers are scores (or values) with a meaningful order and meaningful distances. Scores can have two levels (dichotomous, e.g., 0 and 1), a finite number of levels (discrete, e.g., 1, 2, 3, 4, 5) or an infinite number of levels (continuous, e.g., 1.8, 9.546, 306.22).

NCA can be used with any type of quantitative data.
It is possible that a *single indicator* represents the concept (condition or outcome) of interest. Then this indicator score is used to score the concept. For example, in a questionnaire study where a subject or informant is asked to score conditions and outcomes with seven-point Likert scales, the scores are one of seven possible (discrete) scores.

It is also possible that a *construct* that is built from several indicators is used to represent the concept of interest. Separate indicator scores are then summed, averaged or otherwise combined by the researcher to score the concept. It is also possible to combine several indicator scores statistically, for example by using the factor scores resulting from a factor analysis, or the construct scores resulting from the measurement model of a Structural Equation Model.
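As a minimal sketch (with hypothetical indicator scores, not data from this book), averaging three Likert-scale indicators into a single construct score could look as follows in R:

```
# Hypothetical scores of three indicators for one concept (five cases)
ind1 <- c(4, 5, 3, 6, 7)
ind2 <- c(5, 5, 4, 6, 6)
ind3 <- c(3, 6, 4, 7, 7)

# Average the indicator scores per case to obtain the construct score
construct <- rowMeans(cbind(ind1, ind2, ind3))
print(construct)
```

The resulting construct scores can then be used as condition or outcome scores in an NCA analysis.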

Quantitative data can be analysed with NCA’s ‘scatter plot approach’ and with NCA’s software for R. When the number of variable levels is small (e.g. less than 5) it is also possible to use NCA’s contingency table approach.

## 3.4 Qualitative data

Qualitative data are data expressed by names, letters or numbers without a quantitative meaning. For example, gender can be expressed with names (‘male,’ ‘female,’ ‘other’), letters (‘m,’ ‘f,’ ‘o’) or numbers (‘1,’ ‘2,’ ‘3’). Qualitative data are discrete and usually have no order between the scores (e.g., names of people) or, if there is an order, the distances between the scores are unspecified (e.g., ‘low,’ ‘medium,’ ‘high’). Although with qualitative data a quantitative NCA (with the `NCA` software for R) is not possible, it is still possible to apply NCA in a qualitative way using visual inspection of the \(XY\) contingency table or scatter plot.

When the condition is qualitative, the researcher observes, for a given score of \(Y\), which qualitative score of the condition (e.g., ‘m,’ ‘f,’ ‘o’) can reach that score of \(Y\). When a logical order of the condition is absent, the position of the empty space can be in any upper part of the plot (left corner, middle or right; assuming the hypothesis that \(X\) is necessary for the presence of or a high level of \(Y\)). When a logical order of the qualitative scores is present, the empty space is in the upper left corner (assuming the hypothesis that the presence of or a high level of \(X\) is necessary for the presence of or a high level of \(Y\)). In both cases the size of the empty space is arbitrary and the effect size cannot be calculated, because the values of \(X\) and therefore the effect size are meaningless. However, when \(Y\) is quantitative, the distance between the highest observed \(Y\) score and the next-highest observed \(Y\) score could be an indication of the constraint of \(X\) on \(Y\).
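This visual inspection can be mimicked with a small sketch in R (hypothetical data): for each qualitative score of \(X\), the highest observed \(Y\) shows which outcome levels that score can reach, and the empty cells of the contingency table show which combinations do not occur.

```
# Hypothetical qualitative condition (m/f/o) and discrete outcome
x <- c("m", "f", "o", "m", "f", "o", "m", "f")
y <- c(3, 5, 2, 4, 5, 1, 2, 3)

# Highest observed outcome per qualitative score of the condition
tapply(y, x, max)

# XY contingency table; cells with count 0 reveal the 'empty space'
table(x, y)
```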

## 3.5 Longitudinal, panel and time-series data

Researchers often use cross-sectional research designs. In such designs several variables are measured at one moment in time. In a longitudinal research design the variables are measured at several moments in time. The resulting data are called longitudinal data, panel data or time-series data. In this book, longitudinal data (a name commonly used by statisticians) and panel data (a name commonly used by econometricians) are considered as synonyms.

Longitudinal/panel data are data from a longitudinal/panel study in which *several variables* are measured at several moments in time. Time-series data are a special case of longitudinal/panel data in which *one variable* is measured many times.

Longitudinal/panel and time-series data can be analysed with NCA with different goals in mind. Two straightforward ways are discussed in this book. First, if the researcher is not specifically interested in time, the time data can be pooled together as if they were from a cross-sectional data set (e.g., Jaiswal & Zane, 2021). Second, if the researcher is interested in time trends, NCA can be applied for each time stamp separately.

### 3.5.1 NCA with pooled data

Figure 3.1 shows an example of a scatter plot of time-pooled data. It is an example of the necessity of a country’s economic prosperity for a country’s average life expectancy: a high life expectancy is not possible without economic prosperity. This necessity claim can be theoretically justified by the effect of economic prosperity on the quality of the health care system. The data are from the World Bank for 60 years between 1960 and 2019 and for 199 countries. Data are available for 8899 country-years. The scatter plot of time-pooled data has Economic prosperity, expressed as log GDP per capita, on the \(X\) axis, and Life expectancy at birth, expressed in years, on the \(Y\) axis.

Figure 3.1 shows a clear empty space in the upper left corner. The NCA analysis is done with the C-LP ceiling line, which is a straight ceiling line that has no observations above it, and with the CE-FDH line that follows the somewhat non-linear trend of the upper left border between the full and empty space. The empty space in the upper left corner indicates that a certain level of life expectancy is not possible without a certain level of economic prosperity. For example, for a country’s life expectancy of 75 years, an economic prosperity of at least 3 (10^{3} = 1000 equivalent US dollars) is necessary. The effect sizes for the two ceiling techniques are 0.13 (p < 0.001) and 0.16 (p < 0.001), respectively. This suggests that a *necessity* relationship exists between Economic prosperity and Life expectancy. (There is also a well-known positive *average* relationship (an imaginary line through the middle of the data), but this average trend is not considered here.)

### 3.5.2 NCA for describing time trends

When the researcher is interested in time trends, NCA can be applied for each time stamp separately. Such an approach can be considered as multiple cross-sectional NCA analyses. As an example, in Table 2.1 a longitudinal analysis is done with the same countries and years as in Figure 3.1, but now analysed per year with the C-LP ceiling line. To make the NCA results comparable across years, the scope for each year is standardized to the scope of the pooled data (based on the absolute minima and maxima of empirically observed Economic prosperity and Life expectancy in the data). This scope is entered in the NCA analysis as the theoretical scope for each year.
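A per-year analysis with a standardized scope could be sketched as follows. This is a sketch under assumptions: it assumes a hypothetical data frame `panel` with columns `year`, `prosperity` and `life.exp`, and assumes that the theoretical scope can be passed to `nca_analysis` via its `scope` argument as `c(Xmin, Xmax, Ymin, Ymax)`; check the `NCA` documentation for the exact format.

```
library(NCA)

# Theoretical scope fixed to the minima and maxima of the pooled data
scope <- c(min(panel$prosperity), max(panel$prosperity),
           min(panel$life.exp),  max(panel$life.exp))

# Run NCA per year with the C-LP ceiling line and the same scope
results <- lapply(split(panel, panel$year), function(d)
  nca_analysis(d, "prosperity", "life.exp",
               ceilings = "c_lp", scope = scope))
```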

Table 2.1 shows several NCA parameters for each time point. The effect sizes range from 0.13 to 0.29 and the p values for all years are less than 0.001. In general, the effect size decreases over the years, such that the constraint that economic prosperity puts on life expectancy is decreasing over time.

Figure 3.2 is an animated figure showing the time trends. In the figure, each data point is a country. The size of a data point refers to the country’s population size, and the color to the geographic region to which it belongs, e.g., yellow for the Sub-Saharan region, and violet for the East Asian and Pacific region.

### 3.5.3 Interpretation

The interpretation of a longitudinal analysis with NCA focuses on the change of the effect size and ceiling line over time. For example, Table 2.1 shows that the size of the necessity effect of economic prosperity for life expectancy reduces over time. This indicates that economic prosperity has become less of a bottleneck for high life expectancy. Figure 3.2 shows that this decrease of effect size is particularly due to the increase of the intercept (upward movement of the ceiling line). This indicates a rise of the maximum possible life expectancy for a given level of prosperity.

Figure 3.3 shows that in 1960 the maximum life expectancy for a prosperity
of 3.4 (10^{3.4} = 2500 equivalent US dollars) was about 65 years, and in 2019 it was about 75 years. The reduction of necessity is also partly due to the flattening of the
ceiling line, indicating that the difference between the best performing
less prosperous countries, and the best performing more prosperous
countries, is diminishing. In 1960 countries with prosperity of
3.4 (2500 dollar) had a maximum possible life expectancy that was 10
years less than that for countries with prosperity of 4.4
(25000 dollar): 65 versus 75 years. However, in 2019 countries with prosperity of 3.4 (2500 dollar)
had a maximum possible life expectancy that was only 5 years less than that for countries with prosperity of 4.4 (25000
dollar): 75 versus 80 years.

Another observation is that the density of countries near the ceiling line has increased over the years. This indicates that the relative differences in life expectancy between the more and less performing countries have reduced.

Socio-economic policy practitioners could probably make new interesting interpretations from these necessity trends. For example, in the past, economic prosperity was a major bottleneck for high life expectancy. Currently, most countries (but surely not all) are rich enough for having a decent life expectancy of 75 years; if these countries do not reach that level of life expectancy, factors other than economic prosperity are the reason that it is not achieved. In the future, economic prosperity may not be a limiting factor for high life expectancy when all countries have reached a GDP per capita of 2500 dollars and the empty space in the upper left corner has disappeared.

## 3.6 Set membership data

NCA can be used with set membership scores. This is usually done when NCA is combined with QCA. In QCA, ‘raw’ variable scores are ‘calibrated’ into set membership scores with values between 0 and 1. A set membership score indicates to what extent a case belongs to the set of cases that have a given characteristic, in particular the \(X\) and the \(Y\). Several studies have been published that use NCA with set membership data (e.g., Torres & Godinho, 2021). These studies combine NCA with QCA to gain additional insights (in degree) about the conditions that must be present in all sufficient configurations. Section 4.4 discusses how NCA can be combined with QCA.
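As an illustration of calibration (which is part of QCA, not of NCA itself), a minimal sketch of the ‘direct method’ from the QCA literature uses a logistic function with three anchors. The anchors below are hypothetical, and in practice researchers typically use a dedicated function such as `calibrate()` in the `QCA` package for R.

```
# Sketch of direct calibration with three hypothetical anchors:
# full non-membership (10), crossover (50), full membership (90)
calibrate_direct <- function(x, low, cross, high) {
  # scale raw scores so that the anchors map to log-odds of -3, 0, +3
  scalar <- ifelse(x < cross, 3 / (cross - low), 3 / (high - cross))
  1 / (1 + exp(-scalar * (x - cross)))
}

raw <- c(5, 20, 50, 80, 95)                  # hypothetical raw scores
round(calibrate_direct(raw, 10, 50, 90), 2)  # membership scores in [0, 1]
```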

## 3.7 Outliers

Outliers are observations that are “far away” from other observations. Outliers can have a large influence on the results of any data analysis, including NCA. In NCA an outlier is an ‘influential’ case or observation that has a large influence on the necessity effect size when removed.

### 3.7.1 Outlier types

Because the effect size is defined by the numerator ‘ceiling zone’ (*empty space*) divided by the denominator *scope*, and an outlier is defined by its influence on the effect size, two types of outliers exist in NCA: an ‘empty space outlier’ and a ‘scope outlier.’

An *empty space outlier* is an isolated case in the otherwise empty corner of the scatter plot that reflects necessity (usually the upper left corner). Such a case determines the location of the ceiling line, thus the size of the empty space, and thus the necessity effect size. When an empty space outlier is removed, it usually enlarges the empty space and thus *increases* the necessity effect size.

A *scope outlier* is an isolated case that determines the minimum or maximum value of the condition or the outcome, thus the scope, and thus the necessity effect size. When a scope outlier is removed, it usually decreases the scope and thus *increases* the effect size.

When a case is both an empty space outlier and a scope outlier, it determines both the position of the ceiling line and the minimum or maximum value of \(X\) or \(Y\). When such a case is removed, it simultaneously decreases the empty space and the scope. The net effect is often a *decrease* of the necessity effect size.

Only cases that determine the ceiling line and cases that determine the minimum or maximum value of \(X\) or \(Y\) can change the effect size when removed. Cases that are below the ceiling line and away from the extreme values of \(X\) and \(Y\) have no influence on the effect size when removed, and are therefore not candidates for potential outliers.

Which cases are candidates for potential empty space outliers depends on the ceiling line technique. The CE-FDH and CR-FDH ceiling techniques use the upper left corner points of the CE-FDH ceiling line (a step function) for drawing the ceiling line. These upper left corner points are called ‘peers.’ Therefore, the peers are the potential outliers when these ceiling lines, which are the default lines of NCA, are used. The C-LP ceiling line uses only two of the peers for drawing the ceiling line, and these points are therefore the potential empty space outliers. The scope outliers do not depend on the ceiling technique.

### 3.7.2 Outlier identification

There are no scientific rules for deciding whether a case is an outlier. This decision depends primarily on the judgment of the researcher. For a first impression of possible outliers, the researcher can visually inspect the scatter plot for cases that are far away from other cases. In the literature, rules of thumb are used for identifying outliers. For example, a score of a variable is an outlier score ‘if it deviates more than three standard deviations from the mean,’ or ‘if it is more than 1.5 interquartile ranges above the third quartile or below the first quartile.’ These pragmatic rules address only one variable (\(X\) or \(Y\)) at a time. Such single-variable rules of thumb may be used for identifying scope outliers (unusual minimum and maximum values of the condition or outcome). These rules are not suitable for identifying potential empty space outliers, because empty space outliers depend on the *combination* of \(X\) and \(Y\) values. For example, a case with a non-extreme \(Y\) value may be a regular case when it has a large \(X\) value, because it does not define the ceiling. Then removing the case does not change the effect size. However, when \(X\) is small the case defines the ceiling, and removing the case may change the effect size considerably. This means that both \(X\) and \(Y\) must be considered when identifying empty space outliers in NCA.

No pragmatic bivariate outlier rules currently exist for NCA. Because a case is a potential outlier if it considerably changes the necessity effect size when it is removed, the researcher can first select potential outliers and then calculate the effect size change when each case is removed. If no clear potential outlier case can be identified, the researcher may decide that the data set does not contain outliers.

All potential empty space outliers and potential scope outliers can be evaluated one by one by calculating the absolute and relative effect size changes when the case is removed (with replacement). An outlier may be identified if the change is ‘large’ (according to the judgment of the researcher). Again, no scientific rules exist for what can be considered a ‘large’ effect size change.

For illustration, an outlier analysis is done with the example that is part of the `NCA` software to test the hypothesis that a country’s Individualism is necessary for a country’s Innovation performance. Data are available for 28 cases (countries).

```
library(NCA)
data(nca.example)
model <- nca_analysis(nca.example, 1, 3, ceilings = c("ce_fdh", "cr_fdh"))
nca_output(model, plots = TRUE, summaries = FALSE)
```

The scatter plot with the two default ceiling lines is shown in Figure 3.4.

From visual inspection, no clear outlier case that is very far away from the other cases can be recognized. Therefore, potential *empty space outliers* could be selected by evaluating all peers. The peers are the upper left corner points of the CE-FDH ceiling line (a step function). They can be found with the `peers` element of the `NCA` software output as follows (assuming that the `nca_analysis` of `nca.example` is named ‘model’):

```
empty.space.outliers <- model$peers
print(empty.space.outliers)
```

```
## $Individualism
## X Y
## South Korea 18 42.3
## Japan 46 171.6
## Finland 63 173.1
## Sweden 71 184.9
## USA 91 214.4
```

The output is a list with the names of the peers (countries) and their \(X\) and \(Y\) coordinates.

Next, potential *scope outliers* could be selected by evaluating the cases that have the minimum or maximum value of \(X\) or \(Y\). This can be done with the following script (assuming that the `nca_analysis` of `nca.example` is named ‘model’):

```
Xmin <- as.numeric(unlist(model$summaries, use.names = F)[3])
Xmax <- as.numeric(unlist(model$summaries, use.names = F)[4])
Ymin <- as.numeric(unlist(model$summaries, use.names = F)[5])
Ymax <- as.numeric(unlist(model$summaries, use.names = F)[6])
Cases.Xmin <- row.names(nca.example)[which(nca.example[,1] == Xmin)]
Cases.Xmax <- row.names(nca.example)[which(nca.example[,1] == Xmax)]
Cases.Ymin <- row.names(nca.example)[which(nca.example[,3] == Ymin)]
Cases.Ymax <- row.names(nca.example)[which(nca.example[,3] == Ymax)]
scope.outliers <- rbind(Cases.Xmin, Cases.Xmax, Cases.Ymin, Cases.Ymax)
print(scope.outliers)
```

```
## [,1]
## Cases.Xmin "South Korea"
## Cases.Xmax "USA"
## Cases.Ymin "Mexico"
## Cases.Ymax "USA"
```

It turns out that South Korea defines the minimum \(X\), the USA the maximum \(X\) and the maximum \(Y\), and Mexico the minimum \(Y\). This means that South Korea and the USA are not only potential empty space outliers but also potential scope outliers.

Finding potential empty space outliers and potential scope outliers can be combined in the following user-defined function. The function selects the cases that are peers and the cases that have minimum or maximum values, and evaluates their individual influences on the effect size by removing the cases one by one (with replacement). The function needs five inputs: the name of the data set, specifications of \(X\) and \(Y\), the ceiling line, and whether or not all cases should be considered potential outliers.

```
nca_outliers <- function (data, xvar, yvar, ceiling = "cr_fdh", all = FALSE){
  library(NCA)
  modelA <- nca_analysis(data, xvar, yvar, ceilings = ceiling)
  eff.or <- as.numeric(unlist(modelA$summaries, use.names = F)[8])
  # select potential empty space outliers: peers
  peers <- as.data.frame(modelA$peers)
  # select potential scope outliers: minimum/maximum
  Xmin <- as.numeric(unlist(modelA$summaries, use.names = F)[3])
  Xmax <- as.numeric(unlist(modelA$summaries, use.names = F)[4])
  Ymin <- as.numeric(unlist(modelA$summaries, use.names = F)[5])
  Ymax <- as.numeric(unlist(modelA$summaries, use.names = F)[6])
  Cases.Xmin <- row.names(data)[which(data[,1] == Xmin)]
  Cases.Xmax <- row.names(data)[which(data[,1] == Xmax)]
  Cases.Ymin <- row.names(data)[which(data[,3] == Ymin)]
  Cases.Ymax <- row.names(data)[which(data[,3] == Ymax)]
  # unique potential outliers
  if (all) {outl <- rownames(data)} else {
    outl <- unique(c(row.names(peers), Cases.Xmin, Cases.Xmax, Cases.Ymin, Cases.Ymax))
  }
  # evaluate potential outliers:
  # remove them one by one (with replacement) and calculate effect size (difference)
  cnt = 0
  dif.abs = NULL
  dif.rel = NULL
  eff.nw = NULL
  for (i in 1:length(outl)){
    cnt <- cnt + 1
    dataR <- data[-which(row.names(data) == outl[i]),]
    modelR <- nca_analysis(dataR, xvar, yvar, ceilings = ceiling)
    eff.nw[i] <- as.numeric(unlist(modelR$summaries, use.names = F)[8])
    dif.abs[i] <- (eff.nw[i] - eff.or)
    dif.rel[i] <- (eff.nw[i] - eff.or)/(eff.or)*100
  }
  eff.or <- round(eff.or, digits = 2)
  eff.nw <- round(eff.nw, digits = 2)
  dif.abs <- round(dif.abs, digits = 2)
  dif.rel <- round(dif.rel, digits = 1)
  potential.outliers <- data.frame(outl, eff.or, eff.nw, dif.abs, dif.rel)
  return(potential.outliers)
}
```

The function can be applied to the `nca.example` data set with the `ce_fdh` ceiling line as follows:

`nca_outliers(nca.example,1,3, ceiling = "ce_fdh", all = FALSE)`

```
## outl eff.or eff.nw dif.abs dif.rel
## 1 South Korea 0.42 0.40 -0.01 -3.0
## 2 Japan 0.42 0.55 0.14 32.7
## 3 Finland 0.42 0.42 0.00 0.2
## 4 Sweden 0.42 0.43 0.02 3.6
## 5 USA 0.42 0.33 -0.09 -21.5
## 6 Mexico 0.42 0.42 0.00 0.1
```

The output table shows in the first column the case name of the potential outlier (outl), in the second column the original effect size when no cases are removed (eff.or), and in the third column the new effect size when the outlier is removed (eff.nw). The next two columns show the absolute and relative differences between the new and the original effect sizes. Note that when a potential outlier is removed and the difference is calculated, it is added again to the data set before another potential outlier is evaluated (removal with replacement). The new effect sizes range from 0.33 to 0.55, the absolute differences from -0.09 to 0.14, and the relative differences from -21.5% to +32.7%.

As expected, the three potential *empty space outliers* (Japan, Finland, Sweden) increase the effect size when removed. Their absolute differences range from virtually 0 to 0.14, and their relative effect size differences from 0.2% to 32.7%. As expected, Mexico is a potential *scope outlier* and, when removed, increases the effect size, but this influence is marginal. As expected, the two potential *empty space outliers* that are also potential *scope outliers* (South Korea and the USA) *reduce* the effect size when removed, with absolute effect size differences of -0.01 and -0.09, respectively, and relative effect size differences of -3.0% and -21.5%, respectively. In all, it appears that Japan is the most serious potential outlier.

### 3.7.3 Outlier decision making

After potential outliers are identified, the next question is what to do with them: remove or keep. Because removing an outlier has a large effect on the outcome (see the definition of outlier above), removing a case from the data set based on a pragmatic outlier rule without further judgment is not recommended. In general, the recommendation is to keep a potential outlier, unless there is a good reason to remove it (thus “keep unless,” instead of “remove unless”). When an outlier is removed, it is important to report that it was removed and why.

Figure 3.5 shows a decision tree to assist researchers in deciding about keeping or removing a potential outlier.

First, the potential outlier case is selected. It is then evaluated for sampling error. Sampling error refers to a case that does not represent the theoretical domain of the theory that is tested or to which the researcher wants to generalize the results. For example, the case may be a large company, whereas the theory applies to small companies only. This is a reason to remove the outlier case.

If there is no sampling error, the potential outlier case may have measurement error. The condition or the outcome may have been incorrectly scored, which could happen for a variety of reasons. If there is measurement error and the error can be corrected, the outlier case becomes a regular case and stays in the dataset. If the measurement error cannot be corrected, the case can be removed from the dataset and this reason is reported.

If there is no information about sampling error or measurement error, the researcher can decide to apply an outlier rule, for example one based on the magnitude of the absolute or relative effect size change when the case is removed (see section 3.7.2). Such a rule should be specified and justified. The outlier rule then decides whether the case is kept or removed. If no outlier rule is applied, the potential outlier case remains in the data set.

When a potential outlier case has a large influence on the necessity effect size when removed, but stays in the dataset, a *sensitivity analysis* can be done. In this analysis the researcher explores the influence of the potential outlier case not only on the effect size (see above), but also on the p value of the effect size, to judge whether the (lack of) evidence of necessity in the data remains approximately the same or not.

Furthermore, when a potential outlier case has a large influence on the necessity effect size when removed, but stays in the dataset, the researcher might consider taking a ‘probabilistic view’ on necessity. In this view necessity is considered to be present, even if outlier cases do not support it. The necessity analysis is done with the data set without the outliers (i.e., NCA is done with the incomplete dataset) and the outliers are added back afterwards. The researcher should be reluctant to use this approach, particularly for empty space outliers, because ignoring such an outlier usually results in an increase of the effect size.

For example, Japan can be removed from `nca.example` as follows:

```
nca.example <- nca.example[-14,]
model <- nca_analysis(nca.example, 1, 3, ceilings = c("ce_fdh", "cr_fdh"))
nca_output(model, plots = TRUE, summaries = FALSE)
```

The scatter plot of Figure 3.6 can be compared with the scatter plot of Figure 3.4.

In the original analysis with the complete dataset, the effect size with the CE-FDH ceiling line was 0.42 (p = 0.080). Without Japan in the dataset, the effect size increases to 0.55 and the p value decreases to 0.012. Therefore, removing an outlier can influence the researcher’s judgment about evidence of necessity in the data. The researcher’s judgment about the hypothesis is less convincing if the judgment is sensitive to including or removing outliers that are not clear sampling errors (e.g., Japan not being part of the theoretical domain) or clear measurement errors (e.g., incorrect measurement of Japan’s Individualism or Innovation performance).

The above procedures are for one outlier at a time. The judgment becomes more complex when the researcher considers removing more than one outlier at the same time. After removing the first outlier, a case that was originally not part of the first set of single potential outliers, may be identified as a new potential outlier, etc.

### References

Fujita & Kusano (2020). *Journal of East Asian Studies*, *20*(2), 291–316. https://doi.org/10.1017/jea.2020.2

Harding et al. (2002). *Sociological Methods & Research*, *31*(2), 174–217. https://doi.org/10.1177/0049124102031002003

Jaiswal & Zane (2021). *Thunderbird International Business Review*. https://doi.org/10.1002/tie.22243

Torres & Godinho (2021). *Small Business Economics*, 1–17. https://doi.org/10.1007/s11187-021-00515-3