Note that everything we do here can also be done in the Shiny app (web interface) of the qmethod package by Aiora Zabala. All these guidelines and code are based on instructions laid out by Aiora Zabala.

My contribution here is simply to explain in a bit more detail what's happening between the lines.


0. R setup

First of all, make sure you have installed the qmethod package by Aiora Zabala.

install.packages("qmethod")

At the beginning of your document, make sure to load the library.

library(qmethod)
## Warning: previous import 'psych::equamax' replaced by 'GPArotation::equamax'
## when loading 'qmethod'
## Warning: previous import 'psych::varimin' replaced by 'GPArotation::varimin'
## when loading 'qmethod'
## 
## This is 'qmethod' v.1.8.
## 
## Please cite as:
## Zabala, A. (2014) qmethod: A Package to Explore Human Perspectives Using Q Methodology. The R Journal, 6(2):163-173.

Next up, set the working directory accordingly.

setwd("your_directory")

1. Loading data into R

An important note before importing the data (here called q_data.csv): delete the first column in Excel and save the file as a CSV. Optionally, you can then save the data in a separate file.

Here I will illustrate the different analytical steps with an example data set, which I previously saved as q_data.csv.

On this page I make some general remarks on what's happening in the code and also add a few comments in blue which refer to the output produced from the sample data file.

qdata <- read.csv("q_data.csv", header=TRUE, sep=";",dec=".")

save(qdata,file = "qdata.RData")
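
Alternatively, if you prefer not to edit the file in Excel, you can drop the first column after importing. This is a sketch that assumes the first column holds statement IDs rather than Q-sort data:

qdata <- qdata[, -1]  # only if the first column was not already removed in Excel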

2. Very basic analysis

Just to get a first idea of your data, have a quick look at the correlations. This is by no means a necessary step, but it is often nice to see who is correlated with whom. It gives some idea of which Q-sorts are most and least similar, and therefore hints at how the factors are likely to develop.

cor(qdata)
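
With 15 Q-sorts the raw correlation matrix can be a little hard to read; rounding to two decimals (plain base R) helps:

round(cor(qdata), 2)  # correlation matrix rounded to two decimals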

2.1. Some Intuition

Intuitively speaking, this is what is happening in the background: we have many individual Q-sorts. Comparing the different Q-sorts, we quickly see that some are alike and some are different. Looking at the dataset, we notice that there is some variance. What we try to do in the first step is to see which part of the variance is shared. This shared variance is referred to as communality.

With Principal Component Analysis, we explain the variance and covariance in terms of a simpler structure by looking for communality in the data.

To facilitate interpretation of the resulting simplified structure, we apply factor rotation. This procedure aims to maximise the individual factor loadings (a metric representing factor associations, similar to correlations).

In short, the statistical procedures are as follows (conducted in that order; see the sketch after the list):

  1. correlation
  2. extraction
  3. rotation
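
Purely for intuition, the three steps could look roughly like this in base R. This is a sketch and not qmethod's exact internals; qmethod performs all of this for us, so none of the following is required:

qcor <- cor(qdata)                                     # 1. correlation between Q-sorts
qpca <- prcomp(qdata)                                  # 2. extraction via PCA
qloa <- qpca$rotation[, 1:3] %*% diag(qpca$sdev[1:3])  # unrotated loadings of the first 3 components
qrot <- varimax(qloa)                                  # 3. varimax rotation of the loadings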

3. Q analysis

At this stage we will tell R: “Have a look at the data and check for 3 factors”. This step is crucial. From now on we will use the features of the qmethod package.
The qmethod function below goes through our data. Based on the statistical criteria mentioned further on, we will have to decide whether we are OK with the number of factors, or whether we have to decrease the number further down to 2.

The rationale can be illustrated by imagining the following situation: I don't know if you have kids, but let's suppose you do. You come home and go into your kid's room. It's a total mess with toys everywhere. Now you tell your kid: “Clean up your room and sort your toys into 6 boxes. Not 4, not 5, not 7 – into 6 boxes. And figure out a system to put them into the 6 boxes.”

Now, time passes and you check how your kid cleaned up the room. One box is full of action figures, one with crayons, one with stuffed animals, and the remaining three boxes each contain a single book. This doesn't really look efficient; fewer boxes, in particular a single box for books, would have done the job as well. However, since you imposed that 6 boxes should be used, you can now evaluate whether the outcome is efficient or not.

The process of factor analysis in Q-method is quite similar. We impose a restriction on the program by telling it how many factors to extract. All the information the program has is the different sorting patterns. What the program does now is look for similarities within the sorting patterns and “put them into the boxes” (the factors). It may also happen that the program comes up with factors that don't tell us much. Similar to the example above with the three boxes of books, it is up to us researchers to evaluate whether a factor makes sense or not. To do that, we have statistical criteria, such as eigenvalues, the share of variance explained, or the number of Q-sorts per factor. Moreover, after building the narratives we can evaluate whether extracting a smaller number of factors makes more sense.

results <- qmethod(qdata, nfactors = 3, rotation = "varimax", cor.method="pearson")
## Q-method analysis.
## Finished on:               Fri Oct 13 11:38:42 2023
## 'qmethod' package version: 1.8
## Original data:             38 statements, 15 Q-sorts
## Forced distribution:       TRUE
## Number of factors:         3
## Extraction:                PCA
## Rotation:                  varimax
## Flagging:                  automatic
## Correlation coefficient:   pearson

[NOTE: The default option for rotation is varimax, for the correlation method it is pearson, and for the extraction method it is PCA.]

UPDATE: newer versions of the ‘qmethod’ package also take ‘extraction’ as an argument. The options here are ‘PCA’ and ‘centroid’.
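
On such a version, the call might look as follows (a sketch; check the help page ?qmethod of your installed version for the exact argument values):

results_centroid <- qmethod(qdata, nfactors = 3, extraction = "centroid")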

In our example, we extracted 3 factors using the varimax rotation and the Pearson correlation method.

“In Q, correlation coefficients are employed to determine the extent to which statement patterns in two Q sorts are similar: it is assumed that two persons with approximately the same attitude on a subject will rank the items in roughly the same order” (Brown, 1980, p. 267). Taking this as the starting point of our analysis, we first correlate the different Q-sorts with each other and subsequently rotate the data to reduce the dimensionality and find out who can be grouped together.

The rotation method can best be understood as a search mechanism by which the programme tries to identify the factors in the dataset. Now, rotation might sound a little fancy, but it is actually a quite accurate description of what is happening in the background. Let us briefly remember that our aim is to reduce the dimensions of our dataset: within Q-studies we interview many people to find a reduced number of subjective viewpoints. This should be intuitive. We can, however, tell the program based on which criteria it should reduce our dataset to end up with our desired subjective viewpoints.

In the case of the varimax rotation, we tell the programme to rotate the dataset in such a way that each factor explains as much variance as possible. In other words, the rotation goes in the direction in which the data is most dispersed. Consequently, the first factor explains the most variance, followed by the second, then the third…

Varimax is thereby set as the default. There are many other popular rotation methods, each based on different rotation criteria and each with its own rationale. If you think that variance is not a good criterion for rotating your data, you might want to have a look at other rotation methods (for example quartimax or promax), or do the rotation by hand.
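
For instance, assuming your installed qmethod version passes these rotation options through to the underlying routines, a quartimax run would look like this:

results_quartimax <- qmethod(qdata, nfactors = 3, rotation = "quartimax")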

As for the correlation method, there are the options of Pearson and Spearman correlation. Pearson's correlation assesses linear relationships, whereas Spearman's correlation addresses monotonic relationships, which might be linear or take other functional forms.
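
For example, to use Spearman instead of the default Pearson correlation:

results_spearman <- qmethod(qdata, nfactors = 3, cor.method = "spearman")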

4. Extracting factors

Watts and Stenner (2012) provide a good overview of Q-methodology and the respective criteria that determine the number of extracted factors.

First, we check which participants load onto which factor.

results$flag
##        flag_f1 flag_f2 flag_f3
## Resp1    FALSE    TRUE   FALSE
## Resp2    FALSE   FALSE    TRUE
## Resp3     TRUE   FALSE   FALSE
## Resp4    FALSE    TRUE   FALSE
## Resp5    FALSE   FALSE   FALSE
## Resp6    FALSE   FALSE    TRUE
## Resp7    FALSE   FALSE    TRUE
## Resp8     TRUE   FALSE   FALSE
## Resp9    FALSE    TRUE   FALSE
## Resp10   FALSE    TRUE   FALSE
## Resp11    TRUE   FALSE   FALSE
## Resp12    TRUE   FALSE   FALSE
## Resp13   FALSE    TRUE   FALSE
## Resp14    TRUE   FALSE   FALSE
## Resp15    TRUE   FALSE   FALSE
loa.and.flags(results)
##        fg1    f1 fg2    f2 fg3    f3
## Resp1      -0.07   *  0.78      0.50
## Resp2       0.22      0.23   *  0.69
## Resp3    *  0.72      0.22      0.32
## Resp4       0.19   *  0.87      0.10
## Resp5       0.39      0.47      0.53
## Resp6       0.49      0.13   *  0.74
## Resp7       0.08      0.18   *  0.87
## Resp8    *  0.69      0.47     -0.07
## Resp9       0.37   *  0.63      0.24
## Resp10      0.41   *  0.59      0.29
## Resp11   *  0.49      0.27      0.22
## Resp12   *  0.60      0.35      0.31
## Resp13      0.45   *  0.57      0.15
## Resp14   *  0.74     -0.03      0.12
## Resp15   *  0.56      0.25      0.21

One of the criteria mentioned by Watts and Stenner (2012) is the number of Q-sorts belonging to a factor. The commands above return the Q-sorts which significantly load onto a certain factor. An important indicator for this are the so-called factor loadings: scores indicating how much each Q-sort loads on each factor. A Q-sort which significantly loads onto a factor is said to be flagged.

Again, in our example, the stars indicate which sort is flagged for which factor and the numbers are the respective factor loadings. As we can see, the first factor has six flagged Q-sorts, the second five, and the third three. We are good to go.

Each Q-sort can only be flagged for one factor. However, it might occur that one Q-sort loads relatively high on multiple factors. In that case, we cannot clearly attribute the person's Q-sort to one single factor, which may lead to the exclusion of that Q-sort. As stated by Zabala (2014), there are two relevant criteria for flagging a Q-sort:

  1. the Q-sort loads significantly onto the factor, and
  2. its squared loading on that factor is higher than the sum of its squared loadings on all other factors.

More precisely, regarding 1) we calculate the significance level as \(1.96 * \frac{1}{\sqrt{n}}\), where \(n\) = number of statements. The second test checks whether a Q-sort loads onto a single factor by a large enough margin that it can be considered a factor exemplar.
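
We can compute this significance threshold for the sample data directly:

sig_level <- 1.96 / sqrt(nrow(qdata))  # nrow(qdata) = number of statements (38)
sig_level                              # roughly 0.32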

results$f_char$characteristics
##    av_rel_coef nload eigenvals expl_var reliability se_fscores
## f1         0.8     6  3.432723 22.88482   0.9600000  0.2000000
## f2         0.8     5  3.281275 21.87516   0.9523810  0.2182179
## f3         0.8     3  2.801565 18.67710   0.9230769  0.2773501

Other criteria can be found in the table above. Based on Watts and Stenner (2012), the following criteria play a role in determining the number of factors extracted from the analysis:

  1. the Kaiser-Guttman criterion (eigenvalues larger than 1)
  2. the share of accumulated variance explained
  3. Humphrey's rule
  4. the screeplot

Let's go through them one by one. All eigenvalues are larger than 1 and the accumulated variance amounts to 63%. In short, so far all criteria are satisfied.

The Kaiser-Guttman criterion ensures that each extracted factor accounts for at least as much study variance as a single Q-sort. If this were not the case, the factor in question would capture less information than the data provided by a single participant.
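
We can verify this directly, since the eigenvalues are stored in the characteristics table shown above:

results$f_char$characteristics$eigenvals
## [1] 3.432723 3.281275 2.801565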

Humphrey's rule basically states that a factor is significant if the cross product of its two highest factor loadings exceeds twice the standard error, i.e. \(\frac{2}{\sqrt{n}}\), where \(n\) = number of statements.

humphrey <- 2/sqrt(dim(qdata))  # dim(qdata) = c(statements, Q-sorts); only the first value is relevant
humphrey
## [1] 0.3244428 0.5163978

The first number returned by this command gives us the threshold of Humphrey's rule. If we calculate the cross product for our third factor, we get 0.64. Since this is larger than the calculated threshold, this criterion is also satisfied.
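
As a quick check in R, using the two highest loadings on factor 3 taken from the loadings table above:

0.87 * 0.74
## [1] 0.6438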

The last criterion is a graphical depiction of the variance explained by additional factors. A kink in the line indicates a cut-off point for additional factors.

screeplot(prcomp(qdata), main = "Screeplot of unrotated factors", type = "l")

Okay, this criterion leaves some room for interpretation, but what are we looking for? We are trying to extract a number of factors that is smaller than the number of participants in our study, while explaining as much variance as possible. A kink in the screeplot basically tells us: this factor is not explaining much variance. What counts as a kink is subject to individual interpretation, though.

5. Making sense of the extracted factors

At this stage we know how many factors we extract from the data. Now the task is to build a narrative and fill the factors with meaning.
The following lines of code provide a summary and the full set of results:

summary(results)
results

So far, we have looked at most of these bits of information. For the interpretation of the respective factors, we now turn to the distinguishing and consensus statements. In short, consensus statements are elements of general agreement, whereas distinguishing statements indicate potential areas of contention.

Now, how do we know whether a statement is distinguishing or represents consensus? Simple: by comparing z-scores. Z-scores tell us how a statement has been evaluated on average by the study participants for each respective factor. These scores are on a scale comparable to the extremes of the applied Q-grid.

Whether or not a statement significantly distinguishes a factor depends on the standard error of the difference of the respective z-scores of that statement. This measure indicates whether two factors evaluated a certain statement differently, taking into account the dispersion of the z-scores within each factor.

plot(results)

Here we see a graphical representation of the z-scores for each factor. At the bottom we see consensus statements and at the top statements with the highest dispersion of z-scores. If an icon is filled with colour, the factor is significantly distinguished for that statement.

In the example above, the grid ranged from -4 to 4, so the z-scores also fall roughly within that range. Each factor has its own colour. As said before, at the bottom we find consensus statements and at the top highly distinguishing statements. Note that the further up we go, the likelier it is that an icon is actually filled with colour and not just outlined: the more dispersed the z-scores, the likelier it is that the z-score difference exceeds the critical point and appears significant.

Another useful feature for interpreting the different factors is to build idealised Q-sorts. An idealised Q-sort represents, so to speak, the average Q-sort of a factor.

scores <- cbind(round(results$zsc, digits = 2), results$zsc_n)  # z-scores and factor scores side by side
nfactors <- ncol(results$zsc)
col.order <- as.vector(rbind(1:nfactors, (1:nfactors) + nfactors))
# scores <- scores[col.order]  # uncomment to interleave z-scores and factor scores per factor
scores <- as.data.frame(scores)
scores
##    zsc_f1 zsc_f2 zsc_f3 fsc_f1 fsc_f2 fsc_f3
## 1   -0.19  -0.30  -0.85     -1      0     -2
## 2   -2.11  -1.28  -1.01     -4     -3     -2
## 3   -0.03  -0.39  -0.53      0     -1     -1
## 4   -0.57   0.57   1.13     -1      1      2
## 5    1.08   0.41   0.65      2      1      1
## 6   -2.01  -1.91  -1.96     -4     -3     -4
## 7    0.49   1.79   0.16      1      4      1
## 8    0.49   1.66   0.03      1      3      0
## 9    0.46   0.24  -0.05      1      1      0
## 10  -0.69   0.21  -0.44     -2      0     -1
## 11   0.43   0.16  -0.63      1      0     -2
## 12   1.19   1.10   0.03      3      3      0
## 13   1.15   1.07   0.03      3      2      0
## 14   0.28   1.21  -0.06      0      3      0
## 15   0.75  -0.26   0.07      1      0      0
## 16  -1.04  -0.52  -1.52     -2     -2     -3
## 17  -1.96  -0.88   0.16     -3     -2      1
## 18   0.90  -0.18   0.62      2      0      1
## 19   1.47  -0.44   1.46      4     -1      3
## 20  -0.07   0.06   1.33     -1      0      3
## 21   1.08   0.78   0.71      2      1      2
## 22   0.18   2.17   1.73      0      4      4
## 23   0.11   0.91   2.21      0      2      4
## 24  -0.64  -1.00  -0.19     -1     -2     -1
## 25   0.83  -0.31   1.18      1     -1      2
## 26  -1.36  -2.29  -1.69     -3     -4     -4
## 27   1.06   0.82   1.22      2      2      3
## 28  -0.33  -0.52  -1.41     -1     -2     -3
## 29  -0.94  -0.50  -1.36     -2     -1     -2
## 30   1.15   0.45   0.55      3      1      1
## 31   0.22   0.83  -0.36      0      2     -1
## 32   1.62  -0.39  -0.18      4     -1      0
## 33   0.06  -0.08   0.78      0      0      2
## 34  -1.29  -0.24  -0.30     -2      0     -1
## 35   0.10  -0.38   0.53      0     -1      1
## 36  -1.39  -1.01   0.00     -3     -3      0
## 37  -0.64  -1.91  -0.45     -1     -4     -1
## 38   0.16   0.34  -1.59      0      1     -3

Reordering might help to see which statements have been most polarising for each factor.

scores[order(scores$zsc_f1, decreasing = T), ]
##    zsc_f1 zsc_f2 zsc_f3 fsc_f1 fsc_f2 fsc_f3
## 32   1.62  -0.39  -0.18      4     -1      0
## 19   1.47  -0.44   1.46      4     -1      3
## 12   1.19   1.10   0.03      3      3      0
## 13   1.15   1.07   0.03      3      2      0
## 30   1.15   0.45   0.55      3      1      1
## 5    1.08   0.41   0.65      2      1      1
## 21   1.08   0.78   0.71      2      1      2
## 27   1.06   0.82   1.22      2      2      3
## 18   0.90  -0.18   0.62      2      0      1
## 25   0.83  -0.31   1.18      1     -1      2
## 15   0.75  -0.26   0.07      1      0      0
## 7    0.49   1.79   0.16      1      4      1
## 8    0.49   1.66   0.03      1      3      0
## 9    0.46   0.24  -0.05      1      1      0
## 11   0.43   0.16  -0.63      1      0     -2
## 14   0.28   1.21  -0.06      0      3      0
## 31   0.22   0.83  -0.36      0      2     -1
## 22   0.18   2.17   1.73      0      4      4
## 38   0.16   0.34  -1.59      0      1     -3
## 23   0.11   0.91   2.21      0      2      4
## 35   0.10  -0.38   0.53      0     -1      1
## 33   0.06  -0.08   0.78      0      0      2
## 3   -0.03  -0.39  -0.53      0     -1     -1
## 20  -0.07   0.06   1.33     -1      0      3
## 1   -0.19  -0.30  -0.85     -1      0     -2
## 28  -0.33  -0.52  -1.41     -1     -2     -3
## 4   -0.57   0.57   1.13     -1      1      2
## 24  -0.64  -1.00  -0.19     -1     -2     -1
## 37  -0.64  -1.91  -0.45     -1     -4     -1
## 10  -0.69   0.21  -0.44     -2      0     -1
## 29  -0.94  -0.50  -1.36     -2     -1     -2
## 16  -1.04  -0.52  -1.52     -2     -2     -3
## 34  -1.29  -0.24  -0.30     -2      0     -1
## 26  -1.36  -2.29  -1.69     -3     -4     -4
## 36  -1.39  -1.01   0.00     -3     -3      0
## 17  -1.96  -0.88   0.16     -3     -2      1
## 6   -2.01  -1.91  -1.96     -4     -3     -4
## 2   -2.11  -1.28  -1.01     -4     -3     -2
scores[order(scores$zsc_f2, decreasing = T), ] 
##    zsc_f1 zsc_f2 zsc_f3 fsc_f1 fsc_f2 fsc_f3
## 22   0.18   2.17   1.73      0      4      4
## 7    0.49   1.79   0.16      1      4      1
## 8    0.49   1.66   0.03      1      3      0
## 14   0.28   1.21  -0.06      0      3      0
## 12   1.19   1.10   0.03      3      3      0
## 13   1.15   1.07   0.03      3      2      0
## 23   0.11   0.91   2.21      0      2      4
## 31   0.22   0.83  -0.36      0      2     -1
## 27   1.06   0.82   1.22      2      2      3
## 21   1.08   0.78   0.71      2      1      2
## 4   -0.57   0.57   1.13     -1      1      2
## 30   1.15   0.45   0.55      3      1      1
## 5    1.08   0.41   0.65      2      1      1
## 38   0.16   0.34  -1.59      0      1     -3
## 9    0.46   0.24  -0.05      1      1      0
## 10  -0.69   0.21  -0.44     -2      0     -1
## 11   0.43   0.16  -0.63      1      0     -2
## 20  -0.07   0.06   1.33     -1      0      3
## 33   0.06  -0.08   0.78      0      0      2
## 18   0.90  -0.18   0.62      2      0      1
## 34  -1.29  -0.24  -0.30     -2      0     -1
## 15   0.75  -0.26   0.07      1      0      0
## 1   -0.19  -0.30  -0.85     -1      0     -2
## 25   0.83  -0.31   1.18      1     -1      2
## 35   0.10  -0.38   0.53      0     -1      1
## 3   -0.03  -0.39  -0.53      0     -1     -1
## 32   1.62  -0.39  -0.18      4     -1      0
## 19   1.47  -0.44   1.46      4     -1      3
## 29  -0.94  -0.50  -1.36     -2     -1     -2
## 16  -1.04  -0.52  -1.52     -2     -2     -3
## 28  -0.33  -0.52  -1.41     -1     -2     -3
## 17  -1.96  -0.88   0.16     -3     -2      1
## 24  -0.64  -1.00  -0.19     -1     -2     -1
## 36  -1.39  -1.01   0.00     -3     -3      0
## 2   -2.11  -1.28  -1.01     -4     -3     -2
## 6   -2.01  -1.91  -1.96     -4     -3     -4
## 37  -0.64  -1.91  -0.45     -1     -4     -1
## 26  -1.36  -2.29  -1.69     -3     -4     -4
scores[order(scores$zsc_f3, decreasing = T), ]
##    zsc_f1 zsc_f2 zsc_f3 fsc_f1 fsc_f2 fsc_f3
## 23   0.11   0.91   2.21      0      2      4
## 22   0.18   2.17   1.73      0      4      4
## 19   1.47  -0.44   1.46      4     -1      3
## 20  -0.07   0.06   1.33     -1      0      3
## 27   1.06   0.82   1.22      2      2      3
## 25   0.83  -0.31   1.18      1     -1      2
## 4   -0.57   0.57   1.13     -1      1      2
## 33   0.06  -0.08   0.78      0      0      2
## 21   1.08   0.78   0.71      2      1      2
## 5    1.08   0.41   0.65      2      1      1
## 18   0.90  -0.18   0.62      2      0      1
## 30   1.15   0.45   0.55      3      1      1
## 35   0.10  -0.38   0.53      0     -1      1
## 7    0.49   1.79   0.16      1      4      1
## 17  -1.96  -0.88   0.16     -3     -2      1
## 15   0.75  -0.26   0.07      1      0      0
## 8    0.49   1.66   0.03      1      3      0
## 12   1.19   1.10   0.03      3      3      0
## 13   1.15   1.07   0.03      3      2      0
## 36  -1.39  -1.01   0.00     -3     -3      0
## 9    0.46   0.24  -0.05      1      1      0
## 14   0.28   1.21  -0.06      0      3      0
## 32   1.62  -0.39  -0.18      4     -1      0
## 24  -0.64  -1.00  -0.19     -1     -2     -1
## 34  -1.29  -0.24  -0.30     -2      0     -1
## 31   0.22   0.83  -0.36      0      2     -1
## 10  -0.69   0.21  -0.44     -2      0     -1
## 37  -0.64  -1.91  -0.45     -1     -4     -1
## 3   -0.03  -0.39  -0.53      0     -1     -1
## 11   0.43   0.16  -0.63      1      0     -2
## 1   -0.19  -0.30  -0.85     -1      0     -2
## 2   -2.11  -1.28  -1.01     -4     -3     -2
## 29  -0.94  -0.50  -1.36     -2     -1     -2
## 28  -0.33  -0.52  -1.41     -1     -2     -3
## 16  -1.04  -0.52  -1.52     -2     -2     -3
## 38   0.16   0.34  -1.59      0      1     -3
## 26  -1.36  -2.29  -1.69     -3     -4     -4
## 6   -2.01  -1.91  -1.96     -4     -3     -4

Apart from the graphical representation, we can also directly assess the differences of the z-scores for each particular statement. The next line of code tells you whether a statement:

  1. represents consensus,
  2. distinguishes one factor only, or
  3. distinguishes all factors.

In addition, the differences themselves will be shown.

results$qdc
##            dist.and.cons       f1_f2 sig_f1_f2       f1_f3 sig_f1_f3
## 1              Consensus  0.10784304            0.66164758          
## 2  Distinguishes f1 only -0.83081474        ** -1.10205533        **
## 3              Consensus  0.36083234            0.49991104          
## 4  Distinguishes f1 only -1.14093538       *** -1.69268208        6*
## 5                         0.67104080         *  0.43697031          
## 6              Consensus -0.10070172           -0.04987698          
## 7  Distinguishes f2 only -1.29614255       ***  0.32897738          
## 8  Distinguishes f2 only -1.17681947       ***  0.45851601          
## 9              Consensus  0.21940411            0.51075578          
## 10                       -0.89864093        ** -0.24686194          
## 11 Distinguishes f3 only  0.27084992            1.05552923        **
## 12 Distinguishes f3 only  0.09636846            1.16377025       ***
## 13 Distinguishes f3 only  0.08347787            1.12123719        **
## 14 Distinguishes f2 only -0.93468218        **  0.34361645          
## 15 Distinguishes f1 only  1.00916201       ***  0.68633150         *
## 16                       -0.51976394            0.47955612          
## 17     Distinguishes all -1.08757443       *** -2.12491868        6*
## 18 Distinguishes f2 only  1.08121728       ***  0.28592347          
## 19 Distinguishes f2 only  1.91038521        6*  0.00472624          
## 20 Distinguishes f3 only -0.13499389           -1.40372169       ***
## 21             Consensus  0.30333358            0.36716598          
## 22 Distinguishes f1 only -1.99319528        6* -1.55744783       ***
## 23     Distinguishes all -0.80168441        ** -2.10366416        6*
## 24                        0.36190446           -0.45147699          
## 25 Distinguishes f2 only  1.13258009       *** -0.35565033          
## 26                        0.92613613        **  0.32257049          
## 27             Consensus  0.23881027           -0.16124796          
## 28 Distinguishes f3 only  0.19114359            1.07890141        **
## 29                       -0.43469470            0.42018874          
## 30                        0.69245329         *  0.59424046          
## 31 Distinguishes f2 only -0.60926863         *  0.57979110          
## 32 Distinguishes f1 only  2.01415778        6*  1.80245705        6*
## 33 Distinguishes f3 only  0.14390850           -0.71537551         *
## 34 Distinguishes f1 only -1.04226626       *** -0.98642002        **
## 35                        0.48539097           -0.42076764          
## 36 Distinguishes f3 only -0.38737822           -1.39342617       ***
## 37 Distinguishes f2 only  1.27551606       *** -0.18891530          
## 38 Distinguishes f3 only -0.18635903            1.75172485        6*
##          f2_f3 sig_f2_f3
## 1   0.55380454          
## 2  -0.27124059          
## 3   0.13907870          
## 4  -0.55174670          
## 5  -0.23407050          
## 6   0.05082474          
## 7   1.62511993       ***
## 8   1.63533548       ***
## 9   0.29135167          
## 10  0.65177899          
## 11  0.78467931         *
## 12  1.06740179        **
## 13  1.03775932        **
## 14  1.27829863       ***
## 15 -0.32283051          
## 16  0.99932006        **
## 17 -1.03734425        **
## 18 -0.79529381         *
## 19 -1.90565897        6*
## 20 -1.26872780       ***
## 21  0.06383239          
## 22  0.43574745          
## 23 -1.30197975       ***
## 24 -0.81338145         *
## 25 -1.48823042       ***
## 26 -0.60356563          
## 27 -0.40005822          
## 28  0.88775782         *
## 29  0.85488344         *
## 30 -0.09821283          
## 31  1.18905973       ***
## 32 -0.21170073          
## 33 -0.85928402         *
## 34  0.05584624          
## 35 -0.90615861         *
## 36 -1.00604796        **
## 37 -1.46443135       ***
## 38  1.93808388        6*

Note that there are a few instances where the categorisation is blank. These represent some sort of “middle ground”: the differences in z-scores are too large to speak of consensus, yet not large enough to speak of distinguishing statements.

To make things a little easier, we can order the statements based on their categorisation.

Clearly, these categorisations are key for building the respective narratives. Personally, I keep a separate Excel spreadsheet in which I mark these features. In the end, it is the idealised Q-sorts in combination with the features below that really help to draw a picture of the different narratives.

results$qdc[which(results$qdc$dist.and.cons == "Consensus"), ]
##    dist.and.cons      f1_f2 sig_f1_f2       f1_f3 sig_f1_f3       f2_f3
## 1      Consensus  0.1078430            0.66164758            0.55380454
## 3      Consensus  0.3608323            0.49991104            0.13907870
## 6      Consensus -0.1007017           -0.04987698            0.05082474
## 9      Consensus  0.2194041            0.51075578            0.29135167
## 21     Consensus  0.3033336            0.36716598            0.06383239
## 27     Consensus  0.2388103           -0.16124796           -0.40005822
##    sig_f2_f3
## 1           
## 3           
## 6           
## 9           
## 21          
## 27
results$qdc[which(results$qdc$dist.and.cons == "Distinguishes all"), ]
##        dist.and.cons      f1_f2 sig_f1_f2     f1_f3 sig_f1_f3     f2_f3
## 17 Distinguishes all -1.0875744       *** -2.124919        6* -1.037344
## 23 Distinguishes all -0.8016844        ** -2.103664        6* -1.301980
##    sig_f2_f3
## 17        **
## 23       ***
results$qdc[which(results$qdc$dist.and.cons == "Distinguishes f1 only"), ]
##            dist.and.cons      f1_f2 sig_f1_f2      f1_f3 sig_f1_f3       f2_f3
## 2  Distinguishes f1 only -0.8308147        ** -1.1020553        ** -0.27124059
## 4  Distinguishes f1 only -1.1409354       *** -1.6926821        6* -0.55174670
## 15 Distinguishes f1 only  1.0091620       ***  0.6863315         * -0.32283051
## 22 Distinguishes f1 only -1.9931953        6* -1.5574478       ***  0.43574745
## 32 Distinguishes f1 only  2.0141578        6*  1.8024570        6* -0.21170073
## 34 Distinguishes f1 only -1.0422663       *** -0.9864200        **  0.05584624
##    sig_f2_f3
## 2           
## 4           
## 15          
## 22          
## 32          
## 34
results$qdc[which(results$qdc$dist.and.cons == "Distinguishes f2 only"), ]
##            dist.and.cons      f1_f2 sig_f1_f2       f1_f3 sig_f1_f3      f2_f3
## 7  Distinguishes f2 only -1.2961426       ***  0.32897738            1.6251199
## 8  Distinguishes f2 only -1.1768195       ***  0.45851601            1.6353355
## 14 Distinguishes f2 only -0.9346822        **  0.34361645            1.2782986
## 18 Distinguishes f2 only  1.0812173       ***  0.28592347           -0.7952938
## 19 Distinguishes f2 only  1.9103852        6*  0.00472624           -1.9056590
## 25 Distinguishes f2 only  1.1325801       *** -0.35565033           -1.4882304
## 31 Distinguishes f2 only -0.6092686         *  0.57979110            1.1890597
## 37 Distinguishes f2 only  1.2755161       *** -0.18891530           -1.4644314
##    sig_f2_f3
## 7        ***
## 8        ***
## 14       ***
## 18         *
## 19        6*
## 25       ***
## 31       ***
## 37       ***
results$qdc[which(results$qdc$dist.and.cons == "Distinguishes f3 only"), ]
##            dist.and.cons       f1_f2 sig_f1_f2      f1_f3 sig_f1_f3      f2_f3
## 11 Distinguishes f3 only  0.27084992            1.0555292        **  0.7846793
## 12 Distinguishes f3 only  0.09636846            1.1637702       ***  1.0674018
## 13 Distinguishes f3 only  0.08347787            1.1212372        **  1.0377593
## 20 Distinguishes f3 only -0.13499389           -1.4037217       *** -1.2687278
## 28 Distinguishes f3 only  0.19114359            1.0789014        **  0.8877578
## 33 Distinguishes f3 only  0.14390850           -0.7153755         * -0.8592840
## 36 Distinguishes f3 only -0.38737822           -1.3934262       *** -1.0060480
## 38 Distinguishes f3 only -0.18635903            1.7517249        6*  1.9380839
##    sig_f2_f3
## 11         *
## 12        **
## 13        **
## 20       ***
## 28         *
## 33         *
## 36        **
## 38        6*

Another way of highlighting factor differences is to compute the pairwise z-score differences between two factors. This operation has already been done and the differences are stored in the background; we just need to call them with the right command. Here we are interested in the differences between Factor 1 and Factor 2 and order the z-score differences accordingly.

results$qdc[order(results$qdc$f1_f2, decreasing = T), ]
##            dist.and.cons       f1_f2 sig_f1_f2       f1_f3 sig_f1_f3
## 32 Distinguishes f1 only  2.01415778        6*  1.80245705        6*
## 19 Distinguishes f2 only  1.91038521        6*  0.00472624          
## 37 Distinguishes f2 only  1.27551606       *** -0.18891530          
## 25 Distinguishes f2 only  1.13258009       *** -0.35565033          
## 18 Distinguishes f2 only  1.08121728       ***  0.28592347          
## 15 Distinguishes f1 only  1.00916201       ***  0.68633150         *
## 26                        0.92613613        **  0.32257049          
## 30                        0.69245329         *  0.59424046          
## 5                         0.67104080         *  0.43697031          
## 35                        0.48539097           -0.42076764          
## 24                        0.36190446           -0.45147699          
## 3              Consensus  0.36083234            0.49991104          
## 21             Consensus  0.30333358            0.36716598          
## 11 Distinguishes f3 only  0.27084992            1.05552923        **
## 27             Consensus  0.23881027           -0.16124796          
## 9              Consensus  0.21940411            0.51075578          
## 28 Distinguishes f3 only  0.19114359            1.07890141        **
## 33 Distinguishes f3 only  0.14390850           -0.71537551         *
## 1              Consensus  0.10784304            0.66164758          
## 12 Distinguishes f3 only  0.09636846            1.16377025       ***
## 13 Distinguishes f3 only  0.08347787            1.12123719        **
## 6              Consensus -0.10070172           -0.04987698          
## 20 Distinguishes f3 only -0.13499389           -1.40372169       ***
## 38 Distinguishes f3 only -0.18635903            1.75172485        6*
## 36 Distinguishes f3 only -0.38737822           -1.39342617       ***
## 29                       -0.43469470            0.42018874          
## 16                       -0.51976394            0.47955612          
## 31 Distinguishes f2 only -0.60926863         *  0.57979110          
## 23     Distinguishes all -0.80168441        ** -2.10366416        6*
## 2  Distinguishes f1 only -0.83081474        ** -1.10205533        **
## 10                       -0.89864093        ** -0.24686194          
## 14 Distinguishes f2 only -0.93468218        **  0.34361645          
## 34 Distinguishes f1 only -1.04226626       *** -0.98642002        **
## 17     Distinguishes all -1.08757443       *** -2.12491868        6*
## 4  Distinguishes f1 only -1.14093538       *** -1.69268208        6*
## 8  Distinguishes f2 only -1.17681947       ***  0.45851601          
## 7  Distinguishes f2 only -1.29614255       ***  0.32897738          
## 22 Distinguishes f1 only -1.99319528        6* -1.55744783       ***
##          f2_f3 sig_f2_f3
## 32 -0.21170073          
## 19 -1.90565897        6*
## 37 -1.46443135       ***
## 25 -1.48823042       ***
## 18 -0.79529381         *
## 15 -0.32283051          
## 26 -0.60356563          
## 30 -0.09821283          
## 5  -0.23407050          
## 35 -0.90615861         *
## 24 -0.81338145         *
## 3   0.13907870          
## 21  0.06383239          
## 11  0.78467931         *
## 27 -0.40005822          
## 9   0.29135167          
## 28  0.88775782         *
## 33 -0.85928402         *
## 1   0.55380454          
## 12  1.06740179        **
## 13  1.03775932        **
## 6   0.05082474          
## 20 -1.26872780       ***
## 38  1.93808388        6*
## 36 -1.00604796        **
## 29  0.85488344         *
## 16  0.99932006        **
## 31  1.18905973       ***
## 23 -1.30197975       ***
## 2  -0.27124059          
## 10  0.65177899          
## 14  1.27829863       ***
## 34  0.05584624          
## 17 -1.03734425        **
## 4  -0.55174670          
## 8   1.63533548       ***
## 7   1.62511993       ***
## 22  0.43574745

What we see here is that statements at the top and at the bottom appear to be evaluated significantly differently by the two factors. For each statement, R takes the z-score of factor 1 and subtracts the z-score of factor 2, and the table is sorted by this difference in decreasing order. Towards the bottom, the z-scores of factor 1 are small and those of factor 2 large, hence the negative differences there.

Apart from the z-score differences, we also see whether these differences are significant. By changing f1_f2 to f1_f3 we order by the z-score differences of factors 1 and 3 instead. This will make life a lot easier once we deal with many factors and try to highlight the differences among them.
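
For example, for factors 1 and 3:

results$qdc[order(results$qdc$f1_f3, decreasing = T), ]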

6. Saving results

save(results, file = "practiseresults.Rdata")

write.csv(results$zsc, file = "zscores.csv") 

write.csv(results$zsc_n, file = "factorscores.csv") 

write.csv(results$loa, file = "loadings.csv")

These files contain everything we just did. This is pretty useful in case you want to look at your results without having to run R again.
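
When you come back to the analysis later, a single call restores the saved results object:

load("practiseresults.Rdata")  # restores 'results' into the workspace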

export.qm(results, file = "myreport.txt", style = "R")

export.qm(results, file = "myreport-pqm.txt", style = "PQMethod")

References

Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science. Yale University Press.

Watts, S., & Stenner, P. (2012). Doing Q methodological research: Theory, method & interpretation. Sage.

Zabala, A. (2014). qmethod: A package to explore human perspectives using Q methodology. The R Journal, 6(2), 163-173.