We assume that you have:
If you struggle with the basics of R, you may also find this online book useful: https://bookdown.org/ndphillips/YaRrr/
Load the fraud_data dataset using the load command. The dataset is located in the folder ./data (assuming you are in the homework folder) and contains 1000 cases of employee fraud in financial trading companies.
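A minimal sketch, assuming the file is called fraud_data.RData (analogous to the attack_data.RData file used later in this homework):

load('./data/fraud_data.RData')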
The columns of this dataset are:
Build a regression model that models the damage caused using the employees’ gender and promotion status.
#your code here
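A possible sketch; the column names damage, gender and promotion are assumptions here, so adjust them to the actual column names listed above:

# damage modelled by gender and promotion status (assumed column names)
model_gp <- lm(damage ~ gender + promotion, data = fraud_data)
summary(model_gp)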
What is the mean absolute error (MAE) of your model?
#your code here
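One way to compute the MAE, using the hypothetical model_gp from the sketch above:

# mean absolute error: average absolute deviation of fitted from observed values
mean(abs(residuals(model_gp)))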
Now add the additional predictor variable years_experience. How does this affect your model’s MAE? Did you expect this?
#your code here
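A sketch, again with the assumed column names (years_experience is given in the task):

model_gpy <- lm(damage ~ gender + promotion + years_experience, data = fraud_data)
mean(abs(residuals(model_gpy)))   # compare this to the MAE of the simpler model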
Finally, build the full model (i.e. include all main effects and interaction effects). Plot the residuals of that model against the observed values.
#your code here
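A sketch of the full model; the * operator expands to all main effects plus all interaction effects (column names assumed as above):

full_model <- lm(damage ~ gender * promotion * years_experience, data = fraud_data)
# residuals of the full model plotted against the observed outcome values
plot(fraud_data$damage, residuals(full_model),
     xlab = "observed damage", ylab = "residuals")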
You can decide empirically which combination of predictors results in the best model fit.
Start by building both a “null” model (i.e. only the intercept) and a “full” model (also called the saturated model).
#your code here
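A sketch (the full model is the same one as above; column names assumed):

null_model <- lm(damage ~ 1, data = fraud_data)   # intercept-only model
full_model <- lm(damage ~ gender * promotion * years_experience, data = fraud_data)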
Now start from the full model and use backwards stepwise regression. Which model does this procedure result in?
#your code here
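A sketch using the step function from base R, starting at the hypothetical full_model:

backward_model <- step(full_model, direction = "backward")
formula(backward_model)   # which model did the procedure keep?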
Do the same with forward stepwise regression. Does this result in the same model?
#your code here
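A sketch: forward selection starts from the null model and needs a scope argument that tells step how far the model may grow:

forward_model <- step(null_model,
                      scope = formula(full_model),
                      direction = "forward")
formula(forward_model)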
Finally, try bidirectional stepwise regression (hint: use “both” for the direction argument).
#your code here
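A sketch:

both_model <- step(null_model,
                   scope = formula(full_model),
                   direction = "both")
formula(both_model)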
What do these findings tell you?
You will often end up with different models of the same outcome variable. If these models are nested (i.e. one model can be derived from the other by removing model parameters), then you can use inferential statistics to determine whether one model is significantly worse than another.
In R, you can use the anova function to conduct an analysis of variance on two models to determine whether a simpler model is significantly worse than a more complex model.
Perform the model comparison test between (1) the full model vs the null model, (2) a ‘gender-only’ model and a ‘gender + promotion’ model, and (3) between the full model and the ‘gender + promotion’ model.
#your code here
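A sketch using the hypothetical models from above (a ‘gender-only’ model still needs to be built; column names assumed):

gender_model <- lm(damage ~ gender, data = fraud_data)
anova(null_model, full_model)     # (1) full vs null
anova(gender_model, model_gp)     # (2) gender-only vs gender + promotion
anova(model_gp, full_model)       # (3) gender + promotion vs full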
Rank the models from best to worst (use equal ranks if there is no significant difference).
Enter your ranks here:
Often you want to build a model of an outcome variable that is scored as 0/1, present/not present, arrested/not arrested, etc., either to understand relationships in the data, to make predictions, or both. If such an outcome variable has only two levels, we also speak of a binary or dichotomous outcome.
Regression models can be applied in this context too. To understand what the special issue with binary outcome variables is, let’s have a look at a dataset.
Load the attack_data dataset from ./data. We use this dataset to revise concepts from the lecture. It records whether or not a website was hacked and the number of attempted attacks. The columns are:
Suppose you want to model the relationship between hacked and attempts. If you look at the plot, you see that these data do not stem from a normal distribution:
load('./data/attack_data.RData')
hist(attack_data$hacked)
The histogram shows the distribution of a variable that can only take the values 0 and 1.
We can also look at the ‘raw’ data to get a better understanding of the relationship between the variables hacked and attempts:
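For example:

head(attack_data)
plot(attack_data$attempts, attack_data$hacked,
     xlab = "attempts", ylab = "hacked")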
So let’s start with what we know from regression modelling and ‘fit’ an ordinary linear model:
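The model below matches the call shown in the summary output:

ordinary_model <- glm(hacked ~ attempts, family = gaussian, data = attack_data)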
summary(ordinary_model)
Call:
glm(formula = hacked ~ attempts, family = gaussian, data = attack_data)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.01593 -0.20825 0.03781 0.21227 1.17248
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.871e-01 1.856e-02 -15.47 <2e-16 ***
attempts 1.349e-03 3.212e-05 41.99 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for gaussian family taken to be 0.08599304)
Null deviance: 237.456 on 999 degrees of freedom
Residual deviance: 85.821 on 998 degrees of freedom
AIC: 388.39
Number of Fisher Scoring iterations: 2
Note that a GLM with family “gaussian” is identical to a normal linear model. This is because the ordinary linear model assumes that the outcome variable is normally distributed (i.e. follows a Gaussian distribution).
To see what might be problematic, have a look at the predicted values:
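For instance, one way to inspect them:

range(predict(ordinary_model))   # smallest and largest predicted values
head(predict(ordinary_model))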
You will notice that the values predicted by the model (1) are not only 1s and 0s and (2) include values that exceed 1 and values that fall below 0. Clearly, for a dataset where the outcome variable can only take the values 0 and 1, this is an inadequate way to model the data.
You can also look at the actual regression line fitted to the raw data:
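A sketch of such a plot:

plot(attack_data$attempts, attack_data$hacked,
     xlab = "attempts", ylab = "hacked")
abline(ordinary_model, col = "red")   # the fitted straight line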
So we need a solution to that issue.
Luckily, there is a way to transform the 1/0 outcome variable to a continuous variable so that the model can make predictions on a continuous scale.
A neat way to do this is the logit function, which performs the following steps:
Let’s do this for a sequence of probabilities from 0.01 to 0.99 in steps of 0.01:
probabilities <- seq(0.01, 0.99, by = 0.01)
probabilities
[1] 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 0.14 0.15 0.16
[17] 0.17 0.18 0.19 0.20 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28 0.29 0.30 0.31 0.32
[33] 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48
[49] 0.49 0.50 0.51 0.52 0.53 0.54 0.55 0.56 0.57 0.58 0.59 0.60 0.61 0.62 0.63 0.64
[65] 0.65 0.66 0.67 0.68 0.69 0.70 0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.79 0.80
[81] 0.81 0.82 0.83 0.84 0.85 0.86 0.87 0.88 0.89 0.90 0.91 0.92 0.93 0.94 0.95 0.96
[97] 0.97 0.98 0.99
This brings the transformed outcome variable from a 0,1 scale to a 0:1 scale (note: the : reads as “to”).
Next, we compute the odds, defined as P/(1-P):
odds <- probabilities / (1 - probabilities)
odds
[1] 0.01010101 0.02040816 0.03092784 0.04166667 0.05263158 0.06382979 0.07526882
[8] 0.08695652 0.09890110 0.11111111 0.12359551 0.13636364 0.14942529 0.16279070
[15] 0.17647059 0.19047619 0.20481928 0.21951220 0.23456790 0.25000000 0.26582278
[22] 0.28205128 0.29870130 0.31578947 0.33333333 0.35135135 0.36986301 0.38888889
[29] 0.40845070 0.42857143 0.44927536 0.47058824 0.49253731 0.51515152 0.53846154
[36] 0.56250000 0.58730159 0.61290323 0.63934426 0.66666667 0.69491525 0.72413793
[43] 0.75438596 0.78571429 0.81818182 0.85185185 0.88679245 0.92307692 0.96078431
[50] 1.00000000 1.04081633 1.08333333 1.12765957 1.17391304 1.22222222 1.27272727
[57] 1.32558140 1.38095238 1.43902439 1.50000000 1.56410256 1.63157895 1.70270270
[64] 1.77777778 1.85714286 1.94117647 2.03030303 2.12500000 2.22580645 2.33333333
[71] 2.44827586 2.57142857 2.70370370 2.84615385 3.00000000 3.16666667 3.34782609
[78] 3.54545455 3.76190476 4.00000000 4.26315789 4.55555556 4.88235294 5.25000000
[85] 5.66666667 6.14285714 6.69230769 7.33333333 8.09090909 9.00000000 10.11111111
[92] 11.50000000 13.28571429 15.66666667 19.00000000 24.00000000 32.33333333 49.00000000
[99] 99.00000000
This transforms the outcome variable to a scale ranging from 0:Inf.
Finally, we take the natural logarithm of the odds:
log_odds <- log(odds)
log_odds
[1] -4.59511985 -3.89182030 -3.47609869 -3.17805383 -2.94443898 -2.75153531 -2.58668934
[8] -2.44234704 -2.31363493 -2.19722458 -2.09074110 -1.99243016 -1.90095876 -1.81528997
[15] -1.73460106 -1.65822808 -1.58562726 -1.51634749 -1.45001018 -1.38629436 -1.32492541
[22] -1.26566637 -1.20831121 -1.15267951 -1.09861229 -1.04596856 -0.99462258 -0.94446161
[29] -0.89538405 -0.84729786 -0.80011930 -0.75377180 -0.70818506 -0.66329422 -0.61903921
[36] -0.57536414 -0.53221681 -0.48954823 -0.44731222 -0.40546511 -0.36396538 -0.32277339
[43] -0.28185115 -0.24116206 -0.20067070 -0.16034265 -0.12014431 -0.08004271 -0.04000533
[50] 0.00000000 0.04000533 0.08004271 0.12014431 0.16034265 0.20067070 0.24116206
[57] 0.28185115 0.32277339 0.36396538 0.40546511 0.44731222 0.48954823 0.53221681
[64] 0.57536414 0.61903921 0.66329422 0.70818506 0.75377180 0.80011930 0.84729786
[71] 0.89538405 0.94446161 0.99462258 1.04596856 1.09861229 1.15267951 1.20831121
[78] 1.26566637 1.32492541 1.38629436 1.45001018 1.51634749 1.58562726 1.65822808
[85] 1.73460106 1.81528997 1.90095876 1.99243016 2.09074110 2.19722458 2.31363493
[92] 2.44234704 2.58668934 2.75153531 2.94443898 3.17805383 3.47609869 3.89182030
[99] 4.59511985
You can see that we have now transformed the outcome variable further, to a scale ranging from -Inf:Inf.
It is this logit function that is used to transform the outcome variable from the discrete values 0 and 1 to a continuous range from -Inf:Inf.
You will see this in action further below…
In the glm function, you can specify this by using the family = argument and setting it to “binomial” (since our outcome variable stems from a binomial distribution).
Build a logistic regression model that models whether or not a website was hacked through the number of attacks:
#your code here
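A sketch (logistic_model is a placeholder name):

logistic_model <- glm(hacked ~ attempts, family = binomial, data = attack_data)
summary(logistic_model)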
Take a look at the model summary. Remember what the logit model does? If we model the “log-odds”, then the coefficients (what we call the intercept and slope in linear regression) need to be interpreted as such.
But because the log-odds are hard to interpret, we want to transform them back to the more interpretable odds.
From the video above, you will have learned that you can reverse the natural logarithm by taking e to the power of the logarithm.
Do this transformation:
#your code here
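For example, with the hypothetical logistic_model from above:

exp(coef(logistic_model))   # back-transform the log-odds coefficients to odds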
What does this yield (i.e. how do you interpret these values)?
What would the interpretation of these findings look like in your own words?
Similar to the “line-fitting” of linear regression, we can also look at the fitted model visually.
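One way to produce such a plot, again with the hypothetical logistic_model:

plot(attack_data$attempts, attack_data$hacked,
     xlab = "attempts", ylab = "hacked")
ord <- order(attack_data$attempts)
lines(attack_data$attempts[ord], fitted(logistic_model)[ord], col = "blue")   # fitted curve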
You can see that the model (= the curve) predicts values exclusively in the 0:1 range. However, you can also see that while the majority of the predicted values are close to either 0 or 1, some values are in between (e.g. around attempts == 500). This is the reason why you need thresholds if you want to ascertain the accuracy of such a model. In Year 3, you will learn about applications of this thresholding for logistic regression in machine learning.
You can see the relationship between fitted values (i.e. probabilities of a case being in one of the two outcome classes - hacked vs not hacked - given a certain number of attempts) and the observed values: