Aims of this notebook

Requirements

We assume that you have:

If you struggle with basics of R, you may also find this online book useful: https://bookdown.org/ndphillips/YaRrr/


Multiple regression

Load the fraud_data dataset using the load command. The dataset is located in the folder ./data (assuming you are in the homework folder) and contains 1000 cases of employee fraud in financial trading companies.

The columns of this dataset are:

Task: multiple regression recap

Build a regression model that models the damage caused through the employees’ gender and promotion status.

#your code here
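A possible sketch, not the official solution: the real column names of fraud_data are not listed above, so `damage`, `gender` and `promoted` below are assumed placeholders, and the data is simulated so the example runs on its own. Swap in the actual dataset and columns.

```r
# Sketch only: "damage", "gender" and "promoted" are assumed column names,
# simulated here so the example runs without the fraud_data file.
set.seed(42)
n <- 1000
fraud_sim <- data.frame(
  damage   = rnorm(n, mean = 50000, sd = 10000),
  gender   = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted = factor(sample(c("no", "yes"), n, replace = TRUE))
)
m1 <- lm(damage ~ gender + promoted, data = fraud_sim)
summary(m1)
```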

What is the mean absolute error (MAE) of your model?

#your code here
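One way to get the MAE is to average the absolute residuals. A self-contained sketch on simulated stand-in data (the column names are assumptions):

```r
# MAE = mean absolute difference between observed and fitted values.
# Simulated stand-in data; "damage", "gender", "promoted" are assumed names.
set.seed(42)
n <- 200
d <- data.frame(
  damage   = rnorm(n),
  gender   = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted = factor(sample(c("no", "yes"), n, replace = TRUE))
)
m <- lm(damage ~ gender + promoted, data = d)
mae <- mean(abs(d$damage - fitted(m)))  # equivalently: mean(abs(residuals(m)))
mae
```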

Now add the additional predictor variable years_experience. How does this affect your model’s MAE? Did you expect this?

#your code here
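Adding a predictor can never worsen the in-sample sum of squared errors, and in practice the MAE usually drops as well; whether that drop is meaningful is the question to reflect on. A sketch with an assumed years_experience column, again on simulated stand-in data:

```r
# Sketch with assumed column names; compare MAE with and without the predictor.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m2 <- lm(damage ~ gender + promoted, data = d)
m3 <- lm(damage ~ gender + promoted + years_experience, data = d)
c(mae_without = mean(abs(residuals(m2))),
  mae_with    = mean(abs(residuals(m3))))
```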

Finally, build the full model (i.e. include all main effects and interaction effects). Plot the residuals of that model against the observed values.

#your code here
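In R’s formula syntax, `a * b * c` expands to all main effects plus all two- and three-way interactions. A sketch (simulated data, assumed column names) that fits the full model and plots residuals against the observed outcome:

```r
# Full model sketch: "*" in the formula includes main effects and interactions.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)
# residuals against the observed outcome values
plot(d$damage, residuals(m_full), xlab = "observed damage", ylab = "residual")
abline(h = 0, lty = 2)
```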

Task: multiple regression model selection

You can decide empirically which combination of predictors results in the best model fit.

Start by building both a “null” model (i.e. only the intercept) and a “full” model (i.e. all main effects and interaction effects).

#your code here
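A sketch of the two endpoint models on simulated stand-in data (column names are assumptions): the null model contains only an intercept, the full model everything.

```r
# Null vs. full model sketch; "damage" etc. are assumed column names.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_null <- lm(damage ~ 1, data = d)                                     # intercept only
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)  # everything
```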

Now start from the full model and use backward stepwise regression. Which model does this procedure result in?

#your code here
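One way to do this is base R’s step function, which by default drops terms as long as the AIC improves. A sketch on simulated stand-in data (column names are assumptions):

```r
# Backward stepwise sketch: start from the full model and let step() prune it.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)
m_back <- step(m_full, direction = "backward", trace = 0)  # trace = 0 silences output
formula(m_back)
```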

Do the same with forward stepwise regression. Does this result in the same model?

#your code here
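Forward selection starts from the null model, so step needs a scope telling it how far it may grow. A sketch (simulated data, assumed column names):

```r
# Forward stepwise sketch: grow from the intercept-only model.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_null <- lm(damage ~ 1, data = d)
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)
m_fwd  <- step(m_null, scope = formula(m_full), direction = "forward", trace = 0)
formula(m_fwd)
```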

Finally, try bidirectional stepwise regression (hint: use “both” for the direction argument).

#your code here
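A sketch of the bidirectional variant, which may both add and drop terms at each step (again simulated stand-in data with assumed column names):

```r
# Bidirectional stepwise sketch: direction = "both" allows adds and drops.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_null <- lm(damage ~ 1, data = d)
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)
m_both <- step(m_null, scope = formula(m_full), direction = "both", trace = 0)
formula(m_both)
```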

What do these findings tell you?

Task: multiple regression model comparison

You will often end up with different models of the same outcome variable. If these models are nested (i.e. one model can be derived from the other by removing model parameters), then you can use inferential statistics to determine whether one model is significantly worse than another.

In R, you can use the anova function to conduct an analysis of variance on two models to determine whether a simpler model is significantly worse than a more complex model.

Perform the model comparison test between (1) the full model vs the null model, (2) a ‘gender-only’ model and a ‘gender + promotion’ model, and (3) between the full model and the ‘gender + promotion’ model.

#your code here
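The three comparisons could look like the sketch below; each anova call tests whether the extra parameters of the larger model buy a significant reduction in residual variance (simulated data, assumed column names):

```r
# Nested model comparisons with anova(); assumed column names, simulated data.
set.seed(42)
n <- 200
d <- data.frame(
  damage           = rnorm(n),
  gender           = factor(sample(c("female", "male"), n, replace = TRUE)),
  promoted         = factor(sample(c("no", "yes"), n, replace = TRUE)),
  years_experience = runif(n, 0, 30)
)
m_null <- lm(damage ~ 1, data = d)
m_g    <- lm(damage ~ gender, data = d)
m_gp   <- lm(damage ~ gender + promoted, data = d)
m_full <- lm(damage ~ gender * promoted * years_experience, data = d)
anova(m_null, m_full)  # (1) full vs null
anova(m_g, m_gp)       # (2) gender-only vs gender + promotion
anova(m_gp, m_full)    # (3) gender + promotion vs full
```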

Rank the models from best to worst (use equal ranks if there is no significant difference).

Ranks come here:

  1. … …

Logistic regression

Often you want to build a model, either to understand relationships in the data, to make predictions, or both, for an outcome variable that is scored as 0/1, present/not present, arrested/not arrested, etc. If such an outcome variable has only two levels, we also speak of a binary or dichotomous outcome.

Regression models can be applied in this context too. To see what makes binary outcome variables special, let’s have a look at a dataset.

Load the attack_data dataset from ./data. We use this dataset to revise concepts from the lecture. This dataset represents whether or not a website was hacked and the number of attempted hacking attacks. The columns are:

Task: logistic regression - fitting the GLM

Build a logistic regression model that models whether or not a website was hacked through the number of attacks:

#your code here
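A sketch using glm with the binomial family, which fits a logistic regression. The real column names of attack_data are not listed above, so `hacked` (0/1) and `attacks` are assumed names on simulated stand-in data:

```r
# Logistic regression sketch: "hacked" and "attacks" are assumed column names.
set.seed(7)
n <- 400
attack_sim <- data.frame(attacks = rpois(n, 5))
attack_sim$hacked <- rbinom(n, 1, plogis(-3 + 0.6 * attack_sim$attacks))
m_log <- glm(hacked ~ attacks, family = binomial, data = attack_sim)
summary(m_log)
```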

Task: logistic regression - interpreting the model

Take a look at the model summary. Remember what the logit model does? Because we model the “log-odds”, the coefficients (what we call the intercept and slope in linear regression) must be interpreted on that log-odds scale.

But because the log-odds are hard to interpret, we want to transform them back to the more interpretable odds.

From the video above, you will have learned that you can reverse the natural logarithm by taking e to the power of the logarithm.

Do this transformation:

#your code here
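Since the coefficients are on the log (natural logarithm) scale, exponentiating them recovers odds. A self-contained sketch (simulated stand-in data, assumed column names):

```r
# Back-transform log-odds coefficients to the odds scale with exp().
set.seed(7)
n <- 400
attack_sim <- data.frame(attacks = rpois(n, 5))
attack_sim$hacked <- rbinom(n, 1, plogis(-3 + 0.6 * attack_sim$attacks))
m_log <- glm(hacked ~ attacks, family = binomial, data = attack_sim)
coef(m_log)       # log-odds scale
exp(coef(m_log))  # odds scale: intercept = baseline odds, slope = odds ratio
```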

What does this yield (i.e. how do you interpret these values)?

What would the interpretation of these findings look like in your own words?

Task: logistic regression - curve fitting

Similar to the “line-fitting” of linear regression, we can also look at the fitted model visually.
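One way to visualise the fit is to plot the 0/1 outcomes and overlay the fitted S-shaped probability curve via predict with type = "response". A sketch on simulated stand-in data (assumed column names):

```r
# Plot the binary outcomes and overlay the fitted logistic curve.
set.seed(7)
n <- 400
attack_sim <- data.frame(attacks = rpois(n, 5))
attack_sim$hacked <- rbinom(n, 1, plogis(-3 + 0.6 * attack_sim$attacks))
m_log <- glm(hacked ~ attacks, family = binomial, data = attack_sim)
plot(attack_sim$attacks, attack_sim$hacked,
     pch = 16, col = rgb(0, 0, 0, 0.2),
     xlab = "number of attacks", ylab = "hacked (0/1)")
xs <- seq(min(attack_sim$attacks), max(attack_sim$attacks), length.out = 100)
lines(xs, predict(m_log, newdata = data.frame(attacks = xs), type = "response"))
```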