The code and the report can be found in my GitHub repo.

MLE (Probit)

  1. For the first question, I programmed a routine to estimate a probit model by MLE, using the CDF_Normal, PDF_Normal, and BFGS routines from the Probability and Minimization modules. Whenever the routine evaluated \(\Phi(x'\beta)\), I used \(\mu=0\) and \(\sigma=1\), because \(U_i \sim N(0,1)\). The coefficients are displayed in row 1 of the results table, and a minimal R sketch of this step follows the R output below. For comparison, the results of estimating the probit in R are as follows:
probit <- glm(V4 ~ V2 + V3, family = binomial(link = probit), data = data)
summary(probit)
## 
## Call:
## glm(formula = V4 ~ V2 + V3, family = binomial(link = probit), 
##     data = data)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.5734  -0.6464  -0.2295   0.6487   2.9027  
## 
## Coefficients:
##             Estimate Std. Error  z value Pr(>|z|)    
## (Intercept)  6.38161    0.05034  126.764  < 2e-16 ***
## V2           0.12014    0.02376    5.055  4.3e-07 ***
## V3          -3.90106    0.02919 -133.665  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 68828  on 49999  degrees of freedom
## Residual deviance: 43351  on 49997  degrees of freedom
## AIC: 43357
## 
## Number of Fisher Scoring iterations: 5
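For concreteness, here is a minimal R sketch of the estimation step; the actual implementation is the Fortran routine described above, so the names here (`probit_loglik`, `beta_hat`) are illustrative. It assumes a data frame `data` with regressors V2, V3 and binary outcome V4, matching the glm call above.

```r
# Minimal sketch of the MLE step: maximize the probit log-likelihood with
# BFGS, evaluating Phi as the standard normal CDF (mu = 0, sigma = 1).
probit_loglik <- function(beta, X, y) {
  xb <- X %*% beta
  # log Phi(x'b) and log(1 - Phi(x'b)) = log Phi(-x'b), on the log scale
  sum(y * pnorm(xb, log.p = TRUE) + (1 - y) * pnorm(-xb, log.p = TRUE))
}

X <- cbind(1, data$V2, data$V3)   # intercept, V2, V3 (as in the glm call)
y <- data$V4
fit <- optim(rep(0, ncol(X)), probit_loglik, X = X, y = y,
             method = "BFGS", control = list(fnscale = -1))  # maximize
beta_hat <- fit$par               # (alpha, lambda, gamma)
```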
  2. For the second question, according to Nail’s notes (take it with caution), \(n^{1/2}(\hat{\theta}_{MLE}-\theta_0)\sim N(0,B^{-1})\), where in the case of the probit \[ B=E\left[xx'\frac{\phi(-x'\beta)^2}{\Phi(-x'\beta)\Phi(x'\beta)} \right] \] To invert \(B\) and obtain the variance-covariance matrix, I used the Cholesky factorization via the Intel LAPACK routines POTRF and POTRI. I first tried the Matrix_Inverse and Matrix_Inverse_symmetric routines provided in the Matrix module, but I was getting negative variances for some estimates. Using the Cholesky factorization, I could exactly replicate the values produced by R's built-in routine. The standard errors are shown in row 2 of the table; an R sketch of this computation follows this item.
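A sketch of the standard-error step, assuming `beta_hat`, `X`, and `y` from the sketch above; in R, `chol()` followed by `chol2inv()` plays the role of the LAPACK POTRF and POTRI calls.

```r
# Estimate B = E[xx' phi(x'b)^2 / (Phi(x'b) Phi(-x'b))] by its sample
# analogue, then invert it via the Cholesky factorization.
xb <- as.vector(X %*% beta_hat)
w  <- dnorm(xb)^2 / (pnorm(xb) * pnorm(-xb))  # phi is symmetric: phi(-z) = phi(z)
n  <- nrow(X)
B_hat <- crossprod(X * w, X) / n              # (1/n) sum_i w_i x_i x_i'
V     <- chol2inv(chol(B_hat)) / n            # Var(theta_hat) ~ B^{-1} / n
se    <- sqrt(diag(V))                        # standard errors (row 2)
```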

  3. Lastly, I bootstrapped the estimates 100 times, sampling both from the uniform distribution and from Halton sequences, using Sample_Uniform and Halton from the Random module, respectively. I report the bias-corrected bootstrap estimators in the table, namely \(\tilde{\theta}=2\hat{\theta}-\bar{\hat{\theta}^\star}\). Rows 3 to 6 display the results; a sketch of the bootstrap loop follows below. The three sets of estimates are very close and, given the standard errors, not significantly different from one another.
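A sketch of the bootstrap step under the same assumptions; `runif` stands in for Sample_Uniform, and the Halton variant would replace the uniform draws with Halton-sequence points. It reuses `probit_loglik`, `X`, `y`, `beta_hat`, and `n` from the sketches above.

```r
# Resample observations by drawing indices from uniform draws, re-estimate,
# and apply the bias correction theta_tilde = 2*theta_hat - mean(theta_star).
nboot <- 100                                  # bootstrap replications
boot  <- matrix(NA_real_, nboot, length(beta_hat))
for (b in 1:nboot) {
  idx <- ceiling(runif(n) * n)                # uniform draws mapped to 1..n
  fit_b <- optim(beta_hat, probit_loglik, X = X[idx, ], y = y[idx],
                 method = "BFGS", control = list(fnscale = -1))
  boot[b, ] <- fit_b$par
}
theta_tilde <- 2 * beta_hat - colMeans(boot)  # bias-corrected estimates
se_boot     <- apply(boot, 2, sd)             # bootstrap standard errors
```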

MLE Probit

|            | Alpha      | Lambda     | Gamma      |
|------------|------------|------------|------------|
| Coef       | 6.3816177  | 0.1201244  | -3.9010558 |
| S.E.       | 0.05034238 | 0.02376460 | 0.02918537 |
| Coef (BS)  | 6.322229   | 0.125762   | -3.866202  |
| S.E. (BS)  | 0.05003862 | 0.02369987 | 0.02897557 |
| Coef (BSH) | 6.3998747  | 0.1215389  | -3.9130211 |
| S.E. (BSH) | 0.05052107 | 0.02376524 | 0.02927741 |

BS = bootstrap with uniform draws; BSH = bootstrap with Halton draws.