---
title: "Lecture 7: Exercises with Answers"
date: October 18th, 2018
output: 
  html_notebook:
    toc: true
    toc_float: true
---

```{r}
library(tidyverse)
```


# Exercise 1: Hypothesis testing

Similar to the `mtcars` dataset, the `mpg` dataset from the `ggplot2` package 
includes data on automobiles. However, `mpg` covers newer cars, from the 
years 1999 and 2008, and the variables measured for each car are slightly 
different. Here we are interested in the variable `hwy`, the highway miles 
per gallon.


```{r}
# Format the trans column down to just the transmission type (auto/manual):
# gsub() strips the parenthesized detail, e.g. "auto(l5)" -> "auto"
mpg
mpg <- mpg %>% 
  mutate(
    transmission = factor(
        gsub("\\((.*)", "", trans), levels = c("auto", "manual"))
  )
mpg
```
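
As a quick sanity check (an extra step, not part of the exercise itself), we can confirm that every original `trans` value was mapped to exactly one of the two levels:

```{r}
# Each trans value should map to auto or manual, with no NA levels
mpg %>% count(trans, transmission)
```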

## Part 1: One-sample test

a. Subset the `mpg` dataset to include only cars from the year 2008.

```{r}
mpg2008 <- mpg %>% 
  filter(year == 2008)
```

b. Test whether cars from 2008 have a mean highway miles per gallon, `hwy`, 
equal to 30 mpg.

```{r}
t.test(mpg2008$hwy, mu = 30, alternative = "two.sided")
```
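
If you want to work with the results programmatically rather than read the printout, the returned `htest` object exposes its pieces directly; a minimal sketch (the name `tt` is just local here):

```{r}
tt <- t.test(mpg2008$hwy, mu = 30, alternative = "two.sided")
tt$p.value   # p-value of the two-sided test
tt$conf.int  # 95% confidence interval for the mean
tt$estimate  # sample mean of hwy
```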

c. Test whether cars from 2008 with 4 cylinders have mean `hwy` equal to 30 mpg.

```{r}
mpg2008_4cyl <- mpg %>% 
  filter(year == 2008, cyl == 4)

t.test(mpg2008_4cyl$hwy, mu = 30, alternative = "two.sided")
```

## Part 2: Two-sample test

a. Test if the mean `hwy` for automatic cars is **less than** that for manual 
cars **in 2008**. Generate a boxplot with jittered points for `hwy` for each 
transmission group.

```{r}
t.test(hwy ~ transmission, data = mpg2008, alternative = "less")
# or
# t.test(x = mpg2008 %>% filter(transmission == "auto") %>% pull(hwy),
#        y = mpg2008 %>% filter(transmission == "manual") %>% pull(hwy), 
#        alternative = "less")
```
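
`t.test()` runs the Welch test by default, which does not assume equal variances in the two groups. If you were willing to assume equal variances, the pooled (classical) two-sample test is one flag away; shown here as a variation, not as part of the answer:

```{r}
# Pooled two-sample t-test; assumes equal variances across groups
t.test(hwy ~ transmission, data = mpg2008,
       alternative = "less", var.equal = TRUE)
```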

```{r}
ggplot(mpg2008, aes(x = transmission, y = hwy)) +
  geom_boxplot() + geom_jitter(height = 0, width = 0.2)
```

b. Test if the mean `hwy` for cars from 1999 is greater than that for
cars from 2008. Generate a boxplot with jittered points
for `hwy` for each year group.

```{r}
t.test(hwy ~ year, data = mpg, alternative = "greater")
```

```{r}
ggplot(mpg, aes(x = factor(year), y = hwy)) +
  geom_boxplot() + geom_jitter(height = 0, width = 0.2)
```



# Exercise 2: Logistic Regression

In this exercise you will use the `Default` dataset, on customer default 
records for a credit card company, which is included in the 
[ISL book](https://www.statlearning.com). To obtain the data you will need 
to install the `ISLR` package.

```{r}
# install.packages("ISLR")
library(ISLR)
(Default <- as_tibble(Default))
```


a. First, divide your dataset into a train and a test set. Randomly sample
6000 observations for the train set, and use the remaining observations
as the test set.

```{r}
set.seed(1)  # fix the seed so the random split is reproducible
train.idx <- sample(1:nrow(Default), 6000, replace = FALSE)
train <- Default[train.idx, ]
test <- Default[-train.idx, ]
```
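
Defaults are rare in this dataset (roughly 3% of customers), so it is worth checking, as an extra step, that the random split left similar default rates in both sets:

```{r}
# The two proportions of "Yes" should be close to each other
prop.table(table(train$default))
prop.table(table(test$default))
```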

b. Fit a logistic regression including all the features to predict
whether a customer defaulted or not.

```{r}
fit.logit <- glm(default ~ student + balance + income, data = train, 
                 family = "binomial")
summary(fit.logit)
```
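
The coefficients are on the log-odds scale. Exponentiating them gives odds ratios, which are often easier to read; a small sketch:

```{r}
# exp(coef) is the multiplicative change in the odds of default
# per unit increase in each predictor
exp(coef(fit.logit))
```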

c. Note whether any variables appear insignificant. Then adjust your model
accordingly (by removing them); here `income` is the predictor that comes
out insignificant, so we refit without it.

```{r}
fit.logit <- glm(default ~ student + balance, data = train, 
                 family = "binomial")
summary(fit.logit)
```
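
To confirm that dropping `income` does not hurt the fit, you can compare the nested models with a likelihood ratio test. A sketch that refits both models (the names `fit.full` and `fit.reduced` are local to this chunk):

```{r}
fit.full    <- glm(default ~ student + balance + income, data = train,
                   family = "binomial")
fit.reduced <- glm(default ~ student + balance, data = train,
                   family = "binomial")
# Likelihood ratio (chi-squared) test for the nested models;
# a large p-value supports keeping the smaller model
anova(fit.reduced, fit.full, test = "Chisq")
```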

d. Compute the predicted probabilities of `default` for the observations
in the test set. Then evaluate the model accuracy.

```{r}
pred.prob.default <- predict(fit.logit, test, type = "response")
# Classify as "Yes" when the predicted probability of default exceeds 0.5
pred.default <- factor(pred.prob.default > 0.5, levels = c(FALSE, TRUE),
                       labels = c("No", "Yes"))
(tab <- table(pred = pred.default, true = test$default))
(accuracy <- sum(diag(tab))/nrow(test))
```
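
Because so few customers default, accuracy can look high even for a classifier that never predicts `Yes`. As an extra check, the confusion table can be broken into per-class rates:

```{r}
# Sensitivity: fraction of true defaulters the model catches
# Specificity: fraction of non-defaulters correctly labeled "No"
sensitivity <- tab["Yes", "Yes"] / sum(tab[, "Yes"])
specificity <- tab["No", "No"] / sum(tab[, "No"])
c(sensitivity = sensitivity, specificity = specificity)
```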


e. For the test set, generate a scatterplot of `balance` vs `default`, 
with points colored by the `student` factor. Then overlay a line plot 
of the predicted probability of default as computed in the previous question.
Plot two separate lines for students and non-students by setting 
`color = student`.


```{r}
# Convert the default factor ("No"/"Yes") to numeric 0/1 for plotting
train$default.numeric <- as.numeric(train$default) - 1
test$default.numeric <- as.numeric(test$default) - 1

ggplot(test, aes(x = balance, color = student)) +
  geom_point(aes(y = default.numeric)) + 
  geom_line(aes(y = pred.prob.default), lwd = 1)
```
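
An alternative, if you only need the fitted curves rather than the exact test-set predictions, is to let `geom_smooth()` refit a per-group logistic regression on the fly; note these curves are fit to the test data, so they will differ slightly from `fit.logit`:

```{r}
ggplot(test, aes(x = balance, y = default.numeric, color = student)) +
  geom_point() +
  # Refits default.numeric ~ balance as a logistic model per group
  geom_smooth(method = "glm", method.args = list(family = "binomial"),
              se = FALSE)
```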




# Exercise 3: Random Forest

In this exercise we will build a random forest model based
on the data used to create the visualization [here](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/).

```{r}
# Skip the first 2 lines of the file, since they are comments
url <- paste0("https://raw.githubusercontent.com/jadeyee/r2d3-part-1-data/",
              "master/part_1_data.csv")
houses <- read.csv(url, skip = 2)
houses <- as_tibble(houses)
houses <- houses %>%
    mutate(city = factor(in_sf, levels = c(1, 0), labels = c("SF", "NYC")))
houses
```

a. Using the `pairs()` function, plot the relationship between every pair of 
variables. You can color the points by the city each observation corresponds 
to; set the color argument in `pairs()` as follows: `col = houses$in_sf + 3L`

```{r, fig.width=8, fig.height=7}
city.colors <- houses$in_sf + 3L
pairs(houses[, -1], col = city.colors, pch = 16)
```

b. Split the data into a train and a test set (70%-30%).
How many observations are in your train and test sets?


```{r}
set.seed(123)
train.idx <- sample(nrow(houses), 0.7 * nrow(houses))
train <- houses[train.idx, ]
test <- houses[-train.idx, ]
dim(train)
dim(test)
```

c. Train a random forest on the train set, using all the variables in the 
model, to classify houses as coming from San Francisco or New York.
Remember to remove `in_sf`, as it encodes the same variable as `city`.

```{r}
library(randomForest)
houses.rf <- randomForest(city ~ . - in_sf, data = train,
                          importance = TRUE, proximity = TRUE)
houses.rf
```
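
The OOB error quoted above is tracked as trees are added, so plotting the fitted object shows whether 500 trees were enough for the error to level off (a diagnostic, not part of the required answer):

```{r}
# Black: overall OOB error; colored: per-class error, by number of trees
plot(houses.rf)
legend("topright", colnames(houses.rf$err.rate), lty = 1:3, col = 1:3)
```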

d. Compute predictions and print out 
[the confusion (error) matrix](https://en.wikipedia.org/wiki/Confusion_matrix)
for the test set to assess the model. Also, compute the model accuracy.

```{r}
pred <- predict(houses.rf, newdata = test)
(confusion.mat <- table(pred, truth = test$city))
(accuracy <- sum(diag(confusion.mat))/nrow(test))
```

e. Which features were the most predictive for classifying houses into SF vs NYC groups?
Use importance measures to answer the question.

```{r}
varImpPlot(houses.rf)
```
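
If you prefer a table over the plot, the same measures are available in numeric form, here sorted by mean decrease in accuracy:

```{r}
# Higher MeanDecreaseAccuracy / MeanDecreaseGini = more predictive
imp <- importance(houses.rf)
imp[order(imp[, "MeanDecreaseAccuracy"], decreasing = TRUE), ]
```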






