library(tidyverse)
Similarly to the mtcars dataset, the mpg dataset from the ggplot2 package includes data on automobiles. However, mpg covers newer cars, from the years 1999 and 2008, and the set of variables measured for each car is slightly different. Here we are interested in the variable hwy, the highway miles per gallon.
# We create a new column with manual/automatic data only
mpg <- mpg %>%
  mutate(
    transmission = factor(
      gsub("\\((.*)", "", trans), levels = c("auto", "manual"))
  )
mpg
# A tibble: 234 x 12
   manufacturer model      displ  year   cyl trans drv     cty   hwy fl    class   transmission
   <chr>        <chr>      <dbl> <int> <int> <fct> <chr> <int> <int> <chr> <chr>   <fct>
 1 audi         a4           1.8  1999     4 auto  f        18    29 p     compact auto
 2 audi         a4           1.8  1999     4 auto  f        21    29 p     compact auto
 3 audi         a4           2    2008     4 auto  f        20    31 p     compact auto
 4 audi         a4           2    2008     4 auto  f        21    30 p     compact auto
 5 audi         a4           2.8  1999     6 auto  f        16    26 p     compact auto
 6 audi         a4           2.8  1999     6 auto  f        18    26 p     compact auto
 7 audi         a4           3.1  2008     6 auto  f        18    27 p     compact auto
 8 audi         a4 quattro   1.8  1999     4 auto  4        18    26 p     compact auto
 9 audi         a4 quattro   1.8  1999     4 auto  4        16    25 p     compact auto
10 audi         a4 quattro   2    2008     4 auto  4        20    28 p     compact auto
# ... with 224 more rows
Subset the mpg dataset to include only cars from year 2008.
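One possible approach with dplyr::filter() (the name mpg2008 is just a placeholder we introduce here):
# Keep only model year 2008
mpg2008 <- mpg %>% filter(year == 2008)
mpg2008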
Test whether cars from 2008 have a mean highway miles per gallon, hwy, equal to 30 mpg.
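A sketch using a two-sided one-sample t-test, assuming mpg2008 from the previous step:
# H0: mean(hwy) = 30 vs H1: mean(hwy) != 30
t.test(mpg2008$hwy, mu = 30)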
Test whether cars from 2008 with 4 cylinders have mean hwy equal to 30 mpg.
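The same test restricted to 4-cylinder cars, for example:
# Subset to 4-cylinder cars from 2008, then test H0: mean(hwy) = 30
mpg2008_4cyl <- mpg2008 %>% filter(cyl == 4)
t.test(mpg2008_4cyl$hwy, mu = 30)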
Test if the mean hwy for automatic cars is less than that for manual cars in 2008. Generate a boxplot with jittered points of hwy for each transmission group.
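A possible solution with a one-sided two-sample t-test and ggplot2; since "auto" is the first level of transmission, alternative = "less" tests mean(auto) < mean(manual):
# H1: mean(hwy | auto) < mean(hwy | manual)
t.test(hwy ~ transmission, data = mpg2008, alternative = "less")

# Boxplot of hwy by transmission with jittered points overlaid
ggplot(mpg2008, aes(x = transmission, y = hwy)) +
  geom_boxplot(outlier.shape = NA) +
  geom_jitter(width = 0.2, alpha = 0.5)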
Test if the mean hwy for cars from 1999 is greater than that for cars from 2008. Generate a boxplot with jittered points of hwy for each year group.
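Analogously, a one-sided test comparing the two model years; 1999 is the first level of factor(year), so alternative = "greater" tests mean(1999) > mean(2008):
# H1: mean(hwy | 1999) > mean(hwy | 2008)
t.test(hwy ~ factor(year), data = mpg, alternative = "greater")

# Boxplot of hwy by year with jittered points overlaid
ggplot(mpg, aes(x = factor(year), y = hwy)) +
  geom_boxplot(outlier.shape = NA) +
  geom_jitter(width = 0.2, alpha = 0.5)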
In this exercise you will use the dataset Default, on customer default records for a credit card company, which is included in the ISL book. To obtain the data you will need to install the ISLR package.
# install.packages("ISLR")
library(ISLR)
(Default <- as_tibble(Default))  # convert data.frame to tibble (tbl_df() is deprecated)
# A tibble: 10,000 x 4
   default student balance income
   <fct>   <fct>     <dbl>  <dbl>
 1 No      No         730. 44362.
 2 No      Yes        817. 12106.
 3 No      No        1074. 31767.
 4 No      No         529. 35704.
 5 No      No         786. 38463.
 6 No      Yes        920.  7492.
 7 No      No         826. 24905.
 8 No      Yes        809. 17600.
 9 No      No        1161. 37469.
10 No      No           0  29275.
# ... with 9,990 more rows
First, divide your dataset into a train and a test set. Randomly sample 6000 observations and include them in the train set; use the remaining ones as the test set.
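For example (the seed and the names default_train/default_test are our choices):
set.seed(123)                              # arbitrary seed, for reproducibility
train_idx <- sample(nrow(Default), 6000)   # 6000 random row indices
default_train <- Default[train_idx, ]
default_test  <- Default[-train_idx, ]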
Fit a logistic regression including all the features to predict whether a customer defaulted or not. Note whether any variables seem insignificant; then adjust your model accordingly (by removing them).
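A sketch assuming the split above; in this dataset income is typically not significant once balance and student are in the model, so the reduced model drops it:
# Full model with all predictors
fit_full <- glm(default ~ ., data = default_train, family = binomial)
summary(fit_full)

# Reduced model without the non-significant predictor(s)
fit_red <- glm(default ~ balance + student, data = default_train, family = binomial)
summary(fit_red)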
Compute the predicted probabilities of ‘default’ for the observations in the test set. Then evaluate the model accuracy.
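One way to do this with the reduced model from the previous sketch and a 0.5 cutoff:
# Predicted probabilities of default on the test set
pred_prob <- predict(fit_red, newdata = default_test, type = "response")

# Classify with a 0.5 threshold and compare to the observed labels
pred_class <- ifelse(pred_prob > 0.5, "Yes", "No")
mean(pred_class == default_test$default)   # accuracy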
For the test set, generate a scatterplot of ‘balance’ vs ‘default’ with points colored by the ‘student’ factor. Then, overlay a line plot of the predicted probability of default as computed in the previous question. You should plot two lines, for students and non-students separately, by setting color = student.
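A possible ggplot2 version, coding ‘default’ as 0/1 on the y-axis so the observed points and the predicted probabilities share a scale (assumes pred_prob from the previous sketch):
default_test %>%
  mutate(prob = pred_prob,
         default01 = as.numeric(default == "Yes")) %>%
  ggplot(aes(x = balance, color = student)) +
  geom_point(aes(y = default01), alpha = 0.3) +   # observed defaults (0/1)
  geom_line(aes(y = prob)) +                      # fitted probability curves
  labs(y = "Default / predicted probability of default")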
In this exercise we will build a random forest model based on the data used to create the visualization here.
# Skip first 2 lines since they were comments
url <- paste0("https://raw.githubusercontent.com/jadeyee/r2d3-part-1-data/",
              "master/part_1_data.csv")
houses <- read_csv(url, skip = 2)
houses <- as_tibble(houses)   # read_csv() already returns a tibble; tbl_df() is deprecated
houses <- houses %>%
  mutate(city = factor(in_sf, levels = c(1, 0), labels = c("SF", "NYC")))
houses
# A tibble: 492 x 9
   in_sf  beds  bath   price year_built  sqft price_per_sqft elevation city
   <int> <dbl> <dbl>   <int>      <int> <int>          <int>     <int> <fct>
 1     0     2     1  999000       1960  1000            999        10 NYC
 2     0     2     2 2750000       2006  1418           1939         0 NYC
 3     0     2     2 1350000       1900  2150            628         9 NYC
 4     0     1     1  629000       1903   500           1258         9 NYC
 5     0     0     1  439000       1930   500            878        10 NYC
 6     0     0     1  439000       1930   500            878        10 NYC
 7     0     1     1  475000       1920   500            950        10 NYC
 8     0     1     1  975000       1930   900           1083        10 NYC
 9     0     1     1  975000       1930   900           1083        12 NYC
10     0     2     1 1895000       1921  1000           1895        12 NYC
# ... with 482 more rows
Using the pairs() function, plot the relationships between all pairs of variables. You can color the points by the city each observation corresponds to; set the color argument in pairs() as follows: col = houses$in_sf + 3L
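For example, using only the numeric columns (dropping the city factor is a choice, since it duplicates in_sf):
# Scatterplot matrix colored by city; in_sf + 3 maps the two cities to two palette colors
pairs(select(houses, -city), col = houses$in_sf + 3L)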
Split the data into (70%-30%) train and test set. How many observations are in your train and test sets?
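One possible split (the seed is arbitrary):
set.seed(1)
n <- nrow(houses)
train_idx <- sample(n, size = floor(0.7 * n))
houses_train <- houses[train_idx, ]    # 344 observations
houses_test  <- houses[-train_idx, ]   # 148 observations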
Train a random forest on the train set, using all the variables in the model, to classify houses into the ones from San Francisco and from New York. Remember to remove ‘in_sf’, as it is the same variable as ‘city’.
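A sketch using the randomForest package (importance = TRUE stores the importance measures needed later):
library(randomForest)

# Classify city (SF vs NYC) from all other features except in_sf
rf_fit <- randomForest(city ~ . - in_sf, data = houses_train, importance = TRUE)
rf_fit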
Compute predictions and print out the confusion (error) matrix for the test set to assess the model accuracy. Also, compute the model accuracy.
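For example:
rf_pred <- predict(rf_fit, newdata = houses_test)

# Confusion matrix: rows = predicted, columns = observed
(conf_mat <- table(predicted = rf_pred, observed = houses_test$city))
sum(diag(conf_mat)) / sum(conf_mat)   # accuracy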
Which features were the most predictive for classifying houses into SF vs NYC groups? Use importance measures to answer the question.
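The importance measures can be inspected with:
importance(rf_fit)    # mean decrease in accuracy and in Gini impurity
varImpPlot(rf_fit)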