```{r, message=FALSE, warning=FALSE}
library(tidyverse)
```

# Exercise 1: Linear Regression

## Part 1

The data at the following URL, "http://www-bcf.usc.edu/~gareth/ISL/Income1.csv", includes observations on income levels (in tens of thousands of dollars) and years of education. *The data is not real and was actually simulated.*

a. Read the data into R and generate a ggplot with a fitted line.
```{r}
income <- read_csv("http://www-bcf.usc.edu/~gareth/ISL/Income1.csv")
income <- income %>% select(-X1)  # drop the row-index column added by read_csv
```

Missing column names filled in: 'X1' [1]
Parsed with column specification:
cols(
  X1 = col_integer(),
  Education = col_double(),
  Income = col_double()
)

```{r}
ggplot(income, aes(x = Education, y = Income)) +
  geom_point() +
  geom_smooth(method = "lm")
```

b. Split the data into train and test sets. Note that here we do not form a validation set, as we have very few observations and will only consider a single model.

```{r}
set.seed(12345)
idx <- sample(1:nrow(income), floor(0.7 * nrow(income)))
income_train <- income[idx, ]
income_test <- income[-idx, ]
```

c. Fit a linear model with education (years of education) as the input variable and income as the response variable. What are the model coefficients obtained, and how can you extract them? Inspect the model fit using `summary()`.

```{r}
fit <- lm(data = income_train, formula = Income ~ Education)
summary(fit)
```


Call:
lm(formula = Income ~ Education, data = income_train)

Residuals:
     Min       1Q   Median       3Q      Max 
-11.7076  -4.2127   0.8739   3.6296  11.9906 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -43.5026     6.3357  -6.866 1.50e-06 ***
Education     5.8171     0.3829  15.190 4.41e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.149 on 19 degrees of freedom
Multiple R-squared:  0.9239,    Adjusted R-squared:  0.9199 
F-statistic: 230.7 on 1 and 19 DF,  p-value: 4.408e-12

```{r}
coef(fit)
```

(Intercept)   Education 
 -43.502593    5.817084 
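
For reference, the full coefficient table (estimates, standard errors, and t values) can also be pulled out of the summary object; a minimal base-R sketch:

```{r}
# Coefficient table as a numeric matrix (rows: terms; columns: statistics)
summary(fit)$coefficients
```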

d. Compute the fitted values of income for the observations included in the train set.

```{r}
library(modelr)
income_train <- income_train %>% add_predictions(fit)
income_train
```

# A tibble: 21 x 3
   Education Income  pred
       <dbl>  <dbl> <dbl>
 1      18.7   69.0  65.3
 2      20.4   75.8  74.9
 3      22     80.3  84.5
 4      19.5   71.9  70.0
 5      14.5   41.5  41.0
 6      11.6   15.2  24.2
 7      12.9   25.5  31.5
 8      21.6   72.1  82.1
 9      16.6   51.5  53.2
10      18.3   64.3  62.8
# ... with 11 more rows

e. Predict the income for new observations: people with 16.00, 12.52, 15.55, 21.09, and 18.36 years of education. Then make predictions for the test set as well, and evaluate the root mean squared error on the test set.

```{r}
newobs <- data.frame(Education = c(16.00, 12.52, 15.55, 21.09, 18.36))
newobs <- newobs %>% add_predictions(fit)
newobs
```

  Education     pred
1     16.00 49.57076
2     12.52 29.32730
3     15.55 46.95307
4     21.09 79.17972
5     18.36 63.29908
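
The same numbers can be obtained without modelr, using base R's `predict()`:

```{r}
# Base-R equivalent of add_predictions(); returns a plain numeric vector
predict(fit, newdata = newobs)
```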

```{r}
income_test <- income_test %>% add_predictions(fit)
(RMSE <- sqrt(mean((income_test$Income - income_test$pred)^2)))
```

[1] 4.767779
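
modelr also ships an `rmse()` helper that computes the same quantity directly from a model and a data frame; as a quick cross-check, this should match the value above:

```{r}
# rmse(model, data) evaluates sqrt(mean((actual - predicted)^2)) on `data`
rmse(fit, income_test)
```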

## Part 2

a. Now download the data from "http://www-bcf.usc.edu/~gareth/ISL/Income2.csv", which includes the same observations but also records data on "seniority". Again, split the data into train and test sets.

```{r}
income2 <- read_csv("http://www-bcf.usc.edu/~gareth/ISL/Income2.csv")
```

Missing column names filled in: 'X1' [1]
Parsed with column specification:
cols(
  X1 = col_integer(),
  Education = col_double(),
  Seniority = col_double(),
  Income = col_double()
)

```{r}
(income2 <- income2 %>% select(-X1))
```

# A tibble: 30 x 3
   Education Seniority Income
       <dbl>     <dbl>  <dbl>
 1      21.6     113.    99.9
 2      18.3     119.    92.6
 3      12.1     101.    34.7
 4      17.0     188.    78.7
 5      19.9      20     68.0
 6      18.3      26.2   71.5
 7      19.9     150.    88.0
 8      21.2      82.1   79.8
 9      20.3      88.3   90.0
10      10       113.    45.7
# ... with 20 more rows

```{r}
set.seed(12345)
idx <- sample(1:nrow(income2), floor(0.7 * nrow(income2)))
income2_train <- income2[idx, ]
income2_test <- income2[-idx, ]
```

b. Fit a new model including the new variable (`Seniority`) and print the model summary.

```{r}
mfit <- lm(data = income2_train, formula = Income ~ Education + Seniority)
summary(mfit)
```


Call:
lm(formula = Income ~ Education + Seniority, data = income2_train)

Residuals:
   Min     1Q Median     3Q    Max 
-9.538 -4.833 -0.184  3.283 12.358 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -58.13205    6.52578  -8.908 5.13e-08 ***
Education     6.32393    0.39571  15.981 4.44e-12 ***
Seniority     0.16557    0.02644   6.261 6.63e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.035 on 18 degrees of freedom
Multiple R-squared:  0.9558,    Adjusted R-squared:  0.9509 
F-statistic: 194.7 on 2 and 18 DF,  p-value: 6.419e-13

c. Compute the predicted values of income for the observations in the train set.

```{r}
(income2_train <- income2_train %>% add_predictions(mfit))
```

# A tibble: 21 x 4
   Education Seniority Income  pred
       <dbl>     <dbl>  <dbl> <dbl>
 1      14.6     138.    53.5  56.7
 2      11.2      44.8   21.4  20.4
 3      17.0     107.    74.6  67.3
 4      10.4      32.4   18.6  13.1
 5      18.7     144.    96.3  83.9
 6      19.9      20     68.0  71.2
 7      21.2      82.1   79.8  89.3
 8      12.1      32.4   17.6  23.6
 9      14.1      20     28.8  34.6
10      18.3     101.    74.7  74.1
# ... with 11 more rows

d. Evaluate the RMSE of the model on the test set.

```{r}
income2_test <- income2_test %>% add_predictions(mfit)
(RMSE_mfit <- sqrt(mean((income2_test$Income - income2_test$pred)^2)))
```

[1] 10.00648

Note that since we do not have a validation set, we cannot use these RMSE values to choose between the two models. If you wanted to do that, you would need to start from the same dataset and split it into three parts, use the RMSE on the validation set to pick between the two models, and then evaluate the performance of the chosen model on the test set separately.
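
For concreteness, a minimal sketch of such a three-way split, assuming 60/20/20 proportions (an arbitrary illustrative choice; the names `train2`, `valid2`, and `test2` are hypothetical and not used elsewhere):

```{r}
set.seed(12345)
n <- nrow(income2)
shuffled <- sample(1:n)                                  # shuffle all row indices
train2 <- income2[shuffled[1:floor(0.6 * n)], ]          # 60% train
valid2 <- income2[shuffled[(floor(0.6 * n) + 1):floor(0.8 * n)], ]  # 20% validation
test2  <- income2[shuffled[(floor(0.8 * n) + 1):n], ]    # 20% test
```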

## Exercise 2

In this exercise you will perform Lasso regression yourself. We will use the `Boston` dataset from the `MASS` package. The dataset contains information on the housing market in the Boston suburbs, collected by David Harrison in 1978.

We will try to predict the median value of homes in the region based on the attributes recorded in the other variables.

First install the package:

```{r}
# install.packages("MASS")
```

```{r}
# We are not loading the entire MASS library because we are using only
# one dataset (also, MASS has a select() function which conflicts with
# dplyr::select)
Boston <- MASS::Boston
head(Boston, 3)
```

     crim zn indus chas   nox    rm  age    dis rad tax ptratio  black lstat medv
1 0.00632 18  2.31    0 0.538 6.575 65.2 4.0900   1 296    15.3 396.90  4.98 24.0
2 0.02731  0  7.07    0 0.469 6.421 78.9 4.9671   2 242    17.8 396.90  9.14 21.6
3 0.02729  0  7.07    0 0.469 7.185 61.1 4.9671   2 242    17.8 392.83  4.03 34.7

```{r}
str(Boston)
```

'data.frame':   506 obs. of  14 variables:
 $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
 $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
 $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
 $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
 $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
 $ rm     : num  6.58 6.42 7.18 7 7.15 ...
 $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
 $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
 $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
 $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
 $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
 $ black  : num  397 397 393 395 397 ...
 $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
 $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

a. Split the data into training and testing subsets.

```{r}
set.seed(123)
idx <- sample(1:nrow(Boston), round(0.7 * nrow(Boston)))
boston_train <- Boston[idx, ]
boston_test <- Boston[-idx, ]
```

b. Perform a Lasso regression with `glmnet`. Steps:

1. Extract the input and output data from the `Boston` `data.frame` and convert them, if necessary, to the correct format.

```{r}
# glmnet expects a numeric matrix of inputs, not a data.frame
X_train <- data.matrix(boston_train[, -ncol(boston_train)])
y_train <- boston_train$medv
X_test <- data.matrix(boston_test[, -ncol(boston_test)])
y_test <- boston_test$medv
```

2. Use cross-validation to select the value for $\lambda$.

```{r}
library(glmnet)
```

Loading required package: Matrix

Attaching package: ‘Matrix’

The following object is masked from ‘package:tidyr’:

    expand

Loading required package: foreach

Attaching package: ‘foreach’

The following objects are masked from ‘package:purrr’:

    accumulate, when

Loaded glmnet 2.0-16

```{r}
# cv.glmnet fits the lasso by default (alpha = 1) and cross-validates lambda
cvfit <- cv.glmnet(X_train, y_train, nfolds = 5)
plot(cvfit)
```

3. Inspect the computed coefficients for `lambda.min` and `lambda.1se`.

```{r}
cvfit$lambda.min
```

[1] 0.03383821

```{r}
cvfit$lambda.1se
```

[1] 0.3155763
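
The chunks above print only the two selected values of $\lambda$; to actually inspect the coefficients at each, `coef()` can be applied to the `cv.glmnet` object (output omitted here):

```{r}
# Sparse coefficient vectors at the two selected values of lambda
coef(cvfit, s = "lambda.min")
coef(cvfit, s = "lambda.1se")
```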

4. Compute the predictions for the test dataset for the two choices of the tuning parameter, `lambda.min` and `lambda.1se`. Evaluate the error for each.

```{r}
final_pred <- predict(cvfit, newx = X_test, s = "lambda.1se")
head(final_pred)
```

          1
3  31.29388
6  26.56063
8  19.69023
9  11.02580
10 19.86751
18 18.23750

```{r}
(RMSE <- sqrt(mean((final_pred - y_test)^2)))
```

[1] 4.700897
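
For completeness, the same computation at `lambda.min` (the names `pred_min` and `RMSE_min` are illustrative; the resulting error will depend on the cross-validation folds):

```{r}
# Repeat the prediction and error computation at lambda.min
pred_min <- predict(cvfit, newx = X_test, s = "lambda.min")
(RMSE_min <- sqrt(mean((pred_min - y_test)^2)))
```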