Homework #3 Solution

Submission: You need to submit three files through MarkUs1:




• Your answers to Questions 1 and 2 as a PDF file titled hw3_writeup.pdf. You can produce the file however you like (e.g. LaTeX, Microsoft Word, scanner), as long as it is readable.

• Your completed code files q1.py and q2.py




Neatness Point: One of the 10 points will be given for neatness. You will receive this point as long as we don’t have a hard time reading your solutions or understanding the structure of your code.




Late Submission: 10% of the marks will be deducted for each day late, up to a maximum of 3 days. After that, no submissions will be accepted.




Collaboration. Weekly homeworks are individual work. See the Course Information handout2 for detailed policies.




Data. In this assignment we will be working with the Boston Housing dataset3. This dataset contains 506 entries. Each entry consists of a house price and 13 features for houses within the Boston area. We suggest working in python and using the scikit-learn package4 to load the data.
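
For reference, a minimal sketch of loading the dataset with the loader referenced in footnote 4 (available in the scikit-learn versions current when this assignment was written); the variable names are only illustrative:

    from sklearn.datasets import load_boston

    # Boston Housing: X is the 506 x 13 feature matrix, y the 506 house prices.
    boston = load_boston()
    X, y = boston.data, boston.target
    print(X.shape, y.shape)  # (506, 13) (506,)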




Starter Code. Starter code written in Python is provided for Question 2.




1. [3pts] Robust Regression. One problem with linear regression using squared error loss is that it can be sensitive to outliers. Another loss function we could use is the Huber loss, parameterized by a hyperparameter δ:




$$L_\delta(y, t) = H_\delta(y - t)$$

$$H_\delta(a) = \begin{cases} \frac{1}{2}a^2 & \text{if } |a| \le \delta \\ \delta\left(|a| - \frac{1}{2}\delta\right) & \text{if } |a| > \delta \end{cases}$$
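
To make the piecewise definition concrete, here is one possible vectorized rendering of $H_\delta$ in numpy using np.where (the same function suggested in part (c) below); this is only an illustration of the definition, not a required part of the submission:

    import numpy as np

    def huber(a, delta=1.0):
        # Elementwise Huber function H_delta(a): quadratic for |a| <= delta,
        # linear (with matched value and slope) outside that region.
        return np.where(np.abs(a) <= delta,
                        0.5 * a ** 2,
                        delta * (np.abs(a) - 0.5 * delta))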

(a) [1pt] Sketch the Huber loss $L_\delta(y, t)$ and squared error loss $L_{SE}(y, t) = \frac{1}{2}(y - t)^2$ for $t = 0$, either by hand or using a plotting library. Based on your sketch, why would you expect the Huber loss to be more robust to outliers?

(b) [1pt] Just as with linear regression, assume a linear model:




y = wx + b.







Give formulas for the partial derivatives $\partial L_\delta / \partial w$ and $\partial L_\delta / \partial b$. (We recommend you find a formula for the derivative $H'_\delta(a)$, and then give your answers in terms of $H'_\delta(y - t)$.)




1 https://markus.teach.cs.toronto.edu/csc411-2018-09

2 http://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/syllabus.pdf

3 http://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html

4 http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html









(c) [1pt] Write Python code to perform (full batch mode) gradient descent on this model.

Assume the training dataset is given as a design matrix X and target vector y. Initialize w and b to all zeros. Your code should be vectorized, i.e. you should not have a for loop over training examples or input dimensions. You may find the function np.where helpful.

Submit your code as q1.py.
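
For orientation, one rough vectorized sketch of what such a full-batch update could look like; the learning rate, iteration count, and averaging over the batch are arbitrary choices here rather than requirements of the question:

    import numpy as np

    def huber_gradient_descent(X, y, delta=1.0, lr=1e-3, num_iters=1000):
        # X: (N, D) design matrix, y: (N,) target vector (the notation of part (c);
        # these targets play the role of t in the loss L_delta(y, t)).
        N, D = X.shape
        w, b = np.zeros(D), 0.0
        for _ in range(num_iters):
            residuals = X.dot(w) + b - y                  # prediction minus target
            dH = np.where(np.abs(residuals) <= delta,
                          residuals, delta * np.sign(residuals))  # H'_delta(residuals)
            w -= lr * X.T.dot(dH) / N                     # averaged full-batch step
            b -= lr * dH.mean()
        return w, b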




2. [6pts] Locally Weighted Regression.




(a) [2pts] Given $\{(x^{(1)}, y^{(1)}), \ldots, (x^{(N)}, y^{(N)})\}$ and positive weights $a^{(1)}, \ldots, a^{(N)}$, show that the solution to the weighted least squares problem







$$w^* = \arg\min_{w} \frac{1}{2}\sum_{i=1}^{N} a^{(i)}\left(y^{(i)} - w^T x^{(i)}\right)^2 + \frac{\lambda}{2}\|w\|^2 \tag{1}$$

is given by the formula

$$w^* = \left(X^T A X + \lambda I\right)^{-1} X^T A y \tag{2}$$




where $X$ is the design matrix (defined in class) and $A$ is a diagonal matrix with $A_{ii} = a^{(i)}$.

It may help you to review Section 3.1 of the csc321 notes5.
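
As a practical aside, formula (2) can be evaluated without ever forming an explicit inverse, which also anticipates the implementation note in part (b); a minimal numpy sketch with illustrative names:

    import numpy as np

    def weighted_ridge_solution(X, y, a, lam):
        # Solve (X^T A X + lam * I) w = X^T A y with a linear solver
        # rather than a matrix inverse. a holds the positive weights a^(i).
        A = np.diag(a)
        lhs = X.T.dot(A).dot(X) + lam * np.eye(X.shape[1])
        rhs = X.T.dot(A).dot(y)
        return np.linalg.solve(lhs, rhs)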

(b) [2pts] Locally reweighted least squares combines ideas from k-NN and linear regression. For each new test example $x$ we compute distance-based weights for each training example

$$a^{(i)} = \frac{\exp\left(-\|x - x^{(i)}\|^2 / 2\tau^2\right)}{\sum_j \exp\left(-\|x - x^{(j)}\|^2 / 2\tau^2\right)},$$

compute

$$w^* = \arg\min_{w} \frac{1}{2}\sum_{i=1}^{N} a^{(i)}\left(y^{(i)} - w^T x^{(i)}\right)^2 + \frac{\lambda}{2}\|w\|^2,$$

and predict $\hat{y} = x^T w^*$. Complete the implementation of locally reweighted least squares by providing the missing parts for q2.py.







Important things to notice while implementing: First, do not invert any matrix; use a linear solver (numpy.linalg.solve is one example). Second, notice that

$$\frac{\exp(A_i)}{\sum_j \exp(A_j)} = \frac{\exp(A_i - B)}{\sum_j \exp(A_j - B)}$$

for any $B$, but if we use $B = \max_j A_j$ it is much more numerically stable, as $\sum_j \exp(A_j)$ overflows/underflows easily. This is handled automatically in the scipy package with the scipy.misc.logsumexp function6.
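
To illustrate the max-subtraction trick on the weights $a^{(i)}$, a small sketch (scipy.misc.logsumexp performs the same bookkeeping in log space; in recent scipy versions the same routine lives at scipy.special.logsumexp):

    import numpy as np

    def stable_weights(x, train_X, tau):
        # x: (D,) test example, train_X: (N, D) training examples.
        # Log of the unnormalized weights: -||x - x^(i)||^2 / (2 tau^2).
        logits = -np.sum((train_X - x) ** 2, axis=1) / (2 * tau ** 2)
        logits -= logits.max()      # subtract B = max_j A_j before exponentiating
        w = np.exp(logits)
        return w / w.sum()          # normalized weights a^(i)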

(c) [1pt] Randomly hold out 30% of the dataset as a validation set. Compute the average loss for different values of τ in the range [10,1000] on both the training set and the validation set. Plot the training and validation losses as a function of τ (using a log scale for τ ).
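
One possible way to set up this experiment (the seed, the number of τ values, and the idea of plugging in your own q2.py loss routine are illustrative assumptions, not requirements):

    import numpy as np
    from sklearn.datasets import load_boston

    X, y = load_boston(return_X_y=True)

    # Randomly hold out 30% of the data as a validation set.
    np.random.seed(0)
    idx = np.random.permutation(X.shape[0])
    split = int(0.7 * X.shape[0])
    x_train, y_train = X[idx[:split]], y[idx[:split]]
    x_val, y_val = X[idx[split:]], y[idx[split:]]

    # Values of tau spaced evenly on a log scale over [10, 1000].
    taus = np.logspace(1, 3, 50)
    # For each tau, compute the average LRLS loss on the training and validation
    # sets with your q2.py code, then plot both curves (e.g. with plt.semilogx).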

(d) [1pt] How would you expect this algorithm to behave as τ → ∞? When τ → 0? Is this what actually happened?

























5 http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/readings/L02%20Linear%20Regression.pdf

6 https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.misc.logsumexp.html
