Latent Variables and Neural Networks

Part A. Document Clustering

Question 1

I. Derive the Expectation and Maximisation steps of the hard-EM algorithm for document clustering, and show your work in your submitted report. In particular, include all model parameters that should be learnt and the exact expressions that should be used to update these parameters during the learning process (i.e., E-step, M-step, and assignments).
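
Hint. For concreteness, one standard formulation assumes a mixture of K multinomials with mixing weights φ_k and per-cluster word distributions μ_k. The sketch below (with c(d,w) denoting the count of word w in document d, and N documents in total) shows the expected shape of the updates; it is not a substitute for your own derivation:

    E-step (hard assignment):
        z_d \leftarrow \arg\max_{k} \Big( \log \varphi_k + \sum_{w} c(d,w)\,\log \mu_{k,w} \Big)

    M-step (re-estimation from the hard assignments):
        \varphi_k \leftarrow \frac{1}{N} \sum_{d=1}^{N} \mathbb{1}[z_d = k], \qquad
        \mu_{k,w} \leftarrow \frac{\sum_{d:\, z_d = k} c(d,w)}{\sum_{w'} \sum_{d:\, z_d = k} c(d,w')}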

II. Implement the hard-EM (as derived above) and soft-EM algorithms. Please provide sufficient comments in your submitted code.
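
Hint. A minimal hard-EM sketch in R, assuming `counts` is an N x W document-term count matrix; the variable names and the smoothing constant are our own choices. The soft-EM version replaces the argmax with posterior responsibilities that weight the M-step counts:

    # Hard-EM for a mixture of multinomials (sketch only).
    hard_em <- function(counts, K = 4, max_iter = 100, eps = 1e-10) {
      N <- nrow(counts)
      set.seed(1)
      z <- sample(K, N, replace = TRUE)               # random initial assignments
      for (iter in seq_len(max_iter)) {
        # M-step: re-estimate mixing weights and per-cluster word distributions
        phi <- (tabulate(z, K) + eps) / (N + K * eps)
        mu <- t(sapply(seq_len(K), function(k) {
          wk <- colSums(counts[z == k, , drop = FALSE]) + eps  # Laplace-style smoothing
          wk / sum(wk)
        }))
        # E-step (hard): assign each document to its most likely cluster
        loglik <- counts %*% t(log(mu)) + matrix(log(phi), N, K, byrow = TRUE)
        z_new <- max.col(loglik)
        if (all(z_new == z)) break                    # assignments stable: converged
        z <- z_new
      }
      list(assignments = z, phi = phi, mu = mu)
    }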

III. Load the Task3A.text file and the necessary libraries, set the number of clusters to K=4, and run both the soft-EM and hard-EM algorithms on the provided data.

IV. Perform a PCA on the clusterings that you obtained from the hard-EM and soft-EM in Step III. Then visualise the obtained clusters with different colours, where the x and y axes are the first two principal components. Save your visualisations as plots and attach them to your report.
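
Hint. A possible visualisation with base R's `prcomp()`, assuming `counts` and the hard-EM assignments `z` from the sketch above:

    pca <- prcomp(counts)                             # PCA on the document-term matrix
    plot(pca$x[, 1], pca$x[, 2], col = z, pch = 19,
         xlab = "PC1", ylab = "PC2", main = "Hard-EM clusters")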

Part B. Neural Network vs. Perceptron

Question 2

I. Load the Task3B_train.csv and Task3B_test.csv sets, plot the training data with the classes marked in different colours, and attach the plot to your report.

II. Run the implementation of the perceptron, calculate the test error, and plot the test data with the points coloured by their estimated class labels; attach the plot to your report.

Hint. Note that you must remove NA records from the datasets (using the “complete.cases()” function). You may also choose to change the labels from {0, 1} to {-1, +1} for your convenience. You may need to change some of the initial settings (e.g., epsilon and tau.max). Finally, remember that the perceptron is sensitive to its initial weights, so we recommend running your code a few times with different initial weights.
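
Hint (continued). A preprocessing sketch in R; the label column name "y" is an assumption and should be replaced by the actual column name in the CSV files:

    train <- read.csv("Task3B_train.csv")
    test  <- read.csv("Task3B_test.csv")
    train <- train[complete.cases(train), ]           # drop rows with NA entries
    test  <- test[complete.cases(test), ]
    # optional: relabel {0, 1} as {-1, +1} for the perceptron
    train$y <- ifelse(train$y == 1, +1, -1)
    test$y  <- ifelse(test$y == 1, +1, -1)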

III. Run the 3-layer neural network with different values of K (i.e., the number of units in the hidden layer) and record the testing error for each of them; plot the error versus K and attach the plot to your report. Based on this plot, find the best K and the corresponding model, then plot the test data with the points coloured by their estimated class labels using the best model you have selected; attach the plot to your report.

Hint. You may need to transpose the dataset (using the “t()” function) and use different values for the parameter settings (e.g., lambda). We also recommend varying K over 2, 4, 6, …, 100 (i.e., from 2 to 100 with a step size of 2), as in the sketch below.
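
Hint (continued). A sketch of the K sweep; `nn_train()` and `nn_error()` are hypothetical stand-ins for the course's 3-layer NN implementation and its evaluation routine, and `X_train`/`y_train`/`X_test`/`y_test` are assumed to hold the cleaned features and labels:

    Ks <- seq(2, 100, by = 2)
    errs <- sapply(Ks, function(K) {
      model <- nn_train(t(X_train), y_train, K = K, lambda = 0.01)  # assumed signature
      nn_error(model, t(X_test), y_test)                            # hypothetical helper
    })
    plot(Ks, errs, type = "b", xlab = "K (hidden units)", ylab = "test error")
    best_K <- Ks[which.min(errs)]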

IV. In a table, report the error rates obtained by the perceptron and all variants of the NN, and bold the best model (the one with the minimum error). Add this table to your report.

V. In your report, explain the reason(s) for the difference in performance between the perceptron and the 3-layer NN.

Hint: Look at the plots and think about the model assumptions.

Part C. Self-Taught Learning

Question 3

I. Load the Task3C_labeled.csv, Task3C_unlabeled.csv and Task3C_test.csv data sets and the required libraries (e.g., H2O). Note that we are going to use Task3C_labeled.csv and Task3C_unlabeled.csv for training the autoencoder, and Task3C_labeled.csv alone for training the classifier. Finally, we evaluate the trained classifier on Task3C_test.csv.
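
Hint. A loading sketch with the H2O R package:

    library(h2o)
    h2o.init()                                        # start a local H2O instance
    labeled   <- h2o.importFile("Task3C_labeled.csv")
    unlabeled <- h2o.importFile("Task3C_unlabeled.csv")
    test      <- h2o.importFile("Task3C_test.csv")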

II. Train an autoencoder with only one hidden layer, varying the number of its neurons over 20, 40, 60, 80, …, 500 (i.e., from 20 to 500 with a step size of 20).
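
Hint. One way to train the family of autoencoders in H2O; the label column name "label" is an assumption, and the unlabeled frame is assumed to contain exactly the feature columns:

    x_cols   <- setdiff(names(labeled), "label")
    ae_train <- h2o.rbind(labeled[, x_cols], unlabeled)   # all rows, features only
    sizes    <- seq(20, 500, by = 20)
    autoencoders <- lapply(sizes, function(h)
      h2o.deeplearning(x = x_cols, training_frame = ae_train,
                       autoencoder = TRUE, hidden = c(h), epochs = 20))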

III. For each model in Step II, calculate and record the reconstruction error, which is simply the average (over all data points, with the model fixed) of the Euclidean distances between the input and the output of the autoencoder (you can simply use the “h2o.anomaly()” function). Plot these values with the number of units in the middle layer on the x-axis and the reconstruction error on the y-axis. Then save and attach the plot to your report, and explain your findings based on the plot.
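
Hint. Note that “h2o.anomaly()” reports the per-row reconstruction mean squared error rather than a raw Euclidean distance; the two differ only by a monotone transformation, so the shape of the plot is unaffected. A sketch, reusing `autoencoders` and `sizes` from above:

    recon_err <- sapply(autoencoders, function(m)
      mean(as.data.frame(h2o.anomaly(m, ae_train))[, 1]))
    plot(sizes, recon_err, type = "b",
         xlab = "number of units in the middle layer", ylab = "reconstruction error")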

IV. Use the 3-layer NN “h2o.deeplearning” function (make sure you set “autoencoder = FALSE”) to build a model with 100 units in the hidden layer, using all the original attributes from the training set. Then calculate and record the test error.
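
Hint. A baseline sketch, again assuming the label column is named "label":

    labeled$label <- as.factor(labeled$label)         # classification, not regression
    test$label    <- as.factor(test$label)
    base <- h2o.deeplearning(x = x_cols, y = "label", training_frame = labeled,
                             autoencoder = FALSE, hidden = c(100), epochs = 50)
    pred     <- as.data.frame(h2o.predict(base, test))
    actual   <- as.data.frame(test$label)[, 1]
    base_err <- mean(pred$predict != actual)          # misclassification rate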

V. Build augmented self-taught networks using the models learnt in Step II. For each model:

A. Add the output of the middle layer as extra features to the original feature set.
B. Train a 3-layer NN (similar to Step IV) using all features (original + extra).

Then calculate and record the test error (see the sketch below).
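
Hint. A sketch of the augmentation loop; “h2o.deepfeatures()” extracts the hidden-layer activations of a trained autoencoder (columns not used by the model, such as the label, are assumed to be ignored during extraction):

    aug_err <- sapply(autoencoders, function(m) {
      code_tr <- h2o.deepfeatures(m, labeled, layer = 1)   # middle-layer output
      code_te <- h2o.deepfeatures(m, test, layer = 1)
      aug_tr  <- h2o.cbind(labeled, code_tr)               # original + extra features
      aug_te  <- h2o.cbind(test, code_te)
      nn <- h2o.deeplearning(x = setdiff(names(aug_tr), "label"), y = "label",
                             training_frame = aug_tr, hidden = c(100), epochs = 50)
      pred <- as.data.frame(h2o.predict(nn, aug_te))
      mean(pred$predict != as.data.frame(aug_te$label)[, 1])
    })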
VI. Plot the error rates for the 3-layer neural networks from Steps IV and V, with the number of features on the x-axis and the classification error on the y-axis. Save and attach the plot to your report.
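
Hint. A plotting sketch, reusing `x_cols`, `sizes`, `aug_err` and `base_err` from the sketches above:

    n_features <- length(x_cols) + sizes              # original + extra features
    plot(n_features, aug_err, type = "b",
         xlab = "number of features", ylab = "classification error")
    abline(h = base_err, lty = 2)                     # Step IV baseline (original features only)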

VII. Report the optimum number(s) of units in the middle layer of the autoencoder in
terms of the reconstruction and misclassification errors.

VIII. Comparing the plots from Steps III and VI, do you observe any relation between the reconstruction error and the misclassification error? Explain your findings and add them to your report.

Hint. Since the dataset for this task is large and high-dimensional, running all the experiments several times is very time-consuming. Therefore, it is recommended to use only a small portion of your data while you develop or debug your code.

Hint. If you combine Steps II and V (so that each autoencoder is learnt only once), you may save a great portion of the execution time.

Hint. If you don't see the expected behaviour in your plots, you may need to check that the data is clean, i.e. that it doesn't have NA entries, that it is normalised, etc. Moreover, you may need to check that your implementation of the model and the training/decoding algorithms is correct.
