Data Mining and Machine Learning
Fall 2018, Homework 1
(due on Sep 4, 11.59pm EST)
Jean Honorio jhonorio@purdue.edu
The homework is worth a total of 10 points. Your code should be in
Python 2.7. For clarity, the algorithms presented here assume zero-based
indices for arrays, vectors, matrices, etc. Please read the submission instructions
at the end. Failure to comply with the submission instructions will cause
your grade to be reduced.
In this homework, we will focus on classification for separable data. You can
use the following script createsepdata.py to create some synthetic separable
data:
import numpy as np
import scipy.linalg as la
# Input: number of samples n
# number of features d
# Output: numpy matrix X of features, with n rows (samples), d columns (features)
# X[i,j] is the j-th feature of the i-th sample
# numpy vector y of labels, with n rows (samples), 1 column
# y[i] is the label (+1 or -1) of the i-th sample
# Example on how to call the script:
# import createsepdata
# X, y = createsepdata.run(10,3)
def run(n, d):
    y = np.ones((n, 1))
    y[n // 2:] = -1                       # first half labeled +1, second half -1
    X = np.random.random((n, d))
    idx_row, idx_col = np.where(y == 1)
    X[idx_row, 0] = 0.1 + X[idx_row, 0]   # shift the positive class right
    idx_row, idx_col = np.where(y == -1)
    X[idx_row, 0] = -0.1 - X[idx_row, 0]  # shift the negative class left
    U = la.orth(np.random.random((d, d)))
    X = np.dot(X, U)                      # rotate so no single feature separates
    return (X, y)
Here are the questions:
1) [4 points] Implement the following perceptron algorithm, introduced in Lecture 1.
Input: number of iterations L, training data x_t ∈ R^d, y_t ∈ {+1, −1} for t = 0, …, n−1
Output: θ ∈ R^d
θ ← 0
for iter = 1, …, L do
    for t = 0, …, n−1 do
        if y_t (θ · x_t) ≤ 0 then
            θ ← θ + y_t x_t
        end if
    end for
end for
The header of your Python script linperceptron.py should be:
# Input: number of iterations L
# numpy matrix X of features, with n rows (samples), d columns (features)
# X[i,j] is the j-th feature of the i-th sample
# numpy vector y of labels, with n rows (samples), 1 column
# y[i] is the label (+1 or -1) of the i-th sample
# Output: numpy vector theta of d rows, 1 column
def run(L, X, y):
    # Your code goes here
    return theta
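As a starting point, linperceptron.py might look like the sketch below. It is a minimal, direct translation of the pseudocode above (L passes over the data, update θ on every mistake or tie), assuming X and y have the shapes described in the header:

```python
import numpy as np

# A possible implementation of the perceptron pseudocode above.
def run(L, X, y):
    n, d = X.shape
    theta = np.zeros((d, 1))
    for it in range(L):               # L passes over the training data
        for t in range(n):            # visit samples in order
            # mistake (or tie): the margin y_t (theta . x_t) is not positive
            if y[t, 0] * np.dot(X[t], theta)[0] <= 0:
                theta = theta + y[t, 0] * X[t].reshape(d, 1)
    return theta
```

Note the condition uses ≤ 0, not < 0, so the very first sample always triggers an update when θ is still the zero vector.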
2) [2 points] Implement the following linear predictor function, introduced in
Lecture 1.
Input: θ ∈ R^d, testing point x ∈ R^d
Output: label ∈ {+1, −1}
if θ · x > 0 then
    label ← +1
else
    label ← −1
end if
The header of your Python script linpred.py should be:
# Input: numpy vector theta of d rows, 1 column
# numpy vector x of d rows, 1 column
# Output: label (+1 or -1)
def run(theta, x):
    # Your code goes here
    return label
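One way linpred.py could be written, as a sketch: since theta and x are both d × 1 column vectors, θ · x can be computed as theta.T · x, giving a 1 × 1 matrix whose single entry is the dot product:

```python
import numpy as np

# A possible implementation of the linear predictor above.
def run(theta, x):
    # theta and x are d x 1 column vectors; np.dot(theta.T, x) is 1 x 1
    if np.dot(theta.T, x)[0, 0] > 0:
        label = 1
    else:
        label = -1
    return label
```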
3) [4 points] Now we ask you to implement the following primal support vector
machines (PSVM) problem, introduced in Lecture 2.
minimize    (1/2) θ · θ
subject to  y_i (x_i · θ) ≥ 1  for i = 0, …, n−1
Let H ∈ R^(d×d) be the identity matrix with d rows and d columns. Let f =
(0, 0, …, 0)^T ∈ R^d be a d-dimensional vector of zeros. Let A ∈ R^(n×d) be a
matrix of n rows and d columns, where a_ij = −y_i x_ij for all i = 0, …, n−1
and j = 0, …, d−1. (Recall that y_i is the label of the i-th sample and x_ij
is the j-th feature of the i-th sample.) Let b = (−1, −1, …, −1)^T ∈ R^n be an
n-dimensional vector of minus ones. Since θ ∈ R^d, we can rewrite the PSVM
problem as:
minimize    (1/2) θ^T H θ + f^T θ
subject to  A θ ≤ b
Fortunately, the package cvxopt can solve exactly the above problem by doing:
import cvxopt as co
theta = np.array(co.solvers.qp(co.matrix(H,tc='d'), co.matrix(f,tc='d'),
                               co.matrix(A,tc='d'), co.matrix(b,tc='d'))['x'])
The header of your Python script linprimalsvm.py should be:
# Input: numpy matrix X of features, with n rows (samples), d columns (features)
# X[i,j] is the j-th feature of the i-th sample
# numpy vector y of labels, with n rows (samples), 1 column
# y[i] is the label (+1 or -1) of the i-th sample
# Output: numpy vector theta of d rows, 1 column
def run(X, y):
    # Your code goes here
    return theta
Notice that for prediction you can reuse the linpred.py script that you
wrote for question 2.
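Putting the pieces together, linprimalsvm.py might look like the sketch below. The helper build_qp is not part of the assignment; it is split out here only to make the construction of H, f, A, and b explicit. Note that A = −y * X builds a_ij = −y_i x_ij in one step via numpy broadcasting, since y is n × 1 and X is n × d. The solver call assumes the cvxopt package is installed:

```python
import numpy as np

# Helper (not required by the assignment): build the QP data H, f, A, b
# for the PSVM problem described above.
def build_qp(X, y):
    n, d = X.shape
    H = np.eye(d)                 # d x d identity matrix
    f = np.zeros((d, 1))          # d-dimensional vector of zeros
    A = -y * X                    # broadcasting: a_ij = -y_i * x_ij
    b = -np.ones((n, 1))          # n-dimensional vector of minus ones
    return H, f, A, b

def run(X, y):
    import cvxopt as co           # assumed installed (see below)
    co.solvers.options['show_progress'] = False   # silence solver output
    H, f, A, b = build_qp(X, y)
    theta = np.array(co.solvers.qp(co.matrix(H, tc='d'), co.matrix(f, tc='d'),
                                   co.matrix(A, tc='d'), co.matrix(b, tc='d'))['x'])
    return theta
```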
SOME POSSIBLY USEFUL THINGS.
Python 2.7 is available on the servers antor and data. From the terminal, you
can use your Career account to start an ssh session:
ssh username@data.cs.purdue.edu
OR
ssh username@antor.cs.purdue.edu
From the terminal, to start Python:
python
Inside Python, to check whether you have Python 2.7:
import sys
print (sys.version)
Inside Python, to check whether you have the package cvxopt:
import cvxopt
From the terminal, to install the Python package cvxopt:
pip install --user cvxopt
More information at https://cvxopt.org/install/index.html
SUBMISSION INSTRUCTIONS.
Your code should be in Python 2.7. We only need the Python scripts
(.py files); we do not need the compiled Python bytecodes (.pyc files).
You will get 0 points if your code does not run. You will get 0 points if you
fail to include the Python scripts (.py files), even if you mistakenly include the
bytecodes (.pyc files). We will deduct points if you do not use the right name for
the Python scripts (.py) as described in each question, or if the input/output
matrices/vectors/scalars have a different type/size from what is described in
each question. Homeworks are to be solved individually. We will run plagiarism
detection software.
Please submit a single ZIP file through Blackboard. Your Python scripts
(linperceptron.py, linpred.py, etc.) should be directly inside the ZIP file.
There should not be any folder inside the ZIP file, just Python scripts.
The ZIP file should be named after your Career account. For instance,
if my Career account is jhonorio, the ZIP file should be named jhonorio.zip.