ISYE 6740 Homework 3

Total 100 points.

1. Basic optimization. (30 points.)

Consider a simplified logistic regression problem. Given $m$ training samples $(x_i, y_i)$, $i = 1, \dots, m$. The data $x_i \in \mathbb{R}$ (note that we only have one feature for each sample), and $y_i \in \{0, 1\}$. To fit a logistic regression model for classification, we solve the following optimization problem, where $\theta \in \mathbb{R}$ is a parameter we aim to find:

$$\max_{\theta} \ \ell(\theta), \tag{1}$$

where the log-likelihood function is

$$\ell(\theta) = \sum_{i=1}^{m} \left\{ -\log\big(1 + \exp\{-\theta x_i\}\big) + (y_i - 1)\theta x_i \right\}.$$

(a) (10 points) Show a step-by-step mathematical derivation for the gradient of the cost function $\ell(\theta)$ in (1), and write pseudo-code for performing gradient descent to find the optimizer $\theta^*$. This is essentially what the training procedure does. (Pseudo-code means you will write down the steps of the algorithm, not necessarily in any specific programming language.)
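(For reference only, a minimal sketch of such a gradient-ascent loop in Python is given below. It is an illustrative aid, not a substitute for your derivation; it assumes the gradient takes the standard form $\sum_i x_i (y_i - \sigma(\theta x_i))$ with $\sigma$ the sigmoid, which is what the derivation in (a) should produce, and the learning rate and iteration count are arbitrary placeholders.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_ascent(x, y, lr=0.1, n_iters=1000):
    """Full-batch gradient ascent on the log-likelihood in (1).

    x, y : 1-D arrays of length m (one feature per sample, labels in {0, 1}).
    Assumes d ell / d theta = sum_i x_i * (y_i - sigmoid(theta * x_i)).
    """
    theta = 0.0
    for _ in range(n_iters):
        grad = np.sum(x * (y - sigmoid(theta * x)))  # gradient of ell at theta
        theta += lr * grad                           # ascent step (we maximize ell)
    return theta
```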

(b) (10 points) Present a stochastic gradient descent algorithm to solve the training of the logistic regression problem (1).
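(Again purely for illustration, a minimal stochastic variant of the loop above, using one randomly chosen sample per update; it assumes the same per-sample gradient $x_i(y_i - \sigma(\theta x_i))$ and a fixed learning rate, whereas in practice the step size is usually decayed.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_gradient_ascent(x, y, lr=0.05, n_epochs=50, seed=0):
    """Stochastic gradient ascent: one sample per parameter update."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(n_epochs):
        for i in rng.permutation(len(x)):                    # reshuffle every epoch
            grad_i = x[i] * (y[i] - sigmoid(theta * x[i]))   # single-sample gradient
            theta += lr * grad_i
    return theta
```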

(c) (10 points) We will show that the training problem in basic logistic regression is concave. Derive the Hessian matrix of $\ell(\theta)$ and, based on this, show that the training problem (1) is concave (note that in this case, since we only have one feature, the Hessian matrix is just a scalar). Explain why the problem can be solved efficiently and why gradient descent will achieve a unique global optimizer, as we discussed in class.
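(If you want a quick numerical sanity check of your result, not a replacement for the derivation, you can verify by finite differences that the second derivative of $\ell$ is non-positive; the toy data and test points below are arbitrary.)

```python
import numpy as np

def log_likelihood(theta, x, y):
    # ell(theta) = sum_i { -log(1 + exp(-theta x_i)) + (y_i - 1) theta x_i }
    return np.sum(-np.log(1.0 + np.exp(-theta * x)) + (y - 1) * theta * x)

rng = np.random.default_rng(0)
x = rng.normal(size=20)                       # toy single-feature data
y = rng.integers(0, 2, size=20).astype(float) # toy labels in {0, 1}

h = 1e-4
for theta in (-2.0, 0.0, 1.5):
    # central finite-difference approximation of ell''(theta)
    second = (log_likelihood(theta + h, x, y)
              - 2 * log_likelihood(theta, x, y)
              + log_likelihood(theta - h, x, y)) / h**2
    print(f"theta = {theta:+.1f}, approx ell'' = {second:.4f}")  # expect <= 0
```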

2. Comparing Bayes, logistic, and KNN classifiers. (30 points)

In lectures we learned three different classifiers. This question is to implement and compare them. We suggest using Scikit-learn, which is a commonly used and powerful Python library with various machine learning tools. But you can also use a similar library in another language of your choice to perform the tasks.

Part One (Divorce classification/prediction). (20 points)

This dataset is about participants who completed a personal information form and a divorce predictors scale.

The data is a modified version of the dataset publicly available at https://archive.ics.uci.edu/ml/datasets/Divorce+Predictors+data+set (noise has been injected so you will not replicate the results on the UCI website). There are 170 participants and 54 attributes (or predictor variables) that are all real-valued. The dataset is marriage.csv. The last column of the CSV file is the label y (1 means "divorce", 0 means "no divorce"). Each column is one feature (predictor variable), and each row is a sample (participant). A detailed explanation for each feature (predictor variable) can be found at the website link above.

Our goal is to build a classifier using training data, such that given a test sample, we can classify (or essentially predict) whether its label is 0 ("no divorce") or 1 ("divorce").


Build three classifiers (Naive Bayes, Logistic Regression, KNN). Use the first 80% of the data for training and the remaining 20% for testing. If you use scikit-learn you can use train_test_split to split the dataset.
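(A minimal loading-and-splitting sketch, assuming marriage.csv has no header row and that the last column holds the label, as described above:)

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Each row is one participant; the last column is the label y (1 = divorce, 0 = no divorce).
data = pd.read_csv("marriage.csv", header=None).values
X, y = data[:, :-1], data[:, -1]

# First 80% for training, remaining 20% for testing.
# shuffle=False keeps the original row order; drop it (and set random_state)
# if you prefer a randomized split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)
```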

Remark: Please note that, here, for Naive Bayes, this means that we have to estimate the variance for each individual feature from the training data. When estimating the variance, if the variance is zero or close to zero (meaning that there is very little variability in the feature), you can set the variance to be a small number, e.g., $\sigma^2 = 10^{-3}$. We do not want to include zero or near-zero variances in Naive Bayes. This tip holds for both Part One and Part Two of this question.
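(Continuing from the split sketched above, one rough way to fit and score the three models. Scikit-learn's GaussianNB has a var_smoothing parameter that adds a small constant, a fraction of the largest feature variance, to every per-feature variance, which is one way to implement the tip in the remark; the values 1e-3 and k = 5 below are placeholder choices.)

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

models = {
    # var_smoothing adds a small amount to every feature variance, so no
    # feature ends up with a (near-)zero variance, per the remark above.
    "Naive Bayes": GaussianNB(var_smoothing=1e-3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```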

(a) (10 points) Report the testing accuracy for each of the three classifiers. Comment on their performance: which performs the best, and make a guess as to why it performs the best in this setting.

(b) (10 points) Use the first two features to train three new classifiers. Plot the data points and the decision boundary of each classifier. Comment on the difference between the decision boundaries of the three classifiers. Please clearly represent the data points with different labels using different colors.
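(One common way to draw these boundaries, sketched here assuming you reuse X_train, y_train, and the models dictionary from the snippets above, is to evaluate each classifier on a dense grid over the first two features and contour the predictions:)

```python
import numpy as np
import matplotlib.pyplot as plt

X2 = X_train[:, :2]                                   # first two features only

# Dense grid covering the observed range of the two features.
x_min, x_max = X2[:, 0].min() - 1, X2[:, 0].max() + 1
y_min, y_max = X2[:, 1].min() - 1, X2[:, 1].max() + 1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 300),
                     np.linspace(y_min, y_max, 300))

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, (name, model) in zip(axes, models.items()):
    model.fit(X2, y_train)                            # refit on two features only
    zz = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    ax.contourf(xx, yy, zz, alpha=0.3)                # shaded decision regions
    ax.scatter(X2[:, 0], X2[:, 1], c=y_train, edgecolor="k")  # color = label
    ax.set_title(name)
plt.show()
```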

Part Two (Handwritten digits classification). (10 points) Repeat the above using the MNIST data from our Homework 2. Here, give digit 6 the label y = 1, and give digit 2 the label y = 0. All the pixels in each image will be the features (predictor variables) for that sample (i.e., image). Our goal is to build a classifier such that given a new test sample, we can tell whether it is a 2 or a 6. Use the first 80% of the samples for training and the remaining 20% for testing. Report the classification accuracy on the testing data for each of the three classifiers. Comment on their performance: which performs the best, and make a guess as to why it performs the best in this setting.
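(The file format comes from Homework 2, so the loading step is omitted here. Assuming you end up with a pixel matrix X_all and a digit-label vector digits, both hypothetical names, the relabeling and split might look like:)

```python
import numpy as np

# X_all, digits: hypothetical arrays loaded as in Homework 2
# (X_all has one flattened image per row; digits holds the digit labels).

# Keep only the digits 2 and 6, then relabel: 6 -> 1, 2 -> 0.
mask = np.isin(digits, [2, 6])
X, y = X_all[mask], (digits[mask] == 6).astype(int)

# First 80% of the samples for training, remaining 20% for testing.
split = int(0.8 * len(y))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```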

3. Naive Bayes for spam filtering. (40 points)

In this problem we will use the Naive Bayes algorithm to fit a spam filter by hand. This will enhance your understanding of the Bayes classifier and build intuition. This question does not involve any programming, only derivation and hand calculation.

Spam filters are used in all email services to classify received emails as "Spam" or "Not Spam". A simple approach involves maintaining a vocabulary of words that commonly occur in "Spam" emails and classifying an email as "Spam" if the number of words from the dictionary that are present in the email is over a certain threshold. We are given a vocabulary consisting of 15 words:

V = {secret, offer, low, price, valued, customer, today, dollar, million, sports, is, for, play, healthy, pizza}.

We will use $V_i$ to represent the $i$th word in $V$. As our training dataset, we are also given 3 example spam messages,

• million dollar offer

• secret offer today

• secret is secret

and 4 example non-spam messages

• low price for valued customer

• play secret sports today

• sports is healthy

• low price pizza


Recall that the Naive Bayes classifier assumes that the probability of an input depends on its input features. The feature vector for each sample is defined as $x^{(i)} = [x^{(i)}_1, x^{(i)}_2, \dots, x^{(i)}_d]^T$, $i = 1, \dots, m$, and the class of the $i$th sample is $y^{(i)}$. In our case the length of the input vector is $d = 15$, which is equal to the number of words in the vocabulary $V$. Each entry $x^{(i)}_j$ is equal to the number of times word $V_j$ occurs in the $i$-th message.

(a) (5 points) Calculate the class priors P(y = 0) and P(y = 1) from the training data, where y = 0 corresponds to spam messages and y = 1 corresponds to non-spam messages. Note that these class priors essentially correspond to the frequency of each class in the training sample.

(b) (10 points) Write down the feature vectors for each of the spam and non-spam messages.

(c) (15 points) In the Naive Bayes model, assuming the keywords are independent of each other (this is a simplification), the likelihood of a sentence with feature vector $x$ given a class $c$ is given by

$$P(x \mid y = c) = \prod_{k=1}^{d} \theta_{c,k}^{x_k}, \quad c \in \{0, 1\},$$

where $0 \le \theta_{c,k} \le 1$ is the probability of word $k$ appearing in class $c$, which satisfies

$$\sum_{k=1}^{d} \theta_{c,k} = 1, \quad \forall c.$$

Given this, the complete log-likelihood function for our training data is given by

$$\ell(\theta_{0,1}, \dots, \theta_{0,d}, \theta_{1,1}, \dots, \theta_{1,d}) = \sum_{i=1}^{m} \sum_{k=1}^{d} x^{(i)}_k \log \theta_{y^{(i)},k}.$$

(In this example, $m = 7$.) Calculate the maximum likelihood estimates of $\theta_{0,1}$, $\theta_{0,7}$, $\theta_{1,1}$, $\theta_{1,15}$ by maximizing the log-likelihood function above. (Hint: We are solving a constrained maximization problem. To do this, remember, you need to introduce two Lagrange multipliers because you have two constraints.)
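(For concreteness, the hint amounts to maximizing the Lagrangian below, with one multiplier $\lambda_c$ per normalization constraint; the symbols match the definitions above, and this is only a restatement of the hint, not the requested derivation:)

$$L(\theta, \lambda_0, \lambda_1) = \sum_{i=1}^{m} \sum_{k=1}^{d} x^{(i)}_k \log \theta_{y^{(i)},k} + \sum_{c \in \{0,1\}} \lambda_c \left( 1 - \sum_{k=1}^{d} \theta_{c,k} \right).$$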

(d) (10 points) Given a test message "today is secret", use the Naive Bayes classifier that you have trained in parts (a)-(c) to calculate the posterior and decide whether it is spam or not spam.

