Computer science final | data science | Morgan State University

Submit all your answers in one notebook file named final_yourname.ipynb.

Question 1 (80 pts) 

Sentiment analysis helps data scientists analyze many kinds of data, e.g., business, politics, and social media. For example, the IMDb dataset file “movie_data.csv” contains 25,000 highly polar IMDb movie reviews: 12,500 positive (labeled ‘1’) and 12,500 negative (labeled ‘0’).

Similarly, “amazon_data.txt” and “yelp_data.txt” each contain 1,000 labeled reviews, with negative reviews labeled ‘0’ and positive reviews labeled ‘1’.

For further help, check the notebook sentiment_analysis.ipynb in Canvas and also explore the link: https://medium.com/@vasista/sentiment-analysis-using-svm338d418e3ff1 

Answer the following: 

a) Read all the above data files (.csv and .txt) into Python pandas DataFrames. For each dataset, use 70% as the training set and 30% as the test set.
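A minimal sketch for part (a), assuming “movie_data.csv” has review and sentiment columns and that the two .txt files are tab-separated with no header (the column names and separators are assumptions, not given in the assignment):

import pandas as pd
from sklearn.model_selection import train_test_split

imdb = pd.read_csv("movie_data.csv")                 # assumed columns: review, sentiment
amazon = pd.read_csv("amazon_data.txt", sep="\t", header=None, names=["review", "label"])
yelp = pd.read_csv("yelp_data.txt", sep="\t", header=None, names=["review", "label"])

# 70% training / 30% test split, shown here for the IMDb data; repeat for the others.
X_train, X_test, y_train, y_test = train_test_split(
    imdb["review"], imdb["sentiment"], test_size=0.3, random_state=42)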

b) Using both CountVectorizer and TfidfVectorizer from the sklearn library separately, perform Logistic Regression classification on the IMDb dataset and evaluate the accuracy on the test set.
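A possible sketch for part (b), reusing the IMDb split from the part (a) sketch and trying each vectorizer in turn:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

for Vectorizer in (CountVectorizer, TfidfVectorizer):
    vec = Vectorizer(stop_words="english")
    X_tr = vec.fit_transform(X_train)        # learn the vocabulary on the training reviews
    X_te = vec.transform(X_test)             # reuse it on the test reviews
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_train)
    print(Vectorizer.__name__, accuracy_score(y_test, clf.predict(X_te)))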

c) Classify the Amazon dataset using Logistic Regression and a Neural Network (two hidden layers), compare their performance, and show the confusion matrices.
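A sketch for part (c); the neural network is an sklearn MLPClassifier with two hidden layers whose sizes (100 and 50) are illustrative choices, not requirements:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split

# 70/30 split of the Amazon reviews (column names assumed as in part (a)).
Xa_train, Xa_test, ya_train, ya_test = train_test_split(
    amazon["review"], amazon["label"], test_size=0.3, random_state=42)

vec = TfidfVectorizer(stop_words="english")
Xa_tr, Xa_te = vec.fit_transform(Xa_train), vec.transform(Xa_test)

for model in (LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=500)):
    pred = model.fit(Xa_tr, ya_train).predict(Xa_te)
    print(type(model).__name__, accuracy_score(ya_test, pred))
    print(confusion_matrix(ya_test, pred))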

d) Build a classification model for the Yelp dataset with the K-NN algorithm. Fit and test the model for different values of K (from 1 to 5) using a for loop, record the KNN test accuracies in a variable (scores), and plot them.
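A sketch for part (d), splitting and vectorizing the Yelp reviews the same way as before and looping K from 1 to 5:

import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

Xy_train, Xy_test, yy_train, yy_test = train_test_split(
    yelp["review"], yelp["label"], test_size=0.3, random_state=42)
vec_yelp = TfidfVectorizer(stop_words="english")
Xy_tr, Xy_te = vec_yelp.fit_transform(Xy_train), vec_yelp.transform(Xy_test)

scores = []
for k in range(1, 6):
    knn = KNeighborsClassifier(n_neighbors=k).fit(Xy_tr, yy_train)
    scores.append(knn.score(Xy_te, yy_test))   # test accuracy for this K

plt.plot(range(1, 6), scores, marker="o")
plt.xlabel("K"); plt.ylabel("Test accuracy"); plt.show()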

e) Generate predictions for the following reviews using the Logistic Regression classifier trained on the Amazon dataset: Review 1 = “SUPERB, I AM IN LOVE IN THIS PHONE”; Review 2 = “Do not purchase this product. My cell phone blast when I switched the charger”.
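A sketch for part (e), reusing the vectorizer and training matrix fitted on the Amazon data in the part (c) sketch:

from sklearn.linear_model import LogisticRegression

review1 = "SUPERB, I AM IN LOVE IN THIS PHONE"
review2 = "Do not purchase this product. My cell phone blast when I switched the charger"

clf = LogisticRegression(max_iter=1000).fit(Xa_tr, ya_train)
print(clf.predict(vec.transform([review1, review2])))   # 1 = positive, 0 = negative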

Question 2 (60 pts)

The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. This data set is built into scikit-learn, so you don’t need to download it explicitly. You can check the code here:

https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a

to load the data set directly in the notebook (this might take a few minutes, so be patient). For example,

from sklearn.datasets import fetch_20newsgroups

twenty_train = fetch_20newsgroups(subset="train", shuffle=True)

a) Using both CountVectorizer and TfidfVectorizer from the sklearn library separately, perform Logistic Regression classification on the training set and show the confusion matrix and accuracy by predicting the class labels in the test set.
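A possible sketch for part (a), fitting on the built-in training split and predicting the test split with each vectorizer:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

twenty_train = fetch_20newsgroups(subset="train", shuffle=True)
twenty_test = fetch_20newsgroups(subset="test", shuffle=True)

for Vectorizer in (CountVectorizer, TfidfVectorizer):
    vec = Vectorizer(stop_words="english")
    X_tr = vec.fit_transform(twenty_train.data)
    X_te = vec.transform(twenty_test.data)
    pred = LogisticRegression(max_iter=1000).fit(X_tr, twenty_train.target).predict(X_te)
    print(Vectorizer.__name__, accuracy_score(twenty_test.target, pred))
    print(confusion_matrix(twenty_test.target, pred))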

b) Perform a Logistic Regression classification and show the accuracy on the test set.

c) Perform K-means clustering on the training set with K = 20.
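A sketch for part (c), clustering the TF-IDF matrix of the training documents (X_tr from the part (a) sketch) into 20 clusters:

from sklearn.cluster import KMeans

km = KMeans(n_clusters=20, n_init=10, random_state=42)
km.fit(X_tr)
print(km.inertia_)        # within-cluster sum of squares for K = 20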

d) Plot the accuracy (elbow method) for different cluster sizes (5, 10, 15, 20, 25, 30) and determine the best cluster size.
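A sketch for part (d); note that the elbow method is usually drawn with the K-means inertia (within-cluster sum of squares) rather than accuracy, so that is what is plotted here:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

sizes = [5, 10, 15, 20, 25, 30]
inertias = []
for k in sizes:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X_tr)
    inertias.append(km.inertia_)

plt.plot(sizes, inertias, marker="o")
plt.xlabel("Number of clusters"); plt.ylabel("Inertia"); plt.show()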

Question 3 (60 pts)

The medical dataset “image_caption.txt” contains captions for 1000 images (ImageID). Let’s build a small search engine (for help, you may explore https://towardsdatascience.com/create-a-simple-search-engine-using-python-412587619ff5 and https://www.machinelearningplus.com/nlp/cosine-similarity/) by performing the following:

a) Read all the data files into a Python pandas DataFrame.

b) Perform the necessary pre-processing tasks (e.g., punctuation, number, and stop-word removal).
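A sketch for part (b) using NLTK stop words; it assumes part (a) produced a DataFrame df with a caption column (both names are assumptions about the file layout):

import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords")
stop_words = set(stopwords.words("english"))

def preprocess(text):
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)     # drop punctuation and numbers
    return " ".join(w for w in text.split() if w not in stop_words)

df["clean_caption"] = df["caption"].apply(preprocess)   # df and 'caption' assumed from part (a)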

c) Create a term-document matrix with TF-IDF weighting.
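A sketch for part (c), building the TF-IDF term-document matrix over the cleaned captions:

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer()
tdm = tfidf.fit_transform(df["clean_caption"])    # rows = captions, columns = terms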

d) Calculate the similarity using cosine similarity and show the top-ranked ten (10) images based on the following query:

“CT images of chest showing ground glass opacity”
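A sketch for part (d), ranking captions by cosine similarity to the query and showing the ten best matches (the ImageID column name is assumed):

from sklearn.metrics.pairwise import cosine_similarity

query = "CT images of chest showing ground glass opacity"
query_vec = tfidf.transform([preprocess(query)])
sims = cosine_similarity(query_vec, tdm).ravel()

top10 = sims.argsort()[::-1][:10]               # indices of the ten highest similarities
print(df.iloc[top10][["ImageID", "caption"]])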
