Project background


Tasks to complete

Goal: This project integrates the concepts developed across the assignments in the second half of this class. You will identify a data-driven business problem that requires preparation of the data: Extracting data (from three or more sources), Transforming (cleaning) the data, and then Loading it into a database for analysis. In other words, you will experience, first-hand, the ETL process of data management.

Options: You can take this project in one of two directions: (1) identify one large file, clean the data, and normalize it into three or more tables, OR (2) identify three or more large data sources, clean the data, and merge them into a single denormalized table for analysis. In both cases, you will need to identify what you plan to learn from the cleaned and loaded data.
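To make the two directions concrete, here is a minimal T-SQL sketch of the two target shapes. All table and column names are hypothetical placeholders, not part of the assignment:

    -- Direction 1: one large source file normalized into three related tables
    CREATE TABLE dbo.Customer (
        CustomerID   INT IDENTITY(1,1) PRIMARY KEY,
        CustomerName NVARCHAR(100)
    );
    CREATE TABLE dbo.Product (
        ProductID   INT IDENTITY(1,1) PRIMARY KEY,
        ProductName NVARCHAR(100)
    );
    CREATE TABLE dbo.Sale (
        SaleID     INT IDENTITY(1,1) PRIMARY KEY,
        CustomerID INT REFERENCES dbo.Customer(CustomerID),
        ProductID  INT REFERENCES dbo.Product(ProductID),
        SaleDate   DATE,
        Amount     DECIMAL(10,2)
    );

    -- Direction 2: three sources merged into one denormalized analysis table
    CREATE TABLE dbo.SalesAnalysis (
        SaleID       INT,
        CustomerName NVARCHAR(100),   -- pulled in from source 1
        ProductName  NVARCHAR(100),   -- pulled in from source 2
        Region       NVARCHAR(50),    -- pulled in from source 3
        SaleDate     DATE,
        Amount       DECIMAL(10,2)
    );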

Resource: This article (external link).

In preparation for your project this term, I need you to do some digging to identify sources and ideas for a decent project.

There are a couple of decisions that have to be made, so I am making this part of the project a “deliverable” to get you mulling it over. Most ETL tasks involve cleaning and integration. For integration, it is vital that you have an attribute that is common across all three data sets.

Cleaning

Cleaning is one of the most important steps, as it ensures the quality of the data in the data warehouse. Cleaning should apply basic data-unification rules such as the following (a minimal T-SQL sketch of a few of these rules appears after the list):

  • Standardize identifiers and coded values (e.g., the sex categories Male/Female/Unknown, M/F/null, and Man/Woman/Not Available are all translated to a standard Male/Female/Unknown)
  • Convert null values into a standardized Not Available/Not Provided value
  • Convert phone numbers and ZIP codes to a standardized form
  • Validate address fields and normalize variant naming (e.g., Street/St/St./Str. to a single form)
  • Validate address fields against each other (State/Country, City/State, City/ZIP code, City/Street)
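As promised above, here is a minimal T-SQL sketch of a few of these unification rules. The staging table and its columns (dbo.StagingPerson, Sex, Phone, ZipCode, Street) are assumptions for illustration only:

    -- Translate sex-category variants to a standard Male/Female/Unknown
    UPDATE dbo.StagingPerson
    SET Sex = CASE
                  WHEN Sex IN ('M', 'Man', 'Male')     THEN 'Male'
                  WHEN Sex IN ('F', 'Woman', 'Female') THEN 'Female'
                  ELSE 'Unknown'   -- null, 'Not Available', or anything unexpected
              END;

    -- Convert null/blank values into a standardized Not Provided value
    UPDATE dbo.StagingPerson
    SET Phone = 'Not Provided'
    WHERE Phone IS NULL OR LTRIM(RTRIM(Phone)) = '';

    -- Strip punctuation so all phone numbers share one form
    UPDATE dbo.StagingPerson
    SET Phone = REPLACE(REPLACE(REPLACE(REPLACE(Phone, '(', ''), ')', ''), '-', ''), ' ', '')
    WHERE Phone <> 'Not Provided';

    -- Keep only the 5-digit ZIP so ZIP and ZIP+4 values match
    UPDATE dbo.StagingPerson
    SET ZipCode = LEFT(ZipCode, 5);

    -- Normalize street-type variants to a single spelling (simplified)
    UPDATE dbo.StagingPerson
    SET Street = REPLACE(REPLACE(Street, ' St.', ' Street'), ' Str.', ' Street');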

Transform

The transform step applies a set of rules to transform the data from the source into the target format. This includes (a brief T-SQL sketch follows the list):

  • converting any measured data to the same dimension (i.e., a conformed dimension) using the same units, so that the data can later be joined;
  • generating surrogate keys or foreign keys so that you can join data from several sources;
  • generating aggregates;
  • deriving new calculated values;
  • adding columns to create primary keys and/or foreign keys.
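Here is the promised sketch. Again, the staging and fact tables (dbo.StagingOrder, dbo.FactOrder) and their columns are hypothetical:

    -- Build a conformed, keyed fact table from staging
    SELECT
        s.OrderID,
        -- data conversion: unify units (pounds -> kilograms) into one conformed measure
        CAST(s.WeightLbs * 0.453592 AS DECIMAL(10,3)) AS WeightKg,
        -- derived calculated value
        s.Quantity * s.UnitPrice                      AS LineTotal,
        -- generated surrogate key, so rows from several sources can be joined
        ROW_NUMBER() OVER (ORDER BY s.OrderID)        AS OrderSK
    INTO dbo.FactOrder
    FROM dbo.StagingOrder AS s;
    GO

    -- Generate an aggregate for analysis
    SELECT OrderID, SUM(LineTotal) AS OrderTotal
    FROM dbo.FactOrder
    GROUP BY OrderID;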

Data Integration

It is at this stage that you get the most value from the project. Integration typically means adding an attribute from a related data set that adds ‘color’ to your data: perhaps joining Census data to labor data, or bringing in other demographic data. The challenge is to locate data sets that are relatable. A hypothetical example follows.
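For instance, labor statistics can be enriched with census demographics once both carry the same geographic key. A minimal T-SQL sketch, with assumed table and column names:

    -- The shared attribute, a county FIPS code, is what makes the join possible
    SELECT
        l.CountyFIPS,
        l.UnemploymentRate,   -- from the labor data set
        c.MedianIncome,       -- 'color' added from the census data set
        c.Population
    FROM dbo.LaborStats AS l
    JOIN dbo.CensusData AS c
      ON c.CountyFIPS = l.CountyFIPS;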

Project direction: You will need to complete a data mart with significant pre-processing (ETL) activities.

Requirements:

  1. Problem being solved: What do you propose to learn from this data? List several business questions and show how your project solution (the data set) could answer them.
  2. Tools: You must complete the entire project using Visual Studio (SSIS), or you can use another ETL tool of your choice, such as Power BI or Tableau.
  3. Volume: The total result data set must contain at least 5,000 records, but not more than 100,000.
  4. Destination: SQL Server table(s). Depending on the direction you take, you can either move all the data into a single CSV file and bulk-load it into SQL Server at the end, or point the final destination tables directly at SQL Server.
  5. Transformation: it must include TWO new columns (for each final destination) that are populated by (a) the current date and time, so you know when the data was brought into the final dataset, and (b) the source file name, so you know where the data came from. This may be done through SSIS or in SQL Server; a sketch of the SQL Server approach appears after this list.
    Note: Filename capturing works only when the source is a flat file. So, if your source is NOT a flat file, you may want to make a CSV file an intermediate destination and then use this file as the source (Hint: use a Derived Column transformation to add a column).
    In addition, the project must include at least 3 of the following transformations: data conversion, derived column, conditional split, lookup, merge, merge join, multicast, union all, fuzzy lookup, or any of the transforms not covered in class.
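As referenced in requirement 5, here is one way to add the two audit columns in SQL Server rather than SSIS. This is a minimal sketch; the table name, column names, and file name are assumptions:

    -- Add the two required audit columns to the final destination table
    ALTER TABLE dbo.FinalDestination
        ADD LoadDateTime   DATETIME2     NULL,
            SourceFileName NVARCHAR(260) NULL;
    GO

    -- Populate them as part of each load
    UPDATE dbo.FinalDestination
    SET LoadDateTime   = SYSDATETIME(),
        SourceFileName = 'labor_stats.csv'   -- hypothetical flat-file source
    WHERE LoadDateTime IS NULL;

In SSIS, the equivalent is typically a Derived Column transformation that adds one column from GETDATE() and another from a package variable holding the current file name.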

Data sources: You are welcome to use datasets from work that have been sufficiently “anonymized”. In fact, anonymization is itself a valuable transformation task that you can use to protect your data and make it available for additional analysis/exploration; a small sketch follows. There are many public data sets that can be used (see the “data sources” tab).
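If you do anonymize work data, one simple approach is to replace direct identifiers with salted hashes. A minimal T-SQL sketch, with a hypothetical table and a made-up salt:

    -- Add a pseudonymous ID derived from the real one
    ALTER TABLE dbo.StagingEmployee ADD EmployeePseudoID CHAR(64);
    GO

    UPDATE dbo.StagingEmployee
    SET EmployeePseudoID = CONVERT(CHAR(64),
            HASHBYTES('SHA2_256', CONCAT('my-secret-salt', EmployeeID)),
            2);   -- style 2 = hex string without the '0x' prefix
    GO

    -- Drop the direct identifier so it never leaves your machine
    ALTER TABLE dbo.StagingEmployee DROP COLUMN EmployeeID;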

Submit: Use the text area to submit a project “proposal” that addresses the following points. I am not looking for an elaborate write-up; use these four prompts to develop four well-written paragraphs free from language/grammar errors. Please do not write it in Q&A format!

  1. Be sure to give a meaningful title. 
  2. Motivation for the project: What insights do you anticipate getting from this ETL project?
  3. What problems do you anticipate during the ETL process? Cleaning? Transforming?
  4. What data will you be using? Where will you get it? How many rows will you be processing in all? What are the keys (PK/FK)?

    And finally, include

  5. What type of decision support do you expect this project to provide? Would this have been possible with Excel? Why is this approach an improvement?
  6. An ERD showing how the data sets are related to each other (either source or destination tables – see “options” above)
  7. A subset of data from EACH file (5-10 rows and 5-10 columns) that shows the kind of data you are dealing with. For each file, be sure to identify what you would consider as a primary key. These can be included as a screenshot with column headers.
  8. The datasets, properly named. It’s best that you create a folder called {myGateID}_Project and save your data files there. ZIP this FOLDER and attach it.