· The dataset in this project must be the one you selected for the course through Blackboard/Discussion board. Any other dataset will cause your whole project not to be graded

· Submit the final paper no later than Oct. 10th
· Make sure you include the 7 major graded sections mentioned below in your paper
· The deliverable should contain the following components:

· You may delay this section until (1) you study all previous work and (2) you do some analysis and understand the dataset/project

As most of the selected projects use public datasets, there are no doubt different attempts/projects to analyze those datasets. 30 % of this deliverable is in your overall assessment of previous data analysis efforts. This effort should include:
· Evaluating existing source code that they have (e.g. in Kernels and discussion sections) or any other reference. Make sure you try that code and show the results
· In addition to the code, summarize the most relevant literature or efforts to analyze the same dataset you have picked.
· For the few who picked their own datasets, you are still expected to do your literature survey in this section on what is most relevant to your data/idea/area and summarize those most relevant contributions.

Compare results in your own work/project with results from previous or other contributions (a data and analysis comparison, not a literature review). The difference between section 3 and section 2 is that section 2 focuses on code/data analysis found in sources such as Kaggle, GitHub, etc., while section 3 focuses on research papers that did not necessarily study the same dataset, but the same focus area.

· What were the most important features?
· We suggest you provide:
· a variable importance plot (about halfway down the page), showing the 10-20 most important features, and
· partial plots for the 3-5 most important features (a plotting sketch is included after this section)
· If this is not possible, you should provide a list of the most important features.
· How did you select features?
· Did you make any important feature transformations?
· Did you find any interesting interactions between features?
· Did you use external data? (if permitted) (5)

· What training methods did you use?
· Did you ensemble the models?
· If you did ensemble, how did you weight the different models? (an ensembling sketch is included after this section)

A6. Interesting findings
· What was the most important trick you used?
· What do you think set you apart from others in the competition?
· Did you find any interesting relationships in the data that don't fit in the sections above?

Many customers are happy to trade off model performance for simplicity. With this in mind:
· Is there a subset of features that would get 90-95 % of your final performance? Which features? *
· What model was most important? *
· How would the simplified model score?
· * Try to restrict your simple model to fewer than 10 features and one training method. (10 %)

Many customers care about how long the winning models take to train and generate predictions:
· How long does it take to train your model?
· How long does it take to generate predictions using your model?
· How long does it take to train the simplified model (referenced in section A6)?
· How long does it take to generate predictions from the simplified model? (15 %)
(A timing sketch for the full and simplified models is included after this section.)

Per the last chapter we covered, make sure you employ at least two different ensemble models in your code and show the model details and results.

Citations to references, websites, blog posts, and external sources of information where appropriate.
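A minimal sketch of the variable importance plot and partial plots suggested above, assuming a Python/scikit-learn workflow. The synthetic data, the choice of RandomForestClassifier, and the output file names are placeholder assumptions to be adapted to your own dataset and model.

```python
# Sketch: variable importance plot (top 15 features) and partial dependence
# plots for the 3 most important features, using scikit-learn.
# The synthetic dataset below is a stand-in for your project data.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = make_classification(n_samples=2000, n_features=25, n_informative=8, random_state=42)
X = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(X.shape[1])])

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X, y)

# Variable importance plot: 15 highest-importance features
importances = pd.Series(model.feature_importances_, index=X.columns)
top = importances.sort_values(ascending=False).head(15)
top.sort_values().plot(kind="barh", title="Top 15 feature importances")
plt.tight_layout()
plt.savefig("variable_importance.png")
plt.close()

# Partial dependence plots for the 3 most important features
PartialDependenceDisplay.from_estimator(model, X, features=top.index[:3].tolist())
plt.tight_layout()
plt.savefig("partial_dependence.png")
```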
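A minimal sketch of the ensembling questions, which also satisfies the requirement of using at least two different ensemble models: a random forest and a gradient boosting model blended with a weighted average of predicted probabilities. The 0.4/0.6 weights and the synthetic data are illustrative assumptions; in practice the weights would be tuned on a validation set.

```python
# Sketch: two ensemble models combined with a simple weighted blend.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Two different ensemble models
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

p_rf = rf.predict_proba(X_val)[:, 1]
p_gb = gb.predict_proba(X_val)[:, 1]

# Weighted blend: 0.4 / 0.6 is an assumed starting point, not a recommendation
blend = 0.4 * p_rf + 0.6 * p_gb
for name, p in [("Random forest", p_rf), ("Gradient boosting", p_gb), ("Blend", blend)]:
    print(f"{name}: validation AUC = {roc_auc_score(y_val, p):.3f}")
```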
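A minimal sketch of the execution-time and simple-model questions, again assuming scikit-learn. The data, the choice of a random forest, and the cut of 8 features (fewer than 10, per the guideline) are placeholder assumptions.

```python
# Sketch: wall-clock timing for training and prediction, for both a full model
# and a simplified model restricted to 8 top-ranked features and one method.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=40, n_informative=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)


def timed_fit_predict(model, X_tr, y_tr, X_te):
    """Return (train_seconds, predict_seconds, predictions)."""
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    t1 = time.perf_counter()
    preds = model.predict(X_te)
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1, preds


# Full model
full = RandomForestClassifier(n_estimators=300, random_state=1)
fit_s, pred_s, preds = timed_fit_predict(full, X_train, y_train, X_test)
print(f"Full model:       train {fit_s:.2f}s, predict {pred_s:.2f}s, "
      f"accuracy {(preds == y_test).mean():.3f}")

# Simplified model: 8 highest-importance features, one training method
top_idx = np.argsort(full.feature_importances_)[::-1][:8]
simple = RandomForestClassifier(n_estimators=300, random_state=1)
fit_s, pred_s, preds = timed_fit_predict(simple, X_train[:, top_idx], y_train, X_test[:, top_idx])
print(f"Simplified model: train {fit_s:.2f}s, predict {pred_s:.2f}s, "
      f"accuracy {(preds == y_test).mean():.3f}")
```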
Summarize the most important aspects of your model and analysis, such as:
· The training method(s) you used (Convolutional Neural Network, XGBoost)
· The most important features
· The tool(s) you used
· How long it takes to train your model

————————————————

1. Results in data analysis can be misleading. Without a detailed analysis of different performance metrics (e.g. accuracy, recall, ROC, AUC, etc.), a one-sided view of the results can present incomplete and inaccurate findings. Presenting a thorough analysis of the overall performance of your models will show that you did not ignore any factor in your model (a metrics sketch follows this list).
2. You can find on the Internet several standard templates for data science projects (how to structure your code, data, etc.). While following a standard template is not required, it will be considered as part of the quality criteria. Here are examples of code templates for different programming environments:
a. R and RStudio:
b. Python:
c. MS Azure
3. Save the data + code that generated the output, rather than the output itself. Intermediate files are okay as long as there is clear documentation of how they were created, e.g. using services such as GitLab, GitHub, or Bitbucket.
4.
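A minimal sketch for point 1, assuming a scikit-learn classifier: report accuracy, precision, recall, and ROC AUC side by side so that no single metric gives a one-sided view. The logistic regression model and the imbalanced synthetic data are illustrative assumptions.

```python
# Sketch: several performance metrics reported together instead of one score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data, where accuracy alone would be misleading
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85, 0.15], random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=7)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]

print(f"Accuracy : {accuracy_score(y_test, y_pred):.3f}")
print(f"Precision: {precision_score(y_test, y_pred):.3f}")
print(f"Recall   : {recall_score(y_test, y_pred):.3f}")
print(f"ROC AUC  : {roc_auc_score(y_test, y_prob):.3f}")
```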
