\subsection{Dataset}
%https://www.kaggle.com/datasets/mosapabdelghany/adult-income-prediction-dataset
The dataset we decided to study is a labeled income prediction dataset. It includes 14 features with information about the people in the study, and a label giving the income as either more than 50\,000\,\$ per year or less than or equal to 50\,000\,\$ per year. This means we are looking at a binary classification problem. Many of the features are discrete, with only a fixed set of options available; this includes features such as marital status, education and working class. The dataset contains around 32\,500 data points.
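Since the label is one of two income classes, it can be mapped to 0/1 for the classifiers. A minimal sketch (the string values \texttt{<=50K} and \texttt{>50K} are assumptions based on the Kaggle dataset; the rows here are illustrative, not real data):

```python
import pandas as pd

# Illustrative label values; the real dataset has ~32,500 rows.
labels = pd.Series(["<=50K", ">50K", "<=50K", ">50K", "<=50K"])

# Map the two income classes to 0/1, turning the task into binary classification.
y = (labels == ">50K").astype(int)
print(y.tolist())  # [0, 1, 0, 1, 0]
```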
\subsection{Data cleaning and feature engineering}
There were a couple of things in our dataset that had to be modified in order for it to be usable in our ML application. Some of the features are redundant or not of interest for our project. We remove the redundant feature education, since another feature already contains the same data in numerically encoded form. We also chose to remove the feature 'fnlwgt', since it is a pre-calculated weight used by the Census Bureau to estimate population statistics; because we want to estimate income from the other features rather than from this weight, we remove it. The dataset contains a mix of numerical and non-numerical features. Since the machine learning models cannot use non-numerical data, we encode the non-numerical data into corresponding numbers. This is done with the label encoder built into scikit-learn, applied to all non-numerical columns.
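The cleaning steps above can be sketched as follows. This is a minimal illustration, not the project's actual script: the rows are synthetic, and only the column names \texttt{education}, \texttt{fnlwgt} and \texttt{workclass} are taken from the dataset.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Synthetic stand-in for the real dataset; values are illustrative only.
df = pd.DataFrame({
    "education": ["Bachelors", "HS-grad", "Bachelors"],
    "education-num": [13, 9, 13],
    "fnlwgt": [77516, 83311, 215646],
    "workclass": ["Private", "Self-emp", "Private"],
})

# Drop the redundant education column and the pre-calculated census weight.
df = df.drop(columns=["education", "fnlwgt"])

# Encode every remaining non-numerical column with scikit-learn's LabelEncoder.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

print(list(df.columns))  # all remaining columns are now numeric
```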
\subsection{Handling missing values}
With our numerical version of the dataset we found, using the info function in pandas, that around 2\,500 values were NaN. We reasoned that filling these values with something like the mean of the column does not make much sense for our application: with many discrete categories a mean value is meaningless, especially since many of the categories were assigned arbitrary numbers by the encoding.
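Counting the missing values can be sketched as below. The text does not state the final strategy, so dropping the incomplete rows is shown here purely as an assumption, one common alternative to mean-filling; the rows are synthetic.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in: missing entries appear as NaN, as in our encoded dataset.
df = pd.DataFrame({
    "workclass": ["Private", np.nan, "Self-emp", np.nan],
    "age": [39, 50, 38, 27],
})

# Count NaN cells (df.info() reports the same non-null counts per column).
n_missing = int(df.isna().sum().sum())

# Assumed strategy for illustration: drop rows with any missing value.
cleaned = df.dropna()
print(n_missing, len(cleaned))  # 2 2
```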
\section{Model selection}