Yapped about hyperparameter tuning (not done)
@@ -83,9 +83,26 @@ Before doing any sort of training or analysis on the data, se split it into trai
\section{Model selection}
When selecting models for this project we have to limit ourselves to models that are appropriate for the type of problem we are trying to solve. The problem is a classification task, so all models intended for regression are immediately ruled out. That still leaves plenty of classification models to choose from. Many of them, however, are best suited to data with non-discrete features; this includes models such as logistic regression, KNN and similar classifiers. Since many of our features are non-numerical and have been converted into arbitrary numbers, these types of models would not be optimal. What is left is Gaussian Naïve Bayes and the various tree-based models. Naïve Bayes can be somewhat troublesome for this dataset since we have found that some features are slightly correlated, which goes against its independence assumption. However, this does not necessarily make it an inappropriate method, as it has been found to perform well despite this strict assumption. We nevertheless chose to focus on the tree-based models, namely the decision tree and the random forest. We decided to implement two different models: first a decision tree, which we tune to perform as well as we can, and then a random forest, which may not be the absolute best model but, being an extension of the decision tree, is interesting to compare against it. We then analyze both methods to see whether they are good enough and whether there is any meaningful difference between the two.
\subsection{Data cleaning and feature engineering}
There were a couple of things in our dataset that had to be modified for it to be usable in our ML application. Some of the features are redundant or not interesting for our project. We remove the redundant feature 'education', since there is already another, numerically encoded feature containing the same information. We also chose to remove the feature 'fnlwgt', since it is a pre-calculated weight used by the Census Bureau to estimate population statistics; as we want to make predictions based on the other features and not on this already calculated weight, we drop it. The dataset contains a mix of numerical and non-numerical features. Since the machine learning models cannot use non-numerical data directly, we have to encode the non-numerical features as corresponding numbers. This is done with the label encoder built into scikit-learn, applied to all non-numerical columns.
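A minimal sketch of how this cleaning step could look, assuming the raw data has been read into a pandas DataFrame (the file name below is a placeholder, and the exact column names may differ from our script):
\begin{verbatim}
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Placeholder file name for the raw census data.
df = pd.read_csv("census_income.csv")

# Drop the redundant textual 'education' column (a numerically
# encoded version already exists) and the precomputed census
# weight 'fnlwgt'.
df = df.drop(columns=["education", "fnlwgt"])

# Encode every remaining non-numerical column as integers.
# (Rows with missing values are assumed to be removed first,
# as described in the next subsection.)
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])
\end{verbatim}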
\subsection{Handling missing values}
Using the info function in pandas on the numerical version of our dataset, we found that around 2500 values were NaN. We reasoned that filling these values with something such as the mean of the category does not make much sense for our application: most features are discrete categories, many of them encoded with arbitrary numbers, so a mean value carries no meaning. We therefore decided to only use complete data points. This resulted in removing about 2500 data points, or roughly 6\% of the total.
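A sketch of this filtering step, continuing on the DataFrame from the previous sketch (in practice the incomplete rows would be dropped before the label encoding):
\begin{verbatim}
# Inspect how many values are missing per column.
df.info()
print(df.isna().sum())

# Keep only complete data points; for our data this removed
# roughly 6% of the rows.
df = df.dropna()
\end{verbatim}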
\subsection{Training, validation and test sets}
Before doing any training or analysis on the data, we split it into training, validation and test sets. We did this by first setting aside a random 20\% of the data as test data. This data is reserved for the final evaluation of the model and will not be touched until the model is finished. We then made a further split of the remaining data, where 25\% was designated as validation data. This data will be used for calibration of the model and hyperparameter tuning. The rest of the data, 60\% of the total or around 18000 data points, will be used to train the model.
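A minimal sketch of this split, assuming the target column is called 'income' (a placeholder name):
\begin{verbatim}
from sklearn.model_selection import train_test_split

X = df.drop(columns=["income"])   # 'income' is a placeholder target name
y = df["income"]

# 20% of the data is held out as the final test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)

# 25% of the remaining 80% (20% of the total) becomes validation
# data, leaving 60% of the total for training.
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42)
\end{verbatim}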
\section{Model Training and Hyperparameter Tuning}
During the model training there are some important steps we can take to improve the accuracy and reliability of our models. One thing we implement is cross-validation, which gives a more reliable estimate of how well a model generalizes than a single train/validation split. Another very important part of the model training is finding the optimal hyperparameters, which is a key step in minimizing the risk of overfitting: a tree that is allowed to grow too complex will fit the noise in the training data, while an overly constrained tree will underfit. The specific methods and hyperparameters we use are described in the following subsections.
\subsection{Models and methods used}
One of the most fundamental procedures was hyperparameter tuning, which was performed inside a custom class that handles model optimization and comparison for the different models. The class handles the full workflow of tuning the hyperparameters, training the models and recording evaluation metrics. More specifically, the method used for hyperparameter tuning is scikit-learn's GridSearchCV with accuracy as the scoring metric. This method tests different combinations of hyperparameters to establish the best ones. In addition, it incorporates cross-validation to prevent overfitting and increase the reliability of the results. For the cross-validation we used scikit-learn's stratified k-fold. This type of cross-validation is beneficial as it preserves the percentage of samples of each class in every fold, making the evaluation more robust. We used 10 folds for the cross-validation; there is of course no ``correct'' number of folds to use, as it is a trade-off between performance and computational efficiency.
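As a condensed sketch of the core of this tuning step, stripped of the surrounding custom class and continuing from the training data defined earlier (the grid values are placeholders, not the grids we actually used):
\begin{verbatim}
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

# 10-fold stratified cross-validation, as described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

# Placeholder grid; the actual grids are discussed below.
param_grid = {"max_depth": [5, 10, 15],
              "min_samples_split": [2, 10, 50]}

search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                      param_grid, scoring="accuracy", cv=cv)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
\end{verbatim}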
The hyperparameters included in the grid for the decision tree were the maximum depth and the minimum sample split. The maximum depth hyperparameter decides how deep the tree is allowed to grow. If a tree is allowed to grow very deep there is a high risk of overfitting; conversely, a shallow tree risks underfitting instead. The minimum sample split states how many data points a node must contain before a new split can be created. This is also a good measure against overfitting: if it is very low we risk fitting the noise in the data instead of the general trend and end up overfitting. It is also important that it is not too large, since we then lose information and underfit instead. For the random forest the hyperparameters in the grid were the maximum depth, the minimum sample split and the number of estimators, which decides how many trees are used in the random forest algorithm. % Something about XGBoost as well
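As an illustration, the two grids could look something like this (the values are placeholders rather than the exact ones used in our script):
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical coarse grids for the two models.
dt_grid = {"max_depth": [3, 5, 10, 20, None],
           "min_samples_split": [2, 10, 50, 200]}
rf_grid = {"max_depth": [3, 5, 10, 20, None],
           "min_samples_split": [2, 10, 50, 200],
           "n_estimators": [50, 100, 200]}

models = {
    "decision_tree": (DecisionTreeClassifier(random_state=42), dt_grid),
    "random_forest": (RandomForestClassifier(random_state=42), rf_grid),
}
\end{verbatim}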
When performing the hyperparameter tuning we started out with a rough grid to get a decent estimate of the optimal configuration. From the results we then ran a finer grid search around that optimal configuration. This way we were able to cover both a wide range and a more precise range of values without severely increasing the computational load.
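For example, if the coarse grid pointed to a maximum depth of around 10, a second, finer search could zoom in on that region (values again purely illustrative):
\begin{verbatim}
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Hypothetical refined grid around the coarse optimum.
fine_grid = {"max_depth": [8, 9, 10, 11, 12],
             "min_samples_split": [20, 35, 50, 65, 80]}

# cv=10 uses stratified 10-fold splitting for classifiers.
fine_search = GridSearchCV(DecisionTreeClassifier(random_state=42),
                           fine_grid, scoring="accuracy", cv=10)
fine_search.fit(X_train, y_train)
\end{verbatim}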
\subsection{Caveats and restrictions}
Although the validation results produced by the script are quite promising, there are a couple of important notes to make, not only to better understand the final models but also to avoid pitfalls in potential future projects. Firstly, in our script we decided not to use any standardization, as this is somewhat of a special case where the models used do not require it: tree-based models split on one feature at a time, so rescaling a feature does not change the splits that can be made.
% Elaborate...
% Secondly, there are more hyperparameters that one might want to consider...
% Continuing, the scoring metric used is not always the best choice. In fact, the scoring metric one should use is highly dependent on what one's goal is...
\section{Model Evaluations}
There are two interesting parts to look at after our analysis. One part is to analyze how well the models actually performed and to compare the two models we have chosen to study. We fine-tuned our models using the validation part of the data; after running them on the test data we can see how well they actually perform. A good way to get a quick overview of how well a model classifies is to look at its confusion matrix.
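A sketch of how such an overview can be produced with scikit-learn, assuming 'best_model' is the tuned classifier (for example 'search.best_estimator_' from the grid search above) and X_test, y_test are the held-out test data from the earlier split:
\begin{verbatim}
from sklearn.metrics import accuracy_score, confusion_matrix

# Predict on the untouched test set with the tuned model.
y_pred = best_model.predict(X_test)

print("Test accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
\end{verbatim}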