Muhammad Nabil Muyassar Rahman
Deep learning is a branch of artificial intelligence that imitates how the human brain processes data and forms patterns, which are vital for making strategic decisions. Deep learning is also known as deep neural networking, since its systems are capable of learning without supervision from unstructured data. Deep learning helps extract insight from massive amounts of unstructured data that would be strenuous for humans to process and understand. It uses hierarchical levels of artificial neural networks through which the system undergoes the process of machine learning. In general, deep learning learns from unstructured and unlabeled data. Deep learning is valuable to organizations, for example in helping to detect fraud or money laundering. In this portfolio, we will use deep learning to find patterns in a drowsiness dataset. After we train and validate on the dataset with a TensorFlow Keras CNN model, we can use the resulting model to predict on new data and see how accurate it is.
Before we start, we need to download the dataset that can be found through this link: https://www.kaggle.com/datasets/dheerajperumandla/drowsiness-dataset
Below are the explanations and steps to prepare the data, preprocess the data, create the model, fit the model, and predict on new data:
Here you can see the data classes and the indices they will map to in the model later.
As shown, the ‘subset’ argument tells the method which portion of the data to return. Since ‘validation_split’ was set to 0.1, in other words 10%, the validation subset will contain roughly total data × 0.1 samples, and the training subset the remaining 90%.
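The loading step described above can be sketched as follows. This is a minimal sketch, assuming the Kaggle dataset has been extracted to a local folder; the directory path, image size, and batch size are assumptions, not values confirmed by this portfolio.

```python
import tensorflow as tf

def load_datasets(data_dir, image_size=(145, 145), batch_size=32):
    """Return (train_ds, val_ds) built from one image folder."""
    common = dict(
        validation_split=0.1,  # reserve 10% of the images for validation
        seed=42,               # same seed so the two subsets do not overlap
        image_size=image_size,
        batch_size=batch_size,
    )
    train_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset="training", **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset="validation", **common)
    # class_names lists the subfolder names in the index order the model
    # will use for its outputs
    print(train_ds.class_names)
    return train_ds, val_ds
```

Note that both calls must use the same ‘seed’ and ‘validation_split’ so that the training and validation subsets are disjoint.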
This is the link to TensorFlow documentation for the keras.layers: https://www.tensorflow.org/api_docs/python/tf/keras/layers
The first step is choosing the type of model. A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor. The next step is adding the layers; here the possibilities are nearly endless, since the right architecture varies with the dataset. Finally, compile the model with a loss function, an optimizer, and metrics.
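The three steps above can be sketched as a small example. This is one possible architecture, not the exact one used in this portfolio; the layer sizes, input shape, and the assumption of four output classes are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Step 1: create a Sequential model; Step 2: add the layers
model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(145, 145, 3)),  # normalize pixels
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4),  # one logit per class
])

# Step 3: compile the model
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```

With the model compiled, calling model.fit(train_ds, validation_data=val_ds, epochs=...) trains it and records the loss history discussed next.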
This is the training loss and validation loss of the model; the validation loss is rising in an erratic, uncontrolled way, which usually points to a problem with the amount of training or validation data.
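A loss plot like the one described can be produced from the object returned by model.fit(). The history dict below is a made-up stand-in used only to make the sketch self-contained; in practice you would read history.history from a real training run.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Stand-in for history.history from model.fit(); values are illustrative
history = {
    "loss":     [1.2, 0.8, 0.5, 0.3, 0.2],  # training loss falling steadily
    "val_loss": [1.1, 0.9, 1.0, 1.4, 1.9],  # validation loss rising erratically
}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="Training loss")
plt.plot(epochs, history["val_loss"], label="Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.savefig("loss_curves.png")
```

A widening gap between the two curves, as in this stand-in data, is the visual signature of the problem discussed below.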
This means the model is very good at reading and predicting data it has seen before, but not good enough at predicting new data it has never seen; in other words, it is overfitting.
It can be seen that the predictions still have some problems. This can be addressed by adjusting the split between training and validation data, adding more data, tuning the learning rate, and more.
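The prediction step on new data can be sketched as follows. This is a minimal sketch: the image size and the idea of passing in the class names from the loaded dataset are assumptions carried over from the earlier steps, and the file path would be whatever new image you want to test.

```python
import numpy as np
import tensorflow as tf

def predict_image(model, image_path, class_names, image_size=(145, 145)):
    """Return (predicted_class, confidence) for one image file."""
    img = tf.keras.utils.load_img(image_path, target_size=image_size)
    arr = tf.keras.utils.img_to_array(img)
    arr = tf.expand_dims(arr, 0)  # make a batch of one image
    logits = model.predict(arr)
    probs = tf.nn.softmax(logits[0]).numpy()  # logits -> probabilities
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])
```

Comparing the returned class against the known label of each test image is how the accuracy of the model on unseen data can be judged.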