Cat vs Non-Cat - accuracy not quite right

Hi,
I was attempting the Cat vs Non-Cat classifier assignment.
I first tried using a DNN alone, after converting the images to grayscale and normalizing them.
During training, the results (loss and accuracy for both train and validation) fluctuated quite a bit and did not settle down.
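For reference, the preprocessing I mean is roughly the following (a simplified sketch with placeholder variable names, not the exact code from my notebook; it assumes the images are a uint8 array of shape (num_samples, height, width, 3)):

```python
import numpy as np

def to_grayscale_normalized(images):
    # Convert RGB to grayscale using the standard luminance weights
    # (a plain channel average would also work for this purpose).
    gray = np.dot(images[..., :3], [0.299, 0.587, 0.114])
    # Scale pixel values from [0, 255] down to [0, 1].
    gray = gray.astype("float32") / 255.0
    # Add a channel axis so the result has shape (num_samples, height, width, 1).
    return gray[..., np.newaxis]
```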

To try other options, I also built a CNN. With it I see the validation accuracy at around 0.697 and the training accuracy at around 0.66, but when the model is run on the test dataset, the accuracy is too low, at 0.34.
I tried using black-and-white images as well, and the values are still similar to this.
I am not sure where I am going wrong, and would appreciate your help.
The source file is in my “cloudxlab_jupyter_notebooks” directory, under the file name “cat_vs_noncat_classifier.ipynb”.
(https://jupyter.e.cloudxlab.com/user/preedesh2010/tree/cloudxlab_jupyter_notebooks/cat_vs_noncat_classifier.ipynb)

Each of the strategies mentioned above (using a CNN on the original normalized images, using a CNN on normalized black-and-white images, and using a DNN) has been segregated in the code, and I have tried to put in sufficient comments.

Hi,

First, the link you have posted is a private link to your personal Jupyter notebook in the lab; no one other than you has access to it, so no one else will be able to view it. I would request you to post a screenshot of your code instead.

Second, have you tried modifying the learning rate in the DNN model you created? What optimizer are you using? What is the model summary?
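For example, assuming you are using tf.keras (the shapes and layers below are only placeholders, adjust them to your actual model), you can set the learning rate explicitly and print the model summary like this:

```python
import tensorflow as tf

# Placeholder architecture -- replace with your own layers and input shape.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary cat vs non-cat output
])

# Set the learning rate explicitly instead of relying on the default.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
model.compile(optimizer=optimizer,
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Prints layer names, output shapes, and parameter counts.
model.summary()
```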

Thanks.

Thanks, Rajtilak.
I could not figure out a way to export the whole notebook into a single image, so I exported it to a PDF instead. Since it looks like PDFs cannot be uploaded to the forum directly, I have uploaded the exported PDF to Google Drive and given access to everyone. I hope this is also fine.


(I was under the impression that the code kept under cloudxlab_jupyter_notebooks is accessible to you, since the assignment instructions ask for it to be created under this directory.)

To answer your questions on the DNN:
Yes, I did try multiple learning rates (on the DNN approach) and did see changes, but nothing positive. The issue with the DNN was that the loss and accuracy were not converging but kept toggling back and forth.
I have been using sgd as the optimizer and SELU as the activation.
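For reference, the DNN setup is along these lines (a simplified sketch with assumed layer sizes, input shape, and learning rate; the exact architecture is in the attached summaries):

```python
import tensorflow as tf

dnn = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),  # assumed input size
    # SELU is normally paired with the lecun_normal initializer.
    tf.keras.layers.Dense(256, activation="selu", kernel_initializer="lecun_normal"),
    tf.keras.layers.Dense(128, activation="selu", kernel_initializer="lecun_normal"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

dnn.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # assumed value
            loss="binary_crossentropy",
            metrics=["accuracy"])
```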

Attached are the model summaries for each approach I tried:

  1. CNN on Colour images (3 channels)

  2. CNN on BW images (1 channel)

  3. DNN on BW images (attachment: CAT_DNN_BW)

Any updates or thoughts on this please?