
An Improved CNN Model for Classification of Apple Leaf Disease and Visualization Using Weighted Gradient Class Activation Map




The Convolutional Neural Network (CNN), a type of feed-forward network composed of convolutional, pooling, and fully connected layers, has become the dominant and most widely used deep learning architecture. The significantly enhanced effectiveness of ConvNets has made CNNs the go-to architecture for almost every image-processing application. CNNs automatically and adaptively learn spatial hierarchies of features with high accuracy, precision, and efficiency. This paper proposes three CNN models with 5, 6, and 7 layers, each paired with two types of classification layers at the top of the model, resulting in six model variants. Each model is trained on apple leaf disease images obtained by applying augmentation to the Plant Village dataset, which contains images of healthy leaves and three types of leaf disease. The trained models are compared on training time, testing accuracy, and testing time. The best-performing model (in our case, the 6-layer model with a fully connected layer as classifier, 6FC) yields 99.14% accuracy. This best-performing model is also compared with state-of-the-art models such as VGG-16, InceptionV3, and MobileNetV2, trained using a transfer learning approach. After model comparison, we found that our best model (6FC) outperformed the other models on the evaluated performance metrics, with a 3.94% gain in accuracy, 25.97% fewer parameters, and lower training time (0.51 hr) and testing time (20.5 s) compared to VGG-16. Precision, recall, and F1-score values for the proposed model are also high (between 0.98 and 1). The weighted gradient class activation map (Grad-CAM) technique generates visualizations of class predictions on the test dataset, and these Grad-CAM visualizations validate the prediction scores attained by the proposed model.
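The parameter savings reported above (25.97% fewer parameters than VGG-16) follow directly from how convolutional and fully connected layer sizes are counted. A minimal sketch of those counting formulas is given below; the example layer shapes are illustrative assumptions, not the exact configuration of the 6FC model.

```python
# Parameter-count helpers for CNN layers. The layer shapes used in the
# examples are illustrative, not the paper's exact 6FC configuration.

def conv2d_params(kernel, c_in, c_out):
    """A conv layer has (kernel*kernel*c_in) weights plus 1 bias per filter."""
    return (kernel * kernel * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """A fully connected layer has one weight per input per unit, plus biases."""
    return (n_in + 1) * n_out

# Example: VGG-16's first conv layer (3x3 kernels, RGB input, 64 filters)
print(conv2d_params(3, 3, 64))   # 1792
# Example: VGG-16's final classifier layer (4096 units -> 1000 classes)
print(dense_params(4096, 1000))  # 4097000
```

Most of VGG-16's roughly 138 million parameters sit in its fully connected head, which is why a compact custom model with fewer and narrower layers can cut the parameter count sharply while remaining accurate on a four-class task.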

Keywords

Convolutional Neural Network, Grad-CAM, Deep Learning, Data Augmentation, Transfer Learning
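The Grad-CAM visualization described in the abstract weights each feature map of the last convolutional layer by the global-average-pooled gradient of the class score, then applies a ReLU so that only regions with a positive influence on the predicted class remain. A minimal pure-Python sketch of that weighting step follows; the function name and toy map shapes are illustrative, not the paper's implementation.

```python
def grad_cam_heatmap(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from last-conv-layer activations.

    feature_maps: list of K maps A^k, each an HxW list of lists
    gradients:    list of K maps of dy_c/dA^k, same shapes
    Returns L = ReLU(sum_k alpha_k * A^k), where alpha_k is the
    global-average-pooled gradient (the importance weight of channel k).
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    heatmap = [[0.0] * w for _ in range(h)]
    for A, G in zip(feature_maps, gradients):
        # alpha_k: global average pooling over the gradient map
        alpha = sum(sum(row) for row in G) / (h * w)
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += alpha * A[i][j]
    # ReLU keeps only features that push the class score up
    return [[max(0.0, v) for v in row] for row in heatmap]
```

In practice the resulting low-resolution heatmap is upsampled to the input image size and overlaid on the leaf image, highlighting the lesion regions that drove the class prediction.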



Authors

Dharmendra Kumar Mahato
Department of Electronic Science, Babasaheb Bhimrao Ambedkar Bihar University, India
Amit Pundir
Department of Electronics, Maharaja Agrasen College, India
Geetika Jain Saxena
Department of Electronics, Maharaja Agrasen College, India
