Balasubramanian, S.
- Co-Curing Noisy Annotations for Facial Expression Recognition
Authors
Affiliations
1 Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Image and Video Processing, Vol. 12, No. 1 (2021), Pagination: 2508-2516
Abstract
Driven by advances in technology that facilitate the implementation of deep neural networks (DNNs), and by the availability of large-scale datasets, automatic recognition performance of machines has increased by leaps and bounds. This is also true of facial expression recognition (FER), wherein a machine automatically classifies a given facial image into one of the basic expressions. However, annotations of large-scale FER datasets suffer from noise due to factors such as crowd-sourcing and automatic labelling based on keyword search. Such noisy annotations impede the performance of FER because of the memorization ability of DNNs. To address this, this paper proposes a learning algorithm called Co-curing: peer training of two joint networks using a supervision loss and a mimicry loss that are balanced dynamically, supplemented with a relabeling module to correct noisy annotations. Specifically, the peer networks are trained independently using the supervision loss during the early part of training. As training progresses, the mimicry loss is given higher weightage to bring consensus between the two networks. Co-curing does not need to know the noise rate. Samples with wrong annotations are relabeled based on the agreement of the peer networks. Experiments on synthetic as well as real-world noisy datasets validate the effectiveness of the method. State-of-the-art (SOTA) results are reported on benchmark in-the-wild FER datasets such as RAF-DB (89.70%), FERPlus (89.6%) and AffectNet (61.7%).
Keywords
Noisy Annotations, Facial Expression Recognition, Co-Curing, Mimicry Loss, Peer Learning
References
- C. Darwin and P. Prodger, “The Expression of the Emotions in Man and Animals”, Oxford University Press, 1998.
- S. Li, W. Deng, “Deep Facial Expression Recognition: A Survey”, IEEE Transactions on Affective Computing, Early Access, 2020.
- P. Ekman and W.V. Friesen, “Constants across Cultures in the Face and Emotion”, Journal of Personality and Social Psychology, Vol. 17, No. 2, pp. 124-129, 1971.
- P. Ekman, “Strong Evidence for Universals in Facial Expressions: A Reply to Russell’s Mistaken Critique”, Psychological Bulletin, Vol. 115, No. 2, pp. 268-287, 1994.
- D. Matsumoto, “More Evidence for the Universality of a Contempt Expression”, Motivation and Emotion, Vol. 16, No. 4, pp. 363-368, 1992.
- X. Fan, Z. Deng, K. Wang, X. Peng and Y. Qiao, “Learning Discriminative Representation for Facial Expression Recognition from Uncertainties”, Proceedings of IEEE International Conference on Image Processing, pp. 903-907, 2020.
- J. MA, “Facial Expression Recognition using Hybrid Texture Features based Ensemble Classifier”, International Journal of Advanced Computer Science and Applications, Vol. 6, pp. 1-13, 2017.
- C. Shan, S. Gong and P.W. McOwan, “Facial Expression Recognition based on Local Binary Patterns: A Comprehensive Study”, Image and Vision Computing, Vol. 27, No. 6, pp. 803-816, 2009.
- P. Hu, D. Cai, S. Wang, A. Yao and Y. Chen, “Learning Supervised Scoring Ensemble for Emotion Recognition in the Wild”, Proceedings of ACM International Conference on Multimodal Interaction, pp. 553-560, 2017.
- H. Chun Lo and R. Chung, “Facial Expression Recognition Approach for Performance Animation”, Proceedings of IEEE International Workshop on Digital and Computational Video, pp. 613-622, 2001.
- T. Kanade, J.F. Cohn and Y. Tian, “Comprehensive Database for Facial Expression Analysis”, Proceedings of 4th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46-53, 2000.
- P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression”, Proceedings of IEEE International Workshops on Computer Vision and Pattern Recognition, pp. 94-101, 2010.
- G. Zhao, X. Huang, M. Taini, S.Z. Li and M. Pietikainen, “Facial Expression Recognition from Near-Infrared Videos”, Proceedings of IEEE International Conference on Image and Vision Computing, pp. 607-619, 2011.
- F.Y. Shih, C.F. Chuang and P.S.P. Wang, “Performance Comparisons of Facial Expression Recognition in JAFFE Database”, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 22, No. 3, pp. 445-459, 2008.
- A. Mollahosseini, B. Hasani and M.H. Mahoor, “AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild”, IEEE Transactions on Affective Computing, Vol. 10, No. 1, pp. 18-31, 2017.
- E. Barsoum, C. Zhang, C.C. Ferrer and Z. Zhang, “Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution”, Proceedings of 18th ACM International Conference on Multimodal Interaction, pp. 279-283, 2016.
- S. Li and W. Deng, “Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition”, IEEE Transactions on Image Processing, Vol. 28, No. 1, pp. 356-370, 2018.
- S. Li, W. Deng and J. Du, “Reliable Crowdsourcing and Deep Locality Preserving Learning for Expression Recognition in the Wild”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2852-2861, 2017.
- K. Wang, X. Peng, J. Yang, S. Lu and Y. Qiao, “Suppressing Uncertainties for Large-Scale Facial Expression Recognition”, Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6897-6906, 2020.
- D. Arpit, S. Jastrzębski, N. Ballas, D. Krueger, E. Bengio, M.S. Kanwal, T. Maharaj, A. Fischer, A. Courville and Y. Bengio, “A Closer Look at Memorization in Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 233-242, 2017.
- C. Zhang, S. Bengio, M. Hardt, B. Recht and O. Vinyals, “Understanding Deep Learning requires Rethinking Generalization”, Proceedings of International Conference on Machine Learning, pp. 1-13, 2017.
- B. Frenay and M. Verleysen, “Classification in the Presence of Label Noise: A Survey”, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 5, pp. 845-869, 2013.
- J. Goldberger and E. Ben-Reuven, “Training Deep Neural-Networks using A Noise Adaptation Layer”, Proceedings of International Conference on Machine Learning, pp. 1-5, 2016.
- G. Patrini, A. Rozza, A. Menon, R. Nock and L. Qu, “Making Neural Networks Robust to Label Noise: A Loss Correction Approach”, Proceedings of International Conference on Machine Learning, pp. 1-9, 2016.
- B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang and M. Sugiyama, “Co-Teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels”, Advances in Neural Information Processing Systems, pp. 1-13, 2018.
- Darshan Gera and S. Balasubramanian, “Landmark Guidance Independent Spatio-Channel Attention and Complementary Context Information based Facial Expression Recognition”, Pattern Recognition Letters, Vol. 145, pp. 58-66, 2021.
- Samuli Laine and Timo Aila, “Temporal Ensembling for Semi-Supervised Learning”, Proceedings of International Conference on Learning Representations, pp. 1-13, 2017.
- X. Yu, B. Han, J. Yao, G. Niu, I. Tsang, M. Sugiyama, “How does Disagreement Help Generalization Against Label Corruption?”, Proceedings of International Conference on Machine Learning, pp. 7164-7173, 2019.
- X. Wang, Y. Hua, E. Kodirov and N.M. Robertson, “IMAE for Noise-Robust Learning: Mean Absolute Error does not Treat Examples Equally and Gradient Magnitude’s Variance Matters”, Proceedings of International Conference on Machine Learning, pp. 1-14, 2019.
- Y. Wang, X. Ma, Z. Chen, Y. Luo, J. Yi and J. Bailey, “Symmetric Cross Entropy for Robust Learning with Noisy Labels”, Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 322-330, 2019.
- Ying Zhang, Tao Xiang, Timothy M. Hospedales and Huchuan Lu, “Deep Mutual Learning”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4320-4328, 2018.
- Zhilu Zhang and Mert Sabuncu, “Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels”, Proceedings of IEEE Conference on Neural Information Processing Systems, pp. 8778-8788, 2018.
- Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan and Andrew Rabinovich, “Training Deep Neural Networks on Noisy Labels with Bootstrapping”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3320-3328, 2015.
- H. Siqueira, S. Magg and S. Wermter, “Efficient Facial Feature Learning with Wide Ensemble-Based Convolutional Neural Networks”, Proceedings of AAAI Conference on Artificial Intelligence, pp. 5800-5809, 2020.
- P. Jiang, B. Wan, Q. Wang and J. Wu, “Fast and Efficient Facial Expression Recognition using a Gabor Convolutional Network”, IEEE Signal Processing Letters, Vol. 27, pp. 1954-1958, 2020.
- P. Ding and R. Chellappa, “Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition”, Proceedings of IEEE International Joint Conference on Biometrics, pp. 1-9, 2020.
- Y. Li, J. Zeng, S. Shan and X. Chen, “Occlusion Aware Facial Expression Recognition using CNN with Attention Mechanism”, IEEE Transactions on Image Processing, Vol. 28, No. 5, pp. 2439-2450, 2018.
- E. Malach and S. Shalev-Shwartz, “Decoupling ‘When to Update’ from ‘How to Update’”, Advances in Neural Information Processing Systems, pp. 1-11, 2017.
- H. Wei, L. Feng, X. Chen and B. An, “Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization”, Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13726-13735, 2020.
- F. Sarfraz, E. Arani and B. Zonooz, “Noisy Concurrent Training for Efficient Learning under Label Noise”, Proceedings of IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3159-3168, 2021.
- J. Zeng, S. Shan and X. Chen, “Facial Expression Recognition with Inconsistently Annotated Datasets”, Proceedings of European Conference on Computer Vision, pp. 222-237, 2018.
- K. Zhang, Z. Zhang, Z. Li and Y. Qiao, “Joint Face Detection and Alignment using Multitask Cascaded Convolutional Networks”, IEEE Signal Processing Letters, Vol. 23, No. 10, pp. 1499-1503, 2016.
- K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition”, Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
- Y. Guo, L. Zhang, Y. Hu, X. He and J. Gao, “Ms-Celeb-1m: A Dataset and Benchmark for Large-Scale Face Recognition”, Proceedings of European Conference on Computer Vision, pp. 87-102, 2016.
- K. Wang, X. Peng, J. Yang, D. Meng and Y. Qiao, “Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition”, IEEE Transactions on Image Processing, Vol. 29, pp. 4057-4069, 2020.
- Iterative Collaborative Routing among Equivariant Capsules for Transformation-Robust Capsule Networks
Authors
Affiliations
1 Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Image and Video Processing, Vol. 13, No. 2 (2022), Pagination: 2865-2873
Abstract
Transformation-robustness is an important property for machine learning models that perform image classification. Many methods aim to bestow this property on models through data augmentation strategies, while more formal guarantees are obtained via equivariant models. We recognise that compositional, or part-whole, structure is also an important aspect of images that must be considered when building transformation-robust models. We therefore propose a capsule network model that is, at once, equivariant and compositionality-aware. Equivariance of our capsule network comes from the use of equivariant convolutions in a carefully chosen novel architecture. Awareness of compositionality comes from our proposed novel, iterative, graph-based routing algorithm, termed Iterative Collaborative Routing (ICR). ICR, the core of our contribution, weights the predictions made for capsules based on an iteratively averaged score of the degree-centralities of their nearest neighbours. Experiments on transformed image classification on FashionMNIST, CIFAR-10, and CIFAR-100 show that our model using ICR outperforms convolutional and capsule baselines and achieves state-of-the-art performance.
Keywords
Equivariance, Transformation Robustness, Capsule Network, Image Classification, Deep Learning
References
- T. Cohen and M. Welling, “Group Equivariant Convolutional Networks”, Proceedings of International Conference on Machine Learning, pp. 2990-2999, 2016.
- T.S. Cohen and M. Welling, “Spherical CNNs”, Proceedings of International Conference on Learning Representations, pp. 1-6, 2018.
- S.R. Venkataraman, S. Balasubramanian and R.R. Sarma, “Building Deep Equivariant Capsule Networks”, Proceedings of International Conference on Learning Representations, pp. 1-6, 2020.
- M. Weiler and G. Cesa, “General E(2)-Equivariant Steerable CNNs”, Advances in Neural Information Processing Systems, Vol. 32, pp. 1-16, 2019.
- S. Batzner, J.P. Mailoa, M. Kornbluth and B. Kozinsky, “E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials”, Nature Communications, Vol. 13, No. 1, pp. 1-11, 2022.
- C. Esteves, K. Daniilidis and A. Makadia, “Cross-Domain 3D Equivariant Image Embeddings”, Proceedings of International Conference on Machine Learning, pp. 1812-1822, 2019.
- G.E. Hinton, A. Krizhevsky and S.D. Wang, “Transforming Auto-Encoders”, Proceedings of International Conference on Artificial Neural Networks, pp. 44-51, 2011.
- S. Sabour, N. Frosst and G.E. Hinton, “Dynamic Routing between Capsules”, Advances in Neural Information Processing Systems, Vol. 30, pp. 1-14, 2017.
- G.E. Hinton, S. Sabour and N. Frosst, “Matrix Capsules with EM Routing”, Proceedings of International Conference on Learning Representations, pp. 1-8, 2018.
- J. Rajasegaran, V. Jayasundara, S. Jayasekara and R. Rodrigo, “DeepCaps: Going Deeper with Capsule Networks”, Proceedings of International Conference on Computer Vision and Pattern Recognition, pp. 10725-10733, 2019.
- J. Choi, H. Seo, S. Im and M. Kang, “Attention Routing between Capsules”, Proceedings of International Conference on Computer Vision, pp. 1-5, 2019.
- J.E. Lenssen, M. Fey and P. Libuschewski, “Group Equivariant Capsule Networks”, Advances in Neural Information Processing Systems, Vol. 31, pp. 1-14, 2018.
- N. Garau, N. Bisagno and N. Conci, “DECA: Deep Viewpoint-Equivariant Human Pose Estimation using Capsule Autoencoders”, Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11677-11686, 2021.
- B. Ozcan, F. Kinli and F. Kiraç, “Quaternion Capsule Networks”, Proceedings of International Conference on Pattern Recognition, pp. 6858-6865, 2021.
- M.D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks”, Proceedings of International Conference on Computer Vision, pp. 818-833, 2014.
- K. Ahmed and L. Torresani, “Star-Caps: Capsule Networks with Straight-Through Attentive Routing”, Advances in Neural Information Processing Systems, Vol. 32, pp. 1-14, 2019.
- C. Pan and S. Velipasalar, “PT-CapsNet: A Novel Prediction-Tuning Capsule Network Suitable for Deeper Architectures”, Proceedings of International Conference on Computer Vision, pp. 11996-12005, 2021.
- A. Krizhevsky and G. Hinton, “Learning Multiple Layers of Features from Tiny Images”, Technical Report, University of Toronto, 2009.
- K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
- I. Loshchilov and F. Hutter, “Decoupled Weight Decay Regularization”, Proceedings of International Conference on Learning Representations, pp. 1-15, 2018.
- L.N. Smith and N. Topin, “Super-Convergence: Very Fast Training of Neural Networks using Large Learning Rates”, Proceedings of International Conference on Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Vol. 11006, pp. 369-386, 2019.
- Shape Face Remove Guides, Available at http://sharenoesis.com/wp-content/uploads/2010/05/7ShapeFaceRemoveGuides.jpg, Accessed 2010.
- Pixabay, Available at https://cdn.pixabay.com/photo/2016/11/29/11/57/dolphins-1869337_960_720.jpg, Accessed 2016.
- RobustCaps: A Transformation-Robust Capsule Network for Image Classification
Authors
Affiliations
1 Department of Mathematics and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Image and Video Processing, Vol. 13, No. 3 (2023), Pagination: 2883-2892
Abstract
Geometric transformations of the training data as well as the test data present challenges to the use of deep neural networks for vision-based learning tasks. To address this issue, we present a deep neural network model that exhibits the desirable property of transformation-robustness. Our model, termed RobustCaps, uses group-equivariant convolutions in an improved capsule network model. RobustCaps uses a global context-normalised procedure in its routing algorithm to learn transformation-invariant part-whole relationships within image data. Learning such relationships allows our model to outperform both capsule and convolutional neural network baselines on transformation-robust classification tasks. Specifically, RobustCaps achieves state-of-the-art accuracies on CIFAR-10, FashionMNIST, and CIFAR-100 when the images in these datasets are subjected to train- and test-time rotations and translations.
Keywords
Deep Learning, Capsule Networks, Transformation Robustness, Equivariance
References
- T. Cohen and M. Welling, “Group Equivariant Convolutional Networks”, Proceedings of International Conference on Machine Learning, pp. 2990-2999, 2016.
- M. Weiler and G. Cesa, “General E(2)-Equivariant Steerable CNNs”, Advances in Neural Information Processing Systems, Vol. 32, pp. 1-15, 2019.
- T.S. Cohen and M. Welling, “Spherical CNNs”, Proceedings of International Conference on Learning Representations, pp. 1-7, 2018.
- G.E. Hinton, A. Krizhevsky and S.D. Wang, “Transforming Auto-Encoders”, Proceedings of International Conference on Artificial Neural Networks, pp. 44-51, 2011.
- S. Sabour, N. Frosst and G.E. Hinton, “Dynamic Routing between Capsules”, Advances in Neural Information Processing Systems, Vol. 30, pp. 1-12, 2017.
- G.E. Hinton, S. Sabour and N. Frosst, “Matrix Capsules with EM Routing”, Proceedings of International Conference on Learning Representations, pp. 241-254, 2018.
- S.R. Venkataraman, S. Balasubramanian and R.R. Sarma, “Building Deep Equivariant Capsule Networks”, Proceedings of International Conference on Learning Representations, pp. 1-10, 2020.
- R. Pucci, C. Micheloni and N. Martinel, “Self-Attention Agreement Among Capsules”, Proceedings of International Conference on Computer Vision, pp. 272-280, 2021.
- J.E. Lenssen, M. Fey and P. Libuschewski, “Group Equivariant Capsule Networks”, Advances in Neural Information Processing Systems, Vol. 31, pp. 1-15, 2018.
- T.S. Cohen and M. Weiler, “A General Theory of Equivariant CNNs on Homogeneous Spaces”, Advances in Neural Information Processing Systems, Vol. 32, pp. 1-12, 2019.
- J. Rajasegaran, S. Seneviratne and R. Rodrigo, “DeepCaps: Going Deeper with Capsule Networks”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10725-10733, 2019.
- K. Ahmed and L. Torresani, “Star-Caps: Capsule Networks with Straight-Through Attentive Routing”, Advances in Neural Information Processing Systems, Vol. 32, pp. 167-178, 2019.
- H. Xiao, K. Rasul and R. Vollgraf, “Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms”, arXiv Preprint, arXiv:1708.07747, 2017.
- A. Krizhevsky and G. Hinton, “Learning Multiple Layers of Features from Tiny Images”, Available at https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf, 2009.
- K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition”, Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
- D. Romero and M. Hoogendoorn, “Attentive Group Equivariant Convolutional Networks”, Proceedings of the IEEE International Conference on Machine Learning, pp. 8188-8199, 2020.
- I. Loshchilov and F. Hutter, “Decoupled Weight Decay Regularization”, Proceedings of International Conference on Learning Representations, pp. 1-15, 2018.
- L.N. Smith and N. Topin, “Super-Convergence: Very Fast Training of Neural Networks using Large Learning Rates”, Proceedings of International Conference on Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Vol. 11006, pp. 369-386, 2019.