
A Proposed Technique for Automatic Recognition of Human Activities


 

Although computer vision is employed in a wide variety of applications, the accurate and efficient recognition of human behaviour remains a challenging problem in computer vision research. Recent work has concentrated on narrower sub-problems, such as human action recognition from depth data, 3D skeleton data, and RGB image data, spatiotemporal interest-point methods, and the identification of human activity. Despite this, no systematic survey of human behaviour recognition has been conducted. To that end, we present a comprehensive review of human action recognition methods, covering advances in hand-crafted action features for RGB and depth data, established deep-learning representations of action features, progress in human–object interaction recognition, and prominent current methods based on depth information.



Keywords

Action detection; action feature; human action recognition; human–object interaction recognition; systematic survey.


Authors

Snehal Nirmal
Research Scholar, Kavayitri Bahinabai Chaudhari North Maharashtra University, Jalgaon, India
Kalpana M. Gholap
Assistant Professor & Head (Commerce), JETs Zulal Bhilajirao Patil College, Dhule, India
Yogesh V. Torawane
Assistant Professor & Head (Commerce), KES's Pratap College, Amalner, India






DOI: https://doi.org/10.21904/weken%2F2022%2Fv7%2Fi1%2F170784