
Development of Security Detection Model for the Security of Social Blogs and Chatting from Hostile Users


Affiliations
1 Jaypee Institute of Information Technology, Noida, India
2 CEAS, University of Cincinnati, Cincinnati, United States
 

Worldwide, a large number of people interact with each other through online chatting. There has been a significant rise in the number of platforms, both social and professional, such as WhatsApp, Facebook, and Twitter, which allow people to share their experiences, views, and knowledge with others. Sadly, as online communication becomes embedded in our daily lives, incivility and misbehaviour have taken on many nuances, ranging from unprofessional conduct to professional decay. Flaming generally starts with an exchange of rude messages and comments, which in turn escalates into more intense flaming. To prevent online communication from being degraded, it is essential to keep hostile users away from communication platforms. This paper presents a Security Detection Model and a tool that checks for and prevents online flaming. The tool detects flaming while users chat or post blogs, censors swear words, and blocks users who flame.
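The abstract describes three operations: detecting censor words in a message, masking them, and blocking a user whose flame count crosses a threshold. As an illustration only (the paper's actual word list, intensity measure, threshold, and function names are not given here; everything below is hypothetical), the core loop of such a tool might look like:

```python
import re

# Hypothetical censor list and blocking threshold; the paper's
# actual values are not reproduced here.
CENSOR_WORDS = {"idiot", "stupid", "moron"}
BLOCK_THRESHOLD = 3  # block a user once this many words are flagged

def censor(message, counts, user):
    """Mask censor words, accumulate the user's flame count,
    and report whether the user should now be blocked."""
    tokens = re.findall(r"\w+|\W+", message)  # keep punctuation/spacing
    flagged = 0
    out = []
    for t in tokens:
        if t.lower() in CENSOR_WORDS:
            out.append("*" * len(t))  # replace the word with asterisks
            flagged += 1
        else:
            out.append(t)
    counts[user] = counts.get(user, 0) + flagged
    blocked = counts[user] >= BLOCK_THRESHOLD
    return "".join(out), blocked

counts = {}
text, blocked = censor("You are an idiot", counts, "u1")
# text == "You are an *****"; blocked is False after one flagged word
```

A real deployment would also notify the moderator and vary the response with flame intensity, as the keywords suggest; this sketch only shows the censor-and-count step.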

Keywords

Information Technology (IT), Flame Detector Tool, Censor Words, Flaming, Flame Intensity, Notification, Social Platform, Security Detection Model.

References

  • Gupta, S. (2017). Detection and Elimination of Censor Words on Online Social Media, International Journal of Computer Science and Information Technologies (IJCSIT), vol. 8(5), pp. 545–547.
  • Steele, G., Woods, D., Finkel, R., Crispin, M., Stallman, R., and Goodfellow, G. (1983). The Hacker's Dictionary.
  • Siegel, J., Dubrovsky, V., Kiesler, S. and McGuire, T.W. (1986). Group processes in computer-mediated communication, Organizational Behavior and Human Decision Processes, vol. 37, pp. 157–187.
  • Zuckerberg, M. (2010). 500 million stories, South Atlantic Quarterly, vol. 92, pp. 559–568.
  • Verma, R. and Nitin (2015). On security negotiation model developed for the security of the social networking sites from the hostile user, Issues in Information Systems, vol. 16, Issue II, pp. 1-15.
  • Friedman, R.A. and Currall, S.C. (2003). Conflict escalation: Dispute exacerbating elements of e-mail communication conflict, Human Relations, vol. 56(11), pp. 1325–1347.
  • Harrison, T.M. and Falvey L. (2002). Democracy and new communication technologies, Communication Yearbook, vol. 25, pp. 1-33.
  • Landry, E.M. (2000). Scrolling around the new organization: The potential for conflict in the on-line environment, Negotiation Journal, vol. 16(2), pp. 133-142.
  • Markus, M.L. (1994). Finding a happy medium: Explaining the negative effects of electronic communication on social life at work, ACM Transactions on Information Systems, vol. 12(2), pp. 119-149.
  • Moore D.A., Kurtzberg T.R., Thompson, L.L. and Morris, M.W. (1999). Long and short routes to success in electronically mediated negotiations: Group affiliations and good vibrations, Organizational Behavior and Human Decision Processes, vol. 77(1), pp. 22-43.
  • O'Sullivan, P.B. and Flanagin, A.J. (2003). Reconceptualizing "flaming" and other problematic messages, New Media & Society, vol. 5(1), pp. 69–94.
  • Boyd D.M. and Ellison N.B. (2007). Social network sites: Definition, history, and scholarship, Journal of Computer Mediated Communication, vol. 13(1), article 11.
  • Zhang, H., Lu, Y., Gupta, S., and Zhao, L. (2014). What motivates customers to participate in social commerce? The impact of technological environments and virtual customer experiences, Information and Management, vol. 51(8), pp. 1017–1030.
  • Alghaith, W. (2015). Applying the Technology Acceptance Model to understand Social Networking Sites (SNS) usage: Impact of perceived Social Capital, International Journal of Computer Science & Information Technology (IJCSIT), Vol 7, No. 4.
  • Poll (2017). How many chat apps do you use?, https://www.theverge.com/2017/4/14/15298732/messaging-apps-poll, April 14, 2017.
  • Abozinadah, E. A., and Jones, J. H. J. (2016). Improved micro-blog classification for detecting abusive Arabic Twitter accounts, International Journal of Data Mining & Knowledge Management Process (IJDKP), vol. 6, no. 6, pp. 17–27, DOI: 10.5121/ijdkp.2016.6602.
  • Abozinadah, E. A., Mbaziira, A. V., and Jones, J. H. J. (2015). Detection of Abusive Accounts with Arabic Tweets, Int. J. Knowl. Eng.-IACSIT, vol. 1, no. 2, pp. 113–119.


Authors

Shubhankar Gupta
Jaypee Institute of Information Technology, Noida, India
Nitin
CEAS, University of Cincinnati, Cincinnati, United States
