- R. Ravichandran
- V. Vijayalakshmi
- S. Preetha
- R. V. Siva Balan
- K. Maheswari
- S. Manimegalai
- C. Sumithiradevi
- G. Satyavathy
- A. S. Naveenkumar
- D. Anitha
- P. Sumathi
- S. Sukumaran
- B. Rosiline Jeetha
- B. Shanmugapriya
- G. Selvavinayagam
- D. Hari Prasad
- R. Shanmugasundaram
- K. Anandakumar
- C. S. Vijayasri
- S. C. Punitha
- R. Ranga Raj
- M. Ramkumar
- R. Manikandan
- V. S. Akshaya
- Shanmugaraj Madasamy
- Software Engineering
- Networking and Communication Engineering
- Digital Image Processing
- Data Mining and Knowledge Engineering
- Artificial Intelligent Systems and Machine Learning
- AIRCC's International Journal of Computer Science and Information Technology
- International Journal of Advanced Networking and Applications
- ICTACT Journal on Image and Video Processing
Punithavalli, M.
- Application of SPC in CMM Level 3 Companies to Improve Process Capability with Respect to Effort Deviation and Schedule Deviation
Authors
1 PSG College of Technology, IN
2 Computer Applications Department, Sri Ramakrishna College of Engineering, Coimbatore, IN
Source
Software Engineering, Vol 4, No 11 (2012), Pagination: 473-482
Abstract
This study examined how the Statistical Process Control (SPC) technique can improve process capability in defined-level software companies, helping them achieve higher process maturity with respect to effort deviation and schedule deviation. Drawing on a literature review and data collected from the software industry, the study identifies the relative importance of SPC, software quality and the software process. SPC can be applied in defined-level companies to improve process maturity [1]. A case study was conducted in the present study to obtain practical evidence on the validity of this reasoning about statistical process control for achieving high process maturity in defined-level companies. Real-time data on effort deviation and schedule deviation were collected for the different phases of the project life cycle, considering projects in two different languages, namely VB.NET and ASP.NET. Projects were grouped by language for analysis, and control limits were constructed and defined separately for effort deviation and schedule deviation. The applicability of SPC through control charts for improving process capability with respect to effort deviation and schedule deviation has been validated on projects developed in the two languages.
Keywords
Statistical Process Control (SPC), Effort Deviation, Schedule Deviation, Process Maturity.
- Fault Detection in Testing - A Survey Approach
Authors
1 Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Software Engineering, Vol 1, No 7 (2009), Pagination: 224-230
Abstract
Software testing is an important but expensive process: about fifty percent of the total development cost is spent on it. Nevertheless, it is often the first phase that software developers cut when there is limited time to complete the project. Tests are commonly generated from program source code, graphical models of software (such as control flow graphs), and specifications/requirements. Creating test cases manually is a huge amount of work for software developers; it is time consuming and error prone. A solution that automatically generates test cases and test data can help software developers create test cases from software designs/models at an early stage of development (before coding). High quality software cannot be delivered without high quality testing, and heuristic techniques can be applied to create quality test data. A fault is defined as a textual problem with the code resulting from a mental mistake by the programmer or designer; a fault is also called a defect. Fault-based testing refers to the collection of information on whether classes of software faults (or defects) exist in a program. Since testing can only prove the existence of errors and not their absence, this is a very sound testing approach. In this paper we describe methods for evaluating faults and methods for characterizing faults.
Keywords
Software Testing, Fault, Test Cases.
- Object Oriented Software Architecture – A Survey Approach
Authors
1 Department of Computer Science and Information Technology Sri Krishna Arts and Science College, Coimbatore, IN
2 Department of Computer Applications, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Software Engineering, Vol 2, No 2 (2010), Pagination: 22-26
Abstract
The UML extension is based on design principles derived from characteristics of Multi-Agent Systems (MAS) and from concepts of software architecture that help in designing reusable and well-structured multi-agent architectures. The extension allows one to use the original object-oriented method without syntactic or semantic changes, which implies the preservation of OO productivity, i.e., the availability of developers and tools, the utilization of past experience and knowledge, and seamless integration with other systems. This survey discusses in more detail how architectures can be described and the uses of such descriptions. Much research so far has also been dedicated to methods and case studies, to make the research of practical interest. This survey describes how the quality of software can be ensured to a certain degree through informal approaches, not least because an architectural description provides a common understanding around which different stakeholders can meet and discuss a system. Formal approaches are also emerging, and there are a number of formal languages for describing a system's software architecture.
- Reusability of Interfaces for Component-Based Software Development
Authors
1 KGiSL Educational Institutions, Coimbatore, IN
2 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Software Engineering, Vol 1, No 2 (2009), Pagination: 78-83
Abstract
As organizations implement systematic software reuse programs to improve productivity and quality, they must be able to measure their progress and identify the most effective reuse strategies. This is done with reuse metrics and models. Reusability of interfaces has become a more generalized approach for application development; its main advantages are reduced development time, cost and effort, among several others. These advantages are mainly contributed by the reuse of already built software components. In order to realize the reuse of components effectively in interfaces, it is necessary to measure the reusability of components. The paper proposes several reusability metrics in terms of cost and productivity, such as reuse cost avoidance, reuse value added and additional development cost, which can be applied significantly to business applications. Component-based software development relies on reusable components in order to improve the quality and flexibility of products as well as to increase development productivity. This paradigm promotes deployment of reusable components as black-box units that can only work and communicate with one another through their well-defined interfaces. In this paper, understandability of component interfaces is considered a major quality affecting the reusability of software components. A set of metrics for measuring properties believed to be relevant to the understandability and reusability of software components is presented. Their usefulness and relevance are then analyzed based on data gathered from measurements of a variety of component interfaces. The paper concludes with some ideas for further research in this area.
Keywords
CBSD, Interface, Metrics, Reuse, Inheritance.
- An Implementation of Security in VoIP Using Modified Shamir's Secret Sharing Algorithm
Authors
1 Department of Computer Applications, SNR SONS College, Coimbatore, IN
2 Department of Computer Applications, Sri Ramakrishna Engineering College, Coimbatore, IN
Source
Networking and Communication Engineering, Vol 3, No 13 (2011), Pagination: 864-868
Abstract
A major change in the telecommunication industry is Voice over Internet Protocol (VoIP). VoIP offers interactive communications and differs from conventional circuit-switched networks, allowing people to communicate with each other at very low rates. The transmission of real-time voice data is not as easy as that of ordinary text data: real-time voice transmission faces many difficulties, suffering from packet loss, delay, security and quality problems. These factors affect the communication and degrade the performance and quality of VoIP. This paper addresses the security aspects of VoIP in order to improve quality. The modified Shamir's secret sharing algorithm is designed to improve security in terms of delay.
Keywords
Security, Quality, Telecommunication and VoIP.
- Cloud Computing: Practice of Efficient Approaches, Techniques and Challenges for Data Centers
Authors
1 Department of Computer Applications, V.S.B. Engineering College, Karur, Tamil Nadu, IN
2 Bharathiyar University, Tamil Nadu, IN
3 Department of Computer Applications, SNS Arts and Science College, Coimbatore, Tamil Nadu, IN
Source
Networking and Communication Engineering, Vol 3, No 5 (2011), Pagination: 285-288
Abstract
Cloud computing, a rapidly developing information technology, has attracted attention worldwide. Cloud computing is a virtualized pool of computing resources. It can manage a variety of different workloads, including batches of back-end operations and user-oriented interactive applications. There are many resources available in a data center and in the cloud that a client can purchase or rent, such as processing time, network bandwidth, disk storage, and memory. The users of the cloud do not need to know where the data center is, or have any expertise in how to operate or maintain the resources in the cloud. Clients only need to know how to connect to the resources and how to use the applications needed to perform their jobs. This article introduces the background and service model of cloud computing, and also introduces efficient approaches, techniques and challenges for data centers.
Keywords
Cloud Computing, Cloud Service, Data Centers, SaaS.
- LSB and DCT Based Steganography in Safe Message Routing and Delivery for Structured Peer-To-Peer Systems
Authors
1 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore-641 044, IN
Source
Networking and Communication Engineering, Vol 2, No 9 (2010), Pagination: 365-369
Abstract
In structured P2P systems, message delivery can be done by identifying the peer IDs of the individual systems. The initiator decides the destination and can route the message through one or more hops. The message passes from one hop to another by identifying the IP address and finally reaches the destination. In this paper we propose an efficient routing strategy to control the routing path and to identify malicious nodes. We also eliminate the drawbacks of encryption by introducing steganography into message delivery. This paper proposes a new steganographic encoding scheme which separates the colour channels of Windows bitmap images and then hides messages randomly in the LSB of one colour component of a chosen pixel, where the colour components of the other two are found to be equal to the selected key. In addition, we apply DCT-based steganography, which embeds the text message in the least significant bits of the Discrete Cosine Transform (DCT) coefficients of a digital picture. When information is hidden inside video, the program hiding the information usually performs the DCT. DCT works by slightly changing each of the images in the video, only to an extent that is not noticeable to the human eye. An implementation of both methods and their performance analysis is presented in this paper.
Keywords
Peer-To-Peer, Least Significant Bit (LSB), Discrete Cosine Transform (DCT), Steganography.
- Alternative Best Effort (ABE) Router (Providing a Low-Delay Service within Best Effort)
Authors
1 Department of Finance and Computer Application, S.N.R. Sons College, Coimbatore, Tamil Nadu, IN
2 Computer Science Department, Sri Ramakrishna College for women, Coimbatore, Tamil Nadu, IN
Source
Networking and Communication Engineering, Vol 1, No 8 (2009), Pagination: 479-487
Abstract
Applications that transfer binary data, such as bulk data transfer, seek to minimize the overall transfer time. This paper proposes a novel approach for Internet Protocol (IP) networks: an Alternative Best Effort (ABE) router that relies on the notion of providing low delay at the expense of less throughput. The main objective of the proposed approach is to retain the simplicity of the original single-class best-effort Internet service while providing low delay to interactive adaptive applications. With ABE, each best-effort packet is marked as either blue or green. The presence of green packets in every router ensures a low bounded delay. Based on the nature of the traffic and the global traffic conditions of a particular application, the colour of the packets is chosen at every router. At times of congestion, green packets are more likely to be dropped than blue packets. The proposed router requirements aim at enforcing benefits for all types of traffic: green packets achieve low delay, and blue traffic receives at least as much throughput as it would in a flat best-effort network. In addition, this paper discusses the ABE service, its properties, requirements and usage, as well as the implications of replacing the existing IP best-effort service with the ABE service.
Keywords
Data Transfer, Internet Protocols, Alternative Best Effort, Router, Packets, Throughput.
- A Survey of Secure Group Key Management
Authors
1 Sri Ramakrishna College of Arts and Science for Women, Coimbatore, Tamil Nadu, IN
2 Sri Ramakrishna College of Arts and Science for Women, Coimbatore, Tamil Nadu, IN
Source
Networking and Communication Engineering, Vol 1, No 7 (2009), Pagination: 422-429
Abstract
Group key management is an important functional building block for any secure multicast architecture. Many group-oriented and distributed applications need security services, including key management; such applications need a secure group key to communicate their data, which makes key distribution techniques important. Secure group communication has applications in multimedia conferencing, stock quote distribution, shared workspaces, distributed interactive simulation, grid computing, teleconferences, pay-per-view, multi-party games, etc. Some of these applications engage in one-to-many communication while others involve many-to-many communication. Several protocols have been proposed to support secure group key management. This paper focuses on several protocols designed for secure group key management, discussing each with a focus on key management.
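As a concrete illustration of the centralized (key distribution center) pattern that many of the surveyed protocols build on, the sketch below shows rekey-on-join/leave with O(n) rekey messages. It is a minimal sketch, not a protocol from the paper: the class and method names are illustrative, and the XOR "cipher" is a placeholder standing in for a real symmetric cipher such as AES.

```python
import os
import hashlib

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Illustrative XOR "cipher" keyed by a hash stream; NOT secure,
    # a stand-in for a real symmetric cipher in an actual KDC.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

class KDC:
    """Centralized key distribution: one shared group key, rekeyed on
    every membership change (forward/backward secrecy)."""
    def __init__(self):
        self.member_keys = {}          # member id -> long-term individual key
        self.group_key = os.urandom(32)
        self.rekey_msgs = {}

    def join(self, member: str) -> bytes:
        self.member_keys[member] = os.urandom(32)
        self.rekey()                   # newcomer cannot read old traffic
        return self.member_keys[member]

    def leave(self, member: str):
        del self.member_keys[member]
        self.rekey()                   # departed member locked out

    def rekey(self):
        self.group_key = os.urandom(32)
        # O(n) rekey messages: new group key wrapped under each member key.
        self.rekey_msgs = {m: xor_encrypt(k, self.group_key)
                           for m, k in self.member_keys.items()}
```

The O(n) cost per rekey is exactly what tree-based schemes covered by such surveys (e.g. logical key hierarchies) reduce to O(log n).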
Keywords
Group Key Management, Security, Secure Group Communication, Multicast.
- The Hybrid Architecture for the Secure Exchange of Data in E-Governance Applications
Authors
1 Department of Computer Science, PSG College of Arts & Science, Coimbatore, IN
2 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Networking and Communication Engineering, Vol 1, No 2 (2009), Pagination: 50-56
Abstract
This paper provides a design framework for the adoption of grid computing in e-governance applications, enabling citizens to make the best use of automated administration processes that are accessible online. In this paper, we illustrate the creation of a virtual environment by applying existing grid technologies to specific e-governance applications on distributed resources. Grid computing is an ideal solution for this type of application, and the paper presents how it can be used to handle such huge volumes of data effectively and efficiently. The paper also discusses the requirements of the clients of an advance reservation service and a distributed architecture for such a service, which has been tested using the grid simulation toolkit GridSim. The experimental results show that the new reservation algorithm can lead to significant performance gains in various applications.
Keywords
Advance Reservation, Scheduling, Heterogeneous System, Grid Simulation, Meta Schedule, Grid Service Provider, Task Graph and Runtime Estimates.
- A Survey on Fractals
Authors
1 Erode Arts College (Autonomous), Erode – 638 009, Tamil Nadu, IN
2 Sri Ramakrishna College of Arts & Science for Women, Coimbatore-641 044, Tamil Nadu, IN
Source
Digital Image Processing, Vol 1, No 2 (2009), Pagination: 35-41
Abstract
A fractal is an irregular and fragmented geometric shape that can be subdivided into parts, where each part appears the same at every scale. Fractal geometry and its concepts have become central tools in most of the natural sciences. Fractals are of interest to graphic designers and film makers for their ability to create new and exciting shapes and artificial but realistic worlds. Fractals may appear complex, but they can be developed from simple rules. Computer graphics has played an important role in the development and acceptance of fractal geometry. The generation of each fractal depends on the approach and algorithm used, so it is important to understand the properties of each type of fractal, as these influence the generation procedure. Fractals play a central role in realistically rendering and modeling natural phenomena in computer graphics. This paper discusses the evolution, classification and application of fractals. It also demonstrates methods to generate fractals.
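One of the simple generation rules alluded to above (and in the "Mandel Set" keyword below) is the escape-time iteration for the Mandelbrot set. A minimal sketch, with grid size and iteration cap chosen purely for illustration:

```python
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Escape-time rule: iterate z -> z*z + c from z = 0 and count the
    iterations until |z| exceeds 2 (guaranteed divergence); points that
    never escape within max_iter are treated as set members."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def render_ascii(width: int = 40, height: int = 20, max_iter: int = 50) -> str:
    """Map the rectangle [-2, 1] x [-1.2, 1.2] onto a character grid:
    '#' for points deemed inside the set, '.' for escaping points."""
    rows = []
    for j in range(height):
        im = 1.2 - 2.4 * j / (height - 1)
        row = ""
        for i in range(width):
            re = -2.0 + 3.0 * i / (width - 1)
            inside = mandelbrot_iterations(complex(re, im), max_iter) == max_iter
            row += "#" if inside else "."
        rows.append(row)
    return "\n".join(rows)
```

The iteration count for escaping points is what colour renderings of the set are built from.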
Keywords
Fractal, Fractal Dimension, Mandel Set, Julia Set, Self Similarity.
- On Fractal Dimension Estimation Methods
Authors
1 Erode Arts College (Autonomous), Erode–638009, Tamil Nadu, IN
2 Sri Ramakrishna College of Arts & Science for Women Coimbatore-641044, Tamil Nadu, IN
Source
Digital Image Processing, Vol 1, No 6 (2009), Pagination: 226-230
Abstract
Fractals are complex geometric figures made up of small-scale and large-scale structures that resemble one another. Fractal dimension is an effective measure for complex objects and is widely applied in the fields of image segmentation and shape recognition. There are a number of methods to estimate the fractal dimension. This paper contributes a comparative study of fractal dimension estimation methods in terms of effectiveness and accuracy. In this work, we have taken five main types of fractal dimension estimation methods and compared them.
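The best-known of the estimation methods compared (see the "Box-Counting" keyword below) fits the slope of log N(s) against log s, where N(s) is the number of occupied cells on an s-by-s grid. A minimal sketch for 2-D point sets in the unit square; the scale list is an illustrative choice:

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16, 32)):
    """Box-counting estimate of fractal dimension: for each grid scale s,
    count occupied s x s cells, then fit the slope of log N(s) versus
    log s by ordinary least squares. The slope is the dimension estimate."""
    logs, logn = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}  # occupied cells
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mean_s, mean_n = sum(logs) / n, sum(logn) / n
    num = sum((a - mean_s) * (b - mean_n) for a, b in zip(logs, logn))
    den = sum((a - mean_s) ** 2 for a in logs)
    return num / den
```

Sanity check: sampling a straight line should give an estimate close to 1, and a filled region close to 2.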
Keywords
Fractal, Fractal Dimension, Box-Counting, Mass Method, Dividers Method.
- An Emerging Classification Method for Huge Dataset in Clustering
Authors
1 School of Computer Studies (PG), RVS College of Arts and Science, Coimbatore, IN
2 Department of Computer Science, SNS Raja Lakshmi College of Arts and Science, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 3, No 10 (2011), Pagination: 599-601
Abstract
Clustering analysis is used to explore classification for large datasets, and the Canberra distance is generalized so that it can process data with categorical attributes. Based on the generalized Canberra distance, an instance of constraint-based clustering is introduced, and the nearest neighbor classification is improved: class-labeled clusters are regarded as classification models used for classifying data. The proposed classification method can discover data that differs greatly from the instances in the training data, which may indicate a new data type. The approach generalizes the Canberra distance from continuous numerical attributes to mixed attributes, uses clustering analysis to squash the existing instances, and improves the classical nearest neighbor classification method.
Keywords
ID3, C4.5, Canberra Distance, Clustering, Improved Nearest Neighbour.
- A Survey on Classification Methods Based on Decision Tree Algorithms in Data Mining
Authors
1 Bharathiar University, Coimbatore, IN
2 Department of Computer Science, SNS Raja Lakshmi College of Arts and Science, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 3, No 4 (2011), Pagination: 207-210
Abstract
Data mining resides at the junction of traditional statistics and computer science. As distinct from statistics, data mining is more about searching for hypotheses in data that happens to be available than about verifying research hypotheses by collecting data from designed experiments. Data mining is also characterized as being oriented toward problems with a large number of variables and/or samples, which makes scaling up algorithms important. This means developing algorithms with low computational complexity, using parallel computing, partitioning the data into subsets, or finding effective ways to use relational databases. The process- and utility-centered thinking in data mining and knowledge discovery is manifested also in reported commercial systems. Decision trees are considered one of the most popular approaches for representing classifiers, and researchers from various disciplines such as statistics, machine learning, pattern recognition, and data mining have considered the issue of growing a decision tree from available data. The technology for building knowledge-based systems with decision tree algorithms has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, describing the systems ID3, C4.5 and CART. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete.
Keywords
Decision Tree, ID3, C4.5 and CART.
- An Enhanced Projected Clustering Algorithm for High Dimensional Space
Authors
1 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
2 Department of Computer Science, Dr. SNS College of Arts and Science, Coimbatore, IN
3 Department of Computer Science and Engineering, Park College of Engineering & Technology, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 3, No 2 (2011), Pagination: 104-109
Abstract
Clustering is a data mining technique for identifying groups in a data set based on some similarity measure. Clustering high dimensional data has been a major challenge due to the inherent sparsity of the points, and most existing clustering algorithms become substantially inefficient if the required similarity measure is computed between data points in the full dimensional space. A number of projected clustering algorithms have been proposed to overcome this issue. This led to the development of a robust partitional, distance-based projected clustering algorithm derived from K-means, with the computation of distance restricted to subsets of attributes with dense object values. The algorithm is capable of detecting projected clusters of low dimensionality embedded in a high-dimensional space and avoids computing distances in the full-dimensional space. The algorithm has been demonstrated using synthetic and real datasets.
Keywords
Clustering, High Dimensional Data, Projected Cluster, K-Means Clustering, Subspace Clustering.
- A Survey on Clustering Algorithms
Authors
1 Department of Computer Applications, Sri Ramakrishna Institute of Technology, Coimbatore, IN
2 Department of Computer Science, Sri Ramakrishna Arts College for Women, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 2, No 2 (2010), Pagination: 28-32
Abstract
Clustering is a widely used technique to find interesting patterns dwelling in a dataset that would otherwise remain unknown. In general, clustering is a method of dividing data into groups of similar objects. One significant research area in data mining is developing methods that update knowledge by using existing knowledge, since this can generally augment mining efficiency, especially for very large databases. Data mining uncovers hidden, previously unknown, and potentially useful information from large amounts of data. This paper presents a general survey of various clustering algorithms. In addition, the paper describes the efficiency of the Self-Organized Map (SOM) algorithm in enhancing mixed data clustering.
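To make the SOM idea above concrete, here is a minimal 1-D self-organizing map sketch: a line of units whose weight vectors are pulled toward each sample, with the best-matching unit pulled hardest and its grid neighbours pulled less, under decaying learning rate and neighbourhood radius. The grid size, rates and radius are illustrative choices, not parameters from the paper.

```python
import math
import random

def train_som(data, n_units=10, epochs=30, lr0=0.5, radius0=3.0, seed=0):
    """Minimal 1-D Self-Organizing Map. Each of n_units grid positions
    holds a weight vector; for every sample, the best-matching unit (BMU)
    and its neighbours move toward the sample with a Gaussian falloff
    over grid distance. Returns the trained weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)
        for x in data:
            # BMU: unit with the smallest squared Euclidean distance to x
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D grid
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                weights[i] = [w + lr * h * (v - w)
                              for w, v in zip(weights[i], x)]
    return weights
```

After training on data drawn from two well-separated clusters, some units settle near each cluster, which is what makes the map usable for clustering and visualization.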
Keywords
Data Clustering, Data Mining, Mixed Data Clustering, Self-Organized Map Algorithm.
- A Survey on Data Clustering Algorithms
Authors
1 Department of Computer Science, Erode Arts & Science College, Erode, Tamil Nadu, IN
2 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 1, No 8 (2009), Pagination: 421-425
Abstract
Clustering is a significant area of application for a range of fields including data mining, statistical data analysis, image compression, and vector quantization. Moreover, clustering has been formulated in different manners in the machine learning, pattern recognition, optimization, and statistics literature. The basic problem in clustering arises in grouping together (clustering) data items which are analogous to each other. A variety of algorithms have emerged that meet these requirements and have been successfully applied to real-life data clustering problems. This paper makes a general survey of various clustering algorithms that have been proposed in the literature. In addition, the future enhancement section of this paper suggests some modifications of earlier proposed work to overcome its limitations.
Keywords
Clustering, Data Mining, Image Compression, Machine Learning, Optimization, Pattern Recognition, Statistical Data Analysis, Vector Quantization.
- Software Tool for Agent Based Distributed Data Mining
Authors
1 Computer Applications Department, Dr. SNS Rajalakshmi College of Arts and Science, Coimbatore, IN
2 Computer Science Department, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, IN
Source
Data Mining and Knowledge Engineering, Vol 1, No 1 (2009), Pagination: 33-39
Abstract
The main objective of this project is to illustrate the maximum utilization of available resources for data mining activities. Mining information and knowledge from huge data sources such as weather databases, financial data portals or emerging disease information systems has been recognized by industrial companies as an important area, with an opportunity for major revenue from applications such as business data warehousing, process control, and personalized on-line customer services over the Internet and web. Distributed data mining (DDM) performs partial analysis of the data at the clients and then sends the outcomes to the server, where they are sometimes aggregated into a global result. The primary issues to be considered for DDM are scalability, privacy of data and autonomy of data. These issues can be handled easily with intelligent software agents for distributed data mining, because of their inherent features of being autonomous and capable of adaptive and deliberative reasoning.
Keywords
Data Mining, Frequent Item Set, Distributed Data Mining.
- Steganography Technique for Secure Transmission of Secret Message or Image
Authors
1 Anna University, Chennai-600025, IN
2 Sri Ramakrishna Engineering College, Coimbatore-641022, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 5, No 12 (2013), Pagination: 505-508
Abstract
Steganography is a relatively new field of study in information technology which deals with information hiding. Steganography, or stego, literally means “covered writing”. Steganography is the art and science of communicating in a way which hides the existence of the communication. In contrast to cryptography, where the adversary is allowed to detect, intercept and modify messages without being able to violate certain security premises guaranteed by a cryptosystem, the goal of steganography is to camouflage messages in a way that does not allow any adversary even to detect the presence of secret messages. Steganographic research is primarily driven by the lack of strength in cryptographic systems alone and the desire to have complete secrecy in an open-systems environment. This paper presents a brief history of steganography and the various concepts and techniques of steganography that are used for the secure transmission of secret messages or images.
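One of the classic techniques such surveys cover is least-significant-bit (LSB) embedding. A minimal sketch over raw pixel bytes; the function names and the one-bit-per-byte layout are illustrative assumptions, not the paper's scheme:

```python
def embed_lsb(cover: bytes, message: bytes) -> bytes:
    """Hide message in the least significant bit of each cover byte:
    one message bit per cover byte (8 cover bytes per message byte).
    Flipping an LSB changes a pixel value by at most 1, which is
    imperceptible in typical images."""
    bits = [(byte >> (7 - i)) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # clear LSB, set message bit
    return bytes(stego)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Reassemble n_bytes message bytes from the LSBs, MSB-first."""
    out = bytearray()
    for b in range(n_bytes):
        value = 0
        for i in range(8):
            value = (value << 1) | (stego[b * 8 + i] & 1)
        out.append(value)
    return bytes(out)
```

A round trip recovers the message exactly while no stego byte differs from its cover byte by more than 1.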
Keywords
Steganography, Cryptography, Camouflage, Transmission.
- An Effective Cancer Classification Using Machine Learning Algorithms
Authors
1 Department of Computer Science, Dr. SNS Rajalakshmi College of Arts and Science, IN
2 Department of Computer Science, Sri Ramakrishna College of Arts and Science for Women, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 2, No 8 (2010), Pagination: 194-199
Abstract
In this paper, the recently developed Extreme Learning Machine (ELM) is used for direct multicategory classification problems in the cancer diagnosis area, applied to microarray gene expression data. The common problems faced by iterative learning methods, such as local minima, improper learning rates and overfitting, are avoided by ELM, and ELM completes training at a faster rate. We have evaluated the multicategory classification performance of ELM on three benchmark microarray data sets for cancer diagnosis, among them the Lymphoma data set. The results indicate that ELM produces comparatively better classification accuracies with reduced training time. The implementation complexity of ELM is much lower than that of artificial neural network methods such as conventional back-propagation ANN, Linder's SANN, and the support vector machine.
Keywords
ELM, ANOVA, Cancer Classification and Gene Expression.
- A Comparative Study to Find a Suitable Method for Text Document Clustering
Authors
1 Department of Computer Science, P.S.G.R. Krishnammal College for Women, Coimbatore, IN
2 Sri Ramakrishna College of Engineering, Coimbatore, IN
Source
AIRCC's International Journal of Computer Science and Information Technology, Vol 3, No 6 (2011), Pagination: 49-59
Abstract
Text mining is used in various text-related tasks such as information extraction, concept/entity extraction, document summarization, entity relation modeling (i.e., learning relations between named entities), categorization/classification and clustering. This paper focuses on document clustering, a field of text mining which groups a set of documents into a list of meaningful categories. The main focus of this paper is a performance analysis of the various techniques available for document clustering; the results of this comparative study can be used to improve existing text data mining frameworks and the way knowledge is discovered. The paper considers clustering techniques in three groups: Group 1 - K-means and its variants (traditional K-means and K-means algorithms), Group 2 - Expectation Maximization and its variants (traditional EM, the spherical Gaussian EM algorithm and Linear Partitioning and Reallocation clustering (LPR) using EM), and Group 3 - semantic-based techniques (the hybrid method and feature-based algorithms). A total of seven algorithms are considered, selected on the basis of their popularity in the text mining field. Several experiments were conducted to analyze the performance of the algorithms and to select a winner in terms of cluster purity, clustering accuracy and speed of clustering.
Keywords
Text Mining, Traditional K-Means, Traditional EM Algorithm, sGEM, HSTC Model, TCFS Method.
- Evaluation of Enhanced K-MEAN Algorithm to the Student Dataset
Authors
1 Department of Computer Science, Hindusthan College of Arts and Science, Coimbatore, IN
2 Department of Computer Studies, Sri Ramakrishna Engineering College, Coimbatore, IN
Source
International Journal of Advanced Networking and Applications, Vol 4, No 2 (2012), Pagination: 1578-1580
Abstract
Conventional database querying methods are inadequate for extracting useful information from huge data banks. Cluster analysis is one of the major data analysis methods, and the k-means clustering algorithm is widely used in many practical applications. In this paper, the enhanced k-means algorithm is applied to a huge student dataset to find the different categories and group the students accordingly.
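For context, the basic k-means procedure the enhanced algorithm builds on alternates between assigning each point to its nearest centroid and recomputing centroids as cluster means. A minimal sketch with made-up student records (attendance percentage and average mark); this is not the paper's enhanced variant:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means: assign points to the nearest centroid, recompute means."""
    centroids = X[:k].copy()  # simple deterministic initialisation for the demo
    for _ in range(iters):
        # distance from every point to every centroid, shape (n_points, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):   # converged: centroids stopped moving
            break
        centroids = new
    return labels, centroids

# Hypothetical student records: [attendance %, average mark]
students = np.array([[95.0, 88.0], [92.0, 91.0], [60.0, 45.0],
                     [55.0, 50.0], [75.0, 70.0], [78.0, 72.0]])
labels, centroids = kmeans(students, k=3)
```

On this toy data the three natural pairs of students end up in three separate groups; enhanced variants typically target exactly the weaknesses visible here, such as sensitivity to the initial centroids.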
Keywords
Clustering, K-Means Algorithm, Enhanced K-Means Algorithm.
- Classification of Cervical Cancer in Women Using Convolutional Neural Network
Authors
1 Department of Computer Science and Engineering, Gnanamani College of Technology, IN
2 Department of Computer Science, The Quaide Milleth College for Men, IN
3 Department of Mechanical Engineering, Rathinam Technical Campus, IN
4 Department of Computer Science and Engineering, Sri Eshwar College of Engineering, IN
5 Department of Computer Science, Cork Institute of Technology, IE
Source
ICTACT Journal on Image and Video Processing, Vol 11, No 4 (2021), Pagination: 2470-2474
Abstract
Cervical cancer is regarded as a serious threat to humanity globally; it is a vital disease in which a widely spreading virus affects human health. The virus spreads at a rapid rate through mosquitoes and may even kill those affected by cervical cancer. In this paper, we develop a quick-response system that finds the disease through a faster validation process. The study uses a Convolutional Neural Network (CNN) as a deep learning model that classifies and predicts the condition or infection status of a patient. A pre-processing model and a feature extraction model prepare the image datasets for classification. A simulation is conducted to validate the effectiveness of the model on cervical cancer image datasets, i.e. blood samples of humans. The validation shows that the proposed method classifies patients more quickly and effectively than other deep learning models.
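The classification pipeline a CNN follows (convolution, nonlinearity, pooling, then a linear readout over the flattened features) can be sketched in plain NumPy. This is not the paper's model; the image patch, filter and weights below are random stand-ins, with two hypothetical classes (infected / healthy):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that downsamples the feature map."""
    H, W = x.shape
    H, W = H - H % size, W - W % size
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

def forward(img, kernel, weights, bias):
    """Conv -> ReLU -> max-pool -> flatten -> linear class scores."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # ReLU feature map
    pooled = max_pool(feat).ravel()               # downsampled, flattened features
    return pooled @ weights + bias                # one score per class

rng = np.random.default_rng(0)
img = rng.random((8, 8))                # stand-in for a pre-processed cell-image patch
kernel = rng.normal(size=(3, 3))        # one filter (learned in a real CNN, random here)
weights = rng.normal(size=(3 * 3, 2))   # 6x6 conv output pools to 3x3 = 9 features
scores = forward(img, kernel, weights, np.zeros(2))
predicted = int(scores.argmax())        # index of the winning class
```

A real CNN stacks many such filters and layers and learns them by back-propagation; the sketch only shows why the pre-processing and feature-extraction stages mentioned in the abstract feed naturally into a final classifier.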
Keywords
Machine Learning, Cervical Cancer, Classification, Diagnosis.
References
- M. Guzman and G. Kouri, “Dengue and Dengue Hemorrhagic Fever in the Americas: Lessons and Challenges”, Journal of Clinical Virology, Vol. 27, No. 1, pp. 1-13, 2003.
- S. Kannan and S.N. Mohanty, “Survey of Various Statistical Numerical and Machine Learning Ontological Models on Infectious Disease Ontology”, Proceedings of International Conference on Data Analytics in Bioinformatics: A Machine Learning Perspective, pp. 431-442, 2021.
- N. Kousik, A. Kallam, R. Patan and A.H. Gandomi, “Improved Salient Object Detection using Hybrid Convolution Recurrent Neural Network”, Expert Systems with Applications, Vol. 166, pp. 114064-114075, 2021.
- N.V. Kousik, “Analyses on Artificial Intelligence Framework to Detect Crime Pattern”, Proceedings of International Conference on Intelligent Data Analytics for Terror Threat Prediction: Architectures, Methodologies, Techniques and Applications, pp. 119-132, 2021.
- K. Srihari, S. Chandragandhi, G. Dhiman and A. Kaur, “Analysis of Protein-Ligand Interactions of SARS-Cov-2 Against Selective Drug using Deep Neural Networks”, Big Data Mining and Analytics, Vol. 4, No. 2, pp. 76-83, 2021.
- K.M. Baalamurugan and S.V. Bhanu, “An Efficient Clustering Scheme for Cloud Computing Problems using Metaheuristic Algorithms”, Cluster Computing, Vol. 22, No. 5, pp. 12917-12927, 2019.
- T. Karthikeyan and K. Praghash, “An Improved Task Allocation Scheme in Serverless Computing Using Gray Wolf Optimization (GWO) Based Reinforcement Learning (RIL) Approach”, Wireless Personal Communications, Vol. 80, No. 7, pp. 1-19, 2020.
- J.L. San Martín, J.O. Solorzano and M.G. Guzman, “The Epidemiology of Dengue in the Americas over the Last Three Decades: A Worrisome Reality”, American Journal of Tropical Medicine and Hygiene, Vol. 82, No. 1, pp. 128-135, 2010.
- K. Srihari, G. Dhiman, K. Somasundaram and M. Masud, “Nature-Inspired-Based Approach for Automated Cyberbullying Classification on Multimedia Social Networking”, Mathematical Problems in Engineering, Vol. 2021, pp. 1-18, 2021.
- D.A. Thitiprayoonwongse, P.R. Suriyaphol and N.U. Soonthornphisaj, “Data Mining of Dengue Infection using Decision Tree”, Proceedings of International Conference on Latest Advances in Information Science and Applications, pp. 1-14, 2012.
- V. Nandini and R. Sriranjitha, “Dengue Detection and Prediction System using Data Mining with Frequency Analysis”, Proceedings of International Conference on Computer Science and Information Technology, pp. 1-12, 2016.
- G. Li, X. Zhou and J. Liu, “Comparison of Three Data Mining Models for Prediction of Advanced Schistosomiasis Prognosis in the Hubei Province”, PLoS Neglected Tropical Diseases, Vol. 12, pp. 1-22, 2018.
- V. Chang, B. Gobinathan, A. Pinagapani and S. Kannan, “Automatic Detection of Cyberbullying using Multi-Feature Based Artificial Intelligence with Deep Decision Tree Classification”, Computers and Electrical Engineering, Vol. 92, pp. 107186-107198, 2021.
- A. Shukla, G. Kalnoor and A. Kumar, “Improved Recognition Rate of Different Material Category using Convolutional Neural Networks”, Materials Today: Proceedings, Vol. 78, No. 1, pp. 1-5, 2021.
- S. Kannan, G. Dhiman and M. Gheisari, “Ubiquitous Vehicular Ad-Hoc Network Computing using Deep Neural Network with IoT-Based Bat Agents for Traffic Management”, Electronics, Vol. 7, No. 1, pp. 785-793, 2021.
- J. Gowrishankar, T. Narmadha and M. Ramkumar, “Convolutional Neural Network Classification On 2d Craniofacial Images”, International Journal of Grid and Distributed Computing, Vol. 13, No. 1, pp. 1026-1032, 2020.
- A. Khadidos, A.O. Khadidos and G. Tsaramirsis, “Analysis of COVID-19 Infections on a CT Image using DeepSense Model”, Frontiers in Public Health, Vol. 8, pp. 1-12, 2020.
- P. Siriyasatien, A. Phumee, P. Ongruk, K. Jampachaisri and K. Kesorn, “Analysis of Significant Factors for Dengue Fever Incidence Prediction”, BMC Bioinformatics, Vol. 17, No. 166, pp. 1-22, 2016.
- P. Vivekanandan, “An Efficient SVM Based Tumor Classification with Symmetry Non-Negative Matrix Factorization using Gene Expression Data”, Proceedings of International Conference on Information Communication and Embedded Systems, pp. 761-768, 2013.
- A. Daniel and K.M. Baalamurugan, “A Novel Approach to Minimize Classifier Computational Overheads in Big Data using Neural Networks”, Physical Communication, Vol. 42, pp. 101130-101135, 2020.
- K.M. Baalamurugan and S.V. Bhanu, “A Multi-Objective Krill Herd Algorithm for Virtual Machine Placement in Cloud Computing”, The Journal of Supercomputing, Vol. 76, No. 6, pp. 4525-4542, 2020.
- I. Kononenko, “Machine Learning for Medical Diagnosis: History, State of the Art and Perspective,” Artificial Intelligence in Medicine, Vol. 23, No. 1, pp. 89-109, 2001.
- D. Raval, D. Bhatt, M.K. Kumar, V. Parikh and D. Vyas, “Medical Diagnosis System using Machine Learning”, International Journal of Computer Science and Communication, Vol. 7, No. 1, pp. 177-182, 2016.
- M. Umar, D. Babu, K.M. Baalamurugan and P. Singh, “Automation of Energy Conservation for Nodes in Wireless Sensor Networks”, International Journal of Future Generation Communication and Networking, Vol. 13, No. 3, pp. 1-12, 2020.
- C. Saravanabhavan, T. Saravanan, D.B. Mariappan, S. Nagaraj and K.M. Baalamurugan, “Data Mining Model for Chronic Kidney Risks Prediction Based on Using NB-CbH”, Proceedings of IEEE International Conference on Advance Computing and Innovative Technologies in Engineering, pp. 1023-1026, 2021.