Askarunisa, A.
- A Proposal for the Semantic based Report Generation of Related HTML Documents
Authors
1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
2 Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
Source
Software Engineering, Vol 3, No 9 (2011), Pagination: 418-421
Abstract
Today, most web pages are written in HTML. A great deal of data exists in them, yet there are few ways to generate reports from multiple related HTML pages. For example, information about an individual may be stored in HTML pages, but there is no way to collect a report about all people for a particular piece of information; most of the time this is done manually. This paper proposes a semantic-based approach for generating reports from HTML pages using semantic technologies such as OWL, RDF and SPARQL. As a first step, the required HTML pages are navigated and information is collected from their tables and lists. The data is pre-processed and formatted into a CSV file so that further processing becomes easier. OWL files are created for the corresponding domain and act as a dictionary for the application. The CSV contents are separated based on the OWL files and the rules, stored in RDF format, and queried with SPARQL. The proposed model can thus be a handy tool for management to generate reports readily, without spending much manual time.
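The first step of the pipeline described above, collecting table data from HTML pages into CSV rows, might be sketched as follows. This is a minimal standard-library illustration; the sample markup and field names are assumptions, not the authors' code.

```python
import csv
import io
from html.parser import HTMLParser

class TableCollector(HTMLParser):
    """Collects the text of every <td>/<th> cell, one row per <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

html_page = ("<table><tr><th>Name</th><th>Dept</th></tr>"
             "<tr><td>Anitha</td><td>IT</td></tr></table>")
collector = TableCollector()
collector.feed(html_page)

# Write the collected rows out as CSV for the later OWL/RDF steps.
buf = io.StringIO()
csv.writer(buf).writerows(collector.rows)
```

The resulting CSV would then be split against the domain OWL vocabulary and serialized as RDF, as the abstract outlines.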
Keywords
RDF, OWL, SPARQL, HTML Reports.

- Test Case Prioritization of Composite Web Service Based on Ontology
Authors
1 Thiagarajar College of Engineering, Department of Computer Science, Madurai, IN
2 Computer Science Department in Thiagarajar College of Engineering, Madurai, IN
3 Thiagarajar College of Engineering, Department of Information Technology, Madurai, IN
Source
Software Engineering, Vol 3, No 1 (2011), Pagination: 9-15
Abstract
Web services are the basic building blocks of business applications and differ from web applications. Testing web services is difficult and costly because the source code is unavailable. In previous work, atomic web services were tested based on their syntactic structure using the Web Service Description Language (WSDL). This paper proposes an automated testing framework for composite web services based on semantics, where the domain knowledge of the web services is described with the Protégé tool [4] and the behaviour of the entire business operation flow of the composite web service is provided by the Ontology Web Language for Services (OWL-S) [1]. Test cases are prioritized based on various coverage criteria for composite web services. A series of experiments was conducted to assess the effect of prioritization on coverage values, and the benefits of the prioritization techniques were established.
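The coverage-based prioritization mentioned above can be pictured as a greedy "additional coverage" ordering, a standard heuristic rather than necessarily the authors' exact criterion; the test ids and coverage items below are invented for illustration.

```python
def prioritize_by_additional_coverage(tests):
    """Order tests so each next test adds the most not-yet-covered items.

    `tests` maps a test id to the set of coverage items (e.g. OWL-S
    operations or flow branches) that the test exercises.
    """
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # Pick the test contributing the most new coverage; ties break by id.
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {"op1", "op2"},
    "t2": {"op2", "op3", "op4"},
    "t3": {"op1"},
}
order = prioritize_by_additional_coverage(tests)
```

Here "t2" runs first because it covers three previously uncovered operations, after which the remaining tests each add one.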
Keywords
Composite Web Services, Ontology Web Language for Services, Protégé, Test Case Prioritization.

- Comparison of the Features of GUI Testing Tools
Authors
1 Thiagarajar College of Engineering, IN
Source
Software Engineering, Vol 2, No 5 (2010), Pagination: 79-88
Abstract
Testing software manually is a labour-intensive process; efficient automated testing significantly reduces the overall cost of software development and maintenance. GUI test automation is a major challenge within test automation, and many automated tools are available on the market for testing various kinds of GUI application. This paper analyses several GUI tools against specific features so that a tester can choose an appropriate tool for his requirements and test GUI applications efficiently. It also proposes a GUI automation testing technique for GUI-based Java programs as an alternative to the capture/replay (CR) technique. The technique defines a GUI-event test specification language for applications written with the Java Swing APIs, which drives an automated test engine; a visual editor helps in viewing the test runs. The test engine generates GUI events and captures event responses to verify the results of the test cases automatically, covering the test case generation, execution and verification modules. Testing efficiency is measured with a code-coverage metric, which may also be useful during regression testing. The paper uses the Abbot and JUnit tools for test case generation and execution and the Clover tool for code coverage. We have performed tests on various GUI applications, and the efficiency of the technique is reported.
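The code-coverage metric used above to judge testing efficiency amounts to the fraction of executable statements a test run exercises; a minimal sketch (the line numbers are a hypothetical trace, not output of Clover):

```python
def statement_coverage(executed_lines, executable_lines):
    """Fraction of executable statements exercised by a test run."""
    executable = set(executable_lines)
    covered = set(executed_lines) & executable
    return len(covered) / len(executable)

# Hypothetical trace: lines hit by GUI-event test cases vs. lines in the app.
cov = statement_coverage(executed_lines={1, 2, 5, 8},
                         executable_lines=range(1, 11))
```

A coverage tool such as Clover reports exactly this kind of ratio, per file and per test suite.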
Keywords
Abbot, Capture Replay, Code Coverage, GUI Testing, Unit Testing.

- Test Case Generation and Prioritization for Semantic Based Web Services Using Orthogonal Array Testing Technique
Authors
1 Department of Computer Science and Engineering, Thiagarajar College of Engineering, IN
2 Department of Information Technology, Thiagarajar College of Engineering, IN
3 GKM College of Engineering, IN
Source
Software Engineering, Vol 2, No 5 (2010), Pagination: 89-99
Abstract
Web Services (WS) are the basic building blocks of e-business applications. They provide an efficient reusability mechanism, thereby reducing development time and cost. A web service is identified by a Uniform Resource Identifier (URI), and its interfaces and bindings can be discovered, defined and described as XML artifacts according to the Web Service Description Language (WSDL). WSDL describes web service operations, including inputs, outputs and exceptions, but it cannot express the pre- and post-conditions of a web service; Semantic WSDL (WSDL-S) captures these pre- and post-conditions, enabling an optimal number of test cases to be generated. This paper presents an approach for generating web service test cases using WSDL-S and the Object Constraint Language (OCL), with Orthogonal Array Testing (OAT) as the test case generation technique. We generated the WSDL of the web service under test using the NetBeans IDE and converted it into WSDL-S by adding OCL references in which pre- and post-conditions are defined. Test data with different factors, levels and strengths are generated using OAT, documented in XML-based test files called Web Service Test Specifications (WSTS), and executed. Test cases are prioritized based on criteria such as statement coverage, condition coverage, execution time and fault rate. We have tested various web service applications, and the results show that prioritization based on fault rate detects faults earlier, as confirmed by the Average Percentage of Fault Detection (APFD) metric.
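To make the OAT step concrete, here is the smallest orthogonal array, L4(2^3), used to derive test data for three two-level factors; the service parameters are invented examples, not the paper's subjects.

```python
# L4(2^3) orthogonal array: 4 runs, 3 two-level factors; every pair of
# columns contains each of the 4 level combinations exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def oat_test_data(factors):
    """Map factor levels through the array to get concrete test inputs.

    `factors` is a list of (name, [level0, level1]) pairs, one per column.
    """
    runs = []
    for row in L4:
        runs.append({name: levels[row[col]]
                     for col, (name, levels) in enumerate(factors)})
    return runs

# Hypothetical parameters of a web service operation under test.
factors = [
    ("currency", ["INR", "USD"]),
    ("amount", [0, 9999]),
    ("express", [False, True]),
]
runs = oat_test_data(factors)
```

Four runs thus cover all pairwise level combinations, instead of the 8 runs exhaustive testing would need; larger arrays handle more factors, levels and strengths the same way.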
Keywords
Web Services Testing, Semantics, Test Case Generation, Orthogonal Testing, Test Case Prioritization, APFD, FDD, FDE Metrics.

- Test Suite Minimization Using Selective Redundancy
Authors
1 Thiagarajar College of Engineering, IN
2 GKM College of Engineering, IN
Source
Software Engineering, Vol 1, No 4 (2009), Pagination: 140-146
Abstract
Software testing is the process of checking the correctness of software; it exposes defects and prevents the losses they would cause. Test suites may grow significantly as the software is modified over time. Because regression testing is constrained by time and resources, test suite minimization techniques remove test cases that have become redundant, that is, test cases whose covered requirements are also covered by other test cases in the suite. In this paper, we present an approach to test suite reduction that selectively keeps redundant tests in the reduced suites. Our approach can significantly improve the fault detection effectiveness of reduced suites without severely affecting the extent of size reduction. We evaluated it on fifteen Java programs, and our study shows that, although it retains more test cases than the HGS algorithm, our algorithm achieves comparatively higher fault detection effectiveness.
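The baseline such work starts from is greedy set-cover-style minimization; the sketch below shows that baseline (in the spirit of HGS, not the selective-redundancy variant the paper contributes), with invented test ids and requirements.

```python
def greedy_minimize(suite):
    """Greedy test-suite reduction: repeatedly keep the test that covers
    the most still-unsatisfied requirements, until all are satisfied.

    `suite` maps a test id to the set of requirements it covers.
    """
    unmet = set().union(*suite.values())
    reduced = []
    while unmet:
        best = max(sorted(suite), key=lambda t: len(suite[t] & unmet))
        reduced.append(best)
        unmet -= suite[best]
    return reduced

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r3"},
}
reduced = greedy_minimize(suite)
```

Here "t3" is dropped as redundant; the paper's approach would instead decide, per redundant test, whether keeping it is likely to preserve fault detection.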
Keywords
Test Suite, Test Suite Minimization, Selective Redundancy, Regression Testing.

- Hybrid Prioritization of User-Session Based Test Cases for Web Application Testing
Authors
1 Thiagarajar College of Engineering, IN
2 GKM College of Engineering, IN
Source
Software Engineering, Vol 1, No 3 (2009), Pagination: 90-96
Abstract
Increased use of web-based applications by business, government and consumers for their daily operations has created a need for reliable, well-tested web applications. A short time to market, a large user community, demand for continuous availability, and frequent updates all motivate cost-effective testing strategies. One promising approach to testing the functionality of web applications leverages user-session data collected by web servers. This approach, called user-session based testing, avoids the problem of generating artificial test cases by capturing real user interactions, rather than tester interactions, and treating the user sessions as representative of user behaviour. Regression testing techniques such as test case selection and test case reduction may discard test cases and so lead to incomplete testing. To overcome this disadvantage, we adopt test case prioritization, which orders the test cases according to different criteria. In this paper, we propose several test suite prioritization strategies for web applications and examine whether they can improve the rate of fault detection. Our experimental results show that the proposed prioritization techniques improve the rate of fault detection of the test suites when compared to other techniques.
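A hybrid session-prioritization strategy of the kind described might blend two criteria into one score, for example the number of unique pages a session requests and its overall length; the weights and session data below are illustrative assumptions, not the paper's.

```python
def hybrid_priority(sessions, w_unique=0.7, w_length=0.3):
    """Rank user sessions by a weighted blend of two criteria:
    unique pages requested and total session length (higher first).
    """
    def score(s):
        reqs = sessions[s]
        return w_unique * len(set(reqs)) + w_length * len(reqs)
    return sorted(sessions, key=score, reverse=True)

# Hypothetical user sessions captured from web server logs.
sessions = {
    "s1": ["/login", "/cart", "/pay"],
    "s2": ["/login", "/login", "/login", "/login"],
    "s3": ["/login", "/cart"],
}
ranked = hybrid_priority(sessions)
```

Session "s1" runs first: it touches the most distinct parts of the application, which is what drives early fault detection.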
Keywords
Test Management, Web Application Testing, Test Cases, Test Case Prioritization.

- Cost and Coverage Based Test Case Prioritization
Authors
1 Thiagarajar College of Engineering, Madurai, IN
2 Thiagarajar College of Engineering, Madurai, IN
Source
Software Engineering, Vol 1, No 1 (2009), Pagination: 7-17
Abstract
Test case prioritization techniques schedule test cases for execution in an order that attempts to maximize some objective function. A variety of objective functions are applicable; one such function involves the rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during regression testing provides faster feedback on the system under test and lets debuggers begin their work earlier than might otherwise be possible. In this paper, we describe several techniques for prioritizing test cases and report our results measuring their effectiveness at improving the rate of fault detection. We compute two categories of metrics: fault-criterion based and coverage-criterion based. Metrics such as the Average Percentage of Fault Detection (APFD) and the cost-cognizant Average Percentage of Fault Detection (APFDc) fall in the first category, while the Average Percentage of Statement Coverage (APSC), Average Percentage of Branch Coverage (APBC), Average Percentage of Loop Coverage (APLC) and Average Percentage of Condition Coverage (APCC) fall in the second. Test cases were executed with the JUnit tool, and the CodeCover tool was used to collect code coverage information. Test case prioritization is then performed based on coverage and cost information. The results provide insights into the trade-offs among the various techniques.
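The APFD metric named above has a standard closed form: for n tests and m faults, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test exposing fault i. A small sketch, with an invented fault matrix:

```python
def apfd(ordering, faults_of):
    """Average Percentage of Faults Detected for a test ordering.

    `faults_of` maps each test id to the set of faults it exposes.
    """
    n = len(ordering)
    all_faults = set().union(*(faults_of[t] for t in ordering))
    m = len(all_faults)
    first_pos = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in faults_of[test]:
            first_pos.setdefault(fault, pos)   # keep the earliest position
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

faults_of = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set()}
good = apfd(["t2", "t1", "t3"], faults_of)   # both faults hit by test 1
poor = apfd(["t3", "t1", "t2"], faults_of)   # f2 not hit until test 3
```

An ordering that fronts fault-revealing tests scores higher, which is exactly the property the prioritization techniques are compared on.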
Keywords
Regression Testing, Code Coverage, Test Case Prioritization, Mutation Faults, Average Percentage of Fault Detection (APFD).

- Database Test Management Made Effective Through Metrics
Authors
1 Anna University, IN
2 GKM Engineering College, Chennai, IN
Source
Software Engineering, Vol 1, No 1 (2009), Pagination: 18-29
Abstract
Database systems play an important role in the majority of software applications. They are becoming increasingly complex and are subject to constant change, which makes test management of a database application a very complex task; the growth in both application complexity and reliability expectations places great demands on database testing activities. This paper deals with managing the process of testing a database through automated testing, which ensures that test management time is not wasted and aids better decision-making. We propose an effective automated test framework that manages the testing of database applications, thereby reducing resource costs such as people, money and time during the test process. The framework also ensures quality in the test management process by reducing manual work. We measure the effectiveness of the test process through various metrics that enhance the quality of the process. We have applied the test process to six different database applications and computed their effectiveness through these metrics.
Keywords
Test Management, Database Testing, Test Cases, Coverage Tree Metrics, Command Form Metrics.

- An Enhanced Method for Efficient Information Retrieval from Resume Documents Using SPARQL
Authors
1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
2 Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
Source
Data Mining and Knowledge Engineering, Vol 4, No 1 (2012), Pagination: 5-10
Abstract
It is important to retrieve information from documents of various types, such as DOC and HTML, that contain vital information to be preserved and used in the future. Information retrieval from these documents is mostly a manual effort; although search algorithms perform such retrieval, they may not be as accurate as the user expects. Moreover, documents such as candidates' resumes cannot be stored directly in a relational database because they contain too many fields, so much manual effort goes into analysing resumes to select the candidates who satisfy specific criteria. To minimize this manual effort and obtain results faster, this paper proposes the use of Semantic Web technologies such as OWL, RDF and SPARQL to retrieve information from the documents efficiently. As a first step, an ontology is created for the required domain. Based on the fields or tags in the OWL file, the user is given a form for his personal and academic details, and these data are converted into an RDF/XML document. The RDF files are retrieved and grouped by category; a query is entered and the relevant records are retrieved from the RDF documents using SPARQL. SPARQL is an RDF query language that enables faster and more efficient search than XML query languages such as XPath and XQuery. A comparison between SPARQL and XPath in terms of record retrieval time is also analysed in this paper.
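The retrieval step can be pictured with a toy triple store and a SPARQL-like filtered match; this is a pure-Python stand-in for an RDF engine, and the predicate names and resume data are invented.

```python
# Toy RDF-style triples: (subject, predicate, object).
triples = [
    ("resume:1", "hasName", "Kavitha"),
    ("resume:1", "hasDegree", "B.E."),
    ("resume:1", "hasCGPA", 8.2),
    ("resume:2", "hasName", "Ramesh"),
    ("resume:2", "hasDegree", "B.E."),
    ("resume:2", "hasCGPA", 6.9),
]

def select(triples, predicate, test):
    """Return subjects whose value for `predicate` satisfies `test` —
    roughly what a SPARQL SELECT with a FILTER clause expresses, e.g.:
        SELECT ?r WHERE { ?r :hasCGPA ?c . FILTER(?c >= 7.5) }
    """
    return [s for s, p, o in triples if p == predicate and test(o)]

shortlist = select(triples, "hasCGPA", lambda cgpa: cgpa >= 7.5)
```

A real RDF store evaluates such patterns over indexed triples, which is why the paper finds SPARQL faster than path-walking a document with XPath.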
Keywords
RDF, OWL, SPARQL, Document Filter, Information Retrieval.

- An Approach for Improving Security in Distance Education through Iris Recognition
Authors
1 Vickram College of Engineering, Madurai, Tamilnadu, IN
Source
Biometrics and Bioinformatics, Vol 6, No 2 (2014), Pagination:
Abstract
In distance education, the teacher and students are separated by time and distance, and students access their online courses through proper authentication methods. Student authentication in distance education has been a primary issue for federal policy makers. This paper describes an approach that strengthens students' user IDs and passwords by adding biometric technology, to increase academic integrity and ensure proper use of federal student aid. It proposes a method combining traditional authentication (username and password) with biometrics: iris recognition is used to authenticate access to the registration, participation, assessment and academic credit of distance education courses. Iris recognition is a high-confidence biometric identification system with a promising future in the security systems area. In this paper, the features of a query image are compared with those of a database image to obtain matching scores; the features are extracted from pre-processed iris images. The matching process uses the Hamming Bit Distance (HBD) and the Fragile Bit Distance (FBD).
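The core of such matching is a fractional Hamming distance computed only over bit positions both masks flag as reliable; masking out fragile bits is the idea the FBD refinement builds on. The bit strings below are illustrative and far shorter than real iris codes.

```python
def hamming_bit_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only positions marked reliable (1) in both masks."""
    usable = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    disagreements = sum(code_a[i] != code_b[i] for i in usable)
    return disagreements / len(usable)

code_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_b = [1, 0, 0, 1, 0, 1, 1, 0]
mask_a = [1, 1, 1, 1, 1, 1, 0, 1]   # 0 marks a fragile/occluded bit
mask_b = [1, 1, 1, 1, 0, 1, 1, 1]

hbd = hamming_bit_distance(code_a, code_b, mask_a, mask_b)
```

A distance near 0 indicates the same iris; a threshold on this score trades off the FAR and FRR rates listed in the keywords.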
Keywords
EER-Equal Error Rate, FAR-False Acceptance Rate, FRR-False Rejection Rate, FBD-Fragile Bit Distance.

- Social Media Analysis for TamilNadu Tourism Places using VIKOR Approach
Authors
1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
2 Department of Computer Science and Engineering, Vickram College of Engineering, Madurai, TamilNadu, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 7, No 5 (2015), Pagination: 133-138
Abstract
Sentiment analysis deals with the analysis of emotions, opinions and facts in the sentences people express. It allows us to track people's attitudes and feelings by analysing blogs, comments, reviews and tweets about any aspect. In this paper, a sentiment analysis model using the VIKOR approach is proposed to rank tourism places from user reviews. The model gathers online user reviews about Tamil Nadu tourism and analyses the sentiments they express. An information extraction step filters irrelevant reviews, extracts sentiment words for the identified features, and quantifies the sentiment of each feature using the General Inquirer. Finally, VIKOR, a multi-criteria decision making (MCDM) method, ranks the tourism places from the aggregated senti-scores. The sentiment analysis of tweets and reviews is carried out along multiple dimensions using Natural Language Processing techniques. This information can be used to improve business outcomes and ensure a very high level of user satisfaction.
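The VIKOR ranking step can be sketched directly from its standard formulas (group utility S, individual regret R, compromise index Q); the places, criteria, weights and scores below are invented for illustration.

```python
def vikor_rank(scores, weights, v=0.5):
    """Rank alternatives by the VIKOR compromise index Q (lower = better).

    `scores[i][j]` is the benefit score of alternative i on criterion j
    (here: aggregated senti-scores per review aspect); `v` balances
    group utility against individual regret.
    """
    m, n = len(scores), len(weights)
    best = [max(row[j] for row in scores) for j in range(n)]
    worst = [min(row[j] for row in scores) for j in range(n)]
    S, R = [], []
    for row in scores:
        d = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
             for j in range(n)]
        S.append(sum(d))   # group utility
        R.append(max(d))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
         for i in range(m)]
    return sorted(range(m), key=lambda i: Q[i])

# Three places scored on (scenery, access, cleanliness) senti-scores.
scores = [
    [0.9, 0.6, 0.8],   # place 0
    [0.5, 0.9, 0.6],   # place 1
    [0.4, 0.5, 0.5],   # place 2
]
ranking = vikor_rank(scores, weights=[0.5, 0.3, 0.2])
```

Place 0 wins on both low regret and high group utility, so it heads the compromise ranking.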
Keywords
Sentiment Analysis, Opinion Mining, VIKOR, MCDM Technique.

- Applying MCDM Techniques for Ranking Products Based on Online Customer Feedback
Authors
1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
2 Department of Computer Science and Engineering, Vickram College of Engineering, Enathi, Tamil Nadu, IN
Source
International Journal of Knowledge Based Computer System, Vol 3, No 2 (2015), Pagination: 21-26
Abstract
Text analytics distils structured information out of unstructured or semi-structured text. User feedback analysis, or sentiment analysis, on products highlights their best and worst features and supports recommending products to new buyers. The model extracts positive and negative comments and identifies the emotion in a piece of text, with n-way classification into very positive, positive, neutral, negative or very negative. Natural Language Processing (NLP) tools play a vital role in classifying the sentiment polarity of sentences, while data analytics drives the product recommendation. In this paper, we propose a recommender system model that ranks products based on user feedback. Features, the topics of interest, are identified from the set of review texts; sentiments are detected in each review, and a senti-score is calculated for each feature of a product. We use the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), both multi-criteria decision making techniques, to rank a set of products. This method provides a logical framework for determining the benefits of each product based on its features, and the products are ranked accordingly.
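The TOPSIS half of this pipeline follows a fixed recipe: vector-normalize the decision matrix, weight it, find the ideal and anti-ideal points, and rank by relative closeness. A sketch with invented products, features and senti-scores:

```python
import math

def topsis_rank(matrix, weights):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    `matrix[i][j]` is the senti-score of product i on feature j
    (all treated as benefit criteria here)."""
    m, n = len(matrix), len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]
    closeness = []
    for i in range(m):
        d_pos = math.dist(v[i], ideal)   # distance to the ideal point
        d_neg = math.dist(v[i], anti)    # distance to the anti-ideal point
        closeness.append(d_neg / (d_pos + d_neg))
    return sorted(range(m), key=lambda i: closeness[i], reverse=True)

# Three products scored on (battery, camera, price-value) senti-scores;
# weights would come from AHP pairwise comparisons in the full model.
matrix = [
    [0.8, 0.7, 0.9],
    [0.6, 0.9, 0.5],
    [0.4, 0.4, 0.6],
]
ranking = topsis_rank(matrix, weights=[0.4, 0.4, 0.2])
```

In the paper's design, AHP supplies the feature weights and TOPSIS turns the weighted senti-scores into the final product ranking.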
Keywords
Sentiment Analysis, User Feedback Analysis, Multi-Criteria Decision Making, Technique for Order Preference by Similarity to Ideal Solution, Analytic Hierarchy Process.

References
- Arabameri, A. (2014). Application of the Analytic Hierarchy Process (AHP) for locating fire stations: Case study Maku city. Merit Research Journal of Art, Social Science and Humanities. 2(1), 1-10.
- Azar, F. S. (2000). Multi-attribute decision-making: Use of three scoring methods to compare the performance of imaging techniques for breast cancer detection. University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-00-10. Retrieved from http://repository.upenn.edu/cis_reports/119/ (accessed June 29, 2015).
- Bhutia, P. W., & Phipon, R. (2012). Application of AHP and TOPSIS method for supplier selection problem. IOSR Journal of Engineering, 2(10), 43-50.
- Caterino, N., Iervolino, I., Manfredi, G., & Cosenza, E. (2008). A comparative analysis of decision making methods for the seismic retrofit of RC buildings. Paper presented at the 14th World Conference on Earthquake Engineering, Beijing, China.
- Hwang, C. L., & Yoon, K. P. (1981). Multiple attribute decision making: Methods and applications. New York, Springer-Verlag.
- Kang, D., & Park, Y. (2012). Measuring customer satisfaction of service based on an analysis of the user generated contents: Sentiment analysis and aggregating function based MCDM approach. Paper presented at the 6th IEEE International Conference on Management of Innovation and Technology, ICMIT, Bali, Indonesia.
- Liu, B. (2012). Sentiment analysis and opinion mining. Morgan & Claypool Publishers.
- Senseforth Technologies. (2014). Consumer electronics. Retrieved from http://senseforth.com (accessed on November 15, 2014).
- The Harvard General Inquirer. (2015). Guide to general inquirer category listings: General inquirer basic spreadsheet. Retrieved from http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm (accessed on August 12, 2015).
- The R Project. (2015). R version 3.2.2. Retrieved from http://www.r-project.org (accessed July 19, 2015).
- The Stanford NLP Group. (2015). Stanford log-linear part-of-speech tagger. Retrieved from http:// nlp.stanford.edu/software/tagger.shtml (accessed August 5, 2015).
- The twitteR API. (2015). twitteR: R Based Twitter Client. Retrieved from http://cran.r-project.org/web/packages/ twitteR (accessed on July 19, 2015).
- Triantaphyllou, E., & Mann, S. H. (1995). Using the analytic hierarchy process for decision making in engineering applications: Some challenges. International Journal of Industrial Engineering: Applications and Practice, 2(1), 35-44.
- Velasquez, M. & Hester, P. T. (2013). An analysis of multicriteria decision making methods. International Journal of Operations Research. 10(2), 56-66.
- Context-Based Feature Extraction Technique – LSI vs LDA
Authors
1 Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, IN
2 Department of Computer Science and Engineering, KLN Information Technology, Madurai, Tamil Nadu, IN
Source
Data Mining and Knowledge Engineering, Vol 10, No 5 (2018), Pagination: 85-92
Abstract
The Internet holds an enormous number of documents, which need to be annotated for further processing. Customer reviews and feedback on products are mostly analysed using text mining or text analytics techniques. Feature extraction plays a vital role in the text analytics methodology: the most relevant features are extracted and used for text processing. This research article focuses on the use of Latent Dirichlet Allocation (LDA) as the feature extraction technique and compares it with the prominent technique of Latent Semantic Indexing (LSI).
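Both techniques compared here operate on the same starting representation, a term-document count matrix: LSI factorizes it with singular value decomposition, while LDA treats each column as a document's word counts when inferring topics. A minimal sketch of building that shared matrix (toy documents, naive whitespace tokenization):

```python
from collections import Counter

def term_document_matrix(docs):
    """Bag-of-words term-document count matrix: one row per vocabulary
    term, one column per document."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[term] for c in counts] for term in vocab]

docs = [
    "battery life is great great",
    "camera quality is poor",
]
vocab, matrix = term_document_matrix(docs)
```

From here, LSI would take a truncated SVD of `matrix` to obtain latent concept dimensions, whereas LDA would fit per-document topic mixtures and per-topic word distributions to the same counts.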
Keywords
Text Analytics, Feature Extraction, Latent Semantic Indexing (LSI), Latent Dirichlet Allocation (LDA), Document Categorization.

References
- Aswani Kumar, & Srinivas, S. (2009). On the Performance of Latent Semantic Indexing-based Information Retrieval. Journal of Computing and Information Technology, 17(3), 259–264.
- Blei, D., Ng, A., & Jordan, M. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.
- Chawla, K., Ramteke, A., & Bhattacharyya, P. (2013). IITB-Sentiment-Analysts: Participation in sentiment analysis in Twitter SemEval 2013 task. Seventh International Workshop on Semantic Evaluation, 495-500.
- Binkley, D., Heinz, D., Lawrie, D., & Overfelt, J. (2014). Understanding LDA in source code analysis. Proceedings of the 22nd International Conference on Program Comprehension, ICPC '14, Hyderabad, India.
- Guo, H., Zhu, H., Guo, Z., & Su, Z. (2009). Product feature categorization with multilevel latent semantic association. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, Hong Kong, China.
- Harb, A., Plantie, M., Dray, G., Roche, M., Trousset, F., & Poncelet, P. (2008). Web opinion mining: How to extract opinions from blogs? CSTST '08 International Conference on Soft Computing as Transdisciplinary Science and Technology, 211-217.
- Liu, J., Cao, Y., Lin, C. Y., Huang, Y., & Zhou, M. (2007). Low-quality product review detection in opinion summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
- Liu, B. (2012). Sentiment analysis and opinion mining. San Rafael, California: Morgan & Claypool Publishers.
- Manning, C. D., Raghavan, P., & Schütze, H. (2009). An Introduction to Information Retrieval. Cambridge, England: Cambridge University Press.
- Meena, A., & Prabhakar, T. V. (2007). Sentence level sentiment analysis in the presence of conjuncts using linguistic analysis. 29th European Conference on IR Research, ECIR 2007, LNCS 4425, 573-580.
- Pang, B., & Lee, L. (2002). Thumbs up? Sentiment classification using machine learning techniques. Proceedings of Empirical Methods in Natural Language Processing, 79-86.
- Qiu, G., Liu, B., Bu, J., & Chen, C. (2008). Expanding domain sentiment lexicon through double propagation. Computational Linguistics, 37(1), 9-27.
- Saif, H., He, Y., & Alani, H. (2012). Semantic sentiment analysis of twitter. In the 11th International Semantic Web Conference (ISWC 2012), Boston, MA, USA.
- Somprasertsri, G., & Lalitrojwong, P. (2010). Mining feature-opinion in online customer reviews for opinion summarization. Journal of Universal Computer Science, 16(6), 938-955.
- Wei, W., & Gulla, J. A. (2010). Sentiment learning on product reviews via sentiment ontology tree. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 404-413.