
Fault Tolerant Dynamic Task Clustering to Improve Workflow Makespan in Clouds


Authors

Vinodhini
Department of Computer Application, Alagappa University, Karaikudi, Tamil Nadu, India

A. Padmapriya
Department of Computer Science, Alagappa University, Karaikudi, Tamil Nadu, India
     



Abstract

Task clustering is a runtime optimization technique for compute-intensive workflows that reduces execution overhead by increasing computational granularity in clouds. A job is usually composed of one or more tasks, and jobs that bundle multiple tasks carry a higher risk of failure than single-task jobs. Clustering strategies can be used to limit the impact of such job failures, and fault-tolerant clustering methods improve the runtime behaviour of workflow executions. Static task clustering is the most widely used clustering approach for faulty environments. The proposed work applies dynamic task clustering to improve workflow makespan by adjusting the clustering granularity whenever the likelihood of failure rises. The proposed method adapts well to unexpected behaviour and yields better makespans than the static method.
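The abstract does not spell out how the granularity adjustment is made, so the sketch below is only a rough illustration of the general idea: a scheduler that shrinks the cluster size (tasks merged per job) on a resource whose observed failure rate has grown, so that a failure forces a smaller re-execution. The names (DynamicClusterer, ResourceStats, base_granularity) and the halving rule are hypothetical and are not the authors' algorithm.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ResourceStats:
    """Running failure statistics observed for one execution site/VM."""
    jobs_submitted: int = 0
    jobs_failed: int = 0

    @property
    def failure_rate(self) -> float:
        return self.jobs_failed / self.jobs_submitted if self.jobs_submitted else 0.0


class DynamicClusterer:
    """Merge workflow tasks into clustered jobs, shrinking the cluster size
    (granularity) on resources whose observed failure rate has grown."""

    def __init__(self, base_granularity: int = 8, min_granularity: int = 1):
        self.base_granularity = base_granularity
        self.min_granularity = min_granularity

    def granularity_for(self, stats: ResourceStats) -> int:
        # Halve the cluster size for roughly every 20% of observed failures,
        # so a flaky resource re-runs small jobs instead of large ones.
        g = self.base_granularity
        rate = stats.failure_rate
        while rate >= 0.2 and g > self.min_granularity:
            g //= 2
            rate -= 0.2
        return max(g, self.min_granularity)

    def cluster(self, tasks: List[str], stats: ResourceStats) -> List[List[str]]:
        # Group consecutive tasks into jobs of the chosen granularity.
        g = self.granularity_for(stats)
        return [tasks[i:i + g] for i in range(0, len(tasks), g)]


if __name__ == "__main__":
    stats = ResourceStats(jobs_submitted=10, jobs_failed=3)      # 30% failures seen so far
    jobs = DynamicClusterer(base_granularity=8).cluster(
        [f"t{i}" for i in range(16)], stats)
    print(f"{len(jobs)} clustered jobs of size {len(jobs[0])}")  # smaller jobs than the static default
```

With these assumed settings, a resource showing a 30% failure rate would receive jobs of four tasks instead of eight, trading a little extra scheduling overhead for a much cheaper retry when a job does fail, which is the makespan trade-off the abstract describes.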



Keywords

Fault-Tolerant, Scientific Workflows, Task Clustering.










