Show simple item record

dc.contributor.author	Ahmed N
dc.contributor.author	Barczak ALC
dc.contributor.author	Susnjak T
dc.contributor.author	Rashid MA
dc.date.available	2020-12-14
dc.date.issued	2020-12-14
dc.identifier	http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000599799400001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=c5bb3b2499afac691c2e3c1a83ef6fef
dc.identifier	ARTN 110
dc.identifier.citation	JOURNAL OF BIG DATA, 2020, 7 (1)
dc.description.abstract	Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions for analyzing vast amounts of data. Owing to its application programming interface (API) availability and its performance, Spark has become very popular, even more popular than the MapReduce framework. Each of these frameworks exposes more than 150 parameters, and the combination of parameter settings has a massive impact on cluster performance. Default parameter values let system administrators deploy applications without much effort and measure the performance of their specific cluster with factory settings. However, an open question remains: can a better parameter selection improve cluster performance on large datasets? This study investigates the most influential parameters, grouped under resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark on a cluster implemented in our laboratory. We tuned these parameters using a trial-and-error approach over a large number of experiments. For the comparative evaluation we selected two workloads: WordCount and TeraSort. Performance was measured against three criteria: execution time, throughput, and speedup. Our experimental results reveal that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis shows that Spark performs better than Hadoop when datasets are small, achieving up to a two-fold speedup on WordCount workloads and up to a fourteen-fold speedup on TeraSort workloads when default parameter values are reconfigured.
dc.publisher	BioMed Central Ltd
dc.rights	The Authors (CC BY 4.0)
dc.subject	HiBench
dc.subject	BigData
dc.subject	Hadoop
dc.subject	MapReduce
dc.subject	Benchmark
dc.subject	Spark
dc.title	A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench
dc.type	Journal article
dc.citation.volume	7
dc.identifier.doi	10.1186/s40537-020-00388-5
dc.identifier.elements-id	436695
dc.relation.isPartOf	JOURNAL OF BIG DATA
dc.citation.issue	1
dc.identifier.eissn	2196-1115
dc.description.publication-status	Published
pubs.organisational-group	/Massey University
pubs.organisational-group	/Massey University/College of Sciences
pubs.organisational-group	/Massey University/College of Sciences/School of Food and Advanced Technology
pubs.organisational-group	/Massey University/College of Sciences/School of Mathematical and Computational Sciences
dc.identifier.harvested	Massey_Dark
pubs.notes	Not known
dc.subject.anzsrc	08 Information and Computing Sciences
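The abstract above groups the tuned parameters under resource utilization, input splits, and shuffle. As a minimal sketch of what such tuning looks like in practice, the PySpark WordCount below sets one real Spark parameter from each category; the specific values and paths are illustrative assumptions, not the settings or results reported in the paper.

    from pyspark.sql import SparkSession

    # Sketch only: parameter names are real Spark settings, but the
    # values below are illustrative, not the paper's tuned values.
    spark = (
        SparkSession.builder
        .appName("wordcount-tuning-sketch")
        # Resource utilization: memory and cores per executor
        .config("spark.executor.memory", "4g")
        .config("spark.executor.cores", "4")
        # Shuffle: partition count for wide operations such as reduceByKey
        .config("spark.default.parallelism", "64")
        .getOrCreate()
    )
    sc = spark.sparkContext

    # Input splits: minPartitions hints how the input file is divided
    lines = sc.textFile("hdfs:///data/wordcount/input", minPartitions=64)

    counts = (
        lines.flatMap(lambda line: line.split())
             .map(lambda word: (word, 1))
             .reduceByKey(lambda a, b: a + b)
    )
    counts.saveAsTextFile("hdfs:///data/wordcount/output")
    spark.stop()

Roughly analogous Hadoop MapReduce knobs would be mapreduce.map.memory.mb (resource utilization), mapreduce.input.fileinputformat.split.maxsize (input splits), and mapreduce.task.io.sort.mb (shuffle). For the evaluation criteria, throughput is commonly computed as input size divided by execution time, and speedup as the ratio of default-configuration execution time to tuned execution time; the paper's exact definitions may differ.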

