MapReduce: A Programming Model for Cloud Computing Based on the Hadoop Ecosystem
| Content Provider | Semantic Scholar |
|---|---|
| Author | Voruganti, Santhosh |
| Copyright Year | 2014 |
| Abstract | Cloud Computing is emerging as a new computational paradigm shift. Hadoop MapReduce has become a powerful computation model for processing large data on distributed commodity hardware clusters such as clouds. MapReduce is a programming model developed for large-scale analysis. It takes advantage of the parallel processing capabilities of a cluster in order to quickly process very large datasets in a fault-tolerant and scalable manner. The core idea behind MapReduce is mapping the data into a collection of key/value pairs, and then reducing over all pairs with the same key. Using key/value pairs as its basic data unit, the framework is able to work with less-structured data types and to address a wide range of problems. In Hadoop, data can originate in any form, but in order to be analyzed by MapReduce software, it needs to eventually be transformed into key/value pairs. In this paper we implement the MapReduce programming model using two components: a JobTracker (master node) and many TaskTrackers (slave nodes). (A minimal word-count sketch illustrating this model appears after this table.) Keywords: Cloud Computing, Hadoop Ecosystem, MapReduce |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | http://www.ijcsit.com/docs/Volume%205/vol5issue03/ijcsit20140503252.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
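
The abstract summarizes the MapReduce model: the map phase turns input records into key/value pairs, the framework groups all values sharing a key, and the reduce phase aggregates each group, with a JobTracker (master) scheduling work onto many TaskTrackers (slaves). As a minimal illustration, and not code taken from the paper itself, the sketch below shows the canonical word-count job written against the standard Hadoop MapReduce Java API; the class names `WordCount`, `TokenizerMapper`, and `IntSumReducer` are illustrative choices.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: for each input line, emit (word, 1) key/value pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);   // intermediate key/value pair
      }
    }
  }

  // Reduce phase: sum the counts for all pairs that share the same word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);   // final (word, total count) pair
    }
  }

  public static void main(String[] args) throws Exception {
    // Job configuration: wire the mapper, combiner, and reducer together.
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar, the job can be submitted with `hadoop jar wordcount.jar WordCount <input dir> <output dir>`. The framework splits the input among map tasks on the slave nodes, shuffles the intermediate pairs by key, and runs the reducers, re-executing failed tasks on other nodes, which is the fault-tolerant, scalable behavior the abstract describes.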