Tag Archives: MapReduce

Length of a Post/Answer Map Reducer Code

This job looks at the correlation between the length of a post and the length of its answers: it outputs the length of each question post together with the average length of the answers to that post.

Mapper result    Reducer result
111 35 111
15084 237 15084
2 145 2
3778 164 3848
3778 69 3778
66193 60 66193
66193 34 66199
66193 302 66196
66193 288 66195
7185 86 7185
10000001 140 323.940451745
10000002 625 465.578947368
10000005 0 35.0
10000006 836 99.6666666667
10000007 4224 580.428571429
66193 59 154.666666667
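The grouping and averaging that the reducer performs can be sketched in plain Python. This is an illustration of the technique, not the post's actual reducer, and the sample pairs below are invented rather than taken from the table above.

```python
from itertools import groupby
from operator import itemgetter

def reduce_avg(pairs):
    """Average the answer lengths for each post-length key.
    pairs: iterable of (post_length, answer_length) tuples, as a mapper
    for this job would emit them."""
    out = {}
    # Sort by key first, exactly as the Hadoop shuffle does before the reducer runs.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        lengths = [answer_len for _, answer_len in group]
        out[key] = sum(lengths) / len(lengths)
    return out

pairs = [(111, 35), (111, 41), (3778, 164), (3778, 69)]
print(reduce_avg(pairs))  # {111: 38.0, 3778: 116.5}
```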

Read more of this post


Top tag Map Reducer Code

The top 10 tags used in posts, ordered by the number of questions they appear in.
The forum CSV file contains the following fields:
“id”    “title”    “tagnames”    “author_id”    “body”    “node_type”    “parent_id”    “abs_parent_id”    “added_at”    “score”    “state_string”    “last_edited_id”    “last_activity_by_id”    “last_activity_at”    “active_revision_id”    “extra”    “extra_ref_id”    “extra_count”    “marked”
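The counting itself is straightforward. Here is a minimal sketch of the idea, assuming tags in the “tagnames” field are space-separated and that each tag is counted once per question (the field names and sample rows are illustrative, not from the real dataset):

```python
from collections import Counter

def top_tags(rows, n=10):
    """Count tag occurrences across question rows and return the n most common.
    rows: iterable of (node_type, tagnames) pairs pulled from the forum CSV."""
    counts = Counter()
    for node_type, tagnames in rows:
        if node_type == "question":      # only questions contribute to the count
            counts.update(tagnames.split())
    return counts.most_common(n)

rows = [("question", "cs101 python"),
        ("answer", "cs101 python"),     # answers are skipped
        ("question", "python hadoop")]
print(top_tags(rows, 2))  # [('python', 2), ('cs101', 1)]
```

In a real MapReduce job the mapper would emit (tag, 1) pairs and the reducer would sum them; the final top-10 ordering is then a small sort over the reducer output.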

Study Group Map Reducer Code

Analyzing each forum thread gives us a list of the students who have posted there – whether they asked the question, answered a question, or added a comment.
The forum CSV file contains the following fields:
“id”    “title”    “tagnames”    “author_id”    “body”    “node_type”    “parent_id”    “abs_parent_id”    “added_at”    “score”    “state_string”    “last_edited_id”    “last_activity_by_id”    “last_activity_at”    “active_revision_id”    “extra”    “extra_ref_id”    “extra_count”    “marked”
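The core of this analysis is grouping poster ids by thread. A minimal sketch, assuming a question's thread key is its own “id” while answers and comments carry the thread in “abs_parent_id” (that schema reading is an assumption, as are the sample rows):

```python
from collections import defaultdict

def students_per_thread(nodes):
    """Group poster ids by forum thread.
    nodes: iterable of (id, author_id, node_type, abs_parent_id) tuples."""
    threads = defaultdict(set)       # set: each student listed once per thread
    for node_id, author_id, node_type, abs_parent_id in nodes:
        key = node_id if node_type == "question" else abs_parent_id
        threads[key].add(author_id)
    return {key: sorted(authors) for key, authors in threads.items()}

nodes = [("q1", "alice", "question", ""),
         ("a1", "bob",   "answer",   "q1"),
         ("c1", "alice", "comment",  "q1")]   # alice posts twice, counted once
print(students_per_thread(nodes))  # {'q1': ['alice', 'bob']}
```

In MapReduce form, the mapper would emit (thread_id, author_id) and the reducer would deduplicate the authors for each thread.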

MapReduce Code

Input Data

2012-01-01    2:01    Omaha    Book    10.51    Visa

It’s tab-delimited; the values are the date, the time, the store name, a description of the item, the cost, and the method of payment.
Mapper Code (mapper.py)

    import sys

    # Each input line: date, time, store name, item, cost, payment method (tab-separated)
    for line in sys.stdin:
        data = line.strip().split("\t")
        if len(data) == 6:      # skip malformed lines
            date, time, storename, productname, cost, paymethod = data
            print("{0}\t{1}".format(storename, cost))

Reducer Code (reducer.py)
In my case, I have a single Reducer, because that’s the Hadoop default, so it will get all the keys. If I had specified more than one Reducer, each would receive some of the keys, along with all the values from all the Mappers for those keys.
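The reducer itself isn’t shown in this excerpt. A sketch of what it might look like, written as a function so it is easy to test; it relies on Hadoop streaming’s guarantee that the shuffle delivers lines sorted by key:

```python
def reducer(lines):
    """Sum sales per store. Lines arrive as 'store\\tcost', sorted by store,
    so a store's total can be emitted as soon as the key changes."""
    old_key, total = None, 0.0
    for line in lines:
        parts = line.strip().split("\t")
        if len(parts) != 2:              # skip malformed lines
            continue
        key, sale = parts
        if old_key is not None and key != old_key:
            yield "{0}\t{1}".format(old_key, total)
            total = 0.0
        old_key = key
        total += float(sale)
    if old_key is not None:              # flush the final store
        yield "{0}\t{1}".format(old_key, total)

# In reducer.py you would drive this with:
#     import sys
#     for out in reducer(sys.stdin):
#         print(out)
```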

Running a Hadoop Job

In my local directory, I have mapper.py and reducer.py, the code for the mapper and the reducer.
Running a job
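The commands look something like this. The local pipeline is a common way to smoke-test streaming code before submitting; the streaming jar’s path and the input/output names below are assumptions that vary by Hadoop distribution and setup:

```shell
# Local smoke test: sort stands in for the Hadoop shuffle
cat purchases.txt | python mapper.py | sort | python reducer.py

# Submit the streaming job to the cluster (jar path varies by distribution)
hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming.jar \
    -mapper mapper.py -reducer reducer.py \
    -file mapper.py -file reducer.py \
    -input purchases -output sales_by_store
```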

MapReduce – Mappers and Reducers

How that data is processed with MapReduce.

Processing a large file serially from the top to the bottom could take a long time.

MapReduce is designed to be a very parallelized way of managing data, meaning that your input data is split into many pieces, and each piece is processed simultaneously.

Real-world scenario: a ledger which contains all the sales from thousands of stores around the USA, organized by date. We want to calculate the total sales generated by each store over the last year. One way is to start at the beginning of the ledger and, for each entry, write down the store name and the amount next to it. For the next entry, if the store name is already on the list, add the amount to that store’s total. If not, add the new store name and that first purchase. And so on, and so on.
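That serial procedure is just a running tally in a lookup table. A sketch with invented store names, showing the single-pass approach that MapReduce later parallelizes:

```python
# Each ledger entry: (store name, sale amount) -- sample data, not real ledger rows
sales = [
    ("Omaha", 10.51),
    ("Miami", 4.99),
    ("Omaha", 7.25),
]

totals = {}
for store, amount in sales:
    # If the store is already on the list, add to its total; otherwise start one.
    totals[store] = totals.get(store, 0.0) + amount

print(totals)
```

With one machine this is a single slow pass over the whole ledger; MapReduce splits the ledger into pieces, tallies each piece in parallel, and then merges the per-store totals.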

Hadoop Cluster / Ecosystem

Core Hadoop consists of a way to store data, known as the Hadoop Distributed File System, or HDFS, and a way to process the data, called MapReduce. Hadoop splits the data up and stores it across a collection of machines, known as a cluster.

Then, when we want to process the data, we process it where it’s actually stored. Rather than retrieving the data from a central server, it’s already on the cluster, and we can process it in place. You can add more machines to the cluster (make the cluster bigger) as the amount of data you’re storing grows. The machines in the cluster don’t need to be particularly high-end, although most clusters are built using rack-mount servers.