Error while running MapReduce job


#1

[girijaundefined5400@cxln4 ~]$ /usr/local/anaconda/bin/python RatingsBreakdown.py -r hadoop --hadoop-streaming-jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar hdfs:///user/girijaundefined5400/Moviereview/u.data

The code snippet is:

from mrjob.job import MRJob
from mrjob.step import MRStep

class RatingsBreakdown(MRJob):
    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_ratings,
                   reducer=self.reducer_count_ratings)
        ]

    def mapper_get_ratings(self, _, line):
        (userID, movieID, rating, timestamp) = line.split('\t')
        yield rating, 1

    def reducer_count_ratings(self, key, values):
        yield key, sum(values)

if __name__ == '__main__':
    RatingsBreakdown.run()
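For reference, the breakdown this job computes can be sketched in plain Python without mrjob; the sample rows below are in u.data's tab-separated userID, movieID, rating, timestamp format:

```python
from collections import Counter

def ratings_breakdown(lines):
    """Count how many times each rating value appears, mimicking the
    mapper (emit rating, 1) and reducer (sum the ones) above."""
    counts = Counter()
    for line in lines:
        # u.data fields: userID, movieID, rating, timestamp (tab-separated)
        userID, movieID, rating, timestamp = line.split('\t')
        counts[rating] += 1
    return dict(counts)

sample = [
    "196\t242\t3\t881250949",
    "186\t302\t3\t891717742",
    "22\t377\t1\t878887116",
]
print(ratings_breakdown(sample))  # {'3': 2, '1': 1}
```

Running this locally first is a quick way to confirm the parsing logic is sound before submitting the job to the cluster.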

error:

Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Step 1 of 1 failed: Command '['/bin/hadoop', 'jar', '/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar', '-files', 'hdfs:///user/girijaundefined5400/tmp/mrjob/RatingsBreakdown.girijaundefined5400.20191207.182601.641907/files/wd/RatingsBreakdown.py#RatingsBreakdown.py,hdfs:///user/girijaundefined5400/tmp/mrjob/RatingsBreakdown.girijaundefined5400.20191207.182601.641907/files/wd/mrjob.zip#mrjob.zip,hdfs:///user/girijaundefined5400/tmp/mrjob/RatingsBreakdown.girijaundefined5400.20191207.182601.641907/files/wd/setup-wrapper.sh#setup-wrapper.sh', '-input', 'hdfs:///user/girijaundefined5400/Moviereview/u.data', '-output', 'hdfs:///user/girijaundefined5400/tmp/mrjob/RatingsBreakdown.girijaundefined5400.20191207.182601.641907/output', '-mapper', '/bin/sh -ex setup-wrapper.sh python3 RatingsBreakdown.py --step-num=0 --mapper', '-reducer', '/bin/sh -ex setup-wrapper.sh python3 RatingsBreakdown.py --step-num=0 --reducer']' returned non-zero exit status 256.


#2

Hi, Girija.

I am not sure which file you are trying to run.
You can run the job with the following Hadoop Streaming commands.

Run the MapReduce job using Hadoop Streaming, for example:

  1. hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar -D mapred.reduce.tasks=2 -input /data/mr/wordcount/input -output wordcount_clean-unix -mapper ./mycmd.sh -reducer 'uniq -c' -file mycmd.sh

  2. hadoop jar /usr/hdp/hadoop-mapreduce/hadoop-streaming.jar -input /data/mr/wordcount/big.txt -output mapreduce-programming/character_frequency -mapper mapper.py -file mapper.py -reducer reducer.py -file reducer.py

  3. hadoop jar /hadoop-streaming.jar -input /data/mr/wordcount/big.txt -output mapreduce-programming/character_frequency -mapper mapper.py -file mapper.py -reducer reducer.py -file reducer.py
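The mapper.py and reducer.py referenced in commands 2 and 3 are not shown here. For a character-frequency job like the one the output paths suggest, the pair could be sketched as below; this is an assumption about what those scripts do, combined into one file for illustration (Streaming would normally ship them as two separate scripts reading stdin):

```python
import sys
from itertools import groupby
from operator import itemgetter

def map_chars(lines):
    """Mapper: emit (character, 1) for every character in the input."""
    for line in lines:
        for ch in line.rstrip('\n'):
            yield ch, 1

def reduce_counts(pairs):
    """Reducer: pairs arrive grouped by key (Hadoop's shuffle sorts them);
    sum the counts for each character."""
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield key, sum(count for _, count in group)

if __name__ == '__main__':
    # Local smoke test: run map then reduce over stdin in one process.
    print(dict(reduce_counts(map_chars(sys.stdin))))
```

Piping a small file through this script locally (e.g. `python3 charfreq.py < big.txt`) is a cheap sanity check before submitting the Streaming job.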

All the best.
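One more note on the error itself: exit code 127 from the Streaming subprocess usually means the command could not be found. The generated mapper invokes python3 on the worker nodes, while the job was submitted with the Anaconda interpreter, so python3 may simply not be on the workers' PATH. A sketch of a fix, assuming mrjob's --python-bin option and that the Anaconda interpreter exists at the same path on every node:

```shell
/usr/local/anaconda/bin/python RatingsBreakdown.py \
    -r hadoop \
    --python-bin /usr/local/anaconda/bin/python \
    --hadoop-streaming-jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
    hdfs:///user/girijaundefined5400/Moviereview/u.data
```

With --python-bin set, mrjob's setup-wrapper.sh calls that interpreter on the task nodes instead of the bare python3.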