Hadoop Streaming Character Count using Shell

I tried writing a Hadoop Streaming program that counts characters using shell scripts, but it keeps failing. Please help. Here is the code:
mapper.sh
#!/bin/bash
# Emit "<character>\t1" for every character of the input.
while read -r line
do
    # GNU awk: -F '' splits the record into one field per character,
    # and OFS='\n' then puts each character on its own line.
    for i in $(echo "$line" | awk -F '' -v OFS='\n' '{$1=$1}1')
    do
        echo -e "$i\t1"
    done
done
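As a quick sanity check before running on the cluster, the mapper's expected output for a sample line can be reproduced locally. This sketch swaps the awk empty-FS trick for `grep -o .` (which also splits a line into one character per output line) so it does not depend on GNU awk:

```shell
# Each input character should come out as "<char><TAB>1".
printf 'hi\n' | grep -o . | while read -r c; do
    printf '%s\t1\n' "$c"
done
# → h	1
# → i	1
```

If the mapper emits anything other than one tab-separated key/value pair per line, the shuffle and the reducer downstream will misbehave.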

reducer.sh
#!/bin/bash
# Streaming guarantees the reducer's input is sorted by key, so equal
# keys arrive together: keep a running total and emit it on key change.
lastkey=""
total=0
while read -r line
do
    newkey=$(echo "$line" | awk '{print $1}')
    value=$(echo "$line" | awk '{print $2}')
    if [ -n "$lastkey" ] && [ "$lastkey" != "$newkey" ]
    then
        echo -e "$lastkey\t$total"
        total=0
    fi
    lastkey=$newkey
    total=$((total + value))
done
# Flush the final key; without this the last character is never reported.
if [ -n "$lastkey" ]
then
    echo -e "$lastkey\t$total"
fi
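One subtlety worth testing in any streaming reducer of this shape: the last key must be flushed after the loop ends, because no key change ever follows it. Here is a minimal standalone demo of the per-key summing loop on hand-sorted input (a sketch of the logic, not the exact script):

```shell
# Sorted "<key>\t1" pairs in; one "<key>\t<total>" line out per key.
printf 'a\t1\na\t1\nb\t1\n' | {
    lastkey=""
    total=0
    while read -r newkey value; do    # default IFS splits on the tab
        if [ -n "$lastkey" ] && [ "$lastkey" != "$newkey" ]; then
            printf '%s\t%s\n' "$lastkey" "$total"
            total=0
        fi
        lastkey=$newkey
        total=$((total + value))
    done
    # flush the final key after input ends
    if [ -n "$lastkey" ]; then
        printf '%s\t%s\n' "$lastkey" "$total"
    fi
}
# → a	2
# → b	1
```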

Hi @Bhanu_Saurabh,

Can you paste the error here?

Hi Abhinav,

There is no error message as such, but the reducer gets stuck indefinitely.
Please suggest the changes I need to make.

Regards
Bhanu

Okay,

How big is the input file?

Hi Abhinav,

I used the example file /data/mr/wordcount/big.txt
and wanted to match the results with the example given in the lecture.

Regards
Bhanu

Okay,

Can you please post your command here?

Hi Abhinav,

The mapper output looks fine when the job is run without any reducer.

The commands are as follows:
cd /home/bsofcs1496/try_mr_bash

hadoop jar /usr/hdp/2.6.5.0-292/hadoop-mapreduce/hadoop-streaming.jar \
    -input /data/mr/wordcount/big.txt \
    -output mapreduce-programming/character_frequency_shell \
    -mapper ./mapper.sh -file mapper.sh \
    -reducer ./reducer.sh -file reducer.sh
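Before resubmitting to the cluster, the whole job can be emulated locally with plain pipes, since `mapper | sort | reducer` mimics map, shuffle, and reduce. The sketch below stands in for the two scripts with `grep -o .` and awk so it is self-contained; the /tmp paths and sample input are illustrative only:

```shell
tmp=$(mktemp -d)
printf 'abca\nba\n' > "$tmp/input.txt"

# map: one "<char>\t1" line per character
grep -o . "$tmp/input.txt" | awk '{print $0 "\t1"}' > "$tmp/mapped.txt"

# shuffle: Hadoop sorts map output by key between the phases; sort(1)
# plays that role here
sort "$tmp/mapped.txt" > "$tmp/shuffled.txt"

# reduce: sum the values for each key
awk -F '\t' '{sum[$1] += $2} END {for (k in sum) print k "\t" sum[k]}' \
    "$tmp/shuffled.txt" | sort
# → a	3
# → b	2
# → c	1

rm -r "$tmp"
```

If this local pipeline produces the expected counts but the cluster job still hangs, the problem is in how the job is launched rather than in the script logic.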

Regards
Bhanu

Hi @Bhanu_Saurabh,

Have you gone through the error log? The error message there is self-explanatory.