Sqoop Export Failing

Hello,

I am trying to do a basic Sqoop export from a Hive table to MySQL, but it is failing.

Sqoop Command

sqoop export \
--connect jdbc:mysql://ip-172-31-20-247/sqoopex \
--username sqoopuser \
--password *** \
--export-dir /apps/hive/warehouse/charantheking16954_test.db/daily_revenue_hive \
--table sai_daily_revenue_hive \
--input-lines-terminated-by "\001"

Error

19/03/19 12:07:30 INFO mapreduce.Job: Job job_1552564637687_1237 failed with state FAILED due to: Task failed task_1552564637687_1237_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

19/03/19 12:07:30 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=1
Killed map tasks=3
Launched map tasks=4
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=67632
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=22544
Total vcore-milliseconds taken by all map tasks=22544
Total megabyte-milliseconds taken by all map tasks=34627584
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
19/03/19 12:07:30 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
19/03/19 12:07:30 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 18.1876 seconds (0 bytes/sec)
19/03/19 12:07:30 INFO mapreduce.ExportJobBase: Exported 0 records.
19/03/19 12:07:30 ERROR mapreduce.ExportJobBase: Export job failed!
19/03/19 12:07:30 ERROR tool.ExportTool: Error during export: Export job failed!
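
(I understand the job-level counters above don't include the underlying exception; the actual stack trace should be in the failed task's log, which, assuming the standard job-to-application ID mapping, can be pulled with:

yarn logs -applicationId application_1552564637687_1237

I am pasting the job summary here since that is what the console shows.)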

Hive Table Description

hive> describe daily_revenue_hive;
OK
order_date string
revenue double
Time taken: 0.374 seconds, Fetched: 2 row(s)

MySQL Table Description

mysql> describe sai_daily_revenue_hive;
+------------+--------------+------+-----+---------+-------+
| Field      | Type         | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+-------+
| order_date | varchar(100) | YES  |     | NULL    |       |
| revenue    | float        | YES  |     | NULL    |       |
+------------+--------------+------+-----+---------+-------+

Data within the Hive file
[charantheking16954@ip-172-31-38-146 ~]$ hadoop fs -tail /apps/hive/warehouse/charantheking16954_test.db/daily_revenue_hive/000000_0
2013-07-25 00:00:00.068153.82999999997
2013-07-26 00:00:00.0136520.1700000003
2013-07-27 00:00:00.0101074.34000000014
2013-07-28 00:00:00.087123.08000000013
2013-07-29 00:00:00.0137287.09000000032
2013-07-30 00:00:00.0102745.62000000011
2013-07-31 00:00:00.0131878.06000000006
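
(The fields above only look run together because Hive's default field delimiter, \001 (Ctrl-A), is non-printing. A quick way to confirm the delimiter is actually there, assuming a standard Linux cat, is:

hadoop fs -cat /apps/hive/warehouse/charantheking16954_test.db/daily_revenue_hive/000000_0 | head -3 | cat -v

which renders each \001 as ^A between order_date and revenue.)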

Can you please advise if I am missing anything?

Thanks,
Sai

My mistake. I used --input-lines-terminated-by instead of --input-fields-terminated-by.
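
For anyone hitting the same issue, the corrected command (same connection details and paths as in the original post) would be:

sqoop export \
--connect jdbc:mysql://ip-172-31-20-247/sqoopex \
--username sqoopuser \
--password *** \
--export-dir /apps/hive/warehouse/charantheking16954_test.db/daily_revenue_hive \
--table sai_daily_revenue_hive \
--input-fields-terminated-by "\001"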