Can you share a practical example of accessing and modifying JSON data in a znode that can be tested in the lab?
Also, if the main data is in HDFS, what type of JSON data is stored in znodes?
The normal ZooKeeper command-line client doesn't seem to work properly with JSON objects containing spaces.
ZooKeeper is generally used by various highly available services via its APIs. The APIs are available natively in C and Java, and clients exist for almost every other language. To find more clients on GitHub, use this search: https://github.com/search?q=zookeeper+client
I would suggest taking a look at the ZooKeeper chapter of the Big Data with Hadoop and Spark course. Also, look at the way Kafka brokers store their information in ephemeral nodes:
[zk: localhost:2181(CONNECTED) 0] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1003, 1002, 1001]
[zk: localhost:2181(CONNECTED) 2] get /brokers/ids/1003
_**{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://ip-172-31-20-247.ec2.internal:6667"],"rack":"/default-rack","jmx_port":-1,"host":"ip-172-31-20-247.ec2.internal","timestamp":"1535092037717","port":6667,"version":4}**_
cZxid = 0xa00017e92
ctime = Fri Aug 24 06:27:17 UTC 2018
mZxid = 0xa00017e92
mtime = Fri Aug 24 06:27:17 UTC 2018
pZxid = 0xa00017e92
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x3656622f4450016
dataLength = 251
numChildren = 0
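If you want to try creating and modifying JSON data yourself in the lab, below is a minimal sketch using the native Java client. The connect string localhost:2181 and the znode path /json_demo are assumptions for the example, so adjust them to your setup. Because the API takes the data as raw bytes, a JSON string with spaces needs no special escaping, which avoids the argument-splitting issue you hit on the command line.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeJsonDemo {
    public static void main(String[] args) throws Exception {
        // Wait until the session is actually connected before issuing requests.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        String path = "/json_demo";  // hypothetical znode used only for this demo
        byte[] json = "{\"app\": \"demo\", \"owner\": \"big data lab\"}"
                .getBytes(StandardCharsets.UTF_8);

        // Create the znode if it does not exist yet. The data is just raw bytes,
        // so spaces inside the JSON need no special handling here.
        if (zk.exists(path, false) == null) {
            zk.create(path, json, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Read the data back together with its stat (dataVersion, mtime, etc.).
        Stat stat = new Stat();
        byte[] stored = zk.getData(path, false, stat);
        System.out.println("Current data: " + new String(stored, StandardCharsets.UTF_8));

        // Modify the JSON. Passing the version we just read makes the update fail
        // if somebody else changed the znode in the meantime (optimistic locking).
        byte[] updated = "{\"app\": \"demo\", \"owner\": \"big data lab\", \"retries\": 3}"
                .getBytes(StandardCharsets.UTF_8);
        zk.setData(path, updated, stat.getVersion());

        zk.close();
    }
}

You only need the ZooKeeper client jar on the classpath to compile and run this; afterwards, running get /json_demo in the shell should show the updated JSON.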
Is ZooKeeper processing the MapReduce jobs?
No. Some applications might use ZooKeeper for coordination, but we almost never make any connection to a central system from inside our MapReduce jobs.
If possible, please explain this with an architecture diagram.
I would suggest looking into our course, where we discuss this in more detail.