Which of the following scenarios may not be a good fit for HDFS?

HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file
Excellent! Your Answer is Correct. HDFS follows a write-once model with a single writer per file, so workloads that need multiple simultaneous writes to the same file are a poor fit. It can, however, be used for storing archive data, since it allows data to be stored on low-cost commodity hardware while ensuring a high degree of fault tolerance. A minimal write sketch follows the options below.
HDFS is suitable for storing data related to applications requiring low latency data access
None of the Options is Correct
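As an illustration of the single-writer model behind the correct answer, here is a minimal sketch of writing an archive file to HDFS through the FileSystem API. It assumes a reachable HDFS cluster configured via core-site.xml/hdfs-site.xml, and the path /archive/logs.dat is hypothetical; the point is that HDFS grants one write lease per file, so a second concurrent writer on the same path would fail.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSingleWriterSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);          // handle to the configured HDFS
        Path archive = new Path("/archive/logs.dat");  // hypothetical archive path

        // Only one client may hold the write lease on this path at a time;
        // a concurrent create() or append() from another client would fail.
        try (FSDataOutputStream out = fs.create(archive, true /* overwrite */)) {
            out.writeBytes("archived record\n");
        }
    }
}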

On a tasktracker, the map task passes the split to the createRecordReader() method on InputFormat to obtain a _________ for that split.

RecordReader
Excellent! Your Answer is Correct. The RecordReader loads data from its source and converts it into key-value pairs suitable for reading by the mapper. A minimal InputFormat sketch follows the options below.
InputReader
OutputReader
None of the Options is Correct
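To make the fill-in concrete, below is a minimal sketch of a custom InputFormat in the new (org.apache.hadoop.mapreduce) API. The class name SimpleLineInputFormat is made up for illustration; it simply hands back Hadoop's LineRecordReader, which is the RecordReader the map task obtains for each split.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// The framework hands each split to createRecordReader(); the returned RecordReader
// turns the split's bytes into key-value pairs (byte offset -> line of text) that the
// map task then iterates over.
public class SimpleLineInputFormat extends FileInputFormat<LongWritable, Text> {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) throws IOException {
        return new LineRecordReader(); // initialize(split, context) is called by the framework
    }
}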

The ________ function of the InputFormat class computes the splits for each file, which are then sent to the jobtracker.

getSplits
Excellent! Your Answer is Correct. getSplits() computes the splits, and the jobtracker uses their storage locations to schedule map tasks to process them on the tasktrackers. A short sketch follows the options below.
gets
puts
None of the Options is Correct
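The sketch below shows the same call from the client side. It is a rough illustration, assuming an existing input directory (/data/input is hypothetical) and a plain TextInputFormat, of how getSplits() produces the splits whose storage locations are later used for scheduling.

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class ShowSplitsSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "show-splits");
        FileInputFormat.addInputPath(job, new Path("/data/input")); // hypothetical input directory

        // getSplits() computes (by default) one split per HDFS block of each input file;
        // at submission time the splits and their storage locations go to the jobtracker,
        // which tries to schedule each map task close to its data.
        List<InputSplit> splits = new TextInputFormat().getSplits(job);
        for (InputSplit split : splits) {
            System.out.println(split + " @ " + String.join(",", split.getLocations()));
        }
    }
}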

For ________, the HBase Master UI provides information about the HBase Master uptime.

HBase
Excellent! Your Answer is Correct. Explanation: The HBase Master UI provides information about the number of live, dead, and transitional servers, logs, ZooKeeper information, debug dumps, and thread stacks. A probe sketch follows the options below.
Oozie
Kafka
All Options are Correct
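As a small illustration, the sketch below fetches the HBase Master UI landing page over HTTP using only the JDK. It assumes a master running locally and the default web UI port of recent HBase releases (16010; older releases used 60010), so adjust host and port for a real cluster.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MasterUiProbeSketch {
    public static void main(String[] args) throws Exception {
        // Assumed host/port: a local master on the default UI port 16010 (60010 on older versions).
        URL url = new URL("http://localhost:16010/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // HTML status page: uptime, live/dead servers, ZooKeeper info, etc.
            }
        }
    }
}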

Point out the correct statement.

The Hadoop framework publishes the job flow status to an internally running web server on the master nodes of the Hadoop cluster
Excellent! Your Answer is Correct. Explanation: The web interface for the Hadoop Distributed File System (HDFS) shows information about the NameNode itself. A configuration sketch follows the options below.
Each incoming file is broken into 32 MB blocks by default
Data blocks are replicated across different nodes in the cluster to ensure a low degree of fault tolerance
None of the Options is Correct
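For contrast with the 32 MB option above, the sketch below reads the block size and replication settings from the Hadoop configuration and shows a per-file override. It assumes a reachable HDFS, and the path /tmp/example.dat is hypothetical; the stock defaults are 128 MB blocks (64 MB in Hadoop 1.x) and 3 replicas per block, and it is this replication across nodes that gives HDFS its high degree of fault tolerance.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSettingsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Cluster-wide settings (falling back to the usual Hadoop 2.x defaults if unset).
        System.out.println("dfs.blocksize   = " + conf.get("dfs.blocksize", "134217728")); // 128 MB
        System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));

        // Per-file override: create a file with its own replication factor and block size.
        FileSystem fs = FileSystem.get(conf);
        fs.create(new Path("/tmp/example.dat"),   // hypothetical path
                  true,                           // overwrite if it exists
                  4096,                           // I/O buffer size in bytes
                  (short) 2,                      // replication factor for this file
                  256L * 1024 * 1024)             // 256 MB block size for this file
          .close();
    }
}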