Which of the following scenarios may not be a good fit for HDFS?
HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file
Correct. Explanation: HDFS follows a write-once-read-many model, so it does not support multiple or simultaneous writers to the same file (a sketch follows the options below). HDFS can, by contrast, be used for storing archive data, since it allows storing data on low-cost commodity hardware while ensuring a high degree of fault tolerance.
HDFS is suitable for storing data related to applications requiring low latency data access
None of the options is correct
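
The single-writer rule can be observed from the Hadoop Java client. The sketch below is illustrative and not part of the original quiz: the class name SingleWriterDemo and the path /tmp/demo.txt are hypothetical, and it assumes a reachable HDFS cluster with append enabled. While the first writer holds the file's lease, a second attempt to open the same file for writing should be rejected by the NameNode (typically surfacing as an AlreadyBeingCreatedException wrapped in a RemoteException).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SingleWriterDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/tmp/demo.txt");       // hypothetical path

            FSDataOutputStream first = fs.create(p);  // first writer acquires the lease
            first.writeBytes("writer one\n");

            try {
                fs.append(p);                         // simultaneous second writer
            } catch (Exception e) {
                // Expected: the NameNode refuses a second writer on the same file
                System.out.println("Second writer rejected: " + e.getMessage());
            } finally {
                first.close();
            }
        }
    }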

Point out the wrong statement.
DataNode is aware of the files to which the blocks stored on it belong
Correct. Explanation: It is the NameNode, not the DataNode, that knows which file each block belongs to; a DataNode simply stores and serves blocks.
User data is stored on the local file system of DataNodes
Block Report from each DataNode contains a list of all the blocks that are stored on that DataNode
Replication Factor can be configured at a cluster level (Default is set to 3) and also at a file level
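
As a hedged illustration of the last option (not part of the original quiz): the cluster-level default comes from the dfs.replication property (3 unless overridden in hdfs-site.xml), and a per-file value can be set through the FileSystem API or the hdfs dfs -setrep command. The class name ReplicationDemo and the path /tmp/report.csv are hypothetical, and an existing file on a running cluster is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Cluster-level default, normally set in hdfs-site.xml
            System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));

            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/tmp/report.csv");  // hypothetical file
            fs.setReplication(p, (short) 2);       // file-level override
            System.out.println("file replication = "
                    + fs.getFileStatus(p).getReplication());
        }
    }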

Point out the correct statement.
DataNode is the slave/worker node and holds the user data in the form of Data Blocks
Correct. Explanation: The DataNode is the slave/worker node and stores user data as blocks; there can be any number of DataNodes in a Hadoop cluster. (By contrast, the default block size is 128 MB in Hadoop 2.x and later, not 32 MB, and replication is meant to ensure a high, not low, degree of fault tolerance; a sketch follows the options below.)
Each incoming file is broken into 32 MB blocks by default
Data blocks are replicated across different nodes in the cluster to ensure a low degree of fault tolerance
None of the options is correct
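
To make the block-and-replica picture concrete, the sketch below (illustrative, not part of the original quiz) lists each block of a file together with the DataNodes holding its replicas. The class name BlockLayoutDemo and the path /tmp/big.log are hypothetical, and an existing file on a running cluster is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLayoutDemo {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus st = fs.getFileStatus(new Path("/tmp/big.log")); // hypothetical
            BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());
            for (BlockLocation b : blocks) {
                // Each block is held by several DataNodes (replication factor, default 3)
                System.out.println("offset " + b.getOffset()
                        + " length " + b.getLength()
                        + " hosts " + String.join(",", b.getHosts()));
            }
        }
    }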