HDFS, by default, replicates each data block _____ times on different nodes and on at least ____ racks.

3, 2  (Correct)
1, 2
2, 3
All options are correct

Explanation: HDFS has a simple yet robust architecture that was explicitly designed for data reliability in the face of faults and failures in disks, nodes, and networks. By default each block is replicated three times, and the rack-aware placement policy spreads those replicas across at least two racks.
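The default replication factor above is controlled in `hdfs-site.xml`. A minimal sketch, assuming stock Hadoop property names (the value shown, 3, is the shipped default, so this entry is normally only needed to change it):

```xml
<configuration>
  <!-- Number of replicas HDFS keeps for each block; 3 is the default. -->
  <!-- The rack-aware placement policy then spreads these replicas
       across at least two racks. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

The replication factor can also be overridden per file after the fact with `hdfs dfs -setrep`.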

Point out the wrong statement.

HDFS is designed to support small files only  (Correct — this is the wrong statement)
Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously
NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog
None of the options is correct

Explanation: HDFS is designed to support very large files, not small files; the other two statements accurately describe how the NameNode maintains its metadata.
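The true statements above about FsImage and EditLog redundancy correspond to a NameNode configuration option: pointing `dfs.namenode.name.dir` at more than one directory makes the NameNode write every metadata update to all of them synchronously. A minimal sketch, assuming stock Hadoop property names (the directory paths are illustrative, not defaults):

```xml
<configuration>
  <!-- Comma-separated list of directories; the NameNode keeps a full
       synchronously updated copy of the FsImage and EditLog in each one.
       Placing them on separate disks (or an NFS mount) guards the
       metadata against a single-disk failure. -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
  </property>
</configuration>
```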