HDFS, by default, replicates each data block _____ times on different nodes and on at least ____ racks.

3, 2
Excellent! Your Answer is Correct. Explanation: HDFS has a simple yet robust architecture that was explicitly designed for data reliability in the face of faults and failures in disks, nodes, and networks. (A short client-side sketch follows the options below.)
1, 2
2, 3
All Options are Correct
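As a rough illustration of the default replication behaviour, the following is a minimal sketch using the Hadoop client API. It assumes a Hadoop client classpath and an existing HDFS file at a hypothetical path; it reads the `dfs.replication` setting (which defaults to 3) and requests three replicas for one file, which the rack-aware placement policy then spreads across at least two racks.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // dfs.replication defaults to 3 replicas per block.
        System.out.println("dfs.replication = " + conf.getInt("dfs.replication", 3));

        FileSystem fs = FileSystem.get(conf);
        // Request 3 replicas for one file; /user/example/data.txt is a hypothetical path.
        fs.setReplication(new Path("/user/example/data.txt"), (short) 3);
        fs.close();
    }
}
```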

Point out the wrong statement.

HDFS is designed to support small files only
Excellent! Your Answer is Correct. Explanation: HDFS is designed to support very large files. (The FsImage/EditLog redundancy described in the other options is configurable; see the sketch after the options.)
Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously
NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog
None of the Options is Correct
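For the metadata-redundancy options above: in stock Hadoop this is driven by the `dfs.namenode.name.dir` property, which accepts a comma-separated list of directories; the NameNode then applies every FsImage and EditLog update to all of them synchronously. The sketch below only sets and parses such a value to show the shape of the configuration; the directory paths are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;

public class NameNodeDirsSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Several directories (ideally on separate disks or a remote NFS mount)
        // give the NameNode redundant, synchronously updated copies of its
        // FsImage and EditLog. The paths here are hypothetical.
        conf.set("dfs.namenode.name.dir",
                 "/data/1/dfs/nn,/data/2/dfs/nn,/mnt/nfs/dfs/nn");

        for (String dir : conf.getStrings("dfs.namenode.name.dir")) {
            System.out.println("Metadata directory: " + dir);
        }
    }
}
```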

Point out the correct statement.

The HDFS architecture is compatible with data rebalancing schemes
Excellent! Your Answer is Correct. Explanation: A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. (A sketch of such a free-space check follows the options below.)
Data blocks support storing a copy of data at a particular instant of time
HDFS currently supports snapshots
None of the Options is Correct
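The rebalancing idea in the explanation can be sketched with the public HDFS client API: list the DataNodes and flag any whose free space falls below a threshold, after which an operator (or the `hdfs balancer` tool) would move blocks off them. This is a minimal sketch, not the balancer's actual algorithm, and the 10% threshold is an arbitrary assumption.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class RebalanceCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (!(fs instanceof DistributedFileSystem)) {
            System.err.println("Not an HDFS filesystem");
            return;
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        double threshold = 0.10;  // hypothetical 10% free-space threshold
        for (DatanodeInfo dn : dfs.getDataNodeStats()) {
            double freeRatio = (double) dn.getRemaining() / dn.getCapacity();
            if (freeRatio < threshold) {
                // A rebalancing scheme would now move blocks from this
                // DataNode to less-utilized ones.
                System.out.printf("%s is below threshold: %.1f%% free%n",
                        dn.getHostName(), freeRatio * 100);
            }
        }
        fs.close();
    }
}
```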