HDFS, by default, replicates each data block _____ times on different nodes and on at least ____ racks.

3, 2
1, 2
2, 3
All Options are Correct

Answer: 3, 2
Explanation: HDFS has a simple yet robust architecture that was explicitly designed for data reliability in the face of faults and failures in disks, nodes, and networks.
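The replication factor behind this answer is an ordinary cluster setting. As an illustrative sketch, the real Hadoop property `dfs.replication` could be set in hdfs-site.xml like this (the value shown is the stock default; with rack awareness the NameNode then spreads the replicas across at least two racks):

```xml
<!-- hdfs-site.xml (fragment) -->
<property>
  <name>dfs.replication</name>
  <!-- 3 is the Hadoop default: typically two replicas on one rack
       and the third on a different rack -->
  <value>3</value>
</property>
```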

Point out the wrong statement.

HDFS is designed to support small files only
Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously
NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog
None of the Options is Correct

Answer: HDFS is designed to support small files only
Explanation: HDFS is designed to support very large files.
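The "multiple copies of the FsImage and EditLog" option corresponds to a real NameNode setting: `dfs.namenode.name.dir` accepts a comma-separated list of directories, and the NameNode keeps a synchronously updated copy of its metadata in each. A minimal illustrative fragment (the directory paths below are made up):

```xml
<!-- hdfs-site.xml (fragment) -->
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- each listed directory receives a full copy of the FsImage and
       EditLog, updated synchronously; these paths are illustrative -->
  <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
</property>
```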

Point out the correct statement.

The HDFS architecture is compatible with data rebalancing schemes
Data blocks support storing a copy of data at a particular instant of time
HDFS currently supports snapshots

None of the Options is Correct

Answer: The HDFS architecture is compatible with data rebalancing schemes
Explanation: A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold.
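The rebalancing idea in the explanation can be sketched as a toy selection rule. This is not the actual HDFS Balancer implementation; the function name and threshold are invented for illustration. The sketch flags DataNodes whose utilization deviates from the cluster average by more than a threshold, which is the same notion the real `hdfs balancer` tool works with:

```python
def rebalance_candidates(usage: dict[str, float], threshold: float = 0.10):
    """Toy sketch of a balancer's selection rule (illustrative only).

    A DataNode is a move *source* if its utilization exceeds the cluster
    average by more than `threshold`, and a move *target* if it falls
    below the average by more than `threshold`.
    """
    avg = sum(usage.values()) / len(usage)
    sources = [node for node, u in usage.items() if u > avg + threshold]
    targets = [node for node, u in usage.items() if u < avg - threshold]
    return sources, targets
```

A balancer would then move blocks from the `sources` list to the `targets` list until every node sits within the threshold band around the average.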

The HDFS client software implements __________ checking on the contents of HDFS files.

checksum
metastore
parity
None of the Options is Correct

Answer: checksum
Explanation: When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace.
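The per-block checksum idea from the explanation can be sketched in a few lines. This is an illustration only: HDFS itself computes CRC-based checksums over fixed-size chunks on write and re-verifies them on read, but the function names and block handling below are simplified inventions, not the HDFS client API:

```python
import zlib

def block_checksums(data: bytes, block_size: int) -> list[int]:
    """Compute a CRC32 checksum per block, mimicking (in spirit) how an
    HDFS client checksums file contents on write."""
    return [
        zlib.crc32(data[i:i + block_size])
        for i in range(0, len(data), block_size)
    ]

def verify(data: bytes, checksums: list[int], block_size: int) -> bool:
    """On read, recompute the checksums and compare with the stored ones;
    a mismatch signals a corrupted block."""
    return block_checksums(data, block_size) == checksums
```

In real HDFS, the stored checksums live in a hidden sidecar file in the same namespace, as the explanation says; on a mismatch the client fetches the block from a different replica.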