__________ is the primary interface for a user to describe a MapReduce job to the Hadoop framework for execution.
JobConf
Excellent! Your Answer is Correct. JobConf is typically used to specify the Mapper, combiner (if any), Partitioner, Reducer, InputFormat, OutputFormat and OutputCommitter implementations. A minimal driver sketch follows the options below.
JobConfig
JobConfiguration
All Options are Correct
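The following is a minimal driver sketch, not taken from the quiz source, assuming the classic org.apache.hadoop.mapred API. It wires the built-in IdentityMapper, IdentityReducer and HashPartitioner into a JobConf to show where the Mapper, combiner, Partitioner, Reducer, InputFormat and OutputFormat are declared; the class name IdentityJobDriver and the job name are illustrative choices only.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.HashPartitioner;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    // Illustrative driver; class name and job name are hypothetical.
    public class IdentityJobDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(IdentityJobDriver.class);
            conf.setJobName("identity-passthrough");

            // JobConf is where the Mapper, combiner (if any), Partitioner,
            // Reducer, InputFormat and OutputFormat implementations are declared.
            conf.setMapperClass(IdentityMapper.class);
            conf.setCombinerClass(IdentityReducer.class);
            conf.setPartitionerClass(HashPartitioner.class);
            conf.setReducerClass(IdentityReducer.class);
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);

            // TextInputFormat yields <LongWritable, Text> records, which the
            // identity mapper and reducer pass through unchanged.
            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(Text.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }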

Point out the wrong statement:
All Options are Correct
Excellent! Your Answer is Correct. When the number of reduce-tasks is set to zero, the outputs of the map-tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). A map-only job sketch follows the options below.
It is legal to set the number of reduce-tasks to zero if no reduction is desired
The outputs of the map-tasks go directly to the FileSystem
The MapReduce framework does not sort the map-outputs before writing them out to the FileSystem
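Below is a hedged sketch of a map-only job, again assuming the classic org.apache.hadoop.mapred API. It illustrates that setting the number of reduce tasks to zero is legal, and that map output then lands directly in the FileSystem path given to setOutputPath(Path); the class name MapOnlyDriver is hypothetical.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;

    // Illustrative map-only driver; class name and job name are hypothetical.
    public class MapOnlyDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MapOnlyDriver.class);
            conf.setJobName("map-only");
            conf.setMapperClass(IdentityMapper.class);

            // Zero reduce tasks is legal: the sort and reduce phases are skipped,
            // and each map task writes its output directly to the FileSystem,
            // under the path given to setOutputPath(Path).
            conf.setNumReduceTasks(0);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }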

Point out the wrong statement:
All Options are Correct
Excellent! Your Answer is Correct. All intermediate values associated with a given output key are subsequently grouped by the framework and passed to the Reducer(s) to determine the final output. A partitioner sketch follows the options below.
The Mapper outputs are sorted and then partitioned per Reducer
The total number of partitions is the same as the number of reduce tasks for the job
The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value) format
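A minimal custom Partitioner sketch (classic org.apache.hadoop.mapred API) is shown below to illustrate that the framework calls getPartition with numPartitions equal to the configured number of reduce tasks, so the total number of partitions matches the reduce-task count. FirstCharPartitioner is an illustrative name, not part of the quiz.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Illustrative partitioner; the class name is hypothetical.
    public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {

        public void configure(JobConf job) {
            // No per-job configuration is needed for this sketch.
        }

        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // numPartitions equals the number of reduce tasks for the job, so
            // every key is routed to exactly one reduce partition; all values
            // for that key are later grouped and handed to the same Reducer.
            if (key.getLength() == 0) {
                return 0;
            }
            return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
        }
    }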

Point out the wrong statement.
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner.
Excellent! Your Answer is Correct. A Mapper/Reducer sketch follows the options below.
The MapReduce framework operates exclusively on <key, value> pairs.
Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods.
None of the Options is Correct
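The standard word-count example, sketched here against the classic org.apache.hadoop.mapred API, shows an application implementing the Mapper and Reducer interfaces to provide the map and reduce methods, with both phases operating purely on <key, value> pairs.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCount {

        // map: <byte offset, line> -> <word, 1>
        public static class TokenizerMapper extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output,
                            Reporter reporter) throws IOException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    output.collect(word, ONE);
                }
            }
        }

        // reduce: <word, [1, 1, ...]> -> <word, count>
        public static class SumReducer extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {

            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> output,
                               Reporter reporter) throws IOException {
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                output.collect(key, new IntWritable(sum));
            }
        }
    }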

Point out the correct statement.
The Hadoop framework publishes the job flow status to an internally running web server on the master nodes of the Hadoop cluster.
Excellent! Your Answer is Correct. A configuration sketch follows the options below.
Each incoming file is broken into 32 MB blocks by default
Data blocks are replicated across different nodes in the cluster to ensure a low degree of fault tolerance
None of the Options is Correct
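As a rough illustration of the block-size and replication settings touched on by the options above, the sketch below reads the standard HDFS properties dfs.blocksize and dfs.replication through the Hadoop Configuration API. The fallback defaults shown (128 MB blocks, replication factor 3) are the usual out-of-the-box values for recent Hadoop releases, not figures taken from the quiz, and the class name is hypothetical.

    import org.apache.hadoop.conf.Configuration;

    // Illustrative helper; class name is hypothetical.
    public class HdfsDefaults {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // dfs.blocksize and dfs.replication are standard HDFS settings; the
            // fallback values used here are the usual defaults, not quiz facts.
            long blockSizeBytes = conf.getLong("dfs.blocksize", 128L * 1024 * 1024);
            int replication = conf.getInt("dfs.replication", 3);

            System.out.println("Block size (bytes): " + blockSizeBytes);
            System.out.println("Replication factor: " + replication);
        }
    }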