The standard output (stdout) and error (stderr) streams of the task are read by the TaskTracker and logged to:

${HADOOP_LOG_DIR}/userlogs
Correct. The child JVM always has its current working directory added to java.library.path and LD_LIBRARY_PATH.
${HADOOP_LOG_DIR}/user
${HADOOP_LOG_DIR}/logs
None of the options is correct

__________ is the primary interface for a user to describe a MapReduce job to the Hadoop framework for execution.

JobConf
Correct. JobConf is typically used to specify the Mapper, combiner (if any), Partitioner, Reducer, InputFormat, OutputFormat, and OutputCommitter implementations.
JobConfig
JobConfiguration
All Options are Correct

Point out the wrong statement:

All Options are Correct
Correct. Outputs of the map tasks go directly to the FileSystem, into the output path set by setOutputPath(Path).
It is legal to set the number of reduce-tasks to zero if no reduction is desired
The outputs of the map-tasks go directly to the FileSystem
The MapReduce framework does not sort the map outputs before writing them out to the FileSystem
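The statements above can be illustrated with a minimal pure-Python sketch (not Hadoop itself, and the function names are hypothetical): when the number of reduce tasks is zero, the mapper's outputs are written straight to the output file system, with no sort or shuffle step in between.

```python
# Sketch of a map-only job: with zero reduce tasks, map outputs go
# directly to the output "FileSystem", unsorted (hypothetical helpers,
# imitating the behavior described above, not Hadoop's actual code).

def run_map_only_job(records, map_fn):
    """Apply map_fn to each record and emit its outputs as-is, unsorted."""
    output = []
    for record in records:
        output.extend(map_fn(record))
    return output  # written directly; no sorting, no reducer

def tokenize(line):
    # Hypothetical mapper: emit a (word, 1) pair for each word in a line.
    return [(word, 1) for word in line.split()]

result = run_map_only_job(["b a", "c"], tokenize)
print(result)  # [('b', 1), ('a', 1), ('c', 1)] -- emission order preserved
```

Note that the output keeps the mapper's emission order ('b' before 'a'); a job with reducers would have sorted these pairs by key first.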

Point out the wrong statement:

All Options are Correct
Correct. All intermediate values associated with a given output key are subsequently grouped by the framework and passed to the Reducer(s) to determine the final output.
The Mapper outputs are sorted and then partitioned per Reducer
The total number of partitions is the same as the number of reduce tasks for the job
The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value) format
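The partitioning rule mentioned above can be sketched in a few lines. This is a Python imitation of the logic of Hadoop's default HashPartitioner (which in Java computes `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`), showing that every key lands in one of exactly `numReduceTasks` partitions; the variable names here are illustrative, not Hadoop's.

```python
# Imitation of the default hash-partitioning rule: mask the hash to a
# non-negative value, then take the remainder, so the partition index
# always falls in [0, num_reduce_tasks).

def get_partition(key, num_reduce_tasks):
    return (hash(key) & 0x7FFFFFFF) % num_reduce_tasks

num_reduce_tasks = 4
keys = ["apple", "banana", "cherry", "date", "elderberry"]
partitions = {get_partition(k, num_reduce_tasks) for k in keys}

# Every partition index is valid, and there can never be more
# partitions than reduce tasks.
assert all(0 <= p < num_reduce_tasks for p in partitions)
assert len(partitions) <= num_reduce_tasks
```

All records sharing a key hash to the same partition index, which is why the total number of partitions equals the number of reduce tasks for the job.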

Point out the wrong statement:

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner.
Correct.
The MapReduce framework operates exclusively on <key, value> pairs.
Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods
None of the options is correct
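The flow described above can be imitated end to end in plain Python (a sketch, not Hadoop): the mapper emits <key, value> pairs, the framework sorts and groups them by key, and the reducer turns each group into a final pair. The word-count functions below are illustrative stand-ins for user-supplied map and reduce methods.

```python
# Minimal imitation of the MapReduce <key, value> flow:
# map -> sort/group by key -> reduce (hypothetical helper names).
from itertools import groupby
from operator import itemgetter

def word_count_map(line):
    # Mapper: each input record becomes zero or more (key, value) pairs.
    for word in line.split():
        yield (word, 1)

def word_count_reduce(word, counts):
    # Reducer: one output pair per distinct key.
    yield (word, sum(counts))

def run_job(lines):
    intermediate = [pair for line in lines for pair in word_count_map(line)]
    # Shuffle/sort: group all values that share a key.
    intermediate.sort(key=itemgetter(0))
    output = []
    for word, group in groupby(intermediate, key=itemgetter(0)):
        output.extend(word_count_reduce(word, (v for _, v in group)))
    return output

print(run_job(["the cat", "the dog"]))  # [('cat', 1), ('dog', 1), ('the', 2)]
```

Every stage consumes and produces pairs, which is the sense in which the framework operates exclusively on <key, value> pairs.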