Point out the wrong statement.
a) Spark is intended to replace the Hadoop stack
b) Spark was designed to read and write data from and to HDFS, as well as other storage systems
c) Hadoop users who have already deployed, or are planning to deploy, Hadoop YARN can simply run Spark on YARN
d) None of the options is correct

Answer: a
Explanation: Spark is intended to enhance, not replace, the Hadoop stack. The other statements are true: Spark was designed to read and write data from and to HDFS as well as other storage systems, and users with an existing or planned Hadoop YARN deployment can simply run Spark on YARN.