openedx — command to run the built-in "compute pi" Hadoop job

fzwojiic · posted 2021-05-29 in Hadoop

I am trying to install Open edX Insights on an Azure instance. The LMS and Insights run on the same box. As part of the installation I have already installed Hadoop, Hive, etc. via the YAML scripts. The next step is to test the Hadoop installation: the documentation asks to compute the value of pi, and for that it gives the following command:

hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100

But after I run this command, it fails with:

hadoop@MillionEdx:~$ hadoop jar hadoop*/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar pi 2 100
Unknown program 'hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar' chosen.

```
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
```

I have tried several things, such as passing `Pi` instead, but it is never recognized as `pi`. Please suggest a solution. Thanks in advance.

bqjvbblv1#

You need to check which Java version is compatible with Hadoop MapReduce 2.7.2.
The error may be caused by a jar file that does not match your installed Java version.
