java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries (Spark in Eclipse on Windows 7)
I am not able to run a simple Spark job from Eclipse installed on Windows 7. The Spark core dependency has been added.
Error:
16/02/26 18:29:33 INFO SparkContext: Created broadcast 0 from textFile at FrameDemo.scala:13
16/02/26 18:29:34 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
    at com.org.SparkDF.FrameDemo$.main(FrameDemo.scala:14)
    at com.org.SparkDF.FrameDemo.main(FrameDemo.scala)
Here is a good explanation of your problem, together with the solution.
Set your HADOOP_HOME environment variable at the OS level, or programmatically:
System.setProperty("hadoop.home.dir", "full path to the folder containing winutils");
Enjoy!
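For instance, a minimal Scala sketch of the programmatic variant (the C:\winutils location, the WinutilsDemo name and the File.txt input are placeholders, not from the original post):

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsDemo {
  def main(args: Array[String]): Unit = {
    // Must run before the first Hadoop/Spark class touches the filesystem.
    // C:\winutils is an assumed location; it must contain bin\winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\winutils")

    val conf = new SparkConf().setAppName("WinutilsDemo").setMaster("local[*]")
    val sc = new SparkContext(conf)
    println(sc.textFile("File.txt").count())  // File.txt is a placeholder input path
    sc.stop()
  }
}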
Follow these steps:
Create a bin folder in any directory.
Download winutils.exe and place it in that bin directory.
Now add the following to your code, where PATH/TO/THE/DIR is the directory that contains the bin folder: System.setProperty("hadoop.home.dir", "PATH/TO/THE/DIR");
1) Download winutils.exe from https://github.com/steveloughran/winutils
2) Create the directory C:\winutils\bin on Windows.
3) Copy winutils.exe into that bin folder.
4) Set the property in your code: System.setProperty("hadoop.home.dir", "file:///C:/winutils/");
5) Create a folder C:\temp and give it 777 permissions.
6) Add the config property to the SparkSession: .config("spark.sql.warehouse.dir", "file:///C:/temp")
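As an illustration of steps 4 to 6, a minimal Scala sketch. One deliberate assumption here: hadoop.home.dir is given the plain path C:\winutils rather than the file:/// form, since it expects a filesystem path whose bin folder holds winutils.exe.

import org.apache.spark.sql.SparkSession

object WinutilsSessionDemo {
  def main(args: Array[String]): Unit = {
    // Step 4 (assumed plain-path form): the folder whose bin\ contains winutils.exe
    System.setProperty("hadoop.home.dir", "C:\\winutils")

    // Steps 5-6: the warehouse dir points at the writable C:\temp folder
    val spark = SparkSession.builder()
      .appName("WinutilsSessionDemo")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "file:///C:/temp")
      .getOrCreate()

    spark.range(5).show()  // small action just to confirm the session starts
    spark.stop()
  }
}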
On Windows 10, you should add two different entries.
(1) Under System variables, add a new variable named HADOOP_HOME whose value is the path (e.g. C:\Hadoop).
(2) Add/append a new entry "C:\Hadoop\bin" to the "Path" variable.
The above worked for me.
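To confirm that both entries are visible to the JVM that launches Spark, a small Scala sketch (the C:\Hadoop location is just the example value from above):

object HadoopEnvCheck {
  def main(args: Array[String]): Unit = {
    // (1) HADOOP_HOME should point at the Hadoop folder, e.g. C:\Hadoop
    println("HADOOP_HOME = " + sys.env.getOrElse("HADOOP_HOME", "<not set>"))

    // (2) Path should contain the bin folder, e.g. C:\Hadoop\bin
    val path = sys.env.getOrElse("Path", sys.env.getOrElse("PATH", ""))
    val hasHadoopBin = path.split(java.io.File.pathSeparator)
      .exists(_.toLowerCase.endsWith("hadoop\\bin"))
    println("Hadoop bin on Path: " + hasHadoopBin)
  }
}

Note that a process only sees the values that existed when it was started, so restart Eclipse (or open a new cmd window) after changing the system variables.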
You can also download winutils.exe from GitHub:
https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin
Replace hadoop-2.7.1 with the version you need.
If you do not have access rights to the environment variable settings
on your machine, simply add the below line to your code:
System.setProperty("hadoop.home.dir", "D:\\hadoop");
If you see the problem below:
ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
then follow these steps:
Download winutils.exe (the win-alpha/winutils.exe build) and place it under C:\Hadoop\bin, then set:
System.setProperty("hadoop.home.dir", "C:\\Hadoop");
I was getting the same problem while running unit tests. I found this workaround:
The following workaround makes this message go away:
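A minimal Scala sketch of such a workaround (the details are an assumption rather than a quote from the linked issue): point hadoop.home.dir at the test working directory and create an empty bin\winutils.exe there so the lookup succeeds.

import java.io.File

object WinutilsTestWorkaround {
  // Call once, before any Spark/Hadoop code runs in the test JVM.
  def apply(): Unit = {
    val workingDir = new File(".")
    // Point Hadoop at the current working directory...
    System.setProperty("hadoop.home.dir", workingDir.getAbsolutePath)
    // ...and create an empty bin/winutils.exe there so the existence check passes.
    new File(workingDir, "bin").mkdirs()
    new File(workingDir, "bin/winutils.exe").createNewFile()
  }
}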
来自:https://issues.cloudera.org/browse/DISTRO-544
Setting the Hadoop_Home environment variable in the system properties did not work for me. But this did:
- Set Hadoop_Home in the Environment tab of the Eclipse Run Configurations.
- Follow the 'Windows environment setup' from here.
I also faced a similar problem, with the following details: Java 1.8.0_121, Spark spark-1.6.1-bin-hadoop2.6, Windows 10, and Eclipse Oxygen. When I ran WordCount.java in Eclipse using HADOOP_HOME as a system variable, as mentioned in the previous post, it did not work. What worked for me was:
System.setProperty("hadoop.home.dir", "PATH/TO/THE/DIR");
where PATH/TO/THE/DIR/bin contains winutils.exe. This works whether you run it in Eclipse as a Java application or from cmd via spark-submit, using a command of the form:
spark-submit --class groupid.artifactid.classname --master local[2] <path to the jar file created using maven> <path to a demo test file> <path to the output directory>
Example: go to the bin location of your Spark installation (Spark/home/location/bin) and execute spark-submit as described above:
D:\BigData\spark-2.3.0-bin-hadoop2.7\bin>spark-submit --class com.bigdata.abdus.sparkdemo.WordCount --master local[1] D:\BigData\spark-quickstart\target\spark-quickstart-0.0.1-SNAPSHOT.jar D:\BigData\spark-quickstart\wordcount.txt
Beyond setting things up in Windows as described above, there is one tricky detail: the drive letter in your paths must be uppercase, for example "C:\...".