I'm new to Apache Nutch, so I followed the Apache Nutch tutorial on the Apache wiki.
My environment is:
Windows 10
JDK 1.8
Cygwin 2.9.0
Apache Nutch 1.14
Solr 6.6.2
winutils for hadoop-2.8.1
I have already set JAVA_HOME and HADOOP_HOME.
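For reference, the two variables are set in my Cygwin ~/.bashrc roughly like this (the install paths below are placeholders, not my exact locations):

# ~/.bashrc in Cygwin -- example paths, adjust to the actual install locations
export JAVA_HOME="/cygdrive/c/Program Files/Java/jdk1.8.0_152"
export HADOOP_HOME="/cygdrive/c/hadoop"   # bin/ holds winutils.exe from the hadoop-2.8.1 winutils build
export PATH="$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin"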
When I run the crawl command in Cygwin:
./crawl -s urls mycrawdir 3
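(The urls directory holds my seed list, created as the tutorial describes; the seed URL below is just the tutorial's example:)

# create the seed list per the Nutch tutorial
mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt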
I get the following error:
Injecting seed URLs
/home/apache-nutch-1.14/bin/nutch inject mycrawdir/crawldb urls
Injector: starting at 2018-01-19 20:55:48
Injector: crawlDb: mycrawdir/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.lang.UnsatisfiedLinkError: <br>org.apache.hadoop.io. nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:6 09)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskCheck er.java:187)
at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:17 4)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChange d(LocalDirAllocator.java:285)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPa thForWrite(LocalDirAllocator.java:344)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirA llocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirA llocator.java:131)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirA llocator.java:115)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDist ributedCacheManager.java:125)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.jav a:163)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java :731)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitt er.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma tion.java:1746)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.nutch.crawl.Injector.inject(Injector.java:417)
at org.apache.nutch.crawl.Injector.run(Injector.java:563)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:528)
Error running:
/home/apache-nutch-1.14/bin/nutch inject mycrawdir/crawldb urls
Failed with exit value 1.