Nutch 2.3.1 fetches only the seed URLs

lbsnaicq · asked 2021-05-29 · Hadoop

I need to crawl all the links (to maximum depth) of a few URLs. For this I am using Apache Nutch 2.3.1 together with Hadoop and HBase. Below is the nutch-site.xml used for this purpose.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
   <name>http.agent.name</name>
   <value>crawler</value>
</property>
<property>
   <name>storage.data.store.class</name>
   <value>org.apache.gora.hbase.store.HBaseStore</value>
</property>
<property>
  <name>plugin.includes</name>
 <value>protocol-httpclient|protocol-http|indexer-solr|urlfilter-regex|parse-(html|tika)|index-(basic|more|urdu)|urlnormalizer-(pass|regex|basic)|scoring-opic</value>
</property>
<property>
<name>parser.character.encoding.default</name>
<value>utf-8</value>
</property>
<property>
  <name>http.robots.403.allow</name>
  <value>true</value>
</property>
<property>
  <name>db.max.outlinks.per.page</name>
  <value>-1</value>
</property>
<property>
  <name>http.robots.agents</name>
  <value>crawler,*</value>
</property>

<!-- language-identifier plugin properties -->

<property>
  <name>lang.ngram.min.length</name>
  <value>1</value>
</property>

<property>
  <name>lang.ngram.max.length</name>
  <value>4</value>
</property>

<property>
  <name>lang.analyze.max.length</name>
  <value>2048</value>
</property>

<property>
  <name>lang.extraction.policy</name>
  <value>detect,identify</value>
</property>

<property>
  <name>lang.identification.only.certain</name>
  <value>true</value>
</property>

<!-- Language properties ends here -->
<property> 
         <name>http.timeout</name> 
         <value>20000</value> 
</property> 
<!-- These tags were added because the number of crawled documents had started to decrease -->
<property>
 <name>fetcher.max.crawl.delay</name>
 <value>10</value>
</property>
<property>
  <name>generate.max.count</name>
  <value>10000</value>
</property>

<property>
 <name>db.ignore.external.links</name>
 <value>true</value>
</property>
</configuration>
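Before digging into crawl behaviour, it is worth making sure the file above is well-formed XML: the original version was missing a `</property>` after `http.robots.403.allow`, which silently corrupts the configuration. A minimal sketch (assuming only the Hadoop-style `<configuration>/<property>/<name>/<value>` layout shown above, with a hypothetical helper name) that parses such a file into a dict and fails loudly on malformed XML:

```python
# Sanity-check a Hadoop-style configuration file such as nutch-site.xml:
# parse it and collect name -> value pairs. A missing </property>, like
# the one originally around http.robots.403.allow, makes the parse fail.
import xml.etree.ElementTree as ET

def load_conf(xml_text):
    """Return {property-name: value} from a <configuration> document."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

sample = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>db.ignore.external.links</name>
    <value>true</value>
  </property>
</configuration>"""

conf = load_conf(sample)
print(conf["db.ignore.external.links"])  # -> true
```

Running this against the real nutch-site.xml (via `load_conf(open("conf/nutch-site.xml").read())`) is a quick way to catch tag mismatches before starting a crawl.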

When I crawl a few URLs, only the seed URLs are fetched, and the crawl then ends with this message:

GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: false
GeneratorJob: normalizing: false
GeneratorJob: topN: 20
GeneratorJob: finished at 2017-04-21 16:28:35, time elapsed: 00:00:02
GeneratorJob: generated batch id: 1492774111-8887 containing 0 URLs
Generate returned 1 (no new segments created)
Escaping loop: no more URLs to fetch now

A similar problem is described elsewhere, but it applies to version 1.1, and the solution given there did not work for my case.

qltillow

Could you check whether any of the URL-filter regular expressions in your conf/regex-urlfilter.txt are blocking the expected outlinks? The file should end with the catch-all accept rule:


# accept anything else

+.
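The behaviour of these rules can be sketched as follows: patterns are tried top to bottom, the first matching pattern decides ('+' accepts, '-' rejects), and a URL that matches no rule at all is dropped, which is why a missing final `+.` line can leave nothing but the seeds. The rule strings below are illustrative, not the shipped defaults:

```python
# Sketch of regex-urlfilter semantics: rules are applied in order,
# the first match wins ('+' = accept, '-' = reject), and a URL that
# matches no rule is filtered out entirely.
import re

def url_filter(url, rules):
    for sign, pattern in rules:
        if re.search(pattern, url):
            return sign == "+"
    return False  # no rule matched: the URL is dropped

rules = [
    ("-", r"\.(gif|jpg|png|css|js)$"),  # skip images and assets
    ("-", r"[?*!@=]"),                  # skip URLs with query-ish chars
    ("+", r"."),                        # accept anything else
]

print(url_filter("http://example.com/page.html", rules))  # True
print(url_filter("http://example.com/logo.png", rules))   # False
```

Note that without the final `("+", r".")` entry, every outlink would fall through and be rejected.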

Since you have set db.ignore.external.links to true, Nutch will not generate outlinks that point to a different host. You also need to check that the db.ignore.internal.links property in your conf/nutch-default.xml is set to false; otherwise no outlinks will be generated at all. The relevant settings should look like this:

<property>
    <name>db.ignore.internal.links</name>
    <value>false</value>
</property>
<property>
    <name>db.ignore.external.links</name>
    <value>true</value>
</property>

Hope this helps.
