Note: the crawler framework's dependencies come from the open-source author on GitHub; run a local `mvn install` to package the contents as jars into the local Maven repository.
 
 
 
 

webmagic



A scalable crawler framework. It covers the whole lifecycle of a crawler: downloading, URL management, content extraction and persistence. It simplifies the development of a specific crawler.
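
The lifecycle above can be sketched as a simple loop. This is a plain-Java illustration of the idea, not webmagic's actual implementation; all names here are illustrative and the downloader is stubbed:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class CrawlLoopSketch {
    // Illustrative crawl loop: URL management -> downloading -> extraction -> persistence
    public static List<String> crawl(String seed) {
        Deque<String> queue = new ArrayDeque<>();   // URL management
        List<String> store = new ArrayList<>();     // persistence (in-memory here)
        queue.add(seed);
        while (!queue.isEmpty()) {
            String url = queue.poll();
            String page = download(url);            // downloading (stubbed below)
            store.add(extractTitle(page));          // content extraction
            // a real crawler would also push newly discovered links onto the queue
        }
        return store;
    }

    // Stub downloader: a real framework would fetch the page over HTTP
    static String download(String url) {
        return "<title>" + url + "</title>";
    }

    // Toy extractor: pulls the <title> text out of the "page"
    static String extractTitle(String page) {
        return page.replaceAll(".*<title>(.*)</title>.*", "$1");
    }

    public static void main(String[] args) {
        System.out.println(crawl("http://example.com"));
    }
}
```

In webmagic these four stages are pluggable components rather than one loop, which is what makes the core simple but flexible.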

Features:

  • Simple core with high flexibility.
  • Simple API for HTML extraction.
  • Annotation-based POJOs for customizing a crawler, with no configuration files.
  • Multi-thread and distributed crawling support.
  • Easy to integrate.

Install:

Clone the repo and build:

git clone https://github.com/code4craft/webmagic.git
cd webmagic
mvn clean install

Add dependencies to your project:

    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-core</artifactId>
        <version>0.2.0</version>
    </dependency>
    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-extension</artifactId>
        <version>0.2.0</version>
    </dependency>

Get Started:

First crawler:

Write a class that implements PageProcessor:

import java.util.List;

import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.pipeline.ConsolePipeline;
import us.codecraft.webmagic.processor.PageProcessor;

public class OschinaBlogPageProcesser implements PageProcessor {

    private Site site = Site.me().setDomain("my.oschina.net")
       .addStartUrl("http://my.oschina.net/flashsword/blog");

    @Override
    public void process(Page page) {
        List<String> links = page.getHtml().links().regex("http://my\\.oschina\\.net/flashsword/blog/\\d+").all();
        page.addTargetRequests(links);
        page.putField("title", page.getHtml().xpath("//div[@class='BlogEntity']/div[@class='BlogTitle']/h1").toString());
        page.putField("content", page.getHtml().$("div.content").toString());
        page.putField("tags",page.getHtml().xpath("//div[@class='BlogTags']/a/text()").all());
    }

    @Override
    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new OschinaBlogPageProcesser())
             .pipeline(new ConsolePipeline()).run();
    }
}
  • page.addTargetRequests(links)

    Adds the discovered URLs to the queue of pages to crawl.
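
The links are filtered with an ordinary regular expression before being queued. A minimal plain-Java sketch of that filtering step, using only `java.util.regex` (webmagic performs this inside its selector chain; the class and method names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class LinkFilterSketch {
    // Same pattern used in the PageProcessor above: blog-post URLs end in digits
    static final Pattern BLOG_POST =
            Pattern.compile("http://my\\.oschina\\.net/flashsword/blog/\\d+");

    // Keep only the links that match the blog-post pattern
    static List<String> filter(List<String> links) {
        List<String> out = new ArrayList<>();
        for (String link : links) {
            if (BLOG_POST.matcher(link).matches()) {
                out.add(link);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> links = List.of(
                "http://my.oschina.net/flashsword/blog/145796",
                "http://my.oschina.net/flashsword/admin");
        // Only the first URL matches the blog-post pattern
        System.out.println(filter(links));
    }
}
```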

You can also write a crawler in the annotation style:

import java.util.List;

// imports assume the webmagic-extension package layout
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.model.OOSpider;
import us.codecraft.webmagic.model.annotation.ExtractBy;
import us.codecraft.webmagic.model.annotation.TargetUrl;
import us.codecraft.webmagic.pipeline.ConsolePageModelPipeline;

@TargetUrl("http://my.oschina.net/flashsword/blog/\\d+")
public class OschinaBlog {

    @ExtractBy("//title")
    private String title;

    @ExtractBy(value = "div.BlogContent",type = ExtractBy.Type.Css)
    private String content;

    @ExtractBy(value = "//div[@class='BlogTags']/a/text()", multi = true)
    private List<String> tags;

    public static void main(String[] args) {
        OOSpider.create(
                Site.me().addStartUrl("http://my.oschina.net/flashsword/blog"),
                new ConsolePageModelPipeline(), OschinaBlog.class).run();
    }
}
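
Both examples above extract fields with XPath expressions such as `//div[@class='BlogTags']/a/text()`. The same selection can be illustrated with the JDK's built-in `javax.xml.xpath` on a well-formed fragment; webmagic uses its own HTML-tolerant selectors, so this sketch only shows what the expression selects, not how webmagic parses real pages:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathSketch {
    // Apply the tag-extraction XPath from the examples to a small fragment
    public static List<String> extractTags(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate("//div[@class='BlogTags']/a/text()",
                              doc, XPathConstants.NODESET);
            List<String> tags = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                tags.add(nodes.item(i).getNodeValue());
            }
            return tags;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String fragment = "<div class='BlogTags'><a>java</a><a>crawler</a></div>";
        System.out.println(extractTags(fragment)); // [java, crawler]
    }
}
```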

Docs and samples:

The architecture of webmagic (inspired by Scrapy):

(architecture diagram)

Javadocs: http://code4craft.github.io/webmagic/docs/en/

There are some samples in the webmagic-samples package.

License:

Licensed under the Apache License 2.0.

Thanks:

To write webmagic, I referred to the following projects: