Note: the crawler framework's dependencies can be obtained from the author's open-source GitHub repository, built and installed locally, and the resulting jars placed into the local Maven repository.
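A minimal sketch of that local install, assuming a working Git and Maven setup and the standard code4craft/webmagic repository layout:

```shell
# Fetch the framework source from the author's GitHub repository
git clone https://github.com/code4craft/webmagic.git
cd webmagic

# Build the modules and install the jars into the local Maven
# repository (~/.m2), so the webmagic <dependency> entries resolve
mvn clean install -DskipTests
```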
 
 
 
 

webmagic

Readme in Chinese

A scalable crawler framework. It covers the whole lifecycle of a crawler: downloading, URL management, content extraction and persistence. It can simplify the development of a specific crawler.

Features:

  • Simple core with high flexibility.
  • Simple API for HTML extraction.
  • Annotated POJOs to define a crawler, with no configuration needed.
  • Multi-threading and distributed crawling support.
  • Easy to integrate.

Install:

Add dependencies to your pom.xml:

    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-core</artifactId>
        <version>0.2.1</version>
    </dependency>
    <dependency>
        <groupId>us.codecraft</groupId>
        <artifactId>webmagic-extension</artifactId>
        <version>0.2.1</version>
    </dependency>

Get Started:

First crawler:

Write a class that implements PageProcessor:

    import java.util.List;

    import us.codecraft.webmagic.Page;
    import us.codecraft.webmagic.Site;
    import us.codecraft.webmagic.Spider;
    import us.codecraft.webmagic.pipeline.ConsolePipeline;
    import us.codecraft.webmagic.processor.PageProcessor;

    public class OschinaBlogPageProcesser implements PageProcessor {

        private Site site = Site.me().setDomain("my.oschina.net")
                .addStartUrl("http://my.oschina.net/flashsword/blog");

        @Override
        public void process(Page page) {
            // Collect links to other blog posts and queue them for crawling
            List<String> links = page.getHtml().links().regex("http://my\\.oschina\\.net/flashsword/blog/\\d+").all();
            page.addTargetRequests(links);
            // Extract the fields of interest by XPath and CSS selector
            page.putField("title", page.getHtml().xpath("//div[@class='BlogEntity']/div[@class='BlogTitle']/h1").toString());
            page.putField("content", page.getHtml().$("div.content").toString());
            page.putField("tags", page.getHtml().xpath("//div[@class='BlogTags']/a/text()").all());
        }

        @Override
        public Site getSite() {
            return site;
        }

        public static void main(String[] args) {
            Spider.create(new OschinaBlogPageProcesser())
                    .pipeline(new ConsolePipeline()).run();
        }
    }
  • page.addTargetRequests(links)

    Adds the discovered URLs to the crawl queue.
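To illustrate how the regex in `process()` narrows the discovered links, here is a standalone sketch using plain `java.util.regex` (no webmagic dependency; the sample URLs are invented for illustration) that mimics what `regex(...).all()` does to a link list:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class LinkFilterDemo {
    public static void main(String[] args) {
        // Same pattern as in OschinaBlogPageProcesser.process()
        Pattern blogPost = Pattern.compile("http://my\\.oschina\\.net/flashsword/blog/\\d+");

        // Hypothetical links such as a downloaded page might contain
        List<String> links = List.of(
                "http://my.oschina.net/flashsword/blog/145796",
                "http://my.oschina.net/flashsword/blog",      // index page, no numeric id
                "http://my.oschina.net/u/otheruser/blog/99"); // different author

        // Keep only URLs the whole pattern matches
        List<String> matched = links.stream()
                .filter(url -> blogPost.matcher(url).matches())
                .collect(Collectors.toList());

        System.out.println(matched); // [http://my.oschina.net/flashsword/blog/145796]
    }
}
```

Only links to individual posts under the start URL survive the filter, so the spider stays inside the target blog.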

You can also define the crawler with annotations:

    import java.util.List;

    import us.codecraft.webmagic.Site;
    import us.codecraft.webmagic.model.OOSpider;
    import us.codecraft.webmagic.model.annotation.ExtractBy;
    import us.codecraft.webmagic.model.annotation.TargetUrl;
    import us.codecraft.webmagic.pipeline.ConsolePageModelPipeline;

    @TargetUrl("http://my.oschina.net/flashsword/blog/\\d+")
    public class OschinaBlog {

        @ExtractBy("//title")
        private String title;

        @ExtractBy(value = "div.BlogContent", type = ExtractBy.Type.Css)
        private String content;

        @ExtractBy(value = "//div[@class='BlogTags']/a/text()", multi = true)
        private List<String> tags;

        public static void main(String[] args) {
            OOSpider.create(
                    Site.me().addStartUrl("http://my.oschina.net/flashsword/blog"),
                    new ConsolePageModelPipeline(), OschinaBlog.class).run();
        }
    }
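To illustrate the mechanism behind the annotation style, here is a simplified, hypothetical sketch (not webmagic's actual implementation) of how a spider can read extraction rules off a POJO's fields via reflection; the nested `@ExtractBy` here is a stand-in for webmagic's annotation of the same name:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationSketch {

    // Hypothetical stand-in for webmagic's @ExtractBy: carries an extraction expression
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface ExtractBy {
        String value();
    }

    // A page-model POJO, in the spirit of the OschinaBlog example above
    static class BlogModel {
        @ExtractBy("//title")
        String title;
    }

    public static void main(String[] args) {
        // What an OOSpider-like framework does: walk the model's fields and
        // collect the extraction expression attached to each one
        for (Field f : BlogModel.class.getDeclaredFields()) {
            ExtractBy rule = f.getAnnotation(ExtractBy.class);
            if (rule != null) {
                System.out.println(f.getName() + " <- " + rule.value()); // title <- //title
            }
        }
    }
}
```

The framework can then run each collected expression against a downloaded page and assign the result to the matching field, which is why no configuration file is needed.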

Docs and samples:

The architecture of webmagic (inspired by Scrapy)


Javadocs: http://code4craft.github.io/webmagic/docs/en/

There are some samples in webmagic-samples package.

License:

Licensed under the Apache License 2.0.

Thanks:

To write webmagic, I referred to the projects below: