The crawler framework's dependencies themselves are obtained from the open-source author on GitHub; run a local install to package the sources into jars and place them in the local Maven repository.

webmagic

Readme in Chinese

User Manual (Chinese)


A scalable crawler framework. It covers the whole lifecycle of a crawler: downloading, URL management, content extraction and persistence. It simplifies the development of a specific crawler.

Features:

  • Simple core with high flexibility.
  • Simple API for HTML extraction.
  • Annotation-based POJO mapping to customize a crawler, with no configuration required.
  • Multi-threading and distributed crawling support.
  • Easy to integrate.

Install:

Add dependencies to your pom.xml:

<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-core</artifactId>
    <version>0.4.3</version>
</dependency>
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-extension</artifactId>
    <version>0.4.3</version>
</dependency>

WebMagic uses slf4j with the slf4j-log4j12 implementation. If you use your own slf4j implementation, exclude slf4j-log4j12 from the WebMagic dependency, e.g.:

<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-extension</artifactId>
    <version>0.4.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Get Started:

First crawler:

Write a class that implements PageProcessor:

import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.pipeline.ConsolePipeline;
import us.codecraft.webmagic.processor.PageProcessor;

import java.util.List;

public class OschinaBlogPageProcesser implements PageProcessor {

    private Site site = Site.me().setDomain("my.oschina.net");

    @Override
    public void process(Page page) {
        // extract links to other blog posts and add them to the crawl queue
        List<String> links = page.getHtml().links().regex("http://my\\.oschina\\.net/flashsword/blog/\\d+").all();
        page.addTargetRequests(links);
        // extract fields from the current page
        page.putField("title", page.getHtml().xpath("//div[@class='BlogEntity']/div[@class='BlogTitle']/h1").toString());
        page.putField("content", page.getHtml().$("div.content").toString());
        page.putField("tags", page.getHtml().xpath("//div[@class='BlogTags']/a/text()").all());
    }

    @Override
    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new OschinaBlogPageProcesser()).addUrl("http://my.oschina.net/flashsword/blog")
             .addPipeline(new ConsolePipeline()).run();
    }
}

  • page.addTargetRequests(links)

    Adds the extracted links as new URLs to crawl; a multi-threaded variant with an extra pipeline is sketched below.
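
As a rough sketch of how the same processor can be run with several worker threads and an extra pipeline for persistence, the snippet below chains Spider.thread() and a FilePipeline; the runner class name, thread count and output path are illustrative values, not taken from the original README.

import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.pipeline.ConsolePipeline;
import us.codecraft.webmagic.pipeline.FilePipeline;

public class OschinaBlogRunner {

    public static void main(String[] args) {
        Spider.create(new OschinaBlogPageProcesser())
                // seed URL to start crawling from
                .addUrl("http://my.oschina.net/flashsword/blog")
                // print extracted fields to the console
                .addPipeline(new ConsolePipeline())
                // also write each page's fields to plain files (example path)
                .addPipeline(new FilePipeline("/tmp/webmagic/"))
                // crawl with 5 worker threads (example value)
                .thread(5)
                .run();
    }
}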

You can also use the annotation style:

@TargetUrl("http://my.oschina.net/flashsword/blog/\\d+")
public class OschinaBlog {

    @ExtractBy("//title")
    private String title;

    @ExtractBy(value = "div.BlogContent",type = ExtractBy.Type.Css)
    private String content;

    @ExtractBy(value = "//div[@class='BlogTags']/a/text()", multi = true)
    private List<String> tags;

    public static void main(String[] args) {
        OOSpider.create(Site.me(), new ConsolePageModelPipeline(), OschinaBlog.class)
                .addUrl("http://my.oschina.net/flashsword/blog")
                .run();
    }
}
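
Extracted fields ultimately flow through pipelines. As a minimal sketch of how results from the PageProcessor-style crawler above can be consumed (the class name and the printing logic are illustrative, not part of WebMagic), a custom pipeline implements the Pipeline interface:

import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

import java.util.Map;

public class PrintPipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        // resultItems holds every value stored with page.putField(...)
        for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
            System.out.println(task.getUUID() + "\t" + entry.getKey() + ": " + entry.getValue());
        }
    }
}

Register it with addPipeline(new PrintPipeline()) on the Spider, in the same way as ConsolePipeline above.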

Docs and samples:

The architecture of WebMagic is inspired by Scrapy.


Javadocs: http://code4craft.github.io/webmagic/docs/en/

There are some samples in the webmagic-samples package.

License:

Licensed under the Apache 2.0 license.

Contributors:

Thanks to these people for contributing source code, reporting bugs, or suggesting new features:

Thanks:

To write WebMagic, I referred to the projects below:

Mailing list:

https://groups.google.com/forum/#!forum/webmagic-java
