Commit Graph

699 Commits (7586e3d75c62ee2226af349bad30ab665ec3732d)

Author SHA1 Message Date
yihua.huang 7586e3d75c add some tests for github repo downloader 2016-01-19 08:05:53 +08:00
yihua.huang 800f66c4cc Revert "remove some unknown config"
This reverts commit 0e245c9896.
2016-01-18 23:20:08 +08:00
yihua.huang 73ae7a1d52 remove ci for jdk6 2016-01-18 23:19:39 +08:00
yihua.huang 0e245c9896 remove some unknown config 2016-01-18 22:39:45 +08:00
yihua.huang 9ed06ccdf0 update surefire version 2016-01-18 22:31:15 +08:00
Yihua Huang 9d0eeb9000 Merge pull request #218 from bingoko/master
Add PhantomJS headless browser support
2016-01-16 19:38:01 +08:00
Yihua Huang 84b046e4c9 Merge pull request #227 from hsqlu/master
update deprecated method
2016-01-16 19:36:52 +08:00
Yihua Huang cfde3b7657 Merge pull request #237 from SpenceZhou/master
Update RedisScheduler.java
2015-12-02 22:17:00 +08:00
SpenceZhou 165e5a72eb Update RedisScheduler.java
Fix a bug in RedisScheduler when getting the total crawled count
2015-12-02 17:10:42 +08:00
Yihua Huang 5f9e1a96f2 Merge pull request #233 from x1ny/master
Fix FileCacheQueueScheduler issues that prevented the program from exiting normally and left streams unclosed
2015-11-17 15:06:08 +08:00
Yihua Huang 7d7eb033d3 Merge pull request #234 from chy996633/master
Zhihu crawler example
2015-11-17 15:05:46 +08:00
chy996633 afd1617b58 Zhihu crawler example 2015-11-13 11:56:46 +08:00
x1ny 90e14b31b0 Fix FileCacheQueueScheduler issues that prevented the program from exiting normally and left streams unclosed
FileCacheQueueScheduler starts a thread that runs periodically to save data, but never stops it after the crawl finishes, so the program cannot exit; it also never closes its I/O streams.

Fix:
Have FileCacheQueueScheduler implement the Closeable interface and shut down the thread and the streams in its close method.
Add a call in Spider's close method to close the scheduler.
2015-11-12 23:10:20 +08:00
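The shutdown pattern described in that commit body can be sketched as follows. This is a minimal illustration, not the actual webmagic patch; the class and field names here are assumptions for the example.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the fix: the scheduler owns a periodic flush thread and a
// writer, and implements Closeable so the spider can stop both on exit.
public class ClosableScheduler implements Closeable {

    private final ScheduledExecutorService flushPool =
            Executors.newSingleThreadScheduledExecutor();
    private final PrintWriter fileWriter;

    public ClosableScheduler(PrintWriter fileWriter) {
        this.fileWriter = fileWriter;
        // Periodically persist queue state, as FileCacheQueueScheduler does.
        // Without close(), this non-daemon thread keeps the JVM alive.
        flushPool.scheduleAtFixedRate(fileWriter::flush, 10, 10, TimeUnit.SECONDS);
    }

    public boolean isClosed() {
        return flushPool.isShutdown();
    }

    @Override
    public void close() throws IOException {
        flushPool.shutdown();   // stop the background save thread
        fileWriter.close();     // release the underlying stream
    }

    public static void main(String[] args) throws IOException {
        ClosableScheduler scheduler =
                new ClosableScheduler(new PrintWriter(new StringWriter()));
        scheduler.close();
        System.out.println(scheduler.isClosed()); // prints true
    }
}
```

A Spider-style owner would then simply call scheduler.close() from its own close method, as the commit proposes.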
Qiannan Lu 155215290f resolve issue #226 2015-09-18 09:30:01 +08:00
Qiannan Lu 21f81bb8c1 update deprecated method 2015-09-18 08:59:40 +08:00
bingoko 5d365f7bf4 update and validate pom.xml
Update selenium and GhostDriver (PhantomJSDriver) to latest version.
2015-07-11 15:43:49 +01:00
bingoko d3bbece202 Add PhantomJS support for selenium
The configuration file is config.ini
The dependencies are updated in pom.xml.
Update SeleniumDownloader and WebDriverPool to support PhantomJS. 
NOTE: The versions of GhostDriver, Selenium, and PhantomJS are stable
and validated.

A GooglePlay example is under the samples package: GooglePlayProcessor.java
2015-07-11 15:34:21 +01:00
yihua.huang 56e0cd513a compile error fix 2015-04-15 23:21:06 +08:00
yihua.huang c5740b1840 change assert #200 2015-04-15 08:32:08 +08:00
yihua.huang 67eb632f4d test for issue #200 2015-04-15 08:31:45 +08:00
Yihua Huang b30ca6ce1e Merge pull request #198 from okuc/master
Fix a bug where site.setHttpProxy() has no effect
2015-03-09 16:58:52 +08:00
高军 590561a6e4 Fix a bug where site.setHttpProxy() has no effect 2015-03-09 15:54:15 +08:00
Yihua Huang 05a1f39569 Merge pull request #193 from EdwardsBean/fix-mppipeline
Bug fix: MultiPagePipeline and DoubleKeyMap concurrency bug
2015-02-18 09:50:17 +08:00
edwardsbean 74962d69b9 fix bug: MultiPagePipeline and DoubleKeyMap concurrency bug 2015-02-13 15:03:13 +08:00
Yihua Huang 6b9d21fcf3 Merge pull request #188 from EdwardsBean/retry_time
add retry sleep time
2015-01-22 10:40:03 +08:00
edwardsbean 4978665633 add retry sleep time 2015-01-21 13:30:02 +08:00
yihua.huang 8ffc1a7093 add NPE check for POST method 2015-01-13 14:10:00 +08:00
yihua.huang 8551b668a0 remove commented code 2014-09-29 14:51:36 +08:00
Yihua Huang 20422f1b63 Merge pull request #161 from zhugw/patch-4
Update FileCacheQueueScheduler.java
2014-09-29 14:46:35 +08:00
zhugw eb3c78b9d8 Update FileCacheQueueScheduler.java
Isn't this more rigorous? Otherwise, when restarting after an interruption, the (first) entry URL would still be added to the queue and written to the file.
But another problem remains. Suppose the first pass has crawled everything (detected via spider.getStatus == Stopped), we sleep for 24 hours, then crawl again (by calling the crawl method recursively).
Unlike a restart after an interruption, here lineReader == cursor, so the queue is empty at initialization and the entry URL is already in the urls set, which makes the crawl threads exit immediately. That leaves no way to pick up newly added content on the site.
Solution 1:
Once the crawl is detected as finished, immediately overwrite the cursor file. On the second pass the cursor is 0, so every URL in urls.txt is put back into the queue, and new URLs can be discovered through them.
Solution 2:
An optimization of solution 1. Solution 1 meets the business requirement, but it does a lot of wasted work: every old target URL is still downloaded, extracted, and persisted. New content usually appears under a HelpUrl, e.g. a new post on some page, or a few extra pages; so from the second pass onward it would be enough to put only the HelpUrls into the queue.

I'd appreciate feedback: is my understanding above correct, are there cases I haven't considered, or is there a simpler approach? Thanks!
2014-09-14 16:20:03 +08:00
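Solution 1 from that comment amounts to rewinding the cursor file once a pass finishes. A minimal sketch, assuming the cursor file simply stores the count of consumed lines of urls.txt (the file layout and names here are illustrative, not the actual webmagic code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of "Solution 1": FileCacheQueueScheduler-style persistence keeps
// a cursor file recording how many lines of urls.txt were already
// consumed; resetting it to 0 after a finished pass makes the next pass
// re-enqueue every known URL, through which new URLs can be discovered.
public class CursorReset {

    public static void resetCursor(Path cursorFile) throws IOException {
        Files.write(cursorFile, "0".getBytes(StandardCharsets.UTF_8));
    }

    public static int readCursor(Path cursorFile) throws IOException {
        String text = new String(Files.readAllBytes(cursorFile), StandardCharsets.UTF_8);
        return Integer.parseInt(text.trim());
    }

    public static void main(String[] args) throws IOException {
        Path cursor = Files.createTempFile("webmagic-cursor", ".txt");
        Files.write(cursor, "42".getBytes(StandardCharsets.UTF_8)); // pretend 42 URLs consumed
        resetCursor(cursor);                                        // crawl finished: rewind
        System.out.println(readCursor(cursor));                     // prints 0
    }
}
```

Solution 2 would additionally filter the re-enqueued URLs down to the HelpUrls before putting them in the queue.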
Yihua Huang 3a9c1d3002 Merge pull request #159 from zhugw/patch-3
Update Site.java
2014-09-12 13:09:50 +08:00
zhugw bc666e927d Update Site.java
The javadoc of setCycleRetryTimes says: "Set cycleRetryTimes times when download fail, 0 by default. Only work in RedisScheduler."
But looking at the source, there does not seem to be any such restriction, i.e. that it only works with RedisScheduler. So I'd like to ask whether that javadoc is out of date.
2014-09-12 12:42:57 +08:00
yihua.huang 42a30074c9 update urls.contains to DuplicateRemover in FileCacheQueueScheduler #157 2014-09-12 07:52:38 +08:00
Yihua Huang 689e89a9b2 Merge pull request #157 from zhugw/patch-1
Update FileCacheQueueScheduler.java
2014-09-12 07:37:56 +08:00
zhugw 1db940a088 Update FileCacheQueueScheduler.java
While using the library I found that urls.txt can contain duplicate URLs. Tracing through the source, I found that after the file is loaded at initialization, all URLs are read into a set, but when URLs to crawl are added later there is no check for whether they already exist in that set (i.e. in the file), which is what causes the duplicates. I modified the source accordingly; please review.
2014-09-11 15:46:09 +08:00
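The dedup fix described in that commit can be sketched like this. The names are illustrative, not the actual patch: the point is that membership in the in-memory set is checked before anything is appended to the file.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the fix: consult the URL set (loaded from urls.txt at init)
// before appending, so the file never accumulates duplicate URLs.
public class DedupUrlFile {

    private final Set<String> urls = new LinkedHashSet<>();   // mirrors urls.txt contents
    private final List<String> fileLines = new ArrayList<>(); // stands in for the file itself

    /** Returns true only if the URL was new and therefore recorded. */
    public boolean push(String url) {
        if (!urls.add(url)) {
            return false;    // already in the set, i.e. already in the file: skip
        }
        fileLines.add(url);  // only genuinely new URLs reach the file
        return true;
    }

    public int fileSize() {
        return fileLines.size();
    }

    public static void main(String[] args) {
        DedupUrlFile f = new DedupUrlFile();
        f.push("http://example.com/1");
        f.push("http://example.com/1");   // duplicate: ignored
        f.push("http://example.com/2");
        System.out.println(f.fileSize()); // prints 2
    }
}
```

The later commit 42a30074c9 ("update urls.contains to DuplicateRemover in FileCacheQueueScheduler") moved this membership test behind webmagic's DuplicateRemover abstraction.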
yihua.huang 147401ce5e remove duplicate setPath in ProxyPool 2014-09-09 22:58:44 +08:00
yihua.huang 3734865a6a fix package name =.= 2014-08-21 14:39:44 +08:00
yihua.huang e7668e01b8 fix SourceRegion error and add some tests on it #144 2014-08-21 14:29:06 +08:00
yihua.huang 4e5ba02020 fix test cont' 2014-08-18 11:08:17 +08:00
yihua.huang 4446669c24 fix test 2014-08-18 10:54:24 +08:00
yihua.huang 9866297ec4 Disable jsoup entity escape by Default. Set Html.DISABLE_HTML_ENTITY_ESCAPE to false to enable it. #149 2014-08-14 08:04:56 +08:00
yihua.huang 4e6e946dd7 more friendly exception message in PlainText #144 2014-08-13 10:02:16 +08:00
yihua.huang ebb931e0bf update assertj to test scope 2014-06-25 19:01:27 +08:00
yihua.huang af9939622b move thread package out of selector (because it was added by mistake at the beginning) 2014-06-25 18:19:50 +08:00
yihua.huang 2fd8f05fe2 change path separator for various OSes #139 2014-06-25 14:55:23 +08:00
yihua.huang eae37c868b new sample 2014-06-10 17:38:54 +08:00
yihua.huang b3a282e58d some fix for tests #130 2014-06-10 00:05:30 +08:00
yihua.huang b75e64a61b Merge branch 'yxssfxwzy-proxy' 2014-06-09 23:51:47 +08:00
yihua.huang 074d767f45 Merge branch 'proxy' of github.com:yxssfxwzy/webmagic into yxssfxwzy-proxy 2014-06-09 23:51:36 +08:00
zwf 2f89cfc31a add test and fix bug of proxy module 2014-06-09 13:32:02 +08:00