The settings I've mentioned in my last post are now available in the current JDownloader version.
The next update will include one more change:
Whenever something goes wrong during a profile crawl, the dummy item that is returned contains all parameters needed to resume the crawl at the exact point where it stopped, e.g.:
Code:
twitter.com/username/likes?page=178&totalCrawledTweetsCount=3656&nextCursor=xxxyyy
If you re-add this URL later, the crawler will resume where it stopped, so you should be able to crawl big profiles even without fully automated rate-limit handling.
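To illustrate the idea, here is a minimal sketch in Java of how a crawler could read those resume parameters back out of a re-added URL and continue from the stored state. This is not the actual JDownloader plugin code; the class and method names are made up, and only the parameter names (page, totalCrawledTweetsCount, nextCursor) are taken from the example above.
Code:
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class ResumeUrlSketch {

    // Splits the query string of a URL into key/value pairs (values are used as-is, no decoding).
    static Map<String, String> parseQuery(String url) {
        Map<String, String> params = new HashMap<>();
        String query = URI.create(url).getQuery();
        if (query == null) {
            return params;
        }
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        // Example resume URL from the post above.
        String url = "https://twitter.com/username/likes?page=178&totalCrawledTweetsCount=3656&nextCursor=xxxyyy";
        Map<String, String> params = parseQuery(url);

        // Fall back to a fresh crawl when no resume parameters are present.
        int startPage = Integer.parseInt(params.getOrDefault("page", "1"));
        int alreadyCrawled = Integer.parseInt(params.getOrDefault("totalCrawledTweetsCount", "0"));
        String cursor = params.getOrDefault("nextCursor", null);

        System.out.println("Resuming at page " + startPage + " with cursor " + cursor
                + " (" + alreadyCrawled + " tweets already crawled)");
        // ...the actual crawl loop would continue paginating from 'cursor' here...
    }
}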
Please wait for the next CORE-Update!
Are you waiting for recently announced changes to get released?
Updates do not necessarily get released immediately!
Please read our Update FAQ!
-psp-