#1
Cancelling FolderWatch Linkgrabber
hi, I added a bunch of *.crawljob files to FolderWatch; each file contains anywhere from 30-70 URLs. JD became slow to respond to even a simple right click. I checked Task Manager and CPU usage was above 45%, so I decided to cancel all link grabbing via right click on the linkgrabber icons at the bottom of the JD window.
What I didn't realize was that JD moved all the *.crawljob files I had just cancelled to the 'added' folder, even though not all of their URLs had actually been added to the LinkGrabber tab. Now I don't know which of the cancelled *.crawljob files to retry, because they were created a couple of days ago and FolderWatch didn't modify them, so I can't use the modified timestamp to back-reference them. Is there a log with timestamps showing when the latest *.crawljob files were moved to the 'added' folder by FolderWatch? Also, why are there so many link-grabbing icons at the bottom of the screen while the URLs are being grabbed from the *.crawljob files? This process alone appears to use a lot of CPU resources. Could a limit be set, e.g. to 5 concurrent grabs? There appeared to be more than 10 link-grabbing icons.
#2
Hi,
1. Every job starts in its own crawler, which creates a separate icon.
2. You can set the total allowed number of link crawler threads with this advanced setting:
Code:
link crawler maxthreads
3. The crawlers are not "intelligent" at all.
4. Logs: EDIT: There are no specific logs you can use to track single crawljobs, but you can enable debug logs and then check your logs. Every plugin, for example, has its own log. See the "debug logs" setting description as part of the log posting instructions: https://support.jdownloader.org/Know...d-session-logs -psp-
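Since JD itself doesn't log the move with a timestamp, filesystem metadata can serve as a fallback: on Linux, moving or renaming a file within the same filesystem preserves its modification time (st_mtime) but updates its inode change time (st_ctime), so sorting the 'added' folder by ctime can surface the most recently moved crawljobs. A minimal sketch, assuming a Linux-like system; the `ADDED_DIR` path is a placeholder you would point at your own FolderWatch 'added' folder:

```python
from datetime import datetime
from pathlib import Path

# Placeholder: point this at your FolderWatch "added" folder.
ADDED_DIR = Path("added")

def recently_moved_crawljobs(folder: Path):
    """Return *.crawljob files sorted by inode change time, newest first.

    On Linux, a move/rename within the same filesystem keeps st_mtime
    but updates st_ctime, so the most recently moved files sort first.
    (On Windows, st_ctime is the creation time instead, so this
    heuristic does not apply there.)
    """
    jobs = folder.glob("*.crawljob")
    return sorted(jobs, key=lambda p: p.stat().st_ctime, reverse=True)

if __name__ == "__main__":
    if ADDED_DIR.is_dir():
        for job in recently_moved_crawljobs(ADDED_DIR):
            stamp = datetime.fromtimestamp(job.stat().st_ctime)
            print(f"{stamp:%Y-%m-%d %H:%M:%S}  {job.name}")
```

Note this only narrows down candidates; it cannot tell you which of those jobs were cancelled mid-crawl.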
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
Last edited by pspzockerscene; 21.04.2020 at 13:08.
#3
Sorry, I got a few things wrong in my initial answer.
Your post was about crawljob files and NOT link crawler rules. I'm correcting my previous post at the moment. -psp-
Last edited by pspzockerscene; 21.04.2020 at 12:39.
#4
Quote:
Code:
link crawler maxthreads
Quote:
Quote:
#5
Quote:
It is impossible for JD to know whether the crawl process was successful or not. JD could only know this if you were able to specify, e.g., the number of expected results. Most users would never use this --> I would again recommend you try to do this via the EventScripter. Quote:
Only official JD supporters can view logs. -psp-
#6
Quote:
Example: 1 crawljob with 1 job
1. Job: parse the crawljob file
2. Job: parse the job itself
3. Job: process the job itself
The indicator icon will disappear once the complete job chain is finished.
__________________
JD-Dev & Server-Admin
#7
Quote:
Also, JDownloader doesn't know whether a job is complete or incomplete, because it doesn't know how many links you expect as a result.
- Maybe the link is unsupported -> no result -> incomplete or not?
- Maybe the link is supported but no content was found -> no result, but processed -> incomplete or not?
- Maybe the link is supported but, due to a bug, only 1 of 10 links was found -> results, and processed -> incomplete or not?
There is no indication whether the job is finished/incomplete/nothing found...
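Since JD cannot know how many results you expect, you can at least count the URLs in a .crawljob file yourself and compare that against what actually lands in the LinkGrabber. A minimal sketch, assuming the simple key=value flavour of the .crawljob format where URLs sit on `text=` lines (a JSON-style crawljob would need a different parser):

```python
import re
from pathlib import Path

URL_RE = re.compile(r"https?://\S+")

def count_urls_in_crawljob(path: Path) -> int:
    """Count URLs in a key=value style .crawljob file.

    Assumption: URLs appear on lines like
        text=http://host/a http://host/b
    (the property flavour of the format). JSON-style crawljobs
    would need a json.loads() branch instead.
    """
    count = 0
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.strip().lower().startswith("text="):
            count += len(URL_RE.findall(line))
    return count
```

Comparing this count against the number of entries the crawljob produced in the LinkGrabber gives a rough completeness check, with the caveats Jiaz lists above (unsupported links and offline content legitimately produce fewer results).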