JDownloader Community - Appwork GmbH
 

#81
Old 24.04.2017, 21:16
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by raztoki View Post
it's already live,
the change that jiaz uploaded to implement those features was **External links are only visible to Support Staff**...
and the last full build was on **External links are only visible to Support Staff**...
you can usually see/monitor progress here, but some of the projects used do not have a redmine front end: **External links are only visible to Support Staff**...

Ah okay, thanks!

Is there anything to change here in LinkCrawler.linkcrawlerrules?


[ {
  "enabled" : true,
  "maxDecryptDepth" : 0,
  "id" : 1489938888362,
  "name" : null,
  "pattern" : "file:/.*?\\.csv$",
  "rule" : "DEEPDECRYPT",
  "packageNamePattern" : null,
  "formPattern" : null,
  "deepPattern" : null,
  "rewriteReplaceWith" : null
} ]

The rule above was created automatically after adding:

[ {
  "pattern" : "file:/.*?\\.csv$",
  "rule" : "DEEPDECRYPT"
} ]


But now Jiaz has pushed an update with the new deepdecryptfilesizelimit setting, and I am having some issues getting links crawled from some CSV files...
#82
Old 25.04.2017, 08:36
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

Did you change the value of the new setting? You should set it to -1 for unlimited.
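For reference, the limit is an advanced setting; a sketch of the entry under Settings -> Advanced Settings, assuming the key name referenced in this thread:

LinkCrawler.deepdecryptfilesizelimit = -1    (-1 = unlimited)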

No, the rule is the same. Only the rest of the settings are now included with their default values.
__________________
JD-Dev & Server-Admin
#83
Old 25.04.2017, 10:48
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
Did you change the value of the new setting? You should set it to -1 for unlimited.

No, the rule is the same. Only the rest of the settings are now included with their default values.

I have been using a value of 51874558, which is nearly the maximum of 52428800...
but I don't think it is the value itself; it seems to me like it is not responding to the URL links (to be more exact: they are direct http JPG image links)...

As you can see in the screenshot, it is detecting the number of links but not processing them any further...



This is also happening when I don't use that specific CSV file of no more than 1.6 MB; it also happens if I simply copy (CTRL+C) all the image URLs directly from Excel with the CSV document open...
Attached Thumbnails
parse-clipboard-processing-queue.JPG  
#84
Old 25.04.2017, 10:52
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

Please create a logfile and post the shown logID.
Create the log when this happens! From the screenshot, it looks like something is blocking the processing of the links.
__________________
JD-Dev & Server-Admin
#85
Old 25.04.2017, 11:10
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
Please create a logfile and post the shown logID.
Create the log when this happens! From the screenshot, it looks like something is blocking the processing of the links.
How do I create a logfile?
I think I have to do it right now, without restarting JD, to keep all possible info...


There is also no full path shown in some "Download From" entries, please see the screenshot... (the column is wide enough, I think)
Attached Thumbnails
download-from-no-full-path.JPG  
#86
Old 25.04.2017, 11:32
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

"Download From" is only visible for plain input. You can right-click, show the URL, and click into the URL to see all known URLs.
To create a log, see here: https://support.jdownloader.org/Know...d-session-logs
and post the shown logID here.

Create the log after crawling stalls.
I can also offer live help via TeamViewer; just send me an e-mail at support@jdownloader.org
__________________
JD-Dev & Server-Admin
#87
Old 28.04.2017, 10:56
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
"Download From" is only visible for plain input. You can right-click, show the URL, and click into the URL to see all known URLs.
To create a log, see here: **External links are only visible to Support Staff**...
and post the shown logID here.

Create the log after crawling stalls.
I can also offer live help via TeamViewer; just send me an e-mail at support@jdownloader.org


I removed Java and installed those two 64-bit versions...

Are those two the same? Is one just for standard usage and the other for advanced users?
Attached Thumbnails
java-8-131-64.JPG  
#88
Old 28.04.2017, 11:52
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

JRE = Java Runtime Environment = for running Java applications.
JDK = Java Development Kit = for developers.
You can use either of them.
__________________
JD-Dev & Server-Admin
#89
Old 26.06.2017, 11:13
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
JRE = Java Runtime Environment = for running Java applications.
JDK = Java Development Kit = for developers.
You can use either of them.
Just wanted to give an update...

Crawling 'large' CSV files is working nearly perfectly...
The files I was working with had up to 380,000 lines; the CSV file size depends on the total character count... the CSV files are between 3 and 42 MB...

(I am using -Xmx12g on a 16 GB RAM machine... sometimes CPU usage is really high while JD is only using 5-7 GB of RAM and another 6 GB of RAM is free and available)
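For reference, a minimal sketch of how that -Xmx value can be set, assuming a Windows install whose launcher reads a JDownloader2.vmoptions file next to JDownloader2.exe:

# JDownloader2.vmoptions - one JVM option per line
# -Xmx sets the maximum Java heap size (here 12 GB)
-Xmx12g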


What about adding a text line with the origin source (URL path or local file) to the BUBBLE in the bottom-right footer? It could be added to the list of characteristics that currently shows 8 items (duration, found links, found packages, ...).

That would help to see which source was added for crawling... sometimes crawling takes some hours, and when coming back after 6-12 hours there are sometimes still 3 bubbles pending...

######



Important:

How can I re-initiate or push the link crawler while it is active?

JD's RAM usage is at about 6-7 GB, and the system has more than 6 GB of free RAM available... It seems to be stuck or frozen, but it is still active, and if I add some new links to be crawled, it acts like a re-activation for the other pending tasks...
#90
Old 26.06.2017, 14:38
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

Quote:
Originally Posted by verheiratet1952 View Post
What about adding a text line with the origin source (URL path or local file) to the BUBBLE in the bottom-right footer? It could be added to the list
With the next core update, the bubble icon will be different for Folderwatch sources.
__________________
JD-Dev & Server-Admin
#91
Old 26.06.2017, 14:39
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

Quote:
Originally Posted by verheiratet1952 View Post
It seems to be stuck or frozen, but it is still active, and if I add some new links to be crawled, it acts like a re-activation for the other pending tasks...
Please create and provide a logfile, then we can check what is going on.
There is nothing for you to do; it should not get stuck or freeze.
__________________
JD-Dev & Server-Admin
#92
Old 26.06.2017, 15:47
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
Please create and provide a logfile, then we can check what is going on.
There is nothing for you to do; it should not get stuck or freeze.
It is not freezing or stuck... but it seems to be, for a few seconds, and then it continues with 10-1000 links... and this happens again and again...

The computer is not overloaded...
CPU is between 25-60%
there is a minimum of 3 GB of free RAM
no Firefox, no Chrome, no other apps

more than 50 GB of free SSD space
C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0

C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0\cfg
has 10 GB of files, of which 9 GB are from more than 1 month ago... mostly in the *.backup file format
#93
Old 26.06.2017, 15:49
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by verheiratet1952 View Post
It is not freezing or stuck... but it seems to be, for a few seconds, and then it continues with 10-1000 links... and this happens again and again...

The computer is not overloaded...
CPU is between 25-60%
there is a minimum of 3 GB of free RAM
no Firefox, no Chrome, no other apps

more than 50 GB of free SSD space
C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0

C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0\cfg
has 10 GB of files, of which 9 GB are from more than 1 month ago... mostly in the *.backup file format

"About JD" screenshot attached.
Attached Thumbnails
about-jd-26062017.JPG  
#94
Old 26.06.2017, 16:27
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

How many links does your linkcrawler list have when this happens? Linklist saving is synchronous at the moment, which means that every manipulation of the list (add, move, remove) waits for the save to finish. This will be changed in the future. I guess you have a long list, so it takes a moment to save/compress it, and the linkcrawler has to wait to access the list again.
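A minimal Java sketch of why that blocks (hypothetical class and names, not JDownloader's actual code):

import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: add() and save() share one lock, so while
// the list is being serialized/compressed/written, no links can be added.
class LinkList {
    private final List<String> links = new ArrayList<>();

    // Adding a link waits if a save is currently running.
    public synchronized void add(String url) {
        links.add(url);
    }

    // save() holds the same lock for the whole serialize/zip/write,
    // which with a 223k-link list takes a noticeable moment.
    public synchronized void save() throws InterruptedException {
        Thread.sleep(2000); // stands in for zipping and writing the full list
    }
}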

You can delete .backup files.
__________________
JD-Dev & Server-Admin
#95
Old 26.06.2017, 22:06
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by Jiaz View Post
How many links does your linkcrawler list have when this happens? Linklist saving is synchronous at the moment, which means that every manipulation of the list (add, move, remove) waits for the save to finish. This will be changed in the future. I guess you have a long list, so it takes a moment to save/compress it, and the linkcrawler has to wait to access the list again.

You can delete .backup files.

The current list has 223k entries; it is only .jpg files/links...


I don't really understand!?

"Linklist saving is synchronous at the moment, which means that every manipulation of the list (add, move, remove) waits for the save to finish."

I just add the CSV file via "Load Linkcontainer", and that's it...
#96
Old 27.06.2017, 01:17
raztoki is offline
English Supporter

Join Date: Apr 2010
Location: Australia
Posts: 16,062

It's how JDownloader is designed:
each new link added, regardless of whether it's an image or a zip, requires memory, and x seconds after a linkgrabber change (adding a link / changing a path / changing a filename) the list needs to be saved. The larger your list, the longer it takes!

Consider breaking up the CSV files into smaller components (see the sketch below),
or enabling auto-adding of links to the download tab & downloading. This would keep the linkgrabber small (in size to zip and write to disk), though it would still keep it busy with changes.
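A minimal Java sketch of such a split (the file names and the 80,000-line chunk size are assumptions for illustration, not part of JDownloader):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical helper: splits links.csv into links_part1.csv, links_part2.csv, ...
public class CsvSplitter {
    public static void main(String[] args) throws IOException {
        final int chunkSize = 80_000; // links per output file (assumed limit)
        List<String> lines = Files.readAllLines(Paths.get("links.csv"));
        int part = 1;
        for (int i = 0; i < lines.size(); i += chunkSize, part++) {
            // Write the next slice of at most chunkSize lines to its own file.
            List<String> chunk = lines.subList(i, Math.min(i + chunkSize, lines.size()));
            Files.write(Paths.get("links_part" + part + ".csv"), chunk);
        }
    }
}

Each resulting part can then be added via "Load Linkcontainer" one by one, as described above.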

raztoki
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170

Don't fight the system, use it to your advantage. :]
#97
Old 27.06.2017, 11:00
Jiaz is offline
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 62,502

Quote:
Originally Posted by verheiratet1952 View Post
The current list has 223k entries; it is only .jpg files/links...
Saving 223k links to disk takes a moment, and saving is a blocking/synchronous action that blocks all modifications of the list. So while it is saving, nothing happens with the list.
__________________
JD-Dev & Server-Admin
#98
Old 27.06.2017, 13:03
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by raztoki View Post
It's how JDownloader is designed:
each new link added, regardless of whether it's an image or a zip, requires memory, and x seconds after a linkgrabber change (adding a link / changing a path / changing a filename) the list needs to be saved. The larger your list, the longer it takes!

Consider breaking up the CSV files into smaller components,
or enabling auto-adding of links to the download tab & downloading. This would keep the linkgrabber small (in size to zip and write to disk), though it would still keep it busy with changes.

raztoki

Splitting the CSV files to a maximum of 80,000 links each works fine...

My idea was to launch the linkgrabber with a single CSV file and leave it alone for 12-18 hours... I will try splitting it and adding 4 CSV files at the same time... maybe that helps... (adding them one by one does work)
#99
Old 27.06.2017, 13:15
raztoki is offline
English Supporter

Join Date: Apr 2010
Location: Australia
Posts: 16,062

Doing it at the same time will have the same result: a large volume of links within the linkgrabber, which will cause the saving issue again.
You need to remove them from the linkgrabber (as in: auto-add them to the download tab) as they decrypt... thus keeping the volume down.
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170

Don't fight the system, use it to your advantage. :]
#100
Old 27.06.2017, 17:10
verheiratet1952 is offline
Bandwidth Beast

Join Date: Jan 2016
Posts: 147

Quote:
Originally Posted by raztoki View Post
Doing it at the same time will have the same result: a large volume of links within the linkgrabber, which will cause the saving issue again.
You need to remove them from the linkgrabber (as in: auto-add them to the download tab) as they decrypt... thus keeping the volume down.
Big sorry!

I forgot about removing links with the prefix "**External links are only visible to Support Staff**

That is why it looked stuck/frozen or took extremely long.