#81
Quote:
ah okay, thanks! Is there anything to change here in LinkCrawler.linkcrawlerrules? [ { "enabled" : true, "maxDecryptDepth" : 0, "id" : 1489938888362, "name" : null, "pattern" : "file:/.*?\\.csv$", "rule" : "DEEPDECRYPT", "packageNamePattern" : null, "formPattern" : null, "deepPattern" : null, "rewriteReplaceWith" : null } ] The rule above was created after adding [ { "pattern" : "file:/.*?\\.csv$", "rule" : "DEEPDECRYPT" } ], but now that JIAZ has added the new deepdecryptfilesizelimit setting, I have some issues getting links crawled from certain csv files...
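The DEEPDECRYPT rule's pattern from the quote above can be sanity-checked outside JDownloader. A minimal sketch in Python (the sample file paths are invented; JDownloader itself evaluates the pattern with Java's regex engine, which behaves the same for this simple expression):

```python
import re

# Pattern copied from the linkcrawlerrules JSON above ("\\." in JSON is "\." in regex).
pattern = re.compile(r"file:/.*?\.csv$")

print(bool(pattern.match("file:/C:/data/links.csv")))      # → True: a local .csv path
print(bool(pattern.match("file:/C:/data/links.csv.bak")))  # → False: must end in .csv
```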
#82
Did you change the value of the new setting? You should set it to -1 for unlimited.
No, the rule is the same. Only the rest of the settings are now included with their default values.
__________________
JD-Dev & Server-Admin
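For reference, the relevant Advanced Settings entries would then look something like this (a sketch; the key spellings are taken from the posts above, so verify them in your own Settings → Advanced Settings before editing):

```
// LinkCrawler.linkcrawlerrules — the rule itself, unchanged
[ { "pattern" : "file:/.*?\\.csv$", "rule" : "DEEPDECRYPT" } ]

// LinkCrawler.deepdecryptfilesizelimit — size limit for files scanned by
// DEEPDECRYPT rules; -1 = unlimited
-1
```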
#83
Quote:
I have been using 51874558, which is nearly the maximum of 52428800... but I don't think it is the value itself. It seems to me that it is not responding to URL links (more exactly, direct http jpg image links)... As you can see (screenshot), it detects the number of links but does not process any further... This also happens when I don't use that specific csv file of no more than 1.6 MB; it also happens if I simply copy (CTRL+C) all image URLs directly from Excel with the csv document open...
#84
Please create a logfile and post the shown logID.
Create the log when this happens! From the screenshot it looks like something blocks the processing of the links.
__________________
JD-Dev & Server-Admin
#85
Quote:
I think I have to do that right now, without restarting JD, to keep all possible info... There is also no full path shown for some "Download From" entries, please see the screenshot... (the column is wide enough, I think)
#86
DownloadFrom is only visible for plain input. You can right-click, show the URL, and click into the URL field to see all known URLs.
Create a log, see https://support.jdownloader.org/Know...d-session-logs and post the shown logID here. Create the log after the crawling stalls. I can also offer live help via TeamViewer, just send an e-mail to support@jdownloader.org
__________________
JD-Dev & Server-Admin
#87
Quote:
I removed Java and installed those two 64-bit versions... Are those two the same? Is one just for standard usage and the other for advanced users?
#88
JRE = Java Runtime Environment = for running Java applications.
JDK = Java Development Kit = for developers. You can use either of them.
__________________
JD-Dev & Server-Admin
#89
Quote:
Crawling 'large' csv files is working nearly perfectly... The files I was working with had up to 380'000 lines; the csv file size depends on the total number of characters, and my csv files are between 3-42 MB... (I am using -Xmx12g on a 16 GB RAM machine... sometimes CPU usage is really high while JD itself uses only 5-7 GB RAM and another 6 GB RAM is free and available.) What about adding a text line with the origin source (URL path or local file) to the BUBBLE in the bottom right footer? It could be added to the list of characteristics that currently shows 8 entries (duration, found links, found packages, ...). That would help to see which source has been added for crawling... Sometimes crawling takes hours, and when coming back after 6-12 hours there are sometimes still 3 bubbles pending... ###### Important: how can I re-initiate or push the link crawler while it is active? RAM usage of JD is at about 6-7 GB, and the system has more than 6 GB of free RAM available... It seems to be stuck or frozen, but it is still active, and if I add some new links to be crawled, that acts like a re-activation for the other pending tasks...
#90
With the next core update, the bubble icon will be different for FolderWatch sources.
__________________
JD-Dev & Server-Admin
#91
Quote:
There is nothing for you to do, and it should not get stuck/freeze.
__________________
JD-Dev & Server-Admin
#92
Quote:
The computer is not overloaded... CPU is between 25-60%, there is a minimum of 3 GB of free RAM, no Firefox, no Chrome, no other apps, and more than 50 GB of free SSD space. JD is installed at C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0, and C:\Users\xxxxxxxx\AppData\Local\JDownloader v2.0\cfg holds 10 GB of files, of which 9 GB are more than 1 month old... mostly in the *.backup file format
#93
Quote:
Screenshot of the About JDownloader dialog is attached.
#94
How many links does your linkcrawler list have when this happens? Linklist saving is synchronous at the moment; that means every manipulation of the list (add, move, remove) waits for the save to finish. This will be changed in the future. I guess you have a long list, so saving/compressing it takes a moment, and the linkcrawler has to wait to access the list again.
You can delete the .backup files.
__________________
JD-Dev & Server-Admin
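The synchronous saving described above can be illustrated with a small sketch (a hypothetical simplification in Python, not JDownloader's actual Java code): one lock guards the list, so any add has to wait while a save holds it.

```python
import json
import threading
import zlib

class LinkList:
    """Toy model of a link list whose save blocks all modifications."""

    def __init__(self):
        self._lock = threading.Lock()  # one lock guards both saves and edits
        self._links = []

    def add(self, url):
        with self._lock:               # an add must wait while a save holds the lock
            self._links.append(url)

    def save(self):
        with self._lock:               # nothing can modify the list during the save
            data = json.dumps(self._links).encode()
            return zlib.compress(data) # compress + write time grows with list size

links = LinkList()
for i in range(1000):
    links.add(f"http://example.com/{i}.jpg")
blob = links.save()
```

The larger the list, the longer `save()` holds the lock, which is exactly why a 223k-entry linkgrabber list appears to stall between saves.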
#95
Quote:
My current list has 223k entries, only .jpg files/links... I don't really understand: "linklist saving is synchronous at the moment, that means that every manipulation of the list (add, move, remove) waits for the save to finish"!? I just add the csv file via "Load Linkcontainer" and that's it...
#96
It's how JDownloader is designed.
Each new link that is added, regardless of whether it is an image or a zip, requires memory, and a few seconds after every linkgrabber change (adding a link / changing a path / changing a filename) the list needs to be saved. The larger your list, the longer it takes! Consider breaking up the csv files into smaller parts, or enabling auto-adding of links to the download tab & downloads. This keeps the linkgrabber list small (less to zip and write to disk), though it still keeps it busy with changes. raztoki
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
#97
Saving 223k links to disk takes a moment, and saving is a blocking/synchronous action that blocks all modifications of the list. So while saving is in progress, nothing else happens with the list.
__________________
JD-Dev & Server-Admin
#98
Quote:
Splitting the csv files into parts of at most 80'000 links each works fine... My idea was to launch the linkgrabber with a single csv file and leave it alone for 12-18 hours... I will try splitting and adding 4 csv files at the same time... maybe that helps... (adding them one by one does work)
#99
Adding them at the same time will have the same result = a large volume of links within the linkgrabber, which will cause the saving issue again.
You need to remove them from the linkgrabber (as in: auto-add them to the download tab) as they decrypt... thus keeping the volume down.
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
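Splitting a links csv into chunks of at most 80'000 rows, as suggested in the thread, can be scripted. A minimal sketch (the helper and output file names are my own, not a JDownloader feature):

```python
import csv
from pathlib import Path

def split_csv(src, max_rows=80_000):
    """Split src into numbered chunks of at most max_rows rows each.

    Returns the list of paths to the chunk files, written next to src.
    """
    src = Path(src)
    with src.open(newline="", encoding="utf-8") as fh:
        rows = list(csv.reader(fh))
    out_paths = []
    for n, start in enumerate(range(0, len(rows), max_rows), start=1):
        out = src.with_name(f"{src.stem}_part{n}.csv")
        with out.open("w", newline="", encoding="utf-8") as fh:
            csv.writer(fh).writerows(rows[start:start + max_rows])
        out_paths.append(out)
    return out_paths
```

The parts can then be loaded one after another via "Load Linkcontainer", letting each batch finish before the next is added.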
#100
Quote:
I forgot about removing the links with the prefix "**External links are only visible to Support Staff**"... That is why it looked stuck/frozen or seemed to take extremely long.
#101
Quote:
BUT: I was adding a csv with about 50'000 links... The bubble showed exactly the right number... but when I came back to my desktop I found a package with about 33'000 images and no further linkgrabber processing... I was a little bit surprised... Then, after adding that 33'000-image package to the downloads, I noticed some "postprocessing" with an arrow icon adding additional images to the linkgrabber... Do you have an idea what was going on? There must have been a minimum of 2 hours between the first "linkgrabber finish" and the "postprocessing"... jd2 was using 5 GB of the available 12 GB... that is why I believe it is not a performance issue...
#102
A log and example csv files would be good to have...
GreeZ psp
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#103
Are you sure the linkcrawling was finished? The more links there are in the linkgrabber, the longer it takes to add new links (dupe checking, filename handling, packaging, adding).
So do I understand you correctly: JDownloader seems to be idle, but as soon as you move links to the download list, the crawling continues and adds new links?
__________________
JD-Dev & Server-Admin
#104
Quote:
But sometimes that postprocessing also happened after the so-called idle status had been active for hours, without me adding anything to the download list... (Of course it also depends on the server and location, and on whether it is a direct link or a URL path where jd2 deep crawling is necessary, ...) Maybe I am a bit meticulous, but it is better to do things properly.
#105
The balloon window should show up during crawling. Did it close itself?
When this happens again, could you please provide a screenshot?
__________________
JD-Dev & Server-Admin
#106
Quote:
The balloon window disappears very often... and shows up again... I don't close the balloon by hitting the X... but in that particular case there was no balloon window after the idle status, when the postprocessing began to add links... Before that, when I was adding the csv file with those 50k links, the balloon window was there with that exact total number of links...
#107
Please provide screenshots when this happens again.
Does the same window show up again, or a different one?
__________________
JD-Dev & Server-Admin
#108
Quote:
There were no other csv files or links added in between... By the way, just to let you know: I am simply reporting what I remember...
a) Just a few minutes ago the 'arrow icon' showed up with mouse-over text saying something like "downloads will start in a few seconds". There are no links in the linkgrabber that could have been scheduled, and the setting to start anything automatically is set to 'never'.
b) Sometimes when there is an update available and I click on the globe icon, I have to confirm again in some alert window... So far, so good, but sometimes the update takes more than 30 min, or there is no longer any jd process visible in the Task Manager GUI...
#109
a) This will show up either when you have set auto-add/auto-start (check the quick settings in the linkgrabber) or when you have packagizer rules in place that trigger auto-add/auto-start.
b) What do you mean? Clicking the button should tell you about the pending update and that you can install it now if you want. If an update takes more than 1 min (okay, maybe 2 min for a slower computer), something is NOT okay; there must be something wrong with your setup. What exactly takes so long? Please note that the update only starts after JDownloader has stopped and exited itself; the update happens in a new process. Maybe there are pending linkcrawling processes that JDownloader is waiting for?
__________________
JD-Dev & Server-Admin
#110
Quote:
But I forgot to tell you: it happens when jd is in idle status... and it only occurs every 3rd or 4th time... By the way, I know about the saving procedure in the /cfg/ folder for the linkcrawler or download files... that can take some minutes... but that is not the point... And sometimes after hitting the globe for an update, jd shuts down and I have to restart it; only then does the update start...
#111
Quote:
RAM usage is a lot less than before after closing and restarting JD from idle... The download and link lists are exactly the same... There is no way to free up RAM while JD is running... It looks like the RAM usage came from the csv crawling, but the memory is not freed when the crawling has finished... (in that particular case there was no postprocessing)
#112
Java does not free memory / give it back to the OS by default. That means it will grow its heap, and of course memory usage will be a lot lower after a restart. If you want, you can memory-tune your Java; there are many parameters available for fine tuning. It is even possible to make Java give memory back to the OS. I suggest googling for the parameters.
__________________
JD-Dev & Server-Admin
#113
Quote:
I can also add more parameters in the .vmoptions file, where I use -Xmx12g at the moment...
#114
In .vmoptions you can also specify additional Java parameters; -Xmx is itself a Java parameter.
For example, see stackoverflow.com/questions/43543404/jvm-release-unused-heap
__________________
JD-Dev & Server-Admin
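A sketch of what a combined .vmoptions could look like (flag availability depends on your Java version: -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio are standard HotSpot flags, while -XX:G1PeriodicGCInterval needs JDK 12+; treat the concrete values as assumptions to tune, not recommendations):

```
# JDownloader2.vmoptions — one JVM option per line, '#' starts a comment
-Xms1g
-Xmx6g
# G1 can uncommit unused heap regions and return them to the OS
-XX:+UseG1GC
# shrink the heap more aggressively after GC
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=30
# JDK 12+: trigger periodic GCs that give memory back while JD is idle
-XX:G1PeriodicGCInterval=60000
```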
#115
How do I turn off the linkgrabber adding links from the clipboard (CTRL+C)?
Unchecking the 'linkgrabber auto start' box in the advanced settings does not help :( And a more specific question: how do I exclude domain names like tumblr.com from the linkgrabber while the linkgrabber is turned ON?
#116
Can I also add "-Xms1g -Xmx6g" in .vmoptions instead of "-Xmx6g"?
#117
Clipboard monitoring is an option that JD enables by default.
If you disable that function, it won't monitor the clipboard anymore. raztoki
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
#118
Quote:
Set up a linkgrabber filter that matches the tumblr.com host.
__________________
JD-Dev & Server-Admin
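The host pattern for such a filter can be prototyped outside JD. A sketch (the regex below is my own suggestion, not an official JDownloader filter rule; in the Linkgrabber Filter dialog the equivalent pattern would go into the hostname/URL condition):

```python
import re

# Matches tumblr.com and any subdomain, but not look-alikes such as nottumblr.com.
host_filter = re.compile(r"(^|\.)tumblr\.com$", re.IGNORECASE)

hosts = ["tumblr.com", "media.tumblr.com", "example.com", "nottumblr.com"]
blocked = [h for h in hosts if host_filter.search(h)]
print(blocked)  # → ['tumblr.com', 'media.tumblr.com']
```

The `(^|\.)` prefix and `$` anchor keep the filter from accidentally matching hosts that merely contain the string "tumblr.com".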
#119
But how do I set up those filters for those two options?
#120
Quote:
I don't see why the linkgrabber starts crawling/adding links 3 out of 10 times after CTRL+C... 'Linkgrabber auto start' is off/unchecked. jd2beta is updated to the latest version and was restarted several times after the settings were changed; the desktop machine was also rebooted several times...