#1
Extraction extremely slow with huge number of archives
Hi,

In JD2 beta I downloaded, in one haul, the huge number of 5,448 .zip archives that the DWD (Deutscher Wetterdienst) site makes available for free download over FTP. I had automatic extraction enabled. After the download completed, JD2 started extracting as planned, and that went well for a while, I'd guess for about 2,000 archives. But then extraction dropped to a very slow crawl of maybe 20 archives per minute. CPU usage was very high (about 90%) on all four CPU cores, which is extremely rare on my computer (and I do use some very demanding software at times). After losing patience I exited JD2 and extracted the remaining roughly 3,000 archives by hand within a few minutes using WinRAR.

These archives are all quite small, each less than 200 KB; all 5,448 together total about 380 MB. I'm on Windows Vista, with an AMD Phenom II X4 810 quad-core CPU, 4 GB RAM and plenty of hard-disk space.

The link to the folder containing the archives is: ftp://ftp-cdc.dwd.de/pub/CDC/observa...ip/historical/ This link cannot be used directly in JD2; it points to the folder on the DWD FTP site where the links to the archives can be collected. It seems more sensible to present this link than the more than 5,000 direct links, which would take far too much space in this post.

I hope you can do something with this observation. Maybe it is something that should be fixed in JD2. Thanks for looking into this. ;)

Last edited by rockwater; 17.06.2015 at 01:46.
#2
Thanks for your post, hopefully Jiaz can spend some time and test this.
raztoki
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
#3
How did you set up your JDownloader?
Do you let JD remove links after they finish? How much memory can your JDownloader use? See the About dialog, and please tell me all three numbers shown there.

My guess: many links -> many archive infos -> Java reached its memory limit and spends much of its time on garbage collection.
__________________
JD-Dev & Server-Admin
#4
Hi Jiaz,

My JDownloader setup (General Setup / Download Management):
- max. simultaneous downloads: 6
- max. simultaneous downloads per host: 1
- max. chunks per download: 1
- remove finished downloads: never
- if the file already exists: ask for each file

Memory used by JDownloader (About JDownloader dialog): Java: Oracle Corporation - 1.8.0_31 (298.33 MB/396.50 MB/910.50 MB)

For practical reasons I had turned off automatic link checking by the LinkCollector in the Advanced Settings section for these links. I hope these are the data you asked for.

I didn't know JDownloader could automatically remove links immediately after downloading. Is that what is meant by "remove finished downloads" in the General Setup / Download Management section?

Thanks for addressing my post.
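The three numbers in the About dialog (here 298.33 MB/396.50 MB/910.50 MB) look like the JVM's used heap, currently allocated heap, and maximum heap. Assuming that mapping (an assumption on my part, the dialog doesn't label them), the same values can be read from the standard `java.lang.Runtime` API:

```java
// Minimal sketch: print heap numbers in the same "used/allocated/max" shape
// as the About dialog. The mapping to the dialog's three numbers is assumed.
public class HeapNumbers {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // heap currently in use
        long total = rt.totalMemory();                  // heap currently allocated
        long max = rt.maxMemory();                      // hard ceiling (-Xmx)
        System.out.printf("%.2f MB/%.2f MB/%.2f MB%n",
                used / 1048576.0, total / 1048576.0, max / 1048576.0);
    }
}
```

On this reading, the 910.50 MB figure would be the heap ceiling: once "used" approaches it, the garbage collector has to run continuously, which matches the high CPU usage described in post #1.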
#5
My guess is that it somehow reached the memory limit. Next time it is slow with high CPU usage, open the About dialog again, write down the three numbers and tell us.

I think that having many archives (not a few hundred, but thousands) in the same session causes this. I will rewrite this part as soon as possible, but for the moment I think the best workaround is to restart after a few thousand extracted archives.

Yes, "remove finished downloads". But you can also set up JD to remove finished, extracted archives (better in your case): under Settings > Extraction you can configure it to 1) remove links from the list and 2) remove the archive parts from disk after successful extraction.
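The effect Jiaz describes can be illustrated with a hypothetical sketch (this is NOT JDownloader's actual code; the class, field and per-archive size below are invented): if one info object is kept alive per archive for the whole session, thousands of archives mean thousands of long-lived objects, and once the heap nears its ceiling the garbage collector runs almost constantly, which shows up as high CPU usage.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of per-archive state accumulating over a session.
public class ArchiveInfoCache {
    private final Map<String, byte[]> infos = new HashMap<>();

    // Each queued archive pins some metadata in memory (size invented here).
    void onArchiveQueued(String archiveName) {
        infos.put(archiveName, new byte[4 * 1024]);
    }

    // Dropping the entry lets the garbage collector reclaim its memory.
    void onArchiveRemoved(String archiveName) {
        infos.remove(archiveName);
    }

    int liveEntries() {
        return infos.size();
    }

    public static void main(String[] args) {
        ArchiveInfoCache cache = new ArchiveInfoCache();
        for (int i = 0; i < 5448; i++) {
            cache.onArchiveQueued("archive" + i + ".zip");
        }
        System.out.println(cache.liveEntries()); // 5448 objects kept alive
        for (int i = 0; i < 5448; i++) {
            cache.onArchiveRemoved("archive" + i + ".zip");
        }
        System.out.println(cache.liveEntries()); // 0 after cleanup
    }
}
```

As post #7 below makes clear, in the current JDownloader the archive info is held in a way that removing finished links does not fully release, so a restart (which drops everything) is the practical cleanup until that part is rewritten.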
__________________
JD-Dev & Server-Admin
#6
Hi Jiaz,

Thank you very much for your quick answer. I'm not a software programmer, but I understand from what you're saying that Java's memory management is probably overburdened when a massive load of archives must be processed in one session. I'll remember to write down the figures in the About dialog window and send them to the JD community whenever I encounter problems like this with JDownloader (if JD is still responsive under such circumstances).

Thanks for your advice on how to handle this kind of situation and on setting up JD to remove finished links/archives automatically. I'm going to use that option. I hope you can find time to rewrite the parts in JD that are responsible for this memory problem.

Do you think it would help free up memory if I instruct JD to remove the finished (extracted) archives and links during the active session?

rockwater
#7
That will not help much, as the archive info is stored in a different way and a restart is currently the only way to "clean up". That is the reason I want/need to rewrite this part.

Thanks again for your understanding, and have fun with JDownloader!
__________________
JD-Dev & Server-Admin