#1
JD Constantly rewriting cfg\downloadlist__.zip and big CPU load
See the attached picture.

The active downloads total ~500 kB/s; that traffic is barely visible as static. The big humps are JD writing the files cfg\downloadlistXXXXXXX.zip at ~8 MB/s, then a short pause before JD writes the next file. Each of those files is ~140 MB zipped. The CPU load during the writes is >25% (4-core CPU). JD is pretty much ripping my SSD apart with over 5 GB of writes every 15 minutes.

Last edited by zrato; 29.12.2016 at 00:00.
#2
Seems I have found the problem, thanks to jd.controlling.downloadcontroller.DownloadWatchDog.log.0:

--ID:119TS:1482976127383-29.12.16 02:48:47 - [jd.controlling.downloadcontroller.DownloadWatchDog(setFinalLinkStatus)] -> DownloadCandidate:xxx.zip@fileflyer.com|Host fileflyer.com|Account:Plugin:fileflyer.com|Version:35367|Type:NONE|Account:null|Proxy:Direkt->FILE_UNAVAILABLE

It was a download in my list that was unavailable from fileflyer.com. It was retried about every 30 s. If I see it correctly, you save the download list directly into the zip, yes? Every time a download attempt started, the package size / loaded MB changed, so I guess that is why an updated file was written?

I also had a similar download from sendmyway that I deleted from the list before deleting the fileflyer download, but that alone did not change the behavior (probably because the fileflyer download was still in the list?). Now, with both downloads removed, not only have the giant copy orgies stopped, the CPU usage has dropped significantly and even my RAM usage has dropped very significantly (2.6 GB -> 1.9 GB and still falling. EDIT: 1.7 GB now).

Last edited by zrato; 29.12.2016 at 04:12.
#3
Ticket:
GreeZ pspzockerscene
__________________
JD Supporter, Plugin Dev. & Community Manager
First Steps & Tutorials || JDownloader 2 Setup Download
#4
Please stop writing downloadList__.zip all the time
JD2 backs up the download list to a zip file in the cfg folder. For me this file is ~175 MB. Not only does this shred my SSD with huge amounts of writes (compared to what it does normally), it also uses a lot of CPU, as you can see in the screenshot: Windows shows 25% CPU for ~25 s (on a 4-core CPU).

The problem gets worse (or starts happening) when there are stray downloads trying to re-run, for example if the host does not respond or a plugin problem prevents fetching the file. Please give the user an option (in expert options) for a hard minimum backup interval, where I can set the minutes between two backups.
#5
Next example: downloads running at 800 kB/s, JD constantly writing at 7 MB/s with virtually no pause, and causing a constant 25% CPU load on a 4-core i5.
#6
Please refrain from making new threads about something you have commented on previously.
I've merged your duplicated threads. Maybe look at finding a solution/workaround in the meantime, for example adjusting the config-saving settings (waiting for the SVN ticket to get a resolution). raztoki
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
#7
I just noticed the writes to my SSD; is this still a problem?
#8
Nothing has changed with respect to saving to disk.
There are advanced config settings to delay writing to disk (they have existed for years), but the longer the delay, the higher the cost when it comes to data recovery in a not-so-nice event. raztoki
#9
jdlog://7810804015941/
02.06.17 20.00.13 <--> 03.06.17 16.19.31

The condition seems to occur when there is RAM starvation? When I open a bajillion Chrome tabs it starts to happen, and it continues even after things calm down and I have free memory again. What is the setting for delaying the saving of the download list? It seems to resave the download list even if the files being downloaded have not changed status, i.e. the same file is still being downloaded.

*OK, the system has calmed down and it isn't saving as frequently, but it still happens every few minutes even though the same file is being downloaded; maybe it's IO piling up? Also, my download list is quite large.

Last edited by travestree; 04.06.2017 at 04:22.
#10
It only ever saves configs if there has been a change.
There are checks that limit saving to once every x seconds between saves; this seems to be a linkcollector (linkgrabber) only preference (Settings > Advanced Settings > filter: 'save'). raztoki
#11
Quote:
So does it write excessively when downloading small files, like packages filled with JPGs? I've also seen the strange behavior where it writes 3 different download lists at the same time.
#12
I will add more fine-tuning settings for this next week.
__________________
JD-Dev & Server-Admin
#13
Sorry for the long wait; I will try to find time for it this week.
#14
Any updates on this? And what are the mentioned settings for changing the save delay?

I recently noticed that my SSD, which is not even a year old, has already degraded to 90%. I was pretty shocked, and then I saw that JD2 had 2 TB (!!!!) of I/O writes in Task Manager just in this last session. The high CPU usage has been an issue forever too.

For the love of God, can't you just use an SQLite table for it all? Then you could update a status with barely any disk usage at all. Rewriting a massive zip with everything in it all the time is insanity. The performance of JD2 is one of the worst software issues I have ever experienced in my whole life (it uses a minimum of 50% CPU all the time once a certain number of items are in the list), and now I learn it has also half destroyed my SSD.

Please, at the very least, let me know how to change the save behavior. This is nuts. I'm tearing my hair out over why someone thought this was a good idea.

Edit: With SQLite you also wouldn't have to keep absolutely everything in memory. You could reduce RAM usage to the current list view and active/unfinished items.

Last edited by TomArrow; 16.09.2021 at 12:52.
#15
You can customize the write delay in Settings -> Advanced Settings; search for
LinkCollector.minimumsavedelay / LinkCollector.maximumsavedelay
and
DownloadController.minimumsavedelay / DownloadController.maximumsavedelay

Quote:

1.) Databases also need memory to hold at least the index in order to find information fast. Without the index in memory, access will be very slow!
2.) Databases create a (huge) dependency. Right now JDownloader runs perfectly fine on nearly any platform/system/environment, see https://support.jdownloader.org/Know...bedded-devices
3.) The application still needs to hold some (yes, not all) data in memory.
4.) The complete data access would need to be reworked in order to properly synchronize reads/writes between memory and database/disk.
5.) It would impact performance on MANY levels, e.g. filter/search/download and simple things like scrolling through the list.

And most important: we did use an SQL database in the beginning, and it had so many more disadvantages than advantages that we moved away from it.

I still agree that storing everything in one large (zip) file has advantages and disadvantages. And yes, I still have plans to change to a different way/method of storing the information.

Advantages:
- easy to read/write AND rescue/restore
- you can easily create/update/modify it with normal tools
- good enough for *normal* use cases
- easy to back up, just one file
- copy on write, all or nothing

Disadvantages:
- performs (very) badly for large lists (*power* use case)
- large CPU/storage requirements for large lists
- large write output for large lists

The recommendation is not to use the list as an archive/history and to keep only those links you want to download. A database is a perfect fit for history/archive/duplicate... stuff. If you don't want to remove finished/archived entries from your lists, increase the minimumsavedelay and maximumsavedelay settings, e.g. set
DownloadController.minimumsavedelay to 300000 (5 min)
DownloadController.maximumsavedelay to 900000 (15 min)
That will cause JDownloader to write at most once every 15 minutes during downloads and to wait a minimum of 5 minutes after a normal change.

@TomArrow: Just out of interest, how many links do you have in the list,
and how large is your .zip file of the list?
Last edited by Jiaz; 16.09.2021 at 15:22.
#16
Quote:

The thing is, I like to archive various places, so I crawl them and run the linkcollector. If I delete the finished stuff, it won't know the stuff is already downloaded.

I have 116,895 links in 2,911 packages in the download list and 247,363 links in 20,442 packages in the link collector. My cfg folder is 3.5 GB. downloadList[number].zip is 81 MB; linkcollector[number].zip is 229 MB. And that's with me already having pruned the lists twice in the past, leaving me unable to detect duplicates as mentioned.

After my previous post I checked what settings I could find, and the ones you mentioned were among them. I changed them to 180000 and 1800000 (extra zero) and restarted JD2. Since then, JD2 has already written almost 200 GB according to Task Manager.

This is not "just" a performance issue. This is literally slowly destroying hardware, which costs money. It's also driving up electricity costs. JD2 *constantly* eats at minimum one whole CPU core, which is 25% of my entire computer's performance. On average I'd say it's closer to 50%, and it sometimes spikes to 70-80% or more. My CPU max is around 40 W. Let's say JD2 eats 15 W on average. I run 24/7, so in a year I'll be paying 40 EUR just for the electricity of having an open download manager. Add a new SSD every 2 years and it's probably 100 EUR per year that JD2 costs me. I know this sounds arrogant, but I don't think this is justified for a tool whose purpose is the occasional downloading of files. JD2 is by far the biggest consumer of system resources on my PC, and I regularly do video rendering and whatnot, so that says something.

I understand that an SQLite database is not as responsive as keeping everything in memory at all times, but that benefit is not only completely eaten up by how atrociously this solution scales, it results in performance so bad that the software stops responding, sometimes for a while. And I don't think my use case of 100k and 200k links respectively is in any shape or form "extreme" compared to what some data hoarders might need.

Portability I understand, but SQLite is cross-platform afaik. It's just a single file that you can do partial updates in; it doesn't require a full-blown SQL server or anything. And keeping the index in memory would certainly be preferable to this, also because of RAM usage, which is considerable too (4+ GB). Actually, thinking about it, you wouldn't even have to keep the index in memory. You could just query the database whenever the view (list) is scrolled. You'd only have to keep the unfinished/active records in memory, as well as the total count of packages (or links, for non-collapsed packages), so that scrolling works. I've done this with JavaScript against a remote PHP server and it was responsive enough; with a local single-file database it should be perfectly responsive imo. Not straight-outta-memory responsive maybe, but massively more scalable. And updates could happen instantaneously too. I don't know anything about Java, but in C# I could simply attach an event that watches for changed properties, and whenever one changes, the database is updated. All model-based, no query writing required even.

Again, sorry for being a d*** about this; it's just ruined my day so many times over the years that I've become a bit irrational about it, lol. I just wish this software could "behave itself" on my computer, haha. It's so awesome in every other way; fixing this issue would make it perfect. It's such a frustratingly unnecessary issue to have in an otherwise wonderful program imo. Anyway, have a good day, and I pray that one day the gods of JD2 will eradicate this issue! :D

Edit: I have to partially take back what I said about how much it wrote to disk this time, because I forgot to set the setting for the linkcollector too... sorry about that. I'll observe it for a few days and then give feedback on whether it helped. I still think my suggestion is a good one though, for all the other reasons. And it would still be better to update records in a table than to rewrite entire files, even if that only happens every few minutes.
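The per-row update idea proposed in this post can be sketched with SQLite. The schema, table, and column names below are made up purely for illustration; they are not JDownloader's actual data model:

```python
import sqlite3

# Hypothetical schema -- NOT JDownloader's real data model, just an
# illustration of per-row updates vs. rewriting one big zip file.
conn = sqlite3.connect(":memory:")  # a real setup would use a file, e.g. "downloadlist.db"
conn.execute("""
    CREATE TABLE IF NOT EXISTS links (
        id       INTEGER PRIMARY KEY,
        url      TEXT NOT NULL,
        package  TEXT,
        status   TEXT NOT NULL DEFAULT 'pending',
        loaded   INTEGER NOT NULL DEFAULT 0   -- bytes downloaded so far
    )
""")
conn.executemany(
    "INSERT INTO links (url, package) VALUES (?, ?)",
    [("http://example.com/a.zip", "pkg1"),
     ("http://example.com/b.zip", "pkg1")],
)

# A progress update touches a single row -- a small journal write,
# instead of serializing and compressing the entire list again.
conn.execute(
    "UPDATE links SET loaded = ?, status = 'running' WHERE url = ?",
    (1_048_576, "http://example.com/a.zip"),
)
conn.commit()

# The scroll-on-demand idea: fetch only the rows visible in the view.
visible = conn.execute(
    "SELECT url, status, loaded FROM links ORDER BY id LIMIT ? OFFSET ?",
    (25, 0),
).fetchall()
```

The point of the sketch is the contrast: a status change rewrites one row rather than the whole list, and the UI can page rows in on scroll via LIMIT/OFFSET instead of holding everything in memory.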
#17
Quote:

minimum 180000 = 3 min: changes are written to disk 3 minutes after the last change.
maximum 1800000 = 30 min: a pending write is delayed at most 30 minutes in total.

An example to explain those values: you add/modify/enable/disable/move a link = a change. Three minutes after this change, the changes are written to disk. When you download, the resume/position information keeps changing, so as the download continues the minimum counter is reset all the time and the write is delayed until the maximum (30 minutes) is reached.
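The behavior described above, a minimum quiet period that resets on every change plus a hard upper bound, is essentially a debounce with a cap. A minimal sketch of that logic follows; it is illustrative only, not JDownloader's actual implementation, and takes timestamps as plain millisecond arguments for clarity:

```python
class SaveScheduler:
    """Debounced saving: wait `minimum_ms` after the last change,
    but never delay a pending save longer than `maximum_ms` overall.
    Illustrative sketch only -- not JDownloader's actual code."""

    def __init__(self, minimum_ms, maximum_ms):
        self.minimum = minimum_ms
        self.maximum = maximum_ms
        self.first_change = None   # time of the first unsaved change
        self.last_change = None    # time of the most recent change

    def on_change(self, now_ms):
        # Every change resets the "minimum" countdown; the "maximum"
        # countdown keeps running from the first unsaved change.
        if self.first_change is None:
            self.first_change = now_ms
        self.last_change = now_ms

    def should_save(self, now_ms):
        if self.first_change is None:
            return False  # nothing pending
        quiet = now_ms - self.last_change >= self.minimum
        overdue = now_ms - self.first_change >= self.maximum
        if quiet or overdue:
            self.first_change = self.last_change = None  # saved; reset state
            return True
        return False
```

With minimum=180000 and maximum=1800000, a change every few seconds (like download progress) keeps resetting the minimum countdown, so the save only fires when the 30-minute maximum is reached, matching the behavior described above.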
#18
Quote:

Mostly downloading, with no changes in the linkcollector? Or managing/adding links to the collector? I'm asking to find the cause of the high write output: the default maximum delay is 1 minute, so during ongoing downloads the list is saved at most 60 times per hour, which would sum up to about 4 GB/hour. So it must be connected to your linkcollector use? Or have you set VERY large buffers in JDownloader, so that during downloads the changes take much longer?

Please help me understand your use case and I will try my best to help with possible/feasible optimizations! I already have a new proof-of-concept storage format in the works that reduces file sizes even more.
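The back-of-the-envelope figure above is easy to reproduce. This sketch assumes the ~81 MB downloadList zip size reported earlier in the thread; the ~4 GB/hour estimate quoted above presumably assumed a slightly smaller snapshot:

```python
# Rough write-volume estimate for the default save settings, assuming
# the ~81 MB downloadList*.zip snapshot size reported in this thread.
list_size_mb = 81                    # size of one full list snapshot (assumption)
max_delay_s = 60                     # default maximum save delay = 1 minute (per the post above)

saves_per_hour = 3600 // max_delay_s            # at most one full rewrite per delay window
gb_per_hour = saves_per_hour * list_size_mb / 1024

print(saves_per_hour, round(gb_per_hour, 2))    # ~60 saves/hour, roughly 4.75 GB/hour
```

Raising DownloadController.maximumsavedelay to 900000 (15 min) drops the same calculation to 4 rewrites per hour, i.e. about 0.32 GB/hour, which is the motivation for the tuning advice given earlier in the thread.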
#19
Please don't mind me not going into details about SQLite. We used it in the past, but the experience/performance was terrible and it heavily limited features/possibilities in JDownloader.
I'm sorry, but I don't see us going back to that sort of storage. I'll do my best to further optimize the current storage to reduce write output, though.
#20
Quote:
I understand your concerns/thoughts on the write output, and I'm confident we can improve it to a point that you can *live* with it.