JDownloader Community - Appwork GmbH
 

  #1  
Old 20.08.2015, 04:18
tdguchi
Guest
MaxBufferSize top limit 50240

Hi, I'm using JDownloader 2 (fully updated) and I'm downloading to an HDD (not an SSD). Every time JDownloader downloads something, it writes the content to the disk as it goes.

What I want is to download to RAM or a cache, and only move the data to the disk once the cache is full.

The advanced settings only allow a buffer of up to 50240 KB, but I want to raise it to 1 GB (I have 6 GB of RAM and only 2 GB is used by the OS) or more.

Is this possible?

Thanks
  #2  
Old 20.08.2015, 05:29
raztoki
English Supporter

Join Date: Apr 2010
Location: Australia
Posts: 16,717

The cache is per download, and possibly per chunk (assumption); it is not a shared cache for all downloads. You definitely do not want to increase it to 1 GB: the larger you make it, the more you lose if a fault happens before the data is written to disk (for example, if the JD process is force-killed), because everything still in the buffer is gone.

raztoki
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170

Don't fight the system, use it to your advantage. :]
  #3  
Old 20.08.2015, 07:23
Jiaz
JD Manager

Join Date: Mar 2009
Location: Germany
Posts: 66,134

JDownloader fills its buffer, then hands the data to the OS, and it is the OS's decision when to actually write it to disk. Tune your OS settings; there is nothing to change in JDownloader.
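For Linux users, the write-back behaviour Jiaz describes can be inspected (and, with root, loosened) through the kernel's `vm.dirty_*` sysctls. A minimal sketch, assuming a Linux system with the usual `/proc/sys` interface; the suggested values in the comment are examples only:

```shell
# Show the kernel's current write-back thresholds (Linux only).
# Dirty (not-yet-written) data accumulates in RAM up to these limits
# before the kernel flushes it to disk.
cat /proc/sys/vm/dirty_background_ratio  # % of RAM before background flushing starts
cat /proc/sys/vm/dirty_ratio             # % of RAM before writers are forced to flush
cat /proc/sys/vm/dirty_expire_centisecs  # max age of dirty data before flush (1/100 s)

# Raising the limits delays writes (needs root), e.g.:
#   sudo sysctl vm.dirty_background_ratio=20 vm.dirty_ratio=40
```

On Windows the equivalent knob is the drive's write-caching policy in the device properties, not anything inside JDownloader.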
__________________
JD-Dev & Server-Admin
  #4  
Old 20.08.2015, 13:28
tdguchi

So there is no way to use a temp folder in RAM and then write the file to disk once it is finished?

Or to use a temporary folder for unfinished downloads? (I have an SSD and an HDD; I would like to download the .part files to the SSD and then move them to the HDD when finished.)

Thanks
  #5  
Old 20.08.2015, 15:17
raztoki

@tdguchi

Google "RAM drive"; this is not a function of our program but of something dedicated to that task. Just be aware that RAM drives typically do not preserve data, so if the computer restarts, the data is gone.
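On Linux, one common way to get such a RAM drive is a tmpfs mount; a minimal sketch (the 2 GB size and the /mnt/ramdisk path are just examples, and the contents vanish on unmount or reboot exactly as warned above):

```shell
# Create a 2 GB RAM-backed drive with tmpfs (Linux, requires root).
# Everything stored here lives in RAM and is lost on unmount/reboot.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk

# Point the download/temp folder at /mnt/ramdisk, and remember to move
# finished files to a real disk before shutting down.
```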

You can make packagizer rules that involve extraction:
- extract the content to a customised path after downloading to the default save path or a customised save path
For non-extraction paths:
- download directly to the end location with a "save to" path.

raztoki
  #6  
Old 21.08.2015, 02:19
tdguchi

Only in a few cases is the file a compressed archive.

For example, a 1080p video file (around 2 GB) won't be "decompressed", so I can't set a download folder and a decompression folder.

A complete solution is the method uTorrent, eMule, MiPony etc. use: one folder for the .part files and another that the completed download is moved to.

Whether I then use a RAM-disk temp folder or my SSD, this is the best solution.

Regards
  #7  
Old 21.08.2015, 08:56
Jiaz

Why use a temp folder for downloads at all? What do you think it would help?
A temp download folder with moving of the finished file will not be added, as it only causes a lot of issues (file-exists checks, extraction, mirror handling) and has no real advantages.
  #8  
Old 21.08.2015, 13:41
tdguchi

Why a temp folder?

Because I don't want my HDD rattling ("cricricricricri") all the time. I could use a RAM-disk temp folder (2 GB, 4 GB) or my SSD for partial downloads; then, when a download finishes, the whole 4 GB is written at once. That is better for the HDD's lifetime.

File checks, extraction... are you kidding? Where is the problem? Extract only when the whole package is located in the second folder. Same for the file-exists check, just also checking for a .part in the temp folder. It's easy for someone who has mastered Java.

Regards
  #9  
Old 21.08.2015, 14:01
Jiaz

"cricricricricri"??
HDDs are there to be used. I really doubt you write so many TB/PB that your disk will die from the sheer amount of data; most HDDs die at startup. I'm sorry, but these days there is absolutely no need for this. And if you really want to *save* your HDD, tune your OS to use larger buffers / delayed writes.

It's not about the programming language, it's about the complexity of the changes:
-> file-exists check -> must then be done in two folders (temp and final)
-> reset/delete -> delete in temp only? temp and final? final only?
-> space check -> needs to check/watch two folders
-> temp and final folders on different drives -> rename no longer works -> the file must be copied from temp to final -> takes some time
-> extraction -> are all files in the same place?
-> download folder / package rules -> create in the temp folder, the final folder, or both?

I'm sorry, but I don't see any real reason for such complex changes. A *normal* user will hardly write enough data to reach the lifetime write limit of an HDD/SSD.
  #10  
Old 21.08.2015, 14:17
tdguchi

First, hearing that noise while using my laptop is annoying; that's why we use silent power supplies, silent fans, etc. Now the noise is the HDD writing.

Second, my HDD could be in a laptop, or be external. In both cases, an HDD that is writing all the time increases the risk that moving it gives it a shock. If the disk is not in use, the head is parked off the platters, so a shock won't damage it as much as when the disk is being used.

PS: I don't know how to tune Windows 10 to use a larger buffer. If you know how, I would be pleased if you could teach me how to configure JD and my OS.

Regards
  #11  
Old 21.08.2015, 14:32
Jiaz

I will allow bigger caches, if you are happy with that solution.
Memory is then the only limit. But even a 50 MB cache is already a lot, depending on your internet connection speed:
imagine you download at 1 MB/s; then there is only one disk write every 50 seconds.
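Jiaz's arithmetic, spelled out; the inputs are the thread's own numbers (the 50240 KB buffer cap and an assumed 1 MB/s connection):

```shell
# Flush interval = buffer size / download speed.
buffer_kb=50240   # the advanced-settings cap discussed above (~49 MB)
speed_kbps=1024   # a 1 MB/s download
echo "$((buffer_kb / speed_kbps)) seconds between disk writes"   # -> 49 seconds
```

("One write every 50 secs" simply rounds the 50240 KB cap up to 50 MB.)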
  #12  
Old 21.08.2015, 14:35
raztoki

In my many years of running physical hard drives, you are better off having them spin all the time than spinning them down when idle. You'd be surprised how much longer they last at a constant spin than stop-starting all the time. That said, you will use more power.

Jiaz is right: moving a part file from a temp path to the end location (CRC check [read] / extraction [read & write]) requires a move/copy to another device, versus a rename on the same path/device, which is instant. Effectively, you would do twice the read/write wear on your hardware by moving everything all the time.
  #13  
Old 21.08.2015, 14:43
tdguchi

The first reply said it is not a very good idea to increase that...

I'm curious how to increase the OS write delay; how would I do it?

Is there any way to run a command when a download is finished? Something like the "shutdown on finish" that is already in JD, but running any other command, like a batch file with OS instructions.

That could be another solution: let the user execute robocopy (on Windows) or mv (on Linux) on every download finish.
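As a sketch of what such an "on finish" command could do, here is a hypothetical POSIX-shell mover that relocates completed files and leaves .part files alone. Every path and filename in it is made up for the demo; it uses throwaway mktemp directories instead of a real ramdisk/HDD:

```shell
#!/bin/sh
# Demo: move finished downloads from a fast temp folder to a final folder,
# skipping unfinished .part files.
TEMP_DIR="$(mktemp -d)"    # stand-in for a ramdisk/SSD temp folder
FINAL_DIR="$(mktemp -d)"   # stand-in for the HDD destination

# Simulate one finished download and one still in progress.
touch "$TEMP_DIR/video.mkv" "$TEMP_DIR/archive.rar.part"

for f in "$TEMP_DIR"/*; do
    case "$f" in
        *.part) ;;                             # partial data stays on the temp drive
        *) [ -f "$f" ] && mv -- "$f" "$FINAL_DIR"/ ;;
    esac
done

ls "$FINAL_DIR"   # -> video.mkv
```

In a real setup you would replace the mktemp directories with the actual temp and destination folders (and use robocopy instead of mv on Windows, as suggested above).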

regards
  #14  
Old 21.08.2015, 14:48
Jiaz

Use the EventScripter for such things (calling an external tool).
I will add the possibility to increase the buffer -> still not a good idea, but it is up to you to use more memory.

Write delay -> easy on Linux; on Windows, google it, I don't know.
  #15  
Old 21.08.2015, 14:53
tdguchi

I'm not an expert. A reply said "you definitely don't want that", and if you read back, I had already dropped that idea and was asking about a temp folder instead.

"cache is for each download and maybe for each chunk (assumption), its not a cache for all downloads"

If the max buffer is for all downloads, increase it; if not, I will believe what he says and agree that it's not a good idea to increase it.
  #16  
Old 21.08.2015, 15:07
Jiaz

At the moment the cache is per chunk <-> the next download core will have a JD-wide write cache (download, extraction, ...), so bigger caches will make much more sense then.
So for now I suggest using caches between 2 and 4 MB.
  #17  
Old 21.08.2015, 15:20
tdguchi

OK, thanks for your replies.

Sorry if I sounded rude; that wasn't my intention.

I will set my cache to 4 MB.
  #18  
Old 21.08.2015, 15:20
Jiaz

No harm done