Thread: [Solved] Speed Problems
  #8  
Old 05.10.2010, 08:17
Statter is offline
JD Legend
 
Join Date: Aug 2010
Posts: 541

Quote:
Originally Posted by dacleric
Okay here a short report:

My System:

Windows 7 64-bit, 4 GB RAM
HDD: 750 GB 7200 rpm SATA with write cache enabled
Java: 32-bit Version 6 Update 21 (1.6.0_21-b07)

Running only JDownloader doesn't fix anything. The HDD is defragmented. The download and precaching take ages and create a heavy I/O load compared to extracting the downloaded files. JDownloader should not swap. Or is there a maximum RAM usage I can set for a Java application? I won't start using 64-bit Java until 1.7.x is stable.

My point is that JD does much more I/O than it is supposed to. I was seeing peaks of 30+ MB/s of I/O at an 8 MB/s download speed.

I wonder about some things:
The writes seem to be very fragmented (which I suspect is the reason for the heavy disk noise), but shouldn't it be possible for HTTP download writes to build big write-back cache buffers?

Why is there a 2 MB limit on the disk write cache? Shouldn't I have at least 2 MB for each download?

cleric
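To take the side question first: yes, you can cap a Java application's heap when you launch it, with the stock JVM flag -Xmx (for example, java -Xmx512m -jar JDownloader.jar; the 512m is just an illustrative value, and adjust the jar path to your install). A quick way to check what ceiling the JVM is actually running with:

Code:
// Prints the heap ceiling the JVM was started with (set via -Xmx).
// Plain illustration, nothing JD-specific.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory(); // Long.MAX_VALUE if unlimited
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}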
As for the main question: this really doesn't have much to do with JD speed problems. I/O peaks are more a function of the OS and hardware (occasionally a poorly written app can be the cause, but most apps let the OS deal with I/O calls, and Java itself is still buggy). There are several reasons for the fragmentation, and there is not much JD can do about it: it is a function of the particular OS you use, along with some basic caching behavior common to every OS. Windows 7, like any version of Windows, is notorious for fragmentation.

As for the 2 MB maximum disk write cache in JD: raising it can get really unstable, regardless of how much RAM the machine has or which OS it runs, if it is not done very carefully. Even then it can still cause problems and instability between applications, memory, and disk usage and space.
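To make the trade-off concrete, here is a minimal sketch of what a per-download write buffer looks like. This is my own illustration with my own names, not JD's actual code: a bigger buffer means fewer, larger disk writes, but every active download pins that much more heap.

Code:
import java.io.*;

public class BufferedDownloadWrite {
    // Hypothetical per-download buffer; 2 MB mirrors the limit discussed above.
    private static final int BUFFER_SIZE = 2 * 1024 * 1024;

    public static void copy(InputStream net, File target) throws IOException {
        OutputStream out = new BufferedOutputStream(
                new FileOutputStream(target), BUFFER_SIZE);
        try {
            byte[] chunk = new byte[64 * 1024];
            int n;
            while ((n = net.read(chunk)) != -1) {
                out.write(chunk, 0, n); // stays in memory until 2 MB accumulate
            }
        } finally {
            out.close(); // flushes whatever is left
        }
    }
}

Multiply that buffer by 10 or 20 simultaneous downloads and the heap pressure (and with it the GC and swapping behavior) changes noticeably, which is why simply raising the limit is not a free win.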

If you really want the details, the spoiler below contains some example test results and findings for various OSes, gathered from various sources.

It's sort of long, so grab a drink!

Spoiler:
The concept of dedicated memory used for disk cache is not valid: the OS knows when a disk block is still in memory and will "reclaim" it rather than access the disk drive. So, effectively, all memory not being used by programs becomes a disk cache. You will not see this in action because the OS releases unused pages, but if you check with vmstat (most OSes) or vm_stat (OS X) and see a high page-reclaim rate alongside lots of free memory, it is working correctly.

The amount of memory utilized at a given time depends on the number of applications running, the amount of memory allocated to each application, and so on.
Setting the cache to a particular size, and especially trying to set a maximum size, tends to make the situation worse. It is important to remember that the file cache competes for the same real memory resources as all other applications.

One additional process at work here needs to be understood. Caching transforms some logical file I/O operations from synchronous requests into asynchronous disk requests. These transformations are associated with read-ahead requests for sequential files and with lazy-write deferred disk updates. As the name implies, read-ahead requests are issued in anticipation of future logical I/O requests. (These anticipated future requests may never even occur; think about the number of times you open a document in MS Word but do not scroll all the way through it.)

Lazy-write deferred disk updates occur some time after the original logical file request. The update needs to be applied to the physical disk, but the required physical disk operation is usually not performed right away. So what is happening at the physical disk right now, as expressed by the current logical and physical disk statistics, is usually not in sync with logical file requests. This is the influence of caching. It makes it almost impossible to determine which applications are causing a physical disk to be busy, except under very limited conditions (for example, when very few applications are using the disk).
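As a toy illustration of the lazy-write idea (the names and structure are mine, not how any real OS implements it): callers hand their buffer off and continue immediately, while a background thread applies the deferred updates to the disk later.

Code:
import java.io.*;
import java.util.concurrent.*;

public class LazyWriter {
    private final BlockingQueue<byte[]> dirty = new LinkedBlockingQueue<byte[]>();

    public LazyWriter(final File target) throws IOException {
        final OutputStream out = new FileOutputStream(target);
        Thread flusher = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        out.write(dirty.take()); // the deferred physical write
                    }
                } catch (Exception e) {
                    // toy code: a real implementation handles errors and shutdown
                }
            }
        });
        flusher.setDaemon(true);
        flusher.start();
    }

    // Returns immediately; the disk sees the data "sometime later".
    public void write(byte[] block) {
        dirty.add(block);
    }
}

This mismatch in timing is exactly why the current physical disk statistics rarely line up with the logical file requests that caused them.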

Overrunning the cache: when it is impossible to buffer the entire file in memory, a random access pattern forces many more cache misses. In fact, the overall Copy Read Hits % is only about 48%. Because there are many more misses to process, the Performance Probe program can perform far fewer cached file I/O operations, slightly under 70 reads per second (at a 48% hit rate, roughly 36 of those each second miss the cache and must go to disk). Lazy-write activity increases sharply to about 30 lazy writes per second, flushing about 50 I/Os per second from the cache to disk. Similar numbers of data flushes and data-flush pages per second also occur.

If you are not careful, Windows may trim back the working sets of other applications too much with the LargeSystemCache setting in effect (it is a registry value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management). Some applications may become slow and unresponsive due to excessive page stealing directed at them. If file-cache activity heats up, there may be a noticeable delay when desktop applications are swapped back into memory following a period of inactivity. Due to high paging rates, any application that suffers a hard page fault may encounter delays at the busy paging disk.
Even with LargeSystemCache set to 1, the behavior is to preserve roughly 20% of RAM for other applications (including services).


If you read all of that, I hope it clears up some of the issues for you, at least where caching is concerned.
__________________
OS X 10.6.8 Mac Pro Intel (Workhorse)
OS X 10.13.6 MBP Intel 17" (Secondary)
OS X MBP Intel 15" Dual boot 10.6.8 and 10.13.6 (as needed)

Last edited by Statter; 05.10.2010 at 09:03.