JDownloader Community - Appwork GmbH
 

Speedproblems

#1 - 28.09.2010, 01:13 - dacleric (Guest)

Hello,

I have speed problems using JD.

I know this is complaining at a high level, but I actually have a 100 Mbit (fibre) internet connection (not shared). When I use JD to download from Rapidshare, I have to focus the main window and close all other windows to get the full 10 MB/s.

If I watch a movie or do anything else that eats some (not all) CPU time, the download speed drops to 3-4 MB/s.

If I use CL instead of JD, I get 10 MB/s even when it sits in the tray and I am watching HD movies (downloading the same content at the same hour, just for this test).

The main difference is the disc activity. With JD my HDD is pretty noisy and constantly accessing the disc, even with a 2 MB disc cache set. CL is buggy and doesn't work as well as JD, but it has a great performance advantage when it works. Please, dear JD developers, do more disc caching.

My system:
Core2Duo @ 3.6 GHz, 4 GB RAM, 750 GB Samsung 7200 rpm disc.
#2 - 28.09.2010, 12:51 - remi (Guest)

In JD the maximum buffer size is 2000 (i.e. about 2 MB). What would you suggest as a maximum?

When you watch a movie, it may consume almost all of your CPU cycles. You can try to work around that by increasing the process priority of the java(w) process (for example via Task Manager on Windows).
#3 - 29.09.2010, 02:54 - dacleric (Guest)

Watching the movie costs around 10-20% CPU time; it is GPU accelerated. Even doing nothing but downloading creates a lot of I/O and around 15-20% CPU load. The problem is still the I/O load: my disc is nearly unusable while I download at ~10 MB/s, and I don't have an SSD.

I already set it to 2 MB, but I am not sure there is a big difference. I also don't know how the cache and the disc write-back work, but I would like to try 20 MB or something like that.

Last edited by dacleric; 29.09.2010 at 03:08.
#4 - 29.09.2010, 13:02 - remi (Guest)

Allow java and javaw in your firewall and antivirus software. Disable (or uninstall) the port 80/HTML/web scanner in your antivirus software.

There might be something wrong with your disk. Have you already tried another (external) disk?
#5 - 04.10.2010, 06:20 - dacleric (Guest)

I checked everything.

It is JDownloader; the cause seems to be the preallocation and writing the downloaded data at the same time.

I attached a screenshot.

Second try:
http://img693.imageshack.us/img693/7...ownloaderb.png

Attached: jdownloader.jpg (17.1 KB)

Last edited by dacleric; 04.10.2010 at 06:28. Reason: Nice image resizing ;)
#6 - 04.10.2010, 10:55 - drbits (JD English Support)

If nothing else is going on, JDownloader will fragment your disk, but not as badly as most programs do (because of the large buffer). However, defragmenting your disk is recommended if you are hearing a lot of noise from it. Most systems come from the factory with high disk fragmentation. If your pagefile or MFT is fragmented, you might need to use Pagedefrag (from Microsoft) or a commercial defragmenter.

A good, free defragmenter is MyDefrag (mydefrag.com). There are many others available. This may double your disk speed.

Please check your OS disk caching (you did not specify your OS). If you are on Windows, make sure that you are not running so many programs that they have to be swapped out to the page file.

With 4GB of memory, I am guessing that you are using Windows 6.x (Vista or "7") 64-bit or Linux. Please make sure that the last version of Java you installed is an up-to-date (6u20 or 6u21) 32-bit runtime. Unless you have a program that uses over 2GB of memory, very few programs gain an advantage from a 64-bit OS (it mainly allows you to install more than 3GB of memory).

After you have defragmented your drive and made sure you are running the correct version of Java, please check whether that fixes your problem (and let us know). If it does not, post URLs and upload a log (my signature contains a link to instructions). This will allow us to check for anomalies in JD.

-------

Ideally, you should have two hard drives in your computer. The first drive should hold your OS and program files. The second hard drive should hold your data files. If you are using Windows, you will probably need a page file on the OS drive as well (I think it is 128MB, to allow dumps). The second hard drive should have a 4GB page file.

Make sure that the drive JDownloader writes to is set up for cached writing (less secure in case of a system crash, but much faster).
__________________
Please, in each forum: Read the Rules! | Helpful Links | Read before posting.
#7 - 05.10.2010, 04:53 - dacleric (Guest)

Okay, here is a short report.

My system:

Windows 7 64-bit, 4GB RAM
HDD: 750GB 7200rpm SATA with cache activated
Java: 32-bit Version 6 Update 21 (1.6.0_21-b07)

Running only JDownloader doesn't fix anything. The HDD is defragmented. The download and preallocation take ages and create a heavy I/O load compared to extracting the downloaded files. JDownloader should not be swapping; or is there a maximum RAM usage I can set for a Java application? I won't start using 64-bit Java until 1.7.x is stable.

My point is that JD does much more I/O than it is supposed to. I saw peaks of 30+ MB/s of I/O at 8 MB/s download speed.

I wonder about some things:
It seems that the writes are very fragmented (and that this is the reason for the heavy disk noise), but shouldn't it be possible for HTTP download writes to use big write-back buffers?

Why is there a 2 MB limit on the disc write cache? Shouldn't I have at least 2 MB for each download?

cleric

Last edited by dacleric; 05.10.2010 at 04:54. Reason: removed c&p spam :/
#8 - 05.10.2010, 09:17 - Statter (JD VIP)

Quote:
Originally Posted by dacleric (post #7, quoted in full above)
Although this really does not have much to do with JD speed problems (I/O peaks are more a function of the OS and hardware; occasionally a poorly written app can be a cause, but most apps let the OS handle I/O calls, and Java is still buggy in itself), there are several reasons for the fragmentation, and there is not much JD can do about it, as it is also a function of the particular OS you use, along with some basic caching issues regardless of OS. Windows 7, like any version of Windows, is notorious for fragmentation.

As for the 2 MB maximum disc write cache in JD: raising it can get really unstable, regardless of how much RAM is in the machine and which OS it runs, if it is not done very carefully, and even then it can still cause problems and instability between applications, memory, and disc usage and space.

If you really want details, the spoiler below has some examples of testing on various OSes, from various sources, and their findings.

It is sorta long, so grab a drink!

Spoiler:
The concept of dedicated memory used for disk cache is not valid; the OS will know if a disk block is still in memory and "reclaim" it rather than access the disk drive. So effectively all memory not being used for programs becomes a "disk cache". You will not see this in action because the OS releases unused pages, but if you check with vmstat (most OSes) or vm_stat (OS X) and see a high page reclaim rate and lots of free memory, it is working correctly.

The amount of memory utilized at a given time will depend on the number of applications running, the amount of memory allocated to each application, and so on. Setting the memory to a particular size, especially trying to set the maximum size, will make the situation worse. It is important to remember that the file cache competes for access to the same real memory resources as all other applications.

One additional process at work here needs to be understood. Caching transforms some logical file I/O operations from synchronous requests into asynchronous disk requests. These transformations are associated with read-ahead requests for sequential files and lazy-write deferred disk updates. As the name implies, read-ahead requests are issued in anticipation of future logical I/O requests. (These anticipated future requests may not even occur; think about the number of times you open a document file in MS Word but do not scroll all the way through it.) Lazy-write deferred disk updates occur some time after the original logical file request. The update needs to be applied to the physical disk, but the required physical disk operation is usually not performed right away. So what is happening at the physical disk right now, as expressed by the current logical and physical disk statistics, is usually not in sync with logical file requests. This is the influence of caching. Caching makes it almost impossible to determine which applications are causing a physical disk to be busy, except under very limited conditions (when very few applications are using the disk, for example).

Overrunning the cache: when it is impossible to buffer the entire file in memory, the random access pattern forces many more cache misses. In fact, the overall Copy Read Hits % is only about 48%. Because there are many more misses to process, the Performance Probe program is able to perform far fewer cached file I/O operations, slightly under 70 reads per second. Lazy-write activity increases sharply to about 30 lazy writes per second, flushing about 50 I/Os per second from the cache to disk. Similar numbers of data flushes and data-flush pages per second are also occurring.

If you are not careful, many operating systems may trim back the working sets of other applications too much with the LargeSystemCache setting in effect. Some applications may become slow and unresponsive due to excessive page stealing directed at them. If file cache activity heats up, there may be a noticeable delay when desktop applications are swapped back into memory following a period of inactivity. Due to high paging rates, any application that suffers a hard page fault may encounter delays at the busy paging disk. When LargeSystemCache is set to 1, the behavior in most operating systems is to preserve 20% of RAM for other applications (including services).

If you read that, I hope it clears up some issues for you, in the realm of caching at least.
__________________
OS X 10.6.8 Mac Pro Intel (Workhorse)
OS X 10.10.5 MBP Intel (Secondary)

Last edited by Statter; 05.10.2010 at 10:03.
#9 - 05.10.2010, 14:52 - dacleric (Guest)

The peaks I was talking about were not something that appeared for only one second, but more like 30 seconds, and it is the java process.

I wouldn't have created this thread if other downloaders had the same issue, but they don't.

JDownloader is nice and has all the features I want, but the download speed versus system resource usage is pretty poor, at least on my machine.

As I wrote in my first post, when using CL instead of JD I don't have any speed problems. But since CL, aside from its nice download speed with little system resource usage, is not to my taste in terms of features and stability (when parsing links etc.), it is not an alternative.
#10 - 05.10.2010, 21:37 - Statter (JD VIP)

OK, sorry, I am unfamiliar with the CL downloader. Is that something like the Common Lisp download app out there, or something else?

The reason I ask is that, if so, it does not use Java in any way to my knowledge, which would explain (to me at least) why CL reacts a bit differently than JD with the I/O peaks etc. If it is another CL downloading app I am unfamiliar with, then I would need to look at it as well to see what it uses for its base coding etc.
#11 - 06.10.2010, 00:12 - dacleric (Guest)

Sorry, I was talking about CryptLoad, and yes, it is not written in Java.

I don't want to offend anyone!

Since I am playing with Android development, I will take a look at how I/O works in Java.
#12 - 06.10.2010, 09:17 - drbits (JD English Support)

If you watch closely, you will notice that JD usually writes a full buffer at a time. That is a much larger write than most other programs make. Most experts believe that 2GB fragments will not significantly slow your computer.

Unless there is a problem with your security software, CryptLoad should not download faster than JD, and it will probably cause more disk fragmentation than JD.

If you are downloading more than one file at a time, the files will be interleaved on disk. For example, when downloading three files, you might end up with abacbca... If you are downloading a file over multiple connections, some preallocation will occur and you will have less disk fragmentation. However, for most hosts this requires a premium account. For example, if Max.Con.=4, then 3/4 of the file will be preallocated.

If you have more than enough RAM to hold all of the running programs, Windows will not page programs out to disk and read them back. Programs like Process Explorer or Task Manager will show that a program is only allocated a fraction of the working memory it has requested, but Windows leaves all but a few of the memory pages in place without changing them. If a page is needed, it is added back to the program and not written to disk. If a page sits unused in memory for a while, Windows will write it to the paging file, but as long as the page is in memory, it will not be read from disk.

Make sure the total initial size of all of your paging files is at least as large as your RAM. Do not allow Windows to dynamically manage the paging file or it will be fragmented and slow your system.

In your case, with 4GB of RAM, you should have two HDDs. The first HDD should hold the OS, programs, and the small files created in Users. The second HDD should hold all of the larger data files. This way, the first HDD will always have low fragmentation.

The OS requires a minimum-size paging file on the drive with the Windows directory, so that a kernel dump can be saved when the system crashes. I believe this minimum is 64MB. The maximum size of a paging file is 4GB (although it might be larger on Windows x64). Allocate a maximum-size paging file on the second (data-only) drive. Allocating the paging file this way can significantly increase the overall speed of your system.

Under Windows 7, you should always use the 32 bit version of Java, except for programs that require more than 2GB of virtual memory. Very few programs (except for some games) run significantly faster under x64, regardless of the programming language.

On a newer computer running Windows 7, JDownloader.exe calls javaw.exe as:
javaw -Xmx512m -jar JDownloader.jar

-Xmx sets the maximum amount of memory that can be used by the heap (dynamically allocated memory). Unless you have a very large number of connections (Max.Con. times Max.Dls) or a huge links list, JD rarely uses over 350MiB of the 512MiB limit. If the buffer size limit were increased to 256MiB (instead of 2MiB), then you should use -Xmx1536m and -Xms512m (initial heap size).

As a general rule, when you purchase your second hard drive, buy one with a large cache (for example, 64 MiB). This can significantly increase the cost of the drive, but it will speed up your I/O. In addition, the JD buffer size should probably not be more than half of the drive's cache size.

Some programs, such as Diskeeper, will attempt to keep your hard drive defragmented. In addition to normal defragmentation, they change the way that Windows allocates disk space. This is probably not worthwhile.
#13 - 06.10.2010, 13:13 - dacleric (Guest)

THX drbits!

Using

"C:\Program Files (x86)\Java\jre6\bin\javaw" -Xmx1536m -jar JDownloader.jar

did the trick.

No more heavy disc access from JDownloader. GREAT!
#14 - 02.11.2010, 16:02 - remi (Guest)

@drbits

Your articles are getting better and better. Where will it end?

But, WTH ... is

Quote:
Originally Posted by drbits
Most experts believe that 2GB fragments will not significantly slow your computer.
a quote from your upcoming Science Fiction book?
#15 - 03.11.2010, 22:31 - drbits (JD English Support)

Quote:
a quote from your upcoming Science Fiction book?
Needs some work, huh?

Large files that do not get used a lot can be fragmented within reason. The things you do not want fragmented are usually all less than 2GB in size (except the pagefile and hiberfil).

What slows Windows down the most is fragmented boot files (this is why, by default, Windows defrags those files every 3 days), the MFT (Master File Table), and directories.

The boot files are all of the files that get loaded within 1 minute after the OS starts.
The hardest file to keep defragmented is the journal that records information for System Restore. But if you don't use System Restore very often, it does not matter.
_________________________________________

Hiberfil is the size of RAM. Until recently, that was limited to 4GB on a desktop. Even 32GB of RAM will only produce 16 fragments if they are 2GB each. For large RAM (over 1GB), the recommended pagefile size is the same as the memory size, or up to 1.5 times the RAM size. Except for 3D graphic designers, 2GB fragments do not matter.