#21
True, while closing JD2 the javaw.exe process uses much more RAM.
The application (the javaw.exe process) takes a long time before it closes, probably because it is saving.
#22
Quote:
Code:
[jd.controlling.linkcollector.LinkCollector$16(run) CollectorList found: 10564/1157667
java.lang.OutOfMemoryError: Java heap space
    at java.io.BufferedOutputStream.<init>(Unknown Source)
    at jd.controlling.linkcollector.LinkCollector$23.run(LinkCollector.java:2315)
    at jd.controlling.linkcollector.LinkCollector$23.run(LinkCollector.java:2249)
    at org.appwork.utils.event.queue.QueueAction.start(QueueAction.java:202)
    at org.appwork.utils.event.queue.Queue.startItem(Queue.java:491)
    at org.appwork.utils.event.queue.Queue.runQueue(Queue.java:425)
    at org.appwork.utils.event.queue.QueueThread.run(QueueThread.java:64)
#23
Thanks for the logfile. Now I know what you mean. Please update to the next core update in a few minutes.
__________________
JD-Dev & Server-Admin
#24
Quote:
1 million - 1.4 million links. The update causes damage to the archives! Previous size of the "LinkCollector": 657 MB, now 1.9 MB!!! (zip.backup 9 MB, linkcollector.zip). In the previous version of JD2 (for example dated 30.10.16) the archive was NEVER damaged, even when the power supply was forcibly cut off (so it was better!). The old archive gets renamed with a higher number, but it will keep causing damage again and again. A very serious bug! [screenshot]
#25
Previous version: (...) (...) - 30.10.2016 works.
This entry did not appear in previous logs; now I find it all the time. New: this causes damage to and removal of proper archives (even old archives!!!). Links are lost all the time.
Code:
------------------------Thread: 95:Log.L.log-----------------------
--ID:95TS:1478004548042-01.11.16 13:49:08 - [org.appwork.shutdown.ShutdownController$3(run)] -> Exit Now: Code: 0
------------------------Thread: 21:Log.L.log-----------------------
--ID:21TS:1478004548421-01.11.16 13:49:08 - [] -> java.net.SocketException: socket closed
    at java.net.TwoStacksPlainSocketImpl.socketAccept(Native Method)
    at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
    at java.net.PlainSocketImpl.accept(Unknown Source)
    at java.net.ServerSocket.implAccept(Unknown Source)
    at java.net.ServerSocket.accept(Unknown Source)
    at org.appwork.utils.singleapp.SingleAppInstance$1.run(SingleAppInstance.java:364)
    at java.lang.Thread.run(Unknown Source)
#26
Not confirmed - your screenshot doesn't prove anything.
How do we know that you didn't delete them yourself? Here are my lists - before (21:47) and after (21:51) the latest core update: [screenshot] Nothing was lost after updating; the file size changed after I moved 2 packages into Downloads.

Last edited by raztoki; 01.11.2016 at 23:41.
#27
editestowy -
1. You misread my previous post!!! LinkCollector!!! (NOT(!) the DownloadList.)
2. Did you test with 1,000,000 - 1,400,000 links??? 600 - 700 MB! Certainly not! So I do not understand your answer.
Jiaz had to update the core (it was supposed to reduce the resource usage when creating a backup, which caused damage to the archive, e.g. after the javaw.exe process takes a long time to terminate (more than 60 seconds timeout! Socket Exception error)).
I do not like the new update
https://board.jdownloader.org/showpo...5&postcount=19
https://board.jdownloader.org/showpo...6&postcount=20
because it brings catastrophic consequences. I'm going back to the old version. I'm sorry! I'm telling the truth.
To reproduce:
a) 1+ million links (600-700 MB)
b) Restart JD2
WARNING: Unexpected end of archive 0000 (broken) (1 MB)
#28
New update - unexpected bad results
Why does JD2 damage the archive after a restart (practically every time)? Note (!) it also removes the old archives. Closing the application takes several minutes. Perhaps it may work for 100,000 - 200,000 links, but it does not work correctly for 1,400,000 links (!) in the CollectorLinks. [screenshot]
#29
Please provide a complete logfile. Nothing changed, and the link loss you see has the same cause as your other bug report about those empty/zero-size files. It is the exact same cause.
The stacktrace you've provided has nothing to do with this.
__________________
JD-Dev & Server-Admin
#30
Please stop telling lies. The only difference is that instead of writing everything to memory first and then doing one big write to disk (which caused the high memory usage for you), the list is now written with a smaller buffer. So in fact it is the same issue you suffer from all the time: the shutdown does not leave enough time to write the list, and instead of a zero-size file (as before) you now get a half-finished file.
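To illustrate the difference described above, here is a minimal, hypothetical sketch of the two saving strategies; the class, method, and variable names are invented for illustration and this is not JDownloader's actual code:
Code:
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical sketch of the two list-saving strategies described in this post.
// "entries" stands in for the serialized link list; all names are invented.
public class ListWriteSketch {

    // Old behaviour (as described): serialize the whole list into memory first,
    // then do one big write to disk. Memory usage grows with the list size;
    // if the process dies before the write starts, a zero-size file remains.
    static void writeViaMemory(List<String> entries, String path) throws IOException {
        ByteArrayOutputStream inMemory = new ByteArrayOutputStream();
        for (String entry : entries) {
            inMemory.write(entry.getBytes(StandardCharsets.UTF_8)); // whole list held in RAM
        }
        try (OutputStream out = new FileOutputStream(path)) {
            inMemory.writeTo(out);                                  // one large write at the end
        }
    }

    // New behaviour (as described): stream entries straight to disk through a
    // small buffer (e.g. 1 MB). Memory stays low, but if the process is killed
    // mid-write, a half-finished file is left instead of a zero-size one.
    static void writeStreaming(List<String> entries, String path) throws IOException {
        try (OutputStream out = new BufferedOutputStream(
                new FileOutputStream(path), 1024 * 1024)) {
            for (String entry : entries) {
                out.write(entry.getBytes(StandardCharsets.UTF_8));  // flushed in chunks
            }
        }
    }
}
Either way, if the process is killed before saving finishes, the file on disk is incomplete; only the size of the leftover differs.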
__________________
JD-Dev & Server-Admin
#31
If I'm lying, please withdraw this update. I repeat once again: please remove this update. I do not need an update which causes damage (the loss of links). Thanks.
Previously the LinkCollector archive was never damaged (only the backup was 0 bytes). That's all from my side! I do not have the strength to prove everything every time (this is my personal opinion / testing). Sorry for everything. I'm just experiencing these problems, so I write about them. You do not believe me. Excuse me :(
#32
Quote:
Note: this is about a re-START of JD2, even a fully successful one (not to be confused with launching / running JD2!!!). It is an unfair suspicion from someone who does not have a million links to test with :(
#33
[screenshot]
#34
02.11.16 10.50.48 <--> 02.11.16 10.51.03 jdlog://3681881887641/
#35
New Update
I set -Xmx11G - is that too little for 1 million links? Why does it damage the zip archive XXX.zip & XXXzip.backup and lose links after a restart? Please answer. The old version with -Xmx6G or 7G supported 1+ million links; the new version does NOT - it corrupts the archive when JD2 is restarted.
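For context on the -Xmx value mentioned above: -Xmx sets the JVM's maximum heap size. Where it has to be configured depends on how JDownloader is launched; a .vmoptions file next to the JDownloader executable is a common mechanism for install4j-based installations, but that file name and location are an assumption here, not something confirmed in this thread:
Code:
-Xmx11g
Note that a larger heap only raises the point at which an OutOfMemoryError occurs; it does not by itself prevent a truncated list if the process is killed while the file is still being written.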
#36
How to: always auto-recover links after a run or restart of JD2?
I mean more than 10,000 packages and millions of links. [screenshot]
#37
This is still the same error as before.
Before, you had zero-byte files, and now you can see that JDownloader does not have enough time to save the complete list. Do you close JD normally or do you shut down the computer?
__________________
JD-Dev & Server-Admin
#38
Quote:
The question is why the file is broken/not finished. So do you close JDownloader normally, or is it killed (e.g. on computer shutdown)?
__________________
JD-Dev & Server-Admin
#39
Quote:
It is the same cause as the 0-byte files. Now JDownloader is able to start writing the files, but for an unknown reason JDownloader is killed before it finishes saving the list.
__________________
JD-Dev & Server-Admin
#40
Before the update:
Write 500 MB to memory and then begin writing to disk -> result: 0-byte file written, broken list.
Now: write directly to disk with a smaller buffer (1 MB max) -> result: the file size depends on how long JDownloader runs before it gets killed; broken list.
HOW do you do it? Please give the exact steps! This problem only happens for you, and we need to find out why. I've just tested with 500k links and 10k packages - all fine.
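As a hedged aside, one way to check whether a saved list archive is complete is to try opening it as a ZIP: a file truncated because the process was killed mid-write will typically fail to open, which matches the "Unexpected end of archive" warning reported earlier in the thread. The file name below is only an example:
Code:
import java.io.File;
import java.io.IOException;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

// Hypothetical integrity check for a saved list archive (e.g. linkcollector.zip).
// A file truncated because the process was killed mid-write usually fails to
// open as a ZIP archive at all.
public class ZipCheck {
    public static void main(String[] args) {
        File archive = new File(args.length > 0 ? args[0] : "linkcollector.zip");
        try (ZipFile zip = new ZipFile(archive)) {
            System.out.println("Archive opens fine, entries: " + zip.size());
        } catch (ZipException e) {
            System.out.println("Archive looks broken/incomplete: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("I/O error: " + e.getMessage());
        }
    }
}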
__________________
JD-Dev & Server-Admin

Last edited by Jiaz; 02.11.2016 at 14:42.