JDownloader Community - Appwork GmbH
 

#1 - 19.09.2017, 16:51 - TheQL (Guest)
uploaded.to downloads never finish larger files

First of all: I searched and didn't find anything, so sorry if this is already being discussed.

I am not reporting this lightheartedly; the issue has persisted for several weeks now. From a DLC I am trying to download several files from uploaded.to. Most are split archives with the first part being 500MB in size; the following parts are smaller. A few of the larger downloads completed some time in the past, but now all I see is the download starting and aborting some time later. I have no premium account. So after waiting it out and solving tons of captchas at a much too high frequency (the 1h wait time just starts over and over, yet captchas seem to appear even more frequently), the download restarts. It doesn't resume, it starts from scratch, aborts again, and the cycle continues. Some of the smaller split archives complete from time to time, but in total that is of no use.

The setup is running on a Synology NAS (ARM architecture) in headless mode. I update JDownloader via the built-in update function quite frequently, but nothing has changed with the last updates. I am not sure whether the issues started with a Java update, but I don't believe so.

Is this known/common/normal? Do you need any more info or maybe log output? My ISP should not be an issue; there are no forced line disconnects or the like, and that would not affect each and every larger download anyway. Any ideas or help would be highly appreciated! Thanks!

Some additional info:
I have ONE free account set up for uploaded.to; I have no idea whether I should add more, remove that one, or what is best without premium. I did not have any of the experimental plugin features set, but have just activated "Uploadedto: EXPERIMENTALHANDLING
Activate reconnect workaround for freeusers: Prevents having to enter additional captchas in between downloads."
This will probably not help much, if at all...

Last edited by TheQL; 19.09.2017 at 16:59. Reason: Additional info
#2 - 20.09.2017, 16:46 - Jiaz (JD Manager)

Uploaded restricts the resume feature to premium customers only.
Please provide a logfile. You can create one from the web interface under Settings, choose the correct time range, and post the shown logID here.
#3 - 21.09.2017, 14:39 - TheQL (Guest)

Thanks, did that. jdlog://7640614015941

This was after startup, entered captcha, download of file started from scratch and then stopped at some time (39% of a 501MB file).
#4 - 21.09.2017, 14:42 - Jiaz (JD Manager)

Your log is full of network connection issues:
java.net.SocketTimeoutException: Read timed out
Are you sure the network connection to your NAS is stable/error-free?

The log is full of errors. Each such 'Read timed out' means that JDownloader did not receive a single byte for more than 60 seconds.
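If you want to see the same behaviour outside of JDownloader, here is a rough sketch with wget (the URL is only a placeholder for any direct download link); --read-timeout mirrors JDownloader's 60-second limit:
Code:
# aborts as soon as no data arrives for 60 seconds, the same threshold JDownloader uses
wget --read-timeout=60 -O /dev/null "http://example.com/some-direct-download"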
#5 - 21.09.2017, 16:22 - TheQL (Guest)

Thanks for checking, I will look into that as well.
My NAS has some issues regarding RAM usage but is otherwise fine, I would assume. Even when I stop other services and swapping stops, I cannot see any improvement with JDownloader. Other file hosters also didn't have issues, but I can try to verify that as well. The transfer speed also seems to match what can be expected - Usenet downloads are stable and fast. But I will run some network checks nevertheless.

So, I looked at some data, all looks quite well:
Code:
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:11:xx:xx:xx:xx
          inet addr:192.168.3.250  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: 2003:aa:f4b:f00:211:32ff:fe3d:3382/64 Scope:Global
          inet6 addr: fe80::211:32aa:fe3d:3382/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:376175206 errors:0 dropped:0 overruns:0 frame:0
          TX packets:212339161 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1024
          RX bytes:2753647440 (2.5 GiB)  TX bytes:3267538788 (3.0 GiB)
          Interrupt:73
.
.
.

# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Half 1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes
some pinging...
Code:
# ping -i 0.1 ix.de
PING ix.de(redirector.heise.de (2a02:2e0:3fe:1001:302::)) 56 data bytes
.
.
.
--- ix.de ping statistics ---
208 packets transmitted, 208 received, 0% packet loss, time 22933ms
rtt min/avg/max/mdev = 22.013/23.975/37.286/2.512 ms

# ping -i 0.1 uploaded.to
PING uploaded.to (81.171.123.200) 56(84) bytes of data.
.
.
.
.
--- uploaded.to ping statistics ---
276 packets transmitted, 276 received, 0% packet loss, time 30398ms
rtt min/avg/max/mdev = 38.493/39.775/52.371/1.043 ms
and a last test with a larger ISO download:
Code:
# wget **External links are only visible to Support Staff**
--2017-09-21 16:41:48--  **External links are only visible to Support Staff**
Resolving cdimage.debian.org... 2001:6b0:19::173, 2001:6b0:19::165, 194.71.11.173, ...
Connecting to cdimage.debian.org|2001:6b0:19::173|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: **External links are only visible to Support Staff**
--2017-09-21 16:41:49--  **External links are only visible to Support Staff**
Resolving gemmei.ftp.acc.umu.se... 2001:6b0:19::137, 194.71.11.137
Connecting to gemmei.ftp.acc.umu.se|2001:6b0:19::137|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3825205248 (3.6G) [application/x-iso9660-image]
Saving to: 'debian-9.1.0-amd64-DVD-1.iso'

10% [==========>                                                 ] 412,246,016 3.55MB/s  eta 16m 15s
Nothing out of the ordinary, I would say...

Last edited by TheQL; 21.09.2017 at 16:45. Reason: additional info
#6 - 21.09.2017, 18:06 - Jiaz (JD Manager)

Just for your information, JDownloader also supports Usenet downloads.

So you only have these issues with uploaded?
What kind of RAM issues do you mean?
#7 - 22.09.2017, 11:15 - TheQL (Guest)

Thanks, but I am quite happy with my setup that grabs stuff mostly automatically;
JDownloader wouldn't fit into that easily...

I maybe need to check a large download from another free hoster. Well, RAM is short; I need to be a little careful about which tasks I start simultaneously to avoid excessive swapping leading to timeouts and problems. But there were no such issues yesterday.
#8 - 22.09.2017, 11:49 - Jiaz (JD Manager)

Thanks for the feedback! How much memory does the system have?

You can limit the memory of the Java process with
java -XmxYm -jar ...

-XmxYm = Y megabytes
-XmxYg = Y gigabytes
#9 - 22.09.2017, 11:51 - TheQL (Guest)

It has a whopping 512MB.
I didn't package it myself but could edit the startup script. The memory culprit is usually Sabnzbd+, though.

I will get back to you once I have completed a large download from a different hoster; I will then check uploaded.to and create another log.
#10 - 22.09.2017, 12:13 - Jiaz (JD Manager)

You could use JDownloader for Usenet as well.
You should limit the memory of Java via parameter, for example -Xmx192m.
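On a headless Synology install the parameter goes into whatever script starts JDownloader. A minimal sketch, assuming a typical JDownloader.jar launch line; the script location and jar path on your package will differ:
Code:
#!/bin/sh
# cap the Java heap at 192 MB so JDownloader cannot push the 512 MB NAS into heavy swapping
JD_HOME=/volume1/@appstore/JDownloader   # placeholder, adjust to your install
java -Xmx192m -Djava.awt.headless=true -jar "$JD_HOME/JDownloader.jar" &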
#11 - 22.09.2017, 15:31 - TheQL (Guest)

Current status: I successfully completed large downloads from other hosters and successfully downloaded smaller files from uploaded.to.
I will wait with posting a log until a large uploaded.to download has failed. Currently I am waiting until the next download is allowed.
#12 - 22.09.2017, 16:05 - Jiaz (JD Manager)

Thanks for the feedback!
#13 - 23.09.2017, 11:05 - TheQL (Guest)

So, maybe this is a little more conclusive. I forgot to stop it in time, but there should be several examples of successful downloads and also a failed one from uploaded.to: jdlog://5102614015941
#14 - 26.09.2017, 11:07 - Jiaz (JD Manager)

@TheQL: The log does not contain any errors/downloads.
#15 - 26.09.2017, 12:47 - TheQL (Guest)

That's odd... I selected the entire day with several downloads. I will check on the shell...
Try this: jdlog://4414614015941

There have to be errors; I get these on the shell:

I do understand that the problems are obviously not limited to uploaded.to, but the other downloads finished. I don't quite get why I have this many timeouts, as the network connection of the NAS seems otherwise stable. It did work with uploaded.to in the past; I am not sure whether they changed something or whether my network connection really got worse.

Last edited by Jiaz; 26.09.2017 at 13:19.
#16 - 26.09.2017, 13:19 - Jiaz (JD Manager)

Do you use a VPN/proxy?
Maybe your router does a reconnect? Does your external IP change?
Do you use torrent/P2P at the same time?
#17 - 26.09.2017, 13:45 - TheQL (Guest)

No, it's directly connected, and the router should not disconnect either; my ISP does not force reconnects. The NAS has a gigabit connection and I can't see any issues with the interface. I am a little out of ideas here, sadly.

Sometimes there might be P2P or Usenet connections from other tools, but I explicitly had everything else deactivated while performing the tests with JD.
I did adjust the RAM settings recently, but in fact it just reserves a little more virtual memory on startup; resident memory usage was moderate at all times.
#18 - 26.09.2017, 13:58 - Jiaz (JD Manager)

Maybe you should check whether your IP really doesn't change:
ipcheck0.jdownloader.org
Write down the IP and check whether it is still the same after the issue has happened.

Does the kernel log (dmesg) show anything unusual?
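If you want to automate that, a small sketch (assuming wget is available on the NAS and that the ipcheck host returns your IP somewhere in the response body; the log path is only an example):
Code:
# log the external IP every 10 minutes so a change can be matched against an aborted download
while true; do
    IP=$(wget -qO- http://ipcheck0.jdownloader.org/ | grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' | head -n1)
    echo "$(date '+%Y-%m-%d %H:%M:%S') $IP" >> /volume1/tmp/external_ip.log
    sleep 600
done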
#19 - 26.09.2017, 15:38 - TheQL (Guest)

I checked my router; the IP has been unchanged since June. I rebooted the NAS today because of an update, so dmesg is currently not of much use. I will check that on the next occasion.
#20 - 26.09.2017, 15:51 - Jiaz (JD Manager)

Thanks, waiting for new feedback then
#21 - 26.09.2017, 16:57 - TheQL (Guest)

And here it is: I just had an aborted download at approx. 130MB of 260MB, and there is nothing in the dmesg output. RAM usage is moderate and the system load is not out of the ordinary. But there is this again...

Code:
------------------------Thread: 5981:uploaded.to_jd.plugins.hoster.Uploadedto-----------------------
--ID:5981TS:1506434969768-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.OldRAFDownload(setupChunks)] -> Setup virgin download
--ID:5981TS:1506434969769-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.OldRAFDownload(setupVirginStart)] -> Start Download in 1 chunks. Chunksize: 272275854
--ID:5981TS:1506434969856-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.OldRAFDownload(setupVirginStart)] -> Setup chunk 0: Thread[DownloadChunkRAF:xxx-xxxx.part2.rar,1,main]
--ID:5981TS:1506434969858-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.OldRAFDownload(waitForChunks)] -> Wait for chunks
------------------------Thread: 6052:uploaded.to_jd.plugins.hoster.Uploadedto-----------------------
--ID:6052TS:1506434969859-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.RAFChunk(run0)] -> Start Chunk 0 : 0 - -1
--ID:6052TS:1506434969860-9/26/17 4:09:29 PM -  [jd.plugins.download.raf.RAFChunk(copyConnection)] -> Takeover connection(no range) for Start:0|End:-1
--ID:6052TS:1506436896897-9/26/17 4:41:36 PM -  [] -> java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.appwork.utils.net.CountingInputStream.read(CountingInputStream.java:62)
        at org.appwork.utils.net.LimitedInputStream.read(LimitedInputStream.java:70)
        at org.appwork.utils.net.throttledconnection.ThrottledInputStream.read(ThrottledInputStream.java:159)
        at jd.plugins.download.raf.RAFChunk.download(RAFChunk.java:266)
        at jd.plugins.download.raf.RAFChunk.run0(RAFChunk.java:593)
        at jd.plugins.download.raf.RAFChunk.run(RAFChunk.java:460)
.
.
.
.
I seem to have a problem, I admit that; I just can't pin it down. Is the read timeout set on a per-plugin basis, and are others set to be more forgiving? Or do other hosters simply allow resuming even with free accounts?

But just as an example, look at the timestamps of the share-online.biz download below: this many hours of downloading are unthinkable with uploaded.to.

Code:
--ID:18502TS:1506178669581-9/23/17 4:57:49 PM -  [jd.plugins.download.raf.OldRAFDownload(setupVirginStart)] -> Start Download in 1 chunks. Chunksize: 977981910
--ID:18502TS:1506178669582-9/23/17 4:57:49 PM -  [jd.plugins.download.raf.OldRAFDownload(setupVirginStart)] -> Setup chunk 0: Thread[DownloadChunkRAF:xxxxx.part2.rar,1,main]
--ID:18502TS:1506178669582-9/23/17 4:57:49 PM -  [jd.plugins.download.raf.OldRAFDownload(waitForChunks)] -> Wait for chunks
------------------------Thread: 18549:share-online.biz_jd.plugins.hoster.ShareOnlineBiz-----------------------
--ID:18549TS:1506178669583-9/23/17 4:57:49 PM -  [jd.plugins.download.raf.RAFChunk(run0)] -> Start Chunk 0 : 0 - -1
--ID:18549TS:1506178669585-9/23/17 4:57:49 PM -  [jd.plugins.download.raf.RAFChunk(copyConnection)] -> Takeover connection(no range) for Start:0|End:-1
--ID:18549TS:1506208313859-9/24/17 1:11:53 AM -  [] -> java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.appwork.utils.net.CountingInputStream.read(CountingInputStream.java:62)
        at org.appwork.utils.net.LimitedInputStream.read(LimitedInputStream.java:70)
        at org.appwork.utils.net.throttledconnection.ThrottledInputStream.read(ThrottledInputStream.java:159)
        at jd.plugins.download.raf.RAFChunk.download(RAFChunk.java:266)
        at jd.plugins.download.raf.RAFChunk.run0(RAFChunk.java:593)
        at jd.plugins.download.raf.RAFChunk.run(RAFChunk.java:460)

Last edited by TheQL; 26.09.2017 at 17:09.
#22 - 26.09.2017, 17:04 - Jiaz (JD Manager)

The read timeout is global at 60 seconds. Of course you can try to increase it, but I doubt it will help:
Settings > Advanced Settings > InternetConnectionSettings.httpreadtimeout
#23 - 26.09.2017, 17:12 - TheQL (Guest)

Thanks, doubled it, who knows. Still don't get the whole picture here, though.
#24 - 27.09.2017, 09:27 - Jiaz (JD Manager)

Maybe it helps. You can also try to ping the final download server and check for packet loss; let it run over a long time.
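A sketch of such a long-running check, with timestamps so a burst of loss can be matched against the time of an abort (server IP and log path are only examples; use the actual final download host):
Code:
SERVER=81.171.123.200   # replace with the IP of the final download server
# ping every 5 seconds and timestamp each line
ping -i 5 "$SERVER" | while read line; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
done > /volume1/tmp/ping_uploaded.log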
#25 - 27.09.2017, 12:38 - TheQL (Guest)

Not a bad idea. I found the server and am running ping in parallel to the download. We'll see.
#26 - 28.09.2017, 11:47 - TheQL (Guest)

So, there it is:

Code:
--ID:50TS:1506588512616-9/28/17 10:48:32 AM -  [jd.plugins.download.raf.OldRAFDownload(setupVirginStart)] -> Setup chunk 0: Thread[DownloadChunkRAF:xxx.part2.rar,1,main]
--ID:50TS:1506588512617-9/28/17 10:48:32 AM -  [jd.plugins.download.raf.OldRAFDownload(waitForChunks)] -> Wait for chunks
------------------------Thread: 191:uploaded.to_jd.plugins.hoster.Uploadedto-----------------------
--ID:191TS:1506588512618-9/28/17 10:48:32 AM -  [jd.plugins.download.raf.RAFChunk(run0)] -> Start Chunk 0 : 0 - -1
--ID:191TS:1506588512620-9/28/17 10:48:32 AM -  [jd.plugins.download.raf.RAFChunk(copyConnection)] -> Takeover connection(no range) for Start:0|End:-1
--ID:191TS:1506591698100-9/28/17 11:41:38 AM -  [] -> java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
Meanwhile:
Code:
--- 81.171.103.63 ping statistics ---
3142 packets transmitted, 3142 received, 0% packet loss, time 3177240ms
rtt min/avg/max/mdev = 21.638/22.965/47.419/1.892 ms
#27 - 28.09.2017, 11:57 - Jiaz (JD Manager)

Have you tried to increase the read timeout?
What is your kernel?
uname -a
Any special configs in /etc/sysctl.conf?

One more test would be to check whether the same issue also happens on another computer.
#28 - 28.09.2017, 12:26 - TheQL (Guest)

Yeah, the timeout was doubled to 120 seconds. The rest is pretty much Synology standard. And it did work until a short while ago.

Code:
# uname -a
Linux Diskstation 3.2.40 #15152 SMP Thu Sep 21 09:59:03 CST 2017 armv7l GNU/Linux synology_armada375_ds215j
# cat /etc/sysctl.conf
kernel.panic=3
I am pretty sure the same issue won't happen on my Mac, but I'd prefer to download on my NAS anyway. All other computers would be connected via WiFi, so it wouldn't be a fair comparison, but I still suppose the downloads would complete.
#29 - 28.09.2017, 12:30 - Jiaz (JD Manager)

Do the timeouts only happen for uploaded, or for any long downloads?
It would be nice if you could test whether the issue happens on another computer as well.
#30 - 28.09.2017, 12:41 - TheQL (Guest)

Somewhere in this thread I did notice the occasional timeout with other hosters, but also long-running downloads without any timeouts, which are more or less impossible to achieve with uploaded.to; that is why I somehow suspected the plugin of doing something different. But I'm pretty much clueless. I will try to get one of the large files downloaded on my Mac tomorrow.
#31 - 28.09.2017, 16:48 - Jiaz (JD Manager)

I can provide large test downloads; contact us via support@jdownloader.org and I will provide directly downloadable 10GB files, okay?
#32 - 29.09.2017, 12:55 - TheQL (Guest)

Thanks, maybe I'll just grab a Linux .iso or something... Or is the server type relevant for what you have in mind?
#33 - 29.09.2017, 12:57 - Jiaz (JD Manager)

I just wanted to provide a file that you can use for testing. An ISO is fine as well.
#34 - 29.09.2017, 15:57 - TheQL (Guest)

I will have to check for timeouts when it's complete. I put in a speed limit so it won't finish too early. But since the server probably supports resuming, the download will complete one way or the other.
#35 - 30.09.2017, 13:42 - TheQL (Guest)

Ok, so from my Mac the uploaded.to download completed.

And this is the output from the .iso via NAS:

Looks fine to me.

Last edited by Jiaz; 02.10.2017 at 10:12.
#36 - 02.10.2017, 10:12 - Jiaz (JD Manager)

Is the download speed from UL on your server about the same as from your computer? Maybe long-living/slow connections get killed by the UL server?
#37 - 02.10.2017, 11:00 - TheQL (Guest)

Speed is about the same and what a free user can expect. Also sorry if the log output here was too long or otherwise inappropriate.
#38 - 02.10.2017, 16:45 - Jiaz (JD Manager)

You could run Wireshark/tcpdump, record the network flow, and analyze it later to see what happened.
You could also run a test link and the uploaded one at the same time.
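A sketch of such a capture on the NAS, assuming tcpdump is installed there (interface, server IP and output path are taken from earlier in the thread or are only examples); the resulting file can then be opened in Wireshark on another machine:
Code:
# capture only traffic to/from the uploaded download server; the 96-byte snaplen keeps headers and stays small
tcpdump -i eth0 -s 96 -w /volume1/tmp/uploaded.pcap host 81.171.103.63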
#39 - 03.10.2017, 15:37 - TheQL (Guest)

Solid advice, also pretty painful. I'll look into it.
#40 - 04.10.2017, 09:27 - Jiaz (JD Manager)

In case you've got access to a VPN, that would also be a good test.