05.10.2019, 07:56
mgpai
Script Master
 

Quote:
Originally Posted by Demongornot View Post
... I use "A Download Stopped" + checking "myDownloadLink.isFinished();"...
This will not return all 'finished' links, only the ones that stopped. Mirror links are marked as finished by JD without actually being downloaded. It is also possible for the user to mark a download as 'finished' (using the context menu command) without ever starting, downloading, or finishing it. Neither case will trigger this event.

While it is possible, with this event, to iterate the package links [link.getPackage().getDownloadLinks()] and find the related links which JD marked as "mirror finished", you may not be able to get the finished link from a package using the same method if the user has manually marked all the links in the package as 'finished'.
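As a rough illustration of that package iteration (a sketch only: the helper name is hypothetical, and it assumes each link object exposes isFinished() and getBytesLoaded(), with zero bytes loaded for links JD marked finished without transferring anything):

```javascript
// Sketch, not a drop-in script: given the array returned by
// link.getPackage().getDownloadLinks(), keep the links that are
// marked 'finished' even though no bytes were ever transferred
// (mirror links, or links the user marked finished manually).
function untransferredFinished(links) {
    return links.filter(function (l) {
        return l.isFinished() && l.getBytesLoaded() === 0;
    });
}
```

As noted above, if every link in the package was marked finished manually, this filter cannot tell you which one (if any) was actually downloaded.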

Quote:
Originally Posted by Demongornot View Post
... By the way, how much URLs per files should I put by default to optimise both memory and performances ...
Not sure. Most modern systems have the resources to handle large files. I have created a few scripts that need to read/load several hundred thousand links at once, and (based on feedback from the people using them) there has been no need to impose a limit. Also, a plain-text file that contains only the URLs will (I presume) take far fewer resources than keeping the links in the list in native format (as the user currently does), which also stores the various link properties and download-progress data.
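To illustrate the plain-text idea (a hypothetical helper, not part of any existing script): reduce the links to one URL per line, with duplicates dropped, before writing the file.

```javascript
// Sketch: serialize link URLs to the plain-text form discussed above --
// one URL per line, duplicates removed, no link properties or progress
// data. The resulting string can then be written out with the Event
// Scripter's file helpers.
function serializeUrls(urls) {
    var seen = {};
    var out = [];
    for (var i = 0; i < urls.length; i++) {
        var u = urls[i];
        if (!seen[u]) {
            seen[u] = true;
            out.push(u);
        }
    }
    return out.join("\n");
}
```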

While a single file will be easier to manage, a multiple-file design might be required/useful in some cases. I guess Jiaz can provide insight on this matter.

It may also be easier to create files on a per-hoster basis (as suggested by Jiaz), instead of limiting the number of URLs per file. That avoids iterating all the stored URLs, since you only need to query the ones that belong to a particular hoster.
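A minimal sketch of the per-hoster layout (the helper is hypothetical; the host string would come from the link, and the exact getter depends on the Event Scripter API):

```javascript
// Sketch: derive a per-hoster file path so a lookup only has to read
// the URLs stored for that hoster. 'baseDir' is whatever folder the
// script uses for its storage files.
function hosterFilePath(baseDir, host) {
    // Keep the filename safe by replacing anything outside a
    // conservative character set (assumption: simple substitution).
    return baseDir + "/" + host.replace(/[^A-Za-z0-9.-]/g, "_") + ".txt";
}
```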

Quote:
Originally Posted by Demongornot View Post
Also my current code remove the http(s) and the www when it is in the link because I thought that there could be cases where one files have been downloaded through a "http" and another through "https", and knowing that some websites accept both with and without "www" it might also reduce possible different URLs pointing to the same address, but is it a good idea or should I remove it or make it optional ?
Quote:
Originally Posted by raztoki View Post
.. We also typically also correct all urls into JD to one format protocol://domain/(path/)?uid
While rare, it is quite possible the same file will be added to JD with a different URL. If a "LINKDUPEID" is available for a link, you can use it instead of, or in conjunction with, the download URL. Most users would probably prefer this as a default feature rather than an optional one.

Code:
// Prefer the plugin's dedupe id when present; fall back to the plugin URL.
var url = link.getProperty("LINKDUPEID") || link.getPluginURL();
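If you still want the http(s)/www normalization from the question as a fallback when no "LINKDUPEID" exists, a sketch (hypothetical helper, plain string handling) could be:

```javascript
// Sketch: collapse http/https and www/non-www variants of a URL into
// one key, so the same file added via different schemes dedupes to a
// single entry. Only intended as a fallback when no LINKDUPEID exists.
function normalizeUrl(url) {
    return url.replace(/^https?:\/\//i, "").replace(/^www\./i, "");
}
```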