JDownloader Community - Appwork GmbH
 

  #961  
Old 05.10.2019, 03:16
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

@mgpai Thanks a lot, but I already started working on the script, using one script for finished downloads and another for new links: I use "A Download Stopped" plus a check on "myDownloadLink.isFinished();", and "A new link has been added". It is far more optimised, performance-friendly and faster than trying to do it all in a single script with many API calls. I mean, I already feel bad because I have to go through the same array twice xD.
I am busy with a lot of things and my life rhythm leaves me little free time, so the script(s) progress slowly, but they are still one of my priorities.

By the way, how many URLs per file should I use by default to optimise both memory and performance? If the URL is found in the first file(s), having "many" small files pays off, but it hurts if the download is in the last one (also knowing that I'll first compare against the downloads already in JD).

Also, my current code removes the "http(s)" and the "www" from the link, because I thought there could be cases where one file was downloaded through "http" and another through "https", and since some websites accept both with and without "www", it might also reduce the number of different URLs pointing to the same address. Is that a good idea, or should I remove it or make it optional?
  #962  
Old 05.10.2019, 04:13
raztoki raztoki is offline
English Supporter
 
Join Date: Apr 2010
Location: Australia
Posts: 17,668
Default

Removing the protocol and subdomain prefix might improve things, but it won't solve it entirely, as many sites have multiple domains and/or continually add new ones. Some plugins set a unique identifier (most sites have a uid) to combat that. We also typically correct all urls inside JD to one format: protocol://domain/(path/)?uid. Maybe also adding a checksumming feature, either from the advertised checksum (hoster end) or confirmation on your end, could assist. Note that many hosters alter small components of files to create unique checksums.
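A rough illustration only (hypothetical domains, not actual plugin code):
Code:
// Rough illustration - hypothetical domains, not actual plugin code.
// Known mirror domains of a hoster map to one canonical domain; the file uid is the key.
var domainMap = {
    "examplehost.net": "examplehost.com",
    "examplehost.co": "examplehost.com"
};

function canonicalise(url) {
    var m = url.match(/^[a-z]+:\/\/(?:www\.)?([^\/]+)\/(?:.*\/)?([A-Za-z0-9]+)$/);
    if (m == null) return url;
    return "https://" + (domainMap[m[1]] || m[1]) + "/" + m[2];
}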
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170

Don't fight the system, use it to your advantage. :]
  #963  
Old 05.10.2019, 06:56
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
... I use "A Download Stopped" + checking "myDownloadLink.isFinished();"...
This will not return all 'finished' links, only the ones that stopped. Mirror links are marked as finished by JD without being downloaded. It is also possible for the user to mark a download as 'finished' (using the context menu command) without starting, downloading or finishing it. Neither of those will trigger this event.

While it is possible with this event to iterate the package links [link.getPackage().getDownloadLinks()] and find the related links which were marked by JD as 'mirror finished', you may not be able to get the finished link from a package using the same method if the user has manually marked all the links in the package as 'finished'.
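If needed, something like this could collect them (a minimal sketch, assuming the sandbox exposes link.getFinalLinkState()):
Code:
// Trigger : A Download Stopped
// Minimal sketch: also picks up mirrors JD marked as finished without downloading them.
// Assumes the sandbox exposes getFinalLinkState() ("FINISHED"/"FINISHED_MIRROR").
var done = link.getPackage().getDownloadLinks().filter(function(l) {
    var state = l.getFinalLinkState();
    return state == "FINISHED" || state == "FINISHED_MIRROR";
});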

Quote:
Originally Posted by Demongornot View Post
... By the way, how many URLs per file should I use by default to optimise both memory and performance ...
Not sure. Most modern systems have the resources to handle large files. I have created a few scripts which need to read/load several hundred thousand links at once, without ever having to limit them (based on feedback from the people who use them). Also, a plain text file which contains only the urls will take far fewer resources (I presume) compared to keeping the links in the list in native format (as the user currently does), which also carries the various link properties and download-progress data.
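For example, appending just the url of each finished download keeps such a file small (a sketch, assuming a writeFile(path, text, append) helper):
Code:
// Trigger : A Download Stopped
// Sketch only - assumes a writeFile(path, text, append) helper.
var list = JD_HOME + "/cfg/finished.txt";
if (link.isFinished()) writeFile(list, link.getPluginURL() + getEnvironment().getNewLine(), true);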

While a single file will be easier to manage, a multiple-file design might be required/useful in some cases. I guess Jiaz can provide insight on this matter.

It may also be easier to create the files on a per-hoster basis (as suggested by Jiaz), instead of limiting the number of urls per file. That avoids having to iterate all the stored urls, since you only query the ones that belong to a particular hoster.
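A dupe check against such per-hoster files could then look roughly like this (a sketch, assuming a readFile(path) helper and a hypothetical 'history' folder):
Code:
// Sketch only - hypothetical "history" folder, assumes a readFile(path) helper.
var path = JD_HOME + "/cfg/history/" + link.getDownloadHost() + ".txt";
var known = getPath(path).exists() && readFile(path).indexOf(link.getPluginURL()) > -1;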

Quote:
Originally Posted by Demongornot View Post
Also, my current code removes the "http(s)" and the "www" from the link, because I thought there could be cases where one file was downloaded through "http" and another through "https", and since some websites accept both with and without "www", it might also reduce the number of different URLs pointing to the same address. Is that a good idea, or should I remove it or make it optional?
Quote:
Originally Posted by raztoki View Post
.. We also typically also correct all urls into JD to one format protocol://domain/(path/)?uid
While rare, it is quite possible for the same file to be added to JD with a different url. If a "LINKDUPEID" is available for a link, you can use it instead of, or in conjunction with, the download url. Most users would probably like to have this as a default feature rather than optional.

Code:
// Prefer the plugin's dupe id when available; fall back to the plugin url.
var url = link.getProperty("LINKDUPEID") || link.getPluginURL();
  #964  
Old 07.10.2019, 01:22
Amiganer Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default Preventing double Downloads

Hello.

I use the script from post #950 in this thread. It copies all finished downloads to a new container (= "Already Downloaded"). The listed flag is disabled.
Over the past few days I have found that not all links are moved. There is no pattern to it, except that most of them are small files like pictures; some are larger archives.
Does that script have to be run in synchronous mode? I don't have that enabled.

Bye,
Christian
  #965  
Old 07.10.2019, 10:30
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Amiganer View Post
... Over the past few days I have found that not all links are moved. There is no pattern to it, except that most of them are small files like pictures; some are larger archives ...
I was not able to reproduce the behavior. Are those files mirrors, by any chance? Mirrors are marked as finished by JD, but do not trigger that ("download stopped") event, and hence will not be moved. If so, the script can be modified to identify/move such files, or a script similar to the first one in #954 can be used.
  #966  
Old 07.10.2019, 23:20
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

@raztoki I'll try to use the plugin uid rather than the host whenever possible, then.

@mgpai
Could you provide me with urls that would create mirror links recognised by JDownloader?
I don't know how it detects them, or whether the links need to be in the same folder: I tried downloading the same file from two different hosts into different folders, and it just started downloading normally; then I tried again with the same folder, and I was simply prompted that the file already existed, with no option for any kind of mirror.
So I have no idea how it works in JDownloader.
Also, I was about to implement logic which checks whether a file has been downloaded by checking its size, but all those cases match a scenario where we should add the URLs to the list of already-downloaded files anyway.

I was thinking of making one file per host, but once it reaches a certain number of links, starting another one: for example host, then host1, host2, etc.
Actually, to avoid cases where a host with a number in its name messes things up, I'd write something like host!1 or host_1, as valid hostnames only accept letters, numbers, dots and the "-" sign anyway.

Also, in the absence of LINKDUPEID and plugin id, is .getContentURL() the right one for individual files? There are so many 'URLs' that I don't know which one to use...
  #967  
Old 08.10.2019, 07:49
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
... Could you provide me with urls that would create mirror links recognised by JDownloader? I don't know how it detects them, or whether the links need to be in the same folder or not
JD will use the name, size, hash, etc. (depending on default/user settings) to determine mirror links.

MIRROR LINK: Detected by JD at the time the download starts, by comparing the link with the other links in the SAME package, based on the "Mirror Detection" settings (Advanced Settings).

When a download is completed, the final status of that link will be set to "FINISHED" and that of its "mirrors" will be set to "FINISHED_MIRROR". You just need to query that status to determine whether a download is finished.
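For example (a sketch, assuming the sandbox exposes getFinalLinkState() with those values):
Code:
// Sketch: a link counts as done if it finished itself or as a mirror.
var isDone = ["FINISHED", "FINISHED_MIRROR"].indexOf(link.getFinalLinkState()) > -1;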

DUPLICATE FILE: If a file with the same name exists in the destination folder (irrespective of the package the download link originated from), JD will consider it a duplicate file.

Quote:
Originally Posted by Demongornot View Post
Also, in the absence of LINKDUPEID and plugin id, is .getContentURL() the right one for individual files? There are so many 'URLs' that I don't know which one to use...
link.getPluginURL() will always return the final url (AFAIK). On the other hand, link.getContentURL() will be null if the container is encrypted.
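So, as a sketch, the fallback could look like this:
Code:
// Sketch: prefer the user-facing content url, fall back to the plugin url.
var url = link.getContentURL() || link.getPluginURL();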
  #968  
Old 09.10.2019, 16:40
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

I couldn't trigger a mirror-link status even with two identical files (same size, different hosts, different names, though I turned off name matching in the advanced settings).

Anyway, after experimenting I concluded that myDownloadLink.getDownloadHost() reliably returns a proper host name for the files that will contain the URLs.

I checked what I get from myDownloadLink.getPluginURL() and .getProperty("LINKDUPEID"):
For the plugin URL, sometimes I get protocol://domain/(path/)?uid, as raztoki stated, and sometimes I get domain://(path/)?uid.
For LINKDUPEID I either get domain://(path/)?uid or (path/)?uid, and sometimes a format like websitecom_(path/)?uid:
(protocol://)website.com/path/video/?quality=480 turned into:
websitecom_path_480p.

I never saw the plugin URL in the domain-first format without LINKDUPEID being identical, but just in case, I made the code filter out the domain and protocol anyway.

So I made this code, which always returns the (path/)?uid, or the LINKDUPEID version of it:

Code:
var myShortURL = Discombobulator(myDownloadLink.getPluginURL(), myDownloadLink.getProperty("LINKDUPEID"));
function Discombobulator(pluginURL, LINKDUPEID) {
    var shortURL = ''; //Check if there is a LINKDUPEID and take LINKDUPEID or PluginURL depending
    if (LINKDUPEID == null) {
        shortURL = pluginURL;
    } else {
        shortURL = LINKDUPEID.toString();
    }
    var authority = shortURL.indexOf('://');
    if (authority < 0) return shortURL; //Check if URL contain '://' if not return it as it is already the shortest
    /*Check if there is a protocol before the '://' meaning it contain protocol and host.
If it contain protocol, remove protocol and host and return, otherwise remove host and return*/
    var shorterURL = shortURL.substring(authority + ('://').length);
    if (turboEncabulator(shortURL.substring(0, authority))) return shorterURL.substring(shorterURL.indexOf('/') + 1);
    return shorterURL;
}

function turboEncabulator(bit) {
    var protocols = ['http', 'https', 'ftp'];
    for (var i = 0; i < protocols.length; i++)
        if (bit == protocols[i]) return true;
    return false;
}
Do you think this code is solid enough to deal with all possible cases, or did I miss something?

Last edited by Demongornot; 09.10.2019 at 16:43.
  #969  
Old 09.10.2019, 17:56
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
Do you think this code is solid enough to deal with all possible cases or did I miss something?
Not all links will have a unique ID, so it is better to include the domain name in the "shortURL". For example, if the url is "https://board.jdownloader.org/images/logo.png", the script will currently generate only "images/logo.png" as the "shortURL".

Also, "LINKDUPEID" is not useful outside of JD. It is better to store the final url in its original format, and strip the protocol only when comparing them during dupe check. This will allow the list to be used outside of JD (review/edit/open link in browser/Add back to JD etc.).

The plugin url is not always useful outside of JD either (e.g. the youtube plugin url). It is better to use the content url wherever possible and keep the plugin url as a fallback (from what I have seen, it returns a usable url when the content url is null).

You can strip the protocol at the time of the dupe check. For example:
Code:
// Compare with the protocol and optional "www." stripped from both sides.
var duplicate = linkInList.replace(/https?:\/\/(www\.)?/, "") == linkInJD.replace(/https?:\/\/(www\.)?/, "");

I am not in any way suggesting this is the way to do it. Just sharing my thoughts on the subject.
  #970  
Old 09.10.2019, 20:47
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Is it really necessary to keep the domain name, given that the file in which the short URL is saved will already be named after the domain?
Or are you suggesting that this isn't enough, since subdomain.domain.com can turn into domain.com when using myDownloadLink.getDownloadHost()?
In that case I already have code which returns the whole domain and subdomains without the protocol and path, and I could make that the file name, but then links from the same domain with different subdomains wouldn't be checked together, so I think getDownloadHost is better in that regard.

Alternatively, as you said:
Quote:
Originally Posted by mgpai View Post
While rare, it is quite possible the same file will be added to JD with a different url. If a "LINKDUPEID" is available for a link you can use it instead of or in conjunction with the download url. Most users may like to have it as a default feature rather than optional.
I could write on each line: completeURL ShortURL:LINKDUPEID, and use " ShortURL:" (with the leading space) as the keyword for indexOf. If it returns -1, I check the whole line, which is the complete url minus the protocol and "www"; when it returns a positive value, ShortURL would be the LINKDUPEID when available, and otherwise the shortURL produced by the code I posted earlier.
Would that work better?
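Something like this, as a rough sketch (hypothetical values):
Code:
// Rough sketch of the proposed line format - hypothetical values.
var line = "https://example.com/f/abc123 ShortURL:abc123";
var marker = line.indexOf(" ShortURL:");
var key = (marker > -1) ? line.substring(marker + " ShortURL:".length) : line.replace(/^https?:\/\/(www\.)?/, "");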

Well, in the case where we want users to be able to interact manually with the URLs, the plugin URL indeed isn't the way to go, and I like your dupe-check code.
  #971  
Old 09.10.2019, 21:35
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
Is it really necessary to keep the domain name, given that the file in which the short URL is saved will already be named after the domain?
If you are using a multiple-file system (per-host basis), it should be enough to just store the "shortURL" generated by your snippet. Associating the "shortURL" with the host (e.g. link.getDownloadHost() + shortURL) would be required only if a single-file system is adopted. I should have made the distinction clear in my previous reply. Sorry for the confusion.

Storing the urls in their original format may not be necessary if the script will be used primarily for dupe checks.
  #972  
Old 09.10.2019, 21:46
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

No problem.
So I'll make the file names getDownloadHost_number.txt, with each line being a shortURL.
Considering the default path will be JD_HOME + '\\History', this isn't really for the user but rather for the dupe check.
Using short URLs has the advantage of lowering the file size and the cost of checking for a match.
But the way it works could allow users to set their own path, so I guess I could add an option to store the whole URL (just without the protocol and "www"). That is a one-time decision though, as obviously changing formats later would make things complicated, which is why I didn't really consider making it an option; but well, if someone wants it, why not.
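For example, a rough sketch of the naming (hypothetical helper):
Code:
// Rough sketch - hypothetical helper for the per-host history file names.
function historyPath(host, index) {
    return JD_HOME + "\\History\\" + host + "_" + index + ".txt";
}
// e.g. historyPath("example.com", 2) -> ...\History\example.com_2.txt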
  #973  
Old 09.10.2019, 21:54
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
So I'll make the file names getDownloadHost_number.txt, with each line being a shortURL.
Should be fine.
  #974  
Old 09.10.2019, 22:53
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by mgpai View Post
Should be fine.
Great!
By the way, am I good using only http, https and ftp as protocols?
I read that JDownloader also supports Metalinks and Podcasts, and I don't know how those work; from a quick read I understood that a Metalink is a collection of regular URLs, but I don't know how JDownloader handles them anyway.
  #975  
Old 10.10.2019, 19:31
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
.... am I good using only http, https and ftp as protocols?
I read that JDownloader also supports Metalinks and Podcasts, and I don't know how those work; from a quick read I understood that a Metalink is a collection of regular URLs, but I don't know how JDownloader handles them anyway.
With regards to protocols, I know of one more - "usenet". There might be others; Jiaz should be able to confirm.

As far as containers are concerned, the final url will always be available as the 'content url' (regular containers) or the 'plugin url' (encrypted/protected containers).

You can also try this code to generate the 'shortURL':
Code:
var link = myDownloadLink;
var host = link.getDownloadHost();
var url = link.getProperty("LINKDUPEID") || link.getPluginURL();
// Strip everything up to and including the host, then any remaining "scheme://" prefix.
var shortURL = url.replace(new RegExp(".+:\/\/.*" + host + "/"), "").replace(/.+:\/\//, "");
  #976  
Old 10.10.2019, 19:44
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Good, I was afraid it could be a list of URLs split by commas or something like that.

Awesome code, I haven't yet learned how to handle those string and character patterns for regular expressions, replace and all that.
I tested it, and it is impressive how much it can filter out in a single line, including cases with subdomains that getDownloadHost() doesn't return, and the domainpluginname:// case too!

Last edited by Demongornot; 10.10.2019 at 19:56.
  #977  
Old 10.10.2019, 20:19
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Demongornot View Post
... how to handle those string and character patterns for regular expressions, replace and all that ...
I had used the 'download host' in the expression to make it restrictive. But since the subdomains can be either before or after the domain, based on the examples you provided in this post (pre-edit), you can use a broader match pattern without including the download host in it.

Code:
var link = myDownloadLink;
var url = link.getProperty("LINKDUPEID") || link.getPluginURL();
var shortURL = url.replace(/(^(https?|ftp):\/\/[^\/]+\/)/, "").replace(/.+:\/\//, "");

Last edited by mgpai; 10.10.2019 at 21:32.
  #978  
Old 10.10.2019, 23:06
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

I tested your two codes and mine, and concluded that mine and your second one do the same thing, but your first one runs into trouble when the "DownloadHost" differs from what is in the url.

Using this code:
Trigger : Downloadlist Contextmenu Button Pressed
Code:
var myDownloadlistSelection = dlSelection;
if (myDownloadlistSelection.isLinkContext() == true) {
    var myDownloadLink = myDownloadlistSelection.getContextLink();

    var rAr = Discombobulator(myDownloadLink.getPluginURL(), myDownloadLink.getProperty("LINKDUPEID"));
    var host = myDownloadLink.getDownloadHost();
    var url = myDownloadLink.getProperty("LINKDUPEID") || myDownloadLink.getPluginURL();
    var rAr1 = url.replace(new RegExp(".+:\/\/.*" + host + "/"), "").replace(/.+:\/\//, "");
    var rAr2 = url.replace(/(^(https?|ftp):\/\/[^\/]+\/)/, "").replace(/.+:\/\//, "");
    var nl = getEnvironment().getNewLine();
    var sep = nl + "_______________________________" + nl;
    var t = ["Demongornot's :" + nl, "mgpai's 1 :" + nl, "mgpai's 2 :" + nl, "Host :" + nl, "LINKDUPEID or Plugin URL :" + nl];
    alert(t[0] + rAr + sep + t[1] + rAr1 + sep + t[2] + rAr2 + sep + t[3] + host + sep + t[4] + url);
}

function Discombobulator(pluginURL, LINKDUPEID) {
    var shortURL;
    if (LINKDUPEID == null) {
        shortURL = pluginURL;
    } else {
        shortURL = LINKDUPEID.toString();
    }
    var authority = shortURL.indexOf('://');
    if (authority < 0) return shortURL;
    var shorterURL = shortURL.substring(authority + ('://').length);
    if (turboEncabulator(shortURL.substring(0, authority))) return shorterURL.substring(shorterURL.indexOf('/') + 1);
    return shorterURL;
}

function turboEncabulator(bit) {
    var protocols = ['http', 'https', 'ftp'];
    if (protocols.indexOf(bit.toLowerCase()) >= 0) return true;
    return false;
}
I got this:
Code:
Demongornot's :
embed/xxxx
_______________________________
mgpai's 1 :
streamango.com/embed/xxxx
_______________________________
mgpai's 2 :
embed/xxxx
_______________________________
Host :
fruithosts.net
_______________________________
LINKDUPEID or Plugin URL :
(protocol)streamango.com/embed/xxxx
That is because the host name and url don't match there; other than that, your first script works in all cases where there are subdomains before or after, AFAIK.

Last edited by Demongornot; 10.10.2019 at 23:10.
  #979  
Old 11.10.2019, 19:17
Jiaz Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 81,007
Default

@mgpai/Demongornot: I suggest creating a new thread for the discussion of the development/ideas/questions around the dupe/history support. I can then move the posts over to the new thread.
Sorry that I'm so quiet, but I don't have much time at the moment :(
__________________
JD-Dev & Server-Admin
  #980  
Old 11.10.2019, 19:50
dsfsdfasdfasf dsfsdfasdfasf is offline
Registered / Inactive
 
Join Date: May 2012
Posts: 17
Default

Hey mgpai, Jiaz sent me here. Is it possible to blacklist a proxy via the EventScripter when it causes a 403 geo-blocking state?
  #981  
Old 12.10.2019, 10:36
Amiganer Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default Preventing double Downloads

Hello.

I'm now trying to put the scripts from #950 and #954 together.
Is it possible to get the directory path of the JD2/cfg/* directory, where the other databases are held, or is it really necessary to put an absolute path in there?

Bye, Christian
  #982  
Old 12.10.2019, 10:56
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Amiganer View Post
Is it possible to get the directory path of the JD2/cfg/* directory, where the other databases are held, or is it really necessary to put an absolute path in there?
I am assuming you mean this path:
Code:
var list = "c:/downloads/finished.txt"; // <- Set path to history file

You can use any valid path. It can also contain variables. To set the 'cfg' folder as the directory path, you can use:
Code:
var list = JD_HOME + "/cfg/finished.txt"; // <- Set path to history file
  #983  
Old 13.10.2019, 02:13
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by Jiaz View Post
@mgpai/Demongornot: I suggest creating a new thread for the discussion of the development/ideas/questions around the dupe/history support. I can then move the posts over to the new thread.
Sorry that I'm so quiet, but I don't have much time at the moment :(
@Jiaz @mgpai
Here it is: https://board.jdownloader.org/showpo...88&postcount=1
And don't worry, no one forbids you from having a life
  #984  
Old 13.10.2019, 17:11
RPNet-user RPNet-user is offline
Tornado
 
Join Date: Apr 2017
Posts: 237
Default

mgpai,

How do I set up an event script to auto-resume partially downloaded files while JD2 is running?

Example: I have a list of 10 files in the download queue and it is simultaneously downloading one or two files, but at some point 2 or 3 of those files will have stopped with an "invalid download directory" error, so I have to manually right-click and resume them, after which they complete successfully while JD2 is running; otherwise they remain incomplete (partially downloaded). I think a 60-second wait is more than ample.

So basically: set an auto-resume flag for partially downloaded non-resumable links, with a 60-second wait time.

Minus the 60-second wait, would this work?

Code:
var links = getAllDownloadLinks();

for (var i = 0; i < links.length; i++) {
    var link = links[i];
    if (link.getBytesLoaded() == 0) link.setSkipped(true);
}

startDownloads();

Last edited by RPNet-user; 13.10.2019 at 17:16. Reason: added script sample
  #985  
Old 13.10.2019, 20:50
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by RPNet-user View Post
... set an event script to auto-resume partially downloaded files while JD2 is running?
Code:
// Unskip and start downloading links with "Invalid download directory" message, if the destination folder is available.
// Trigger : Download Controller Stopped

getAllFilePackages().forEach(function(package) {
    package.getDownloadLinks().forEach(function(link) {
        if (link.getSkippedReason() == "INVALID_DESTINATION") {
            if (getPath(package.getDownloadFolder()).exists()) {
                link.setSkipped(false);
                if (!isDownloadControllerRunning()) startDownloads();
            }
        }
    })
})

I would not recommend using the 'interval' trigger unless you have to, especially if you have a lot of links; you may end up needlessly iterating through the list, using up valuable system resources.

Also, the error might be symptomatic of hardware issues. It might be better to fix the underlying cause. If you provide more information, Jiaz might be able to look into it.
  #986  
Old 13.10.2019, 21:12
RPNet-user RPNet-user is offline
Tornado
 
Join Date: Apr 2017
Posts: 237
Default

Thanks, I already have a thread open on this; however, it is neither hardware- nor permission-related, as it only occurs when downloading files from one host, "uploaded", via the uploaded premium account; uploaded links generated via the rpnet multihoster account plugin do not cause the issue.
So I will enable this script only when downloading "uploaded" links and the uploaded account is enabled, since I currently give it priority over my multihoster account under the account usage rules.
  #987  
Old 13.10.2019, 21:26
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by RPNet-user View Post
... I will enable this script only when ... the uploaded account is enabled
You can also try this. It will check/process the links only when the "uploaded.to" account is enabled.

Code:
// Unskip and start downloading links with "Invalid download directory" message, if the destination folder is available.
// Trigger : Download Controller Stopped

var accountEnabled = callAPI("accounts", "queryAccounts", {
    "enabled": true
}).some(function(account) {
    return account.hostname == "uploaded.to" && account.enabled;
})

if (accountEnabled) {
    getAllFilePackages().forEach(function(package) {
        package.getDownloadLinks().forEach(function(link) {
            if (link.getSkippedReason() == "INVALID_DESTINATION") {
                if (getPath(package.getDownloadFolder()).exists()) {
                    link.setSkipped(false);
                    if (!isDownloadControllerRunning()) startDownloads();
                }
            }
        })
    })
}

Last edited by mgpai; 13.10.2019 at 22:00. Reason: Corrected the description in script
  #988  
Old 13.10.2019, 22:00
RPNet-user RPNet-user is offline
Tornado
 
Join Date: Apr 2017
Posts: 237
Default

Thank you mgpai, that one is even better, since I won't have to enable/disable it every time I enable the uploaded account.
  #989  
Old 14.10.2019, 18:52
Jiaz Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 81,007
Default

@RPNet-user: please see your other thread and create a log for that error. There must be a reason for it; it shouldn't just be *worked around* by unskipping.
__________________
JD-Dev & Server-Admin
  #990  
Old 15.10.2019, 04:30
professorstrangelove
Guest
 
Posts: n/a
Default

mgpai,

Not sure if this is the right place to post this question... Jiaz told me you are very talented with the EventScripter and JDownloader, and said you would be the one I should contact.

It would be very helpful if JDownloader could automatically check a YouTube channel and download any new videos that have been posted. This would save a lot of time when a channel has hundreds of videos on it, since we would not have to sort through the whole channel each time to find new uploads.

How would I go about doing this? Are you able to build a script/plugin which would enable this feature?

Thank you for your help
  #991  
Old 15.10.2019, 12:59
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by professorstrangelove View Post
... automatically check a YouTube channel and download any new videos that have been posted ...
You can get the latest 15 videos of a user/channel/playlist using the XML feed. Here is a set of scripts and rules which can be added to JD to query the RSS feeds at regular intervals and automatically grab the links.

Code:
gist.github.com/mgpai/09252b6b72828c290fd141da81be14a1/download
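For reference, the XML feed urls look like this (the ids/names are placeholders):
Code:
// YouTube XML feed formats (ids/names are placeholders):
// https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>
// https://www.youtube.com/feeds/videos.xml?user=<USER_NAME>
// https://www.youtube.com/feeds/videos.xml?playlist_id=<PLAYLIST_ID>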
  #992  
Old 15.10.2019, 18:40
pspzockerscene pspzockerscene is offline
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 72,950
Default

Merged EventScripter threads.

-psp-
__________________
JD Supporter, Plugin Dev. & Community Manager

Erste Schritte & Tutorials || JDownloader 2 Setup Download
Spoiler:

A users' JD crashes and the first thing to ask is:
Quote:
Originally Posted by Jiaz View Post
Do you have Nero installed?
  #993  
Old 15.10.2019, 19:21
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by dsfsdfasdfasf View Post
... Is it possible to blacklist a proxy via eventscripter when it causes a 403 geoblocking state?
I have created a script, but it will have to wait until the developers add new methods which return the details (IP, port, etc.) of the blocked connection.



In the meanwhile, you can prevent, or at least reduce, the chance of the same geo-blocked connection being used for that hoster by setting "GeneralSettings.freeproxybalancemode" to "RANDOM".

Last edited by raztoki; 15.10.2019 at 23:42. Reason: spelling and grammar
  #994  
Old 18.10.2019, 08:01
RPNet-user RPNet-user is offline
Tornado
 
Join Date: Apr 2017
Posts: 237
Default Neither scripts are working

mgpai,
Neither one of these scripts is working; I'm still having to manually right-click and resume.
There is no missing-directory issue, as I was able to resume the downloads immediately after the error and they completed without issues.

Last edited by RPNet-user; 18.10.2019 at 08:10. Reason: added information
  #995  
Old 18.10.2019, 08:38
mgpai mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,590
Default

Quote:
Originally Posted by Jiaz View Post
@RPNet-user: please see your other thread and create a log for that error. There must be a reason for it; it shouldn't just be *worked around* by unskipping
Quote:
Originally Posted by RPNet-user View Post
Neither one of these scripts is working; I'm still having to manually right-click and resume.
I was unable to reproduce the issue. If you prefer to unskip the links using a script, I can try to help you troubleshoot it if you find me in JD Chat when you have such links in the list.

Code:
webchat.freenode.net//#jdownloader?nick=JD_00?
  #996  
Old 19.10.2019, 01:04
RPNet-user RPNet-user is offline
Tornado
 
Join Date: Apr 2017
Posts: 237
Default

Quote:
Originally Posted by mgpai View Post
I was unable to reproduce the issue. If you prefer to unskip the links using a script, I can try to help you troubleshoot it if you find me in JD Chat when you have such links in the list.

Code:
webchat.freenode.net//#jdownloader?nick=JD_00?
mgpai,
Ignore my previous post, the script does work. I just didn't know that it waits until the downloads end, and then requires a one-time script permission prompt to run.
In my impatient mind, I was thinking the script would unskip and resume immediately after the "invalid download directory" error. See the screenshot: all those partial downloads were invalid-directory errors before the script permission prompt at the end of the completed download queue. By the way, I know it will happen every time, because the download bandwidth drops to 0 just before the "invalid download directory" error occurs, which only happens with "uploaded" premium account downloads.
Attached Images
File Type: png Invalid.Download.Directory.png (67.6 KB, 1 views)
  #997  
Old 20.10.2019, 08:45
Amiganer Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default

Quote:
Originally Posted by Demongornot View Post
@Jiaz @mgpai
Here it is : **External links are only visible to Support Staff**...
And don't worry, no one forbids you from having a life
Hello.
The first thing I see in the script: what about mirror links?
As far as I can see, you use "link.isFinished", and that event is only triggered for the link that is really downloaded. How do you handle mirror links?

Which event triggers the script?
Maybe you can put in some more comments for explanation, please.

bye, Christian
  #998  
Old 21.10.2019, 03:36
Demongornot Demongornot is offline
Registered / Inactive
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by pspzockerscene View Post
Merged EventScripter threads.

-psp-
@pspzockerscene Were you aware that the discussion was purposely kept out of this thread, as requested by Jiaz? Was merging it back a conscious decision, or did you merge it without knowing about Jiaz's request?

@Amiganer

I have now basically finished the script which writes finished downloads, and I am already in the commenting phase (as it is a small and simple script, I didn't need to comment it while writing). Be prepared, as I have close to no experience with commenting my code for sharing, so I don't know whether it will turn out right, or with too much or too little information.
I had no idea what to do with mirror links, as I hadn't found any way to get one to experiment with.
Since the threads have been merged back together, I won't post too many updates, so as not to overload this general-use thread with my script's progress.
I have a few other things going on, so the script isn't progressing super fast, but it is still one of my priorities.
Part 1 (writing finished downloads) uses the "A Download Stopped" trigger.
If I get my hands on a pair of links that JD considers mirrors, I'll test the code necessary to handle them, if required.
Edit:
Never mind, I found a valid mirror; I'll experiment with them tomorrow.
Edit 2:
From what I see, there are two things I'll add: first, when a download finishes, an option to check for its mirrors (as they don't trigger "A Download Stopped") in the same package.
And a third script, which will work via right-click on a download to add it to the history list. It won't take long to code, as it will be pretty much the same as the one for finished downloads, except it won't check for mirrors or finished status, so the user can add whatever URL they want.
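A rough sketch of what I mean, reusing the same selection calls as my test script above:
Code:
// Rough sketch of the planned third script.
// Trigger : Downloadlist Contextmenu Button Pressed
if (dlSelection.isLinkContext()) {
    var link = dlSelection.getContextLink();
    // append the link's url to the history file here, without any finished/mirror checks
}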

Last edited by Demongornot; 21.10.2019 at 13:25.
  #999  
Old 22.10.2019, 18:32
Jiaz Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 81,007
Default

@Demongornot: pspzockerscene didn't know about the decision and merged the threads back by accident.
__________________
JD-Dev & Server-Admin
  #1000  
Old 23.10.2019, 23:28
pspzockerscene pspzockerscene is offline
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 72,950
Default

Quote:
Originally Posted by Demongornot View Post
@pspzockerscene Were you aware that the discussion was purposely kept out of this thread, as requested by Jiaz? Was merging it back a conscious decision, or did you merge it without knowing about Jiaz's request?
Sorry, my fault :(

-psp-
__________________
JD Supporter, Plugin Dev. & Community Manager

Erste Schritte & Tutorials || JDownloader 2 Setup Download
Spoiler:

A users' JD crashes and the first thing to ask is:
Quote:
Originally Posted by Jiaz View Post
Do you have Nero installed?