JDownloader Community - Appwork GmbH
 

  #881  
Old 15.09.2019, 09:20
mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by Bach View Post
... script for an automatic update + restart after all active downloads.
Code:
// Update when JD is Idle
// Trigger Required: "Interval"
// Set interval to 600000 (10 mins.) or more.

(function() {
    if (callAPI("update", "isUpdateAvailable") == false) return;
    if (callAPI("linkcrawler", "isCrawling")) return;
    if (callAPI("linkgrabberv2", "isCollecting")) return;
    if (callAPI("extraction", "getQueue").length > 0) return;
    if (callAPI("downloadcontroller", "getCurrentState") != "IDLE") return;
    callAPI("update", "restartAndUpdate");
})();

Code:
/*
    Update when JD is Idle
    Trigger Required: "Interval" (Recommended: 600000 (10 mins.) or more)
*/

if (
    callAPI("update", "isUpdateAvailable") &&
    !callAPI("linkcrawler", "isCrawling") &&
    !callAPI("linkgrabberv2", "isCollecting") &&
    !callAPI("extraction", "getQueue").length &&
    isDownloadControllerIdle()
) {
    callAPI("update", "restartAndUpdate");
}

Last edited by mgpai; 18.01.2021 at 21:38. Reason: Replaced script
  #882  
Old 15.09.2019, 09:25
mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by Demongornot View Post
... adding, deleting, changing or selecting subtitle language ".srt" files for Youtube videos in the LinkGrabber?
You can use the myjd API methods:
Code:
my.jdownloader.org/developers/

Namespace: linkgrabberv2
Methods: getVariants, setVariant, addVariantCopy.
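
For example, a minimal sketch of how those methods can be called from the Event Scripter (the trigger is an assumption; the call signatures follow the ones used later in this thread):
Code:
// List and switch the variants of a crawled link.
// Trigger (assumed): "A new link has been added", which provides a "link" object.

var uuid = link.getUUID();
var packageUUID = link.getPackage().getUUID();

// Query all variants (e.g. subtitle languages) available for this link.
var variants = callAPI("linkgrabberv2", "getVariants", uuid);

if (variants.length > 0) {
    // Replace the currently selected variant ...
    callAPI("linkgrabberv2", "setVariant", uuid, variants[0].id);
    // ... and/or add another variant as a copy next to the original link.
    if (variants.length > 1) {
        callAPI("linkgrabberv2", "addVariantCopy", uuid, uuid, packageUUID, variants[1].id);
    }
}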
  #883  
Old 15.09.2019, 15:35
Bach is offline
Junior Loader
 
Join Date: Jul 2017
Posts: 12
Default

Quote:
Originally Posted by mgpai View Post
Code:
// Update when JD is Idle
// Trigger Required: "Interval"
// Set interval to 600000 (10 mins.) or more.

(function() {
    if (callAPI("update", "isUpdateAvailable") == false) return;
    if (callAPI("linkcrawler", "isCrawling")) return;
    if (callAPI("linkgrabberv2", "isCollecting")) return;
    if (callAPI("extraction", "getQueue").length > 0) return;
    if (callAPI("downloadcontroller", "getCurrentState") != "IDLE") return;
    callAPI("update", "restartAndUpdate");
})();

Thanks!

  #884  
Old 18.09.2019, 16:40
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by mgpai View Post
You can use myjd API methods:
Code:
my.jdownloader.org/developers/

Namespace: linkgrabberv2
Methods: getVariants, setVariant, addVariantCopy.
Sorry for the delay, and thanks a lot for the answer. It took me a while, but I made a script thanks to your information.

Here is my script: Youtube Smart Subtitle Selector Script.
As its name suggests, it is made to give you control over subtitle files.
All the necessary information is in the comments inside the script itself:

Code:
/*
Youtube Smart Subtitle Selector Script by Demongornot for JDownloader 2.
This script allows you to set a list of subtitles, and alternatives, to download, with variant filters.
If none of the alternatives is found, it won't pick any. Here is how it works:

First you'll need to enable the JDownloader 2 Event Scripter, if this isn't already done,
and then create this script. Follow these steps:
Go to JDownloader 2 > Settings > Extension Modules > Event Scripter,
then at the top right check the checkbox to enable the Event Scripter, if this is not already done.
At the bottom left, click the "Add" button if the list is empty, though one entry should already be available;
you'll only see a checkbox, empty space, a drop-down menu with "None" and an "Edit" button.
Now (double) click on the left part of the empty script; if it already has a name, it might be an existing one.
Name it as you want, "Youtube Smart Subtitle Selector Script by Demongornot" for example.
In the drop-down menu right of the name part, click and select "A new link has been added", then click "Edit".
A window appears; remove the text already in it and paste this entire code, including the "/*" at the beginning.
Uncheck the "Synchronous execution of script" checkbox; this is really important!!!
Now click the "Save" button at the bottom right; the script window is now closed.
Check the checkbox of this script to enable it, and it should now run automatically for every new link.

The work will be done when a new Youtube video is added and a subtitle is added by the Youtube plugin.


How to:
There is a variable called "deleteSubNotFound", by default at true. If it is set to "true"
and no subtitle from your list has been found, it will delete the one JDownloader 2 & the Youtube plugin picked.
If set to "false", that one will still be selected; this script has no control over which one is selected.
You also have a set of variables, the first one being called "ext", which is the subtitle file extension.
If you need to, you can change it, but this script being made for Youtube,
I don't think another subtitle format will be used in the near future.
The other variables, all starting with "lng", are there in case the default Youtube subtitle code changes.
I also doubt this will happen, but it leaves anyone free to tweak the script if it happens anyway.
None of those things should require any change. I also advise you to know what you are doing
if you are going to change them; I won't take responsibility for misuse of my script.
Now the "subLangList" array is where it all happens, and the process is actually really simple:

If you don't know how arrays work, simply consider one as a list.
It works this way: "var" "array_name" "[your list here]" ";".
Inside the "[]" brackets, you'll write your subtitle choices, each inside brackets and separated by a comma:
[[Subtitle choice 1],[Subtitle choice 2],[Subtitle choice 3]]; you can create as many as you want.
By default JDownloader 2 will split them into lines, looking like this:
[[Subtitle choice 1],
[Subtitle choice 2],
[Subtitle choice 3]
];

Each "Subtitle choice" will be a possible subtitle file, so inside "Subtitle choice"
you'll write your chosen subtitle and its alternatives, in case it is not found.
Making something like this:
[[GB English -> US English -> English],[French -> Canada French],[Brazil Portuguese -> Portugal Portuguese]]
As for how you write this, the languages follow the IETF language tag; you can find details here:
https://en.wikipedia.org/wiki/IETF_language_tag
As for which letters you'll need per country, here is a list:
https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements

You must use quotes or double quotes around each of them, and a comma, so the result will look like this:
[["en-GB", "en-US", "en"],['fr', 'fr-CA'],["pt-BR", 'pt-PT']];
The subtitle's main language has to be lowercase and the country variant has to be uppercase.
There is no preference between quotes and double quotes, which is why I mixed them in this example; both work.
This example list will create up to 3 subtitle files if it finds the 3 matching languages.
For the first file, it will look for UK English; if not found, US English; if not found, general English.
Of course, if the last one isn't found either, no file will be added.
The second list will look for French and, if not found, for Canadian French.
The last list will look for Brazilian Portuguese and, if not found, for Portuguese from Portugal.
Note that automatically generated and "special" versions will be ignored in those cases.

You can also add special filters by using the character "#"; I'll use French ('fr') as an example.
A "#" before the language means the automatically generated language; I don't think it exists as a country variant.
So '#fr' means that only the automatically generated French will be looked for.
A "#" after the language means a special variant, with text in its name, like (English - Created by x).
So 'fr#' means only a "special" variant of French will be looked for, like (English - With commentaries).
A language surrounded by two "#" means "any variant of this language".
So '#fr#' will look for any subvariant (like 'fr-CA' or 'fr-CH'), but in priority the main one ('fr').
And finally, a language with "-" and surrounded by "#" means any subvariant but not the main language.
So '#fr-#' will look for any subvariant (like 'fr-CA' or 'fr-CH') but will ignore ('fr').
Note that for the "#any#" and "#any-#" codes, automatically generated and special subtitles will be ignored.


In short:

//xx = Exact primary language, no variant, no autogeneration, no additional text in the language name.
//xx-XX = xx language version of the XX country variant, no autogeneration or additional text.
//xx# = only a language with additional text in its name, like (English - Created by x).
//#xx = only automatically generated subtitles of this language.
//#xx# = any version and country variant of the xx language, except autogenerated and with additional text.
//#xx-# = any country variant of the xx language except the original, autogenerated and with additional text.
*/

//___________________________________________________________________________________________________________

var deleteSubNotFound = true;
//'true' = delete JDownloader's chosen language if none from your list is found; 'false' will keep it.

//In case the Youtube subtitle format changes, here are some variables to fix it.
//Change them only if you know what you are doing, at your own risk.
var ext = '.srt'; //Write here the file extension name of the subtitle file.
var lngSep = '-'; //Write here the main and country language separator.
var lngIdf = 'lng='; //Write here the language value indicator.
var lngAGS = 'kind=asr'; //Write here the autogenerated subtitle value and type.
var lngNam = 'name='; //Write here the language display name value indicator.
var lngVID = '&'; //Write here the value indicator separator.
var lngTAS = '+'; //Write here the language display name string separator.

var subLangList = [
    ['en'], /*Will only take English*/
    ['xx'], /*Will be ignored*/
    ['en-CA'], /*Will only take Canadian English*/
    ['en-XX'], /*Will also be ignored*/
    ['#en'], /*Will only choose Auto Generated English*/
    ['en#'], /*Will only choose "special" subtitle in English*/
    ['#en#'], /*Will take any non special and non autogenerated English*/
    ['#en-#'], /*Will take any non special and non autogenerated English subvariant*/
    ['fr', 'fr-FR', 'fr-CA', '#fr'], /*Either French, France French, Canadian French or autogenerated*/
    ['fr' + lngSep + 'CA', '#fr'], /*You can also write it this way to always have the right separator*/
    ['fr-CH', '#fr-#'], /*Either Switzerland French or any non special, non autogenerated variant*/
    ['ja'], /*Japanese*/
    ['ko'], /*Korean*/
    ['ru'], /*Russian*/
    ['es-419', 'es'], /*Latin American Spanish or regular Spanish*/
    ['fr', '#fr#', 'en', '#en#'], /*You can also mix multiples languages*/
    ['xx', 'fr', 'en-US'] /*Incorrect first language still allow to choose other languages of the list*/
];
//Note that though there are multiple instances of the same language for demonstration purposes,
//a specific subtitle can only be added once to the list, so you won't have two ['fr-FR'],
//but you can still have ['fr'] and ['fr-FR'], and/or ['en'] plus autogenerated and/or special ['en'].
//If you want to test, the Ted Talk and Kurzgesagt Youtube channels often have many subtitles per video.

//___________________________________________________________________________________________________________


//Use the name of the added link to check whether it is a subtitle file, i.e. whether it ends with ".srt".
var download = link;
var linkName = download.getName();
//Call the getters: comparing the bare function references (e.g. download.getURL) to null would always be true.
if (download.getURL() != null && linkName != null) {
    if (linkName.lastIndexOf(ext) == linkName.length - ext.length && linkName.indexOf(ext) >= 0) {

        //If it is a "srt" subtitle file, set some variables and get the list of the languages.
        var linkPackageUUID = null; //download.getPackage().getUUID();
        var parentUUID = null; //download.getUUID();
        //I didn't find any other way to get the download UUID and the download's package UUID than this loop...
        var myCrawledLink = getAllCrawledLinks();
        for (var i = myCrawledLink.length - 1; i >= 0; i--) {
            if (linkName == myCrawledLink[i].getName() && download.getURL() == myCrawledLink[i].getUrl()) {
                linkPackageUUID = myCrawledLink[i].getPackage().getUUID();
                parentUUID = myCrawledLink[i].getUUID();
                break;
            }
        }
        var subtitleList = callAPI("linkgrabberv2", "getVariants", parentUUID);
        var subtitleData = [];

        //Go through all the variants of the video and gather information about them.
        for (var counterA = 0; counterA < subtitleList.length; counterA++) {
            var subID = subtitleList[counterA].id;

            var mainLanguage = subID.substring(subID.indexOf(lngIdf) + lngIdf.length, subID.indexOf(lngVID, subID.indexOf(lngIdf) + lngIdf.length));

            var language = mainLanguage;

            var isMainLanguage = false;
            if (language.indexOf(lngSep) < 0) isMainLanguage = true;

            if (!isMainLanguage) mainLanguage = language.substring(0, language.indexOf(lngSep));

            var isLangAutoGenerated = false;
            if (subID.indexOf(lngAGS) > 0) isLangAutoGenerated = true;

            var isSpecial = false;
            if ((isMainLanguage && subID.indexOf(lngTAS, subID.indexOf(lngNam)) > 0) && !isLangAutoGenerated) isSpecial = true;

            //Put all the gathered information in an array so it is easily accessible without requiring long code.
            subtitleData.push([subID, mainLanguage, language, isMainLanguage, isLangAutoGenerated, isSpecial]);
        }
        var firstPass = true;

        //Go through all the user-defined languages in the array and gather data.
        for (var counterB = 0; counterB < subLangList.length; counterB++) {
            for (var counterC = 0; counterC < subLangList[counterB].length; counterC++) {
                var userLanguage = subLangList[counterB][counterC];

                var isMainLanguage = true;
                if (userLanguage.indexOf(lngSep) > 0) isMainLanguage = false;

                var isAutoGenerated = false;
                if (userLanguage.lastIndexOf('#') == 0) isAutoGenerated = true;

                var isSpecial = false;
                if (userLanguage.indexOf('#') == userLanguage.length - 1) isSpecial = true;

                var isAnyVariant = false;
                if (userLanguage.indexOf('#') == 0 && userLanguage.lastIndexOf('#') ==
                    userLanguage.length - 1) isAnyVariant = true;

                var userMainLanguage = userLanguage.replace(/#/g, '');

                if (!isMainLanguage) {
                    userMainLanguage =
                        userLanguage.substring(0, userLanguage.indexOf(lngSep)).replace(/#/g, '');
                }
                var tmpLanguage = -1;

                /*Check through all the information gathered from the subtitle list,
                and compare it with the previously gathered user subtitle(s) list/code.*/
                for (var counterD = 0; counterD < subtitleData.length; counterD++) {

                    //Check if this is the same main language (no point checking anything else if it's not).
                    if (subtitleData[counterD][1] == userMainLanguage) {
                        //Check if the two languages (main or with country variant) match (meaning no filters).
                        if (subtitleData[counterD][2] == userLanguage && !subtitleData[counterD][4] && !subtitleData[counterD][5] && userLanguage.indexOf('#') < 0) {
                            //If match, check if it is needed to change the first or add a new subtitle.
                            if (firstPass) {
                                callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[counterD][0]);
                                firstPass = false;
                                break;
                            } else {
                                callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[counterD][0]);
                                break;
                            }

                            //If it doesn't match, it might be because there are filters; check without them.
                        } else if (subtitleData[counterD][2] == userLanguage.replace(/#/g, '') && !isAnyVariant && (isSpecial || isAutoGenerated) && (subtitleData[counterD][4] || subtitleData[counterD][5])) {
                            //Check if this is the autogenerated variant.
                            if (subtitleData[counterD][4] && isAutoGenerated) {
                                if (firstPass) {
                                    callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[counterD][0]);
                                    firstPass = false;
                                    break;
                                } else {
                                    callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[counterD][0]);
                                    break;
                                }
                                //Check if this is a special variant.
                            } else if (subtitleData[counterD][5] && isSpecial) {
                                if (firstPass) {
                                    callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[counterD][0]);
                                    firstPass = false;
                                    break;
                                } else {
                                    callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[counterD][0]);
                                    break;
                                }
                            }

                            //It might still be the "any variant" user mode; check for a matching language.
                        } else if (isAnyVariant && !isAutoGenerated && !isSpecial && !subtitleData[counterD][4] && !subtitleData[counterD][5]) {
                            if (isMainLanguage) {
                                //If it is the main language, in "any variant" mode it already matches, since only equal main languages reach here.
                                if (subtitleData[counterD][3]) {
                                    tmpLanguage = -1;
                                    if (firstPass) {
                                        callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[counterD][0]);
                                        firstPass = false;
                                        break;
                                    } else {
                                        callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[counterD][0]);
                                        break;
                                    }
                                    //If this isn't the main language, this is a variant; save its position for later.
                                } else if (tmpLanguage == -1) {
                                    tmpLanguage = counterD;
                                }

                                //If it gets to the end without finding the main language, add the first variant found, if any.
                                if (counterD >= subtitleData.length - 1 && tmpLanguage >= 0) {
                                    if (firstPass) {
                                        //Use the saved variant index (tmpLanguage), not the current loop index.
                                        callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[tmpLanguage][0]);
                                        firstPass = false;
                                        break;
                                    } else {
                                        callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[tmpLanguage][0]);
                                        break;
                                    }
                                }

                                //Here it checks for "any except main".
                            } else {
                                if (!subtitleData[counterD][3]) {
                                    if (firstPass) {
                                        callAPI("linkgrabberv2", "setVariant", parentUUID, subtitleData[counterD][0]);
                                        firstPass = false;
                                        break;
                                    } else {
                                        callAPI("linkgrabberv2", "addVariantCopy", parentUUID, parentUUID, linkPackageUUID, subtitleData[counterD][0]);
                                        break;
                                    }
                                }
                            }
                        }
                    }
                }

            }
            //If no desired subtitle has been found, delete the one JDownloader chose, if the user wants to.
            if (counterB >= subLangList.length - 1 && firstPass && deleteSubNotFound) {
                getCrawledLinkByUUID(parentUUID).remove();
            }
        }
    }
}

The code should work properly (I didn't test it deeply), but if you see any problems or issues, or encounter a bug, don't hesitate to report it!

Last edited by Demongornot; 23.09.2019 at 02:15. Reason: Fix little mistakes (again) and use the break; rather than counterC = x.length to exit loop and delete crawled link with UUID
  #885  
Old 23.09.2019, 16:35
Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default Prevent double Downloads

Hello, I asked this in another forum and was linked here, so I'll ask here.

Is it possible to prevent JD2 from downloading a link a second time?
At the moment it is only possible by keeping the downloaded files in the download window. That works for a while....

Can it be done in another way?
As I found out, the EventScripter can create files; could it build something like a ".csv" DB or similar and compare every new link against that pseudo DB?

Bye, Christian
  #886  
Old 23.09.2019, 17:15
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by Amiganer View Post
Hello, I asked this in another forum and was linked here, so I'll ask here.

Is it possible to prevent JD2 from downloading a link a second time?
At the moment it is only possible by keeping the downloaded files in the download window. That works for a while....

Can it be done in another way?
As I found out, the EventScripter can create files; could it build something like a ".csv" DB or similar and compare every new link against that pseudo DB?

Bye, Christian
I can try to make it for you. I am not familiar with .csv files, so I'll simply make a text file with one URL per line; if you insist, I can learn to work with CSV and make it use that.
But I'll need some information about what exactly you want the script to do: warn you, delete the download, or disable it and note in its comment that it has already been downloaded (recommended); detect duplicates when you add them to the downloads or directly in the link grabber, and if so, make an exception if you try to add the same file twice within a certain amount of time (recommended); make an exception for identical links with different names added in the same period of time (like multiple Youtube comments, for example, which share the same URL); or exclude files from the same URL with different extensions, etc...
  #887  
Old 23.09.2019, 22:37
Fetter Biff
Guest
 
Posts: n/a
Default

Very good idea, such an option would be great indeed!

If I could, if it was allowed, I would give some points, if Christian would not mind.

It would be good if that option did the same as the link grabber window does: when duplicate links are added to the link grabber window, they are shown in red and can be deleted by clicking that item:


And a way to export / transfer the links from a DLC file to the list / csv / txt file would be great.
  #888  
Old 24.09.2019, 00:02
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

OK, that means I'll code it so it detects duplicates when files are added to the link grabber, automatically disables them, and puts in their comment that they are duplicates, leaving the user free to decide whether or not to download them. It will only take into account files already in the download list, with an option to choose between added and finished downloads being added to the already-downloaded link list.
Also, I think I'll make it so that on first launch, if the option is enabled and no download history has been recorded yet, it adds every existing file to the list.
It will work through two scripts: one to add every freshly started or finished download (user's choice) to the list, and the other to verify added links.

I'll just have to experiment with multiple triggers reading and/or writing the same file "at the same time", as I don't know how JDownloader 2 handles that.
I might have to create temporary files for each instance of the script and add all the downloads at a certain moment (when downloads are stopped and/or when the ETA of all active downloads is big enough), or when JDownloader is closing, for safer writing into the main list. That would mean a third script, though, and you'd need to close JD to update the list, but this is my backup solution.

For the red downloads, I'll have to experiment and search a lot to find how to do that, but I'll try; it would be better than disabling and commenting them.
For the DLC, I have never tried it and I have no idea what it is, so I'll do the script without it first, learn what it is, and depending on what it is and how it works, try to do as you suggest.

Last edited by Demongornot; 24.09.2019 at 00:05.
  #889  
Old 24.09.2019, 00:36
Fetter Biff
Guest
 
Posts: n/a
Default

That sounds great, many thanks!

Quote:
It will only take into account files already in the download list, with an option to choose between added and finished downloads being added to the already-downloaded link list.
In JD's download list? Or you mean in the list you will build?


Quote:
For the DLC, I have never tried it and I have no idea what it is, so I'll do the script without it first, learn what it is, and depending on what it is and how it works, try to do as you suggest.
So one could first let JD load the container, i.e. the DLC file and its links. And once all of those links are added to the link grabber window, they could automatically be added to the anti-duplicate list you will make, to avoid duplicate downloads. So an export from the DLC to the anti-duplicate list would not be necessary.
  #890  
Old 24.09.2019, 01:45
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by Fetter Biff View Post
That sounds great, many thanks!
You're welcome, though I'll do it tomorrow; bed time now.

Quote:
Originally Posted by Fetter Biff View Post
In JD's download list? Or you mean in the list you will build?
Not sure yet; it will depend on how JD handles multiple script instances on the same file. It could be both (read-only for the file and for the JD DL list), so no download links are skipped, neither those already in the file list nor those recently added.


Quote:
Originally Posted by Fetter Biff View Post
So one could first let JD load the container, i.e. the DLC file and its links. And once all of those links are added to the link grabber window, they could automatically be added to the anti-duplicate list you will make, to avoid duplicate downloads. So an export from the DLC to the anti-duplicate list would not be necessary.
I'll try to look into it tomorrow.

About the links in the grabber shown in red: I haven't found a way to do it. It doesn't appear in the link properties and I haven't found any API method for it, so it might not be possible... or at least I don't know how.

@mgpai Any idea whether this is possible, or how we can mark a link inside the link grabber as an already existing download even if there isn't a matching file/url in the download list?
  #891  
Old 24.09.2019, 10:49
Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default

Quote:
Originally Posted by Demongornot View Post
I can try to make it for you. I am not familiar with .csv files, so I'll simply make a text file with one URL per line; if you insist, I can learn to work with CSV and make it use that.
But I'll need some information about what exactly you want the script to do: warn you, delete the download, or disable it and note in its comment that it has already been downloaded (recommended); detect duplicates when you add them to the downloads or directly in the link grabber, and if so, make an exception if you try to add the same file twice within a certain amount of time (recommended); make an exception for identical links with different names added in the same period of time (like multiple Youtube comments, for example, which share the same URL); or exclude files from the same URL with different extensions, etc...
Hello. CSV (= Comma-Separated Values) is not necessary; what the script does is what I want.
What I need: if a link is exactly the same as one that was downloaded before, don't download it again, so delete it from the download list.

The best would be like what already happens if the downloads remain in the download screen: if a link comes in with the grabber and it is in the list, pop up the window "delete already downloaded files from list". If that is possible.

So: only add a second possibility to skip already downloaded links.

If that is possible, it would be great.

From the other articles that goes the way, it is working right now accept it is compared against his own list, that would be great, if possible...

Thanks....

Bye Christian

Last edited by Amiganer; 24.09.2019 at 11:14.
  #892  
Old 24.09.2019, 12:25
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

@Demongornot: You can mark scripts to allow execution concurrently or only sequentially.
Currently it's not possible to color-highlight via scripting.
I would recommend having one *history* file per hoster where you can read and append new entries; one file per hoster reduces the overall memory requirements. For example, after a download, append the URL to the file, and before download/after adding, check against that file's content. Another script could compress that history file on startup (remove duplicates, maybe sort....). I would not use CSV, as it just creates unneeded overhead. Maybe just add a dummy comment text that can be used for search, AND maybe another script to mass-remove the duplicates.
You can also read the file on first use and then hold the map in memory for faster processing. Maybe don't hold the URLs in memory but only some sort of meta information/hash, and only check against the file if there is a hit. Memory consumption is most important here.
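
A minimal Event Scripter sketch of that per-hoster history idea (the "A Download Stopped" trigger, the folder location and the line format are assumptions; getPath/readFile/writeFile are Event Scripter built-ins):
Code:
// Append the URL of every finished download to a per-hoster history file.
// Trigger (assumed): "A Download Stopped", which provides a "link" object.

var historyDir = JD_HOME + "/cfg/history/"; // assumed location, must already exist

if (link.isFinished()) {
    var file = historyDir + link.getDownloadHost() + ".txt";
    var url = link.getContentURL();
    // Read the existing history (if any) and only append unknown URLs.
    var history = getPath(file).exists() ? readFile(file) : "";
    if (history.indexOf(url) == -1) {
        writeFile(file, url + "\r\n", true); // true = append
    }
}
The duplicate-check counterpart would read the same file after a link is added and search for its URL, as described above.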

Please feel free to contact me via support@jdownloader.org or chat if you've got questions, need help, or maybe need a new api method.

@Fetter_Biff: DLC is not meant for this. It doesn't support all hosters, nor does it export all meta information. It doesn't make any sense at all to use DLC for this.
__________________
JD-Dev & Server-Admin

Last edited by Jiaz; 24.09.2019 at 12:27.
  #893  
Old 24.09.2019, 12:37
Fetter Biff
Guest
 
Posts: n/a
Default

Quote:
@Fetter_Biff: DLC is not meant for this. It doesn't support all hosters, nor does it export all meta information. It doesn't make any sense at all to use DLC for this.
Yes, but I somehow have to get the links indicated as already downloaded.
  #894  
Old 24.09.2019, 12:46
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

I'm sorry, but I don't understand what you need DLC for. In case you mean to *fill the history*, then this must be done differently, because the history should differentiate between the added/loaded state.
__________________
JD-Dev & Server-Admin
  #895  
Old 24.09.2019, 13:05
Fetter Biff
Guest
 
Posts: n/a
Default

Quote:
I'm sorry, but I don't understand what you need DLC for.
To indicate these links as already being downloaded for Demongornot's script.
  #896  
Old 24.09.2019, 13:15
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Thanks for the feedback. In that case @Demongornot should add another script to auto-learn from the current list.
__________________
JD-Dev & Server-Admin
  #897  
Old 24.09.2019, 13:35
Fetter Biff
Guest
 
Posts: n/a
Default

The current list?

Alright, I have no idea about any of this.
  #898  
Old 24.09.2019, 13:38
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Quote:
Originally Posted by Amiganer View Post
From the other articles that goes the way, it is working right now accept it is compared against his own list, that would be great, if possible...
Sorry, but I did not understand this sentence; could you please rephrase it?

Quote:
Originally Posted by Jiaz View Post
...
Thanks for the advice!
I'll make one file per host, as that sounds better. I might actually use a URL limit per file so it creates sub-files: the more sub-files, the better the chance of a hit and the less memory it will take. I'll just need to check for the file "hostname + incremental number" using "readFile" and "try".
So far I've already made a script that can create a history file if none exists or if it is empty.
I'll use the API trigger both to check new links in the Link Grabber window and to put finished ones in the list.
I am currently looking at what the API call ids and data are, to select the right one.
When adding a new download to the list, it already checks whether it is in the list, using the "indexOf" function, and only adds it if it doesn't exist. I prefer that over another file that fixes things up; it is also the same function already used for checking whether a new download in the LinkGrabber is in the history or not, so it avoids writing two pieces of code for basically the same thing.
Also, a "var myBoolean = fileExist(/*file path*/);" function would be really handy here!


I'll see what I can do with the DLC files after making at least a main script without it.
  #899  
Old 24.09.2019, 13:47
mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by Demongornot View Post
... a "var myBoolean = fileExist(/*file path*/);" function would be really handy ...
Code:
var myFilePath = getPath(myString/*Path to a file or folder*/);/*Get a FilePath Object*/
var myBoolean = myFilePath.exists();
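
Applied to the incremental per-hoster files mentioned above, a quick sketch (the naming scheme "hostname + incremental number" is just an assumption):
Code:
// Find the first free "hostname.N.txt" history file for a hoster.
var host = "youtube.com"; // example hoster name
var index = 0;
while (getPath(JD_HOME + "/cfg/history/" + host + "." + index + ".txt").exists()) {
    index++;
}
var nextFile = JD_HOME + "/cfg/history/" + host + "." + index + ".txt";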
  #900  
Old 24.09.2019, 15:13
Amiganer is offline
JD Fan
 
Join Date: Mar 2019
Posts: 72
Default Prevent double Downloads

Quote:
Originally Posted by Fetter Biff View Post
Yes, but I somehow have to get the links indicated as already downloaded.
I'm not 100% sure what DLC exactly is; I have not used it. If I'm correct, it is a short form for providing links for download; maybe those links should be in that database, as those links were meant for download (and so should be checked).
Where can I get information about DLC?

Bye, Christian
  #901  
Old 24.09.2019, 15:14
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Thanks @mgpai for your fast help, as always a great teacher.
__________________
JD-Dev & Server-Admin
  #902  
Old 24.09.2019, 15:16
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

@Amiganer: DLC is an encrypted container format meant for sharing files. It is not required here. @Demongornot: I would recommend another script for a context menu (action), maybe to add/remove a link from the history; that way Fetter Biff can just add links to the list and manually add them to the history.
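
A rough, purely illustrative sketch of such a context-menu script (the trigger name, the lgSelection variable and the file location are assumptions):
Code:
// Manually add the selected linkgrabber links to the per-hoster history.
// Trigger (assumed): "Linkgrabber Contextmenu Button Pressed"

if (lgSelection != null) {
    var links = lgSelection.getLinks();
    for (var i = 0; i < links.length; i++) {
        var file = JD_HOME + "/cfg/history/" + links[i].getHost() + ".txt";
        var url = links[i].getContentURL();
        var history = getPath(file).exists() ? readFile(file) : "";
        if (history.indexOf(url) == -1) {
            writeFile(file, url + "\r\n", true); // true = append
        }
    }
}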
__________________
JD-Dev & Server-Admin
  #903  
Old 24.09.2019, 15:22
Fetter Biff
Guest
 
Posts: n/a
Default

Quote:
maybe those links should be in that database, as those links were meant for download (and so should be checked)
Yes, that would be good.

DLC is a "container", a file, that stores links, status of links, downloads and some more information, if I am right. One can save the links from JD's download and linkgrabber list to a DLC and the links in a DLC back to the link grabber window / download window.
  #904  
Old 24.09.2019, 15:23
Fetter Biff
Guest
 
Posts: n/a
Default

What actually is the history?
  #905  
Old 24.09.2019, 15:25
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Quote:
Originally Posted by Fetter Biff View Post
Yes, that would be good.

DLC is a "container", a file, that stores links, status of links, downloads and some more information, if I am right.
I'm sorry but DLC doesn't store status/some more information. those meta information are not supported at all and that's why this container is purely meant for sharing and not export/import/backup


Quote:
Originally Posted by Fetter Biff View Post
One can save the links from JD's download and link grabber lists to a DLC, and the links in a DLC back to the link grabber window / download window.
You can easily add the downloadListXXX.zip from the cfg folder as a container and then choose to import it into the Linkgrabber. That way all meta information is kept!
__________________
JD-Dev & Server-Admin
  #906  
Old 24.09.2019, 15:36
Fetter Biff
Guest
 
Posts: n/a
Default

Quote:
I'm sorry, but DLC doesn't store status or other meta information. That meta information is not supported at all, which is why this container is purely meant for sharing and not for export/import/backup.
Very sorry. So only the links are stored in a DLC file? Encrypted? So you cannot use them outside of JD?

Quote:
You can easily add the downloadListXXX.zip from the cfg folder as a container and then choose to import it into the Linkgrabber.
I do not have that any more; I only have the DLC.
  #907  
Old 24.09.2019, 15:48
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

DLC containers can be opened with other download managers as well, but not with a text editor, because they are encrypted, correct.
For what you're trying to achieve (train/fill the history), the DLC will do fine.
__________________
JD-Dev & Server-Admin
  #908  
Old 24.09.2019, 15:56
Fetter Biff
Guest
 
Posts: n/a
Default

Alright, thank you.
  #909  
Old 25.09.2019, 08:23
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

How can I get the list of added crawled links from a crawler id or a job id when it is finished?

I tried multiple things using JSON and objects, but I always got an error...
Code:
    var lscq = '{\r\n  "collectorInfo" : true,\r\n  "jobIds": ' + jid + '\r\n}';
    var lscq2 = '{"collectorInfo" : true, "jobIds": ' + jid + '}';
    var jst = JSON.stringify(lscq);
    alert(callAPI("linkgrabberv2", "queryLinkCrawlerJobs", lscq2));


Edit:
Never mind about the error; after trying many variants I finally found a way:
Code:
    var lscq = {
        "collectorInfo": true,
        "jobId": jid
    };
    alert(callAPI("linkgrabberv2", "queryLinkCrawlerJobs", lscq));
I still don't know which API call to use to get the links that said crawler job id added, but I'll keep trying to find it myself until someone can answer me anyway.

Last edited by Demongornot; 25.09.2019 at 11:03.
  #910  
Old 25.09.2019, 11:27
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

@Demongornot: When adding links, you can specify to *remember/save* the jobID as meta information, so you can later use the jobID to query all found/processed links from the job. What do you want to achieve? Then I can help better.
__________________
JD-Dev & Server-Admin
  #911  
Old 25.09.2019, 12:39
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Well, for the anti-double-download script, I am trying to use only one script with the API trigger. So far, by looking at all API calls using "alert(event);" while a download ends and while adding links to the link grabber, I found these two that fit my needs:
"event.id : LINK_UPDATE.finished" for a finished download, and it provides all the info I need.
"event.id : STOPPED event.publisher : linkcrawler"
Though the latter doesn't really give me what I need; only the job id and crawler id are really usable, and the event id "FINISHED" gives only those two.

I tried using those IDs to find the list of links that were added, using the API call "queryLinkCrawlerJobs", which returns nothing, and "CrawledLinkQuery", which doesn't work, as the job id causes an error by being read as a float while it requires a double, even if I use "parseInt(job id);", while the same variable containing it works fine with "queryLinkCrawlerJobs".
I previously got the same error with the first code in my previous post; it reads:
Code:
Can not deserialize instance of long[] out of VALUE_NUMBER_FLOAT token
 at [Source: {
  "collectorInfo" : false,
  "jobIds" : 1.569388206182E12

Lately I tried to see if I could go the opposite way, using "var myCrawlerJob = myCrawledLink.getSourceJob();", but I got a "null" result...

I could simply go through all crawled links and check their URLs, but this isn't a really optimised solution when multiple crawler jobs, each adding multiple links, are running...

Also I found that using "getAllCrawledLinks" after the API trigger "STOPPED" or "FINISHED" only returns a partial list of links when crawling a URL with multiple links: the last links are not to be found in my array; actually only a few of the crawled links show up... So I was forced to use a sleep delay to get them all...

My other solution would be to use the job id and crawler id (whichever is the biggest) and go through all the crawled links in descending order, treating every link that has a UUID larger than the job id or the crawler id. Sadly the list isn't ordered from first to latest added, so one of the added links might be in the first package, forcing me to go through basically all the other links and check their UUIDs...
The only optimisation I can do is to use the event.data of the "STOPPED" API trigger and count how many links have been added, using the "offline", "online" and "links" properties, and end the loop once the same number of link UUIDs has been analysed. But here is the trap: I need a delay to let all the links become available to "getAllCrawledLinks", which means I can overlap with the next crawler job and not get the correct number of links analysed...

So I am out of ideas about how to analyse only the latest links against the already existing ones in a CPU- and memory-usage-friendly way...
  #912  
Old 25.09.2019, 14:20
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Lots of text. So I'll try my best to understand and help.

Notice: please don't make use of UUIDs the way you do now, because there is no guarantee that they will stay the same. Just use them as numbers, but avoid *magic* like comparing them via greater/less...

Instead of using the api event system, better use the *ON_NEW_LINK* event that is triggered for every new link added to the linkgrabber list; you just have to check for a duplicate and can then modify/remove the link.
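
For reference, a duplicate check on new links could look like this minimal sketch (it assumes the "A new link has been added" trigger and the per-hoster history file from the earlier suggestion):
Code:
// Disable and mark duplicates as soon as they land in the linkgrabber.
// Trigger (assumed): "A new link has been added", which provides a "link" object.

var file = JD_HOME + "/cfg/history/" + link.getHost() + ".txt"; // assumed history location

if (getPath(file).exists()) {
    if (readFile(file).indexOf(link.getContentURL()) > -1) {
        link.setEnabled(false); // keep it in the list, but don't download it again
        link.setComment("Already downloaded"); // dummy comment usable for search, as suggested above
    }
}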

Or you can make use of *NEW_CRAWLER_JOB*, then toggle setAssignJobID(true) and (after the next core update) getUUID, and later make use of the queryLinksParameter method with jobUUIDs, see https://my.jdownloader.org/developers/#tag_265
jobUUIDs is a long array, and that's the cause of the error you got: you're trying to pass a single number where a long[] is expected, to retrieve all links that are the result of the job.

The api event system is very lightweight, whereas the native events provide much more data/methods.

getSourceJob is only available during the crawling process and is cleared after the link is in the linkgrabber.

Remember, you can contact me via mail and irc chat.
__________________
JD-Dev & Server-Admin
  #913  
Old 25.09.2019, 15:28
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

OK I see, thanks for the answer. I would still like to use the API event if possible, because it would allow using a single script.
Obviously my first thought was to use two scripts, one on "new link added" and another on "download stop". But if I could get the list of added links per job id or crawler id, it would be more practical and more optimised, as all the identical required variables and some of the tests would only be loaded into memory and executed once. Also, LINK_UPDATE.finished is better than the "Download Stopped" trigger, which doesn't necessarily fire only when the download has finished, forcing a test every time a download stops. It keeps the code simpler by getting rid of the test for whether the download finished, and instead only testing whether this is the API event I am looking for, which encloses the whole code for that in one "if", while allowing the use of functions shared by the two main features of this script.
Edit: Being able to get the crawler job from the crawler id would be not only practical but also quite logical.

Last edited by Demongornot; 25.09.2019 at 15:31.
  #914  
Old 25.09.2019, 17:03
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Quote:
Originally Posted by Demongornot View Post
Edit: Being able to get the crawler job from the crawler id would be not only practical but also quite logical.
Hmm, maybe I don't understand, but /linkgrabberv2/queryLinkCrawlerJobs?query is what you are looking for?
https://my.jdownloader.org/developers/#tag_262
There is no method to look up a crawlerJob by crawlerID, because there is no way to retrieve that information.
When you add new links/jobs, you will get a jobID for the crawlerJob and can then use it to query for status/links.
There is no entry point that returns a crawlerID, because there can be multiple crawlerIDs involved during crawling.


Quote:
Originally Posted by Demongornot View Post
if I could get the list of added links per job id or crawler id, it would be more practical
This requires setAssignJobID to be set to true, so each resulting link will have a reference to its source job. You must either add the links with this option enabled or toggle it via script, because it increases the memory footprint.

Please know that a single jobID can result in multiple crawlers with different ids.
The crawlerJob is the input -> one or more crawlers process it -> resulting links.
Enable setAssignJobID, and the resulting links for a crawlerJob can be queried via the
queryLinksParameter method with jobUUIDs, see https://my.jdownloader.org/developers/#tag_265
__________________
JD-Dev & Server-Admin

Last edited by Jiaz; 25.09.2019 at 17:06.
  #915  
Old 25.09.2019, 20:58
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Well, I tried "queryLinkCrawlerJobs" using this:
Code:
var eventX = event;
if (eventX.publisher == 'linkcrawler' && eventX.id == 'STARTED') {
    var dt = JSON.parse(eventX.data);
    var jid = dt.jobId;
    var cid = dt.crawlerId;
    var lscq = {
        "collectorInfo": true,
        "jobId": jid
    };
    alert((callAPI("linkgrabberv2", "queryLinkCrawlerJobs", lscq)));
}
But it returns only "[]". Also, if I get the terms correctly, a "Crawler ID" is the ID of a crawler searching links from a single URL, while a "Job ID" is basically the process of looking for links from all the URLs that have been put into JD, which encapsulates as many crawlers as there are URLs (which is why there can be multiple crawler ids?)?
Or did I get it wrong? I mean, between "Job", "Crawler Job" (though I guess those two are the same, but you never know) and "Crawler", I am not sure what is actually what...
But well, yes, I get a jobID and I would like to retrieve the links from it.

The issue is that it looks like a complicated mess: "queryLinkCrawlerJobs" returns nothing, and even if it did, I would get a "List<JobLinkCrawler>", but "JobLinkCrawler" isn't used by any API method. I also need to set "setAssignJobID" to "true", but the only place I find it is in "AddLinksQuery", which isn't returned by any API method either; I can't find any "queryLinksParameter" either...

It looks like it would require a really messy pile of API calls to get from a jobID to the list of links it added...
If only I could get an "added time" for crawled links, it would simplify things: I would simply look for the latest added one and find those that came from the same job using ".getSourceJob", but I don't know how to get the jobID from that, and ".getSourceJob" returns "null"; though with an added date itself I could look for those older than the job...
Also, if I knew which of ".getContainerURL", ".getContentURL", ".getOriginURL", ".getReferrerURL" and ".getURL" is actually the original URL the crawler used to find them, I could simply compare all those with the same xxxURL.

I'm out of ideas for how to get only the latest added links, even by going through the whole crawled link list.
  #916  
Old 26.09.2019, 08:41
mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by Demongornot View Post
... I need to set "setAssignJobID" to "true" but the only place I find it is in "AddLinksQuery"...
Quote:
Originally Posted by Jiaz View Post
... you can make use of *NEW_CRAWLER_JOB* and then toggle setAssignJobID(true) ...
Code:
// Store Job ID in crawled links
// Trigger: "New Crawler Job"

job.setAssignJobID(true);
  #917  
Old 26.09.2019, 14:03
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

@mgpai Thank you!
Just a question: if I use this command after the first link has been added, will said link still be referenced?

@Jiaz
Does it increase the memory only during the crawling execution, or does it stay? And if it is the latter, can it be cleared?

Also, I would love an API method to get a "job" from a "jobId", please!
  #918  
Old 26.09.2019, 15:14
mgpai is offline
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by Demongornot View Post
.. if I use this command after the first link has been added, will said link still be referenced?
It needs to be enabled/run BEFORE the source url/text is added to JD.

Quote:
Originally Posted by Demongornot View Post
Also, I would love an API method to get a "job" from a "jobId", please!
Code:
var myJobId = jobId;
var apiLinks = callAPI("linkgrabberv2", "queryLinks", {
    "jobUUIDs": [myJobId]
});
alert(apiLinks);
  #919  
Old 26.09.2019, 16:45
Demongornot is offline
JD Beta
 
Join Date: Sep 2019
Location: Universe, Local group, Milky Way, Solar System, Earth, France
Posts: 50
Default

Well... Using "job.setAssignJobID(true);" on the "New Crawler Job" trigger causes this:
"TypeError: Cannot find function setAssignJobID in object org.jdownloader.extensions.eventscripter.sandboxobjects.CrawlerJobSandbox@1c578d. (#1)"

Details here :
Spoiler:
net.sourceforge.htmlunit.corejs.javascript.EcmaError: TypeError: Cannot find function setAssignJobID in object org.jdownloader.extensions.eventscripter.sandboxobjects.CrawlerJobSandbox@86a6d7. (#1)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3629)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3613)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.typeError(ScriptRuntime.java:3634)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.typeError2(ScriptRuntime.java:3650)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.notFunctionError(ScriptRuntime.java:3714)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.getPropFunctionAndThisHelper(ScriptRuntime.java:2233)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.getPropFunctionAndThis(ScriptRuntime.java:2215)
at net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpretLoop(Interpreter.java:1333)
at script(:1)
at net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpret(Interpreter.java:798)
at net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.call(InterpretedFunction.java:105)
at net.sourceforge.htmlunit.corejs.javascript.ContextFactory.doTopCall(ContextFactory.java:411)
at org.jdownloader.scripting.JSHtmlUnitPermissionRestricter$SandboxContextFactory.doTopCall(JSHtmlUnitPermissionRestricter.java:119)
at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3057)
at net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.exec(InterpretedFunction.java:115)
at net.sourceforge.htmlunit.corejs.javascript.Context.evaluateString(Context.java:1212)
at org.jdownloader.extensions.eventscripter.ScriptThread.evalUNtrusted(ScriptThread.java:286)
at org.jdownloader.extensions.eventscripter.ScriptThread.executeScipt(ScriptThread.java:178)
at org.jdownloader.extensions.eventscripter.ScriptThread.run(ScriptThread.java:158)


Quote:
Originally Posted by mgpai View Post
It needs to be enabled/run BEFORE the source url/text is added to JD.
Thank you; though sadly for me it probably means I can't toggle it with anything other than the "New Crawler Job" trigger, as I guess the API event "STARTED" by "linkcrawler" fires too late, after the link urls & text are added. And even if it didn't, by the time I extracted the jobId from the API data to get the job and call that line, the first link could already have been added anyway...


Quote:
Originally Posted by mgpai View Post
Code:
var myJobId = jobId;
var apiLinks = callAPI("linkgrabberv2", "queryLinks", {
    "jobUUIDs": [myJobId]
});
alert(apiLinks);
Thanks, but it returns only [], even when I make sure links are still being crawled.
Using this code:
Spoiler:
Code:
if (event.publisher == 'linkcrawler' && event.id == 'STARTED') {
    var dt = JSON.parse(event.data);
    var myJobId = dt.jobId;
    var apiLinks = callAPI("linkgrabberv2", "queryLinks", {
        "jobUUIDs": [myJobId]
    });
    alert(apiLinks);
}
  #920  
Old 26.09.2019, 16:55
Jiaz is offline
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,290
Default

Quote:
Originally Posted by Demongornot View Post
Well... Using "job.setAssignJobID(true);" on the "New Crawler Job" trigger causes this:
"TypeError: Cannot find function setAssignJobID in object org.jdownloader.extensions.eventscripter.sandboxobjects.CrawlerJobSandbox@1c578d. (#1)"
The new methods in Job have been available since yesterday evening with the latest update. Just update your JDownloader.

Quote:
Originally Posted by Demongornot View Post
Thank you; though sadly for me it probably means I can't toggle it with anything other than the "New Crawler Job" trigger, as I guess the API event "STARTED" by "linkcrawler" fires too late, after the link urls & text are added. And even if it didn't, by the time I extracted the jobId from the API data to get the job and call that line, the first link could already have been added anyway...
It's important to make that script blocking/synchronized so the crawling process doesn't start before you can change settings. "New Crawler Job" is the easiest way, as you already have
access to the job itself and can change stuff. The crawling process will start after the script has ended (synchronized). Using the api event will also be possible (after the next core update + synchronized),
but at the moment there is no api method available to change the job remotely.

Quote:
Originally Posted by Demongornot View Post
Thanks, but it returns only [], even when I make sure links are still being crawled.
Using this code:
Spoiler:
Code:
if (event.publisher == 'linkcrawler' && event.id == 'STARTED') {
    var dt = JSON.parse(event.data);
    var myJobId = dt.jobId;
    var apiLinks = callAPI("linkgrabberv2", "queryLinks", {
        "jobUUIDs": [myJobId]
    });
    alert(apiLinks);
}
jobUUIDs expects a long array, and on STARTED it returns [] because the crawler has not yet started. You will have to wait for the crawling to be finished,
or you'll get no/partial results.
__________________
JD-Dev & Server-Admin