10.02.2020, 07:06
mgpai is offline
Script Master
Join Date: Sep 2013
Posts: 939

So deactivating (instead of skipping) the links would not work, I guess.
To disable the link instead of skipping it, replace link.setSkipped(true); with link.setEnabled(false); in both scripts.
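A minimal sketch of that one-line swap, using a hypothetical stub object in place of the Event Scripter link object (the real scripts call setSkipped/setEnabled on each link; the stub and its getState helper are only for illustration):

```javascript
// Stub standing in for the Event Scripter link object (illustrative only).
function makeLinkStub() {
    var state = { skipped: false, enabled: true };
    return {
        setSkipped: function (v) { state.skipped = v; },
        setEnabled: function (v) { state.enabled = v; },
        getState: function () { return state; }
    };
}

var link = makeLinkStub();

// Before: mark the duplicate as skipped.
// link.setSkipped(true);

// After: disable it instead, so it stays in the list but will not start.
link.setEnabled(false);
```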

So it is not possible to keep the "#duplicatefile" in the comments when JD closes / restarts, I assume.
The comment will remain in the comment field until you delete it. The links will, however, be unskipped automatically when the download is started, so the script will check the comment and skip them again.

How could one remove the "#duplicatelink" from all comments / for all duplicates with a single action / click, instead of removing them manually one by one?
// Remove #duplicatelink and #duplicatefile from the comment field of selected links
// Trigger: Downloadlist Contextmenu Button Pressed

if (name == "Clear dupe comment") {
    dlSelection.getLinks().forEach(function(link) {
        var myComments = ["#duplicatelink", "#duplicatefile"];
        myComments.forEach(function(myComment) {
            var comment = link.getComment() || "";
            if (comment.indexOf(myComment) > -1) link.setComment(comment.replace(myComment, ""));
        });
    });
}
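The tag-removal step itself can be checked outside JD with plain JavaScript (a sketch mirroring the replace logic of the script above; the sample comments are hypothetical):

```javascript
// Strip the dupe tags from a comment string, then tidy stray whitespace.
function clearDupeTags(comment) {
    var tags = ["#duplicatelink", "#duplicatefile"];
    tags.forEach(function (tag) {
        comment = comment.replace(tag, "");
    });
    return comment.trim();
}

console.log(clearDupeTags("my notes #duplicatefile")); // "my notes"
console.log(clearDupeTags("#duplicatelink"));          // ""
```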

Could the links be checked for duplicates in the link grabber already?

So if there is no connection to the server the files to be downloaded are stored on, or there is a waiting time for downloads, is it not possible to check for duplicates? So one has to wait for the waiting time to finish before the script can check for the duplicates?
The script follows the same sequence as JD for the dupe check. The dupe link check is done only when adding links to the linkgrabber. The dupe file check is done only when the download starts.

The script does not wait for a server response. It just checks the link or file name against the text file.
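In other words, the check is a simple membership test: the candidate name is compared against the lines of the history file, roughly like this (a sketch; the actual script reads the file through the Event Scripter API, and the file name and contents here are made up):

```javascript
// Sketch: dupe check as a membership test against history-file lines.
function isDupe(name, historyText) {
    var lines = historyText.split(/\r?\n/);
    return lines.indexOf(name) > -1;
}

var history = "archive1.rar\narchive2.rar";
console.log(isDupe("archive2.rar", history)); // true
console.log(isDupe("archive3.rar", history)); // false
```

No network request is involved, which is why the check works even while JD has no connection to the file host.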

Although links (from Mega) had just been downloaded and added to the history, they were downloaded a second time after being added to JD again. Why is this? On the next try they are skipped.
The dupe check is performed only when adding the link to the linkgrabber. I am assuming that it was present in the download list before it could be checked. To ensure all existing pending links in the download list are dupe checked, you will have to remove them and add them back again.

In this list ... there are trailing spaces in each line (after the names of the archives already downloaded). Do the spaces prevent the already downloaded archives from being recognized?
Yes, leading and trailing spaces can prevent the filenames from matching. I have modified the script in Post #24 to remove leading/trailing spaces from the filenames in your text file before matching them.
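The fix amounts to trimming each history line before the comparison (a sketch of that matching step, not the exact Post #24 code; the sample names are hypothetical):

```javascript
// Trim leading/trailing whitespace from each history line before matching,
// so "file.zip   " in the text file still matches the filename "file.zip".
function isDupeTrimmed(name, historyText) {
    return historyText.split(/\r?\n/).some(function (line) {
        return line.trim() === name.trim();
    });
}

console.log(isDupeTrimmed("file.zip", "file.zip   \nother.zip")); // true
console.log(isDupeTrimmed("missing.zip", "file.zip   \nother.zip")); // false
```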