[See Bugtracker] Script or some other automated way: do not load links a second time?
#41
Many thanks!
How could one remove the "#duplicatelink" from all comments / for all duplicates with a single action / click, instead of removing them manually one by one?
Could the links be checked for duplicates in the linkgrabber already?
So if there is no connection to the server the files to be downloaded are stored on, or there is a waiting time for downloads, it is not possible to check for duplicates? So one would have to wait for the waiting time to finish to let the script check for the duplicates?
Although links (from Mega) had just been downloaded and added to the history, they were downloaded a second time after adding them to JD again. Why is this? On a next try they are skipped. In this list
Code:
Last edited by Fetter Biff; 09.02.2020 at 13:56.
#42
Code:
// Remove #duplicatelink and #duplicatefile from comment field for selected links
// Trigger: Downloadlist Contextmenu Button Pressed
if (name == "Clear dupe comment") {
    dlSelection.getLinks().forEach(function(link) {
        var myComments = ["#duplicatelink", "#duplicatefile"];
        myComments.forEach(function(myComment) {
            var comment = link.getComment() || "";
            if (comment.indexOf(myComment) > -1) link.setComment(comment.replace("#duplicatelink", "").replace("#duplicatefile", ""));
        })
    })
}
The script does not wait for a server response. It just checks the link or file name against the text file.
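The matching step can be sketched in plain JavaScript (a standalone sketch, outside JDownloader: the EventScripter's readFile() and link objects are replaced by a plain string argument here):

```javascript
// Standalone sketch of the duplicate check described above: the link's file
// name is compared against the lines of a local text file -- no server
// request is involved. "listFileContent" stands in for what readFile()
// returns in the real script.
function isDuplicateName(linkName, listFileContent) {
    var fileNames = listFileContent.trim().split("\r\n");
    return fileNames.some(function(fileName) {
        return fileName.trim() === linkName;
    });
}
```

This mirrors the `fileNames.some(...)` loop in the full script: the check is purely local string comparison, which is why no waiting time or server connection is involved.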
#43
Thank you very much for the new script!
#46
If you just want to find which links are marked as duplicates, use the search bar in the bottom toolbar. Select 'search by comment' and enter #duplicatelink in the search field.
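The same test the comment search performs can be sketched in plain JavaScript (hypothetical plain objects with a `comment` field stand in for the EventScripter link API):

```javascript
// Collect all links whose comment contains the duplicate marker -- the same
// substring test the comment search in the toolbar applies.
function findDuplicateMarked(links, marker) {
    return links.filter(function(link) {
        return (link.comment || "").indexOf(marker) > -1;
    });
}
```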
#47
Code:
// Skip download link, if filename exists in the user-specified list
// To download the file (prevent skipping), remove "#duplicatefile" from comment
// Trigger required: A Download Started
var fileNamesList = "I:\\jD-Downloads\\jD-Dummies.txt"; // < Set path to text file which contains the file names. Use "\\" or "/" as path separators.
var dupeFilecheck = link.getProperty("dupeFileCheck");
var linkName = link.getName();
var comment = link.getComment() || "";
var skipLink = function() {
    link.setSkipped(true); // To disable the link instead of skipping it, replace link.setSkipped(true); with link.setEnabled(false); in both the scripts.
    alert("Download Skipped: File \"" + linkName + "\" is present in files list.");
}
if (dupeFilecheck) {
    if (comment.indexOf("#duplicatefile") > -1) skipLink();
} else {
    var fileNames = readFile(getPath(fileNamesList)).trim().split("\r\n");
    fileNames.some(function(fileName) {
        if (linkName == fileName.trim()) {
            var text = "#duplicatefile";
            comment = comment ? text + " " + comment : text;
            link.setComment(comment);
            link.setEnabled(false);
            skipLink();
            return true;
        }
    })
    link.setProperty("dupeFileCheck", true);
}
#48
skipLink(); is an invalid command. To skip a link you have to use link.setSkipped(true);. Also, if you choose to disable the link, it is not necessary to also skip it. You will also have to add link.setEnabled(false); in the link check script, if you want to disable the links in the linkgrabber tab.
#49
Oh sorry, so many scripts - I looked at the wrong one. So I comment it out and could use it if need be. Can I use link.setEnabled(false); twice (if need be), where the lines are commented out?
Very strange, the script (obviously) worked with an invalid command.
Hope it is correct now: Code:
// Skip download link, if filename exists in the user-specified list
// To download the file (prevent skipping), remove "#duplicatefile" from comment
// Trigger required: A Download Started
var fileNamesList = "I:\\jD-Downloads\\jD-Dummies.txt"; // < Set path to text file which contains the file names. Use "\\" or "/" as path separators.
var dupeFilecheck = link.getProperty("dupeFileCheck");
var linkName = link.getName();
var comment = link.getComment() || "";
var skipLink = function() {
    link.setSkipped(true); // To disable the link instead of skipping it, replace link.setSkipped(true); with link.setEnabled(false); in both the scripts.
    alert("Download Skipped: File \"" + linkName + "\" is present in files list.");
}
if (dupeFilecheck) {
    if (comment.indexOf("#duplicatefile") > -1) link.setSkipped(true);
} else {
    var fileNames = readFile(getPath(fileNamesList)).trim().split("\r\n");
    fileNames.some(function(fileName) {
        if (linkName == fileName.trim()) {
            var text = "#duplicatefile";
            comment = comment ? text + " " + comment : text;
            link.setComment(comment);
            // link.setEnabled(false); in case duplicates should stay enabled in the download window
            link.setSkipped(true);
            return true;
        }
    })
    link.setProperty("dupeFileCheck", true);
}
Code:
// If download link is present in history file, mark it as duplicate (add "#duplicatelink" to comment)
// Trigger required: Packagizer Hook
if (state == "AFTER") {
    var url = link.getURL();
    var historyFile = getPath(JD_HOME + "/cfg/history.txt");
    var history = historyFile.exists() ? readFile(historyFile) : "";
    if (history.indexOf(url) > -1) {
        var text = "#duplicatelink";
        var comment = link.getComment();
        comment = comment ? text + " " + comment : text;
        link.setComment(comment);
        link.setEnabled(false);
    }
}
#50
skipLink(); is a valid command. It is a custom command which I had created in the script to skip the link and show the message, which I totally forgot. You can use skipLink(); (to skip and show the message) or link.setSkipped(true); (to only skip the link). You can disable the link in both the file check and link check scripts if you wish. I used skip instead of disable, because it is easier to unskip the link: only one click in the status bar or toolbar. Modification appears to be OK.
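The point that skipLink() is a helper defined inside the script rather than a built-in EventScripter call can be illustrated with a standalone sketch (mocked link object and mocked alert(), outside JDownloader):

```javascript
// Mock of the relevant part of a JD download link, for illustration only.
function makeMockLink(name) {
    return {
        name: name,
        skipped: false,
        setSkipped: function(value) { this.skipped = value; }
    };
}

var messages = [];                              // collects what alert() would show
function mockAlert(text) { messages.push(text); }

// The custom helper: skips the link AND shows a message, which a bare
// link.setSkipped(true); would not do.
function makeSkipLink(link) {
    return function() {
        link.setSkipped(true);
        mockAlert("Download Skipped: File \"" + link.name + "\" is present in files list.");
    };
}
```

Defining the helper once keeps the two actions (skipping and notifying) together, so both code paths in the script behave identically.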
#53
Code:
//skipLink();
link.setSkipped(true);
Code:
skipLink();
//link.setSkipped(true);
#54
It is also possible to comment part of the line. Code:
// Entire line is commented
This part is not commented // This part is commented
This is not commented /* This is commented */ This is not commented
/* This is commented
This is commented
This is commented */
#55
Ah yes, very understandable! Thank you very much!
#56
An unbelievably great script.
It seems it does not work perfectly with YouTube links / videos, because the YouTube links of (the same) videos somehow change? And because one can choose different variants (of the same video) with different compression. Is there a way to treat all of the different variants of the same video as duplicates? So once a video is downloaded, all other smaller or bigger sizes of that video would count as duplicates for the script?
__________________
Aktuelles Windows
#57
Code:
var url = link.getContentURL() || link.getPluginURL();
with:
Code:
var url = link.getHost() == "youtube.com" ? link.getContainerURL() : link.getContentURL() || link.getPluginURL();
You can manually strip the variant string (following the video ID) from the urls which already exist in your 'history' file. (Keep a backup of the file, just in case.) Example string to remove from the url:
Code:
#variant=ew0KICAiaWQiIDogIk1QNF9IMjY0XzM2MFBfMzBGUFNfQUFDXzk2S0JJVCIsDQogICJkYXRhIiA6ICJ7XHJcbiAgXCJhQml0cmF0ZVwiIDogLTEsXHJcbiAgXCJoZWlnaHRcIiA6IDM1MixcclxuICBcIndpZHRoXCIgOiA2NDAsXHJcbiAgXCJmcHNcIiA6IDI5LFxyXG4gIFwicHJvamVjdGlvblwiIDogXCJOT1JNQUxcIlxyXG59Ig0KfQ%3D%3D |
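The manual clean-up of the history file could also be sketched like this (plain JavaScript; file reading/writing is omitted, only the string transformation is shown — the assumption is that the variant suffix always starts with "#variant=" and runs to the end of the url):

```javascript
// Remove the "#variant=..." suffix from a url, so every variant of the same
// video collapses to one base url.
function stripVariant(url) {
    var index = url.indexOf("#variant=");
    return index > -1 ? url.substring(0, index) : url;
}

// Apply the transformation to every line of the history file's content.
function stripVariantsFromHistory(historyText) {
    return historyText.split("\n").map(stripVariant).join("\n");
}
```

Urls without a variant suffix pass through unchanged, so the sketch is safe to run over the whole file (after making the backup mentioned above).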
#58
Thank you very much!
I have changed one of the scripts like this: Code:
// Skip link if it is present in download history (has "#duplicatelink" in comment)
// To download the file (prevent skipping), remove "#duplicatelink" from comment
// Trigger required: A Download Started
var comment = link.getComment() || "";
if (comment.indexOf("#duplicatelink") > -1) {
    // var url = link.getContentURL() || link.getPluginURL(); // only the actually downloaded variants of YouTube videos are treated as duplicates
    var url = link.getHost() == "youtube.com" ? link.getContainerURL() : link.getContentURL() || link.getPluginURL(); // all variants of YouTube videos are treated as duplicates
    link.setSkipped(true); // To disable the link instead of skipping it, replace link.setSkipped(true); with link.setEnabled(false); in both the scripts
    alert("Download Skipped: \"" + url + "\" is present in history file.");
}ff
What do I have to do? Search for a part of the example string in the history file?
#59
Obviously I made a mistake, the script no longer seems to work properly:
Code:
net.sourceforge.htmlunit.corejs.javascript.EcmaError: ReferenceError: "ff" is not defined. (#12)
    at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3629)
    at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.constructError(ScriptRuntime.java:3613)
    at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.notFoundError(ScriptRuntime.java:3683)
    at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.name(ScriptRuntime.java:1690)
    at net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpretLoop(Interpreter.java:1622)
    at script(:12)
    at net.sourceforge.htmlunit.corejs.javascript.Interpreter.interpret(Interpreter.java:798)
    at net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.call(InterpretedFunction.java:105)
    at net.sourceforge.htmlunit.corejs.javascript.ContextFactory.doTopCall(ContextFactory.java:411)
    at org.jdownloader.scripting.JSHtmlUnitPermissionRestricter$SandboxContextFactory.doTopCall(JSHtmlUnitPermissionRestricter.java:119)
    at net.sourceforge.htmlunit.corejs.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3057)
    at net.sourceforge.htmlunit.corejs.javascript.InterpretedFunction.exec(InterpretedFunction.java:115)
    at net.sourceforge.htmlunit.corejs.javascript.Context.evaluateString(Context.java:1212)
    at org.jdownloader.extensions.eventscripter.ScriptThread.evalUNtrusted(ScriptThread.java:288)
    at org.jdownloader.extensions.eventscripter.ScriptThread.executeScipt(ScriptThread.java:180)
    at org.jdownloader.extensions.eventscripter.ScriptThread.run(ScriptThread.java:160)
Last edited by Dockel; 10.06.2020 at 22:32.
#60
Change the last line
Code:
}ff
to:
Code:
}
#61
Ah, completely missed that. I guess I entered it accidentally while pressing Ctrl+F (STRG+F) to search in the script.
Works now, many thanks!
#62
Maybe like this...
Hello.
There is an "EventScript" that moves finished downloads into a new package: board.jdownloader.org/showpost.php?p=450153&postcount=950 Links which have already been there are then marked as duplicates. I use this here to avoid downloading things twice. Note: only <100,000 links may be in the package. Before it gets that far, simply rename it. Hope this helps. A bit later in that thread there is also another script, but I have not tested that one. Bye, Christian
#63
Hello Christian,
thank you very much for the link. I don't quite understand the advantage of that script yet. What is it useful for, that duplicate links can be moved into a different folder (which is the same as a package, right?)? I also use a script by mgpai that prevents duplicate downloads, but that is probably a different one from the one you mean. The links shown as duplicates I can then simply delete.
#64
The script I pointed to does the following: in the linkgrabber, links that are already in the downloader are marked as "duplicates". Before you add all collected links to the download list, you are asked whether you also want to add the duplicates, so you can avoid links that are already in the downloader. The script itself moves links that have finished downloading into a new package called "Already Downloaded", so the links are not actually removed from the downloader. Since the links thus remain in the downloader, the linkgrabber recognizes them as duplicates and you can prevent downloading them again. You just have to take care to rename the "Already Downloaded" package from time to time, since at most 99,999 (<100,000) links fit into one package. I do this monthly and append e.g. "202005" (for the links from May 2020). There is one small drawback: the script does not detect when a downloaded link has a mirror link; the mirror is not moved automatically, you have to do that yourself. I hope the description helps. Bye, Christian
#65
Yes, yes, it certainly helps, thank you very much!
#66
@tarkett: Thanks for helping with the error.
On a side note, having a large number of links in the list requires a lot of memory. It would be better to keep only the minimum required links in it and move the rest to a text file.
#67
How could one exclude downloads / files from being marked / handled as duplicates, to avoid the following:
Download links from zippy are almost always / often added to the linkgrabber, and then to the download window, named "file.html". In the download window some links change to their real names, while some keep the old name (file.html) and are marked as duplicates. When one clicks 'force download' or 'check online status' they always or often get their real names.
#68
PSP has fixed the file name issue with ZS, but I guess it is not live yet, since core updates are pending. As a workaround until the update is available, you can create a packagizer rule which renames the file using the file ID in the url. If your desired workflow is different, I can modify the existing scripts or create a new one.
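As an illustration of that workaround, extracting the file ID from a Zippyshare-style url could look like this (a sketch assuming urls of the form `.../v/<fileID>/file.html`; the actual packagizer rule is configured in the GUI, not scripted):

```javascript
// Pull the file ID out of a Zippyshare-style url of the assumed form
// ".../v/<fileID>/file.html". Returns null when the url does not match.
function zippyshareFileId(url) {
    var match = url.match(/\/v\/([^\/]+)\/file\.html/);
    return match ? match[1] : null;
}
```

The extracted ID could then serve as the file name, which avoids every link being called "file.html" and being wrongly grouped as a duplicate/mirror.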
#69
it is marked as a duplicate although it is not one (could one avoid that, so that it will be downloaded, since it is not a duplicate?). When I remove the duplicate comment, the name "file.html" is changed to the real name and the file is downloaded. Could one somehow make JD download it without having to do that?
#70
Very unlikely, since only urls are matched, not file names. Also, they are matched and the link is marked as 'duplicate' when the links are added to the linkgrabber tab. It should not be marked as such unless the same url is present in the history text file.
If you wish to troubleshoot it, please find me in JD Chat when you are free. Code:
kiwiirc.com/nextclient/irc.freenode.net/#jdownloader |
#71
@mgpai
He is asking for a script to work around a Zippyshare plugin issue which has already been fixed, but the updates have not yet been released. He can also easily work around this by using packagizer rules to set the fileID of zippyshare URLs as the filename, though this might not work either, as it will then be set as the final filename. -psp-
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#73
Yeah - though I'm not yet 100% sure whether that will work.
I was never able to reproduce the user's issue. We'll see after the update. -psp-
#74
To reproduce, add a dummy (non-working) connection in the connection manager, which will make ZS links uncheckable; all links will then be added to the linkgrabber with the same package/file name.
Move the package to the download tab, disable the dummy connection and start the downloads. Only one of them will start downloading with the correct name and the rest will be marked as mirrors. Not sure if this will help you in any way to diagnose the issue, but there is no harm in sharing it. A script can also be used, if the fix doesn't work for any reason.
#75
ZS URLs are already uncheckable here because ZS is geo-blocking German IPs and I'm in Germany.
-psp-
#76
I am able to reproduce it consistently. Let me know if I can help in any way. You can also check via TV if that would be helpful.
#77
-psp-
#78
I have not set up an IDE - only the public release. I had tried a packagizer rule though, and it did not work.
#79
Ahh sorry, I forgot that the ZS plugin is closed source.
Wait for the update and try again then. -psp-
#80
Here are some errors shown while adding two uloz links:
Is there anything I could do?