#1
Adding links from the clipboard via cmd line or something else...
Hi!
Just wondering: is it possible to paste the clipboard (which contains one download link per line) and make JD accept and start those downloads through a Windows .bat file? For example: c:\path\jdownloader.exe -getclipboard -startdownloads. Alternatively, can we tell JD to read the clipboard and start the downloads on startup, via the Event Scripter?

Longer story: I am trying to fully automate my daily download procedure via the Task Scheduler. First, with help from a friend, I got a PowerShell script that goes to a specified website and grabs all necessary links to the clipboard. Then I manually activate JD, press Ctrl+V, hit "yes", and hit "Start all downloads". We could not figure out how to make JD receive the clipboard (which contains those links), answer "yes" to the LinkGrabber's question, and start those downloads. The clipboard's contents look like this:
somesite.com/some_article_01
somesite.com/some_article_02
somesite.com/some_article_03
...
#2
@I3ordo: Please check FolderWatch/Crawljobs, see https://support.jdownloader.org/Know...ch-basic-usage
That way you can create crawljob files that contain links and customize some things like the package name or the download directory. You can then add those crawljob files manually or place them in a specific folder, and JDownloader will pick them up automatically.

Why query/paste the clipboard at all? JDownloader will automatically pick up a link once it is copied to the clipboard, so just let JDownloader run in the background and tell it to auto-add/auto-start new links.
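For illustration, a minimal crawljob file in the simple key=value flavour could look like the sketch below. The URL, package name, and folder are invented placeholders; check the FolderWatch support article for the full set of supported keys:

```
# example.crawljob — drop this into the folder that FolderWatch monitors
# (all values here are illustrative placeholders, not from this thread)
text=https://example.com/some_article_01
packageName=daily-grabs
downloadFolder=C:\Downloads\daily
enabled=TRUE
autoStart=TRUE
autoConfirm=TRUE
```

FolderWatch picks the file up, adds the link(s) under the given package name, and — with autoStart/autoConfirm set — begins downloading without any manual confirmation.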
__________________
JD-Dev & Server-Admin
#3
__________________
JD-Dev & Server-Admin
#4
That way you teach JDownloader how to handle unsupported URLs and tell it what content you're interested in.
__________________
JD-Dev & Server-Admin
#5
In case you need further help with this or have questions, just ask.
__________________
JD-Dev & Server-Admin
#6
Well, I already have this PowerShell code that gets the links:
Code:
$Pages = "3" # Set the page number you would like this to stop at
#$Pages = Read-Host "What page number would you like this to stop at?" # Will ask you every time you run
# Swap the comment marker between the two lines above: line 1 is a fixed value, line 2 asks on every run
$export = @()
$currentPageString = "1"
$Pageint = [int]$currentPageString
While ($Pageint -le $Pages) {
    $link = '**External links are only visible to Support Staff**' + $Pageint
    $links = (Invoke-WebRequest -Uri $link).Links | Select-Object href
    $export += $links | Where-Object {
        $_.Href -like "***External links are only visible to Support Staff**" -and
        $_.Href -notlike "**External links are only visible to Support Staff**" -and
        $_.Href -ne "**External links are only visible to Support Staff**" -and
        $_.Href -ne "**External links are only visible to Support Staff**" -and
        $_.href -notlike "*#more*" -and
        $_.href -notlike "*#respond*" -and
        $_.href -notlike "*#comments*"
    } | Select-Object Href
    $Pageint++
}
Set-Clipboard $export.href
C:\temp\Jdownloader\Jdownloader2.exe # Change the path to where your JDownloader2.exe sits

I already paid some $ for the code above; I could pay a bit more, but I can't find anyone offering JD coding services on Fiverr yet...
#7
I hope not too much $, because you could have easily achieved the same via LinkCrawler Rules natively in JDownloader.
__________________
JD-Dev & Server-Admin
#8
@I3ordo: Please see here for a LinkCrawler rule for your links:
https://board.jdownloader.org/showth...t=down3dmodels
You just have to add those rules to your JDownloader and it will be able to pick up those links from the clipboard. Just let JDownloader run in the background. It's important that JDownloader is running before you add something to the clipboard, otherwise it will not be picked up.
__________________
JD-Dev & Server-Admin
#9
Well, I was stuck at
"2. Ask for help in our EventScripter thread: ask for a script that will auto-add a specified URL every X hours, and also ask what you can do to avoid duplicates when doing that."...
#10
@I3ordo: Your script should do fine once you have the LinkCrawler rule in place, so that JDownloader knows how to handle those URLs.
Have you checked the content of the clipboard after execution of the script, to verify that it matches your expectations?
__________________
JD-Dev & Server-Admin
#11
Hi, I generally try to squeeze in some JD knowledge during the breaks I take when I am not modeling and rendering stuff, and I am sure you have a lot going on too.
Anyway, I just wanted to say I appreciate it greatly. Thanks for always responding to my questions. The PowerShell script I ordered on Fiverr works as expected: the clipboard is populated with article links, one per line. Then all I have to do is manually add the links from the clipboard to the LinkGrabber (by Ctrl+V) and have JD auto-start the downloads. The archive files and images are then automatically renamed and assigned to a separate folder by JD's Packagizer rules. At the moment, I just want JD to grab the clipboard links into the LinkGrabber by itself; that's why I asked for a command line or a JD script that grabs the clipboard and initiates the downloads without human interaction.
Last edited by I3ordo; 23.09.2022 at 14:07.
#12
You have to Ctrl+V (paste) because JDownloader doesn't support these URLs out of the box, but we already provided you with working LinkCrawler rules here: https://board.jdownloader.org/showpo...69&postcount=6
No need for any command line or anything else. You just have to set up the already-provided LinkCrawler rule so JDownloader will be able to automatically pick up/process those links from the clipboard.
__________________
JD-Dev & Server-Admin
#13
A short video explanation of the current situation:
**External links are only visible to Support Staff**
**External links are only visible to Support Staff**
#14
I'm sorry, I haven't watched your video 100%, but referring to the end, where you say "I can't code" while you have the forum thread containing the LinkCrawler rules Jiaz mentioned open:
You do not need to be able to code in order to use those rules, but you do need those rules so that JD can auto-handle those URLs when they get copied to your clipboard.
1. Quickly read our support article about LinkCrawler rules.
2. Go to this thread, grab those two rules, add them to JD, and afterwards JD should handle the links shown in your video whenever you copy them and clipboard observation is active.
Also ignore everything that was posted about the "EventScripter" in the linked thread: that user wanted JD to automatically re-scan a specific link every X time to find new content. This is not what you want, so you can ignore all the info we gave him as a reply to that specific question.
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#15
__________________
JD-Dev & Server-Admin
#16
Yep, that was me.
Also, I pasted the LinkCrawler rule and already had clipboard monitoring active... That rule:
Code:
[
  {
    "cookies" : null,
    "deepPattern" : "<link>(https?://[^<]+)</link>",
    "formPattern" : null,
    "id" : 1664062694270,
    "maxDecryptDepth" : 1,
    "name" : "crawl all single item URLs from 'down3dmodels.com/feed/'",
    "packageNamePattern" : null,
    "passwordPattern" : null,
    "pattern" : "**External links are only visible to Support Staff**",
    "rewriteReplaceWith" : null,
    "rule" : "DEEPDECRYPT",
    "enabled" : true,
    "logging" : false,
    "updateCookies" : true
  },
  {
    "cookies" : [],
    "deepPattern" : ">(https?://(?!down3dmodels\\.com/)[^<]+)</",
    "formPattern" : null,
    "id" : 1664062694271,
    "maxDecryptDepth" : 1,
    "name" : "crawl all URLs inside all URLs from 'down3dmodels.com' except 'down3dmodels.com/feed/'",
    "packageNamePattern" : "<title>(.*?)</title>",
    "passwordPattern" : null,
    "pattern" : "**External links are only visible to Support Staff**",
    "rewriteReplaceWith" : null,
    "rule" : "DEEPDECRYPT",
    "enabled" : true,
    "logging" : false,
    "updateCookies" : true
  }
]

My only existing Packagizer rule is the one that only works with archive files and images; I do the additional filtering, like host and file-type filtering, in the LinkGrabber window (only archives from rapidgator, hoster set to down3dmodels, no facebook or nitroflare links, etc.). Can the crawler rule be modified to also include the article images? I don't see anything in pspzockerscene's rule that includes rar files and excludes everything else, it seems.
Last edited by I3ordo; 25.09.2022 at 15:08.
#17
Ohh I see. Lack of concentration was clearly there :D
I've modified your rule and enhanced the regular expression to also include imgur.com URLs. Please keep in mind that this will fail if the article image is hosted on another website; it is of course possible to make the rule more dynamic if needed. Please also keep in mind that I recommend learning regular expressions so that you can work on such rules yourself in the future. Our support article links tools you can use to test/learn regular expressions: https://support.jdownloader.org/Know...kcrawler-rules
Rules of type DEEPDECRYPT look for certain things inside a website's HTML code. What to look for is up to you; regular expressions are used to find/filter those things. Modified LinkCrawler rule(s):
Code:
[
  {
    "deepPattern": "<link>(https?://[^<]+)</link>",
    "maxDecryptDepth": 1,
    "name": "crawl all single item URLs from 'down3dmodels.com/feed/'",
    "packageNamePattern": null,
    "pattern": "**External links are only visible to Support Staff**",
    "rewriteReplaceWith": null,
    "rule": "DEEPDECRYPT",
    "enabled": true,
    "logging": false,
    "updateCookies": true
  },
  {
    "cookies": [],
    "deepPattern": "((https?://i\\.imgur\\.com/[A-Za-z0-9]+\\.[A-Za-z]+)|>(https?://(?!down3dmodels\\.com/)[^<]+)</)",
    "maxDecryptDepth": 1,
    "name": "crawl all URLs inside all URLs from 'down3dmodels.com' except 'down3dmodels.com/feed/'",
    "packageNamePattern": "<title>(.*?)</title>",
    "pattern": "**External links are only visible to Support Staff**",
    "rewriteReplaceWith": null,
    "rule": "DEEPDECRYPT",
    "enabled": true,
    "logging": false,
    "updateCookies": true
  }
]
pastebin.com/raw/SHAH8dhn
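If you want to sanity-check what such a deepPattern will extract before adding the rule to JD, you can test the regular expression outside JDownloader. A small sketch in Python — the HTML snippet and all URLs in it are invented placeholders, not taken from the real site:

```python
import re

# Invented stand-in for a down3dmodels-style article page. This only
# illustrates how a DEEPDECRYPT "deepPattern" (a plain regular expression)
# pulls links out of page source.
html = """
<title>Some 3D Model Article</title>
<p>Preview: <img src="https://i.imgur.com/Ab12Cd3.jpg"></p>
<p>Download: <a href="https://rapidgator.net/file/abc123">https://rapidgator.net/file/abc123</a></p>
<p>Home: <a href="https://down3dmodels.com/">https://down3dmodels.com/</a></p>
"""

# Same structure as the modified rule's deepPattern: match imgur image URLs,
# plus any URL between '>' and '<' that is NOT on down3dmodels.com itself.
deep_pattern = r"((https?://i\.imgur\.com/[A-Za-z0-9]+\.[A-Za-z]+)|>(https?://(?!down3dmodels\.com/)[^<]+)</)"

# group(2) holds an imgur match, group(3) a generic external-link match.
matches = [m.group(2) or m.group(3) for m in re.finditer(deep_pattern, html)]
print(matches)
# → ['https://i.imgur.com/Ab12Cd3.jpg', 'https://rapidgator.net/file/abc123']
```

Note that the site's own homepage link is skipped by the negative lookahead, which is exactly the behaviour the rule relies on. The same pattern can also be pasted into one of the online regex testers linked in the support article.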
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#18
@pspzockerscene
Hey, this is great. Now, seeing the added rule, I can make sense of how to build one myself; I got familiar with regex (with the help of regexr.com while writing the Packagizer rule). I can see that the downloads are grabbed and waiting to be confirmed. I then revisited my Packagizer rule for d3d and ticked the "auto confirm" option. Now it is fully automated — super happy! Checking the rule that you modified: it just initiates the deep-crawling process upon a regex match, and the other crucial part was done by my Packagizer rule and filtering... Funny how things become much more evident once the solution arrives. I was spoon-fed again, but it was an educational spoon in the end! Thank you very much, I am thrilled to have turned this repetitive task into "automation". @pspzockerscene @Jiaz
#19
Thanks for your feedback, I'm glad it's working for you now.
I'll mark this thread as "Solved" now, though it will remain open in case you have further questions.
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
Last edited by pspzockerscene; 27.09.2022 at 15:10. Reason: Fixed typo
#20
@I3ordo: Thanks for your feedback! And sorry for the long road and all the ping-pong.
__________________
JD-Dev & Server-Admin