#1
Hello,
I have been doing this manually for months now. I tried a few things, like converting the RSS feed to a txt file, but the real solution would be through the EventScripter. What I need is really simple: I just want the feed of this specific URL added to JD: **External links are only visible to Support Staff**. The RSS URL appears to be **External links are only visible to Support Staff** and it should grab these URLs for the time being: **External links are only visible to Support Staff** **External links are only visible to Support Staff**. Please guide me through writing the script, or just spoon-feed me, I don't mind! I have already done all the necessary filtering and Packagizer rules thanks to the support I got here; it's just this last piece that is missing.
#2
@I3ordo: You want to add the feed, and you already have rules for the individual posts/items? So the idea is that you add the feed, JDownloader finds all the links, and your existing rules take over, right?
__________________
JD-Dev & Server-Admin
#3
Exactly, Jiaz! I previously made the Packagizer rules so that if the source matches the regex, the source URL is applied to the files. There is nothing wrong with my setup, given the nature of the link contents (always a single rar file and an image file that gets auto-renamed based on the source URL). So all I need is to get the source URLs from that link added to JD.
The EventScripter should dig through this address **External links are only visible to Support Staff** and grab the feed links (just links like **External links are only visible to Support Staff**).
#4
And why do you want to go with an EventScripter script instead of a simple LinkCrawler rule?
You would only need an EventScripter script if you e.g. wanted JD to look for new items in that feed every X days...
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#5
As you said, I want it to check every day. Besides, LinkCrawler rule creation is alien to me... it is as complicated as the EventScripter, but it also does not seem to have any scheduling features.
#6
Well, both of the things you want done need advanced functionality.
You will most likely not find any other tool that lets you accomplish this more easily. Here is my suggestion on what to do:

1. I wrote two LinkCrawler rules for you that will crawl all relevant items from the relevant source pages (also available at pastebin.com/raw/rHqH1YCJ): Code:
[
  {
    "enabled": true,
    "logging": false,
    "maxDecryptDepth": 1,
    "name": "crawl all single item URLs from 'down3dmodels.com/feed/'",
    "pattern": "**External links are only visible to Support Staff**,
    "rule": "DEEPDECRYPT",
    "packageNamePattern": null,
    "passwordPattern": null,
    "deepPattern": "<link>(https?://[^<]+)</link>"
  },
  {
    "enabled": true,
    "logging": false,
    "maxDecryptDepth": 1,
    "name": "crawl all URLs inside all URLs from 'down3dmodels.com' except 'down3dmodels.com/feed/'",
    "pattern": "**External links are only visible to Support Staff**,
    "rule": "DEEPDECRYPT",
    "packageNamePattern": "<title>(.*?)</title>",
    "passwordPattern": null,
    "deepPattern": ">(https?://(?!down3dmodels\\.com/)[^<]+)</"
  }
]
I noticed that the "/feed/" URLs won't get crawled; JD downloads the feed instead because it gets recognized as downloadable content. I can kind of see how that happens, but Jiaz will need to investigate this in order to find a solution.

2. Once this is working for you, ask for help in our EventScripter thread: ask for a script that auto-adds a specified URL every X hours, and also ask what you can do to avoid duplicates when doing that.
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#7
@I3ordo: I recommend a solution directly in the EventScripter: a script that loads the feed and parses the wanted links that match the pattern of the second LinkCrawler rule by pspzockerscene.
@pspzockerscene: I will look into the issue you reported.
@I3ordo: With the next update, JDownloader won't try to download the feed anymore and will allow it to be processed by the rules.
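A rough sketch of that approach (not a finished script), assuming the EventScripter helpers getPage() and callAPI(), again using down3dmodels.com/feed/ as a stand-in for the redacted feed URL, and extracting the item URLs with the same <link>...</link> regex as the first rule's deepPattern (the extracted URLs are then the ones the second rule handles): Code:
// EventScripter sketch, trigger "Interval": fetch the feed and add its item links.
// Assumption: getPage() returns the raw page source as a string.
var feedUrl = "https://down3dmodels.com/feed/"; // stand-in for the redacted URL
var source = getPage(feedUrl);

// Pull every <link>https://...</link> entry out of the RSS XML,
// mirroring the deepPattern of the first LinkCrawler rule.
var links = [];
var re = /<link>(https?:\/\/[^<]+)<\/link>/g;
var match;
while ((match = re.exec(source)) !== null) {
    links.push(match[1]);
}

if (links.length > 0) {
    // Add the item URLs to the LinkGrabber; the second rule and the
    // existing Packagizer rules take over from there.
    callAPI("linkgrabberv2", "addLinks", {
        "links": links.join("\n"),
        "autostart": false
    });
}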
__________________
JD-Dev & Server-Admin
Last edited by Jiaz; 31.05.2022 at 19:09.
#8
Hi, I saw an update for JDownloader and waited for it to finish, then
created an event trigger button. My event script now looks like this, and nothing happens. Code:
if (name == "D3D_Grab") [
  {
    "enabled": true,
    "logging": false,
    "maxDecryptDepth": 1,
    "name": "crawl all single item URLs from 'down3dmodels.com/feed/'",
    "pattern": "**External links are only visible to Support Staff**,
    "rule": "DEEPDECRYPT",
    "packageNamePattern": null,
    "passwordPattern": null,
    "deepPattern": "<link>(https?://[^<]+)</link>"
  },
  {
    "enabled": true,
    "logging": false,
    "maxDecryptDepth": 1,
    "name": "crawl all URLs inside all URLs from 'down3dmodels.com' except 'down3dmodels.com/feed/'",
    "pattern": "**External links are only visible to Support Staff**,
    "rule": "DEEPDECRYPT",
    "packageNamePattern": "<title>(.*?)</title>",
    "passwordPattern": null,
    "deepPattern": ">(https?://(?!down3dmodels\\.com/)[^<]+)</"
  }
]
Last edited by I3ordo; 01.06.2022 at 23:40.
#9
@I3ordo: Those are LinkCrawler rules, not an EventScripter script.
The two rules do the following: 1) support/parse the RSS feed URL, 2) support/parse an item page. The EventScripter script you have to write yourself, or ask for help here: https://board.jdownloader.org/showthread.php?t=70525 , so that it periodically adds the feed URL to the crawler.
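To illustrate the split, a minimal untested sketch: the two rules from post #6 belong in JDownloader's LinkCrawler rules (advanced settings), not in the script, while the EventScripter part, with the trigger "Toolbar Button pressed" and the button name from post #8, could look roughly like this: Code:
// EventScripter sketch, trigger "Toolbar Button pressed".
// Assumption: "D3D_Grab" is the button created in post #8 and the feed URL
// below stands in for the redacted one. The LinkCrawler rules are NOT part
// of this script; they live in JDownloader's settings.
if (name == "D3D_Grab") {
    callAPI("linkgrabberv2", "addLinks", {
        "links": "https://down3dmodels.com/feed/",
        "autostart": false
    });
}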
__________________
JD-Dev & Server-Admin