#21
The plugin only supports /release/[a-z0-9\\-]+
__________________
FAQ: How to upload a Log
Last edited by raztoki; 23.02.2020 at 15:17.
#22
Is there any way around this?
#23
By creating linkcrawler rules for the unsupported patterns, or by editing the decrypter plugin to support the additional website functions.
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :]
#24
Quote:
Where is the decrypter plugin for this domain, and what is it named? I have yet to find it.
#25
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :] |
#26
@RPNet-user
I merged these, so see post #6 for your second question. The domain is the same, rmz.cr. The rule should crawl /l/m to find the /release/ links that are supported by the plugin.
#27
@raztoki
It will only create support for a URL pattern that originates from the top-level domain (rmz.cr), which is in the plugin, and not from rmz.cr/l/m. The pattern I modified should have worked even without the numbered pages, so unless I have to add/edit the deepPattern, it will not work, provided that the domain in the rule overrides the domain supported by the plugin. All file links originate from the top-level domain plus the '/release' sub-path, even if I get there from rmz.cr/l/m.
Last edited by RPNet-user; 24.02.2020 at 07:47.
#28
I don't agree with your statement. It helps you write and match in real time. You then adapt for the situation: Java code needs the extra escaping (otherwise it will show up as an error), inside the JD graphical interface you shouldn't need the extra escaping, and link crawler rules might need it, as they are JSON.
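To make the escaping levels concrete, here is a small sketch (the rule fragment is made up) showing how a JSON rule's doubled backslashes become a single backslash once the JSON is parsed:

```python
import json

# A made-up rule fragment: in the JSON source each regex backslash is
# doubled, so "rmz\\.cr" in the file is really the regex rmz\.cr.
rule_json = '{ "pattern" : "https?://rmz\\\\.cr/" }'

pattern = json.loads(rule_json)["pattern"]
print(pattern)  # the parsed pattern: https?://rmz\.cr/
```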
#29
Yes, it will create the pattern, but it will not grab any links from posts/titles where the URL is rmz.cr/l/m, because when you go to any page or post originating from the newly created pattern, the URL will always be rmz.cr/release. The rmz.cr/l/m path on their site is simply for displaying different categories (/l/m for movies, /l/s for series, and /l/b for both); it does not provide URL-addressable access via those paths. Therefore, how will the pattern know to grab a link that contains that path?
#30
Create another rule where deepPattern contains /l/m for example.
#31
Check the screenshot for a further explanation. The left side is rmz.cr and the right side is rmz.cr/l/m, but at the bottom you will see that they point to the same location: rmz.cr/release/movietitle. Therefore, no pattern will be able to differentiate between the locations of the links, since the path to the actual title/post is the same.
#32
It won't matter, as the dedicated plugin doesn't scan this either.
So you first need a "pattern" to listen to /l/m etc., and then a "deepPattern" within the HTML body (maybe &lt;table&gt;) to return just the /release links; the dedicated plugin will then do the rest. I personally wouldn't follow multiple pages deep; just keep the links you want to scan in a txt file and copy them all in. Then they are all single tasks.
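As a rough illustration of what a deepPattern does with the fetched page body (the HTML below is invented; the real page layout may differ):

```python
import re

# Invented listing-page HTML; a real /l/m page will look different.
html = """
<table>
  <tr><td><a href="/release/example-movie-2020-1080p-x264-rarbg">Example</a></td></tr>
  <tr><td><a href="/l/s">Series</a></td></tr>
</table>
"""

# A deepPattern-style regex applied to the page body: it returns only
# the /release links, which the dedicated plugin then handles.
deep_pattern = re.compile(r"(/release/[a-z0-9\-]+)")
print(deep_pattern.findall(html))
```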
#33
In the page source HTML you can find /l/m, so the first rule will fetch that /l/m page, and then the second rule will take the /l/m page and find the /release/ links.
#34
Ok, so I will just go with the first page in /l/m only.
This is what is currently working properly, but only from the main rmz.cr page:
Code:
[ {
  "enabled" : true,
  "updateCookies" : true,
  "logging" : false,
  "maxDecryptDepth" : 1,
  "id" : 1422443765154,
  "name" : "1080p rarbg and vxt",
  "pattern" : "https?://rmz\\.cr/",
  "rule" : "DEEPDECRYPT",
  "packageNamePattern" : null,
  "passwordPattern" : null,
  "formPattern" : null,
  "deepPattern" : "(/release/[a-z0-9\\-]+1080p[a-z0-9\\-]+rarbg)|(/release/[a-z0-9\\-]+1080p[a-z0-9\\-]+vxt)",
  "rewriteReplaceWith" : null
} ]
These patterns will not work; each just grabs everything instead of only the keyword links that the regex above matches:
Code:
"pattern" : "https?://rmz\\.cr/l/m/[0-5]",
"pattern" : "https?://rmz\\.cr/l/m/",
#35
I've limited it to the "release" URLs only for the next update, which means this would be possible and would cover adding URLs of the desired pages (without a number = first page):
Code:
[ {
  "enabled" : true,
  "updateCookies" : true,
  "logging" : false,
  "maxDecryptDepth" : 1,
  "id" : 1422443765154,
  "name" : "1080p rarbg and vxt",
  "pattern" : "https?://rmz\\.cr/l/b/[0-9]*?",
  "rule" : "DEEPDECRYPT",
  "packageNamePattern" : null,
  "passwordPattern" : null,
  "formPattern" : null,
  "deepPattern" : "(/release/[a-z0-9\\-]+1080p[a-z0-9\\-]+rarbg)|(/release/[a-z0-9\\-]+1080p[a-z0-9\\-]+vxt)",
  "rewriteReplaceWith" : null
} ]
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download
#36
So, as of this moment, it is not possible to crawl and grab from /l/m because of the plugin, correct?
The next update would make it possible?
Last edited by RPNet-user; 25.02.2020 at 03:53.
#37
Yes.
Plugins always have priority, which is good and makes sense ... usually. This is just an edge-case, and soon it won't be one anymore. We have a ticket about creating link crawler rules with a higher priority than plugins, but again, in this case your rules would then override our plugin completely and you'd have to add another rule to manually handle "/release" URLs ... This is the ticket:
Plugin updates have just been released - you can now test the above mentioned rule! -psp-
#38
Thanks psp,
I was confused earlier as to which one overrides the other: rules > plugin or plugin > rules. When trying to modify the pattern earlier, I suspected that the plugin had the higher priority due to its wide regex, regardless of what I specified. I will update, test, and give feedback.
#39
Yeah, basically you were really, really unlucky.
This is our latest RegEx:
Code:
https?://(?:www\\.)?rmz\\.cr/release/[^/]+
The other alternative would have been to block "l/" in our RegEx:
Code:
https?://(?:www\\.)?rmz\\.cr/(?:release/)?(?!l/)[^/]+
but the current solution is the nicer one^^ -psp-
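A quick way to see how the two RegExes differ (sample URLs are made up; the JSON/Java escaping is reduced to plain regex syntax):

```python
import re

# The release-only pattern and the (?!l/) alternative from the post above.
release_only = re.compile(r"https?://(?:www\.)?rmz\.cr/release/[^/]+")
block_listing = re.compile(r"https?://(?:www\.)?rmz\.cr/(?:release/)?(?!l/)[^/]+")

# Made-up URLs: a release page and a category listing page.
for url in ("https://rmz.cr/release/example-movie-1080p-x264-rarbg",
            "https://rmz.cr/l/m"):
    print(url, bool(release_only.search(url)), bool(block_listing.search(url)))
```

Both variants leave the /l/m listing URLs to the link crawler rule while still matching the /release URLs.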
#40
Thanks psp, it is working perfectly now.
I set the pattern to:
Code:
"pattern" : "https?://rmz\\.cr/l/m/*?",
and set the crawl for rmz.cr/l/m. The event scripter is working perfectly as well; I just tested it with rmz.cr/l/m. I'm still having problems with the logic for the linkgrabber filter 'views' rule: I'm trying to exclude 'srt' files during the grab, so I set up a simple rule with only the file type set to 'is not' 'srt'. So the rule is: Allow links if the file isn't an 'srt' file! However, when I test the rule, it still adds the srt files along with the video files.