#1
Hi, I am a big fan of JDownloader.
I used a packagizer rule to download comic chapters into a folder named after the chapter number, inside the comic's folder. Example:

link: **External links are only visible to Support Staff**
source URL rule: read/*/*/*/*
download folder rule: \\192.168.1.100\multimedia\fumetti\<jd:source:1>\<jd:source:4>

Now they have changed the site, and the new URL looks something like this: **External links are only visible to Support Staff**

I would like to use the webpage title as the package name, but I read that this is impossible. Is there another way to solve this problem? Thank you very much.
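To illustrate how a rule like that maps URL parts to folders, here is only a sketch: the example URL and comic name below are made up, assuming the old URLs matched the read/*/*/*/* pattern and that each `*` wildcard is available as `<jd:source:N>`.

```text
Hypothetical source URL (made up for illustration):
  https://old-site.example/read/one-piece/it/1/10

Source URL rule:       read/*/*/*/*
  <jd:source:1> = one-piece    (comic name)
  <jd:source:2> = it
  <jd:source:3> = 1
  <jd:source:4> = 10           (chapter number)

Download folder rule:  \\192.168.1.100\multimedia\fumetti\<jd:source:1>\<jd:source:4>
Resulting folder:      \\192.168.1.100\multimedia\fumetti\one-piece\10
```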
#2
It's impossible via the package customiser (packagizer), as it does not have access to the HTML.
You need to do this either at the decrypter level (a plugin might already exist?), or you can do it with a linkcrawler rule.
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170
Don't fight the system, use it to your advantage. :]
#3
OK, I tried something like this:
{ "enabled" : true, "cookies" : null, "updateCookies" : true, "logging" : false, "maxDecryptDepth" : 0, "id" : 1532375949826, "name" : null, "pattern" : ".*mangaworld.*", "rule" : "DEEPDECRYPT", "packageNamePattern" : "<title>(.*?)</title>", "passwordPattern" : null, "formPattern" : null, "deepPattern" : null, "rewriteReplaceWith" : null } It worked. Thank you very much. |
#4
most welcome
__________________
raztoki @ jDownloader reporter/developer
http://svn.jdownloader.org/users/170
Don't fight the system, use it to your advantage. :]
#5
@fiesta90: You should also optimize that linkcrawler rule so that it only fetches the stuff/images you actually want, via a proper deepPattern.
See https://support.jdownloader.org/Know...kcrawler-rules
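As a rough sketch of what that could look like, building on the rule from post #3: the deepPattern regex below is an assumption and has to be adapted to the actual HTML of the chapter pages.

```json
{
  "enabled" : true,
  "maxDecryptDepth" : 0,
  "name" : "mangaworld chapters",
  "pattern" : ".*mangaworld.*",
  "rule" : "DEEPDECRYPT",
  "packageNamePattern" : "<title>(.*?)</title>",
  "deepPattern" : "<img[^>]*src=\"(https?://[^\"]+\\.(?:jpe?g|png|webp))\""
}
```

With a deepPattern set, the rule should only pick up the matched image links instead of deep-crawling every URL found on the page.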
__________________
JD-Dev & Server-Admin |
#6
This is an oldish thread, but I wanted to say "thank you".. o)
I thought I knew most of the JD tricks, but a crawler rule is something I obviously never used before, and such a rule was exactly what was required today! o) It worked perfectly for me!

I used part of the URL/page <title> as the package name, and all crawled image URLs from these pages ended up in their respective "sub page" package, instead of in hundreds of unrelated single folders, one for each image.

The crawler rule is a powerful feature, just like the packagizer and the crawljob handling. It would be nice if some smart developer dragged this hidden functionality into a little GUI and a dedicated section in the settings area of JD. Maybe something for JD3?.. o)

Thanks again for posting solutions like this, really helpful!
#7
You're welcome! Maybe you would like to post your rule as an example? You can leave out/censor the pattern if you like, so others can use it as a sort of template.
__________________
JD-Dev & Server-Admin |