#1
So, I'm trying to mass download entire directories from a website that I have direct access to. It's a basic folder structure.
Code:
www.websiteDOTcom/dir1/dir2/userDirectory/username/
When I point jDownloader at "userDirectory", it only finds the files in that specific folder and doesn't follow any of the links to download files recursively. However, if I mass copy all the HTML links from that page and paste them into the Add Links field, it finds all the files and adds them, except that instead of taking the package name from the "username" folder, each package is named after the file. This creates a jumbled mess that I'm too lazy to sort manually. I can't figure out a rule/Packagizer option to set the package name to the folder name. Any thoughts or suggestions?
#2
You've got to teach JDownloader how to follow/parse unsupported sites. Please use the board search for LinkCrawler rules. JDownloader only parses the current page/HTML; there is no traversal on its own. That's what LinkCrawler rules are for.
Via Packagizer rules you can parse the URL and automatically set the package name. Without real, working example links we cannot provide help on this. You can either post them here or send them to support@jdownloader.org
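For illustration only, here is a minimal sketch of a DEEPDECRYPT LinkCrawler rule (added under LinkCrawler Rules in Advanced Settings), built around the placeholder URL from post #1. The host, path and the packageNamePattern regex are assumptions that would have to be adapted to the real site; JSON does not allow comments, so all caveats are here in the text.
Code:
[ {
  "enabled"            : true,
  "name"               : "example: crawl userDirectory recursively",
  "rule"               : "DEEPDECRYPT",
  "pattern"            : "https?://www\\.websiteDOTcom/dir1/dir2/userDirectory/.+",
  "maxDecryptDepth"    : 4,
  "deepPattern"        : null,
  "packageNamePattern" : "<title>Index of .*/([^/]+)/?</title>"
} ]
maxDecryptDepth limits how many link levels the crawler follows, and with deepPattern left at null the whole page is scanned for links. packageNamePattern is matched against the page HTML; the example assumes an Apache-style "Index of /..." directory listing title, so the capture group would name the package after the folder instead of the first file. If the listing looks different, that regex (or a Packagizer rule on the source URL) has to be adjusted accordingly.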
__________________
JD-Dev & Server-Admin