#1
Prevent Crawling
Hello,
I go to a website that has a bunch of links to uploaded.net, rapidgator, and so on. I do a select-all and copy. The link grabber grabs all the downloads on that page, but then it starts crawling. I want to prevent it from ever crawling (like in JDownloader 0.9). Can anyone tell me how to prevent crawling? Thanks.

PS - I tried searching for an answer for the last couple of hours, but couldn't figure it out.
#2
Settings - Filter - set up rules for what to filter.
Use "SourceURL contains" to prevent any crawling activity on matching links.
__________________
JD-Dev & Server-Admin
#3
Thanks Jiaz. I read that earlier, but I still can't figure it out. What value did you put into the field?
#4
For example, if you want to filter out test.com URLs:
Settings - Filter - create new filter
Code:
SourceURL contains test.com
or, for several hosts at once:
Code:
SourceURL contains .*(test\.com|example\.net|google\.com).*
and enable the regex checkbox on the right side.
Last edited by raztoki; 28.11.2017 at 09:57. Reason: regex . should contain escaping
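The regex rule in post #4 can be sanity-checked outside JDownloader. A minimal sketch in Python, assuming the filter matches the pattern against the full SourceURL (the host names are the placeholder examples from the post, not real filter targets):

```python
import re

# Pattern from the filter rule above; test.com / example.net / google.com
# are the placeholder hosts from the post, not real targets.
pattern = re.compile(r".*(test\.com|example\.net|google\.com).*")

def blocked_by_filter(source_url: str) -> bool:
    """Return True if the SourceURL would match the filter rule
    (assumes the rule is matched against the whole URL string)."""
    return pattern.fullmatch(source_url) is not None

print(blocked_by_filter("http://test.com/page/links.html"))   # True
print(blocked_by_filter("http://uploaded.net/file/abc123"))   # False
```

Note the escaped dots (`test\.com`): an unescaped `.` matches any character, which is exactly what raztoki's edit to the post corrected.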