Thread: [Solved] Link Crawler Rules
  #4  
Old 10.07.2017, 18:09
Pbl
Guest
 

Quote:
Originally Posted by tony2long View Post
Id will be created automatically, packageNamePattern to get the packageName from the page, deepPattern to get what you want in the page (I think).

One of the videos I checked is hosted at publish2.me, which says: "This file is available only for premium members."

A rule example that doesn't throw an error (it catches many pictures):

[ {
"enabled": true,
"maxDecryptDepth": 2,
"name": "Emma_Lu1 WebCamRec",
"pattern": "**External links are only visible to Support Staff**",
"rule": "DEEPDECRYPT",
"packageNamePattern": null,
"deepPattern": null
} ]
I pasted this into the Link Crawler Rules setting under Advanced Settings, and no error message was thrown. Yay!

Is there something else I need to do to actually initiate the crawling process?
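For reference, here is a minimal sketch of what a filled-in version of the rule above could look like, plus a quick way to sanity-check the regexes before pasting the rule into Advanced Settings. The pattern, packageNamePattern, and deepPattern values below are hypothetical examples (the real link is redacted above), so adapt them to the actual page:

```python
import json
import re

# Hypothetical rule: the regex values are illustrative examples,
# NOT the redacted originals from the post above.
rule = [{
    "enabled": True,
    "maxDecryptDepth": 2,
    "name": "Emma_Lu1 WebCamRec",
    # Which page URLs the rule applies to (assumed example domain):
    "pattern": r"https?://example\.com/gallery/.+",
    "rule": "DEEPDECRYPT",
    # Package name taken from the page's HTML title (assumption):
    "packageNamePattern": r"<title>(.*?)</title>",
    # Links to extract from the page source (assumed host path):
    "deepPattern": r"https?://publish2\.me/file/\w+",
}]

# The Advanced Settings field expects the rule list as a JSON string:
print(json.dumps(rule, indent=2))

# Sanity-check the regexes against sample inputs before pasting:
assert re.match(rule[0]["pattern"], "https://example.com/gallery/emma")
assert re.search(rule[0]["deepPattern"],
                 '<a href="https://publish2.me/file/abc123">')
```

With deepPattern left as null, JDownloader falls back to scanning the whole page, which may explain why the rule "catches many pictures" rather than just the intended links.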

Last edited by Pbl; 10.07.2017 at 18:12.