Well, both of the things you want to do require advanced functionality.
You will most likely not find any other tool that lets you accomplish this more easily.
Here is my suggestion on what to do:
1. I wrote two LinkCrawler Rules for you that will crawl all relevant items from the relevant source pages.
Here they are:
Code:
[
{
"enabled": true,
"logging": false,
"maxDecryptDepth": 1,
"name": "crawl all single item URLs from 'down3dmodels.com/feed/'",
"pattern": "**External links are only visible to Support Staff**,
"rule": "DEEPDECRYPT",
"packageNamePattern": null,
"passwordPattern": null,
"deepPattern": "<link>(https?://[^<]+)</link>"
},
{
"enabled": true,
"logging": false,
"maxDecryptDepth": 1,
"name": "crawl all URLs inside all URLs from 'down3dmodels.com' except 'down3dmodels.com/feed/'",
"pattern": "**External links are only visible to Support Staff**,
"rule": "DEEPDECRYPT",
"packageNamePattern": "<title>(.*?)</title>",
"passwordPattern": null,
"deepPattern": ">(https?://(?!down3dmodels\\.com/)[^<]+)</"
}
]
Rule in plaintext on pastebin for easier copy & paste:
pastebin.com/raw/rHqH1YCJ
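If you want to check what the two "deepPattern" values actually match before adding the rules, here is a minimal Python sketch. The sample XML/HTML snippets below are made up purely for illustration; the real feed and article pages will of course look different.
Code:
import re

# Rule 1: grab every <link> URL from the /feed/ page.
feed_xml = (
    "<item><title>Model A</title>"
    "<link>https://down3dmodels.com/model-a/</link></item>"
    "<item><title>Model B</title>"
    "<link>https://down3dmodels.com/model-b/</link></item>"
)
print(re.findall(r"<link>(https?://[^<]+)</link>", feed_xml))
# -> ['https://down3dmodels.com/model-a/', 'https://down3dmodels.com/model-b/']

# Rule 2: grab every external URL from an article page;
# the negative lookahead skips internal down3dmodels.com links.
article_html = (
    "<a href=x>https://hosterA.example/file1</a>"
    "<a href=y>https://down3dmodels.com/some-internal-page/</a>"
)
print(re.findall(r">(https?://(?!down3dmodels\.com/)[^<]+)</", article_html))
# -> ['https://hosterA.example/file1']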
I noticed that the "/feed/" URLs won't get crawled; instead, JD downloads the feed itself because it gets recognized as downloadable content.
I can see how that happens, but Jiaz will still need to investigate this in order to find a solution...
Once this is working for you, do this:
2. Ask for help in our EventScripter thread:
Ask for a script that will auto-add a specified URL every X hours, and also ask what you can do to avoid duplicates when doing that.
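Just as a rough idea of what such automation could look like from outside JDownloader (this is NOT the EventScripter way, only a sketch): a small script that periodically drops a .crawljob file into JDownloader's FolderWatch folder, so the LinkCrawler rules above get applied to the re-added URL. The watch folder path and the interval below are assumptions you would have to adjust, and duplicate handling of the crawled links is exactly the part to ask about in the thread, since JDownloader's own duplicate detection is better suited for that than anything external.
Code:
import time
from pathlib import Path

# Sketch only, assuming the FolderWatch extension is enabled and watching this folder.
FOLDERWATCH_DIR = Path("/path/to/JDownloader/folderwatch")  # assumed watch folder
FEED_URL = "https://down3dmodels.com/feed/"                  # URL the rules above act on
INTERVAL_HOURS = 6                                           # "every X hours"

while True:
    # FolderWatch picks up *.crawljob files and hands their URLs to the LinkCrawler.
    job = FOLDERWATCH_DIR / f"feed_{int(time.time())}.crawljob"
    job.write_text(f"text={FEED_URL}\nautoStart=FALSE\n")
    time.sleep(INTERVAL_HOURS * 3600)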