Jiaz will be back =]
You can create decrypter plugins or LinkCrawler rules to scrape and parse your desired content.
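LinkCrawler rules are plain JSON entries in JD's advanced settings. A minimal sketch of a deep-decrypt rule, where the `pattern` and `deepPattern` values are placeholder assumptions, not a real site:

```json
[
  {
    "enabled": true,
    "name": "example deep-decrypt rule",
    "pattern": "https?://example\\.com/page/.*",
    "rule": "DEEPDECRYPT",
    "deepPattern": "<a href=\"(https?://[^\"]+\\.zip)\""
  }
]
```

`pattern` decides which URLs the rule fires on; `deepPattern` is the regex that extracts links out of the fetched page source.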
To run it on an interval, you could trigger it via Event Scripter.
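A rough sketch of such an Event Scripter script, assuming the "Interval" trigger is set in the script's settings (e.g. every 3600000 ms); `callAPI` is Event Scripter's built-in bridge to the JD API, and the URL is a placeholder:

```javascript
// Build the linkgrabber query; kept as a function so the payload
// shape is easy to inspect and reuse.
function buildQuery(url) {
    return { links: url };
}

var query = buildQuery("https://example.com/page-to-recrawl");

// Only talk to JD when actually running inside Event Scripter.
if (typeof callAPI === "function") {
    // Hand the URL to the linkgrabber; the LinkCrawler rules then
    // decide how to parse whatever comes back.
    callAPI("linkgrabberv2", "addLinks", query);
}
```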
What is missing is feedback on previously crawled content: crawlers just return everything, with no awareness of what has been crawled before. You really need that feedback to halt the process early, otherwise you could be crawling unnecessarily, creating load on both the server's end and yours.
I've said for years that JD needs database support, e.g. for previously downloaded content (SVN tickets for download history exist), but also for previously crawled content, based on both generic (standard) databases and specifically named ones (for a user's particular crawling needs).
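Until something like that lands, a poor man's crawl history can be faked inside Event Scripter itself using its `getProperty`/`setProperty` store. A heavily hedged sketch, where the property key, the trigger ("A new crawledlink added"), and the sandbox method names (`getUrl`, `remove`) are assumptions to be checked against the script examples thread:

```javascript
// Track URLs in a plain object acting as a set of already-crawled links.
function seenBefore(history, url) {
    if (history[url]) return true;
    history[url] = true;
    return false;
}

// Only touch JD state when running inside Event Scripter.
if (typeof getProperty === "function") {
    // true = global/persistent property, surviving across script runs.
    var history = getProperty("myCrawlHistory", true) || {};
    var link = crawledLink; // provided by the trigger's context
    if (seenBefore(history, link.getUrl())) {
        link.remove(); // drop duplicates instead of re-processing them
    }
    setProperty("myCrawlHistory", history, true);
}
```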
raztoki
Last edited by raztoki; 06.01.2020 at 10:33.