Thread: [Solved] Automation for the crawljob?
26.04.2022, 16:33
I3ordo
Mega Loader
 
Join Date: Mar 2022
Posts: 65
Automation for the crawljob?

For the last two months I have been "religiously" visiting my favourite site and opening my bookmarked (fixed URL) search results page, which shows the newest entries at the top: **External links are only visible to Support Staff**

After opening each post individually, I Ctrl+A and Ctrl+C the whole page so that the LinkGrabber can find the links and add them. If I instead paste the URL of the results page into JD without opening the actual posts, it finds nothing, even with deep crawl...

It is already practical, but only semi-automated. I would love to know if it can be made fully automated: opening each post on the first two pages (there are hundreds of pages out there), grabbing the text that contains the links to the archives, and auto-starting the download with the appropriate provider (which it already does).

I have seen that there is an extension called "Folder Watch" which scans "list.crawljob" files and auto-adds their entries to the downloads, which is a very good starting point.
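
From what I can tell, a crawljob entry is just a plain text file with key=value lines, roughly like this (the URL and folder below are made-up placeholders, and the exact property names should be double-checked against the Folder Watch documentation):

Code:
->NEW ENTRY<-
text=https://example.com/some-post
packageName=some-post
downloadFolder=C:\Downloads\some-post
autoStart=TRUE
autoConfirm=TRUE
enabled=TRUE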

Maybe I should find someone with Python automation and Selenium experience to create a script that auto-creates those entries for the crawljob?
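
If JD cannot do the daily crawl on its own, I imagine the Python + Selenium script would roughly look like the untested sketch below. The results URL, the CSS selector for the post links and the folderwatch folder are placeholders I made up for the example, and the crawljob property names (including deepAnalyseEnabled) are my assumption, so they should be checked against the Folder Watch docs:

Code:
# Untested sketch: collect post links from the first two result pages
# and write them into a list.crawljob for JD's Folder Watch extension.
from pathlib import Path

from selenium import webdriver
from selenium.webdriver.common.by import By

RESULTS_URL = "https://example.com/search?sort=newest&page={page}"  # placeholder
POST_LINK_SELECTOR = "a.post-title"                                 # placeholder
FOLDERWATCH_DIR = Path(r"C:\jd_folderwatch")   # folder configured in Folder Watch

driver = webdriver.Firefox()
post_urls = []
try:
    for page in (1, 2):  # only the first two result pages
        driver.get(RESULTS_URL.format(page=page))
        for a in driver.find_elements(By.CSS_SELECTOR, POST_LINK_SELECTOR):
            href = a.get_attribute("href")
            if href:
                post_urls.append(href)
finally:
    driver.quit()

# One crawljob entry per post; deep analysis should make JD parse each post page.
entries = []
for url in post_urls:
    entries.append(
        "->NEW ENTRY<-\n"
        f"text={url}\n"
        "deepAnalyseEnabled=TRUE\n"
        "autoConfirm=TRUE\n"
        "autoStart=TRUE\n"
        "enabled=TRUE\n"
    )

(FOLDERWATCH_DIR / "list.crawljob").write_text("\n".join(entries), encoding="utf-8")
print(f"Wrote {len(post_urls)} crawljob entries")

Running it daily could then be handled by the Windows Task Scheduler or cron.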

So the actual question becomes: can JD auto-crawl website pages daily, or should I go for the "Python + Selenium" option?