Thanks again for your response!
As far as I understand, the site basically acts as an aggregator and archiver of NSFW content from Reddit. Besides its image-focused UI, the main features that make it work better than normal Reddit browsing (and likewise better than JD's Reddit crawler) are:
- It pre-filters out the huge number of duplicates from posters who tend to spam/cross-post across numerous subreddits or re-post over time
- It pre-filters out non-NSFW content that users mix in with their posts
- It archives content that might have been deleted and lost (by the original poster, by subreddit moderators, etc.)
I've also just tried the "Link Gopher" Chrome extension you mentioned to copy-paste the list of relevant URLs (all those beginning with
Quote:
**External links are only visible to Support Staff**
), but the end result (images and videos only) captured and downloaded by LinkGrabber is roughly 3x the expected number of downloads, and includes a bunch of small thumbnails, random advertisement images, etc.
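One thing I could try myself is pre-filtering the URL list before pasting it into LinkGrabber. A minimal sketch of what I have in mind is below; note the prefix is just a placeholder (since the real one is hidden above) and the "junk" markers are my guesses at what thumbnail/ad URLs might contain:

```python
# Sketch: pre-filter the URL list exported by Link Gopher before pasting
# it into JDownloader's LinkGrabber, to drop thumbnails and ad images.
# WANTED_PREFIX is a placeholder -- substitute the site's real URL prefix.
WANTED_PREFIX = "https://example-archive.net/post/"      # hypothetical prefix
EXCLUDE_HINTS = ("thumb", "preview", "/ads/", "banner")  # assumed junk markers

def filter_urls(urls):
    """Keep only full-size post URLs; drop thumbnails and ad links."""
    keep = []
    for url in urls:
        # Skip anything not under the post prefix (ads, trackers, site chrome)
        if not url.startswith(WANTED_PREFIX):
            continue
        # Skip URLs that look like thumbnails or previews
        if any(hint in url for hint in EXCLUDE_HINTS):
            continue
        keep.append(url)
    return keep

if __name__ == "__main__":
    sample = [
        "https://example-archive.net/post/abc123",
        "https://example-archive.net/post/abc123/thumb.jpg",
        "https://adserver.example.com/banner.gif",
    ]
    print("\n".join(filter_urls(sample)))
```

The filtered output could then be pasted into LinkGrabber directly, which should cut down the 3x over-capture, assuming the junk URLs really do differ by prefix or by markers like these.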
I'd appreciate any pointers you can give on how to better automate this process.