#1
Avoid downloading files already downloaded before
Hi, I hope you are all right.
I usually download models' photos and videos from Twitter and Instagram. The thing is, to avoid downloading the same photos again I have to keep the old links in the list, because if I deleted them I would lose the reference to what I already downloaded; I group them in the lower part of the list (screenshot: active links). Over time this makes the program heavy and slow. I'm not sure, but I don't think I had this problem in JDownloader 1. Is there something in JD2 to avoid downloading links that were already downloaded?

In the Link Grabber (screenshot: example of keeping the already-downloaded links), the already-downloaded links appear in red and I delete them with a single click, leaving only the new content to download.

Also, with Twitter it happens that even when I have already downloaded the links, JD cannot detect this 100% when capturing new links, so the option "selected links and duplicates" does not remove all of them. I would like to know why; it only happens with Twitter. I suspect Twitter changes the file names of the photos, yet when I try to download them JD tells me the file "already exists", which suits me. This only happens when I forget to move the photos to another disk. With Instagram this does not happen; it works 100%.

Maybe you have a better method than mine. That is all. Thank you.
#2
Many links lead to higher memory usage: the more links you keep in the list, the more memory is required, and memory usage per link depends heavily on the *type* of link.
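If you do need to keep a large list, one common way to give the JVM more headroom is the .vmoptions mechanism mentioned below; a minimal sketch (the exact file name and location depend on your install and OS, so check the forum threads on vmoptions first):

```
# JDownloader2.vmoptions (name/location depend on your install)
-Xmx2g
```

`-Xmx` sets the maximum Java heap size; `2g` is only an illustrative value, not a recommendation.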
Check the About dialog of your JDownloader to see memory usage/availability. You can customize memory usage via the .vmoptions file or customized VM parameters; search for "vmoptions" in the forum. As long as there is still memory left to work with, keeping many links is possible.

Duplicate detection depends on some sort of *unique* ID that can be used for duplicate checks. Some plugins set this ID; others simply fall back to using the download URL. If the URL changes (for example timestamps, tokens, sessions...), duplicate detection will fail.

Do you add the Twitter links yourself, or do you use JDownloader's plugin? If you let JDownloader process/find the links, it *may* be possible. Please provide some example links.
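The failure mode described above can be illustrated in plain JavaScript (this is not JDownloader code; the parameter names `token`, `session`, `ts` are hypothetical examples of volatile query parameters):

```javascript
// Illustration: why URL-based duplicate detection fails when URLs
// carry volatile query parameters, and how normalizing fixes it.
function normalizeUrl(raw) {
  const u = new URL(raw);
  // Hypothetical list of volatile parameters to strip before comparing.
  for (const p of ["token", "session", "ts"]) u.searchParams.delete(p);
  const qs = u.searchParams.toString();
  return u.origin + u.pathname + (qs ? "?" + qs : "");
}

// Same file, but the host attached a fresh token each time.
const a = "https://example.com/media/photo.jpg?token=abc123";
const b = "https://example.com/media/photo.jpg?token=def456";

console.log(a === b);                             // false: naive check sees two different links
console.log(normalizeUrl(a) === normalizeUrl(b)); // true: normalized check recognizes the duplicate
```

A plugin that sets a stable unique ID (e.g. the tweet/media ID) achieves the same effect without having to guess which parameters are volatile.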
__________________
JD-Dev & Server-Admin
#3
With the current pornhub "stay home" free premium I downloaded a lot of content (10k links) from the models' pages.
Thanks to the EventScripter I have all the downloaded links in a history file. For the future I have a question: is there a way to skip content that was already downloaded? I was thinking of something like checking newly added links against that history file.
Is this possible? Thanks.
#4
Using just JD, this is impossible, as there is no real relation between the "pornhub main link" and the different qualities.

Our pornhub crawler is a crawler like any other: it crawls URLs to find the content behind them, because that content can theoretically change. For this to work you would either have to make source code modifications, or extend that EventScripter script and e.g. compare the stored "source" URLs against what the user adds, then abort the crawl process if that content has been added before.

Please keep in mind that JD is not a "site rip" tool and we will not actively help you build one. -psp-
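The comparison step suggested above can be sketched in plain JavaScript. This is only the core logic, not actual EventScripter API: the `parseHistory`/`shouldSkip` names are hypothetical, and in a real EventScripter script you would read the history file and abort the crawl via the script API's own helpers (check the EventScripter documentation/forum for the exact calls).

```javascript
// Sketch: decide whether a newly added source URL was downloaded before,
// based on a history file with one previously downloaded URL per line.

// Parse the history file contents into a Set for O(1) lookups.
function parseHistory(text) {
  return new Set(
    text.split(/\r?\n/).map(s => s.trim()).filter(s => s.length > 0)
  );
}

// True if this source URL is already in the history.
function shouldSkip(history, sourceUrl) {
  return history.has(sourceUrl.trim());
}

// Example history file contents (illustrative URLs).
const history = parseHistory(
  "https://www.pornhub.com/view_video.php?viewkey=aaa\n" +
  "https://www.pornhub.com/view_video.php?viewkey=bbb\n"
);

console.log(shouldSkip(history, "https://www.pornhub.com/view_video.php?viewkey=aaa")); // true
console.log(shouldSkip(history, "https://www.pornhub.com/view_video.php?viewkey=ccc")); // false
```

Note this only works if the stored URLs and the newly added ones are byte-identical; if the site appends volatile parameters, they would have to be normalized first, as discussed in post #2.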
__________________
JD Supporter, Plugin Dev. & Community Manager
Erste Schritte & Tutorials || JDownloader 2 Setup Download