#1
Trying to add certain urls to JD
I am trying to add a large number of URLs (about 3 million) to JD, but LinkGrabber will not add them: they don't point directly to a file, there is no plugin for the site, and there does not seem to be any override to make LinkGrabber add any/all URLs.
The URLs I want to add look like this: **External links are only visible to Support Staff** and redirect to something like this: **External links are only visible to Support Staff**. I can deep analyse the links and that works, but even after all these years JD still slows down during analysis and eventually gives up when you feed it a lot of URLs. It did this with Flickr accounts that had hundreds of thousands of photos. I would like to be able to add the "thing:*/zip" links to the download list as they are and have JD fetch the zip file at download time, rather than have LinkGrabber resolve the zip file location beforehand. Of course this is specific to one site, but I think JD should have an option to add/download from any kind of URL, so people have more ways to use JD without a plugin for their specific use.
#2
Without a dedicated plugin to scan the page source for filename information, and with no filename info in the URL itself, you will have to resort to one of the following:

1. If you want fast adding of links, disable link checking via Advanced Settings > LinkCollector.dolinkcheck. Note: this requires a client restart, and you won't have filename/filesize/availability information.

2. Best solution for you as I see it, if it's just a small amount of links: add a "directhttp://" prefix to each URL ("directhttp://" + "https://domain/path") to treat them as directly downloadable, and the directhttp plugin will pick them up. Combined with link checking disabled, all links will add instantly. You won't have filename, filesize, or availability information.

3. Else create a linkcrawler rule that listens to your URL pattern. Make the rule for directhttp and not deep analyse, as you don't want to crawl. There is plenty of info on the forum about crawler rules. With link checking disabled it will be as fast as the solution above, without the need to copy URLs to notepad and add the URL prefix.

4. Else create dedicated plugin(s) to perform the specific task.

I would say though, there is a clear design flaw when users deep analyse directly downloadable content: the task is done within crawler classes and then has to pass over to the directhttp plugin to get filename/filesize checks, even though HTTP requests were already done in the crawler == twice as slow.
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :] Last edited by raztoki; 06.03.2018 at 02:41.
#3
Quote:
JDownloader can easily handle millions of links, of course with enough memory provided.
__________________
JD-Dev & Server-Admin
#4
Quote:
__________________
JD-Dev & Server-Admin
#5
I have tried everything raztoki suggested and I can't get JD to add these links.
Disabling link check did nothing. Adding a filter rule did nothing. And adding the directhttp prefix did nothing. The only way I can get JD to accept these URLs is to do a deep analysis.
#6
Just tested, and the directhttp way works perfectly fine.
Filter rules are for filtering! raztoki was talking about linkcrawler rules. Just append "directhttp://" (no space) in front of the URL and JDownloader will *eat* the URL as a direct download. Or create linkcrawler rules and teach JDownloader which URLs (via regex) to process as direct downloads.
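For reference, linkcrawler rules live as a JSON list in Advanced Settings > LinkCrawler.linkcrawlerrules. A minimal sketch of a DIRECTHTTP rule, assuming a hypothetical example.com URL pattern (field values are illustrative; adjust the regex to the actual site):

```json
[
  {
    "enabled": true,
    "name": "treat zip endpoints as direct downloads",
    "pattern": "https?://example\\.com/thing/[^/]+/zip",
    "rule": "DIRECTHTTP",
    "maxDecryptDepth": 0
  }
]
```

With such a rule in place, matching URLs are handed straight to the directhttp plugin without deep analysis, which is what makes bulk adding fast.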
__________________
JD-Dev & Server-Admin
#7
Thanks Jiaz, I got it to work.
#8
You're welcome!
__________________
JD-Dev & Server-Admin