#1
Hi everyone!
I have been using JDownloader for years without any problems and am always surprised by its power! Unfortunately, since the last update, when I try to analyse pages and find the links they contain, it no longer finds the links hosted on those pages, although it worked like a charm before. What can I do? Thanks a lot for your help.
#2
Please provide example links. Nothing has changed in the parser, so it must be something with the website you are trying to parse.
__________________
JD-Dev & Server-Admin
#3
Yes, thanks, but if I select and copy the links on those pages, JDownloader detects them immediately.
For example: **External links are only visible to Support Staff**. It should find **External links are only visible to Support Staff** and **External links are only visible to Support Staff**, but it only finds images from the pages and audio files from Bandcamp. I don't understand what has changed. The last time I used it was mid-July, and I don't see any change on the website.
#4
**External links are only visible to Support Staff**
#5
Have you checked the page source of that link to find those 2 links?
I can't find them.
__________________
FAQ: How to upload a Log
#6
Yes, when I view the page source (Ctrl+U in Firefox), I can find these links (I located them with Ctrl+F).
On line 496 there are the 3 links. It's a huge line! First Bandcamp (which JD finds), then zippyshare and psy-music.ru (which JD does not find). Strange! Thank you. Last edited by flowlapache; 11.10.2016 at 17:33.
#7
You need to add a LinkCrawler Rule to tell JDownloader to auto-parse this type of URL.
JDownloader does not support this URL and therefore does not auto-handle/parse it. With a customized LinkCrawler Rule (use the board search) you can tell JDownloader how to handle it. You can also copy/paste the URL into JDownloader to force a deep decrypt of the URL.
__________________
JD-Dev & Server-Admin
#8
OK, I will try this "LinkCrawler Rule". But it's strange, because it has worked for at least 4 years! Maybe they changed the type of URL this summer. Thank you for your quick answers and help!
#9
JDownloader does not support this type of URL and never did.
Copy/paste will work fine, and a LinkCrawler Rule too.
__________________
JD-Dev & Server-Admin
#10
OK, so the website has changed without anything changing on my side.
When I try a deep decrypt, it doesn't find what I'm looking for. I'm reading up on LinkCrawler Rules; JD is really surprising!
#11
Simply copy/paste the URL into JDownloader; it will deep decrypt it and show all supported links.
__________________
JD-Dev & Server-Admin
#12
That's what I did, but it doesn't find the two links I want. And I'm not sure what I should change in the LinkCrawler Rule. Can I create a new rule specific to this website? The links always seem to be on line 96, with the same type of code, on every page of this website (psy-music.ru). If I copy that line, JD immediately finds what I'm looking for!
I can't find in other threads how to do it. I often see "mail to support@..." as the way to get help with making a rule for a given website... Edit: I can't figure out how to tell JD to auto-parse pages from this website. I looked at rules for other specific sites, but I don't understand what code I should add to the LinkCrawler Rules... Last edited by flowlapache; 11.10.2016 at 21:14.
#13
I forgot something important (thanks to mgpai from JD chat)!
The links are not visible unless I log in. That's why you couldn't find them when you looked. I'm still trying to write a LinkCrawler Rule. The login requirement has been there for years; when I was logged in and JD had the website's cookie, the links were visible to it. But not anymore. I tried this, but it's not a valid rule:
Code:
[ { "enabled" : true, "name" : "psy-music", "pattern" : " psy-music\\.ru/news/.+/[0-9-]+ ", "rule" : "DEEPDECRYPT", } ]
I will sleep; maybe tomorrow the regex will be clearer... Last edited by flowlapache; 11.10.2016 at 23:08.
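[Editor's note] One concrete reason JDownloader rejects this rule is that the text is not valid JSON: JSON forbids a trailing comma before `}`. A quick local sanity check, as a purely illustrative Python sketch (the rule text is copied from the post above):

```python
import json

# The rule text from the post, kept verbatim: note the trailing comma
# after "DEEPDECRYPT", which strict JSON does not allow.
broken_rule = r'''
[ { "enabled" : true,
    "name" : "psy-music",
    "pattern" : " psy-music\\.ru/news/.+/[0-9-]+ ",
    "rule" : "DEEPDECRYPT", } ]
'''

# The same rule with only the trailing comma removed.
comma_fixed = r'''
[ { "enabled" : true,
    "name" : "psy-music",
    "pattern" : " psy-music\\.ru/news/.+/[0-9-]+ ",
    "rule" : "DEEPDECRYPT" } ]
'''

def is_valid_json(text: str) -> bool:
    """Return True if the rule text parses as strict JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(broken_rule))  # False: trailing comma rejected
print(is_valid_json(comma_fixed))  # True: parses as JSON
```

Valid JSON is only the first hurdle, though; the pattern value itself (the stray spaces and the missing protocol prefix) is dealt with in the following posts.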
#14
The pattern requires a protocol prefix:
Code:
https?://
raztoki
__________________
raztoki @ jDownloader reporter/developer http://svn.jdownloader.org/users/170 Don't fight the system, use it to your advantage. :] Last edited by raztoki; 12.10.2016 at 00:42.
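[Editor's note] To illustrate the point: the replies above indicate the LinkCrawler pattern is a regular expression applied to the full URL, which begins with the protocol, so a pattern without `https?://` can never match from the start of the string. A small Python sketch; the sample URL below is invented for illustration and the real psy-music.ru URL layout may differ:

```python
import re

# Invented sample URL in the psy-music.ru/news/... shape discussed above.
url = "http://psy-music.ru/news/some_album/2016-10-11-1234"

no_prefix = r"psy-music\.ru/news/.+/[0-9-]+"             # as first posted
with_prefix = r"https?://psy-music\.ru/news/.+/[0-9-]+"  # with the protocol

# Matching against the whole URL: without the protocol prefix the
# pattern fails at the leading "http://".
print(bool(re.fullmatch(no_prefix, url)))    # False
print(bool(re.fullmatch(with_prefix, url)))  # True
```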
#15
See the fixed version.
__________________
JD-Dev & Server-Admin
#16
OK, thanks guys, I'm learning...
I tried this fixed version (with and without the space after the "+" in the pattern), but JD doesn't accept it. One more thing: I already have a LinkCrawler Rule (I don't know where or when it came from) for the zip format:
Code:
[ { "enabled" : true, "maxDecryptDepth" : 0, "id" : 1476219085998, "name" : "psy_music", "pattern" : " psy-music\\.ru/news/.+/[0-9-]+ ", "rule" : "DEEPDECRYPT", "packageNamePattern" : null, "formPattern" : null, "deepPattern" : null, "rewriteReplaceWith" : null } ]
I tried the 3 solutions, but because the rule is not valid, the value doesn't change (only the zip rule stays...)
#17
Your pattern is still invalid!
It must begin with https?://; a URL does not start with a space.
__________________
JD-Dev & Server-Admin
#18
Sorry, it wasn't that one; I copied the (first) bad one.
Here is the existing rule for the zip format:
Code:
[ { "enabled" : true, "maxDecryptDepth" : 2, "id" : 1433746432948, "name" : "Learned file extension:zip", "pattern" : "(?i).*\\.zip($|\\?.*$)", "rule" : "DIRECTHTTP", "packageNamePattern" : null, "formPattern" : null, "deepPattern" : null, "rewriteReplaceWith" : null } ]
And the new one:
Code:
[ { "enabled" : true, "name" : "psy-music", "pattern" : "https?://psy-music\\.ru/news/.+/[0-9-]+", "rule" : "DEEPDECRYPT", } ]
Should I replace the ".zip" rule, or add to it by writing something like:
Code:
[ { "enabled" : true, "name" : "psy-music", "pattern" : "https?://psy-music\\.ru/news/.+/[0-9-]+", "rule" : "DEEPDECRYPT", } ] [ { "enabled" : true, "maxDecryptDepth" : 2, "id" : 1433746432948, "name" : "Learned file extension:zip", "pattern" : "(?i).*\\.zip($|\\?.*$)", "rule" : "DIRECTHTTP", "packageNamePattern" : null, "formPattern" : null, "deepPattern" : null, "rewriteReplaceWith" : null } ]
#19
[{rule1},{rule2},{rule3}]
and NOT [{rule1}][{rule2}][{rule3}]
__________________
JD-Dev & Server-Admin
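[Editor's note] Put together, the combined rule file would be one JSON array containing both rule objects, as described above. A Python sketch that assembles and prints it (field values taken from the posts in this thread; the psy-music pattern uses the corrected protocol-prefixed form):

```python
import json

# Existing "zip" rule, as shown earlier in the thread.
zip_rule = {
    "enabled": True,
    "maxDecryptDepth": 2,
    "id": 1433746432948,
    "name": "Learned file extension:zip",
    "pattern": r"(?i).*\.zip($|\?.*$)",
    "rule": "DIRECTHTTP",
}

# New psy-music rule with the corrected, protocol-prefixed pattern.
psy_rule = {
    "enabled": True,
    "name": "psy-music",
    "pattern": r"https?://psy-music\.ru/news/.+/[0-9-]+",
    "rule": "DEEPDECRYPT",
}

# One top-level array holding both rules: [{rule1},{rule2}],
# not [{rule1}][{rule2}].
merged = json.dumps([zip_rule, psy_rule], indent=2)
print(merged)
```

The printed array is what would go into the LinkCrawler Rules setting as a single value.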
#20
OK, thanks! I see the importance of "," and the roles of "[" and "{". I will try.