#21
It is JSON format. You can google for a JSON beautifier; if it *eats* your input, it is valid JSON.
__________________
JD-Dev & Server-Admin
#22
For the max decrypt depth, does 0 mean zero or everything? Because now (with 0), when one link is parsed, it finds a lot of different Bandcamp links endlessly (maybe it parses the whole site). I will try with depth=1. One last thing: do you know if it is possible in a browser (Firefox) to "copy the source code of the link" with a right click? It would save me from opening each page and pressing Ctrl+A then Ctrl+C to obtain the good links in JD.

Last edited by flowlapache; 13.10.2016 at 14:50.
#23
Depth=0 -> only parse the input URL
Depth=1 -> also parse the first level of found links

Ctrl+A = Select All, Ctrl+C = Copy: this copies everything to the clipboard, and the browser will also put the HTML code into the clipboard. It works perfectly fine for me, and I don't need any login. Browser and JDownloader both show the links just fine without any login?!
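The depth setting discussed here corresponds to the `maxDecryptDepth` field of a JDownloader LinkCrawler rule. A minimal sketch of a DEEPDECRYPT rule with depth 1 might look like this (the `name` and `pattern` values are hypothetical placeholders, not from this thread):

```json
[
  {
    "enabled": true,
    "name": "example deepdecrypt rule",
    "pattern": "http://psy-music\\.ru/news/.+",
    "rule": "DEEPDECRYPT",
    "maxDecryptDepth": 1
  }
]
```

With `maxDecryptDepth` set to 0 the crawler keeps following matching links it finds, which is why a single page can fan out across the whole site.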
__________________
JD-Dev & Server-Admin
#24
WHAAAT!!! You are experiencing something that has never worked for me!! You really obtain the http link from psy-music.ru and Zippyshare or Rusfolder and the others?
It's crazy: I only get images, Bandcamp audio files and videos, but no archive!! I'm lost with this news!
#25
Oops, sorry (I can't edit before it's approved...), I misunderstood...
In fact yes, it works with Ctrl+A when we are on the page. But I would like to make it like before: just put the link (of the page) into JD without needing to open the page... like parsing and analysing the page to find these archives!
#26
I don't understand why "deepdecrypt" doesn't find it.
Is there a rule telling JD to parse the source code of a page (Ctrl+A, paste by itself)? Because that's exactly what I want. I have searched for how to "copy the source code of a page without opening this page in a browser", but so far I have found nothing. It's frustrating: it's only 2 to 4 actions to do what I want (a link > the source code > JD finds the archives!), but I can't do it manually for 100 pages! I will keep looking for this "source code copy" feature in Firefox or others... In any case, thank you for your support; I'm really pleasantly surprised by the support via chat and board. It's far better than a lot of applications or webshops which aren't even free!!!
#27
Do you have to log in to see those links?
__________________
FAQ: How to upload a Log
#28
Thanks for joining in! Yes, I mentioned it a bit earlier...
Normally I log in, then I just put the links of all the pages I want into JD's "parser" and all the archives are found! It works with Select All + Ctrl+C after having opened the page (even without being logged in, as Jiaz tried). But now I can't obtain archives without opening each page, even with the deepdecrypt LinkCrawler rule. And I didn't find how to do a "copy the source code of this link" in Firefox, which could do the job...
#29
I think previously the links were shown to everybody without login; not anymore.
I am not sure, but you can try to enter your credentials in Basic Authentication and try to parse it again.
__________________
FAQ: How to upload a Log
#30
Yes, I tried that first, but it changes nothing.
The website has changed, but I don't know in what way... The links were already hidden from the public before. As now, it has always been necessary to log in to see the links, mostly for the http links from psy-music.ru...
#31
During my tests the links were publicly available without login. I guess this has changed now, as I can no longer see them. When you are logged in, are the links plainly visible, or only via right click, or do you need to do a left click?
__________________
JD-Dev & Server-Admin
#32
I guess that's the trick.
We need to left click to open it (or launch it, for the http link). It's not plainly visible; we see the address only when hovering the mouse over the link. It is directly visible in the source code, and JD finds it if we select it and copy.
#33
In that case I would stick to *select and copy*, as it seems to work.
When you just copy the URL, the links are no longer available, because the site requires a login and your JDownloader is not logged in. Is that solution acceptable for you?
__________________
JD-Dev & Server-Admin
#34
Hi, thank you.
If I find nothing else, yes, I will use "open > select all > copy". But I used to visit this website every one or two months and download around 100 links... which is really hard and time-consuming if I must open each page and select+copy, instead of only copying each address like before... In the meantime I'm trying the "Menu Wizard" extension in Firefox to add a "copy the source code of the link" entry to my right click. If it works, I think I could obtain the links without opening every page. For now the "source results" are different from the source of the opened page. But they probably made it this way on purpose, to force you to open each page to obtain the links... maybe they are no longer JD friendly!!^^
#35
We are currently working on updated browser extensions; then it should be possible to add links from pages that require logins as well.
__________________
JD-Dev & Server-Admin
#36
@flowlapache,
Add this script to the Event Scripter and change the username and password. You should then be able to grab the links which were otherwise not detected by JD. It will only detect single URLs; create a LinkFilter rule to block unwanted links.

@jiaz, thanks for the browser API and code.

Code:
// Crawler: psy-music.ru
// Trigger required: "New Crawler Job"
var sourceURL = job.text;
var matchURL = (/^http:\/\/psy-music.ru\/news\/.+\/[\d-]+$/).test(sourceURL);
try {
    if (matchURL) {
        var mainURL = "**External links are only visible to Support Staff**"; // URL redacted by the board
        var username = "myUser"; // Change username
        var password = "myPasss"; // Change password
        var br = getBrowser();
        var postURL = mainURL + "index/sub/";
        var postData = "user=" + username + "&password=" + password + "&rem=1&a=2&ajax=1&rnd=691";
        var mainPage = br.getPage(mainURL); // open the main page first (cookies)
        mainPage = br.postPage(postURL, postData); // log in
        var links = br.getPage(sourceURL); // fetch the page source while logged in
        callAPI("linkgrabberv2", "addLinks", { "links": links });
        job.setText("");
    }
} catch (e) {
    alert("Error occurred while adding \"" + sourceURL + "\". Please try again");
}
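As a side note, the trigger regex in the script only fires for news-post URLs (a path under /news/ ending in digits and dashes). A standalone sketch of how that pattern behaves, with made-up example URLs:

```javascript
// The trigger pattern from the script above; example URLs below are hypothetical.
var pattern = /^http:\/\/psy-music.ru\/news\/.+\/[\d-]+$/;

var newsPost = "http://psy-music.ru/news/some_album/2016-10-13-1234";
var frontPage = "http://psy-music.ru/";

console.log(pattern.test(newsPost));  // true: /news/<title>/<digits-and-dashes>
console.log(pattern.test(frontPage)); // false: not a news post, script does nothing
```

Any URL that does not match simply falls through, so normal links are still handled by JDownloader's regular crawler.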
#37
Ooh yeah, guys!!
Thanks so much! You made it! It works very well and fast! Thank you for this nice support, very efficient!
#38
Thanks for the feedback
__________________
JD-Dev & Server-Admin
#39
I tried the same thing for another website, but it doesn't work and I don't understand why...
#40
@flowlapache: can you provide example links?
__________________
JD-Dev & Server-Admin