JDownloader Community - Appwork GmbH
 

  #1  
Old 07.12.2023, 21:19
Johny554 is offline
I will play nice!
 
Join Date: Jul 2020
Posts: 5
Folder Watch with link crawler rule on txt files

Hey
I've been looking for an answer for two hours, and I think this should work.

I tried things from here:
https://board.jdownloader.org/showthread.php?t=77614
https://board.jdownloader.org/showthread.php?t=70907
https://support.jdownloader.org/Know...kcrawler-rules

I have links in a .txt file in the folderwatch folder, Folder Watch is enabled, the link crawler rule is set, and nothing happens...

Spoiler:
[
  {
    "cookies"            : null,
    "deepPattern"        : null,
    "formPattern"        : null,
    "id"                 : 1701975189857,
    "maxDecryptDepth"    : 1,
    "name"               : null,
    "packageNamePattern" : null,
    "passwordPattern"    : null,
    "pattern"            : "file:/.*?\\.txt$",
    "rewriteReplaceWith" : null,
    "rule"               : "DEEPDECRYPT",
    "enabled"            : true,
    "logging"            : false,
    "updateCookies"      : false
  }
]


Can someone point out what's wrong, or am I missing something?

Last edited by pspzockerscene; 08.12.2023 at 15:14. Reason: Removed image containing sensitive data
  #2  
Old 08.12.2023, 15:15
pspzockerscene is offline
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 70,743

Hi,
you are mixing two things here.
LinkCrawler Rules and FolderWatch CrawlJobs are two completely different things.

So to start from the beginning:
Folder watch support articles are here:
https://support.jdownloader.org/Know...3/folder-watch
--> Please read them.

In the examples, e.g. "Format 1" in said support article, you can see that you will need to create a file called <somename>.crawljob which contains the parameters in the explained syntax.
The field "text" can be used to add the links which this rule is supposed to add to JDownloader, e.g. your rapidgator.net links.
If I were you, I'd first take the example rule as it is and replace the link behind "text=" with a single rapidgator.net link for testing.
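
For reference, a minimal sketch of such a .crawljob file in the plain key=value format. Only the "text" field is confirmed in this thread; the file id is a placeholder, and the remaining field names are my recollection of the support article, so treat them as assumptions and check them against the docs:

text=https://rapidgator.net/file/<your_file_id>
packageName=FolderWatchTest
enabled=TRUE
autoConfirm=TRUE
autoStart=FALSE

Save this as e.g. test.crawljob inside your Folder Watch directory and JDownloader should pick it up on the next scan.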

P.S.: I've removed your first screenshot because it contains sensitive data.
__________________
JD Supporter, Plugin Dev. & Community Manager

Erste Schritte & Tutorials || JDownloader 2 Setup Download
Spoiler:

A user's JD crashes and the first thing to ask is:
Quote:
Originally Posted by Jiaz View Post
Do you have Nero installed?
  #3  
Old 08.12.2023, 16:31
Johny554 is offline
I will play nice!
 
Join Date: Jul 2020
Posts: 5

Hey
Thank you for the input.
I've created a crawljob file with the example URL, and it was added to LinkGrabber.
The problems I have right now are:

1. The .crawljob file was moved to the "added" folder, and it will not trigger again. How do I make a crawljob rule permanent?
2. How do I add a pattern (regex?) to grab all links from a certain domain?

Hmm.
But the option you mentioned with the crawljob doesn't say how to add URLs from a .txt file; it just added the URL from the crawljob itself. I would like to add them from the txt file.

Last edited by raztoki; 09.12.2023 at 14:20.
  #4  
Old 19.12.2023, 22:08
pspzockerscene is offline
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 70,743

Hi again,

1. That is normal.
If you want to run a crawljob again, you will need to move it out of that "added" folder.
Typically, crawljobs are jobs which should be run one time, so if you need to execute one more often, you might need to run an external script which creates that file again in the folder watch directory; see the sketch below.
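
As an illustration, a minimal sketch of such an external script in Python. The template and Folder Watch paths are placeholders I made up; point them at your own locations:

import shutil
from pathlib import Path

# Assumed locations - adjust both paths to your own setup.
TEMPLATE = Path.home() / "crawljob-templates" / "rapidgator.crawljob"
FOLDERWATCH = Path.home() / "jd2" / "folderwatch"

# Copy the template back into the Folder Watch directory;
# JDownloader will process it and move it into "added" again.
shutil.copy(TEMPLATE, FOLDERWATCH / TEMPLATE.name)

Run it manually, via cron, or via the task scheduler whenever you want the job to fire again.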

2. Please explain in more detail.
If you want JD to deep-scan for specific items behind specific URLs, you may want to create a Link Crawler Rule of type DEEPDECRYPT; see the sketch below.
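
To illustrate, a sketch of such a rule, modeled on the JSON you posted in #1. The example.com pattern and the deepPattern value are placeholders, not a tested rule:

[
  {
    "enabled"         : true,
    "name"            : "deep-scan example.com pages",
    "rule"            : "DEEPDECRYPT",
    "pattern"         : "https?://(www\\.)?example\\.com/.+",
    "deepPattern"     : "https?://rapidgator\\.net/file/[^\"'<>]+",
    "maxDecryptDepth" : 1
  }
]

The "pattern" decides which URLs the rule fires on, and "deepPattern" decides which of the links found on the scanned page JD keeps.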

Again:
Folder Watch = Add links via a file, typically one time

LinkCrawler Rule = Tell JD how to process URLs which match a specific pattern.

Quote:
Originally Posted by Johny554 View Post
But the option you mentioned with the crawljob doesn't say how to add URLs from a .txt file; it just added the URL from the crawljob itself. I would like to add them from the txt file.
It doesn't say that because that is not what it does.
It adds the URLs which are specified in the .crawljob file and not in another / external .txt file.

Please re-read the crawljob docs:
Field: text
Explanation of that field: Text containing URL(s) to add
--> So put the URL(s) you want to add into the mentioned "text" field inside(!) your crawljob.
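
For example (a sketch; the file ids are placeholders, and the docs' wording "URL(s)" suggests, though I have not verified it here, that several space-separated URLs in one text line are accepted):

text=https://rapidgator.net/file/<id_1> https://rapidgator.net/file/<id_2>
packageName=MyRapidgatorLinks
enabled=TRUE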

I have the feeling that we're still talking past each other.