JDownloader Community - Appwork GmbH
  #1901  
Old 30.09.2021, 15:59
mgpai
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by mirino View Post
However, I need the .txt file with only the first two lines, i.e. without the video description.
You can query filetype instead of hostname. In the script, replace:
Code:
link.host == "joinpeertube.org"
with:
Code:
link.linkInfo.group == "VideoExtensions"
  #1902  
Old 30.09.2021, 16:23
mirino
JD VIP
 
Join Date: Mar 2009
Posts: 365
Default

Quote:
You're right!
Jiaz will look into it.
In the raw data the text is definitely correct.
Thanks

Last edited by mirino; 30.09.2021 at 17:16.
  #1903  
Old 30.09.2021, 17:21
mirino
JD VIP
 
Join Date: Mar 2009
Posts: 365
Default

Quote:
Originally Posted by mgpai View Post
You can query filetype …
Yes it works. Thank you very much

Quote:
Originally Posted by pspzockerscene View Post
What do you mean by that?
For the link (*), see post https://board.jdownloader.org/showpo...postcount=1898, I need a different query. I tried link.linkInfo.group == "VideoExtensions" and that works: the .txt file is written for every video, even when no video description exists, and for videos other than just from "joinpeertube.org". Here is my current version:

Code:
// Write downloadLink, fileName and the videoDescriptionInfo to file.
// Trigger: A download stopped

if (link.finished && link.linkInfo.group == "VideoExtensions") { // query filetype instead of hostname (link.host == "joinpeertube.org")
	var ext = getPath (link.downloadPath).extension;
	var file = link.downloadPath.replace (ext, "txt");
	var data = [link.contentURL, link.name + "\n", link.comment].join("\n") + "\n";

	try {
		writeFile (file, data, true); // 'true' = append, 
		// 'false' = do not append (this will not overwrite the file, 
		// but throw an error if a file with same name exists on disk).
	} catch (e) {};
}
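One caveat in the script above: String.replace(ext, "txt") swaps the first occurrence of the extension text anywhere in the path, which can accidentally hit a folder name. A safer variant (plain JavaScript, runnable outside JD; the helper name is made up for illustration) anchors the replacement to the end of the string:

```javascript
// Replace only the trailing extension, not the first occurrence of that
// substring anywhere in the path (helper name is illustrative).
function withTxtExtension(path) {
    return path.replace(/\.[^.\/\\]+$/, ".txt");
}

// withTxtExtension("/downloads/mp4/video.mp4") -> "/downloads/mp4/video.txt"
// A plain replace("mp4", "txt") would have hit the folder name first.
```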

Last edited by mirino; 30.09.2021 at 17:34.
  #1904  
Old 30.09.2021, 17:37
Jiaz
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,343
Default

@mirino: You removed the link.host check from the script again, so your script now applies to all finished videos, just as the if condition says.
__________________
JD-Dev & Server-Admin
  #1905  
Old 30.09.2021, 17:41
mirino
JD VIP
 
Join Date: Mar 2009
Posts: 365
Default

For the other parts 10, 11 and 12 of https://board.jdownloader.org/showpo...postcount=1898 I have an idea involving the program wget. The problem that then remains is: how can I get a video link, or the video links, from a site? E.g., how can I get the video link (*) in an EventScripter script? For example:

Via copy & paste from the content of this site: www .wahrheitskongress .de/2021-tag-1-1-interview-mit-die-weisse-bueffelkalbfrau/ I got the video link (*): player.vimeo .com/video/605104717?h=a3aecf410c
JDownloader accepts this link (*) with the password (remove the 3 spaces): https ://www .wahrheitskongress .de/

Last edited by mirino; 30.09.2021 at 17:50.
  #1906  
Old 30.09.2021, 17:48
mirino
JD VIP
 
Join Date: Mar 2009
Posts: 365
Default

Quote:
Originally Posted by Jiaz View Post
@mirino: You removed the link.host check from the script again, so your script now applies to all finished videos, just as the if condition says.
I used "link.linkInfo.group == "VideoExtensions"" because "link.host == "joinpeertube.org"" does not work for all videos. What exactly do you think I did wrong, or should do differently?
  #1907  
Old 30.09.2021, 18:13
pspzockerscene
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 71,143
Default

Roughly, the following needs to be done to get the description from the "wahrheitskongress.de" website for the vimeo links:

1. Make sure JD already knows the correct referer:
Add the "wahrheitskongress.de" links via the deep crawler, so that the vimeo.com links are taken over automatically without a password/referer prompt.
Part of what I mean by this:
When you are asked for the "password", don't enter just the "wahrheitskongress.de" home page, but the complete link in which the vimeo.com link is embedded -> you will need that link later to pull the text from it.

2. Get the referer in the script.
In the EventScripter script, take the referer from the vimeo download link (PluginPatternMatcher) and turn it back into a link:
#forced_referer=<TheLinkAsAHexValueGoesHere>

3. Open that link, extract the text from the HTML code via regular expressions, and either save it directly or set it as a comment on the vimeo.com URLs.
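A rough sketch of steps 2 and 3 in plain JavaScript. Only the hex decoding and the regex extraction are shown; the surrounding EventScripter calls (reading the PluginPatternMatcher, fetching the page) are omitted, and the meta-description pattern is an assumed example, not the real page structure:

```javascript
// Decode the hex value appended as "#forced_referer=<hex>" back into a URL.
function decodeForcedReferer(pluginUrl) {
    var match = pluginUrl.match(/#forced_referer=([0-9a-fA-F]+)/);
    if (!match) return null;
    var url = "";
    for (var i = 0; i < match[1].length; i += 2) {
        url += String.fromCharCode(parseInt(match[1].substr(i, 2), 16));
    }
    return url;
}

// Pull a description out of fetched HTML with a regular expression.
// The <meta name="description"> pattern is only an assumed example;
// adjust it to the actual page.
function extractDescription(html) {
    var m = html.match(/<meta\s+name="description"\s+content="([^"]*)"/);
    return m ? m[1] : null;
}
```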

Regards, psp
__________________
JD Supporter, Plugin Dev. & Community Manager

Erste Schritte & Tutorials || JDownloader 2 Setup Download
Spoiler:

A user's JD crashes and the first thing to ask is:
Quote:
Originally Posted by Jiaz View Post
Do you have Nero installed?
  #1908  
Old 30.09.2021, 18:19
Jiaz
JD Manager
 
Join Date: Mar 2009
Location: Germany
Posts: 79,343
Default

@mirino
Quote:
Originally Posted by mirino View Post
I used "link.linkInfo.group == "VideoExtensions"" because "link.host == "joinpeertube.org"" does not work for all videos. What exactly do you think I did wrong, or should do differently?
I don't know what you want to achieve, but you did write:
Quote:
The .txt file is written for every video, even when no video description exists, and for videos other than just from "joinpeertube.org".
And according to the if condition, that is exactly what the script is supposed to do. It runs for all video links that are finished. If you want a different condition or additional ones, they have to go into the if.

You can also fetch and parse web pages directly in the script, so you don't need external wget for this.
  #1909  
Old 30.09.2021, 18:19
mgpai
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

Quote:
Originally Posted by mirino View Post
JDL takes this link (*) with password
You can create a linkcrawler rule to crawl that page and find the links. You can also specify password pattern in the same rule.

While you can use the EventScripter 'browser' methods to fetch/query the HTML of a URL and extract information from it, they are better suited for simple/lightweight tasks. If you download from the site quite often, I would recommend creating a plugin instead, since you can perform both of those tasks in it.
  #1910  
Old 30.09.2021, 18:47
pspzockerscene
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 71,143
Default

@mgpai
He only wants the text from that one website so I guess yeah, creating a LinkCrawler Rule is a first step to make things easier.

@mirino
Here is a LinkCrawler rule for "wahrheitskongress.de".
It makes JD detect these links automatically and search them for the vimeo links.
This way you never have to enter a password/referrer for these links again:
Code:
[
  {
    "enabled": true,
    "logging": false,
    "maxDecryptDepth": 1,
    "name": "wahrheitskongress.de: find all embedded vimeo URLs",
    "pattern": "https?://(www\\.)?wahrheitskongress\\.de/[a-z0-9\\-]+/",
    "rule": "DEEPDECRYPT",
    "packageNamePattern": null,
    "passwordPattern": null,
    "deepPattern": "(https?://player\\.vimeo\\.com/video/[^\"]+)"
  }
]
The rule as plain text, for easier copying/pasting:
pastebin.com/raw/Y85RMRvF
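The deepPattern from the rule can be dry-run as a plain JavaScript regex; the HTML snippet below is made up for illustration:

```javascript
// Dry run of the rule's deepPattern outside JD. The sample HTML is
// invented; only the regex itself comes from the rule above.
var deepPattern = /(https?:\/\/player\.vimeo\.com\/video\/[^"]+)/g;

var sampleHtml = '<iframe src="https://player.vimeo.com/video/605104717?h=a3aecf410c" ' +
                 'allowfullscreen></iframe>';

var matches = sampleHtml.match(deepPattern);
// matches -> ["https://player.vimeo.com/video/605104717?h=a3aecf410c"]
```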

Now, as described HERE, continue with step 2: fetch the "wahrheitskongress" link in the script and extract the text from the HTML code.

Regards, psp
  #1911  
Old 30.09.2021, 18:53
pspzockerscene
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 71,143
Default

Quote:
Originally Posted by mirino View Post
Thank you for the quick fix. It works. However, every paragraph except the first one starts with a space, e.g. " Achtung, dieses Video …"
On this again:
Jiaz and I have looked into it.
It happens when the file properties panel in the LinkGrabber/download list is enabled, the one that appears when you click on a link.
The value is then also saved incorrectly (i.e. with the spaces).

What you can do for now to work around the problem:
  1. In the quick settings at the bottom right of the LinkGrabber/download list, disable "Show properties of packages and links".
  2. Delete the affected links and add them again.
  3. If you want to copy the comment manually for some reason, use one of the following two options:
    - Right-click on the column headers -> Comment --> you can now view/select/copy the comment in a separate column
    or:
    - Right-click on the link -> Settings -> Set comment(s)

Jiaz will take another look at this and fix the bug if necessary.

Regards, psp

Last edited by pspzockerscene; 30.09.2021 at 19:58.
  #1912  
Old 03.10.2021, 05:11
TomNguyen
DSL Light User
 
Join Date: Jul 2017
Posts: 33
Default

Dear @mgpai,
Could you please help with a script that automatically deletes files that have been downloaded more than 30 days ago (and, if possible, also filters for files bigger than 5 GB) when used HDD space reaches 900 GB (or 90% of the HDD)?
I have been using a scheduled task on Windows to delete the 30-day-old files as per this instruction:
Quote:
jackworthen.com/2018/03/15/creating-a-scheduled-task-to-automatically-delete-files-older-than-x-in-windows
But it would be great to keep the old files, especially the small ones, until there is only 10% of space left. Thank you!

Last edited by TomNguyen; 03.10.2021 at 08:56.
  #1913  
Old 03.10.2021, 10:07
mgpai
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Thumbs down

Quote:
Originally Posted by TomNguyen View Post
... script that automatically deletes files that have been downloaded for more than 30 days (and bigger than 5 GB, if possible to add this filter) when used HDD space reaches 900 GB (or 90%) ...
Code:
/*
    Disk cleanup
    Trigger: Interval
    Recommended interval: 3600000 (1 hour) or more
*/

var targetSpace = 100;
var targetSize = 5;
var targetDays = 30;

getAllDownloadLinks().forEach(function(link) {
    var file = getPath(link.downloadPath);

    if (
        file.freeDiskSpace < targetSpace * 1.0E9 &&
        file.size > targetSize * 1.0E9 &&
        Date.now() - file.createdDate > targetDays * 8.64E7
    ) {
        link.skipped = true;
    }
})

Since the action cannot be undone, I would recommend moving the files to a folder of your choice, instead of deleting them directly.

Alternatively, if you also delete the corresponding links from the list when you delete the files, the API has an option to move the files to 'recycle' bin if possible.
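For reference, the selection rule from the script above can be isolated as a plain function and dry-run outside JD (field names mirror the file object used in the script; this is a sketch, not part of the original script):

```javascript
// The cleanup script's selection rule, isolated so the thresholds can be
// dry-run outside JD. Field names mirror the EventScripter file object.
var targetSpace = 100; // keep at least 100 GB free
var targetSize = 5;    // only files larger than 5 GB
var targetDays = 30;   // only files older than 30 days

function shouldClean(file, now) {
    return file.freeDiskSpace < targetSpace * 1.0E9 &&
           file.size > targetSize * 1.0E9 &&
           now - file.createdDate > targetDays * 8.64E7;
}
```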

The script is currently set to only 'skip' the files. Click 'Test Run' to test the script and verify that the 'skipped' files match your selection criteria. You can find me in JD Chat:

Code:
kiwiirc.com/nextclient/irc.libera.chat/#jdownloader
  #1914  
Old 04.10.2021, 05:00
TomNguyen
DSL Light User
 
Join Date: Jul 2017
Posts: 33
Default

Brilliant coder. Thanks a lot, mgpai!
  #1915  
Old 04.10.2021, 19:07
pspzockerscene
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 71,143
Default

@mgpai
Since we never added the ability to upload scripts or a proper overview, I'm curious:
Do you have a current overview of your scripts somewhere e.g. GitHub?

-psp-
  #1916  
Old 07.10.2021, 12:45
mgpai
Script Master
 
Join Date: Sep 2013
Posts: 1,533
Default

@psp: A few can be found on github/gitlab, but most of them are in this thread/forum. Having an index of sorts would definitely be helpful, but who will bell the cat?
  #1917  
Old 07.10.2021, 13:20
pspzockerscene
Community Manager
 
Join Date: Mar 2009
Location: Deutschland
Posts: 71,143
Default

Quote:
Originally Posted by mgpai View Post
but who will bell the cat?
I was gonna say the same... not me
  #1918  
Old 09.10.2021, 20:28
TomNguyen
DSL Light User
 
Join Date: Jul 2017
Posts: 33
Default

Quote:
Originally Posted by mgpai View Post
@psp: A few can be found on github/gitlab, but most of them are in this thread/forum. Having an index of sorts would definitely be helpful, but who will bell the cat?
Quote:
Originally Posted by pspzockerscene View Post
I was gonna say the same... not me
As I went through all the pages the other day, I made a list of the event scripts created by @mgpai. It can serve as a temporary index. Hope this helps.

Quote:
1 Add alternate url, if current url is offline
2 Add cover art
3 Add downloadlinks as speedtest
4 Add file content
5 Add link at user defined interval
6 Add metadata to image file, using external program
7 Add multiple URLs at user defined interval
8 Add Playlist Position to file name
9 Add prefix to filename
10 Add single URL at user defined interval
11 Append download folder to comment
12 Auto stop/restart downloads if the current average speed is below limit.
13 Auto-show/Auto-hide infobar
14 Automatically stop/restart slow downloads
15 Backup link lists at startup
16 Build/update downloaded links history and add comment to finished link
17 Call external program
18 Check RSS feeds
19 Clean-up packagenames
20 Cleanup names
21 Convert aac/m4a/ogg files to mp3
22 Convert aac/m4a/ogg files to mp3.
23 Convert dts to ac3 and create new video file
24 Daily Backup
25 Delete 'jpg' and 'txt' files and remove download folder if empty.
26 Delete downloaded files from user-specified folder, at user-specified interval.
27 Delete downloaded files from user-specified folder, immediately after it's been downloaded
28 Delete from extracted files, any file/folder which contains user specified keywords
29 Delete junk folders
30 Detect duplicate files
31 Disable (instagram) links if file exists on disk
32 Disable all download link(s) of an archive, if the download folder contains a sub-folder with archive name
33 Disable Auto reconnect for user specified period, if downloads fail to start/resume (no bytes downloaded) after an IP change.
34 Disable download link, if file exists in download folder/subfolders
35 Disable download link, if file exists on disk
36 Disable links with "FILE_TEMP_UNAVAILABLE" status
37 Disable links, if they exist in history
38 Disable matching files
39 Disable pending links of hosts, after x retries
40 Enable Folder Watch on user specified dates
41 Enable Tray Icon if it is disabled
42 Export download URLs
43 Export related URLs
44 Export URLs (LinkGrabber)
45 Extract archives after all packages have finished
46 Extract/set package name from file name
47 Filename to lower case
48 Flatten Archives (Move extracted files from sub-folders to the main extraction folder)
49 Format Date
50 From selected links, move only the pending download links to packages by Host
51 Generate md5 hash file in download folder
52 Get Archive Password (Downloads)
53 Get links from finished crawljob
54 Get Selected packageName
55 Hide/Unhide Infobar Window
56 If file exists, stop the download and remove the link, else rename it.
57 If update is available, finish running downloads, then restart and update when JD is idle
58 Import accounts from text/csv file
59 Limit number of reconnects allowed during user defined duration
60 Limit per download session
61 Limit running links/archives
62 Limit running packages
63 LinkGrabber Proxy cycle
64 Log crawledlink info to file
65 Monitor connection
66 Move all links to download list
67 Move archive files after extraction
68 Move downloaded non-archive files to user-specified folder
69 Move extracted files
70 Move extracted files to base folder and remove empty sub-folders.
71 Move finished link to user-defined package
72 Move finished non-archive files, to user defined folder
73 Move links to "Already Downloaded" package, if they exist in history
74 Move media files based on duration
75 On download finished, append download folder to comment
76 On package finished, if the package does not contain archives, move files to user-specified location
77 On package finished, move files to user-specified location
78 Open container link in browser
79 Open finished or partially download file for the selected download link
80 Open partially download file for the selected download link
81 Open php file in text editor
82 Open selected files
83 Open selected folder in media player
84 Open selected package
85 Open url in browser
86 Pause downloads during extraction
87 Pause and resume downloads if average speed is below target speed
88 Play sound when download finished
89 Play sound when new links added (per job)
90 Preview file
91 Proxylist updates for hosters (e.g. Zippyshare.com)
92 Read comment from file and add it to selected links
93 Reconnect if all downloads have been running for at least one minute, and the average (global) download speed is below user specified speed
94 Reconnect if all downloads have been running for at least one minute, and the average (global) download speed is below user specified speed
95 Remove crawledlinks which match user-specified pattern
96 Remove download link, if file exists on disk
97 Remove linkgrabber links which are present in user-specified text file
98 Remove links - from download list, if a file with matching name + size exists in user-specified folders
99 Remove orphaned archive settings files from "cfg/archives" folder
100 Remove prefix from filename
101 Remove selected links and move files to trash
102 Remove test/word from packagenames
103 Remux 'mp4' to 'mkv'
104 Rename files by matching download url pattern
105 Replace 128kbit audio with 192kbit audio in 1080p youtube videos
106 Replace characters in file name
107 Replace file name with archive name
108 Replace space with underscore in filenames
109 Request reconnect if the current average speed is below limit.
110 Reset finished link, if size is zero bytes
111 Restart & Update when JD is idle, or after 'x' hours.
112 Restart and Update JD when idle
113 Restart and update, if updates are available
114 Restart download links if the current average speed is less than user-specified speed
115 Restart JD if download speed is 0 and JD is idle.
116 Restart slow links
117 Resume offline links
118 Rewrite URL
119 Run external command on extracted files & delete archive links from list
120 Run external program if package contains 'pdf' file
121 Run external program when all packages finished
122 Run external script
123 Run filebot on package finished (custom event)
124 Save finished link
125 Save Link URLs
126 Save linkgrabber urls to text file
127 Save links to text file
128 Save youtube links and basic information to a html page
129 Schedule Extraction
130 See new links (in the comment field)
131 Send email notification (NAS)
132 Set filename as comment
133 Set last download destination as download folder for new links
134 Set Package Name based on matching host
135 Simple history
136 Skip links based on size and package name
137 Skip links from specified host when daily limit is reached
138 Skip slow links and resume them after wait time.
139 Skip/Unskip (or reset) unfinished downloads from user-specified hosters
140 Skip/Unskip resumable links, based on user-specified speed and duration
141 Split Packages and create sub-folder based on package length
142 Start new downloads based on ETA of running downloads
143 Start pending downloads after extraction is finished
144 Stop and restart all downloads if average speed is below target speed
145 Stop and restart slow links.
146 Stop downloads when target reached
147 Time based download control
148 Toggle sound notification
149 Toggle Speed Limit
150 Unskip 'account missing' and 'skipped - captcha input' links at user specified interval
151 Unskip 'account missing' links at user specified interval
152 Unskip and start downloading links with "Invalid download directory" message, if the destination folder is available.
153 Unskip links with unsolved captcha
154 Update when Idle
155 Update when JD is Idle
156 View extracted files
157 View recent JD update log(s)
158 View Update Log
159 When package is removed, do something with the extraction folders of each archive in that package
160 Write info to text file
161 Write link comments from a package to 'csv' file
162 Write link comments to 'csv' file.

Last edited by TomNguyen; 09.10.2021 at 20:32.
  #1919  
Old 10.10.2021, 23:16
JDL1059
Junior Loader
 
Join Date: Oct 2021
Posts: 10
Setting Up Script to Auto-Download from a List of Links That Adds Files Over Time

So I have a list of megaupload links, and I would like a script that periodically checks those links and, if new files it hasn't already downloaded have been added, starts downloading them.

Is that possible?
  #1920  
Old 11.10.2021, 12:02
BillyCool
Super Loader
 
Join Date: Sep 2016
Location: Australia
Posts: 28
Default

Hello,

I'm working on a script and having trouble getting the download path from a crawled link:
  • link.getDownloadPath() results in error
    Code:
    Security Violation org.appwork.exceptions.WTFException (#10)
  • link.getPackage().getDownloadFolder() results in null

Any ideas?
Provided By AppWork GmbH | Privacy | Imprint
Parts of the Design are used from Kirsch designed by Andrew & Austin
Powered by vBulletin® Version 3.8.10 Beta 1
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.