Thread: [Solved] Hitomi.la
  #7  
Old 20.01.2020, 23:20
Etshy is offline
DSL Light User
 
Join Date: Oct 2019
Posts: 34

From what I see, the "c/1a" in your example is the same as for the thumbnails (I guess you already have access to those when you copy the comic's URL).
If you have an example where that's not the case, let me know and I'll check whether it's generated elsewhere.

As for the subdomain, which seems to change often even within the same comic, I'll try to dig a bit.

First you need the comic ID (they call it galleryid in their files).
You can get it from the "Read Online" button's link; it's the number just before ".html".
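Something like this should pull it out (the /reader/ path in the example is just my assumption; only the number-before-".html" part matters):

// Sketch: extract the gallery ID from the "Read Online" link.
// e.g. getGalleryId("https://hitomi.la/reader/1553604.html") -> "1553604"
function getGalleryId(readerUrl) {
    const match = readerUrl.match(/(\d+)\.html/);
    return match ? match[1] : null;
}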

Then you need their "json" (it's a JS object/array assignment, but the payload is basically JSON) at this URL: "//ltn.hitomi.la/galleries/" + galleryid + ".js"
Example: //ltn.hitomi.la/galleries/1553604.js
By stripping the leading "var galleryinfo = " you get valid JSON.
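A rough sketch of that step (assuming a fetch-capable environment; the prefix removal is exactly the "var galleryinfo = " stripping described above):

// Sketch: download the gallery info and parse it as JSON.
async function getGalleryInfo(galleryId) {
    const response = await fetch('https://ltn.hitomi.la/galleries/' + galleryId + '.js');
    const text = await response.text();
    // The file is "var galleryinfo = {...}", so drop the assignment prefix.
    return JSON.parse(text.replace('var galleryinfo = ', ''));
}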

Then you can loop over this JSON and, for each entry, call
url_from_url_from_hash(galleryid, our_galleryinfo[i], 'webp');
(there's a sketch of the loop below, after the link)
Here is a pastebin copy of their common.js with the URL-related functions:
**External links are only visible to Support Staff**
You can "copy" and "convert" the code (it's mostly a bit of regex and String.replace things)

If there's a specific part of the JS where you need help, let me know.



edit: btw, it seems my call would only work if the comic has webP (since I pass "webp" as the argument)
(it seems all new comics have webP, but I'm not sure).
The JSON linked above tells you whether each image has a webP version, so you'd need a condition on that: pass "webp" if it does and null if it doesn't (I guess, not tested).
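Untested, but the condition I mean would look something like this ("haswebp" is the flag name I'd expect in the JSON entry; double-check it against an actual galleries .js file):

// Sketch: pick the extension argument per file (not tested).
for (const file of files) {
    const extension = file.haswebp ? 'webp' : null;
    const url = url_from_url_from_hash(galleryId, file, extension);
    // ... download url ...
}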

Last edited by Etshy; 20.01.2020 at 23:25.