Hi,

On Mon, Jun 17, 2024 at 02:57:44PM +0200, a1ex@dismail.de wrote:
> > So there is no need to download the file completely before it is piped to the tools.

> I don't think mupdf will open a file piped from stdin. Neither does nsxiv. Maybe I'm missing something.
Apparently that's the case, although it should be possible to support, judging by the progressive loading documentation:

  https://mupdf.readthedocs.io/en/1.22.0/progressive-loading.html

However, Okular and Zathura seem to be able to open PDFs from the standard input as a pipe. Same with feh for images.
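For instance, something along these lines should work (the URLs are just placeholders; the trailing "-" tells both programs to read the document from stdin):

  # Stream a PDF straight into Zathura, no temporary file:
  curl -s https://example.com/paper.pdf | zathura -

  # Same idea for images with feh:
  curl -s https://example.com/photo.png | feh -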
> It is possible to determine the Content-Type without downloading the file with something like:
>
>   curl -sI -o /dev/null -w '%{content_type}' "$url"
Yes, but then it would require a second GET request to download the actual file. I think we can just begin a GET request from Dillo, parse the HTTP headers, select the appropriate handler from the MIME type, and then either pipe the content to the handler or write it to a file and pass the file path. The first option would let the handler start without waiting for the whole file to download. In fact, it should be possible to never download more than what the handler program is consuming. One of Dillo's main objectives is to support slow (or metered) connections gracefully.
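Here is a minimal sketch of that flow in POSIX shell, assuming the viewers from above (the dispatch function and the fallback path are made up for illustration); Dillo would do the equivalent internally, but it shows the idea:

  #!/bin/sh
  # Sketch: single GET; pick a handler from the Content-Type header and
  # stream the body into it as it arrives.

  dispatch() {
      ctype=""
      # Consume the response headers from the stream, line by line.
      # (read processes a pipe one byte at a time, so the body is left
      # untouched on stdin for the handler.)
      while IFS= read -r line; do
          line=$(printf '%s' "$line" | tr -d '\r')
          [ -z "$line" ] && break            # blank line ends the headers
          case "$line" in
              [Cc]ontent-[Tt]ype:*) ctype="${line#*: }" ;;
          esac
      done
      # Whatever is still on stdin is the body; hand it over.
      case "$ctype" in
          application/pdf*) exec zathura - ;;
          image/*)          exec feh - ;;
          *)                exec cat >/tmp/dillo-download ;;
      esac
  }

  # -i prepends the response headers to the output stream
  # (no -L here: following redirects would add extra header blocks).
  curl -si "$1" | dispatch

Because nothing buffers the body beyond the pipe, a handler that reads slowly also throttles the transfer, which is roughly the "download only what is consumed" behavior.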
> > Executing a user script is fine, but I also want to be able to rewrite the page and bring it back to Dillo for display, which requires more cooperation.

> I'm curious what the use-case for this is. Sounds interesting.
For example, you could write a filter program that parses the HTML and rewrites links to JS-hungry websites so they point to alternatives, in the same way libredirect[1] works (a toy example is sketched below).

[1]: https://libredirect.github.io/

This would also allow patching the HTML of sites so you can fix them to work better (or at all) in Dillo. Firefox also does this, via the webcompat[2] project, in what they call "interventions": page authors sometimes don't fix their sites, or take a long time to, so the fix is patched in from the browser directly. You can open about:compat to see the long list of patches; here[3] is one for YouTube.

[2]: https://webcompat.com/
[3]: https://hg.mozilla.org/mozilla-central/rev/1fa7de8dec52

This already happened to Dillo with Hacker News, and there are still some minor issues left unsolved. The matching rules should apply those corrections only to the set of matching URLs.
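A toy version of such a filter (the invidious.example and nitter.example hosts are placeholders, and a real filter would want an actual HTML parser rather than sed):

  #!/bin/sh
  # Rewrite filter: reads HTML on stdin, writes patched HTML to stdout.
  # Dillo would pipe a matching page through this before rendering it.
  sed -e 's|https://www\.youtube\.com|https://invidious.example|g' \
      -e 's|https://twitter\.com|https://nitter.example|g'

Best,
Rodrigo.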