Hi Kevin,

On Sun, Jul 07, 2024 at 10:21:05AM +1000, Kevin Koster wrote:
> Rodrigo Arias wrote:
> > Working on these examples is very helpful to design the rule system, so feel free to mention more cases.
> In addition to HTTP header fields, it would be handy to assign rewrite rules according to HTTP status codes. I spend a lot of time browsing dead/dying websites, so I'm always using the Wayback Machine. I'd like to use this system to process 404 (500, etc.) error pages through a program/script that adds a link at the top to look up the URL at the Wayback Machine. It might also check for a copy I've archived locally, if I get my website archives more organised, and link to that as well.
Good idea, this should be doable with the rule mechanism. An important consideration is being able to quickly forward good requests or responses, so we keep the overhead of normal browsing low. We could probably just hook it to >= 400, and then you handle those broken responses as you want, while 200 responses are quickly forwarded to Dillo:

  match http-status-code >= 400 action broken-page

I think we may need to define several "tables", like in iptables, so we can have rules that handle the traffic at different stages. This one doesn't require decoding the compressed content.
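To sketch what a broken-page action could look like, here is a small Python script. The whole interface is hypothetical, since the rule system doesn't exist yet: it assumes the proxy runs the script with the request URL as its first argument, feeds the decoded HTML body on stdin, and sends whatever the script prints back to Dillo:

  #!/usr/bin/env python3
  # Hypothetical broken-page handler: prepend a Wayback Machine link
  # to an error page. Assumes the request URL arrives in argv[1] and
  # the decoded HTML body on stdin; the rewritten body goes to stdout.
  import html
  import sys

  url = sys.argv[1]
  body = sys.stdin.read()
  link = 'http://web.archive.org/web/' + url
  banner = ('<p>Page unavailable: <a href="%s">look it up at the '
            'Wayback Machine</a></p>' % html.escape(link))

  # Place the banner right after the <body> tag if there is one,
  # otherwise just prepend it.
  idx = body.lower().find('<body')
  end = body.find('>', idx) if idx != -1 else -1
  if end != -1:
      body = body[:end + 1] + banner + body[end + 1:]
  else:
      body = banner + body
  sys.stdout.write(body)

The same hook could also check your local archives first and only fall back to the Wayback Machine link when nothing is found there.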
> I've already got the Wayback Machine set up as a search option in dillorc, which returns the latest archived copy of a URL:
>
>   search_url="w Wayback Machine http://web.archive.org/web/%s"
>
> But I use it so much that the ease of just clicking a link would be a real advantage, especially after opening multiple broken links from a page. I might even try to write something that detects archived error pages and works back to the last actual copy of the page.
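For working back to the last actual copy, the Wayback Machine availability API may help: given a URL and an optional timestamp, it returns the closest snapshot as JSON (documented at https://archive.org/help/wayback_api.php). A rough sketch in Python; detecting archived error pages would still need some heuristic on top of this:

  #!/usr/bin/env python3
  # Ask the Wayback Machine availability API for the snapshot closest
  # to a given URL (and optional YYYYMMDD timestamp).
  import json
  import sys
  import urllib.parse
  import urllib.request

  def closest_snapshot(url, timestamp=None):
      """Return the URL of the closest archived copy, or None."""
      params = {'url': url}
      if timestamp:
          params['timestamp'] = timestamp  # e.g. '20060101'
      query = urllib.parse.urlencode(params)
      with urllib.request.urlopen(
              'https://archive.org/wayback/available?' + query) as resp:
          data = json.load(resp)
      snap = data.get('archived_snapshots', {}).get('closest')
      if snap and snap.get('available'):
          return snap['url']
      return None

  if __name__ == '__main__':
      print(closest_snapshot(sys.argv[1]) or 'no archived copy found')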
I think this is a very good addition to the default set of search engines.

Thanks,
Rodrigo.