Hi, On Thu, 30 Oct 2003, Indan Zupancic wrote:
Updated the diff; gcc 2.x should work now. The link is still http://www.xs4all.nl/~dorinek/dillo/dillo-ssl.diff.gz
Here go my comments. (An important part of the answer is in this email: http://article.gmane.org/gmane.comp.web.dillo.devel/1407 )
It would be nice to get some feedback from the Dillo developers; I'd like to know what they think should change before the code is good enough to go into the official Dillo, and whether they would even consider that, of course.
Of course it is considered! (see above URL)
I have plenty of ideas, but most of them aren't feasible without merging the two IOData structures (currently there is one for the read part and one for the write part; why, I don't know, it doesn't make things simpler),
It serves to isolate the query process from the answer process. Also, as dillo uses pipes to talk to itself, it makes sense to have read-only and write-only FDs. Of course they can be merged, but it may not be simple.
doing that makes it much easier to do certain, more complex things that need all the fd-related info in one place: ssl; connection caching (currently Dillo opens and closes a connection for each action; although that seems to work well, it's still more efficient to keep connections open longer, especially with slow or bad network connections); "keep alive" (to avoid timeout problems in ssl, which are quite annoying when you're taking too much time typing an email); simple ftp support; etc.
Currently dillo does that. Every time you add a bookmark, dillo and the server chat a little while before adding it (with connection caching).
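To make the proposed merge concrete, a single per-fd structure might look roughly like this. All the names here are invented for illustration; they are not Dillo's actual declarations:

```c
#include <stdlib.h>
#include <time.h>

/* Hypothetical merged IOData: one structure per fd, holding both the
 * read and the write side, plus the state a connection cache and an
 * SSL keep-alive timer would need. Illustrative only. */
typedef struct {
   int fd;             /* the one socket, used for read and write */
   void *ssl;          /* SSL handle when https, NULL otherwise */
   char *rd_buf;       /* pending input */
   size_t rd_len;
   char *wr_buf;       /* pending output */
   size_t wr_len;
   time_t last_used;   /* for connection caching / keep-alive */
   int keep_open;      /* cache this connection after the request? */
} IOData;

/* Allocate a zeroed IOData for an already-open fd. */
IOData *io_new(int fd)
{
   IOData *io = calloc(1, sizeof(IOData));
   if (io) {
      io->fd = fd;
      io->last_used = time(NULL);
   }
   return io;
}
```

With everything in one place, the connection cache only has to scan a list of IOData for an entry with a matching host and `keep_open` set, instead of pairing up separate read and write records.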
I know that the Dillo developers want to implement most "extra" features like https/ftp/bookmarks/etc as plugins, but I think it's better to implement the features that are used a lot, or are just very important, like https and ftp, natively in Dillo with a small implementation. The overall code is usually smaller too; you can say that Dillo is just 300 K, but it would be a bit weird if the plugins together end up bigger than Dillo after a while. https took around 2 K; I think I can implement basic FTP (and possibly ftps) in about 15 K, using no external ftp library or program (hopefully 10 K, worst case 20 K).
That is a design decision: monolithic vs. distributed. It is hard to make a definitive choice. Dillo follows the distributed approach.
If you let me loose on Dillo's code then this is what I would do:
# Clean up the IO engine: one IOData per fd.
Could be.
# Move the dns stuff to IO.
# Move all the socket stuff into IO.c; the module that wants to handle opening a connection should also close it (this covers all non-internet sockets, like files; Unix sockets may be added to IO.c).
Gzilla had a flat design, we've come a long way to layer dillo and to separate its code in modules.
# Add ftp support.
It has.
# Add file upload support in html forms.
Agreed.
# Add connection caching to IO.
It has.
# Automatic keep-alive-like behaviour for ssl connections.
# Maybe always ACCEPT_SESSION for cookies when they come via a https connection.
Could be.
# Get rid of the ccc construction, or change it so that it behaves more logically. My opinion is that you should have either a clear, unique API for each module, or one general API that is the same for every module. Something in between, like the current ccc functions, is only confusing (if you want chain-like behaviour, then use function pointer structures or something).
The CCC is all about parallelism and error handling. It is the supporting core of the application. Not easy to substitute.
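For reference, the "function pointer structures" idea from the quoted text could be as simple as the sketch below. The names are invented, and this toy chain does none of the parallelism or error handling the CCC provides:

```c
#include <stddef.h>

/* Minimal chain-of-handlers sketch: each link transforms the data and
 * passes the result to the next link. Invented names, not the CCC API. */
typedef struct ChainLink ChainLink;
struct ChainLink {
   int (*handle)(int data);
   ChainLink *next;
};

/* Run the chain head to tail, feeding each result to the next link. */
static int chain_run(ChainLink *link, int data)
{
   for (; link; link = link->next)
      data = link->handle(data);
   return data;
}

/* Stand-in stages, just to show the plumbing. */
static int add_header(int n) { return n + 1; }
static int compress(int n)   { return n * 2; }
```

Building a pipeline is then just linking structs together and calling `chain_run`; whether that could carry the CCC's bidirectional control messages is exactly the open question.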
# When changing the internal API anyway, making shared library plugins possible too, at least for the IO part (maybe ftp library plugin as proof of concept).
Changing the internal API...
# Get rid of the splash stuff; move it to an external html file that is opened by default when nothing else is opened. Currently the splash screen is even opened when you do "dillo http://site.org". Loading a file is almost as fast as the built-in splash screen. Removing it makes Dillo smaller (about 10 K) and simpler, and lets people make their own startup pages (for instance a copy of google.com saved to disk).
In the past that approach was used; ~/.dillo ended up having a splash file for each release. The point is that what consumes more memory is the widget tree for rendering, not the source itself. Having a dillorc option to replace the Splash with a local file may be handy, though.
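Until such a dillorc option exists, a tiny wrapper script gets a similar effect from the outside; the path here is hypothetical:

```shell
#!/bin/sh
# Hypothetical wrapper: start Dillo on a locally saved page instead of
# the built-in splash screen. The page location is illustrative.

start_url() {
    # Print the URL Dillo should open: a saved local page, if present.
    page="$1"
    if [ -f "$page" ]; then
        echo "file://$page"
    fi
}

URL=$(start_url "$HOME/.dillo/startpage.html")
# exec dillo ${URL:+"$URL"}   # uncomment to actually launch Dillo
```

If the saved page is missing, the wrapper falls through to plain `dillo` and you get the normal splash.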
As you could notice, I'm mainly interested in the internet/IO part, graphical stuff doesn't attract me much. If it did, I would help/start porting Dillo to FLTK. If Dillo decides to go with GTK 2 then I will probably stop using it :(.
It puzzles me: changing the IO engine, integrating parts of the code that were separated on purpose, changing the internal API, getting rid of the CCC and porting to FLTK, we'd be talking about another browser.
I don't have enough faith in GTK2; it's already too big (and I saw the test results). If you have GTK 2 running anyway it doesn't matter much if you run Dillo too, of course, but that's the case with all those KDE/Gnome programs as well. IMHO, you should look at all the code that a program uses, including the code in its dependencies. I prefer a program that has its own small and efficient implementation of something over one that is smaller but uses huge libraries.
Agreed. The GTK2 issue is under study now.
I'm also thinking about making a Dillo installer (sh script) that downloads and installs the cvs version of Dillo and applies the patches the user wants. This needs some coordination: I saw links to other Dillo patches, but I don't know whether they are up to date, nor whether their makers are willing to keep them so. Currently the only patches I know to be up to date are Frank's patches and my own https patch.
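Such an installer could be sketched along these lines. The repository path and patch names below are placeholders, not real locations, and with DRY_RUN=1 (the default) the script only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of the proposed installer: fetch Dillo from CVS and apply a
# set of user-chosen patches. Server path and patch names are
# placeholders. DRY_RUN=1 prints commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

run cvs -d:pserver:anonymous@cvs.example.org:/cvsroot/dillo checkout dillo
for p in "$@"; do                # e.g. dillo-ssl.diff, frame.patch
    run patch -d dillo -p1 -i "$p"
done
run sh -c 'cd dillo && ./configure && make'
```

The dry-run default makes it easy to audit what the script will do before letting it touch the tree, which matters when the patch list comes from third parties.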
Making a web site (or a page for it) requires time; if you're willing to do that, go ahead, but beware that we do not support unofficial patches, so you'd have to answer all the incoming questions too.

Cheers
Jorge.-