On Thu, 07 Apr 2011 03:53:01 -0400, Diego . <darkspirit5000@gmail.com> wrote:
> I will try to explain some of the reasons:
>
> 1. The DPI system avoids duplicate code and complex sharing/syncing code.
> 2. It is a universal solution: it can be used for files, cookies, etc.
> 3. Each newly opened dillo has access to the DPIs that are already running.
> 4. It isolates the DPIs and the dillos, each in its own process.
>
> I think the DPI system could be better, too. Its API is not fully stabilized yet, and there are no docs or SDK for writing DPIs.
>
> In more detail: 1.a: With built-in local file browsing, every running dillo carries the code for it. Most of the time that code is not used, and each dillo wastes that memory. With DPIs it is loaded only when needed and unloaded when not. In the file case, the DPI is a high-performance solution (threads) that would be a waste of memory inside dillo, but not as an external DPI (you could also write and use a different file DPI focused on saving memory).
A couple extra kilobytes for file browsing is not wasteful, IMHO. If space is that tight, as on an embedded device, it could easily be #define'd out and added as a ./configure option, like with cookies. As I understand it, most of the memory used by local file browsing is dynamically allocated anyway; it's not like those DilloDirs stay in memory for all eternity.

There's a LOT of duplicate code in DPI -- it's just *different* duplicate code. For example, each one has to implement signal handling, authentication, etc., not to mention the mess of interprocess communication. There's also the added overhead of each separate DPI process, and of the DPI daemon itself.
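For illustration, here's a minimal sketch of what compiling it out could look like, along the lines of how the cookies code is already guarded by its configure option. The DISABLE_FILE_BROWSING macro and both function names are hypothetical, not actual Dillo symbols:

/* Sketch only: built-in file browsing behind a hypothetical
 * DISABLE_FILE_BROWSING configure option. */
#include <stdio.h>
#include <stdlib.h>

#ifndef DISABLE_FILE_BROWSING
/* Directory-listing code is compiled in only when the feature is on. */
static char *File_build_dir_page(const char *path)
{
   char *page = malloc(256);

   if (page)
      snprintf(page, 256, "<html><body>Index of %s</body></html>", path);
   return page;   /* real code would read the directory and build HTML */
}
#endif

void a_File_get(const char *path)
{
#ifndef DISABLE_FILE_BROWSING
   char *page = File_build_dir_page(path);

   if (page) {
      puts(page);             /* real code would hand this to the cache */
      free(page);
   }
#else
   (void)path;
   fprintf(stderr, "file: URIs were disabled at compile time\n");
#endif
}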
> 1.b: With cookies, the DPI system avoids code in dillo for coordinating shared writes to the cookies file. It also makes it easy to debug cookies (another piece of parallel code) across various dillos.
This is the example everyone uses to point out DPI's "strengths". Well, I've looked at the cookies code, and I'm going to call bullshit.

1) The cookies file is only accessed twice: to read cookies when the browser opens, and to write them when it closes. It doesn't need to be open at any other time.

2) As I understand the code, if multiple Dillo instances are running, all but the first won't have cookies anyway.

3) Here's the real kicker: *you don't need the cookies file*. Here's the code to prove it:

http://dillo-win32.sourceforge.net/files/23-dillo-r1803.cookies.diff

This is a long patch, but basically it removes the cookies DPI; integrates the code into the browser (fairly straightforward, because src/cookies.c already duplicates large chunks of the DPI verbatim -- what's this about redundant code, again?); and changes it to not read/write cookies from/to disk. That last part is trivial: just initialize the cookies file pointer to NULL, check for a file pointer before any read/write operations, and don't abort if there is no file pointer in a_Cookies_init (see the sketch below). Dillo holds cookies in memory while running anyway, so again, *it doesn't need a cookies file* except for persistence between sessions.

By the way, holding cookies in memory has several advantages. For one thing, it's a better balance between functionality and privacy, because you can access cookie-dependent sites without leaving traces on your disk. For another, if you have multiple Dillo instances running, the "redundancy" (read: isolation) can be a good thing. I often have two separate instances running, one for each of my Gmail accounts.

Speaking of which, I've also moved HTTPS in-browser. Rather than re-implementing the HTTP engine as the DPI seems to do, I simply added transparent SSL support to dSock, my Winsock/BSD sockets portability wrapper. It required some nastiness to support the HTTP CONNECT command for proxying, since that has to be sent in plain text, but otherwise it needed only minimal patching; once you tell dSock to go secure, it will automatically call the SSL-enabled read/write functions for you. Here's the patch for that:

http://dillo-win32.sourceforge.net/files/22-dillo-r1803.https.3.diff
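Back to the cookies change: to make it concrete, here's a condensed sketch of the idea, not the actual patch; apart from a_Cookies_init, the names are simplified stand-ins:

#include <stdio.h>

static FILE *file_stream = NULL;    /* NULL means "no persistence" */

void a_Cookies_init(void)
{
   /* Try to open the cookies file, but don't abort on failure:
    * with no file, we simply run with in-memory cookies only. */
   file_stream = fopen("cookies.txt", "r+");
}

static void Cookies_save(const char *cookie_line)
{
   if (file_stream)                 /* guard every disk access */
      fprintf(file_stream, "%s\n", cookie_line);
   /* the in-memory cookie list is updated unconditionally elsewhere */
}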
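And here's a guess at the shape of the dSock approach, assuming OpenSSL underneath -- the real dSock API may well differ. Each socket carries an optional SSL handle, and a single read/write pair dispatches on it transparently:

#include <openssl/ssl.h>
#include <unistd.h>

typedef struct {
   int fd;
   SSL *ssl;      /* NULL until the caller asks to "go secure" */
} DSock;

/* After this call, the same dsock_read/dsock_write keep working, but
 * now over TLS. A CONNECT request for an HTTPS proxy must be written
 * *before* this, while the stream is still plain text. */
int dsock_secure(DSock *s, SSL_CTX *ctx)
{
   s->ssl = SSL_new(ctx);
   if (!s->ssl || !SSL_set_fd(s->ssl, s->fd))
      return -1;
   return SSL_connect(s->ssl) == 1 ? 0 : -1;
}

int dsock_read(DSock *s, void *buf, int len)
{
   return s->ssl ? SSL_read(s->ssl, buf, len) : read(s->fd, buf, len);
}

int dsock_write(DSock *s, const void *buf, int len)
{
   return s->ssl ? SSL_write(s->ssl, buf, len) : write(s->fd, buf, len);
}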
> 2: With the DPI system, you don't need to do one thing for protocols, another for downloads, another for cookies, etc. Each is a different problem, but all are handled in a similar way.
src/capi.c, src/cookies.c, and various other source files beg to differ; those two in particular are very poor examples, since they're called from separate source files, under completely different circumstances, in completely different ways. In other words, you do one thing for protocols, ...

I vaguely recall that my built-in downloader took only one function call to start, as opposed to at least three for the DPI (it cleans itself up automatically). It also has no external dependency on wget, or any interprocess communication for that matter.
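For comparison's sake, a one-call downloader interface might look something like this. The names are invented for illustration -- this is not actual dillo-win32 code -- but it shows how a worker thread that frees its own state leaves the caller with nothing to clean up:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
   char *url;
   char *save_path;
} Download;

static void *Download_thread(void *data)
{
   Download *dl = data;
   /* ...fetch dl->url and write it to dl->save_path here... */
   free(dl->url);
   free(dl->save_path);
   free(dl);               /* self-cleanup: the caller never sees this */
   return NULL;
}

/* The single call the browser makes; compare with the DPI route, which
 * needs a daemon connection, a start message, and a transfer loop as
 * separate steps. */
void a_Download_start(const char *url, const char *save_path)
{
   Download *dl = malloc(sizeof(Download));
   pthread_t tid;

   if (!dl)
      return;
   dl->url = strdup(url);
   dl->save_path = strdup(save_path);
   if (pthread_create(&tid, NULL, Download_thread, dl) == 0)
      pthread_detach(tid);          /* fire and forget */
}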
> 3: All the dillos running in different processes have the same access to the DPIs' data and code. It is easy to share the DPIs' functions and the data they process. Which dillo runs an external program, or which dillo process holds the data, doesn't matter when it is done with a DPI.
What part of the DPI code do you consider "easy"? Interprocess communication is messy enough as it is, let alone the way DPI implements it.
> 4: If a process crashes, the other DPIs and the other dillo processes keep running. If it is a DPI that crashed, it is re-run and the work continues (there is a bug in the code: sometimes a defunct DPI can block a dillo).
If a process is properly coded, it should never crash in the first place. I'm just saying.
> About the DPI system's faults:
> - Documentation
> - The changing API and protocol
> - It does not work on Windows
> - The protocol, API, and code should be simplified
>
> There is a lot to discuss about making the DPI system better. Maybe you and I can try to get DPI running on Windows.
I've considered porting DPI. The problem is, interprocess communication is completely different on Windows, and the best-case scenario is that porting it would require a complete API overhaul. (Though I understand the API changes in every release anyway, so I suppose that isn't much of a problem.)

A much more practical solution, as far as I'm concerned, is simply to build that functionality into the browser:

- It's simpler, since there's no interprocess communication required.
- It's more portable, for the same reason.
- It's easier to distribute, since you only need one executable.

I'm especially insistent on the last point. With all dependencies statically linked, I've got my port down to a single executable, and I hope to keep it that way. Up until the last release, you could fit the entire browser on a floppy disk -- and it's only bigger now because of OpenSSL, not because of anything in Dillo itself. (I've looked at embedded libraries like yaSSL, which are much smaller, but I can't find a reliable download link.)

Anyway, that's my two cents.

~Benjamin