I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:

Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started

where truss -faedD -s 128 dillo shows:

41523: 12.860150103 0.000110350 write(1,"Nav_open_url: new url='dpi:/bm/'\n",33) = 33 (0x21)
41523: 12.860414941 0.000037435 open("/root/.dillo/dpid_comm_keys",O_RDONLY,0666) = 4 (0x4)
41523: 12.860511880 0.000017599 fstat(4,{ mode=-rw------- ,inode=57842,size=14,blksize=4096 }) = 0 (0x0)
41523: 12.860635080 0.000034920 read(4,"5020 f9097805\n",4096) = 14 (0xe)
41523: 12.860728389 0.000018718 close(4) = 0 (0x0)
41523: 12.860826446 0.000016762 madvise(0x8036a8000,0x1000,0x5,0xfc0,0x2008,0x1) = 0 (0x0)
41523: 12.860962496 0.000029892 socket(PF_INET,SOCK_STREAM,0) = 4 (0x4)
41523: 12.861073963 0.000015086 setsockopt(0x4,0x6,0x1,0x7fffffffe2cc,0x4,0x1) = 0 (0x0)

*** long delay here ***

41523: 87.854362496 74.993148012 connect(4,{ AF_INET 127.0.0.1:5020 },16) ERR#60 'Operation timed out'
41523: 87.854777073 0.000028215 stat("/usr/share/nls/C/libc.cat",0x7fffffffdd90) ERR#2 'No such file or directory'
41523: 87.854968159 0.000026260 stat("/usr/share/nls/libc/C",0x7fffffffdd90) ERR#2 'No such file or directory'
41523: 87.855157289 0.000030171 stat("/usr/local/share/nls/C/libc.cat",0x7fffffffdd90) ERR#2 'No such file or directory'
41523: 87.855342508 0.000028774 stat("/usr/local/share/nls/libc/C",0x7fffffffdd90) ERR#2 'No such file or directory'
41523: 87.855672718 0.000116775 write(1,"Dpi_check_dpid_ids: Operation timed out\n",40) = 40 (0x28)
41523: 87.855797315 0.000024026 pipe(0x7fffffffe2e0) = 0 (0x0)
41523: 87.855956274 0.000014248 sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGABRT|SIGEMT|SIGKILL|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
41523: 87.867098751 0.011049728 fork() = 41861 (0xa385)
41523: 87.868057253 0.000017321 sigprocmask(SIG_SETMASK,0x0,0x0) = 0 (0x0)
41523: 87.868278789 0.000018158 close(6) = 0 (0x0)
41523: 87.868445291 0.000014806 read(5,0x7fffffffc290,8192) = 0 (0x0)
41523: 87.868540834 0.000016762 madvise(0x8036ed000,0x3000,0x5,0xec,0x6008,0x3) = 0 (0x0)
41523: 87.868635259 0.000017879 close(5) = 0 (0x0)
41523: 87.868838358 0.000086603 write(1,"Dpi_blocking_start_dpid: try 1\n",31) = 41861 (0xa385)
41861: 0.007480839 0.000052800 write(4,"5020 e1ab7e05\n",14) = 14 (0xe)
41861: 0.007616331 0.000027098 close(4) = 0 (0x0)
41861: 0.007730592 0.000027937 socket(PF_INET,SOCK_STREAM,0) = 4 (0x4)
41861: 0.007815798 0.000012851 setsockopt(0x4,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.007892903 0.000012013 fcntl(4,F_GETFD,) = 0 (0x0)
41861: 0.007975595 0.000011733 fcntl(4,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.008087620 0.000020952 bind(4,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.008303849 0.000017880 bind(4,{ AF_INET 127.0.0.1:5021 },16) = 0 (0x0)
41861: 0.008372573 0.000014248 listen(0x4,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.008458058 0.000019276 socket(PF_INET,SOCK_STREAM,0) = 5 (0x5)
41861: 0.008537677 0.000012012 setsockopt(0x5,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.008611709 0.000011733 fcntl(5,F_GETFD,) = 0 (0x0)
41861: 0.008692725 0.000011454 fcntl(5,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.008794414 0.000016483 bind(5,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.008956725 0.000016483 bind(5,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.009117081 0.000016762 bind(5,{ AF_INET 127.0.0.1:5022 },16) = 0 (0x0)
41861: 0.009184128 0.000013130 listen(0x5,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.009269055 0.000019276 socket(PF_INET,SOCK_STREAM,0) = 6 (0x6)
41861: 0.009348674 0.000011733 setsockopt(0x6,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.009422706 0.000011454 fcntl(6,F_GETFD,) = 0 (0x0)
41861: 0.009503722 0.000011454 fcntl(6,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.009616865 0.000016483 bind(6,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.009778617 0.000016482 bind(6,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.009939252 0.000016203 bind(6,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.010118605 0.000016762 bind(6,{ AF_INET 127.0.0.1:5023 },16) = 0 (0x0)
41861: 0.010185932 0.000013131 listen(0x6,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.010271417 0.000019276 socket(PF_INET,SOCK_STREAM,0) = 7 (0x7)
41861: 0.010351595 0.000012013 setsockopt(0x7,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.010425906 0.000011454 fcntl(7,F_GETFD,) = 0 (0x0)
41861: 0.010507481 0.000011454 fcntl(7,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.010609728 0.000016203 bind(7,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.010770922 0.000016203 bind(7,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.010931557 0.000016483 bind(7,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.011091633 0.000016482 bind(7,{ AF_INET 127.0.0.1:5023 },16) ERR#48 'Address already in use'
41861: 0.011252268 0.000016762 bind(7,{ AF_INET 127.0.0.1:5024 },16) = 0 (0x0)
41861: 0.011319037 0.000013131 listen(0x7,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.011404243 0.000018997 socket(PF_INET,SOCK_STREAM,0) = 8 (0x8)
41861: 0.011484141 0.000012012 setsockopt(0x8,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.011558452 0.000011733 fcntl(8,F_GETFD,) = 0 (0x0)
41861: 0.011640027 0.000011454 fcntl(8,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.011742275 0.000016203 bind(8,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.011903748 0.000016483 bind(8,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.012063824 0.000016203 bind(8,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.012224179 0.000016482 bind(8,{ AF_INET 127.0.0.1:5023 },16) ERR#48 'Address already in use'
41861: 0.012384256 0.000016483 bind(8,{ AF_INET 127.0.0.1:5024 },16) ERR#48 'Address already in use'
41861: 0.012544611 0.000016482 bind(8,{ AF_INET 127.0.0.1:5025 },16) = 0 (0x0)
41861: 0.012611659 0.000013130 listen(0x8,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.012697424 0.000019556 socket(PF_INET,SOCK_STREAM,0) = 9 (0x9)
41861: 0.012777322 0.000012012 setsockopt(0x9,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.012851634 0.000011734 fcntl(9,F_GETFD,) = 0 (0x0)
41861: 0.012932929 0.000011454 fcntl(9,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.013034897 0.000015924 bind(9,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.013196370 0.000016203 bind(9,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.013356726 0.000016483 bind(9,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.013517081 0.000016482 bind(9,{ AF_INET 127.0.0.1:5023 },16) ERR#48 'Address already in use'
41861: 0.013677157 0.000016482 bind(9,{ AF_INET 127.0.0.1:5024 },16) ERR#48 'Address already in use'
41861: 0.013837234 0.000016483 bind(9,{ AF_INET 127.0.0.1:5025 },16) ERR#48 'Address already in use'
41861: 0.013997310 0.000016483 bind(9,{ AF_INET 127.0.0.1:5026 },16) = 0 (0x0)
41861: 0.014064078 0.000013130 listen(0x9,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.014149284 0.000018996 socket(PF_INET,SOCK_STREAM,0) = 10 (0xa)
41861: 0.014229183 0.000011733 setsockopt(0xa,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.014303773 0.000011733 fcntl(10,F_GETFD,) = 0 (0x0)
41861: 0.014385348 0.000011733 fcntl(10,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.014487316 0.000016203 bind(10,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.014648510 0.000016203 bind(10,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.014808865 0.000016482 bind(10,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.014968942 0.000016483 bind(10,{ AF_INET 127.0.0.1:5023 },16) ERR#48 'Address already in use'
41861: 0.015129018 0.000016483 bind(10,{ AF_INET 127.0.0.1:5024 },16) ERR#48 'Address already in use'
41861: 0.015297475 0.000016483 bind(10,{ AF_INET 127.0.0.1:5025 },16) ERR#48 'Address already in use'
41861: 0.015457831 0.000016483 bind(10,{ AF_INET 127.0.0.1:5026 },16) ERR#48 'Address already in use'
41861: 0.015618186 0.000016482 bind(10,{ AF_INET 127.0.0.1:5027 },16) = 0 (0x0)
41861: 0.015685234 0.000013130 listen(0xa,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.015770999 0.000019556 socket(PF_INET,SOCK_STREAM,0) = 11 (0xb)
41861: 0.015851177 0.000012013 setsockopt(0xb,0x6,0x1,0x7fffffffe7cc,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.015925767 0.000011733 fcntl(11,F_GETFD,) = 0 (0x0)
41861: 0.016007342 0.000011733 fcntl(11,F_SETFD,FD_CLOEXEC) = 0 (0x0)
41861: 0.016109310 0.000016203 bind(11,{ AF_INET 127.0.0.1:5020 },16) ERR#48 'Address already in use'
41861: 0.016270783 0.000016482 bind(11,{ AF_INET 127.0.0.1:5021 },16) ERR#48 'Address already in use'
41861: 0.016431139 0.000016204 bind(11,{ AF_INET 127.0.0.1:5022 },16) ERR#48 'Address already in use'
41861: 0.016591494 0.000016482 bind(11,{ AF_INET 127.0.0.1:5023 },16) ERR#48 'Address already in use'
41861: 0.016751850 0.000016483 bind(11,{ AF_INET 127.0.0.1:5024 },16) ERR#48 'Address already in use'
41861: 0.016911926 0.000016482 bind(11,{ AF_INET 127.0.0.1:5025 },16) ERR#48 'Address already in use'
41861: 0.017072282 0.000016483 bind(11,{ AF_INET 127.0.0.1:5026 },16) ERR#48 'Address already in use'
41861: 0.017232637 0.000016482 bind(11,{ AF_INET 127.0.0.1:5027 },16) ERR#48 'Address already in use'
41861: 0.017392993 0.000016762 bind(11,{ AF_INET 127.0.0.1:5028 },16) = 0 (0x0)
41861: 0.017459761 0.000012851 listen(0xb,0x5,0x10,0x30,0x4,0x7fffffffe57c) = 0 (0x0)
41861: 0.017559215 0.000015086 sigaction(SIGCHLD,{ 0x401f70 SA_NOCLDSTOP ss_t },0x0) = 0 (0x0)
41861: 0.017650847 0.000013968 sigprocmask(SIG_SETMASK,0x0,0x0) = 0 (0x0)
41861: 0.017851710 0.000086324 write(1,"dpid started\n",13) = 13 (0xd)

The particular example of the problem is in src/IO/dpi.c:

463    if (Dpi_read_comm_keys(&dpid_port) != -1) {
464       sin.sin_port = htons(dpid_port);
465       if ((sock_fd = Dpi_make_socket_fd()) == -1) {
466          MSG("Dpi_check_dpid_ids: sock_fd=%d %s\n", sock_fd, dStrerror(errno));
467       } else if (connect(sock_fd, (struct sockaddr *)&sin, sin_sz) == -1) {
468          MSG("Dpi_check_dpid_ids: %s\n", dStrerror(errno));

Can we change this behavior, here and elsewhere, before the release, to avoid long delays for those who are using blackhole(4) or the equivalent for some added measure of security?

Regards,
b.
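For illustration only, here is the kind of bounded connect I have in mind (a sketch, not dillo code: connect_with_timeout() is a hypothetical helper, and the 3-second cap is an arbitrary choice):

   #include <errno.h>
   #include <fcntl.h>
   #include <sys/select.h>
   #include <sys/socket.h>

   /* Try to connect with a bounded timeout instead of waiting out the
    * kernel's full TCP timeout (~75s here, thanks to blackhole(4)). */
   static int connect_with_timeout(int sock_fd, struct sockaddr *sa,
                                   socklen_t sa_len, int timeout_sec)
   {
      int err = 0, flags = fcntl(sock_fd, F_GETFL, 0);
      socklen_t len = sizeof(err);
      fd_set wfds;
      struct timeval tv;

      fcntl(sock_fd, F_SETFL, flags | O_NONBLOCK);
      if (connect(sock_fd, sa, sa_len) == -1 && errno != EINPROGRESS)
         return -1;                     /* immediate failure, e.g. ECONNREFUSED */

      FD_ZERO(&wfds);
      FD_SET(sock_fd, &wfds);
      tv.tv_sec = timeout_sec;
      tv.tv_usec = 0;
      if (select(sock_fd + 1, NULL, &wfds, NULL, &tv) <= 0) {
         errno = ETIMEDOUT;             /* nothing answered: blackholed or down */
         return -1;
      }
      /* writable: check whether the connect actually succeeded */
      if (getsockopt(sock_fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1 || err) {
         if (err)
            errno = err;
         return -1;
      }
      fcntl(sock_fd, F_SETFL, flags);   /* back to blocking mode */
      return 0;
   }

Dpi_check_dpid_ids() could then call connect_with_timeout(sock_fd, (struct sockaddr *)&sin, sin_sz, 3) and treat ETIMEDOUT like ECONNREFUSED, i.e. as "no dpid running yet".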
On Sat, Jan 30, 2010 at 04:13:29AM -0800, bf wrote:
I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:
Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started
[...]
I understand. Although please notice that it looks like expected behaviour: the check for a responsive dpid socket is delayed just as with any port scan.

Simple solution: try something like this script to start dillo:

#!/bin/sh
ST="`ps -ef|grep "[0-9] dpid"|wc -l`"
if [ "$ST" = "1" ]; then
   echo "Dpid is running OK!"
   dillo
else
   echo "Dpid is NOT running"
   dpid &
   dillo
fi

--
Cheers
Jorge.-
Jorge wrote:
On Sat, Jan 30, 2010 at 04:13:29AM -0800, bf wrote:
I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:
Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started
[...]
I understand. Although please notice that it looks like expected behaviour: the check for a responsive dpid socket is delayed just as with any port scan.
Should this issue be listed under *BSD in README? I'd think so.
On Sun, Jan 31, 2010 at 03:00:45AM +0000, corvid wrote:
Jorge wrote:
On Sat, Jan 30, 2010 at 04:13:29AM -0800, bf wrote:
I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:
Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started
[...]
I understand. Although please notice that it looks like expected behaviour: the check for a responsive dpid socket is delayed just as with any port scan.
Should this issue be listed under *BSD in README? I'd think so.
Yes, but let's see how it works. Another required measure would be to increase dpid's time to live.

--
Cheers
Jorge.-
--- On Sat, 1/30/10, Jorge Arellano Cid <jcid@dillo.org> wrote:
From: Jorge Arellano Cid <jcid@dillo.org>
Subject: Re: [Dillo-dev] Starting dpis on FreeBSD: blackhole(4) timeouts
To: dillo-dev@dillo.org
Date: Saturday, January 30, 2010, 9:08 AM

On Sat, Jan 30, 2010 at 04:13:29AM -0800, bf wrote:
I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:
Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started
[...]
I understand. Although please notice that it looks like expected behaviour: the check for a responsive dpid socket is delayed just as with any port scan.
Simple solution: try something like this script to start dillo:
#!/bin/sh
ST="`ps -ef|grep "[0-9] dpid"|wc -l`"
if [ "$ST" = "1" ]; then
   echo "Dpid is running OK!"
   dillo
else
   echo "Dpid is NOT running"
   dpid &
   dillo
fi
Thanks, Jorge, for taking the time to consider this problem and to propose a workaround. But, as I think you have noticed from a remark in your second message, this only deals with the initial timeouts, and not those arising after a period of quiescence.

Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected". I was hoping for a solution that did not require me to either disable blackhole(4) or sit on my hands during periodic timeouts. Does anyone have any ideas?

Regards,
b.
On Sat, Jan 30, 2010 at 08:32:49PM -0800, bf wrote:
--- On Sat, 1/30/10, Jorge Arellano Cid <jcid@dillo.org> wrote:
From: Jorge Arellano Cid <jcid@dillo.org>
Subject: Re: [Dillo-dev] Starting dpis on FreeBSD: blackhole(4) timeouts
To: dillo-dev@dillo.org
Date: Saturday, January 30, 2010, 9:08 AM

On Sat, Jan 30, 2010 at 04:13:29AM -0800, bf wrote:
I'm using dillo on FreeBSD 9-CURRENT, and ever since the dpi framework was converted to use Internet domain sockets, I've been plagued by long delays when starting dpis because I'm using blackhole(4) by setting net.inet.tcp.blackhole to be nonzero. An example of the problem:
Nav_open_url: new url='dpi:/bm/'
Dpi_check_dpid_ids: Operation timed out
[dpid]: a_Misc_mksecret: e1ab7e05
Dpi_blocking_start_dpid: try 1
dpid started
[...]
I understand. Although please notice that it looks like expected behaviour: the check for a responsive dpid socket is delayed just as with any port scan.
Simple solution: try something like this script to start dillo:
#!/bin/sh
ST="`ps -ef|grep "[0-9] dpid"|wc -l`"
if [ "$ST" = "1" ]; then
   echo "Dpid is running OK!"
   dillo
else
   echo "Dpid is NOT running"
   dpid &
   dillo
fi
Thanks, Jorge, for taking the time to consider this problem and to propose a workaround. But, as I think you have noticed from a remark in your second message, this only deals with the initial timeouts, and not those arising after a period of quiescence.
Yes, I notice...
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected". I was hoping for a solution that did not require me to either disable blackhole(4), or sit on my hands during periodic timeouts. Does anyone have any ideas?
and also notice an answer to this question there.

Back to the problem: You can increase dpid's time to live:

dpid/main.c:208
- int dpid_idle_timeout = 60 * 60; /* default, in seconds */
+ int dpid_idle_timeout = 60 * 60 * 24; /* default, in seconds */

Or simply let it linger forever:

dpid/main.c:207
- int i, n = 0, open_max;
+ int i, n = 0, linger = 1, open_max;

dpid/main.c:295
- if (server_is_running("downloads"))
+ if (linger || server_is_running("downloads"))

Please test both and send your feedback. Once we have a usable workaround, we may think of a good patch (e.g. a dillorc option).

Last but not least, IMHO, using verbs like "plagued" and expressions like "or sit on my hands" doesn't help in dillo-dev.

--
Cheers
Jorge.-
--- On Sun, 1/31/10, Jorge Arellano Cid <jcid@dillo.org> wrote: ...
Simple solution: try something like this script to start dillo:
#!/bin/sh
ST="`ps -ef|grep "[0-9] dpid"|wc -l`"
Since ps(1) on *BSD differs from POSIX, I used instead:

   ps -ax -ocomm | grep -ce "dpid"
if [ "$ST" = "1" ]; then
   echo "Dpid is running OK!"
   dillo
else
   echo "Dpid is NOT running"
   dpid &
   dillo
fi
...
You can increase dpid's time to live:
dpid/main.c:208
- int dpid_idle_timeout = 60 * 60; /* default, in seconds */
+ int dpid_idle_timeout = 60 * 60 * 24; /* default, in seconds */
Or simply let it linger forever:
dpid/main.c:207
- int i, n = 0, open_max;
+ int i, n = 0, linger = 1, open_max;

dpid/main.c:295
- if (server_is_running("downloads"))
+ if (linger || server_is_running("downloads"))
Please test both and send your feedback. Once we have a usable workaround, we may think of a good patch (e.g. a dillorc option).
I have tested both methods, and both seem to prevent the timeouts (or at least the great majority of them, in the case of the patch that just increases the ttl), when used with something like the script above. Such a script is not quite as convenient as simply starting dillo, but probably offers an adequate workaround for the initial part of this problem, for the moment.
Last but not least, IMHO, using verbs like "plagued" and expressions like "or sit on my hands" doesn't help in dillo-dev.
Perhaps something is lost in translation, or we're just proving that what one person considers colorful or descriptive strikes another as too strongly worded. In any event, my language was not meant to be an attack upon anyone, and no one should construe it as such.

On the whole, the new version looks good. Congratulations and thanks to those who put time and energy into improving dillo since the last release.

b.
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.

--
Cheers
Jorge.-
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release? Cheers, Johannes
On Tue, Feb 02, 2010 at 09:13:14PM +0100, Johannes Hofmann wrote:
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release?
Sorry for the delayed answer Johannes (I was out of the city).

IMO, UDS before the release is too risky. It will need lots of testing time.

I believe we may choose the solution bf has found to work better as a workaround, make a patch with it and release. A few times in the past I rushed a "better" solution into an rc, only to learn not to do it! :)

With regard to UDS, it can be scheduled into the repo shortly after the release; thereafter users can help us polish it from there.

--
Cheers
Jorge.-
Jorge wrote:
With regard to UDS, it can be scheduled into the repo shortly after the release; thereafter users can help us polish it from there.
Do you expect that you will have time to do it yourself? Would it make sense to make a 2.2.1 with just the uds dpi changes and nothing else that isn't rather trivial?
On Sun, Feb 07, 2010 at 05:04:18PM +0000, corvid wrote:
Jorge wrote:
With regard to UDS, it can be scheduled into the repo shortly after the release; thereafter users can help us polish it from there.
Do you expect that you will have time to do it yourself?
Yes.
Would it make sense to make a 2.2.1 with just the uds dpi changes and nothing else that isn't rather trivial?
I guess it'll be UDS plus other things, as the testing period will need to extend over a few weeks.

--
Cheers
Jorge.-
On Sun, Feb 07, 2010 at 08:59:44AM -0300, Jorge Arellano Cid wrote:
On Tue, Feb 02, 2010 at 09:13:14PM +0100, Johannes Hofmann wrote:
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release?
Sorry for the delayed answer Johannes (I was out of the city).
IMO, UDS before the release is too risky. It will need lots of testing time.
Just came to the same conclusion ...
I believe we may choose the solution bf has found to work better as a workaround, make a patch with it and release. A few times in the past I rushed a "better" solution into an rc, only to learn not to do it! :)
What patch exactly would you propose? I see

* increasing dpid_idle_timeout
* adding a linger variable
* and a shell wrapper to start dillo
With regard to UDS, it can be scheduled into the repo shortly after the release; thereafter users can help us polish it from there.
ok. Cheers, Johannes
On Sun, Feb 07, 2010 at 07:15:59PM +0100, Johannes Hofmann wrote:
On Sun, Feb 07, 2010 at 08:59:44AM -0300, Jorge Arellano Cid wrote:
On Tue, Feb 02, 2010 at 09:13:14PM +0100, Johannes Hofmann wrote:
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
Because blackhole(4) is a common security measure, and because this problem did not exist under the previous dpi framework, it must be considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release?
Sorry for the delayed answer Johannes (I was out of the city).
IMO, UDS before the release is too risky. It will need lots of testing time.
Just came to the same conclusion ...
I believe we may choose the solution bf has found to work better as a workaround, make a patch with it and release. A few times in the past I rushed a "better" solution into an rc, only to learn not to do it! :)
What patch exactly would you propose? I see
* increasing dpid_idle_timeout
* adding a linger variable
* and a shell wrapper to start dillo
A dpid_linger dillorc variable would save the user the trouble of re-compiling, but it would add the burden of prefs parsing in dpid. So, I'd say a sentinel solution could work: e.g. "touch ~/.dillo/DPID_LINGER" and dpid acting accordingly (a sketch of that check follows below).

Also a mention of the shell wrapper script in the README, with a link to a working example on our site, would do the rest. An entry in the FAQ could help too.

--
Cheers
Jorge.-
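A minimal sketch of the dpid side of that sentinel idea (linger_requested() is a hypothetical helper, not code from the dpid tree; the file name is the one from the example above):

   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/stat.h>

   /* Return nonzero if the user created ~/.dillo/DPID_LINGER,
    * i.e. dpid should skip its idle-timeout exit. */
   static int linger_requested(void)
   {
      char path[1024];
      struct stat st;
      const char *home = getenv("HOME");

      if (!home)
         return 0;
      snprintf(path, sizeof(path), "%s/.dillo/DPID_LINGER", home);
      return (stat(path, &st) == 0);
   }

The idle-timeout check in dpid/main.c could then become "if (linger_requested() || server_is_running("downloads"))", in the spirit of the linger patch above.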
On Mon, Feb 08, 2010 at 02:16:08PM -0300, Jorge Arellano Cid wrote:
On Sun, Feb 07, 2010 at 07:15:59PM +0100, Johannes Hofmann wrote:
On Sun, Feb 07, 2010 at 08:59:44AM -0300, Jorge Arellano Cid wrote:
On Tue, Feb 02, 2010 at 09:13:14PM +0100, Johannes Hofmann wrote:
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
bf wrote:
> Because blackhole(4) is a common security measure, and because this
> problem did not exist under the previous dpi framework, it must be
> considered a regression, even if, in hindsight, it is to be "expected".
I finally looked back to refresh my memory on why we switched to inet sockets: Minix. If we do return to Unix sockets eventually, what is the thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release?
Sorry for the delayed answer Johannes (I was out of the city).
IMO, UDS before the release is too risky. It will need lots of testing time.
Just came to the same conclusion ...
I believe we may choose the solution bf has found to work better as a workaround, make a patch with it and release. A few times in the past I rushed a "better" solution into an rc, only to learn not to do it! :)
What patch exactly would you propose? I see
* increasing dpid_idle_timeout
* adding a linger variable
* and a shell wrapper to start dillo
A dpid_linger dillorc variable would save the user the trouble of re-compiling, but it would add the burden of prefs parsing in dpid. So, I'd say a sentinel solution could work:
e.g. "touch ~/.dillo/DPID_LINGER"
and dpid acting accordingly.
Also a mention of the shell wrapper script in the README, with a link to a working example on our site, would do the rest. An entry in the FAQ could help too.
Jorge, as you might have seen already, I committed a different fix after reproducing the issue here on my system. Please check it in the repo:

http://hg.dillo.org/dillo/rev/3a159d7e5098

Cheers,
Johannes
On Tue, Feb 09, 2010 at 06:24:48PM +0100, Johannes Hofmann wrote:
On Mon, Feb 08, 2010 at 02:16:08PM -0300, Jorge Arellano Cid wrote:
On Sun, Feb 07, 2010 at 07:15:59PM +0100, Johannes Hofmann wrote:
On Sun, Feb 07, 2010 at 08:59:44AM -0300, Jorge Arellano Cid wrote:
On Tue, Feb 02, 2010 at 09:13:14PM +0100, Johannes Hofmann wrote:
On Sun, Jan 31, 2010 at 04:44:57PM -0300, Jorge Arellano Cid wrote:
On Sun, Jan 31, 2010 at 07:02:54PM +0000, corvid wrote:
> bf wrote:
> > Because blackhole(4) is a common security measure, and because this
> > problem did not exist under the previous dpi framework, it must be
> > considered a regression, even if, in hindsight, it is to be "expected".
>
> I finally looked back to refresh my memory on why we switched to inet
> sockets: Minix. If we do return to Unix sockets eventually, what is the
> thinking on what we would do for Minix?
At some point in time Minix 3 will have to implement UDS. In the meanwhile they can use the version with inet sockets.
What shall we do about this TCP blackhole issue? Should we try to switch back to UDS before release?
Sorry for the delayed answer Johannes (I was out of the city).
IMO, UDS before the release is too risky. It will need lots of testing time.
Just came to the same conclusion ...
I believe we may choose the solution bf has found to work better as a workaround, make a patch with it and release. A few times in the past I rushed a "better" solution into an rc, only to learn not to do it! :)
What patch exactly would you propose? I see
* increasing dpid_idle_timeout
* adding a linger variable
* and a shell wrapper to start dillo
A dpid_linger dillorc variable would save the user the trouble of re-compiling, but it would add the burden of prefs parsing in dpid. So, I'd say a sentinel solution could work:
e.g. "touch ~/.dillo/DPID_LINGER"
and dpid acting accordingly.
Also a mention of the shell wrapper script in the README, with a link to a working example in our site would do the rest. An entry in the FAQ could help too.
Jorge, as you might have seen already I committed a different fix after reproducing the issue here on my system.
Yes. When back home, I started answering email, then saw the patch ;-)
Please check it in the repo: http://hg.dillo.org/dillo/rev/3a159d7e5098
+1

I think the committed solution is much better. My only concern with this type of solution is that it fails when the dpid gets hung (and then "kill -9" doesn't help). Maybe adding an unlink() call in dpidc's stop part is worth the effort. That way "dpidc stop" would always clear the sentinel.

... but checking whether dpid is still alive after dpibye, killing it, and then going after each dpi with a kill signal looks like overkill (pun intended!) to me. ;)
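A rough sketch of that unlink() idea (clear_sentinel() is a hypothetical helper; which file actually serves as the committed fix's sentinel is an assumption here, with dpid_comm_keys being the candidate visible in the truss output at the top of the thread):

   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>

   /* Called from dpidc's "stop" handling after sending dpibye, so a hung
    * dpid cannot leave a stale sentinel file behind. */
   static void clear_sentinel(void)
   {
      char path[1024];
      const char *home = getenv("HOME");

      if (home) {
         snprintf(path, sizeof(path), "%s/.dillo/dpid_comm_keys", home);
         unlink(path);   /* ignore errors: the file may already be gone */
      }
   }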
* \todo what is the most portable way to ignore the signo argument without generating a warning? Is "int signo __unused" gcc specific?
a_Http_ccc() uses this nice trick:

   (void)Data2; /* suppress unused parameter warning */

--
Cheers
Jorge.-
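For reference, the same trick in a compilable signal-handler example (the handler name and its empty body are hypothetical, just to show the cast):

   #include <signal.h>
   #include <unistd.h>

   /* The (void) cast is plain ISO C, so no gcc-specific __unused
    * attribute is needed to silence the unused-parameter warning. */
   static void sigchld_handler(int signo)
   {
      (void)signo;   /* suppress unused parameter warning */
   }

   int main(void)
   {
      signal(SIGCHLD, sigchld_handler);
      pause();   /* block until a signal arrives */
      return 0;
   }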