curl-tracker mailing list Archives

[ curl-Bugs-2824938 ] SIGSEGV in Curl_pgrsTime

From: SourceForge.net <noreply_at_sourceforge.net>
Date: Mon, 24 Aug 2009 11:39:56 +0000

Bugs item #2824938, was opened at 2009-07-21 20:47
Message generated for change (Settings changed) made by bagder
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=2824938&group_id=976

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: libcurl
Group: crash
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: johansen (kjohansen)
Assigned to: Daniel Stenberg (bagder)
Summary: SIGSEGV in Curl_pgrsTime

Initial Comment:
This is on OpenSolaris build 118, using libcurl 7.19.4, pycurl 7.19.0 with patches, and Python 2.4.4

> ::status
debugging core file of pkg (32-bit) from mcescher
file: /usr/bin/i86/isapython2.4
initial argv: /usr/bin/python2.4 /usr/bin/pkg install --no-index redistributable
threading model: native threads
status: process terminated by SIGSEGV (Segmentation Fault), addr=40c

This is the stack from the only LWP running in the process:

> $C
08045f24 libcurl.so.3.0.0`Curl_pgrsTime+0xb3(10, 4, 853acd8, 0)
08045f74 libcurl.so.3.0.0`checkPendPipeline+0xa4(25b384b0, 853ac90, fecc2a00, fd5546fe)
08045fc4 libcurl.so.3.0.0`multi_runsingle+0x18f(ac850f0, 25a2cf00, 8363f08, fd5553d6)
08046014 libcurl.so.3.0.0`curl_multi_perform+0x4c(ac850f0, 804603c)
08046048 pycurl.so`do_multi_perform+0xb3(852c9bc, 0, 831ff54, fd5b16a0)
08046098 libpython2.4.so.1.0`call_function+0x315(804610c, 0, da8b7165, 1)

The stack shows that we died in Curl_pgrsTime().

> libcurl.so.3.0.0`Curl_pgrsTime+0xb3::dis
libcurl.so.3.0.0`Curl_pgrsTime+0x92: addl $0x10,%esp
libcurl.so.3.0.0`Curl_pgrsTime+0x95: fstpl 0x3e0(%esi)
libcurl.so.3.0.0`Curl_pgrsTime+0x9b: fwait
libcurl.so.3.0.0`Curl_pgrsTime+0x9c: jmp +0x102 <libcurl.so.3.0.0`Curl_pgrsTime+0x1a3>
libcurl.so.3.0.0`Curl_pgrsTime+0xa1: leal -0x28(%ebp),%eax
libcurl.so.3.0.0`Curl_pgrsTime+0xa4: subl $0xc,%esp
libcurl.so.3.0.0`Curl_pgrsTime+0xa7: pushl %eax
libcurl.so.3.0.0`Curl_pgrsTime+0xa8: call -0x3ca5 <PLT=libcurl.so.3.0.0`curlx_tvnow>
libcurl.so.3.0.0`Curl_pgrsTime+0xad: addl $0xc,%esp
libcurl.so.3.0.0`Curl_pgrsTime+0xb0: movl 0x8(%ebp),%esi
libcurl.so.3.0.0`Curl_pgrsTime+0xb3: pushl 0x3fc(%esi)

Disassembly shows that we're dying just after a call to curlx_tvnow, a leaf function that Curl_pgrsTime invokes from this line:

        Curl_tvdiff_secs(Curl_tvnow(), data->progress.t_startsingle);

> ::regs
%fs = 0x0000 %esi = 0x00000010

%esi = 0x10

0x3fc + 0x10 is 0x40c, which corresponds to our fault address. That address is unmapped, which is obviously the cause of the segfault.

%ebp = 0x08045f24

> 0x08045f24+0x8/X
0x8045f2c: 10

So far, this makes sense. We're moving timevalues from the arguments and return values of one function to another, as per this nested invocation.

If the Curl_pgrsTime call frame is to be believed, we were called with 0x10 as the address for the SessionHandle pointer. The second argument is 4. This corresponds to a timerid of TIMER_PRETRANSFER, which is verified by inspection of checkPendPipeline(). It only makes one call to Curl_pgrsTime and that's with TIMER_PRETRANSFER as an argument.

Working backwards a bit, it's possible to see that the second argument to multi_runsingle() is a pointer to struct Curl_one_easy. The stack says that our pointer to Curl_one_easy is 0x25a2cf00. multi_runsingle calls checkPendPipeline() with a pointer to connectdata at offset 0xc in the Curl_one_easy structure.

Just for the sake of sanity, it's possible to work backwards even further by looking at the Curl_multi that was supplied in curl_multi_perform. It contains a list of Curl_one_easy structs that are linked together and traversed like this:

  easy=multi->easy.next;
  while(easy != &multi->easy) {
    CURLMcode result;

    result = multi_runsingle(multi, easy);
    if(result)
      returncode = result;

    easy = easy->next; /* operate on next handle */
  }

In both curl_multi_perform() and multi_runsingle() the first argument is a pointer to a Curl_multi struct. Walking that struct, it has the following Curl_one_easy structs:
        
0x256f3208, 0x1e987c58, 0x2569de48, 0x25a2cf00, 0x2672e0d8, 0xbced6b0, 0xb3dfae8, 0x1dc4aa70, 0xc1082a0, 0xc106c78, and 0xac850f4.

The last points back to 0x256f3208, completing the loop. We seem to have gotten hung up processing 0x25a2cf00, at least according to the call stack of
multi_runsingle().

Based upon this information 0x25a2cf00 + 0xc should be the connectdata:

> 0x25a2cf00 + 0xc/K
0x25a2cf0c: 25b384b0

The connectdata has a SessionHandle pointer as its first member:

> 25b384b0/K
0x25b384b0: ad0cc48

So this should be the session handle where we ran into trouble.

The session handle keeps a magic number in its final member. This allows us to identify that it's actually a SessionHandle:

> ad0cc48+0x85ec/K
0xad15234: 0

In this case the magic number is zero, which means that the handle has already been closed, or isn't a SessionHandle object. However, the
SessionHandle keeps a backpointer to the Multi object at offset 0x8:

> ad0cc48+0x8/K
0xad0cc50: ac850f0

This points back to the correct multi handle, which means we should be looking at a SessionHandle object here.

Getting back to the task at hand, we need to fish out the easy_conn from our Curl_one_easy in order to double-check the sanity of our stack frame:
    
> 0x25a2cf00+0xc/K
0x25a2cf0c: 25b384b0
    
This seems to match what we saw before.

checkPendPipeline() looks at the head of the pend_pipe list and moves each pending object to the send_pipe, after calling Curl_pgrsTime() to
perform some accounting.

Unfortunately for us, the connectdata's pipes appear to be NULL. From code inspection, it's not clear how this might happen. The recv_pipe, send_pipe, and pend_pipe are all pointers that live near 0x25b38640 (0x25b384b0 + 0x190, or specifically 0x194 for send_pipe and 0x19c for pend_pipe). Dumping out the structure shows a large range of nothing:

            0 1 2 3 \/ 5 6 7 8 9 a b c d e f
25b38630: 00000000 0a644660 00000000 00000000
                     send recv pend
25b38640: 00000000 00000000 00000000 00000000
25b38650: 00000000 00000000 00000000 00000000
25b38660: 00000000 00000000 00000000 00000000
25b38670: 00000000 00000000 00000000 00000000
25b38680: 00000000 00000000 00000000 0853acd8

So, we can't figure it out by working backwards. We'll have to go fishing in the stack to see if we can find the object that caused all of
the trouble.

> 0x08045f24,64::dump -qe
8045f24: 08045f74 fd555ff8 00000010 00000004
8045f34: 0853acd8 00000000 0021ea91 fee10000
8045f44: 0853ac88 00000000 00000001 25b384b0
8045f54: fef1de6f 27258b8c 00000001 080a659c
8045f64: 00000000 25a2cf00 25a2cf00 fd5774fc
8045f74: 08045fc4 fd55487f 25b384b0 0853ac90
8045f84: fecc2a00 fd5546fe 08045fe0 fed7fcfd

Digging around in the stack turned out to be fruitful, as unlikely as that might be. At 0x08045f44 there's a pointer to 0x0853ac88, which when dereferenced returns 0x10. 0x10 has shown up in every dump that we've seen. It's not yet clear why this is always the faulty first argument to Curl_pgrsTime().

pmap says the following about the heap:

08061000 512936K rwx-- [ heap ]

This means that every address from 0x08061000 to 0x2754b000 is within the heap, and that 0x0853ac88 was therefore allocated from the heap rather early on in the process.

There was some suggestion that this might be related to the Curl_getoff_all_pipelines() bug. If so, I could use some additional information to determine whether I've been affected by that issue. In this crash, it looks like the connectdata already had Curl_done() called on it and had its pipeline lists free'd. Since the code hits checkPendPipeline() before Curl_done() in the multi_runsingle CURLM_STATE_DONE case, it's not clear to me that this would help. Further information about how to diagnose the getoff_all_pipelines bug would be appreciated.

I'm having a difficult time coming up with a hypothesis for how the library ended up in the state that it did at the time of the crash. The bits in the connectdata show close and done as True, which means that Curl_done() had already been called on the handle and that it also needed to be closed. The reuse and retry bits were false. If there is something else that I should check, please let me know.

----------------------------------------------------------------------

Comment By: Daniel Stenberg (bagder)
Date: 2009-08-24 13:39

Message:
Thanks for the report, this problem is now fixed in CVS!

----------------------------------------------------------------------

Comment By: johansen (kjohansen)
Date: 2009-07-22 20:25

Message:
A co-worker with a machine that is from the twilight zone seems able to
reproduce this problem every time he runs our application. I got him to
run against a version of libcurl with CTF, CURLDEBUG, and libumem.

Lucky for us, we got a coredump:

> ::status
file: /usr/bin/i86/isapython2.4
initial argv: /usr/bin/python2.4 /usr/bin/pkg install --no-index redistributable
threading model: native threads
status: process terminated by SIGSEGV (Segmentation Fault), addr=deadbeef

In this case, the segfault is on 0xdeadbeef, which means we got caught
trying to access memory that was already free'd. Very naughty.

The stack is different with libumem, presumably because it stops us from
carrying on with the bad behavior once it catches us doing things that
we shouldn't.

> $C
08046094 libcurl.so.3.0.0`Curl_removeHandleFromPipeline+0x16(cf3b00c, deadbeef, 2ec2298c, fd50b332)
08046144 libcurl.so.3.0.0`multi_runsingle+0xe5b(cb9524c, 2ec22b0c, cf79370, fd50c39a)
08046194 libcurl.so.3.0.0`curl_multi_perform+0x62(cb9524c, 80461bc)
080461c8 pycurl.so`do_multi_perform+0xb3(8805414, 0, 8454e94, fd5716a0)
08046218 libpython2.4.so.1.0`call_function+0x315(804628c, 0, da8b7165, 0)

The final stack frame shows the address of the SessionHandle, but not
the connectdata or easy handle that caused us the trouble. Fortunately,
the previous frame contains the Curl_one_easy:

> 2ec22b0c::print struct Curl_one_easy
{
    next = 0x2ec22e6c
    prev = 0x2ec226ec
    easy_handle = 0xcf3b00c
    easy_conn = 0x2f98c88c
    state = 0t14 (CURLM_STATE_DONE)
    result = 0 (CURLE_OK)
    msg = 0
    msg_num = 0
    sockets = [ 0, 0, 0, 0, 0 ]
    numsocks = 0
}

This looks legit. We're in CURLM_STATE_DONE and just called
removeHandleFromPipeline. Without memory debugging, we would have died
in the next call, which is to checkPendPipeline.

Looking at our struct connectdata, it has been freed just as we had
suspected.

> 0x2f98c88c::whatis
2f98c88c is 2f98c880+c, bufctl 2f9976e0 freed from umem_alloc_896

Fortunately for us, we still have the bufctl available:

> 2f9976e0::bufctl -v
            ADDR BUFADDR TIMESTAMP THREAD
                            CACHE LASTLOG CONTENTS
        2f9976e0 2f98c880 a14aa73b08306 1
                          80d7710 8073140 80905a0
                 libumem.so.1`umem_cache_free_debug+0x13c
                 libumem.so.1`umem_cache_free+0x43
                 libumem.so.1`umem_free+0xe2
                 libumem.so.1`process_free+0x55
                 libumem.so.1`free+0x1a
                 libcurl.so.3.0.0`curl_dofree+0x69
                 libcurl.so.3.0.0`conn_free+0x243
                 libcurl.so.3.0.0`Curl_disconnect+0x219
                 libcurl.so.3.0.0`Curl_done+0x214
                 libcurl.so.3.0.0`multi_runsingle+0x728
                 libcurl.so.3.0.0`curl_multi_perform+0x62

Disassembling multi_runsingle gives us some clues about what's going on
at the time that runsingle called Curl_done:

> multi_runsingle+0x728::dis
libcurl.so.3.0.0`multi_runsingle+0x709: call -0x2f503 <PLT=libcurl.so.3.0.0`Curl_posttransfer>
libcurl.so.3.0.0`multi_runsingle+0x70e: addl $0x10,%esp
libcurl.so.3.0.0`multi_runsingle+0x711: subl $0x4,%esp
libcurl.so.3.0.0`multi_runsingle+0x714: pushl $0x0
libcurl.so.3.0.0`multi_runsingle+0x716: movl 0xc(%ebp),%eax
libcurl.so.3.0.0`multi_runsingle+0x719: pushl 0x14(%eax)
libcurl.so.3.0.0`multi_runsingle+0x71c: movl 0xc(%ebp),%eax
libcurl.so.3.0.0`multi_runsingle+0x71f: addl $0xc,%eax
libcurl.so.3.0.0`multi_runsingle+0x722: pushl %eax
libcurl.so.3.0.0`multi_runsingle+0x723: call -0x2ff9d <PLT=libcurl.so.3.0.0`Curl_done>
libcurl.so.3.0.0`multi_runsingle+0x728: addl $0x10,%esp
libcurl.so.3.0.0`multi_runsingle+0x72b: movb $0x1,-0x25(%ebp)
libcurl.so.3.0.0`multi_runsingle+0x72f: jmp +0x797 <libcurl.so.3.0.0`multi_runsingle+0xecb>

This code looks like it calls Curl_posttransfer, immediately followed by
Curl_done. There are a few places in multi_runsingle that do this.
More investigation is required:

libcurl.so.3.0.0`multi_runsingle+0x630: movl 0xc(%ebp),%eax
libcurl.so.3.0.0`multi_runsingle+0x633: movl 0x8(%eax),%eax
libcurl.so.3.0.0`multi_runsingle+0x636: cmpb $0x0,0x29d(%eax)

This is a bit before runsingle+0x728, but it's comparing offset 0x29d
from a particular pointer. Looking at the plausible candidates yields
the following:

> ::print -at struct SessionHandle set.connect_only
29d _Bool set.connect_only

So this code is the CURLM_STATE_DO portion of multi_runsingle.

libcurl.so.3.0.0`multi_runsingle+0x683: call -0x2f4bd <PLT=libcurl.so.3.0.0`Curl_do>
libcurl.so.3.0.0`multi_runsingle+0x688: addl $0x10,%esp
libcurl.so.3.0.0`multi_runsingle+0x68b: movl %eax,0x14(%esi)
libcurl.so.3.0.0`multi_runsingle+0x68e: movl 0xc(%ebp),%eax
libcurl.so.3.0.0`multi_runsingle+0x691: cmpl $0x0,0x14(%eax)
libcurl.so.3.0.0`multi_runsingle+0x695: jne +0x69 <libcurl.so.3.0.0`multi_runsingle+0x700>

Here we are calling Curl_do, and then checking easy->result (offset
0x14) against 0. In this case we must have taken the jump, since we're
down near 0x728.

libcurl.so.3.0.0`multi_runsingle+0x700: subl $0xc,%esp
libcurl.so.3.0.0`multi_runsingle+0x703: movl 0xc(%ebp),%eax
libcurl.so.3.0.0`multi_runsingle+0x706: pushl 0x8(%eax)
libcurl.so.3.0.0`multi_runsingle+0x709: call -0x2f503 <PLT=libcurl.so.3.0.0`Curl_posttransfer>

Offset 0x700 is the preamble to the Curl_posttransfer call, which comes
immediately before our call to Curl_done, so this is where we hit the
error.

Unfortunately, finding the location of the error doesn't give us a lot
of insight into what went wrong here.

Looking at the handles helps, though:

One interesting discovery involves looking at all of the Curl_one_easy
structures that are associated with this multi handle:

cb95250 {
    cb95250 next = 0x2ec22cec
    cb95254 prev = 0x2ec2298c
    cb95258 easy_handle = 0
    cb9525c easy_conn = 0
    cb95260 state = 0 (CURLM_STATE_INIT)
    cb95264 result = 0 (CURLE_OK)
    cb95268 msg = 0
    cb9526c msg_num = 0
    cb95270 sockets = [ 0, 0, 0, 0, 0 ]
    cb95284 numsocks = 0
}
2ec22cec {
    2ec22cec next = 0x2ec229ec
    2ec22cf0 prev = 0xcb95250
    2ec22cf4 easy_handle = 0xceeb00c
    2ec22cf8 easy_conn = 0
    2ec22cfc state = 0t15 (CURLM_STATE_COMPLETED)
    2ec22d00 result = 0 (CURLE_OK)
    2ec22d04 msg = 0xceda69c
    2ec22d08 msg_num = 0x1
    2ec22d0c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22d20 numsocks = 0
}
2ec229ec {
    2ec229ec next = 0x2ec226ec
    2ec229f0 prev = 0x2ec22cec
    2ec229f4 easy_handle = 0xcf4600c
    2ec229f8 easy_conn = 0
    2ec229fc state = 0t15 (CURLM_STATE_COMPLETED)
    2ec22a00 result = 0 (CURLE_OK)
    2ec22a04 msg = 0xceda54c
    2ec22a08 msg_num = 0x1
    2ec22a0c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22a20 numsocks = 0
}
2ec226ec {
    2ec226ec next = 0x2ec22b0c
    2ec226f0 prev = 0x2ec229ec
    2ec226f4 easy_handle = 0xcf2900c
    2ec226f8 easy_conn = 0
    2ec226fc state = 0t15 (CURLM_STATE_COMPLETED)
    2ec22700 result = 0 (CURLE_OK)
    2ec22704 msg = 0xceda7b4
    2ec22708 msg_num = 0x1
    2ec2270c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22720 numsocks = 0
}
2ec22b0c {
    2ec22b0c next = 0x2ec22e6c
    2ec22b10 prev = 0x2ec226ec
    2ec22b14 easy_handle = 0xcf3b00c
    2ec22b18 easy_conn = 0x2f98c88c
    2ec22b1c state = 0t14 (CURLM_STATE_DONE)
    2ec22b20 result = 0 (CURLE_OK)
    2ec22b24 msg = 0
    2ec22b28 msg_num = 0
    2ec22b2c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22b40 numsocks = 0
}
2ec22e6c {
    2ec22e6c next = 0x2ec2286c
    2ec22e70 prev = 0x2ec22b0c
    2ec22e74 easy_handle = 0xcf0a00c
    2ec22e78 easy_conn = 0
    2ec22e7c state = 0t12 (CURLM_STATE_PERFORM)
    2ec22e80 result = 0 (CURLE_OK)
    2ec22e84 msg = 0
    2ec22e88 msg_num = 0
    2ec22e8c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22ea0 numsocks = 0
}
2ec2286c {
    2ec2286c next = 0x2ec22a4c
    2ec22870 prev = 0x2ec22e6c
    2ec22874 easy_handle = 0xceff00c
    2ec22878 easy_conn = 0
    2ec2287c state = 0t11 (CURLM_STATE_WAITPERFORM)
    2ec22880 result = 0 (CURLE_OK)
    2ec22884 msg = 0
    2ec22888 msg_num = 0
    2ec2288c sockets = [ 0, 0, 0, 0, 0 ]
    2ec228a0 numsocks = 0
}
2ec22a4c {
    2ec22a4c next = 0x2ec22bcc
    2ec22a50 prev = 0x2ec2286c
    2ec22a54 easy_handle = 0xcef600c
    2ec22a58 easy_conn = 0
    2ec22a5c state = 0t15 (CURLM_STATE_COMPLETED)
    2ec22a60 result = 0t55 (CURLE_SEND_ERROR)
    2ec22a64 msg = 0xceda824
    2ec22a68 msg_num = 0x1
    2ec22a6c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22a80 numsocks = 0
}
2ec22bcc {
    2ec22bcc next = 0x2ec22b6c
    2ec22bd0 prev = 0x2ec22a4c
    2ec22bd4 easy_handle = 0xcf3200c
    2ec22bd8 easy_conn = 0
    2ec22bdc state = 1 (CURLM_STATE_CONNECT)
    2ec22be0 result = 0 (CURLE_OK)
    2ec22be4 msg = 0
    2ec22be8 msg_num = 0
    2ec22bec sockets = [ 0, 0, 0, 0, 0 ]
    2ec22c00 numsocks = 0
}
2ec22b6c {
    2ec22b6c next = 0x2ec2298c
    2ec22b70 prev = 0x2ec22bcc
    2ec22b74 easy_handle = 0xcf1600c
    2ec22b78 easy_conn = 0
    2ec22b7c state = 1 (CURLM_STATE_CONNECT)
    2ec22b80 result = 0 (CURLE_OK)
    2ec22b84 msg = 0
    2ec22b88 msg_num = 0
    2ec22b8c sockets = [ 0, 0, 0, 0, 0 ]
    2ec22ba0 numsocks = 0
}
2ec2298c {
    2ec2298c next = 0xcb95250
    2ec22990 prev = 0x2ec22b6c
    2ec22994 easy_handle = 0xcf1f00c
    2ec22998 easy_conn = 0
    2ec2299c state = 1 (CURLM_STATE_CONNECT)
    2ec229a0 result = 0 (CURLE_OK)
    2ec229a4 msg = 0
    2ec229a8 msg_num = 0
    2ec229ac sockets = [ 0, 0, 0, 0, 0 ]
    2ec229c0 numsocks = 0
}

Notice that every Curl_one_easy except for 0x2ec22b0c has its easy_conn
set to NULL. It doesn't seem like a coincidence that 0x2ec22b0c is also
the handle that had the use-after-free problem. The multi handle has
pipelining enabled, so we should have called signalPipeClose() on all of
the outstanding sessions:

> 0xcb9524c::print struct Curl_multi pipelining_enabled
pipelining_enabled = 0x1
> 0xcf3b00c::print struct SessionHandle multi
multi = 0xcb9524c

Only the 2ec22e6c and 2ec2286c Curl_one_easy structures have
SessionHandles where pipe_broke is True. This means that
multi_runsingle hasn't yet been re-invoked on these handles. They're in
the PERFORM and WAITPERFORM states, respectively.

It looks as though one handle had a send error, went to kick the other
Curl_one_easies off its pipeline, but somehow failed to clear the
easy_conn for one last handle.

As far as I can tell, this is how we got into trouble:

The Curl_one_easy with address 0x2ec22b0c completes successfully in
CURLM_STATE_PERFORM; this results in its SessionHandle being removed
from the connectdata's receive pipeline in this piece of code:

[starting at multi.c:1318]

      else if(TRUE == done) {
        char *newurl;
        bool retry = Curl_retry_request(easy->easy_conn, &newurl);
        followtype follow=FOLLOW_NONE;

        /* call this even if the readwrite function returned error */
        Curl_posttransfer(easy->easy_handle);

        /* we're no longer receiving */
        Curl_removeHandleFromPipeline(easy->easy_handle,
                                      easy->easy_conn->recv_pipe);

        /* expire the new receiving pipeline head */
        if(easy->easy_conn->recv_pipe->head)
          Curl_expire(easy->easy_conn->recv_pipe->head->ptr, 1);

        /* Check if we can move pending requests to send pipe */
        checkPendPipeline(easy->easy_conn);

        /* When we follow redirects, or are set to retry the connection,
           we must go back to the CONNECT state */
        if(easy->easy_handle->req.newurl || retry) {
          if(!retry) {
            /* if the URL is a follow-location and not just a retried
               request then figure out the URL here */
            newurl = easy->easy_handle->req.newurl;
            easy->easy_handle->req.newurl = NULL;
            follow = FOLLOW_REDIR;
          }
          else
            follow = FOLLOW_RETRY;
          easy->result = Curl_done(&easy->easy_conn, CURLE_OK, FALSE);
          if(easy->result == CURLE_OK)
            easy->result = Curl_follow(easy->easy_handle, newurl, follow);
          if(CURLE_OK == easy->result) {
            multistate(easy, CURLM_STATE_CONNECT);
            result = CURLM_CALL_MULTI_PERFORM;
          }
          else
            /* Since we "took it", we are in charge of freeing this on
               failure */
            free(newurl);
        }
        else {
          /* after the transfer is done, go DONE */

          /* but first check to see if we got a location info even though
             we're not following redirects */
          if (easy->easy_handle->req.location) {
            newurl = easy->easy_handle->req.location;
            easy->easy_handle->req.location = NULL;
            easy->result = Curl_follow(easy->easy_handle, newurl,
                                       FOLLOW_FAKE);
            if (easy->result)
              free(newurl);
          }

          multistate(easy, CURLM_STATE_DONE);
          result = CURLM_CALL_MULTI_PERFORM;
        }
      }

Above we can see that after posttransfer, the removeHandleFromPipeline
call takes our easy_handle (SessionHandle) off of the recv_pipe. If
everything else goes according to plan, we transition to
CURLM_STATE_DONE and go on with life.

The list of Curl_one_easy structures shows that nobody else has an
easy_conn, and that 0x2ec22a4c encountered a SEND_ERROR. That send error
would have resulted in a Curl_done(), and we have the bufctl showing the
connect data being free'd. A call to Curl_disconnect() calls
signalPipeClose(), followed by Curl_multi_handlePipeBreak(), which
finally sets the easy_conn to NULL. If 0x2ec22b0c removed its
SessionHandle from the pipelines as part of moving from
CURLM_STATE_PERFORM to CURLM_STATE_DONE, and it was sharing a connection
with 0x2ec22a4c, then it seems plausible that an error on the shared
connection would cause the easy_conn to get free'd but never set to NULL
in 0x2ec22b0c, since its SessionHandle had already been removed from the
connection's recv pipeline.

This raises the following questions:

a. Does this hypothesis sound reasonable?
b. How do we contrive a synthetic test that can reproduce this problem?
Counting on a magic computer is not going to scale.
c. What's the appropriate solution if this is indeed the problem?

----------------------------------------------------------------------

Comment By: johansen (kjohansen)
Date: 2009-07-22 20:22

Message:
Until we have established a root-cause for the issue, it's going to be
tricky to figure out how to reproduce the problem. At the moment, it seems
to occur serendipitously on certain machines but not others.

----------------------------------------------------------------------

Comment By: Daniel Stenberg (bagder)
Date: 2009-07-22 00:38

Message:
Without a way to repeat the problem (with the latest version), I really
cannot spend a lot of time or energy on this problem.

----------------------------------------------------------------------

Received on 2009-08-24
