
segfault in h2_session_send #3541

Closed
jnbr opened this issue Feb 8, 2019 · 5 comments


jnbr commented Feb 8, 2019

I did this

cargo install cargo-watch

I expected the following

cargo should exit without error. This worked with libcurl 7.63.0 but has been crashing since commit fb445a1.

Here is a backtrace:

#0  0x00007ffff7e21127 in h2_session_send () from /lib/libcurl.so.4
#1  0x00007ffff7e2195a in h2_process_pending_input () from /lib/libcurl.so.4
#2  0x00007ffff7e22b79 in http2_conncheck () from /lib/libcurl.so.4
#3  0x00007ffff7df280b in extract_if_dead () from /lib/libcurl.so.4
#4  0x00007ffff7df679c in Curl_connect () from /lib/libcurl.so.4
#5  0x00007ffff7e06b87 in multi_runsingle () from /lib/libcurl.so.4
#6  0x00007ffff7e07f29 in curl_multi_perform () from /lib/libcurl.so.4
#7  0x0000555555b85bd7 in curl::multi::Multi::perform ()
#8  0x000055555599adcb in <std::thread::local::LocalKey<T>>::with ()
#9  0x00005555556867a5 in cargo::core::package::Downloads::wait ()
#10 0x0000555555780e91 in cargo::core::compiler::context::unit_dependencies::build_unit_dependencies ()
#11 0x00005555558dda0d in cargo::core::compiler::context::Context::prepare_units ()
#12 0x00005555558dade8 in cargo::core::compiler::context::Context::compile ()
#13 0x0000555555690416 in cargo::ops::cargo_compile::compile_ws ()
#14 0x000055555563cb83 in cargo::ops::cargo_install::install_one ()
#15 0x0000555555637941 in cargo::ops::cargo_install::install ()
#16 0x00005555555d4f6b in cargo::commands::install::exec ()
#17 0x00005555555ca9d8 in cargo::cli::main ()
#18 0x00005555555fefec in cargo::main ()
#19 0x00005555555f6983 in std::rt::lang_start::{{closure}} ()
#20 0x0000555555c080a3 in std::panicking::try::do_call ()
#21 0x0000555555c1903a in __rust_maybe_catch_panic ()
#22 0x0000555555bf83dc in std::rt::lang_start_internal ()
#23 0x00005555556013e5 in main ()

curl/libcurl version

curl 7.64.0 (x86_64-unknown-linux-musl) libcurl/7.64.0 LibreSSL/2.8.3 zlib/1.2.11 libssh2/1.8.0 nghttp2/1.36.0
Release-Date: 2019-02-06
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz HTTP2 UnixSockets HTTPS-proxy

operating system

Voidlinux x86_64-musl


bagder commented Feb 8, 2019

Can we reproduce this in any way with just libcurl? Can you re-run your crashing case with a debug-enabled libcurl so that we get symbols/variables too? Maybe we can then suggest some debugging points or checks to do as well.


jnbr commented Feb 9, 2019

> Can you re-run your crashing case with a debug-enabled libcurl so that we get symbols/variables too?

This is with current HEAD (b3cc801):

#0  0x00007ffff7de7a96 in h2_session_send (data=0x555556bfcde8, h2=0x555556a4d720) at http2.c:1512
        pri_spec = {stream_id = 1818845510, weight = 32767, exclusive = 1 '\001'}
        rv = 21845
        stream = 0x6e6f6974636e7566
#1  0x00007ffff7de7535 in h2_process_pending_input (conn=0x5555563edc28, httpc=0x5555563ee290,
    err=0x7fffffff3450) at http2.c:1348
        nread = 9
        inbuf = 0x555556a56de8 ""
        rv = 9
        data = 0x555556bfcde8
#2  0x00007ffff7de53fe in http2_connisdead (conn=0x5555563edc28) at http2.c:219
        result = 1455767528
        httpc = 0x5555563ee290
        nread = 9
        sval = 1
        dead = false
#3  0x00007ffff7de545c in http2_conncheck (check=0x5555563edc28, checks_to_perform=1) at http2.c:239
        ret_val = 0
        c = 0x5555563ee290
        rc = 2
        send_frames = false
#4  0x00007ffff7d99d25 in extract_if_dead (conn=0x5555563edc28, data=0x555556c2b128) at url.c:967
        state = 32767
        dead = 255
        pipeLen = 0
#5  0x00007ffff7d9a21e in ConnectionExists (data=0x555556c2b128, needle=0x555556bfd968,
    usethis=0x7fffffff35d8, force_reuse=0x7fffffff35cf, waitpipe=0x7fffffff35d0) at url.c:1134
        match = false
        pipeLen = 140737488303504
        max_pipe_len = 0
        best_pipe_len = 0
        curr = 0x555556701cd0
        check = 0x5555563edc28
        chosen = 0x0
        foundPendingCandidate = false
        canpipe = 0
        bundle = 0x55555639d828
        wantNTLMhttp = false
        wantProxyNTLMhttp = false
#6  0x00007ffff7d9fe9f in create_conn (data=0x555556c2b128, in_connect=0x7fffffff3660,
    async=0x7fffffff36a4) at url.c:3883
        result = CURLE_OK
        conn = 0x555556bfd968
        conn_temp = 0x0
        reuse = false
        connections_available = true
        force_reuse = false
        waitpipe = false
        max_host_connections = 2
        max_total_connections = 0
        __func__ = "create_conn"
#7  0x00007ffff7da0670 in Curl_connect (data=0x555556c2b128, asyncp=0x7fffffff36a4,
    protocol_done=0x7fffffff36a5) at url.c:4150
        result = 21845
        conn = 0x555556bfd968
#8  0x00007ffff7db8f85 in multi_runsingle (multi=0x5555563ef128, now=..., data=0x555556c2b128)
    at multi.c:1463
        stream_error = false
        msg = 0x0
        connected = 255
        async = false
        protocol_connect = false
        dophase_done = false
        done = true
        rc = CURLM_OK
        result = CURLE_OK
        k = 0x555556c2b218
        timeout_ms = 0
        recv_timeout_ms = 0
        send_timeout_ms = 0
        control = 32767
        __func__ = "multi_runsingle"
#9  0x00007ffff7dba99a in curl_multi_perform (multi=0x5555563ef128, running_handles=0x7fffffff3844)
    at multi.c:2217
        result = CURLM_OK
        pipe_st = {old_pipe_act = {__sa_handler = {sa_handler = 0x7fffffff37e0,
              sa_sigaction = 0x7fffffff37e0}, sa_mask = {__bits = {140737351784412, 8589939592,
                140737488304256, 1, 93825015858344, 21474836480002, 140733193388033, 2015, 780926,
                140737488304224, 140737488314120, 140737488304352, 140737351746825, 93825016240384, 0,
                140737488304420}}, sa_flags = 5000,
            sa_restorer = 0x555555c5db18 <anon.e715e55f8a38f894dd23c8d38ea2cd7a.20.llvm>},
          no_signal = true}
        data = 0x555556c2b128
        returncode = CURLM_OK
        t = 0x7fffffff37e0
        now = {tv_sec = 2015, tv_usec = 780928}

> Can we reproduce this in any way with just libcurl?

will try
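
In case it helps, here is a rough, untested sketch of what a libcurl-only reproduction attempt could look like, based on the backtrace above: two transfers to the same HTTP/2 host through the multi interface, with the first easy handle cleaned up before the second one connects, so the connection-reuse check (extract_if_dead -> http2_conncheck) runs on a cached connection whose conn->data may still point at the freed handle. The URL is just a placeholder, and make_handle/drive are ad-hoc helpers, not libcurl APIs.

/* untested sketch: try to make extract_if_dead() run its connection
   check on a cached HTTP/2 connection whose original easy handle is gone */
#include <curl/curl.h>

static CURL *make_handle(const char *url)
{
  CURL *h = curl_easy_init();
  curl_easy_setopt(h, CURLOPT_URL, url);
  /* negotiate HTTP/2 over TLS, like the crashing connection */
  curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
  return h;
}

static void drive(CURLM *multi)
{
  int running = 0;
  do {
    curl_multi_perform(multi, &running);
    if(running)
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while(running);
}

int main(void)
{
  const char *url = "https://example.com/"; /* placeholder HTTP/2 host */
  CURLM *multi;
  CURL *first, *second;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();

  /* first transfer creates and caches the HTTP/2 connection */
  first = make_handle(url);
  curl_multi_add_handle(multi, first);
  drive(multi);
  curl_multi_remove_handle(multi, first);
  curl_easy_cleanup(first); /* the cached connection's conn->data now dangles */

  /* second transfer to the same host should go through ConnectionExists ->
     extract_if_dead -> http2_conncheck on the cached connection */
  second = make_handle(url);
  curl_multi_add_handle(multi, second);
  drive(multi);
  curl_multi_remove_handle(multi, second);
  curl_easy_cleanup(second);

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}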


bagder commented Feb 10, 2019

Can you try this patch and see if it makes any difference? When the http2 connection is checked, it currently uses its conn->data pointer, but I think that pointer might no longer be valid at this point. We should set it explicitly to the transfer in use when the check is asked for.

--- a/lib/url.c
+++ b/lib/url.c
@@ -962,10 +962,11 @@ static bool extract_if_dead(struct connectdata *conn,
     bool dead;
     if(conn->handler->connection_check) {
       /* The protocol has a special method for checking the state of the
          connection. Use it to check if the connection is dead. */
       unsigned int state;
+      conn->data = data; /* use this transfer for now */
       state = conn->handler->connection_check(conn, CONNCHECK_ISDEAD);
       dead = (state & CONNRESULT_DEAD);
     }
     else {
       /* Use the general method for determining the death of a connection */


jnbr commented Feb 10, 2019

Yes, this fixes it. I cannot reproduce the segfault anymore with the patch applied.

bagder added a commit that referenced this issue Feb 11, 2019
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.

Reported-by: jnbr on github
Fixes #3541
bagder closed this as completed in 38d8e1b on Feb 11, 2019

jnbr commented Feb 11, 2019

Thanks a lot!
