
curl-library

Re: Slow HTTP/2 download speed per connection

From: Tatsuhiro Tsujikawa <tatsuhiro.t_at_gmail.com>
Date: Sat, 26 Sep 2015 17:21:42 +0900

Hi,

On Sat, Sep 26, 2015 at 7:28 AM, Daniel Stenberg <daniel_at_haxx.se> wrote:

> On Fri, 25 Sep 2015, Daniel Stenberg wrote:
>
> Hey Tatsuhiro,
>
> Can you help me out with a look at this nghttpd server output that I get
> when downloading an 8GB file from localhost (HTTPS)?
>
> It very much looks like we have a window size related problem, based
> solely on what packets we see going out and coming in.
>
> The server sends out 4 DATA frames quickly, then it takes 5 milliseconds
> before a WINDOW_UPDATE is received (adding 32K to stream 0), and then it
> takes about 39 more milliseconds (I'm not sure what makes it take so long)
> for a second WINDOW_UPDATE (on stream 1), then a 3rd (stream 0, 32767
> bytes) and a 4th update (stream 1, also 32767 bytes). The latter three
> all arrive within the same millisecond.
>
> Then it loops: it sends another 4 DATA frames really fast and goes back
> to the same window update dance.
>
> See the attachment for a snippet from the log. This problem is easily
> reproducible for me.
>
> Does it give you any ideas?
>
>
I could reproduce this. The first thing I noticed was TCP_NODELAY. If I
enabled it with the --tcp-nodelay option, I immediately got around 50MiB/s.
But it is still not as fast as nghttp or a normal HTTP/1 transfer.
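
For reference, the libcurl equivalent of the --tcp-nodelay option is
CURLOPT_TCP_NODELAY; a minimal sketch (the URL is just a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://localhost/8GB.bin");
    /* disable Nagle's algorithm, same as the --tcp-nodelay option */
    curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}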
I did a quick profile using Google perftools, and it reported the following:

Total: 1566 samples
     604  38.6%  38.6%      604  38.6% curlx_tvdiff
     289  18.5%  57.0%      289  18.5% CRYPTO_128_unwrap
     286  18.3%  75.3%      286  18.3% AES_cbc_encrypt
     203  13.0%  88.3%      791  50.5% multi_addtimeout
      38   2.4%  90.7%       38   2.4% __read_nocancel
      32   2.0%  92.7%       32   2.0% __write_nocancel
      22   1.4%  94.1%       22   1.4% strerror_l
      12   0.8%  94.9%      103   6.6% SSLv3_client_method
       7   0.4%  95.3%        7   0.4% mempcpy
       5   0.3%  95.7%        8   0.5% EVP_rc4_40
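
(In case you want to reproduce this: I used the gperftools CPU profiler.
One way to hook it up is roughly the following, linking the test client
with -lprofiler; the file name and URL are just examples:

#include <gperftools/profiler.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  ProfilerStart("h2-download.prof");  /* start sampling into this file */
  curl_easy_setopt(curl, CURLOPT_URL, "https://localhost/8GB.bin");
  curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 1L);
  curl_easy_perform(curl);
  ProfilerStop();                     /* flush the samples */
  curl_easy_cleanup(curl);
  return 0;
}

Then running pprof --text on the client binary and the profile file prints
a table like the one above.)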

So curlx_tvdiff was called many times. When I dumped this into a call
graph, I saw that the Curl_expire() call in on_data_chunk_recv() invoked
it every time curl got a data chunk from the server.
If I just commented out that Curl_expire() call, I got 120MiB/s.
I'm not sure what the purpose of Curl_expire() is here, so we probably
should not just comment it out.
Perhaps there is a better way to reduce these calls, or some other way to
produce the same result without going through so much of the
timeout processing.
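
To show what I mean, here is a simplified sketch of the hot path (not the
actual lib/http2.c code, just its shape): every DATA chunk nghttp2 hands
us ends up arming a 0ms timeout for the handle:

/* simplified sketch; the real callback in lib/http2.c does more */
static int on_data_chunk_recv(nghttp2_session *session, uint8_t flags,
                              int32_t stream_id, const uint8_t *data,
                              size_t len, void *userp)
{
  struct SessionHandle *data_s =
    nghttp2_session_get_stream_user_data(session, stream_id);

  /* ... copy the chunk into the stream's receive buffer ... */

  /* once per DATA chunk: Curl_expire() inserts a timeout node
     (multi_addtimeout) and compares timestamps (curlx_tvdiff),
     which is where the 38.6% + 13.0% in the profile above go */
  Curl_expire(data_s, 0);
  return 0;
}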

BTW, these calls were added in this commit:

commit 61a94d6f77f78c72e15e34910dd4e0e035bc7bc7
Author: Daniel Stenberg <daniel_at_haxx.se>
Date: Mon May 11 14:18:53 2015 +0200

    http2: Curl_expire() all handles with incoming traffic

    ... so that they'll get handled next in the multi loop.
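
One possible direction (just an untested idea; the expire_pending flag is
invented for illustration): remember that a 0ms expiry is already pending
for the handle and skip the duplicates, clearing the flag once the multi
loop has run:

/* hypothetical guard; expire_pending is a made-up field that could
   live in the per-connection http_conn state */
static void expire_once(struct SessionHandle *data_s,
                        struct http_conn *httpc)
{
  if(!httpc->expire_pending) {
    httpc->expire_pending = TRUE; /* clear again after the multi loop */
    Curl_expire(data_s, 0);
  }
}

That way a whole nghttp2_session_mem_recv() pass would arm the timeout at
most once instead of once per DATA chunk.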

Best regards,
Tatsuhiro Tsujikawa

> --
>
> / daniel.haxx.se

-------------------------------------------------------------------
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette: http://curl.haxx.se/mail/etiquette.html
Received on 2015-09-26