
curl-library

Re: libcurl reconnect

From: Stefan Eissing via curl-library <curl-library_at_cool.haxx.se>
Date: Wed, 27 Feb 2019 14:13:25 +0100

Hi,

I did some tests with the tool. Always nice to see someone digging into benchmarks.

First of all, I can confirm that the --noreuse option makes things very slow. But, as I found out, it means a new connection is used for each request. This *should* be much slower than reusing the connection: setting up a new connection is not only work for the TCP stack, the server also needs to allocate and initialize new resources.
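I do not know how ahsbt implements --noreuse internally, but in libcurl terms the difference boils down to roughly the following sketch (just an illustration, not ahsbt's actual code; URL and request count taken from the tests above, error handling omitted). CURLOPT_FRESH_CONNECT and CURLOPT_FORBID_REUSE are the standard options for switching connection reuse off:

  #include <curl/curl.h>

  int main(void)
  {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL,
                     "http://test.example.org:12345/index.html");

    /* no-reuse mode: force a brand new connection for every transfer
       and forbid keeping it in the pool afterwards */
    curl_easy_setopt(easy, CURLOPT_FRESH_CONNECT, 1L);
    curl_easy_setopt(easy, CURLOPT_FORBID_REUSE, 1L);

    for(int i = 0; i < 10000; i++)
      curl_easy_perform(easy); /* each iteration pays the full TCP (and TLS) setup cost */

    /* reuse mode: leave the two options above at their defaults and
       libcurl keeps the connection in its pool, so later transfers
       skip the handshake entirely */

    curl_easy_cleanup(easy);
    curl_global_cleanup();
    return 0;
  }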

Servers are optimized for handling many requests on one connection, and HTTP/2 is optimized for handling many of them in parallel. This is quite visible in:

./ahsbt --tcpnagle --connections=1 --threads=1 --requests=N --http=2 http://test.example.org:12345/index.html

ahsbt, HTTP/1.1
N Measured total time
10000: 1.14 sec
15000: 1.76 sec
20000: 2.27 sec
30000: 3.49 sec
40000: 4.53 sec
50000: 5.71 sec

ahsbt, HTTP/2
N Measured total time
10000: 1.70 sec - 2.14 sec
15000: 4.55 sec
20000: 8.44 sec
30000: 12.78 sec - 22.02 sec
40000: 22.32 sec
50000: 28.68 sec - 28.95 sec

This makes the HTTP/2 implementation look really slow. However, ahsbt seems to only ever have a single request outstanding. h2load has a parameter -m for the number of concurrent requests. It defaults to 1 and, indeed, that gives similar (and worse) numbers:

The HTTP/2 numbers are

h2load -m 1, HTTP/2
10000: 2.11 sec
15000: 4.30 sec
20000: 8.17 sec
30000: 9.96 sec - 21.52 sec
40000: 42.05 sec - 42.16 sec
50000: 68.95 sec

(So, I should probably have a look at this.) However, when you increase -m, you see the power of parallel requests:

h2load -m 100, HTTP/2
10000: 0.37 sec
15000: 0.61 sec
20000: 0.79 sec
30000: 1.10 sec
40000: 1.48 sec
50000: 1.85 sec
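
For what it is worth, this is roughly what that kind of parallelism looks like on the libcurl side: a sketch (not how ahsbt works today) that uses the multi interface so that up to m transfers are multiplexed over one HTTP/2 connection; the URL and m are placeholders:

  #include <curl/curl.h>

  /* sketch: issue m requests concurrently, letting libcurl multiplex
     them over a single HTTP/2 connection (roughly what h2load -m does) */
  static void run_parallel(const char *url, int m)
  {
    CURLM *multi = curl_multi_init();

    /* allow several transfers to be multiplexed over one connection */
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

    for(int i = 0; i < m; i++) {
      CURL *easy = curl_easy_init();
      curl_easy_setopt(easy, CURLOPT_URL, url);
      curl_easy_setopt(easy, CURLOPT_HTTP_VERSION,
                       (long)CURL_HTTP_VERSION_2_0);
      /* wait for an existing connection to confirm multiplexing
         instead of opening a new one per transfer */
      curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);
      curl_multi_add_handle(multi, easy);
    }

    int running = 1;
    while(running) {
      curl_multi_perform(multi, &running);
      curl_multi_wait(multi, NULL, 0, 1000, NULL); /* wait for socket activity */
    }

    /* cleanup of the individual easy handles (curl_multi_remove_handle +
       curl_easy_cleanup) omitted for brevity */
    curl_multi_cleanup(multi);
  }

With -m 100 the client keeps the connection busy instead of waiting a full round trip between requests, which is where the big drop in total time comes from.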

Hope this helps. If I can test particular variations for you, please let me know.

Cheers, Stefan

> On 27.02.2019 at 12:50, Aleksandar Lazic via curl-library <curl-library_at_cool.haxx.se> wrote:
>
> Hi Andrew.
>
> On 27.02.2019 at 08:17, Andrew Kurushin via curl-library wrote:
>> Hello All
>>
>>
>> I have made a small HTTP server benchmark tool based on the libcurl library
>> (https://github.com/ajax16384/ahsbt).
>>
>> There are plenty of tools with the same functionality, but they have some
>> disadvantages:
>>
>> - ab (https://httpd.apache.org/docs/2.4/programs/ab.html) and wrk
>> (https://github.com/wg/wrk) lack HTTP/2 support
>>
>> - h2load (https://nghttp2.org/documentation/h2load.1.html) has no
>> non-keep-alive mode
>>
>> - Go-based tools
>> (https://github.com/rakyll/hey, https://github.com/tsenart/vegeta) have
>> thread/process usage that is not user-land friendly
>
> What do you mean with this statement?
>
> [snip]
>
>> The issue can be observed without a real network (http://localhost tests).
>>
>> It would be great if you could help resolve this issue, or if you have any idea
>> why libcurl's serial socket re-connection is so slow compared to Apache "ab" (it
>> may be my ignorance of the libcurl API usage, or it may be a libcurl bug).
>>
>> Thanks
>>
>> Andrei
>>
>
>

-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.haxx.se/mail/etiquette.html