
curl: set a 100K buffer size by default #1446

Closed
bagder wants to merge 2 commits from bagder/curl-larger-buffer

Conversation

bagder (Member) commented Apr 24, 2017

Test command 'time curl http://localhost/80GB -so /dev/null' on a Debian
Linux.

Before (middle-performing run out of 9):

 real    0m28.078s
 user    0m11.240s
 sys     0m12.876s

After (middle-performing run out of 9):

 real    0m26.356s (93.9%)
 user    0m5.324s  (47.4%)
 sys     0m8.368s  (65.0%)
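A sketch of how the median-of-9 measurement could be scripted (the loop and the GNU time invocation are assumptions; the description above only states that the middle of 9 runs was taken):

# Run the transfer 9 times, recording elapsed wall-clock seconds,
# then print the 5th of the 9 sorted values, i.e. the median.
for i in $(seq 1 9); do
  /usr/bin/time -f "%e" curl http://localhost/80GB -so /dev/null 2>> times.txt
done
sort -n times.txt | sed -n '5p'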
@mention-bot commented:
@bagder, thanks for your PR! By analyzing the history of the files in this pull request, we identified @yangtse, @captain-caveman2k and @tatsuhiro-t to be potential reviewers.

bagder (Member, Author) commented Apr 24, 2017

Ah, forgot to update some tests. Will do.

Interestingly, changing CURL_MAX_WRITE_SIZE to 100K and recompiling produces even lower numbers, which indicates there are some glitches in the use of the larger buffer...
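For reference, a sketch of such a recompile (this assumes the #ifndef guard around CURL_MAX_WRITE_SIZE in curl.h, which allows the value to be overridden at build time):

# Rebuild with a 100K CURL_MAX_WRITE_SIZE instead of the 16384 default;
# curl.h only defines the macro if it is not already set.
./configure CFLAGS="-DCURL_MAX_WRITE_SIZE=102400"
make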

jay (Member) commented Apr 24, 2017

I suspect this is unnecessary, and you should back it up with some typical real-world data scenarios. Even if there's a very slight difference in a corner case, iirc the user can already get up to a 500k buffer size via rate limiting.

For comparison I ran your 80GB test on Windows. There is no discernible difference (correction: a very slight difference, 3.47 seconds or about 2.3% faster on average under the most ideal circumstances) with a larger buffer size. I recall discussing this with you or someone else before, and didn't we reach the conclusion that this is only helpful for sftp? Also, you may have blogged about it. Sorry, my memory is foggy on this.
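(--limit-rate is the option referred to above; that it also sizes the tool's internal buffer was an implementation detail at the time. A usage sketch, not from the original comment:)

# Cap the transfer at 500 kilobytes/second; per the comment above, the
# tool would then also use an internal buffer of up to that size.
curl --limit-rate 500K http://localhost/80GB -so /dev/null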

create file:

fsutil file createnew 80GB 0
fsutil sparse setflag 80GB
c:\gnuwin32\bin\dd if=nul of=80GB bs=1 count=0 seek=80GB oflag=append
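(A rough Linux equivalent for creating the same sparse test file, assuming GNU coreutils; not part of the original comment:)

# Create a sparse 80GB file without allocating any data blocks.
truncate -s 80G 80GB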

bagder/curl-larger-buffer:

curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 147.811000
curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 145.939000
curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 145.798000

parent you branched from, 7474418:

curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 150.729000
curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 149.683000
curl http://localhost/80GB -so NUL --write-out "time_total: %{time_total}"
time_total: 149.542000
curl 7.54.1-DEV (i386-pc-win32) libcurl/7.54.1-DEV OpenSSL/1.0.2k nghttp2/1.21.1
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS Largefile NTLM SSL HTTP2 HTTPS-proxy

Server: Apache/2.4.18 (Win64)

bagder (Member, Author) commented Apr 25, 2017

I think this is an easy fix that brings some improvement at a very low cost. And it doesn't touch the library, which makes it even less risky. To me, the numbers I've already shown are reason enough: curl will use less CPU and fewer system resources this way. Sure, for regular network transfers it will be bottlenecked by the bandwidth, but being lightweight is also a virtue.

But there are some actual network speed reasons too, as you mention. SFTP in particular will get a serious boost with this.

SFTP 4GB over localhost

old

real 0m32.391s
user 0m28.824s
sys 0m2.060s

new (larger buffer)

real 0m28.212s
user 0m26.728s
sys 0m1.332s

So, about 13% faster here (real time down from 32.4 to 28.2 seconds).

SFTP 4GB over a 200ms latency link

(test run by adding latency to 'lo')
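One way to set up such latency, using tc/netem (a sketch; the comment only states that latency was added to 'lo'):

# Add 200 ms of delay to every packet on the loopback interface,
# run the transfer, then remove the delay again.
tc qdisc add dev lo root netem delay 200ms
# ... run the SFTP transfer ...
tc qdisc del dev lo root netem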

old

Transfer rate: 310KB/sec

new

Transfer rate: 1930KB/sec

(The speed-up ratio, roughly 6.2x, is almost identical to the ratio of the buffer increase, 16K to 100K...)

HTTP over a 200 ms latency link

As a comparison, to see how it fares when we add latency to the mix so that the transfer isn't completely CPU-bound anymore.

old

real 0m35.536s
user 0m0.144s
sys 0m0.176s

new

real 0m35.319s
user 0m0.072s
sys 0m0.124s

Virtually no speed difference, but slightly lower system load.

jay (Member) commented Apr 25, 2017

sftp is the real winner here. I take issue, though, with the claim that it uses fewer resources: you're trading one resource (CPU) for another (memory). For sftp it's clearly worth it.

@bagder bagder closed this in 96ece5c Apr 25, 2017
@bagder bagder deleted the bagder/curl-larger-buffer branch May 15, 2017 22:06
@lock lock bot locked as resolved and limited conversation to collaborators May 14, 2018