
curl-tracker Archives

[curl:bugs] #1400 Quadratic slowdown in curl_multi

From: Daniel Stenberg <bagder_at_users.sf.net>
Date: Wed, 23 Jul 2014 10:55:32 +0000

- **labels**: multi, performance --> multi, performance, regression
- **status**: open --> open-confirmed

---
**[bugs:#1400] Quadratic slowdown in curl_multi**
**Status:** open-confirmed
**Labels:** multi performance regression 
**Created:** Wed Jul 23, 2014 06:45 AM UTC by Daniel Stenberg
**Last Updated:** Wed Jul 23, 2014 06:45 AM UTC
**Owner:** Daniel Stenberg
(copied here from the mailing list to track it, see http://curl.haxx.se/mail/lib-2014-07/0206.html )
I've found that curl_multi slows down quadratically with the number of running requests.
For example, with 10,000 concurrent requests and a server that takes 10 seconds to respond to each one, curl should be able to complete roughly 1,000 requests per second. Instead, curl_multi_perform() spins at 100% CPU for several seconds at a time, making almost no forward progress.
Profiling shows that curl_multi_perform() is spending all its time in Curl_multi_process_pending_handles(). This function is called every time a request completes, and it iterates over every running request.
I am able to completely eliminate the performance problem by commenting out the body of Curl_multi_process_pending_handles(). It appears this code is only needed when CURLMOPT_MAX_TOTAL_CONNECTIONS is set.
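To illustrate the pattern the profile points at, here is a minimal sketch using invented names (not curl's actual source): the scan touches every attached handle on every completion, even when few or none are actually pending.

```c
/* Sketch only, invented names: on each completed transfer the multi
 * handle walks its full list of attached easy handles looking for ones
 * parked in a "pending" state. That is O(N) work per completion, so
 * completing N transfers costs O(N^2) overall. */
enum state { STATE_CONNECT, STATE_PERFORMING, STATE_PENDING };

struct easy {
  struct easy *next;
  enum state state;
};

struct multi {
  struct easy *easy_list;   /* every easy handle added to the multi */
};

static void process_pending_handles(struct multi *m)
{
  struct easy *e;
  for(e = m->easy_list; e; e = e->next)   /* scans all 10,000 handles */
    if(e->state == STATE_PENDING)
      e->state = STATE_CONNECT;           /* re-activate the handle */
}
```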
I've attached a minimal demonstration of the problem (two source files).
mock_http_server.c (60 lines):
  Creates a mock HTTP server on port 8080 with an average 10 second request delay (uses libevent).
test_curl_throughput.c (99 lines):
  Performs requests using curl_multi, with 10,000 handles running concurrently.
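For reference, the client side boils down to something like this (a condensed sketch, not the attached file; the concurrency/total constants, URL and helper names are placeholders and error checking is omitted):

```c
#include <curl/curl.h>

#define CONCURRENCY    10000       /* handles kept running at once */
#define TOTAL_REQUESTS 100000      /* assumed total; the attachment may differ */
#define URL "http://127.0.0.1:8080/"

static size_t discard(char *data, size_t size, size_t nmemb, void *userp)
{
  (void)data; (void)userp;
  return size * nmemb;             /* drop the response body */
}

static void new_transfer(CURLM *multi)
{
  CURL *e = curl_easy_init();
  curl_easy_setopt(e, CURLOPT_URL, URL);
  curl_easy_setopt(e, CURLOPT_WRITEFUNCTION, discard);
  curl_multi_add_handle(multi, e);
}

int main(void)
{
  CURLM *multi;
  int running = 0;
  long started = 0, completed = 0;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();

  while(started < CONCURRENCY) {   /* fill up to 10,000 concurrent handles */
    new_transfer(multi);
    started++;
  }

  while(completed < TOTAL_REQUESTS) {
    CURLMsg *msg;
    int msgs_left, numfds;

    curl_multi_perform(multi, &running);

    /* every completion below triggers the pending-handle scan in curl */
    while((msg = curl_multi_info_read(multi, &msgs_left))) {
      if(msg->msg == CURLMSG_DONE) {
        curl_multi_remove_handle(multi, msg->easy_handle);
        curl_easy_cleanup(msg->easy_handle);
        completed++;
        if(started < TOTAL_REQUESTS) {
          new_transfer(multi);     /* keep the concurrency level up */
          started++;
        }
      }
    }
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);
  }

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```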
To run the demonstration:
    gcc mock_http_server.c -o mock_http_server -levent
    gcc test_curl_throughput.c -o test_curl_throughput -lcurl
    ulimit -n 100000                              # requires root
    ./mock_http_server | ./test_curl_throughput   # the pipe is to run them concurrently
Would it make sense to store the list of pending handles as a separate linked list, to avoid iterating through every easy_handle?
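Roughly, something like this (a sketch with made-up names, not a patch): handles that are parked as pending go on their own list, so waking them after a completion costs O(pending) instead of O(all attached handles).

```c
enum state { STATE_CONNECT, STATE_PENDING };

struct easy {
  struct easy *next;           /* list of all handles on the multi */
  struct easy *pending_next;   /* links only handles that are pending */
  enum state state;
};

struct multi {
  struct easy *easy_list;      /* every attached easy handle */
  struct easy *pending_list;   /* only the ones waiting for a slot */
};

/* park a handle that cannot start yet (e.g. connection cap reached) */
static void park_pending(struct multi *m, struct easy *e)
{
  e->state = STATE_PENDING;
  e->pending_next = m->pending_list;
  m->pending_list = e;
}

/* after a completion, wake only the parked handles; no scan of easy_list */
static void process_pending_handles(struct multi *m)
{
  struct easy *e = m->pending_list;
  m->pending_list = NULL;
  while(e) {
    struct easy *next = e->pending_next;
    e->pending_next = NULL;
    e->state = STATE_CONNECT;
    e = next;
  }
}
```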
---
Received on 2014-07-23
