Re: Issues in constraining number of open connections with curl multi interface
Date: Tue, 3 Jan 2012 21:59:35 +0530
As suggested, we have tried out callbacks for sockets.
We keep the socket fd in our connection pool object along with the curl easy
handle; this way we limit the number of fds with our existing connection
pool logic.
Following are our observations:
1. The opensocket callback resolves our problem of restricting the number of connections.
2. The closesocket callback is getting invoked frequently. We debugged it and
found the close call in the curl source at lib/connect.c, line no 1013,
marked /* connect failed or timed out */.
This occurred when waitconnect() returned WAITCONN_SELECT_ERROR and
isconnected was set to 1.
What could be the reason for this?
3. We then implemented the CURLOPT_SOCKOPTFUNCTION callback to connect on our
fd ourselves, and observed that the close callback was never invoked. This
gives the application total control of the connection, and hence the socket
is not getting closed (is this correct?).
Note that we also set the KeepAlive option, which solves the problem mentioned
in point 2.
However, we only want control over restricting open fds (i.e. connections);
the connect itself should be handled by libcurl, so that we do not have to
worry about address resolution and DNS caching.
Please find attached the sample code with the above implementation.
Please do let us know your thoughts on this and provide guidelines for
proper implementation :)
On Mon, Jan 2, 2012 at 3:49 AM, Daniel Stenberg <daniel_at_haxx.se> wrote:
> On Fri, 30 Dec 2011, Mukul Kedare wrote:
>> It seems that the above assumption was wrong, since we are using multi curl
>> per thread and with multi curl, caching will happen at multi curl handle
>> and not at easy handles. So for 100 threads, even though we restrict the
>> number of curl easy handles for "URL_1" to 10, each with MAXCONNECT=1, we
>> are seeing open connections >= 10 and <= 100. I assume this is due to use
>> of multi handle at thread level. "URL_1" connections are getting cached at
>> each thread (number of threads = 100).
>> Q1. We want to make persistent open connections and restrict the number
>> of open connections with URLs. Can you suggest the correct way in which we
>> should be using curl multi and easy interfaces to solve this problem?
> I can't think of any immediate easy fix for your problem. I understand
> what you're trying to get out of libcurl, but its caching doesn't quite
> offer the level of flexibility that you'd like here.
> You can possibly change your logic. For example, if you were more careful to
> only add URL_1 to the multi handles that should have it, that would limit the
> count to 10. Or use a counter of your own and have your OPENSOCKETFUNCTION
> callback fail socket creations when the counter for a specific address has
> reached 10, etc.
>> Q2. Can we have a common curl connection cache shared between curl multi handles?
> Not currently, no. But:
>> Q3. Also I have read a thread in the mailing list (
>> where you stated that "Share Interface" can be used for sharing connection
>> cache. Is this possible? :)
> The share interface is our generic way to share certain kinds of data
> between handles. It would make perfect sense to make sure a share object
> also can hold the connection pool as then you would be able to make all 100
> multi handles use the *same* connection cache/pool and a single maximum
> limit would be in use.
> That would of course require that someone writes the code to bring this about.
> / daniel.haxx.se
> List admin: http://cool.haxx.se/list/listinfo/curl-library
> Etiquette: http://curl.haxx.se/mail/etiquette.html
- text/x-csrc attachment: socket_multi.c