Re: RFC: libcURL and persistent connection management
Date: Sat, 21 Jan 2012 00:00:20 +0100 (CET)
On Thu, 19 Jan 2012, Ryan Beasley wrote:
> I started by looking into fixing up our app to rein in outbound connections,
> but now I'm thinking this stuff would better fit in libcURL itself. I'd like
> to run the ideas past Daniel et al., and then—with approval from my
> management—whip up the related code and send out some patches. Please let
> me know if the following seem useful. If so, I'd appreciate any feedback
> before getting to work.
Thanks a lot Ryan for bringing your ideas here for discussion. I like them all!
I'm seeing a lot of questions from users in situations similar to yours, and
many of their issues are related or identical to the ones you're describing. I
think some multi interface improvements in how libcurl handles groups of
connections and multiple connections to the same host would be much
appreciated.
> Request + connection tagging:
> Addresses problem 2.
> The idea here is to provide a mechanism whereby a client application can
> signal to libcURL that a selection of persistent connections need not be
> kept around, because the application won't need to issue any new requests on
> said connections anytime soon. The relationship between connections and
> tags is m:n (sets).
Let me just test if I understood your idea correctly:
I would set the tag on the easy handle and it would then tag the connection it
subsequently uses. When I later on ask libcurl to "cleanup" the tag, libcurl
marks all such connections to either get closed at once if they are idle in
the connection pool, or get marked to close after use if they are currently
in use by an ongoing transfer?
Sounds like a lovely feature. We could perhaps also have the cleanup function
return some info about how many connections it closed/marked.
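If I got that right, here's a rough application-side sketch of how such a
tagging API could look. To be clear, everything below is hypothetical:
neither CURLOPT_CONNECTION_TAG nor curl_multi_cleanup_tag() exists in
libcurl, the names are made up purely for this discussion:

```c
/* HYPOTHETICAL sketch -- these option/function names do not exist
 * in libcurl; they're invented here for discussion only. */
CURLM *multi = curl_multi_init();
CURL *easy = curl_easy_init();

curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
curl_easy_setopt(easy, CURLOPT_CONNECTION_TAG, "batch-1");  /* invented */
curl_multi_add_handle(multi, easy);

/* ... transfers run; the connections they use inherit the "batch-1" tag ... */

/* Invented cleanup call: close tagged idle connections at once, mark
 * in-use ones to close after their transfer finishes, and return how
 * many connections it closed/marked. */
int n = curl_multi_cleanup_tag(multi, "batch-1");
```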
I have one incomplete thought that brings more work and makes this feature a
bit harder to implement: how to handle connections that need some action to
get closed, like for example FTP which sends a "QUIT". We probably need to
give the cleanup function a non-blocking API somehow. I'm already suffering
mental pain from not having thought about that for curl_multi_remove_handle(),
which has this same problem, and one of these days I will have to introduce a
new API call in order to handle exactly that...
> Per host/proxy persistent connection limits:
Yes, this is a good idea and is requested every now and then.
> Tangent: It seems like libcURL ignores CURLMOPT_MAXCONNECTS where the
> number of connections is less than the default of 10 connections.
We should fix that then...
> Open question: Should limits for origin servers & proxies differ? Web
> browsers tend to make this distinction.
Yes, I think the different situations need different treatment. When using a
proxy, everything goes through the same proxy, so we can't split anything up
on separate hosts; in that case the per-host option should probably simply
not be used.
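Just to illustrate the distinction, a sketch of what separate knobs could
look like. Again, these option names are hypothetical and do not exist in
libcurl today:

```c
/* HYPOTHETICAL options -- invented names, not part of libcurl. */
CURLM *multi = curl_multi_init();

/* At most 4 concurrent connections per origin host ... */
curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 4L);

/* ... but when a proxy is in use, a per-host cap makes little sense,
 * so a separate total cap toward the proxy could apply instead. */
curl_multi_setopt(multi, CURLMOPT_MAX_PROXY_CONNECTIONS, 16L);
```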
> Limiting outstanding requests, implementing request queues in libcURL:
This is also something users frequently miss, and it would indeed make it
easier for libcurl users to write programs without each of them duplicating
the logic for this very common pattern. I'm all for it!
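For reference, this is roughly the logic every application ends up
duplicating today. A minimal stand-alone sketch in plain C (no libcurl
calls; the comments mark where curl_multi_add_handle() and
curl_multi_remove_handle() would go in a real program) of a bounded queue
that keeps at most MAX_ACTIVE requests in flight:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ACTIVE 3   /* cap on concurrently running requests */
#define QUEUE_MAX 64   /* ring buffer capacity for parked requests */

/* A tiny FIFO of pending request ids plus a count of active ones.
 * In a real application the ids would be CURL easy handles. */
struct reqqueue {
  int pending[QUEUE_MAX];
  size_t head, tail;   /* FIFO ring-buffer indices */
  size_t active;       /* requests currently "in flight" */
};

void rq_init(struct reqqueue *q) {
  q->head = q->tail = 0;
  q->active = 0;
}

/* Submit a request: start it right away if below the cap,
 * otherwise park it in the FIFO. Returns 1 if started now, 0 if queued. */
int rq_submit(struct reqqueue *q, int id) {
  if (q->active < MAX_ACTIVE) {
    q->active++;             /* would call curl_multi_add_handle() here */
    return 1;
  }
  q->pending[q->tail++ % QUEUE_MAX] = id;
  return 0;
}

/* A request finished: pop the oldest pending one, if any, and start it
 * in the freed slot. Returns the started id, or -1 if nothing queued. */
int rq_complete(struct reqqueue *q) {
  q->active--;               /* would call curl_multi_remove_handle() here */
  if (q->head == q->tail)
    return -1;               /* nothing waiting */
  q->active++;
  return q->pending[q->head++ % QUEUE_MAX];
}
```

With this in libcurl itself, the application would just add easy handles and
let the library hold the excess back until a slot frees up.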
-- / daniel.haxx.se
List admin: http://cool.haxx.se/list/listinfo/curl-library