I'd been relying on the CURLOPT_TIMEOUT feature for request timeouts
in my multi-handle curl app, but recent discussions and my own
observations show that this is unreliable. So I added end-to-end
timeouts in my own code, as Daniel suggests. Since this was
frustrating to figure out, and since my solution seems to work great,
here it is. Or maybe someone can point out if I'm misusing something.
I'm using the progress callback. So first of all, for my easy
handles, I set CURLOPT_NOPROGRESS to 0, CURLOPT_PROGRESSFUNCTION to my
callback, and CURLOPT_PROGRESSDATA to the data structure I use for all
my callbacks (let's call this a Fetcher with instance f).
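Concretely, the handle setup looks roughly like this (a sketch; the
Fetcher struct, my_progress, and MY_TIMEOUT are my own names, not
anything from libcurl):

```c
#include <curl/curl.h>
#include <time.h>

#define MY_TIMEOUT 30  /* seconds; whatever fits your app */

typedef struct Fetcher {
    time_t initializeTime;  /* set just before curl_multi_add_handle() */
    /* ... whatever else your callbacks need ... */
} Fetcher;

/* my progress callback; matches libcurl's CURLOPT_PROGRESSFUNCTION
 * signature */
static int my_progress(void *clientp, double dltotal, double dlnow,
                       double ultotal, double ulnow);

static void setup_handle(CURL *easy, Fetcher *f)
{
    curl_easy_setopt(easy, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(easy, CURLOPT_PROGRESSFUNCTION, my_progress);
    curl_easy_setopt(easy, CURLOPT_PROGRESSDATA, f);
}
```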
Before adding a handle to the multi-handle I use time() to set a field
on my Fetcher struct: f->initializeTime. In the progress callback I
just check time(NULL) - f->initializeTime > MY_TIMEOUT. If true, I
return a non-zero value to abort the request, as per the
CURLOPT_PROGRESSFUNCTION documentation. Otherwise I return zero, since
everything is ok.
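The callback itself is tiny. A sketch (again, Fetcher, MY_TIMEOUT,
and the function name are mine; the four double arguments come from
libcurl's progress-callback signature but go unused here):

```c
#include <time.h>

#define MY_TIMEOUT 30  /* seconds */

typedef struct Fetcher {
    time_t initializeTime;  /* time(NULL) taken just before adding the handle */
} Fetcher;

/* Matches libcurl's CURLOPT_PROGRESSFUNCTION signature. Returning
 * non-zero makes libcurl abort the transfer. */
static int my_progress(void *clientp, double dltotal, double dlnow,
                       double ultotal, double ulnow)
{
    Fetcher *f = (Fetcher *)clientp;
    (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;

    if (time(NULL) - f->initializeTime > MY_TIMEOUT)
        return 1;  /* took too long: abort this transfer */
    return 0;      /* still within budget */
}
```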
Since the progress function should only be called roughly once a
second (again as per the docs; I haven't verified this, but I see no
negative impact on my app), this shouldn't be too bad for performance
and gives me the right amount of timeout resolution.
More importantly, this gives me nearly the same failure-case behavior
I was expecting previously: I get the curl handle back from
curl_multi_info_read with an error code set. So this has minor impact
on the rest of my app.
Received on 2008-12-30