

Re: Pause timeout timers while processing data?

From: Daniel Stenberg via curl-library <>
Date: Thu, 4 Oct 2018 11:21:17 +0200 (CEST)

On Wed, 3 Oct 2018, Alex Crichton via curl-library wrote:

> Hello! We've run into an issue [1] in rust-lang/cargo's usage of curl where
> timeouts are triggering when the filesystem is quite slow. I believe the
> cause for this is that we have a multi socket interface and whenever a
> transfer finishes we synchronously decompress it and write it out to the
> filesystem. It looks like the timeout timers keep running while we're doing
> filesystem operations, which means the next time we come back to check the
> socket states we get a timeout error.

This seems like an odd design choice. Why not simply let the transfer complete
*first*, then you know everything was downloaded fine and the timeout period
is passed, and *then* do the decompression/writing totally outside of curl's
knowledge and "reach"?
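One way to follow that suggestion is to have the write callback only buffer the incoming body in memory, and defer all decompression and filesystem work until after the transfer has completed. A minimal sketch of such a callback, using libcurl's documented CURLOPT_WRITEFUNCTION prototype (the `memory_chunk` struct and names are illustrative, not part of any API):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Growable buffer that the write callback appends to. */
struct memory_chunk {
    char *data;
    size_t size;
};

/* Matches libcurl's write-callback prototype:
 * size_t cb(char *ptr, size_t size, size_t nmemb, void *userdata) */
static size_t buffer_write_cb(char *ptr, size_t size, size_t nmemb,
                              void *userdata)
{
    struct memory_chunk *mem = (struct memory_chunk *)userdata;
    size_t realsize = size * nmemb;

    char *grown = realloc(mem->data, mem->size + realsize + 1);
    if (!grown)
        return 0; /* returning less than realsize aborts the transfer */

    mem->data = grown;
    memcpy(mem->data + mem->size, ptr, realsize);
    mem->size += realsize;
    mem->data[mem->size] = '\0';
    return realsize; /* tell libcurl all bytes were consumed */
}
```

With the whole response buffered like this, the slow decompression and disk writes happen only after the multi interface has reported the transfer done, outside any of libcurl's timeout windows.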

> 1. Is there a way to across-the-board pause timeouts? For example just
> before we do synchronous operations we could pause everything and then
> unpause it soon afterwards.

There isn't. We don't have any such "extend timeout" functionality in libcurl.
The timeout you're using is meant to cover the "entire operation", including
the callbacks.

> 2. Failing a "one method" solution, would `curl_easy_pause` work for this
> purpose? If we paused everything just after we did our network activity and
> repaused it when it was finished, would `curl_easy_pause` also pause timeout
> timers?

I can certainly see how that could be useful for an application, but no, that's
not done right now.

> 3. And failing those two solutions, is this simply a case where the progress
> callback is recommended to implement custom timeout strategies?

I think it is, yes. Or you'd do as I suggested above: do the magic after
libcurl is done with the transfer.

Received on 2018-10-04