RE: http2: libcurl keeps reading from connection after WRITEFUNCTION has requested pause

From: Sangamkar, Dheeraj via curl-library <curl-library_at_cool.haxx.se>
Date: Mon, 5 Nov 2018 19:42:18 +0000

>
> On Sat, 3 Nov 2018, Sangamkar, Dheeraj via curl-library wrote:
>
> > libcurl version: 7.52.1-5+deb9u6
>
> We have done plenty of fixes since then, including a few related to
> pause/unpause.
>
> > Libcurl stops calling the WRITEFUNCTION after pause is requested. However,
> > libcurl keeps receiving data on the connection and buffers it. At some
> > point, when transferring 1GB objects, memory allocation fails and libcurl
> > returns CURLE_OUT_OF_MEMORY.
>
> I think there's certainly room for improvement here. With HTTP/2, streams that
> are paused should get their window zeroed so that we properly tell the server
> to stop delivering data and I'm pretty sure we do not do that. (and
> subsequently we should then also do the reverse when such a stream is unpaused
> again)
>
> > 1. The code in lib/transfer.c:Curl_readwrite does seem to avoid reading from
> > the connection when KEEP_RECV_PAUSE is set. Is the intent of the pause
> > functionality to stop reading from the socket when WRITING is paused?
>
> When it's HTTP/2, there can be many individual transfers over the same
> physical connection so we *can't* stop reading from a connection just because
> a stream is paused - the connection also has other things going on like ping
> packets.

After some more testing, I found that the problem occurs with HTTP/1.1 as well, though less frequently.
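
For context, this is roughly how the pause gets requested on my side. It is a minimal sketch only: the struct xfer bookkeeping and the high_water threshold are made-up names standing in for my real consumer-side buffering, and the part that actually hands the data to the consumer is omitted.

#include <curl/curl.h>

/* Hypothetical per-transfer state: the real code tracks a consumer-side
   buffer; 'high_water' here just marks the point where I want to pause. */
struct xfer {
  size_t received;
  size_t high_water;
};

static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  struct xfer *x = (struct xfer *)userdata;
  size_t bytes = size * nmemb;

  if(x->received >= x->high_water) {
    /* Ask libcurl to stop calling this callback until the handle is
       unpaused. This chunk is not consumed; libcurl holds on to it and
       delivers it again after the unpause. */
    return CURL_WRITEFUNC_PAUSE;
  }

  /* ... hand 'ptr'/'bytes' off to the consumer here ... */
  x->received += bytes;
  return bytes;
}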

>
> > 2. Is pausing/un-pausing the right way for applications to control the flow
> > of data between libcurl and http server? Is there any other mechanism?
>
> If you really need to stop the data from flowing, that's the method I can think of.
>
> > 3. My understanding is that curl_multi_perform ultimately calls
> > lib/transfer.c:readwrite_data to read data off the connection from the http
> > server and deliver it to the WRITEFUNCTION using Curl_client_write. When the
> > WRITEFUNC requests pause, this state is set in the easy handle, but
> > readwrite_data does not check it and keeps reading data from the socket that
> > it cannot deliver to the WRITEFUNC. If this is indeed a bug, should
> > readwrite_data check the KEEP_RECV_PAUSE flag in the easy handle and stop
> > reading?
>
> Hm, yes that looks like maybe it should!
>
> > 4. Curl_easy_pause may call the writefunc handler synchronously. This goes
> > against the intent of using multi-interface where curl_multi_perform is used
> > to move all data. When using multi interface, is curl_easy_unpause the right
> > way to un-pause paused transfers?
>
> There is no curl_easy_unpause. And "intent" or not, that's the only way I
> could think of making it work when I worked on adding the support for pause.
> If you can improve that, then please help us out!

My question was from an API user's viewpoint. Based on your reply, using curl_easy_pause(easy_handle, CURLPAUSE_CONT) is the right way to unpause.
I expected curl_easy_pause to reset the KEEP_RECV_PAUSE flag on the easy handle and let curl_multi_perform do the rest, i.e. resume the transfer.
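
Concretely, the resume path I use looks something like the sketch below. resume_transfer is a made-up helper name; the only libcurl calls in it are curl_easy_pause() and curl_multi_perform(), and real code would check the return codes properly.

#include <curl/curl.h>

/* Hypothetical helper, called once the consumer has drained its buffer.
   'easy' is the paused easy handle, still added to 'multi'. */
static void resume_transfer(CURL *easy, CURLM *multi)
{
  /* CURLPAUSE_CONT clears both the RECV and SEND pause bits. As discussed
     in question 4, curl_easy_pause() may call the write callback
     synchronously, with the data libcurl buffered while paused, before it
     returns. */
  if(curl_easy_pause(easy, CURLPAUSE_CONT) != CURLE_OK)
    return; /* real code would report this */

  /* From here on, curl_multi_perform() drives the transfer as usual. */
  int still_running = 0;
  curl_multi_perform(multi, &still_running);
}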

>
> > 5. Why does maxloops=100 in readwrite_data not stop it after reading 100
> > times from the connection?
>
> It is only meant as a way to prevent libcurl from getting "stuck" in that loop if
> there's always data to read. Why do you say that doesn't work?
>
> > I sometimes see more than 1000 invocations of debugfunction that indicate
> > there were 1000 reads of data from the connection.
>
> Ok, but that doesn't necessarily mean that it looped more than 100 times in
> that specific loop, right?

I am doing only one transfer at a time during these tests. That's why I expect the reads to stop after 100 iterations.
With HTTP/1.1 behaving the same way, I find it strange that there are that many receives.

> > A change (lib/transfer.c:readwrite_data(…)) to stop reading from a connection
> > when the WRITEFUNCTION has requested pause is:
>
> Would you like to submit this as a PR on github?
>
> > I understand this is against the limit of maxloops=100.
>
> Why so? That limit is there to make it not loop "too much", and this break
> certainly will not work against that!
>
> > Are there other gotchas with this approach or is this against the principle
> > of what is being done in this code path?
>
> I would like to get a full example source code that can reproduce this problem
> before I can judge this suggested fix properly - ideally an example that uses
> two streams from the same host and that pauses only one of them. At a first
> glance the fix seems reasonable, but with multiplexed HTTP/2 things are
> somewhat complicated at times...
>

My code does not translate directly into a simple program that I can share, so I can't promise that.
I will work on a PR, though.
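
For reference, the change I have in mind is roughly the following, inside the read loop of lib/transfer.c:readwrite_data(). It is written against the 7.52.1 source I have been testing, so the surrounding names (the SingleRequest pointer 'k' and the loop condition) may look different in current master; treat it as a sketch of the idea rather than the final patch.

/* inside readwrite_data()'s do { ... } while(data_pending(conn) && maxloops--)
   loop: stop pulling more data for this transfer once the write callback has
   asked for a pause */
if(k->keepon & KEEP_RECV_PAUSE) {
  /* the client cannot accept more data right now; leave the rest in the
     socket/stream buffers until the handle is unpaused */
  break;
}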
