cURL / Mailing Lists / curl-library / Single Mail


Re: Anyone for HTTP Pipelining?

From: Dan Fandrich <>
Date: Thu, 22 Jun 2006 15:40:15 -0700

On Thu, Jun 22, 2006 at 11:25:39PM +0200, Daniel Stenberg wrote:
> - Have an option like "attempt pipelining" and then it _may_ use that if an
>   existing connection is already present against our target HTTP server? May
>   cause funny effects if the first transfer is a slow big file and the second
>   is a very small one... Also probably requires some kind of notification
>   support so that the app can get to know that the handle is put "in line"
>   for pipelining.
> - We need options to control max pipeline length, and probably how to behave
>   if we reach that limit.

From an application point of view, I would prefer libcurl to handle all the
scheduling so the app doesn't need any complicated run-time scheduling
logic of its own. That would mean being able to specify a max pipeline
depth D, a max number of simultaneous HTTP connections during normal
operation S, plus ideally a latency parameter L and perhaps a per-transfer
priority setting P.

For an application requiring N transfers, libcurl would then open
1 HTTP connection and fill the pipeline with D requests. If N > D,
libcurl would then open a second HTTP connection and fill its pipeline,
and continue for up to S connections. If the application requests more
transfers and all existing pipelines are stalled (due to large downloads),
then after latency period L (specified as a time or bandwidth figure),
libcurl would open yet another connection (we'd probably need an S2
parameter for absolute maximum connections).

The priority setting P could be used in addition to L or instead of it. If
the application has an important transfer, it submits it with a high
priority, and libcurl will immediately (or after period L) open a new
connection if no connections are completely free. If a low-priority
transfer is submitted, libcurl would never open a new connection, but
would wait until all higher-priority requests are complete.

A web browser application might use something like D=5 and S=1. Requests for
things like embedded style sheets would get a high priority, favicon.ico a
low priority, images a variable priority depending on whether they're within
the current view, and downloaded files a low priority. As it parses the
page, the browser submits requests and libcurl orders them as the
application requested, for the best user experience. Yet from the
application's point of view, there's no change to the multi API (besides
turning pipelining on, and the optional transfer priority).

The more I think about this, the more I realize this kind of scheduling
could get pretty complicated. I'll bet there's published research out there
on how best to implement HTTP pipelining, done by people who've put a lot
more thought into it than I have.

>>> Dan

--              The web change of address service
          Let webmasters know that your web site has moved
Received on 2006-06-23