
curl-library

Re: [patch] preliminary high_speed_limit patch

From: Peter <peter_at_bsqt.homeip.net>
Date: Tue, 13 Jun 2006 06:45:55 -0400

On Monday 12 June 2006 08:08, you wrote:
>
> I agree that the feature has been wanted and has been missed by other users
> in the past so I appreciate your work on this!
>

Thanks!

> > -- added -Z to cli set high_speed_limit (just for testing, someone might
> > have a better name later...)
>
> What's wrong with simply using the --limit-rate that already is there?
> There's hardly any idea to offer the same feature implemented twice, is it?
>

It is to stay out of the way during development:
-- it does not break current functionality that way,
-- one can compare the mainline and library implementations without re-compiling,
-- I don't have to cleanly dissociate the existing uses of the variables
before I start.

At the end of the process, it would be reasonable to remove -Z and replace the
implementation of --limit-rate.

> > -- used all the data available from the progress module and stuck an
> > if statement before the calls to Curl_readwrite.
> >
> > -- in transfer.c ... if too fast, then usleep (single streamed) &
> > curl_pgrsUpdate
>
> usleep() is not a portable function and cannot be used unconditionally like
> that. Compare with the code used for this purpose in src/main.c.
>

OK, as a start, would a patch that renames main.c/go_sleep to a curlx_sleep
routine, moves it into lib somewhere, and has the command line tool call it be
welcome?
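
For reference, here is roughly what I imagine that routine looking like,
assuming it keeps the same approach as go_sleep in src/main.c (Sleep() on
Windows, a plain select() timeout elsewhere); the guards and the final name
are just a sketch, of course:

  #ifdef WIN32
  #include <windows.h>
  #else
  #include <sys/select.h>
  #endif

  /* sleep for (roughly) the given number of milliseconds, portably */
  void curlx_sleep(long ms)
  {
  #ifdef WIN32
    Sleep(ms);
  #else
    struct timeval timeout;
    timeout.tv_sec = ms / 1000;
    timeout.tv_usec = (ms % 1000) * 1000;
    /* select() with no fds is the usual portable sub-second sleep */
    select(0, NULL, NULL, NULL, &timeout);
  #endif
  }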

> Also, if you are doing it with a sleep, why not do a calculation for how
> long you should sleep and do a single sleep() call instead? It would save
> an awful lot of CPU used in vain. Again, that is how the command line tool
> logic works.
>

I was doing something quick to see what the reaction would be. Now that I
have looked at the command line stuff, it seems to be doing its own separate
calculations; I'd prefer to calculate based on the values already set up for
me by progress.c and not maintain any extra state.
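
To show what I mean by stateless, roughly this kind of calculation, using
only totals that progress.c already keeps (the parameter names below are
illustrative, not the real Progress struct members):

  #include <curl/curl.h>  /* curl_off_t */

  /* how long to pause so the average rate falls back to the limit;
     'transferred' and 'elapsed_ms' come straight from the progress
     data, nothing extra is remembered between calls */
  long too_fast_sleep_ms(curl_off_t transferred, long elapsed_ms,
                         curl_off_t limit_bps)
  {
    long expected_ms, excess_ms;
    if(limit_bps <= 0)
      return 0;
    /* time the transfer should have taken so far at the limit */
    expected_ms = (long)(transferred * 1000 / limit_bps);
    excess_ms = expected_ms - elapsed_ms;
    return (excess_ms > 0) ? excess_ms : 0;
  }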

The initial patch seems (just eyeball tests) to work comparably to the
command line stuff in terms of accuracy of the delivered rate, but with a lot
less code.

I will study a bit...

> > -- in multi.c ... if too fast, then skip your turn. (curl_pgrsUpdate,
> > then break)
>
> Whoa, that's not a good solution. If a socket is found readable by select()
> and curl_multi_perform() just returning will still have the socket readable
> and any ordinary app will busy-loop like mad.
>
> No, the multi interface needs to make sure that the socket is not
> re-triggered in select() until a certain time has elapsed.
>

OK so it looks like you have a finite-state-machine implementation... if I do
something like:

        -- when overspeed (as detected in the existing patch), go into
           CURLM_STATE_TOO_FAST
        -- when the state of the handle is CURLM_STATE_TOO_FAST, adjust
           curl_multi_fdset to NOT put its fd in the list for listening
        -- add a case in curl_multi_perform which:
                -- calls curl_pgrsUpdate
                -- transitions to CURLM_STATE_PERFORM if we have waited
                   long enough

Would that be OK?
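
Roughly where the pieces above would go, as a sketch and not a real patch
(the helper and field names here are my guesses/placeholders, not the actual
multi.c internals):

  /* in curl_multi_fdset(): */
    case CURLM_STATE_TOO_FAST:
      /* add no sockets for this handle, so select() in the app does
         not keep waking up for data we refuse to read yet */
      break;

  /* in curl_multi_perform(): */
    case CURLM_STATE_TOO_FAST:
      curl_pgrsUpdate(easy->easy_conn);   /* keep the averages fresh */
      if(!still_too_fast(easy))           /* placeholder check against
                                             the configured limit */
        multistate(easy, CURLM_STATE_PERFORM); /* waited long enough */
      break;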

The thing is... I don't have an easy way to test the 'easy' calls / multi
routines in libcurl... does someone have a command line tool I can invoke
for testing? All I can check for is lack of breakage in the 500 series of
tests.

> Also, I would think that we want two different limits for upload and
> download.

The code is simpler if I do not differentiate. Each individual transfer is
going to be in one direction, so if we take the higher of the two rates it
should be the right answer... no? Is there a reason to differentiate?
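
That is, a single check along these lines (the speed names are illustrative,
taken as whatever progress.c currently reports):

  /* one shared limit: throttle when the faster of the two current
     rates goes over it */
  curl_off_t fastest = (ulspeed > dlspeed) ? ulspeed : dlspeed;
  if(limit_bps > 0 && fastest > limit_bps)
    too_fast = TRUE;   /* treat the transfer as too fast */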
Received on 2006-06-13