curl-library Mailing List Archives

Re: [patch] preliminary high_speed_limit patch

From: Daniel Stenberg <daniel_at_haxx.se>
Date: Tue, 13 Jun 2006 14:25:45 +0200 (CEST)

On Tue, 13 Jun 2006, Peter wrote:

> At the end of the process, it would be reasonable to remove -Z and replace
> the implementation of --limit-rate.

That was kind of my point.

>> usleep() is not a portable function and cannot be used unconditionally like
>> that. Compare with the code used for this purpose in src/main.c.
>
> OK, as a start, would a patch to rename main.c/go_sleep to a
> curlx_sleep routine, move it into lib somewhere, and have it called from
> the mainline be welcome?

Yes, that sounds like a good plan.

>> Also, if you are you doing it with a sleep why not do a calculation for how
>> long you should sleep and do a single sleep() call instead? It would save
>> an awful lot of CPU used in vain. Again, that is how the command line tool
>> logic works.
>
> I was doing something quick to see what the reaction would be. I have now
> looked at the command line stuff, and it seems to be doing separate
> calculations. I'd prefer to calculate based on the values already set up for
> me by progress.c and not maintain any state.

"separate calculations"? You mean separate for read and write? Yes it does,
since it has two different limits and the logic for the two callbacks is
actually somewhat different, so it is necessary to handle them differently.

> The initial patch seems (just eyeball tests) to work as comparably to the
> command line stuff in terms of accuracy of the rate delivered, but has a lot
> less code.

Yes, but you accept many thousands of laps in a loop while I do not, so when
you convert the loop to a single calculation you might see that you need to
do it slightly differently, and perhaps with slightly longer code.
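To be concrete, the single-calculation version is small too. This is a
hypothetical helper, not code from curl; the inputs are just the values
progress.c already tracks (bytes moved and time elapsed):

```c
/* Hypothetical helper, not actual curl code: given how many bytes have
   been moved, how many milliseconds have elapsed, and the configured
   bytes-per-second limit, return how long to wait so that the average
   rate falls back to the limit. */
static long sleeptime_ms(long long bytes, long elapsed_ms, long long limit_bps)
{
  long long minimum_ms;
  if(limit_bps <= 0)
    return 0;                          /* no limit configured */
  /* at the configured rate, the bytes should have taken this long */
  minimum_ms = bytes * 1000 / limit_bps;
  return (minimum_ms > elapsed_ms) ? (long)(minimum_ms - elapsed_ms) : 0;
}
```

Call it once per lap; a zero result means the transfer is at or below the
limit and no waiting is needed.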

>> Whoa, that's not a good solution. If a socket is found readable by select()
>> and curl_multi_perform() just returning will still have the socket readable
>> and any ordinary app will busy-loop like mad.
>>
>> No, the multi interface needs to make sure that the socket is not
>> re-triggered in select() until a certain time has elapsed.
>
> OK so it looks like you have a finite-state-machine implementation...

Correct.

> -- when overspeed (as detected in existing patch), go into
> CURLM_STATE_TOO_FAST
> -- when state of handle is CURLM_STATE_TOO_FAST, I adjust
> curl_multi_fdset to NOT put its fd in the list for listening.
> -- add a case in curl_multi_perform which:
> -- calls pgrs_Update
> -- transitions to CURLM_STATE_PERFORM if we have waited long
> enough.
>
> Would that be OK?

Sounds perfect. I can only think of two minor improvements:

  1 - it could work for all states, and when it goes into CURLM_STATE_TOO_FAST
      it could save the previous state so that it can go back to the very same
      one afterwards. This is just a nitpick though, since I guess this
      condition will only occur in CURLM_STATE_PERFORM - but it would mostly
      be a matter of writing code that is more reliable for future
      changes.

  2 - when you calculate how long the app can/should wait until it calls
      curl_multi_perform() the next time, it should set that time with
      Curl_expire(), since then upcoming apps that use curl_multi_timeout()
      will work just perfectly with this concept too.
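Put together, the transition could be sketched like this. Everything below is
invented for illustration (the real multi.c state machine has many more
states and different names); it mainly shows the save-the-previous-state idea
from point 1:

```c
/* Toy model of the proposed TOO_FAST handling; names are made up */
enum mstate { STATE_CONNECT, STATE_PERFORM, STATE_TOO_FAST, STATE_DONE };

struct handle {
  enum mstate state;
  enum mstate prev;        /* state to return to after TOO_FAST */
  long long rate_bps;      /* measured average transfer rate */
  long long limit_bps;     /* configured ceiling, 0 = no limit */
};

/* one lap of a toy curl_multi_perform() for a single handle */
static void multi_step(struct handle *h)
{
  if(h->state != STATE_TOO_FAST &&
     h->limit_bps && h->rate_bps > h->limit_bps) {
    h->prev = h->state;          /* remember where we came from */
    h->state = STATE_TOO_FAST;   /* and stop listening on the socket */
    return;
  }
  if(h->state == STATE_TOO_FAST) {
    /* real code would update progress and only leave once enough time
       has passed, signalled via Curl_expire()/curl_multi_timeout() */
    if(h->rate_bps <= h->limit_bps)
      h->state = h->prev;        /* resume the interrupted state */
  }
}
```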

> The thing is... I don't have an easy way to test the easy calls / multi
> routines in libcurl... does someone have a command line I can invoke
> for testing? All I can check for is lack of breakage in the 500 series of
> tests.

Some of them are using the multi interface, so you could take one of those (or
one of the docs/examples codes) and convert it into a little testing app for
your own purpose.

One of my long-term goals with libcurl is to _only_ work with the multi
interface internally, and provide the easy interface entirely by wrapping
multi interface functions.

>> Also, I would think that we want two different limits for upload and
>> download.
>
> The code is simpler if I do not differentiate. Each individual transfer is
> going to be in one direction; if we take the highest rate of the two it
> should be the right answer... no? Is there a reason to differentiate?

A single transfer can go in both directions:

  1 - HTTP operations such as PUT and POST send and receive data

  2 - Upcoming HTTP Pipelining will introduce even more simultaneous upload
      and download

... and as pointed out by Armel Asselin, in the current world of ADSL
connections and similar, lots of people have asymmetric bandwidths, so
differentiating them makes sense to me.

-- 
  Commercial curl and libcurl Technical Support: http://haxx.se/curl.html
Received on 2006-06-13
