On Thursday 26 September 2002 21:23, Daniel Stenberg wrote:
> Sounds like a school-book example for which we introduced the multi
> interface. It doesn't require you to immediately call curl_multi_perform().
> The return code merely tells you that there is (or at least might be) still
> more data to transfer, and thus you're kindly requested to call
> curl_multi_perform() again before you select() again.
OK. So, any time I call curl_multi_perform(), if I want to be sure that all
the available data has been transferred, I have to loop until something other
than CURLM_CALL_MULTI_PERFORM is returned, or I can decide to arbitrarily call
curl_multi_perform() fewer times and call it again later. This is the most
flexible way, I think, leaving the decision to the application.
> That's a question of definition. How can it take "more time than
> necessary"? If there's data to transfer all the time, then how can that be
> more than necessary?
You are totally right! I asked these questions because yesterday I didn't
exactly know the behaviour of the library or what to expect from it.
Today I have written a little test program: within its infinite loop, the
first n iterations call select() with a timeout of 1/100 s, and upon each
timeout a new easy handle is created, added to the multi handle, and
curl_multi_perform() is called until something other than
CURLM_CALL_MULTI_PERFORM is returned.
After all the planned connections have started, I use a blocking select() to
save CPU cycles :-) and then I loop again, calling curl_multi_perform() only
when data is available.
I've seen that, even with n = 60, the while loop after the creation of an
easy handle executes at most three times, and the same kind of loop, executed
when data is available, executes at most once. This is VERY GOOD, because it
guarantees that the application will never block longer than necessary
(except perhaps for DNS resolution, but I know a solution for that is being
worked on). And this while transferring, in parallel, responses ranging from
1 KB up to 1.2 MB! It works very well!!
> So you add N number of easy handles but only want L of them to transfer
> data? May I ask you why?
Yesterday I came up with that convoluted scheme because I didn't know what to
expect from the library in the real world! It was all based on the fear that
the loop calling curl_multi_perform() could take too long, which turns out
not to be the case. So forget about it, sorry. :-)
> But no, we don't have any current way to only transfer data on a selected
> set of easy handles within a single multi handle. I haven't yet come up
> with a scenario in which one would want that.
> If this is really needed/wanted, we must consider a way to add this
I don't think it should ever really be needed... in my case it would only be
useful for making the response time to one particular client slower than the
others :-) which makes no particular sense after all...
> When the number of still running connections change, you can get the
> completed easy handles (as there may be more than one that are completed in
> the same function call) by calling curl_multi_info_read(). If the amount
> doesn't change, there's no point in calling it.
After calling curl_multi_perform() I have a check, made only after at least
one easy handle has been started: if the previous number of running
connections is greater than the current one, it repeatedly calls
curl_multi_info_read(), and for each CURLMSG_DONE the corresponding easy
handle is looked up in my connections array so it can be cleaned up.
It works. :-)
I have one doubt: after the end of an easy transfer, its state within the
multi handle is set to something like "COMPLETED", right? So, if nothing sets
its state back to "INIT", how is it possible to reuse the same easy handle to
immediately perform another transfer within the same multi handle? Is there a
way to set its state back to "INIT" other than removing and re-adding the
easy handle to the multi handle? That is not complicated, but it seems
avoidable to me...
> I'm interested in your opinions and ideas on what we can do to make libcurl
> fit your needs. The multi interface is still growing up.
Many thanks for your suggestions, and for the quality of the library !
Received on 2002-09-27