On Wed, May 14, 2008 at 12:34:08AM +0200, Daniel Stenberg wrote:
> On Tue, 13 May 2008, Jeff Weber wrote:
>> I need to make nonblocking uploads of multiple files using either the
>> scp:// or ftp:// protocols. In some cases, the code must upload via
>> scp://, in other cases ftp://.
>> Given the comment in the libcurl-multi(3) man page that ftp:// still
>> involves some blocking code, even in the multi interface, I decided to use
>> the easy interface and split my transfer into pthreads.
> Yeps, there are a few blocking cases still around.
>> Most of the code is in the parent thread, and only the curl_easy_perform()
>> in a child thread. If the child thread "takes too long", I cancel it.
>> Regardless, the parent always calls curl_easy_cleanup() and
>> curl_global_cleanup(). Unfortunately, cancelling curl_easy_perform()
>> leads to memory leaks.
> Yes. Doing that is badness! See this:
>> Are there any other suggestions for implementing nonblocking ftp://?
> You're on the right track. Just stop the transfers the right way and you'll
> be fine.
Ok, I added read and progress callbacks that have timeout logic and
can return abort values. In my tests, I upload the same file N times
through the same curl handle, and I break the connection to the FTP
server at random points during the N uploads. For some transfers, the
callbacks run and abort the transfer child thread (hooray!). For
other transfers, the callbacks never run, and the transfer hangs. If
I break the connection before running my test program, the callbacks
run and the transfer is aborted as expected.
Is there a chance I am using the callbacks wrong, or are my results
consistent with the code?
Received on 2008-05-14