curl-library

Re: libcurl segfault in curl_free()

From: Steve Webb <steve_at_badcheese.com>
Date: Mon, 13 Feb 2006 15:18:19 -0700 (MST)

Never mind - I think that "NOSIGNAL" option did the trick. I'll follow up
if it segfaults again, but so far I've run it about 20 times and it has
completed cleanly.
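
For the record, the change was a single line in the per-thread loop,
something like this (a sketch; it assumes a libcurl built with the standard
synchronous resolver, where DNS lookups are timed out with alarm()/SIGALRM):

    /* Tell libcurl to never install signal handlers.  With the
     * synchronous resolver it otherwise uses alarm() + SIGALRM to time
     * out DNS lookups, and signals are process-wide: with 99 threads,
     * one thread's alarm can fire while another thread is mid-resolve
     * and corrupt its state.  The trade-off is that name lookups can
     * then no longer time out unless libcurl is built with c-ares. */
    curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);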

- Steve

On Mon, 13 Feb 2006, Steve Webb wrote:

> Date: Mon, 13 Feb 2006 14:28:08 -0700 (MST)
> From: Steve Webb <steve_at_www.badcheese.com>
> To: curl-library_at_cool.haxx.se
> Subject: libcurl segfault in curl_free()
>
> Hello.
>
> I've got a multi-threaded app that uses curl in each thread. The core of
> the thread is here:
>
> tmp_url = get_next_url();
> while (tmp_url) {
>     curl = curl_easy_init();
>     curl_easy_setopt(curl, CURLOPT_URL, tmp_url->url);
>     curl_easy_setopt(curl, CURLOPT_WRITEDATA, outfile);
>     curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_write_func);
>     curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
>     // curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 1L);
>     curl_easy_setopt(curl, CURLOPT_TIMEOUT, 30L);
>     curl_easy_perform(curl);
>     /* CURLINFO_SIZE_DOWNLOAD fills a double, CURLINFO_RESPONSE_CODE a long */
>     ret = curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &size);
>     curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &retcode);
>     if (retcode == 200) {
>         tmp_url->new_size = size;
>     } else {
>         tmp_url->new_size = tmp_url->old_size;
>     }
>     fprintf(stderr, "thread #%02d, size:(%6g/%6g), ret: %3ld, url: %s\n",
>             thread_num, tmp_url->old_size, size, retcode, tmp_url->url);
>     curl_easy_cleanup(curl);
>     tmp_url = get_next_url();
> }
>
> When the threads all exit, I (occasionally) get a segfault in curl_free().
> It doesn't happen every time - I might need to run it several times before
> I get a segfault, but it *does* happen. The source spawns 99 threads; you
> can change that if you'd like, but keep the data file the same: shorter
> data files segfault less often, so the longer data file should produce a
> segfault sooner.
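
One more thing worth checking in a setup like this (a sketch, not code from
the crawler tarball): curl_global_init() is not thread-safe, so it has to
run once before any of the 99 threads are spawned. If it is never called
explicitly, each curl_easy_init() performs the global setup lazily, and two
threads can race on it. curl_global_cleanup() has the same restriction on
the way out.

    #include <curl/curl.h>
    #include <pthread.h>

    int main(void)
    {
        pthread_t tid[99];

        /* Not thread-safe: must happen before the first thread exists. */
        curl_global_init(CURL_GLOBAL_ALL);

        /* ... pthread_create() the crawler threads here, then
         * pthread_join() them all ... */

        /* Also not thread-safe: call only after every thread has joined. */
        curl_global_cleanup();
        return 0;
    }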
>
> How to reproduce:
>
> cd /tmp
> wget http://badcheese.com/~steve/crawler.tar.gz
> tar xzvf crawler.tar.gz
> cd crawl
> make
> ./crawl
>
> Any help would be greatly appreciated!
>
> - Steve
>
> --
> EMAIL: (h) steve@badcheese.com WEB: http://badcheese.com/~steve

--
EMAIL: (h) steve@badcheese.com  WEB: http://badcheese.com/~steve
Received on 2006-02-13