
curl-tracker mailing list Archives

[ curl-Bugs-1229810 ] Memory leak in curl_easy_perform method

From: SourceForge.net <noreply_at_sourceforge.net>
Date: Tue, 17 Oct 2006 05:12:13 -0700

Bugs item #1229810 was opened at 2005-06-29 18:18
Message generated for change (Comment added) made by mister-x
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=1229810&group_id=976

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: libcurl
Group: wrong behaviour
Status: Closed
Resolution: Works For Me
Priority: 7
Submitted By: jgarvia (jgarvia)
Assigned to: Daniel Stenberg (bagder)
Summary: Memory leak in curl_easy_perform method

Initial Comment:
Hi all,

I am currently using the libcurl library to perform file
transmissions via FTP.

I have found memory leaks while using the
curl_easy_perform method.

I attach the report summary generated by the Insure tool.
I could not generate the 'memdump' file because the
Makefile does not work on my Tru64 UNIX machine
(./configure --enable-debug), but I believe the Insure
report is complete enough for this purpose.

For reference, here is the libcurl version (curl -V) I
am using (I know it is not the latest one):

************************ curl -V ********************************
curl 7.13.0 (alphaev68-dec-osf5.1a) libcurl/7.13.0
zlib/1.0.8
Protocols: ftp gopher telnet dict ldap http file
Features: IPv6 Largefile libz
******************************************************************

Could anybody tell me if this bug is fixed in the latest
library? (Or if it is a new bug that should be fixed)

Thanks in advance for all your support.

Julio Garvía

----------------------------------------------------------------------

Comment By: Sven Wegener (mister-x)
Date: 2006-10-17 14:12

Message:
Logged In: YES
user_id=225018

Ignore that comment, I was looking at a pretty old version
of libcurl.

----------------------------------------------------------------------

Comment By: Daniel Stenberg (bagder)
Date: 2006-10-17 13:57

Message:
Logged In: YES
user_id=1110

uh, I meant to write "to the curl-library mailing list"

----------------------------------------------------------------------

Comment By: Daniel Stenberg (bagder)
Date: 2006-10-17 13:56

Message:
Logged In: YES
user_id=1110

Sven,

You added a comment to a bug report closed over a year ago.
It was not a leak then and I don't think this is a leak now
either, in spite of your analysis claiming the opposite.

If you really think you've found a leak, please post a
source code snippet we can use to repeat the problem (with
the latest libcurl) to the curl-library and we'll work on it.

----------------------------------------------------------------------

Comment By: Sven Wegener (mister-x)
Date: 2006-10-17 13:52

Message:
Logged In: YES
user_id=225018

Hi!

I stumbled across this bug myself; please read on:

The DNS cache is written as a linked list and stores the
structure returned by getaddrinfo. The structure is
dynamically allocated by getaddrinfo using malloc. But
getaddrinfo also uses strdup (which uses malloc) to store
the hostname in this structure. When Curl_close is called
and the list entry is freed by Curl_llist_remove, it only
frees the addrinfo struct and not the strdup'ed hostname
contained in the structure. You need to use
Curl_freeaddrinfo to free it; this will take care of doing a
deep free() on the structure.

The best solution is to assign a free function to the list,
which gets used to free the list entries.

Sven

-- 
Sven Wegener
Gentoo Developer
http://www.gentoo.org/
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-08-04 10:01
Message:
Logged In: YES 
user_id=1110
Thanks for your report, your details and your hard work. I
hope you'll work it out, and don't hesitate to get in touch
again if you suspect problems in libcurl.
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-08-04 10:00
Message:
Logged In: YES 
user_id=1304874
According to your comments, I think the subject is also
closed on my behalf (so the status of this bug can be
switched to 'closed', can't it?).
I will investigate whether the problem is in 'libc' or
even in 'insure'.
Anyway, it has been a pleasure to work with you, and
needless to say, if there is any kind of help I could give
you, you only have to contact me at:
jgarvia_at_mixmail.com
Thanks a lot for all your work.
Best wishes,
Julio Garvía.
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-08-03 20:18
Message:
Logged In: YES 
user_id=1110
I know you've reported ps uax outputs. But you did this in
the curl/libcurl bug tracker and I've only researched this
topic in the aspect of possible bugs in libcurl. I therefore
don't care about leaks or problems elsewhere. Hence I use
tools and ways to verify that libcurl works fine. When it
does, I'm happy and the subject is closed on my behalf.
That may not suit you, but that is as far as my effort goes.
I would suggest that your libc version has a memory leak or
that insure is wrong. Such things have happened before.
But as far as this bug tracker and libcurl goes, this is not
a bug.
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-08-01 14:38
Message:
Logged In: YES 
user_id=1304874
As I mentioned in previous messages, what first alerted us
to possible memory leaks was the `ps axu' reports.
That is, after running the command line process (in UNIX)
for three days (in indefinite mode, continuously
transferring files), I noticed that the memory usage
increases without stopping.
However, I ran the same scenario in Linux and no memory
increase was observed.
Moreover, running the process in UNIX and compiling with
Insure, the following line is pointed for leaving memory
leaks at exit:
#File: hostip6.c
error = getaddrinfo(hostname, sbuf, &hints, &res);
Additionally, Insure also points the following line as
'Outstanding allocated memory':
#File: inet_ntop.c
ptr = inet_ntoa_r(*(struct in_addr*)src, dst);
No more reports are generated by Insure.
So I cannot be sure that there is a memory leak,
but... is such memory consumption normal? (I do not know)
Julio Garvía
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-08-01 13:46
Message:
Logged In: YES 
user_id=1110
Thanks for your work on making test programs!
I made it iterate three times with a full source dir each
time, thus uploading 600 files.
I didn't detect any leak in libcurl (using the built-in
libcurl memory debug tool).
I then re-ran it with valgrind with the same three
iterations, and it too found no memory leaks at all.
So, whatever you see I can't see that libcurl is to blame
for any memory leak.
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-08-01 12:14
Message:
Logged In: YES 
user_id=1304874
O.K.
I have attached a new *.c version that allows executing a
limited number of scanning iterations.
With the '-i' command line option, you can specify the
maximum number of scanning iterations (if you do not specify
it, indefinite iterations are the default).
PS: You can terminate the program (in a controlled way) by
sending TERM or INT signals.
Regards,
Julio Garvía
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-07-30 23:46
Message:
Logged In: YES 
user_id=1110
Can't you make the example run for a limited number of
rounds and then exit? If it leaks, it would make it "easy"
to figure out and detect the leaks then. A never-ending
program is a lot harder.
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-07-28 10:55
Message:
Logged In: YES 
user_id=1304874
Hello,
After a long time, I am here again with the memory leak
issue (first of all, sorry for the delay).
Well, I have prepared another command line program to
perform FTP transfers continuously. It scans (in an
infinite loop) a given directory looking for files to be
transferred (once the FTP is done, the file is removed
from the source path).
(Please execute MyFtp to see the command line options.)
If you execute this process (in UNIX) you will see how 'ps
axu' reports that memory is increasing.
I have checked it with the following scenario:
1. Start the MyFtp process as follows:
MyFtp -h <Host> -l <User> -p <Pwd>
-s/usr/users/jgarvia/MyFtp/FtpSource
-t/usr/users/jgarvia/MyFtp/FtpTarget -c 60
2. Insert entries to be transferred into the source path.
I put 200 entries every 2 minutes via the 'PutFiles.sh' script.
3. Check the memory reports with 'ps axu'.
I did this with the 'MemoryLog.sh' script.
PS: I have run MyFtp process during two days.
PS: Affected files are attached in MyFtp_01.tar.gz
Hopefully, it could help us to determine if there is a
memory leak or not.
With best regards,
Julio Garvía
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-07-06 15:05
Message:
Logged In: YES 
user_id=1304874
Certainly, the points mentioned by valgrind are not reported
as memory leaks but as 'something strange'.
Anyway, I am still trying to find out why the memory is
increasing.
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-07-06 12:01
Message:
Logged In: YES 
user_id=1110
Your program never ends so whenever it is stopped of course
there are still some allocated resources.
If you make it loop only N laps, does it still show those
leaks? I certainly couldn't see any.
But feel free to prove me wrong!
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-07-06 11:45
Message:
Logged In: YES 
user_id=1304874
Just one note:
Executing the program I attached (in Linux) under 'valgrind'
with the '-v' and '--leak-check=yes' options, some information
is reported about the same points mentioned in the Insure report.
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-07-06 00:24
Message:
Logged In: YES 
user_id=1110
Yes it certainly sounds like a leak. I just can't see
where/why libcurl would leak it.
I'm puzzled why your insure tool reports a leak when my
tests don't leak at that point...
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-07-05 16:40
Message:
Logged In: YES 
user_id=1304874
Not so far,
but 'ps axu' shows that memory is increasing...
I will continue investigating.
Hopefully there are no leaks, but... why is the memory
increasing when calling curl_easy_perform()? (I do not
know so far)
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-07-05 16:03
Message:
Logged In: YES 
user_id=1110
Any news?
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-07-01 13:54
Message:
Logged In: YES 
user_id=1110
Ok, I modified the while(1) to become a counter=10;
while(counter--) instead and ran it on my linux box (using
getaddrinfo).
Not a single byte leaks...
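[Editor's note: the change Daniel describes can be sketched like this, with a hypothetical do_transfer() standing in for the curl_easy_perform() call in the test program.]

```c
#include <assert.h>

static int laps_run = 0;

/* stand-in for one transfer; in the real test program this is the
 * curl_easy_perform() call inside MyFtp's loop */
static void do_transfer(void) { laps_run++; }

static int run_bounded(int laps) {
    /* was: while(1) -- an unbounded loop never reaches a clean exit,
     * which is what at-exit leak checkers need; a counted loop does */
    int counter = laps;
    while (counter--)
        do_transfer();
    return laps_run;
}
```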
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-07-01 11:57
Message:
Logged In: YES 
user_id=1304874
Hi again,
I have prepared an example that uploads the same file in an
infinite loop, to see how the memory is increasing.
You just need to modify the Makefile (LIBCURLDIR and
INCCURLDIR) and call 'make' to compile.
Then type 'MyFtp' and the usage will be shown.
When you execute the 'MyFtp' application, the same file will
be uploaded to the same location indefinitely (until a
termination signal).
Then if you periodically execute 'ps' (I have also attached
a script: 'Log_Memoria_00.sh >> LogMemoryLeaks.log &'), you
will see how the memory usage increases a lot. That is what
made me suspect that there is a memory leak (and the Insure
report provides us some clues), but perhaps the application
simply requires a lot of memory to perform its activity (I
do not know). In any case, I am currently executing the
process, and it will not stop until next Monday (three days
running), in order to see if the memory ever stops
increasing (if it does not, I guess a memory leak could be
there; otherwise all could be OK).
Best regards,
Julio Garvía
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-30 17:47
Message:
Logged In: YES 
user_id=1110
but I should mention that I don't think any fixes related to
this problem have been made since the 7.14.0 release, so I
expect you get the same reports on the latest CVS sources as
well
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-30 17:37
Message:
Logged In: YES 
user_id=1110
Daily snapshots:
(and my recent tru64-related memdebug things will be
included in tomorrow's snapshot)
http://cool.haxx.se/curl-daily/
How to get it off CVS:
http://curl.haxx.se/cvs.html
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-06-30 17:31
Message:
Logged In: YES 
user_id=1304874
O.K.
I will do what you said, but could you tell me where I can
get the new sources?
Hopefully we will solve this "trouble" soon :-)
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-30 16:15
Message:
Logged In: YES 
user_id=1110
built just now:
$ ./src/curl -V
curl 7.14.1-CVS (alphaev67-dec-osf5.1) libcurl/7.14.1-CVS
Protocols: ftp gopher telnet dict ldap http file
Features: Debug IPv6 Largefile
This is curl on tru64, with --enable-debug (I've just made
it possible).
It shows _no_ leak at all. When getting a file over FTP, I
see two calls to getaddrinfo() and I see two calls to
freeaddrinfo().
I guess now is the time where you show me exactly what you
do to see this leak! ;-)
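[Editor's note: the balanced getaddrinfo()/freeaddrinfo() accounting Daniel mentions is the core idea behind curl's built-in memdebug tool. A minimal sketch of that bookkeeping follows; the wrapper names (track_getaddrinfo etc.) are invented for illustration and are not real libcurl symbols.]

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* getaddrinfo() successes minus freeaddrinfo() calls; a leak-free
 * program ends with this back at zero */
static int live_resolves = 0;

/* hypothetical tracking wrappers, in the spirit of curl's memdebug layer */
static int track_getaddrinfo(const char *host, const char *serv,
                             const struct addrinfo *hints,
                             struct addrinfo **res) {
    int rc = getaddrinfo(host, serv, hints, res);
    if (rc == 0)
        live_resolves++;
    return rc;
}

static void track_freeaddrinfo(struct addrinfo *ai) {
    freeaddrinfo(ai);
    live_resolves--;
}
```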
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-30 15:06
Message:
Logged In: YES 
user_id=1110
libcurl features a DNS cache where all successful name
resolves are kept, so they are not freed immediately after use.
The call to Curl_cache_addr() adds the resolved data to the
cache.
If the entry remains in the cache until the handle is
closed, it gets freed from Curl_close() when
Curl_hash_destroy() is called.
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-06-30 14:52
Message:
Logged In: YES 
user_id=1304874
Yes, I call 'curl_easy_cleanup()' as well as
'curl_global_cleanup()'.
Well, let me show you where I guess there is a memory leak
(according to the Insure report).
As you can see in the source code:
In hostip6.c (line 257) the getaddrinfo function allocates
memory for its fourth parameter, which is returned.
In hostip.c (line 423) the previously returned value is used
(and stored in a local variable, 'addr'), but it is only
released when the 'dns' variable is NULL. So my question is:
why does the release of 'addr' depend on another variable?
I guess the release should be performed as below:
        if(addr) {
            Curl_freeaddrinfo(addr);
        }
The memory leak seems to occur because 'addr' is not always
released, since its release depends on a variable other
than 'addr' itself.
Ciao,
Julio Garvía
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-30 13:53
Message:
Logged In: YES 
user_id=1110
Did you really call curl_easy_cleanup() too?
----------------------------------------------------------------------
Comment By: jgarvia (jgarvia)
Date: 2005-06-30 11:52
Message:
Logged In: YES 
user_id=1304874
I have just installed the new library version and the same
memory leaks are still there.
Regarding the Insure report, I attach the new one.
The way to analyze it is from bottom to top (for every
reported chunk).
BTW, the first line of every reported chunk indicates the
function, file and line in which the memory is allocated
(the rest of the lines indicate the points that call or use
the memory allocated by that first function).
So, analyzing the second reported chunk (in the latest
Insure report): if you go to file 'hostip6.c', function
'Curl_getaddrinfo()' and line '257', you can see where the
leaked memory is allocated (but never released in file
'ftp.c', function 'Curl_perform()' and line '2146').
(This particular leak concerns the fourth parameter of the
'getaddrinfo' function, which is returned by
'Curl_getaddrinfo()' but never released.)
Please, if you still need an example, tell me and I will
prepare it for you.
Thanks.
Julio Garvía 
----------------------------------------------------------------------
Comment By: Daniel Stenberg (bagder)
Date: 2005-06-29 19:46
Message:
Logged In: YES 
user_id=1110
First, I must admit I don't understand the report. What
source/line does it say allocated the memory that was
leaked? How much memory was it?
Can't you simply build 7.14.0 and test to see if the leak is
still present or not?
Can you show us an example program (that leaks this memory)
that we can use to repeat this problem?
----------------------------------------------------------------------
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=1229810&group_id=976
Received on 2006-10-17

These mail archives are generated by hypermail.

Page updated November 12, 2010.