RE: libcurl + https + multi = lost information

From: REISS Pierre <Pierre.REISS_at_3ds.com>
Date: Tue, 25 Mar 2008 11:19:10 +0100

Dear all,

  I tried yesterday to reply from home, but it was unsuccessful...

  On 22 Mar 2008, Daniel Stenberg wrote:
> Anyway, your example code segfaults for me and valgrind reports as
> below. (...)
  Since last Friday the stock markets have been closed, and some URLs
accessed by my sample program were marked as "not available". The
program was looking for certain keywords in the underlying data, and
that failed...
  So, in order to keep my sample stable even during long weekends and
holidays, I replaced the read of 2 pages of data under "CAC 40" with
the read of 2 header pages ("CAC 40" and "NASDAQ-100").
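
  For anyone who prefers not to open the attachments first, here is a
minimal sketch of the kind of multi-interface transfer the sample makes
into Temp000 and Temp001. The URLs are placeholders (the real ones are
set in Main.cpp), and the curl_multi_wait() loop is only the simplest
way to drive the two transfers; this is not my actual code.

#include <curl/curl.h>
#include <stdio.h>

/* write everything libcurl hands us into the FILE* given as WRITEDATA */
static size_t write_to_file(void *ptr, size_t size, size_t nmemb, void *stream)
{
  return fwrite(ptr, size, nmemb, (FILE *)stream);
}

int main(void)
{
  /* placeholder URLs -- the real ones are set in Main.cpp */
  const char *urls[2]  = { "https://example.com/cac40",
                           "https://example.com/nasdaq100" };
  const char *files[2] = { "Temp000", "Temp001" };
  CURL  *easy[2];
  FILE  *out[2];
  CURLM *multi;
  int still_running = 0;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();

  for (int i = 0; i < 2; i++) {
    out[i]  = fopen(files[i], "wb");
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, urls[i]);
    curl_easy_setopt(easy[i], CURLOPT_WRITEFUNCTION, write_to_file);
    curl_easy_setopt(easy[i], CURLOPT_WRITEDATA, out[i]);
    curl_multi_add_handle(multi, easy[i]);
  }

  /* drive both transfers to completion; curl_multi_wait() comes from a
     newer libcurl than the 2008 sample, which would instead use
     select() on the descriptors returned by curl_multi_fdset() */
  curl_multi_perform(multi, &still_running);
  while (still_running) {
    curl_multi_wait(multi, NULL, 0, 1000, NULL);
    curl_multi_perform(multi, &still_running);
  }

  for (int i = 0; i < 2; i++) {
    curl_multi_remove_handle(multi, easy[i]);
    curl_easy_cleanup(easy[i]);
    fclose(out[i]);
  }
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
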
  In NoCurl.cpp, the Extrait() method flushes the 2 created files,
Temp000 and Temp001, by calling FillBuffer(). The 2 statements that
follow (a while and a printf) let you see quickly whether my sample
fails (2 identical writes) or works (different ones); if a crash occurs
again, that is proof that the accessed URL is not ready (rare). Anyway,
you may comment out those 2 statements.
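
  As a rough picture of what that check amounts to (Extrait() and
FillBuffer() themselves are only in the attached NoCurl.cpp and are not
reproduced here), a standalone comparison of the two output files would
be something like:

#include <stdio.h>
#include <fstream>
#include <iterator>
#include <string>

/* read a whole file into a std::string */
static std::string slurp(const char *name)
{
  std::ifstream in(name, std::ios::binary);
  return std::string(std::istreambuf_iterator<char>(in),
                     std::istreambuf_iterator<char>());
}

int main()
{
  std::string a = slurp("Temp000");
  std::string b = slurp("Temp001");

  if (a == b)
    printf("FAILURE: Temp000 and Temp001 are identical -- data was lost\n");
  else
    printf("OK: Temp000 and Temp001 differ, as two distinct pages should\n");
  return 0;
}
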
  "Readme.txt" describes now the accurate numbers of lines who are
concerned in
Temp000 and Temp001.

  Concerning "https://sourceforge.net/", on 22 Mar 2008, Daniel Stenberg
wrote:
> Can you find any reasonable explanation to this? Why would the actual
> URL or contents matter to this problem?
So far I have found no answer; I have run a lot of tests over the last 3 days.

  I am looking further into header data vs. non-header data, into
multithreaded Curl_read calls, and into SSL_read: good trails or bad
ones, I don't know...

  Let me know if you have any difficulty trying my sample.
  Thank you very much,
         Pierre Reiss

  • application/octet-stream attachment: Main.cpp
  • application/octet-stream attachment: NoCurl.h
Received on 2008-03-25