Daniel Stenberg wrote:
> On Fri, 16 May 2008, Tim Tassonis wrote:
>> What do you think about a new option to store the response directly in
>> a char buffer, as I proposed earlier? I posted a possible
>> implementation two days ago and attach it below, and I would also rework
>> it if you want something changed. Or do you oppose the idea completely?
> I honestly don't see the point.
> The whole concept is already possible and we provide an example in every
> tarball, on the site, and the concept is even explicitly mentioned in the
> docs.
> I don't think providing this extra helper function/macro will make the
> problems go away. The people who can't figure out how to do that already
> today, those who can't read the docs or an example code and come up with
> a way to use that in their own code, they will still fail with this
> helper approach added. We will just lead them one step further into the
> woods until they get lost.
Well, it's certainly true that the docs/examples have everything in them to
implement the desired functionality, but the helper would release the
user from "the burden" of implementing this one function for a very
generic requirement. For all my usage of curl, I always ended up having
to write this function, copy it from one project to the next, etc. It's
not that big a deal, I agree, but it has the potential to save a lot of
users some lines of code and a source of errors.
> Also, in most cases it would make more sense to, for example, double the
> malloc'ed space each time the buffer runs out of memory instead
> of doing this realloc in every callback. Meaning that people are better off
> tweaking the actual receiver function for their own particular purpose
> and use-case to get the most performance out of it.
Sure, I fully agree that one can do better. The current do-it-yourself
approach would still allow that, though.
But often, optimal performance in this area is not the biggest issue in
a project. I'm probably not the only one who uses libcurl for
webservice-type stuff, where the response often is quite small and
the function will be called only once most of the time.
That said, I'm not saying my approach of on-demand allocation is the
best you can do for generic cases. It might well be better in most cases
to start with a buffer of, say, 8k, which would be big enough in a lot of
cases, and then reallocate double the size when the buffer gets full, as
you suggest.
> A concept I would understand and sympathize with for addition, would
> involve providing an _existing_ buffer to libcurl and have libcurl
> "fill" that buffer with as little extra internal copies as possible so
> that we'd gain performance out of it all. But that is much more work and
> won't happen before 7.18.2 if at all...
I agree that this would be the better approach, compared to mine, which
involves:
- curl fetching the data into a buffer
- curl calling the writefunction and providing the data to it
- the writefunction copying the data to another buffer
My approach was clearly to make the changes to curl as small as
possible; performance was not on my mind, but simplification of user code.
Anyway, thanks a lot for the feedback, I take this as a rejection.
Received on 2008-05-16