On 30 May 2015 at 0:19, Thomas Heller wrote:
However, what is proposed in this thread is hardly usable in any context where concurrent operations may occur. What you missed is the message that it is not memory allocation or the existence of exception handling code that makes the futures "slow". In fact what makes futures slow is the mechanism to start asynchronous tasks (if you have the finest task granularity you can imagine).
And that has nothing to do with futures per se. Your claim (and that of those involved with HPX) is that optimising futures beyond what HPX has done is pointless because you found no cost benefit [relative to how the rest of HPX is implemented]. Which is fine, for HPX, but in your C++Now presentation, as soon as you said your futures were always using jemalloc, I instantly knew you had undermined all your claims about your design of futures. Only FreeBSD has jemalloc as its system allocator. Generic C++ code, especially library code, can make zero assumptions about dynamic memory allocation except that it can be catastrophically slow. I mentally budget about 1000 cycles for any malloc or free call: overkill on most platforms, but not wrong on a low-end ARM CPU (a concrete demonstration follows at the end of this reply).

The part in square brackets is unstated by you and your colleagues, and you have a nasty habit of assuming that HPX is the be-all and end-all of perfect design, and that therefore the bracketed part need not be stated because it is some absolute truth. You then proceed to make sweeping assertions that all other work on futures is futile, as if the case were closed, that we are not listening to you "the experts", and that any discussion or even thought that you might not be perfectly correct in the general design verges on blasphemy.

But let me put this to you from a Boost community perspective:

1. Why haven't any of HPX's design choices been peer reviewed here?

2. Why isn't HPX being contributed to Boost, or at least the parts of HPX where it would be sensible to do so? Why do HPX authors repeatedly keep the new and often very useful libraries they write away from Boost? Even the non-HPX libraries?

3. Why do those working on HPX no longer contribute to the Boost community in general as they once did? Where are the reviews of libraries? Where are the pull requests fixing bugs? What mentoring of GSoC projects has been done in the last three years?

What gives you, or any of your colleagues, the right to lecture people here about design when none of your designs nor code have passed peer review here, and the last time I can see any of you contributed substantially to the community here was late 2012? If you or your colleagues were the active maintainer of the definitive Boost library implementing these facilities, then your lecturing would be taken very seriously. But all your work is occurring in an ivory tower far removed from peer review here, and the fact that as a group you are not engaging with the processes nor the community here substantially weakens your authority.

Nobody is undervaluing the depth of experience and talent in the HPX team, nor the group's previous very substantial contributions to Boost and indeed to the C++ standard. I for one have huge respect for all of you, though as a group you make it hard sometimes. But your attitude as a group stinks, and that bad attitude was noticed at C++Now by a number of people. If as a group you did more positive contributing to the Boost community and less negative hectoring, especially in private conversations when the person you are beating down is not there, we'd all get along much better.
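To make the allocation point above concrete, here is a small standalone demonstration in plain standard C++ (a hypothetical illustration of mine, nothing to do with HPX's internals): it counts global heap allocations to show that a conventional promise/future pair touches the allocator even when the value is known immediately.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <future>
    #include <new>

    // Count every global heap allocation made by the program.
    static unsigned long allocations = 0;

    void* operator new(std::size_t n) {
        ++allocations;
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept { std::free(p); }
    void operator delete(void* p, std::size_t) noexcept { std::free(p); }

    int main() {
        unsigned long before = allocations;
        std::promise<int> p;              // shared state is heap allocated here
        p.set_value(5);
        std::future<int> f = p.get_future();
        std::printf("value %d, heap allocations: %lu\n",
                    f.get(), allocations - before);
        return 0;
    }

On common implementations this should report at least one allocation for the shared state, and every such allocation falls under the malloc/free budget above. A design which keeps the state inline never pays it.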
What's stupid is the assumption that your task decomposition has to go down to a single instruction on a single data element.
Sigh. It's like talking to a brick wall sometimes. You appear to be actively refusing to change your preheld conviction that whatever I am doing must be wrong. This is despite at least six people contributing positive ideas and feedback to this thread, at least three of whom have written and shown the community code implementing those new ideas.

For the record, it's not about task decomposition, nor was it ever. It's about affording the maximum possible scope to the compiler's optimiser by not using design patterns which get in its way. That's why turning thousands of lines of code into a single mov $5, %eax instruction matters: it is a unit-testable proxy for the right design. I am the first to admit that in 95% of use cases the compiler cannot do such dramatic reductions, but to discount such optimiser-friendly design out of hand as automatically not worth the effort seems churlish.
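As a sketch of what I mean by a unit-testable proxy, consider this deliberately minimal example (hypothetical code, not any shipping library of mine): a future whose state lives inline in the object, with no heap allocation, no locks, no atomics and no type erasure.

    // Hypothetical sketch: the state is stored inline, so there is
    // nothing here the optimiser cannot see through.
    template <class T>
    struct lightweight_future {
        T value_;                   // never heap allocated
        T get() { return value_; }  // no synchronisation, no virtual dispatch
    };

    // With optimisation enabled, GCC and Clang collapse this entire
    // function to "mov $5, %eax; ret".
    int answer() {
        lightweight_future<int> f{5};
        return f.get();
    }

A unit test can disassemble answer() and assert that the whole thing reduced to a single mov and a ret; the moment a design change stops that reduction happening, the test fails and you know the optimiser's scope has been narrowed.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/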