[Root Pointer] New Documentation

Greetings,

- I beefed up the tutorial in the following documentation and now it covers everything I know about it: http://philippeb8.github.io/root_ptr/
- As some of you already know, I added a performance comparison chart in the introduction: http://philippeb8.github.io/root_ptr/root_ptr/intro.html
- I fixed the dead links
- I added a cast operation helper to the root_ptr class: https://github.com/philippeb8/root_ptr/blob/master/include/boost/smart_ptr/r...

Regards,
-Phil

I looked over the documentation... Questions:
1. From what I understand, it is basically a reference-counting pointer with a "pool" that deletes pointers with dangling references. Am I right?
2. What are the thread safety assumptions of this library? i.e. does it use atomic operations to handle reference counters?
3. What happens when the root_ptr is deleted while a node_ptr still exists? Does use of that node_ptr lead to undefined behavior? If so, it should be marked with a big warning.
I want to add a small thing:
Run benchmarks of copying pointers as well, in the single-core case and in the multi-core case. IMHO it is an interesting concept, a sort of merge between an object/memory pool and shared_ptr. I think that, precisely because it is such a basic library, before you even try to get to a formal review you need to:
(a) Rewrite the documentation, making very clear what everything does, including things that look trivial to you such as the copy constructor: restrictions, relationships, behavior, what it does, etc.
(b) If it is your own design/research of the concept, say so explicitly; otherwise provide references to books or research papers that discuss root pointer algorithms.
(c) Describe the algorithm in much greater detail, including better examples, values of the reference counters, etc.
(d) Provide a much wider beginner tutorial with samples.
It looks interesting, but for something that basic the documentation isn't even half ready. My $0.02
Artyom Beilis

On 04/10/2016 01:01 PM, Artyom Beilis wrote:
It really is a set of allocated blocks that are linked together using an intrusive list. When the last root_ptr referring to this set is destroyed, the whole set of allocated blocks is destroyed, regardless of cycles.
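To make that lifetime rule concrete, here is a minimal sketch using the make_root/make_node spelling that appears later in this thread (I am assuming that spelling here, so treat it as illustrative rather than authoritative):

{
    root_ptr<int> r = make_root<int>(1);      // r owns the whole set of blocks
    node_ptr<int> p = make_node<int>(r, 2);   // p's block joins r's set
    node_ptr<int> q = make_node<int>(r, 3);   // q's block joins the same set
    // p and q remain valid here, even if the objects referenced one another
}   // r is destroyed: every block in the set is freed at once, cycles included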
2. What are the thread safety assumptions of this library? i.e. does it use atomic operations to handle reference counters?
- It uses the same atomic operations as shared_ptr to handle the reference counter.
- It also uses a global mutex when an assignment is made. This global mutex could be optimized, but apparently there is no urgent need to do so because root_ptr is already faster than shared_ptr in multithreaded mode: http://philippeb8.github.io/root_ptr//images/performance.png
You can't construct a node_ptr without a root_ptr so node_ptrs are always instantiated after a root_ptr.
GCs aren't deterministic, so if you use Java or JavaScript for an end-user interface, there's always going to be that annoying lag, which looks unprofessional.
It is my own research, so I have no references to include.
It sounds like I am going to have to write a book on the subject. Any expert is welcome to help me out because, if that is the case, it will take me some time to write down.
It looks interesting, but for something that basic the documentation isn't even half ready.
Thanks for your help, it is much appreciated.

I mean stuff like (pseudo-code):

node_ptr<int> get_int()
{
    root_ptr<int> r = make_root()...
    node_ptr<int> p = make_node(r, ...)
    ...
    return p;
}

main()
{
    node_ptr<int> p = get_int()...
}
Then you need to really describe the algorithm very well and probably prove its correctness, although that isn't strictly needed.
See... you are submitting a library for review that provides an entirely new memory management model (at least you describe it like that). Considering there is a lot of research done regarding reference counting, GC, cycle detection and other things in the computer science literature, and considering that you come with a new concept (even if it is a simple/trivial one), you are expected to provide something very solid. If you are doing it as part of academic research then it would be good to refer to relevant publications. It is just a part of the game you try to play :-) Good Luck, Artyom Beilis

On 04/10/2016 04:07 PM, Artyom Beilis wrote:
I just tried your code:

node_ptr<int> get_int()
{
    root_ptr<int> r = make_root<int>(9);
    node_ptr<int> p = make_node<int>(r, 10);
    return p;
}

int main()
{
    node_ptr<int> p = get_int();
    std::cout << *p << std::endl;
}

And it outputs:

10

I'll need to debug this code to see what is happening. You're the first one to bring this up.
Any help from these research centers would be much appreciated. Otherwise we'll spend the next 50 years wondering about GC and, worse, making it part of the C++ standard (arghh!).
If you are doing it as part of academic research then it would be good to refer to relevant publications.
I am also out of academia.
It is just a part of the game you try to play :-)
Yeah I tend to choose difficult games, just like my astrophysics project ;/

Phil, On 04/11/2016 07:41 AM, Phil Bouchard wrote:
To begin with, I feel that your main argument in favor of your library compared to std::shared_ptr is that your library takes care of cycles. I am personally yet to be convinced that the cycles-related problem is actually as big as you make it. More so, std::shared_ptr does manage cycles with weak_ptr and discipline. ;-)

Second, I feel that Artyom asked an excellent question which IMO highlights a fundamental flaw in your library. I feel that your library does not solve the cycles problem but rather replaces it with another problem, namely the problem of managing the life-cycle of the memory manager (strangely called root_ptr). And IMO that problem is quite serious, given that all node_ptrs spawned from the main root_ptr are freed (regardless of cycles or existing references) when that root_ptr is destroyed. Regards, Vladimir.
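For readers less familiar with the discipline Vladimir alludes to, here is a minimal standard-C++ sketch (my own illustration, unrelated to root_ptr) of breaking a parent/child cycle with weak_ptr:

#include <memory>

struct Child;

struct Parent
{
    std::shared_ptr<Child> child;    // owning edge
};

struct Child
{
    std::weak_ptr<Parent> parent;    // non-owning back edge: no reference cycle
};

int main()
{
    auto p = std::make_shared<Parent>();
    auto c = std::make_shared<Child>();
    p->child = c;
    c->parent = p;    // the weak back-reference keeps the counts from cycling
    return 0;         // both objects are destroyed here; nothing leaks
}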

On 04/10/2016 07:58 PM, Vladimir Batov wrote:
I haven't seen any implementation of a neural network with shared_ptr yet... Mine was done in no more than a month, in my spare time.
Second, I feel that Artyom asked an excellent question which IMO highlights a fundamental flaw with your library.
He asked an excellent question indeed and I will answer it as soon as I can.
I feel that your library does not solve the cycles problem but rather replaces it with another problem.
The concept of having a root_ptr can be applied to all container-like classes because, for example, all implementations of lists that I've seen have internal cyclic nodes. This concept can easily be applied to cyclic graphs as well. People do not use C++ for their implementations of neural networks... Why? Because they prefer to use a garbage-collected language to handle the memory, given that the network can be quite complex. C++ will need to handle that complexity sooner or later.
There is a minimum of explicitness that needs to be provided by the programmer, because letting the entire memory be managed implicitly results in slow performance, like we see with Java or Javascript.

On 04/11/2016 11:07 AM, Phil Bouchard wrote:
I am not sure that argument can be used to justify acceptance of your library... or anything really :-) due to its subjectivity and lack of "measurability". Secondly, I am not sure a network/graph (neural or not :-)) needs to use std::shared_ptr to manage memory at all. I am debugging one of them right now... The main reason is that the network/graph class is already a nodes/resources/memory manager, i.e. it takes care of nodes management... memory management included.
I have not looked at the implementation, but from the design description I suspect I know the answer. The core design decision (if I've got it right) is that all node_ptrs are freed when their root_ptr goes out of scope, irrespective of cycles or still-legitimate references. Yes, it solves cyclic references... but it also invalidates all other references.
Well, I've noticed you tend to make quite grandiose statements like "50 years wondering about GC", "People do not use" and "C++ will need". If I were you, I'd certainly refrain from those -- they add nothing but might show you in an unfavorable light. Because if you and I disappear from the face of the earth tomorrow, humankind, IT and C++ won't be left to "wonder for 50 years about" anything. Trust me. Secondly, I work with networks/graphs. I do not use GC and C++ handles that "complexity" just fine. In fact, if I had to name an area which I'd label with the "complexity" tag, it would not be memory management.
That argument is as good for the shared_ptr/weak_ptr combination as it is for root_ptr/node_ptr... if I were forced to deploy one or the other.
Because letting the entire memory being managed implicitly results in slow performance like we see with Java or Javascript.
Well, we are not in Java. And from a C++ perspective your statement is so wrong that I do not even know where to begin. :-) But that's a different topic altogether. V.

On 04/10/2016 09:58 PM, Vladimir Batov wrote:
Sorry for the delay on this, I had to double check on Linux and Windows before answering and I ended up with this test case:

struct A
{
    int i;

    A(int i) : i(i) { std::cout << BOOST_CURRENT_FUNCTION << ": " << i << std::endl; }
    ~A() { std::cout << BOOST_CURRENT_FUNCTION << ": " << i << std::endl; }
};

node_ptr<A> get_int()
{
    root_ptr<A> r = make_root<A>(9);
    node_ptr<A> p = make_node<A>(r, 10);
    return p;
}

int main()
{
    node_ptr<A> p = get_int();
    std::cout << p->i << std::endl;
}

And it outputs:

A::A(int): 9
A::A(int): 10
A::~A(): 9
A::~A(): 10
10

But yes the behavior is undefined in this particular case so I need to write this down in the docs. Thanks Artyom...!
Sorry, I used the wrong wording, but let's just say that the stakes are high.
The goal we all share is to write as few lines of code as possible for a given task. If I can prove that by using root_ptr the code will be simpler, then I think I will have made my point.
Well, for example, it is obvious that a pool of objects of the same type is much faster than a pool of objects of any size, but you need to call the right pool explicitly. But it's like you are saying: each solution just moves the problem somewhere else:
- the GC is 100% implicit but you need "finalizers"
- shared_ptr needs a weak_ptr to handle cycles
- root_ptr needs a node_ptr and has potential undefined behaviors
At the end of the day it's all about the most common use cases and whether the library:
- reduces the amount of lines of code
- is more efficient or not
- is more robust or not
- is more extensible or not

Phil,

Unfortunately your test below confirmed my suspicion. Now could you kindly explain how your library is better than:

template<typename T>
struct manager
{
    T* create(args)
    {
        all_.emplace_back(args);
        return &all_.back();
    }

    void remove(T*) { ... }

    std::list<T> all_;
};

T* serves as your node_ptr -- all pointers are valid as long as its manager instance is around. What am I missing?

On 04/11/2016 12:46 PM, Phil Bouchard wrote:

On 04/11/2016 01:32 PM, Phil Bouchard wrote:
Your library does memory and "node" life-time management. It shares node_ptrs around and guarantees them to be valid as long as its main root_ptr is around. root_ptr clears all node_ptrs when it is destroyed. The class above does the same.
but I can see that remove() will have to do a linear search in the list, slowing down the performance in general.
Well, we are not discussing the efficiency of my "implementation", are we? :-) Still, if you insist... How about:

struct sort
{
    bool operator()(T const& p1, T const& p2) { return &p1 < &p2; }
};

std::set<T, sort> all_;

On 11/04/2016 16:08, Phil Bouchard wrote:
Unless I'm missing something, node_ptr doesn't implement a remove operation at all. You just discard the node_ptr when you aren't interested in it any more, and memory isn't released until the root_ptr is. That seems equivalent to Vladimir's code with the remove method omitted. As I've said before, it's basically an arena allocator, just with reference-counted arenas.
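To illustrate the analogy Gavin draws (my own illustration, not root_ptr code): Boost's own object_pool behaves much the same way, in that individual objects need never be released explicitly and everything is reclaimed when the pool itself goes away.

#include <boost/pool/object_pool.hpp>
#include <iostream>

struct noisy
{
    int i;
    noisy(int i) : i(i) {}
    ~noisy() { std::cout << "~noisy(" << i << ")\n"; }
};

int main()
{
    {
        boost::object_pool<noisy> arena;   // the "arena", minus the reference counting
        noisy* a = arena.construct(1);
        noisy* b = arena.construct(2);
        (void)a; (void)b;                  // never individually freed
    }   // arena destroyed: both objects are destroyed together
    return 0;
}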

Seth, On 2016-04-12 16:36, Seth wrote:
Indeed... but I'll probably strangle the one who overloaded operator& and then I'd ask him why he needed to do that. :-) ... and for the primitive snippet used, std::less<> does not seem essential (we do not seem to care about actual order). In fact, std::unordered_set would likely be more appropriate. ... But thank you for reminding me how scary and unfamiliar simple code can become in order to be truly generic.

On 12-04-16 12:33, work wrote:
... and for the primitive snippet used, std::less<> does not seem essential (we do not seem to care about actual order).
Relative pointer comparison with the built-in operators is unspecified unless the pointers point to members of the same object or elements of the same array; it is also unspecified if the pointers point to functions, or if only one of them is null (§ 5.9/2). std::less<> specifies an exception in § 20.8.5/8: "For templates greater, less, greater_equal, and less_equal, the specializations for any pointer type yield a total order, even if the built-in operators <, >, <=, >= do not." It's an interesting world :)
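A small standard-library illustration of that point (my own, unrelated to root_ptr): ordering a container of unrelated pointers is fine as long as the comparison goes through std::less rather than a raw <.

#include <functional>
#include <set>

struct T { };

int main()
{
    // std::less<const T*> yields a total order over pointers even where the
    // built-in < would be unspecified for pointers to unrelated objects.
    std::set<const T*, std::less<const T*>> tracked;

    T a, b;
    tracked.insert(&a);
    tracked.insert(&b);
    tracked.erase(&a);   // O(log n) removal by address
    return 0;
}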

I don't want to be rude but... this kind of stuff you should have foreseen. I mean literally: if you propose memory management algorithms, you need to be able to answer straight away, without writing even a single line of code, especially since the algorithm should be clear and straightforward. That is why your docs should include the algorithms and clarifications of the usage requirements. Don't get me wrong, I actually think the library may be quite useful, but... it has some way to go. Artyom

On 04/11/2016 09:48 AM, Artyom Beilis wrote:
Sorry after a week or two my brain is cooling off.
The only improvements that need to be done are to localize the global mutex and beef up the docs, but other than that I'll try to get opinions from professors on the subject. There is nothing more I can do, or else we'll wait forever.

On Mon, Apr 11, 2016 at 7:30 PM, Phil Bouchard
If there is enough interest in your work, one or more persons will probably volunteer to manage the review: but you cannot force that interest. You can evoke that interest by making the documentation compelling enough for someone to dig deeper into the implementation. If you feel you've done that, and your due diligence with the implementation, then all you can do is wait. Glen

On 04/10/2016 10:46 PM, Phil Bouchard wrote:
I put emphasis here on the resulting amount of lines of code necessary to implement something because I think that's the most important factor. I have shown that the implementation of a basic list using root_ptr takes no more than 30 lines of code.
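For illustration only, here is roughly what I imagine such a list looks like, pieced together from the make_root/make_node calls shown earlier in this thread; I am assuming node_ptr is default-constructible and assignable and that a dummy root cell is an acceptable way to anchor the set, so treat this as a sketch of the idea rather than the library's actual example:

struct cell
{
    int value;
    node_ptr<cell> next;                 // link inside the same set; cycles would not leak

    cell(int v) : value(v) {}
};

struct list
{
    root_ptr<cell> root;                 // owns every cell of this list
    node_ptr<cell> head;

    list() : root(make_root<cell>(0)) {} // dummy root cell just to anchor the set (assumption)

    void push_front(int v)
    {
        node_ptr<cell> n = make_node<cell>(root, v);
        n->next = head;
        head = n;
    }
};  // destroying the list destroys its root, which frees all cells at once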

On 2016-04-13 16:27, Oswin Krause wrote:
... And that's a repeated and unfortunate pattern. Every time the author gives an example of a potentially useful application of root_ptr I feel like "why would I want that instead of the existing standard components?". The author might well implement a basic list in 30 lines, but who'd want that when std::list is already available? Could it be that root_ptr is 20 years late? :-)

A colleague of mine who works on WebKit saw this thread in the Boost mailing list archive and pointed me to this thread on the WebKit mailing list archive: https://lists.webkit.org/pipermail/webkit-dev/2016-March/028075.html Phil: If you want to have a real world user of your idea (which will also organically help you improve your design and implementation), an opportunity has already presented itself. You mentioned you wrote this with WebKit in mind because parts of it use GC. If you replace their GC with a solution that uses your root/block pointer, and show before/after benchmarks, it will lend credibility to you and your idea. Glen

Phil wrote:
I know, the problem is it's not that simple to replace the GC in WebKit ;)
Of course: It would involve some non-trivial work on your part, so it only depends on whether you feel having users of your code (or convincing people of its merit) is worth it. And, as you've found out, it isn't that simple to get people interested in your idea, either.
But can you imagine the benefits of having Javascript running deterministically?
When you initially claimed "They [Webkit] aren't interested in cutting edge C++", it gave us the impression that they didn't want to use Root Pointer because the idea was radical. From the discussion on their list, that doesn't seem to be the case. It seems they didn't want to make changes based purely on imagination of what the results should be. They appear to be open to it if you're able to do the work and evidence the merits with benchmarks. My advice is to take this opportunity: try to replace their GC with a better solution. Along the way, if you find you need to improve your solution, all the better. At the end of it, you'll have two apples that you can compare. The results of the comparison, if in your favor, would also certainly be compelling to Boost and everyone. Glen

On 04/13/2016 09:28 AM, Glen Fernandes wrote:
People do not see long-term value. Here is an analysis of languages with a UI (what people want at the end of the day):
- Java is the most used language: http://www.tiobe.com/tiobe_index?page=index
- But Java's JVMs are written mostly in C++: http://www.answers.com/Q/Java_language_is_developed_in_which_programming_lan...
- WebKit is written in C++
- Qt's QML is written in C++
- Even the C# compiler is written in C++
So by helping out the GC, you pretty much help out all the other languages as well. So the benefits are clearly there.
Well, it took me 8 years to write root_ptr and initially I thought it was going to be easy. I know fixing WebKit on my own is not going to be an easy task, so it'll probably take me 15 years, which is not reasonable. Even Duktape is not trivial (2 files in C). That's a chicken-and-egg thing. But any help would be appreciated obviously... the problem is that experts in GCs aren't going to help because there's an industry behind it already.

On 4/10/16 4:58 PM, Vladimir Batov wrote:
This is my take on the library from a quick reading of the documentation. So as far as I'm concerned you've passed the first major hurdle that most submitters can't pass: you've managed to summarize the purpose and motivation for the library succinctly enough for someone to determine in a couple of minutes whether it addresses a problem that he currently has. I should say that the documentation for many Boost libraries fails to do this - so in this sense you are ahead of the game. I am personally yet to be convinced that the cycles-related
problem is actually as big as you make it. More so, std::shared_ptr does manage cycles with weak_ptr and discipline. ;-)
And as I see it, this is the crux of the debate: is the facility it offers worth the extra "overhead" that any library entails? Well, that's what we're here to debate! Robert Ramey

[...]
Ok, this kind of stuff is missing in the docs... I don't see an "algorithm" section. It isn't clear to me: the root_ptr holds a linked list of blocks... so if you delete a single node_ptr, its ref-counter goes to 0 and the object is destroyed - what happens to its block? Is it removed from the linked list, or is it used as a cache for reuse? How is the list protected for thread safety? By a global mutex??? If so, I see great potential for contention that makes it far worse than shared_ptr. Artyom

On 04/10/2016 04:22 PM, Artyom Beilis wrote:
If the ref count goes to 0, the block is deleted and removed from the intrusive list automatically. There is no caching.
I could optimize that global mutex and make it part of a node_proxy (set) but I didn't see an immediate need after running the benchmark and I had other cats to whip.

On 10 Apr 2016 at 14:21, Phil Bouchard wrote:
You might consider presenting on applications of Root Pointer at a year's worth of C++ conferences, so submit and present at each of the big three conferences. That will build a stock of online videos and get people's heads wrapped around the concept behind the library. After last summer's review of AFIO, I came to the conclusion that at least two years' worth of conference presentations would be needed before returning here for review. I started the series last year at CppCon; the next episode is next week at the ACCU conference, where I'll be presenting a workshop based on using AFIO v2 to implement a distributed mutual exclusion algorithm using the filesystem. I intend to submit a third workshop extending this further for CppCon next September. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/

On 4/10/16 6:13 PM, Phil Bouchard wrote:
I think this advice is misguided. I don't think that promoting your submission in this way will be successful. Basically you need to make a compelling case for someone who has the problem you're addressing. I had a lot to say about just this topic in my presentation at CppCon 2014: How you can make a Boost Library https://www.youtube.com/watch?v=ACeNgqBKL7E Robert Ramey

Some time ago there seemed to be a serious effort to migrate Boost's build system to CMake. While the documentation for this effort is still around, it seems the most recent activity dates to 2013. I tried cloning the git repository with git clone http://github.com/boost-lib/boost.git modules, but it seems to be in an invalid state. Whatever happened to this project? Was a decision made to stick with Boost.Build, or did the project peter out due to a lack of resources? Is there interest in resurrecting the project?

On 4/11/16 11:10 PM, boost@glenstark.net wrote:
I have a strong recollection of all this. I'm not sure it's entirely accurate though.

There was a growing frustration with Boost tools. Building, maintaining, and supporting them required a lot of time. The feeling was that we weren't really doing a great job and that we should stick more to the stuff we're good at. The move to CMake was proposed and unveiled at BoostCon 2010, see: https://www.youtube.com/watch?v=oWHL8Y1WB9Y&index=6&list=PL_AKIMJc4roVg67uMOpzEpsYTolMvhxho and https://github.com/boostcon/2010_presentations . There was a lot of effort expended on this project by David Abrahams, Troy Straszheim and others. They made a bunch of CMake components which worked as "add-ins" and got to the point where it could build all of Boost and also, I believe, run all the tests.

The main idea was to change the Boost development environment in a coordinated fashion. This required coordinated agreement, implementation and cutover. With an already large number of libraries, many of which didn't have their maintainers around, this proved to be an over-ambitious task. Basically it was a top-down effort, which I'm personally always skeptical of. Also, in response to this effort, the developers of Boost Build made many efforts to address complaints about their product, and the whole thing fizzled out. Finally, the skepticism about Boost tools diminished with the wide success of Quickbook. So here we are.

I have a lot of complaints about Boost Build, but the developers of this product have a lot of respect from me. They have hung in there and supported the effort and kept it working. And it's a hard job. I think there is the view that if we only did ... the problem would be solved - and that's not true. It takes a continuing effort, and only the developers of Boost Build have shown themselves willing to accept and implement this view.

But I don't believe that the build tools have to be imposed from the top, and I don't believe that we have to exclude all other build/test alternatives. Each library author chooses his documentation system and each one chooses his test system (Boost.Test, lightweight test, home brew, etc.). At times this is inconvenient, but all in all it works pretty well. I see no reason library authors can't include a CMake directory in their libraries just as they have a build directory which supports Boost Build. Users can select the one they want. Boost Build is a requirement only because we want to support centralized monolithic testing. Personally I would like to see us consider more distributed testing on an individual library basis - but that's another battle.

Robert Ramey

On 12.04.2016 12:09, Robert Ramey wrote:
Amen to all of the above! I would even go a step further: to me, true modularization (as we have discussed many times in the past) includes the *ability* for a library to be built (and potentially released) stand-alone, against all its prerequisite components already installed. I still have plans to work on that for Boost.Python (which luckily has no runtime dependencies on Boost, making this relatively easy). I'm also moving away from Boost.Build, as I keep having trouble setting it up for my needs. (Note: I'm not criticizing the Boost.Build developers. I much appreciate their work. But that alone can't be a reason to use the tool.) So, I think it would benefit all of Boost if more developers would start thinking of Boost libraries as separate projects, which may or may not want to share a common infrastructure for building, documenting, and testing. It's great that those tools are available, but they mustn't be imposed on individual projects. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On April 12, 2016 2:10:55 AM EDT, boost@glenstark.net wrote:
Some time ago there seemed to be a serious effort to migrate Boost's build system to CMake.
[snip] Don't hijack an existing thread. Changing the subject isn't enough. Start a new thread for a new topic. ___ Rob (Sent from my portable computation engine)

On 4/12/16 7:56 PM, Phil Bouchard wrote:
LOL - it's ALWAYS obvious to the author. But think about it. If I want to use root_ptr<T>, what are the requirements on T? Does it have to be copyable? movable? default constructible? Does it have to support any special calls? Does it have to support delete T? What if T is an array, will it still work? Can T be const? For what kinds of T will the code fail to compile? Can I copy a root_ptr<T> to a T*? What about vice versa? Documenting the type requirements will go a long way toward addressing the reservations previously raised. Robert Ramey

Phil,

You still haven't answered my earlier questions, which were:
- in what scenarios would one use root_ptr and in what scenarios node_ptr,
- what happens when one doesn't observe the above guidelines.

Perhaps you didn't understand the questions, so I'll go into more detail. "Root" is GC terminology. It refers to the pointers from which tracing starts (those on the stack, in static variables, in registers). So by your calling root_ptr root_ptr, I can deduce that you intend root_ptr to be used for root pointers, and node_ptr to be used for non-root pointers. Meaning, if we go by classic GC terminology, that automatic and static variables should be root_ptr, and the rest should be node_ptr. Right?

If that's correct, my question is then what happens when you use node_ptr for roots, and root_ptr for non-roots. That is, what happens when you use node_ptr as an automatic or a static variable (currently a dangling pointer, as we learned in a previous post), and what happens when you use root_ptr for a non-root pointer. Specifically, in

R1 -> N1 <-> N2

N1 and N2 are destroyed when R1 is destroyed. But what happens in

R1 -> R2 <-> R3

or in

R1 -> R2 <-> N3

or in

R1 -> N2 <-> R3

when R1 is destroyed?

On 04/13/2016 10:10 AM, Peter Dimov wrote:
I think I tried to answer your questions with the following examples: http://philippeb8.github.io/root_ptr/root_ptr/tutorial.html#root_ptr.tutoria...
Right, but root_ptr can be a class member as well (container roots, for example).
The cycle will not be destroyed there because cycles between sets themselves aren't detected.
or in
R1 -> R2 <-> N3
This cycle will get destroyed. This is because you can have subroots (root pointing to a subroot).
This cycle will not get destroyed.

On 4/10/16 10:01 AM, Artyom Beilis wrote:
Hmmm - I think it's much more interesting to think in terms of what problem the library solves as opposed to how it does it.
2. What are the thread safety assumptions of this library? i.e. does it use atomic operations to handle reference counters?
These are implementation issues. Not uninteresting, but not the crux of the library. Not a bad idea to mention them in the documentation, though.
To me, the main weakness of the documentation is the absence of type requirements - aka concepts. Documenting these requirements goes a long way to answering questions like the above. Again, you're being held to a higher standard than many libraries already in Boost.
It seems to me that this is not the focus of the library but rather a feature of the implementation. It seems that this is a criticism of the submission for not addressing some other problem. It's very possible that these features are more interesting than the original purpose of the library. But I don't think that's a fair criticism of the library itself; it's rather an idea for a different library.
LOL - I think you're mixing what the library does with how it does it.
I think that adding Type requirements would address this very clearly.
We should all be doing this. But in this case, I didn't get the idea that this is promoted as some new revolutionary algorithm but rather as the implementation of a simple idea - a smart pointer which handles cycles without leaking memory. Maybe a better name would help - leakproof_pointer.
(c) Describe the algorithm in much greater detail, including better examples, values of the reference counters, etc.
Given the above, I'm not convinced that this is all that interesting.
(d) Provide a much wider beginner tutorial with samples.
I'm thinking that this is the implementation of a simple idea to solve a specific problem. Unless I'm getting this wrong, I don't think it needs more than one tutorial example.
It looks interesting, but for something that basic the documentation isn't even half ready.
It's certainly no worse than most other documentation for libraries at this stage in the process.

Yes and no. Considering that GC and reference counting are widely used and researched concepts, if you come up with a new magic bullet you need to explain how, and not just what, you solve. For example, Python provides reference counting plus cycle detection in its gc module...
Yes, you are right, but it is a very basic utility library so, as was just demonstrated, stuff like that must be documented.
I'm just providing input. GC is a very wide topic. But currently it isn't a major issue from my point of view.
In the case of memory management, both are equally important. Artyom
participants (14)
- Artyom Beilis
- boost@glenstark.net
- Gavin Lambert
- Glen Fernandes
- Niall Douglas
- Oswin Krause
- Peter Dimov
- Phil Bouchard
- Rob Stewart
- Robert Ramey
- Seth
- Stefan Seefeld
- Vladimir Batov
- work