
1. Do the cases that fail have multiple, partially-overlapping Pareto-optimal paths to the goal? Do they have multiple paths that are tied for "goodness"?
That is hard for me to say, given the size of the graphs: they have around 400 nodes and up to 5000 edges. What I can tell you, though, is that the instances that cause the segfault take significantly longer than the others (if a non-crashing instance completes in time t, a crashing instance takes around 10^4 * t before crashing; more on this below). This makes me think that fewer labels are dominated and that, therefore, there are more paths "tied for goodness", as you suggest.
2. If you remove all of the clearing of b_is_valid, plus remove all of the object destruction and deallocation calls (to avoid the program crashing), does the answer seem reasonable?
I did as you suggested. The first thing I could do was confirm the perceived difference in execution time mentioned above. Here is an example (program compiled with -O0 -g):

Time elapsed (on 0.1-reduced graph, 194 edges): 0.000195 seconds.
Time elapsed (on 0.2-reduced graph, 194 edges): 0.0002 seconds.
Time elapsed (on 0.3-reduced graph, 907 edges): 0.000176 seconds.
Time elapsed (on 0.4-reduced graph, 2271 edges): 0.000179 seconds.
Time elapsed (on 0.5-reduced graph, 2922 edges): 0.000182 seconds.
Time elapsed (on 0.6-reduced graph, 2922 edges): 0.00018 seconds.
Time elapsed (on 0.7-reduced graph, 3003 edges): 0.707146 seconds. <*******
Time elapsed (on 0.8-reduced graph, 3232 edges): 2.04194 seconds. <*******
Time elapsed (on 0.9-reduced graph, 3470 edges): 3.71097 seconds. <*******

I think the marked instances are the ones that would have caused a segfault.

[Just for reference, the "reduced graph" the snippets above refer to is tied to the implementation of the algorithm: I first try labelling on a graph from which I remove arcs whose cost is > N * maximum_prize_collectable, with N varying from 0.1 to 1, until some negative reduced-cost path is found among the Pareto-optimal ones.]

Removing the destructions and deallocations makes the program complete without crashing. I can't check by hand whether the labels obtained are correct until I manage to scale the problem down while still triggering the crash. What I can say is that the whole column generation algorithm always gives the same result (I ran it 40 times), and that result seems to make sense. The next step was to keep the destructions and remove only the deallocations: the program still works and produces the same result.

I'm still working on dumping the graphs before executing r_c_shortest_paths, but it's not easy: vertices and edges have shared_ptrs to quite complex objects as their bundled properties, so I would need to dump those objects first and then somehow reconstruct their relationships.
In the meantime I'm also trying to generate smaller graphs that still exhibit the faulty behaviour, without success so far.

AS