
On Mon, May 19, 2025 at 8:27 PM Joaquin M López Muñoz via Boost <boost@lists.boost.org> wrote:
It is IDE clang-tidy warning, not sure compiler matters.
https://clang.llvm.org/extra/clang-tidy/checks/readability/redundant-inline-...
Could you please check if adding this comment to the offending lines
/* NOLINT(readability-redundant-inline-specifier) */
makes the warnings go away? If so, I'd happily accept a PR with that change. Thank you!
I am now a bit confused :) I believe the warning is correct, so why not change the code instead of suppressing the warning? To be clear, this is a warning for the literal inline keyword, not the Boost force-inline macro. A few examples:

    inline constexpr std::size_t range()const noexcept{return (std::size_t)rng;}
    inline void prepare_hash(boost::uint64_t& hash)const noexcept

(Ignore the gray highlight on the size_t cast; the IDE does not know we sometimes compile on a 32-bit system.) But to answer your question: suppression with NOLINT works.

On Mon, May 19, 2025 at 8:44 PM Joaquin M López Muñoz via Boost <boost@lists.boost.org> wrote:
El 18/05/2025 a las 23:38, Ivan Matek escribió:
Had a bit more time to think :) so here are my replies and few more questions.
> 5. Why is BOOST_ASSERT(fpr>=0.0&&fpr<=1.0); not
> BOOST_ASSERT(fpr>0.0&&fpr<=1.0);
> , i.e. is there benefit of allowing calls with impossible fpr argument?

fpr==0.0 is a legitimate (if uninteresting) argument value for capacity_for:
capacity_for(0, 0.0) --> 24
capacity_for(1, 0.0) --> 18446744073709549592
The formal reason why fpr==0.0 is supported is because of symmetry: some calls to fpr_for actually return 0.0 (for instance, fpr_for(0, 100)).
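The behavior being discussed follows from the textbook Bloom filter bound m = -n·ln(p)/(ln 2)². The sketch below is a hypothetical re-derivation, not boost::bloom's actual implementation (which differs in detail, e.g. capacity_for(0, 0.0) returns 24 above); it only illustrates why fpr == 0.0 saturates the result.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>

// Hypothetical estimate using m = -n * ln(p) / (ln 2)^2, the classic
// bits-of-capacity bound for n elements at false positive rate p.
std::size_t capacity_estimate(std::size_t n, double fpr) {
    if (fpr <= 0.0) {
        // ln(0) diverges: an fpr of exactly 0 would need infinite
        // capacity, so saturate at the largest size_t, mirroring the
        // near-maximum 18446744073709549592 quoted above.
        return std::numeric_limits<std::size_t>::max();
    }
    double bits = -double(n) * std::log(fpr) / (std::log(2.0) * std::log(2.0));
    return (std::size_t)std::ceil(bits);
}
```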
This is a bit philosophical, but I actually do not think this is correct. First of all, is (0, 0.0) the only use case where an fpr of 0.0 makes sense? I.e., any time n>0, an fpr of 0.0 is impossible (or have I misunderstood something?).
Yes, it is impossible: the capacity would have to be infinite. The maximum attainable value is returned instead, though this is of little value as OOM would ensue (as you point out below).
So the assert could be an implication (funny, because we had a discussion about implies on the ML a few months ago), something like:

    BOOST_IMPLICATION(fpr == 0.0, n == 0);
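A minimal sketch of such an implication macro: "p implies q" is simply !p || q. The name follows the message; I cannot vouch that Boost ships a macro with exactly this name and shape.

```cpp
// p -> q is logically equivalent to !p || q: the implication only
// fails when p holds but q does not.
#define BOOST_IMPLICATION(p, q) (!(p) || (q))

// The suggested precondition would then read:
//   BOOST_ASSERT(BOOST_IMPLICATION(fpr == 0.0, n == 0));
```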
Similarly, for (1, 0.0) I do not believe the result should be the size_t max value, as this is not a correct value. Now, we both know you will OOM before noticing this in reality, but even if we imagine a magical computer that could allocate that much memory, the fpr would not be 0.0.
I understand your point and can relate to it, but consider this:
capacity_for(1, 1.E-200)
Is this legit? OOM will happen here, too. Where do we put the limit?
I have actually considered that too (I thought of std::nextafter(0.0, 1.0)), but it is the same idea... This is getting a bit philosophical, but I see these as two different, although similar, issues:

1. impossible to compute the result, or the result exceeds size_t max
2. practically impossible: e.g. the result is 2^47, which fits inside size_t but will OOM

I am not a big fan of the library picking a constant for which the result is unreasonable (to handle 2.), but on the other hand, nonsense values should be detected ASAP, before the program has a chance to continue... Long story short, I am not sure what the best decision is here. For 1., I am much more convinced that returning nullopt is the morally ;) correct API design.
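The nullopt design for issue 1 could be sketched as follows. The name try_capacity_for is made up, and the textbook bound stands in for the library's real computation; issue 2 (practically-OOM results such as 2^47) is deliberately left to the caller, as argued above.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <optional>

// Hypothetical optional-returning variant: impossible or
// size_t-overflowing requests yield nullopt instead of a sentinel.
std::optional<std::size_t> try_capacity_for(std::size_t n, double fpr) {
    if (fpr < 0.0 || fpr > 1.0) return std::nullopt;  // malformed request
    if (fpr == 0.0) {
        if (n == 0) return std::size_t(0);  // symmetric with fpr_for(0, 100) == 0.0
        return std::nullopt;                // n > 0 with fpr == 0: infinite capacity
    }
    double bits = -double(n) * std::log(fpr) / (std::log(2.0) * std::log(2.0));
    if (bits > double(std::numeric_limits<std::size_t>::max()))
        return std::nullopt;                // result would not fit in size_t
    return (std::size_t)std::ceil(bits);
}
```

A caller then has to handle the impossible case explicitly rather than silently receiving a saturated value.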