On 11/20/2014 08:29 PM, Andrzej Krzemienski wrote:
> 2014-11-19 22:36 GMT+01:00 Vladimir Batov:
>> There is no bug here! Not from a human perspective. You are comparing two extremely closely related types! As long as we agree on how "none" is treated, then all is kosher. We do compare "ints" with "doubles", don't we? No "safety" concerns. On second thought, I might agree that op<() is questionable... Can we address that differently then?
> No. Allowing this comparison to work is the right thing to do. It is a natural consequence of Optional's conceptual model. You should look at optional<T> as a T plus one additional value, less than any other value. No one stops you from adopting any other model (like a container of size 0-or-1), but then you risk being surprised by the result.
> Optional is not a container of size 0-or-1. You do not expect an element to be implicitly converted to its container type.
1. I am not (and never have been) advocating that "optional" is a container. I personally find that view wrong... maybe interesting for curiosity's sake but ultimately misleading and distracting.

2. I do understand and agree with the decision made regarding "optional" sorting -- no-value is less than any value. It seems sensible, is easily documentable, and makes "optional" usable in associative containers without hassle.

3. What I have doubts about (and I stress -- doubts -- as indicated by my previous conflicting posts) is op<(T, optional<T>). Yes, *mechanically*, one can say -- T is implicitly propagated to optional<T> and then op<(optional<T>, optional<T>) is applied. Easy-peasy. I wish life were that straightforward. *It feels to me* that, when a T is compared to an optional<T>, it is quite likely a bug -- it is far too open to misuse and (mis)interpretation. Based on that feeling I tend to suggest banning op<(T, optional<T>). With that we kill two birds with one stone: (a) we address the safety concern you raised, and (b) we do so within the "optional" framework without losing any existing functionality.
> The source of the confusion in this example above is the wrong expectation that the compiler will warn you about any place where optional<T> is confused with T. They are supposed and expected to be confused and mixed. That's the idea behind implicit conversions.
Here, with all due respect, I have to cautiously disagree... probably because of the overly broad brush you are using to paint the picture. I have no issue with the T to optional<T> implicit conversion. I think it is essential. What I am cautious about is when and how liberally that conversion is applied. "Always" just does not sit well with me. The idea you seem to be advocating is unduly mechanical -- whenever we need to do anything with a T and optional<T> pair, propagate T to optional<T> and then apply the "optional natural model". The problem (as I see it) is that, when we are in the T land, we apply T rules; when we are in the optional<T> land, we apply optional<T> rules. However, when we have a T *and* an optional<T>, we are right on the border. You say, "apply optional<T> rules". I say "I do not know" and, therefore, as a library writer I want to leave that decision to the user -- i.e. force them to be explicit: "t < *ot" or "optional<T>(t) < ot". Does that make sense?
> Yet, many people make this invalid expectation,
Hmm, maybe those expectations are not as invalid as you are trying to present them. I am hoping the paragraph above clarifies my feeling about it.
> because what they are really looking for is something different: something that will detect as many potential programmer errors as possible (including false positives). ...