RE: [boost] Re: [shared_ptr] Interlocked* - possible bug?

Alexander Terekhov <terekhov@web.de> wrote:
Ben Hutchings wrote: [...]
If atomic_read returns 0, that means the use count has dropped to 0 [*] and can never increase again, so it must be the latest version. Otherwise weak_ptr makes a second test that is properly protected. So it can never use an old value.
[*] I think this is right, but I'm not certain that it can't return a 0 that the processor read during creation of an object that uses enable_shared_from_this.
What makes you uncertain?
Simply not having thought enough about it. Now that I have, I realise that it's impossible for a thread to use a weak_ptr to an object without either (1) constructing a shared_ptr to it, which sets the use count to a non-zero value, or (2) synchronising with another thread that constructs the shared_ptr and/or weak_ptr, which should provide an appropriate memory barrier and so prevent the use of any invalid pre-fetched value of the memory containing the use count.
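A minimal sketch of case (2), with names (widget, slot, producer, consumer) that are purely illustrative and not from the actual code: one thread publishes the pointer under a boost::mutex, so the lock/unlock pair supplies the memory barrier that makes the freshly written use count visible to the other thread.

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/thread/mutex.hpp>

struct widget {};

boost::mutex m;
boost::weak_ptr<widget> slot; // written by one thread, read by another

void producer()
{
    boost::shared_ptr<widget> sp(new widget); // use count becomes non-zero here
    boost::mutex::scoped_lock lk(m);
    slot = sp; // publish under the lock
}

void consumer()
{
    boost::mutex::scoped_lock lk(m); // synchronises with the producer's unlock
    if (boost::shared_ptr<widget> sp = slot.lock())
    {
        // safe: the mutex ordering means no stale pre-fetched zero
        // can be seen for the use count
    }
}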
Having said all that, I feel the naming of functions may give a false sense of generality and encapsulation when they actually only work in the way they are being used currently.
You mean that
bool expired() { return lock(); }
is better?
Not really. I will expand on this in my mail to Peter Dimov.

Ben Hutchings wrote: [...]
Having said all that, I feel the naming of functions may give a false sense of generality and encapsulation when they actually only work in the way they are being used currently.
You mean that
bool expired() { return lock();
I meant !lock();
}
is better?
Not really.
I mean that the current expired() doesn't synchronize the local view of the counter and hence may never return true (absent some synchronization on the user's part). If I were the maintainer, I'd probably strengthen it... for the sake of saving bandwidth trying to explain it. ;-) regards, alexander.
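To make that point concrete, here is a hypothetical polling loop (the names wait_until_gone and widget are illustrative only, not from any real code): with no lock or barrier of its own, nothing obliges expired() to re-read a fresh value of the count, so in theory the loop can spin forever even after the last shared_ptr has gone.

#include <boost/weak_ptr.hpp>

struct widget {};

void wait_until_gone(boost::weak_ptr<widget> const & wp)
{
    while (!wp.expired())
    {
        // no synchronization here: a stale non-zero use count may,
        // in theory, be observed indefinitely on a weakly ordered machine
    }
}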

Peter Dimov wrote:
Alexander Terekhov wrote:
I mean that the current expired() doesn't synchronize the local view of the counter and hence may never return true (absent some synchronization on the user's part).
On which platform?
Ok. Insert "in theory". It's PDQ in practice (unless you have a compiler smart enough to ignore the volatile hack and use a cached copy in spite of it, because you're not supposed to notice the difference according to the MT memory model rules). regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
I mean that the current expired() doesn't synchronize the local view of the counter and hence may never return true (absent some synchronization on the user's part).
On which platform?
Ok. Insert "in theory".
Hm. I thought that we were discussing the platform-specific implementation. It seems that you are on a higher level:
bool expired() { return !lock(); }
I don't think that this is an improvement. The user that wants !lock() writes !lock(). The user that wants a (potentially more efficient version of - insert "in theory" here as well) use_count() == 0 writes expired(). It is deliberately "unspecified" whether expired() msyncs or not. Most expired()-based code isn't thread safe regardless of msync.
It's PDQ in practice (unless you have a compiler smart enough to ignore the volatile hack and use a cached copy in spite of it, because you're not supposed to notice the difference according to the MT memory model rules).
You need to insert lots of "in theory" here.
* "volatile" is specifically intended to prevent "smartness". In theory, a company may have a customer base that uses "volatile" incorrectly left, right and center, and then complains about the compiler not being smart enough.
* the variable may be in uncached, memory-mapped I/O space.
* the virtual address may be unmapped and generate a trap.
* the CPU may have set a hardware breakpoint at the address.
* the user may be running a debugger and poking at the variable.
* an asynchronous event (signal, interrupt) may be updating the variable (sketched below).
And so on. Even if we discount all of these observers of the abstract machine, there is also the point that if the compiler really knows that eliminating the access is undetectable, then it is undetectable, i.e. the code will work as "well" as before (only the probability of error may increase).
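A minimal sketch of that last bullet, just to show the kind of out-of-scope observer volatile is meant for (a flag polled by main and set asynchronously from a signal handler; nothing here is specific to shared_ptr):

#include <csignal>

volatile std::sig_atomic_t quit_requested = 0;

void on_sigint(int)
{
    quit_requested = 1; // asynchronous update, invisible to the optimizer
}

int main()
{
    std::signal(SIGINT, on_sigint);
    while (!quit_requested)
    {
        // work; each iteration must actually re-read quit_requested,
        // precisely because it is declared volatile
    }
    return 0;
}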

Peter Dimov wrote: [...]
bool expired() { return !lock(); }
I don't think that this is an improvement. The user that wants !lock() writes !lock(). The user that wants a (potentially more efficient version of - insert "in theory" here as well) use_count() == 0 writes expired(). ^^^^^^^^^^^
Well, I'm a bit puzzled by its "unspecified nonnegative value" and "number of shared_ptr objects" (if you allow counting of null owners) bits, to tell the truth.
It is deliberately "unspecified" whether expired() msyncs or not.
long use_count() const { shared_ptr<T> const & p = lock(); return p ? p.use_count() - 1 : 0; }
Or? ;-)
But ok, just specify the "unspecified" behavior a bit more explicitly, so to say.
It's PDQ in practice (unless you have a compiler smart enough to ignore the volatile hack and use a cached copy in spite of it, because you're not supposed to notice the difference according to the MT memory model rules).
You need to insert lots of "in theory" here.
* "volatile" is specifically intended to prevent "smartness".
Your use of a volatile variable is indistinguishable (no change in behavior whatsoever) from a non-volatile variable if/when run single-threaded. I believe that implementations capable of detecting it are free to ignore your use of volatile if they operate under the POSIX memory model, where volatile is totally irrelevant with respect to threading. regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote: [...]
bool expired() { return !lock(); }
I don't think that this is an improvement. The user that wants !lock() writes !lock(). The user that wants a (potentially more efficient version of - insert "in theory" here as well) use_count() == 0 writes expired(). ^^^^^^^^^^^
Well, I'm a bit puzzled by its "unspecified nonnegative value" and "number of shared_ptr objects" (if you allow counting of null owners) bits, to tell the truth.
Yeah. http://boost.org/libs/smart_ptr/weak_ptr.htm#use_count is a bit out of date; "the" specification is at http://open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1450.html and in the TR. I'll fix the boost documentation when I rewrite the pointers to use atomics. "Returns: 0 if *this is empty; otherwise, the number of shared_ptr instances that share ownership with *this."
It is deliberately "unspecified" whether expired() msyncs or not.
long use_count() const { shared_ptr<T> const & p = lock(); return p ? p.use_count() - 1 : 0; }
Or? ;-)
An implementation can do that if it wants to, but I don't want to impose it as a requirement. It's better to allow use_count() to lag behind in MT code for performance reasons, because idiomatic weak_ptr use does not rely on the latest value anyway.
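For reference, the idiom in question, assuming some weak_ptr<widget> wp and a hypothetical member function do_something(): the code never asks "has it expired?" and then acts on the answer; it obtains a shared_ptr via lock() and tests that instead.

if (boost::shared_ptr<widget> p = wp.lock())
{
    p->do_something(); // p keeps the object alive for this whole block
}
else
{
    // the object has already been destroyed (or wp was never set)
}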
It's PDQ in practice (unless you have a compiler smart enough to ignore the volatile hack and use a cached copy in spite of it, because you're not supposed to notice the difference according to the MT memory model rules).
You need to insert lots of "in theory" here.
* "volatile" is specifically intended to prevent "smartness".
Your use of a volatile variable is indistinguishable (no change in behavior whatsoever) from a non-volatile variable if/when run single-threaded.
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.

Peter Dimov wrote: [...]
An implementation can do that if it wants to, but I don't want to impose it as a requirement. It's better to allow use_count() to lag behind in MT code for performance reasons, because idiomatic weak_ptr use does not rely on the latest value anyway.
Agreed.
It's PDQ in practice (unless you have a compiler smart enough to ignore the volatile hack and use a cached copy in spite of it, because you're not supposed to notice the difference according to the MT memory model rules).
You need to insert lots of "in theory" here.
* "volatile" is specifically intended to prevent "smartness".
Your use of a volatile variable is indistinguishable (no change in behavior whatsoever) from a non-volatile variable if/when run single-threaded.
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads. http://groups.google.com/groups?selm=4152B42E.2B271094%40web.de regards, alexander.

Alexander Terekhov wrote:
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads.
Nope. "As if" is defined in terms of observable behavior, and "observable behavior" is defined in terms of volatile and I/O.
http://groups.google.com/groups?selm=4152B42E.2B271094%40web.de
Also nope. Volatile accesses aren't really implementation defined. ;-) You are right about reordering, though.

Peter Dimov wrote:
Alexander Terekhov wrote:
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads.
Nope. "As if" is defined in terms of observable behavior, and "observable behavior" is defined in terms of volatile and I/O.
int main() { volatile int a = 1; return --a; }
prove that it can't be transformed to
int main() { }
regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads.
Nope. "As if" is defined in terms of observable behavior, and "observable behavior" is defined in terms of volatile and I/O.
int main() { volatile int a = 1; return --a; }
prove that it can't be transformed to
int main() { }
The observable behavior is
write volatile @a 1
read volatile @a x
write volatile @a x-1
exit(x-1)
in the first case, and exit(0) in the second case. By definition. There's nothing to prove. ;-)

Peter Dimov wrote:
Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads.
Nope. "As if" is defined in terms of observable behavior, and "observable behavior" is defined in terms of volatile and I/O.
int main() { volatile int a = 1; return --a; }
prove that it can't be transformed to
int main() { }
The observable behavior is
write volatile @a 1
nop
read volatile @a x
nop
write volatile @a x-1
nop
exit(x-1)
push 0 call _exit
in the first case, and
Happy now (debugger notwithstanding)? C'mon, volatile is brain-dead. regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
No, it's not; C++ behavior _is defined in terms of_ volatile (and I/O calls), not the other way around.
As-if rule. And C++ says nothing about multiple threads.
Nope. "As if" is defined in terms of observable behavior, and "observable behavior" is defined in terms of volatile and I/O.
int main() { volatile int a = 1; return --a; }
prove that it can't be transformed to
int main() { }
The observable behavior is
write volatile @a 1
nop
read volatile @a x
nop
write volatile @a x-1
nop
exit(x-1)
push 0 call _exit
in the first case, and
Happy now (debugger notwithstanding)?
No. A conforming compiler is not allowed to do that.
C'mon, volatile is brain-dead.
Nobody's arguing otherwise. ;-) But a nop it isn't.

Peter Dimov wrote: [...]
write volatile @a 1
nop
read volatile @a x
nop
write volatile @a x-1
nop
exit(x-1)
push 0 call _exit
in the first case, and
Happy now (debugger notwithstanding)?
No. A conforming compiler is not allowed to do that.
I see no reason why. It translated your accesses to a sequence of nop instructions.
C'mon, volatile is brain-dead.
Nobody's arguing otherwise. ;-) But a nop it isn't.
Nop works just fine for your volatile accesses. You can't prove non-conformance without trying to fool the program using a debugger (or things like that... beyond the scope of the standard). Innocent until proven guilty, you know. regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote:
C'mon, volatile is brain-dead.
Nobody's arguing otherwise. ;-) But a nop it isn't.
Nop works just fine for your volatile accesses. You can't prove non-conformance without trying to fool the program using a debugger (or things like that... beyond the scope of the standard).
Argh. The standard says that the compiler MUST ASSUME that volatile variable accesses ARE OBSERVABLE by things OUTSIDE OF THE SCOPE of the standard! That's the whole and only point of volatile! (CAPS are emphasis and not shouting. I wonder what genius came up with the idea that you can shout in a printed medium.)
Innocent until proven guilty, you know.
No, "as if" doesn't work that way. You must prove your innocence. Look at it this way: Nop works just fine for your printf statements. You can't prove non-conformance without trying to look at a screen (or things like that... beyond the scope of the standard). Or even: Nop works just fine for your whole program. You can't prove non-conformance without trying to run it.

Peter Dimov wrote:
Alexander Terekhov wrote:
Peter Dimov wrote:
C'mon, volatile is brain-dead.
Nobody's arguing otherwise. ;-) But a nop it isn't.
Nop works just fine for your volatile accesses. You can't prove non-conformance without trying to fool the program using a debugger (or things like that... beyond the scope of the standard).
Argh. The standard says that the compiler MUST ASSUME that volatile variable accesses ARE OBSERVABLE by things OUTSIDE OF THE SCOPE of the standard!
Fine. And the ruler of the outside world (implementation) says: "observe nop and be happy" (or something like that). [...]
Look at it this way:
Nop works just fine for your printf statements. You can't prove non-conformance without trying to look at a screen (or things like that... beyond the scope of the standard).
Right. And any implementation capable of detecting that the output "goes to /dev/null" is free to JIT-optimize it to nops. Good for the global environment (less climate change, etc.), you know. regards, alexander.

Alexander Terekhov wrote:
Peter Dimov wrote:
Alexander Terekhov wrote:
Peter Dimov wrote:
C'mon, volatile is brain-dead.
Nobody's arguing otherwise. ;-) But a nop it isn't.
Nop works just fine for your volatile accesses. You can't prove non-conformance without trying to fool the program using a debugger (or things like that... beyond the scope of the standard).
Argh. The standard says that the compiler MUST ASSUME that volatile variable accesses ARE OBSERVABLE by things OUTSIDE OF THE SCOPE of the standard!
Fine. And the ruler of the outside world (implementation) says: "observe nop and be happy" (or something like that).
OK.
Look at it this way:
Nop works just fine for your printf statements. You can't prove non-conformance without trying to look at a screen (or things like that... beyond the scope of the standard).
Right. And any implementation capable of detecting that the output "goes to /dev/null" is free to JIT-optimize it to nops. Good for the global environment (less climate change, etc.), you know.
Why should the implementation bother to detect that? Observe nop and be happy.

Peter Dimov wrote: [...]
Right. And any implementation capable of detecting that the output "goes to /dev/null" is free to JIT-optimize it to nops. Good for the global environment (less climate change, etc.), you know.
Why should the implementation bother to detect that? Observe nop and be happy.
Market forces. ;-) regards, alexander.
participants (3)
- Alexander Terekhov
- Ben Hutchings
- Peter Dimov