| Doc. No.: | WG21/N2429 J16/07-0299 | |
|---|---|---|
| Date: | 2007-10-05 | |
| Reply to: | Clark Nelson | Hans-J. Boehm |
| Phone: | +1-503-712-8433 | +1-650-857-3406 |
| Email: | [email protected] | [email protected] |
This paper is a revision of N2334. The most significant changes since N2334 include:
This paper has benefited from feedback from many people, including Sarita Adve, Paul McKenney, Raul Silvera, Lawrence Crowl, and Peter Dimov.
It is generally felt that it is important to allow the transformation of potentially non-terminating loops (e.g. by merging two loops that iterate over the same potentially infinite set, or by eliminating a side-effect-free loop), even when that may not otherwise be justified in the case in which the first loop never terminates.
Existing compilers commonly assume that code immediately following a loop is executed if and only if code immediately preceding a loop is executed. This assumption is clearly invalid if the loop fails to terminate. Even if we wanted to prohibit this behavior, it is unclear that all relevant compilers could comply in a reasonable amount of time. The assumption appears both pervasive and hard to test for.
The treatment of non-terminating loops in the current standard is very unclear. We believe that some implementations already eliminate potentially non-terminating, side-effect-free, loops, probably based on 1.9p9, which appears to impose very weak requirements on conforming implementations for non-terminating programs. We had previously arrived at a tentative conclusion that non-terminating loops were already sufficiently weakly specified that no changes were needed. We no longer believe this, for the following reasons:
In particular, eliminating a loop that waits for a flag set by another thread, thereby allowing the code that follows it to run prematurely (e.g.

while (!please_self_destruct.load_acquire()) {}; self_destruct()

) is clearly hazardous, and a bit more clarity seems appropriate.

Proposed changes to the Working Draft begin here:
Add new paragraph 6.5p5:
A loop that, outside of the for-init-statement in the case of a for statement,
- performs no I/O operations, and
- does not access or modify volatile objects, and
- performs no synchronization or atomic operations
may be assumed by the implementation to terminate. [Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. — end note]
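As an informal illustration of the distinction this wording is intended to draw (a sketch only, not proposed wording): the code below uses the std::atomic interface with explicit memory orderings, with load(std::memory_order_acquire) standing in for the load_acquire call used in the example earlier; the function names are invented for the example.

    #include <atomic>

    std::atomic<bool> please_self_destruct(false);
    void self_destruct() {}   // stands in for the action guarded by the flag

    // Performs no I/O, accesses no volatile objects, and performs no
    // synchronization or atomic operations, so under the proposed 6.5p5 an
    // implementation may assume it terminates and may remove it entirely,
    // even though it never terminates when n is odd.
    void removable_loop(unsigned n) {
        unsigned i = 0;
        while (i != n)
            i += 2;
    }

    // Performs an atomic acquire operation on every iteration, so the
    // assumption does not apply: the loop may not be removed, and
    // self_destruct() may not be reached before the flag is observed true.
    void waiting_loop() {
        while (!please_self_destruct.load(std::memory_order_acquire)) {
        }
        self_destruct();
    }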
New paragraphs inserted as 1.7p3 et seq.:
A memory location is either an object of scalar type, or a maximal sequence of adjacent bit-fields all having non-zero width. [Note: Various features of the language, such as references and virtual functions, might involve additional memory locations that are not accessible to programs but are managed by the implementation. — end note] Two threads of execution can update and access separate memory locations without interfering with each other.
[Note: Thus a bit-field and an adjacent non-bit-field are in separate memory locations, and therefore can be concurrently updated by two threads of execution without interference. The same applies to two bit-fields, if one is declared inside a nested struct declaration and the other is not, or if the two are separated by a zero-length bit-field declaration, or if they are separated by a non-bit-field declaration. It is not safe to concurrently update two bit-fields in the same struct if all fields between them are also bit-fields, no matter what the sizes of those intervening bit-fields happen to be. — end note]
[Example: A structure declared as
struct {char a; int b:5, c:11, :0, d:8; struct {int ee:8;} e;}
contains four separate memory locations: The field a, and bit-fields d and e.ee are each separate memory locations, and can be modified concurrently without interfering with each other. The bit-fields b and c together constitute the fourth memory location. The bit-fields b and c cannot be concurrently modified, but b and a, for example, can be. — end example]
Insert a new section between 1.9 and 1.10, titled "Multi-threaded executions and data races".
1.10p1:
Under a hosted implementation, a C++ program can have more than one thread of execution (a.k.a. thread) running concurrently. The execution of each thread proceeds as defined by the remainder of this standard. The execution of the entire program consists of an execution of all of its threads. [Note: Usually the execution can be viewed as an interleaving of all its threads. However, some kinds of atomic operations, for example, allow executions inconsistent with a simple interleaving, as described below. — end note] Under a freestanding implementation, it is implementation-defined whether a program can have more than one thread of execution.
1.10p2:
The value of an object visible to a thread T at a particular point might be the initial value of the object, a value assigned to the object by T, or a value assigned to the object by another thread, according to the rules below. [Note: In some cases, there may instead be undefined behavior. Much of this section is motivated by the desire to support atomic operations with explicit and detailed visibility constraints. However, it also implicitly supports a simpler view for more restricted programs. — end note]
1.10p3:
Two expression evaluations conflict if one of them modifies a memory location and the other one accesses or modifies the same memory location.
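For example (an illustrative sketch; the names are invented), the following two evaluations conflict because one modifies the memory location x and the other accesses it; whether the conflict constitutes a data race depends on the "happens before" relation defined below (1.10p11).

    int x = 0;

    void thread_1() { x = 1; }               // modifies the memory location x
    void thread_2() { int r = x; (void)r; }  // accesses the same memory location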
1.10p4:
The library defines a number of atomic operations (clause 29 [atomics]) and operations on locks (clause 30 [thread]) that are specially identified as synchronization operations. These operations play a special role in making assignments in one thread visible to another. A synchronization operation is either an acquire operation or a release operation, or both, on one or more memory locations; the semantics of these are described below. In addition, there are relaxed atomic operations, which are not synchronization operations, and atomic read-modify-write operations, which have special characteristics, also described below. [Note: For example, a call that acquires a lock will perform an acquire operation on the locations comprising the lock. Correspondingly, a call that releases the same lock will perform a release operation on those same locations. Informally, performing a release operation on A forces prior side effects on other memory locations to become visible to other threads that later perform an acquire operation on A. We do not include "relaxed" atomic operations as "synchronization" operations, though, like synchronization operations, they cannot contribute to data races. — end note]
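As an informal sketch of acquire/release synchronization between two threads, written against the std::atomic and std::thread interfaces with explicit memory orderings (the names producer, consumer, data and ready are invented for the example):

    #include <atomic>
    #include <thread>

    int data = 0;                      // ordinary, non-atomic object
    std::atomic<bool> ready(false);    // atomic object used for synchronization

    void producer() {
        data = 42;                                     // ordinary store, sequenced before the release
        ready.store(true, std::memory_order_release);  // release operation on ready
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {  // acquire operation on ready
        }
        // The acquire load read the value written by the release store, so
        // the release synchronizes with the acquire; combined with sequencing
        // within each thread, the store to data happens before this read,
        // r is 42, and there is no data race.
        int r = data;
        (void)r;
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }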
1.10p5:
All modifications to a particular atomic object M occur in some particular total order, called the modification order of M. If A and B are modifications of an atomic object M, and A happens before B, then A shall precede B in the modification order of M, which is defined below. [Note: This states that the modification orders must respect "happens before". — end note] [Note: There is a separate order for each scalar object. There is no requirement that these can be combined into a single total order for all objects. In general this will be impossible since different threads may observe modifications to different variables in inconsistent orders. — end note]
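A sketch of the second note, using relaxed atomic operations (names invented): each of x and y has its own modification order, but two observer threads may see the updates to the two objects in opposite orders, so there need not be a single total order covering both objects.

    #include <atomic>

    std::atomic<int> x(0), y(0);

    void writer_x() { x.store(1, std::memory_order_relaxed); }
    void writer_y() { y.store(1, std::memory_order_relaxed); }

    // observer_1 may see x == 1 and then y == 0, while observer_2
    // simultaneously sees y == 1 and then x == 0.
    void observer_1() {
        int a = x.load(std::memory_order_relaxed);
        int b = y.load(std::memory_order_relaxed);
        (void)a; (void)b;
    }

    void observer_2() {
        int c = y.load(std::memory_order_relaxed);
        int d = x.load(std::memory_order_relaxed);
        (void)c; (void)d;
    }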
1.10p6:
A release sequence on an atomic object M is a maximal contiguous sub-sequence of side effects in the modification order of M, where the first operation is a release, and every subsequent operation
- is performed by the same thread that performed the release, or
- is a non-relaxed atomic read-modify-write operation.
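An informal sketch of a release sequence (names invented; std::atomic syntax assumed): the release store heads the sequence, a non-relaxed read-modify-write by another thread extends it, and an acquire load that reads the value written by that read-modify-write still synchronizes with the original release (1.10p7).

    #include <atomic>

    int payload = 0;
    std::atomic<int> flag(0);

    void releaser() {
        payload = 7;
        flag.store(1, std::memory_order_release);   // head of the release sequence
    }

    void updater() {
        // Wait until the release store has been performed, so the
        // read-modify-write below follows it in the modification order of flag.
        while (flag.load(std::memory_order_relaxed) != 1) {
        }
        // A non-relaxed atomic read-modify-write performed by another thread
        // continues the release sequence headed by the store above.
        flag.fetch_add(1, std::memory_order_release);
    }

    void acquirer() {
        // Reading the value 2, written by a member of the release sequence,
        // synchronizes with the release store in releaser(), so payload == 7
        // is visible here.
        while (flag.load(std::memory_order_acquire) != 2) {
        }
        int r = payload;
        (void)r;
    }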
1.10p7:
An evaluation A that performs a release operation on an object M synchronizes with an evaluation B that performs an acquire operation on M and reads a value written by any side effect in the release sequence headed by A. [Note: Except in the specified cases, reading a later value does not necessarily ensure visibility as described below. Such a requirement would sometimes interfere with efficient implementation. — end note] [Note: The specifications of the synchronization operations define when one reads the value written by another. For atomic variables, the definition is clear. All operations on a given lock occur in a single total order. Each lock acquisition "reads the value written" by the last lock release. — end note]
1.10p8:
An evaluation A happens before an evaluation B if:
- A is sequenced before B, or
- A synchronizes with B; or
- for some evaluation X, A happens before X and X happens before B.
1.10p9:
A visible side effect A on an object M with respect to a value computation B of M satisfies the conditions:
- A happens before B, and
- there is no other side effect X to M such that A happens before X and X happens before B.
The value of a non-atomic scalar object M, as determined by evaluation B, shall be the value stored by the visible side effect A. [Note: If there is ambiguity about which side effect to a non-atomic object is visible, then there is a data race, and the behavior is undefined. — end note] [Note: This states that operations on ordinary variables are not visibly reordered. This is not actually detectable without data races, but it is necessary to ensure that data races, as defined here, and with suitable restrictions on the use of atomics, correspond to data races in a simple interleaved (sequentially consistent) execution. — end note]
1.10p10:
The visible sequence of side effects on an atomic object M, with respect to a value computation B of M, is a maximal contiguous sub-sequence of side effects in the modification order of M, where the first side effect is visible with respect to B, and for every subsequent side effect, it is not the case that B happens before it. The value of an atomic object M, as determined by evaluation B, shall be the value stored by some operation in the visible sequence of M with respect to B. Furthermore, if a value computation A of an atomic object M happens before a value computation B of M, and the value computed by A corresponds to the value stored by side effect X, then the value computed by B shall either equal the value computed by A, or be the value stored by side effect Y, where Y follows X in the modification order of M. [Note: This effectively disallows compiler reordering of atomic operations to a single object, even if both operations are "relaxed" loads. By doing so, we effectively make the "cache coherence" guarantee provided by most hardware available to C++ atomic operations. — end note] [Note: The visible sequence depends on the "happens before" relation, which depends on the values observed by loads of atomics, which we are restricting here. The intended reading is that there must exist an association of atomic loads with modifications they observe that, together with suitably chosen modification orders and the "happens before" relation derived as described above, satisfies the resulting constraints as imposed here. — end note]
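A sketch of the coherence guarantee described in the first note (names invented): even with relaxed operations, two loads of the same atomic object that are ordered by "happens before" cannot observe its modifications out of order.

    #include <atomic>

    std::atomic<int> v(0);

    void writer() {
        v.store(1, std::memory_order_relaxed);
        v.store(2, std::memory_order_relaxed);   // follows the store of 1 in v's modification order
    }

    void reader() {
        int first  = v.load(std::memory_order_relaxed);
        int second = v.load(std::memory_order_relaxed);  // happens after the first load
        // If first == 2, then second must also be 2: the second load may not
        // return a value that precedes, in v's modification order, the value
        // returned by the first load.
        (void)first; (void)second;
    }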
1.10p11:
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. [Note: It can be shown that programs that correctly use simple locks to prevent all data races, and use no other synchronization operations, behave as though the executions of their constituent threads were simply interleaved, with each observed value of an object being the last value assigned in that interleaving. This is normally referred to as "sequential consistency". However, this applies only to race-free programs, and race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. — end note]
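For illustration (names invented; std::mutex and std::thread from the thread support library of clause 30 are assumed): lock-based synchronization orders the conflicting accesses and avoids the race, while the unsynchronized version has undefined behavior when called concurrently.

    #include <mutex>
    #include <thread>

    std::mutex m;
    int counter = 0;

    // Each lock release synchronizes with the next acquisition of the same
    // lock, so all accesses to counter made under the lock are ordered by
    // "happens before": there is no data race, and the program behaves as a
    // simple interleaving, leaving counter == 2.
    void increment() {
        std::lock_guard<std::mutex> guard(m);
        ++counter;
    }

    // By contrast, calling this from two threads concurrently performs two
    // conflicting, non-atomic accesses neither of which happens before the
    // other: a data race, and hence undefined behavior.
    void unsynchronized_increment() {
        ++counter;
    }

    int main() {
        std::thread t1(increment), t2(increment);
        t1.join();
        t2.join();
    }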
1.10p12:
[Note: Compiler transformations that introduce assignments to a potentially shared memory location that would not be modified by the abstract machine are generally precluded by this standard, since such an assignment might overwrite another assignment by a different thread in cases in which an abstract machine execution would not have encountered a data race. This includes implementations of data member assignment that overwrite adjacent members in separate memory locations. We also generally preclude reordering of atomic loads in cases in which the atomics in question may alias, since this may violate the "visible sequence" rules. — end note]
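A sketch of the first point (names invented): because adjacent non-bit-field members occupy separate memory locations, an implementation may not compile a store to one of them as a wider read-modify-write that also rewrites its neighbour.

    struct S {
        char a;   // one memory location
        char b;   // a separate memory location
    };

    S s;

    void thread_1() { s.a = 1; }   // updates only the memory location s.a
    void thread_2() { s.b = 2; }   // updates only the memory location s.b

    // The two assignments do not conflict, so this program has no data race.
    // An implementation that realized s.a = 1 by loading the containing word,
    // updating one byte, and storing the whole word back would introduce a
    // store to s.b that the abstract machine does not perform; that store
    // could overwrite the assignment made by thread_2, and is therefore
    // precluded.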
1.10p13:
[Note: Transformations that introduce a speculative read of a potentially shared memory location may not preserve the semantics of the C++ program as defined in this standard, since they potentially introduce a data race. However, they are typically valid in the context of an optimizing compiler that targets a specific machine with well-defined semantics for data races. They would be invalid for a hypothetical machine that is not tolerant of races or provides hardware race detection. — end note]
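A sketch of the kind of transformation the note describes (names invented): hoisting a conditional read so that it is performed unconditionally introduces an access, and potentially a data race, that the abstract machine would not have produced.

    int shared = 0;   // potentially shared with other threads

    int f(bool use_shared) {
        // As written, shared is read only when use_shared is true.
        return use_shared ? shared : 0;
    }

    // A transformed version such as
    //
    //     int f(bool use_shared) {
    //         int tmp = shared;            // speculative read
    //         return use_shared ? tmp : 0;
    //     }
    //
    // reads shared even when the abstract machine would not, and so may
    // introduce a data race with a thread that writes shared only while no
    // caller passes use_shared == true.  On a target whose hardware gives
    // well-defined semantics to such races, the transformation may
    // nevertheless be harmless, which is the point of the note above.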
15.3p9:
If no matching handler is found in a program, the function std::terminate() is called; whether or not the stack is unwound before this call to std::terminate() is implementation-defined (15.5.1).