Memory that can be shared between threads is called shared memory or heap memory. The term variable as used in this section refers to both fields and array elements [JLS 05]. Variables that are shared between threads are referred to as shared variables. All instance fields, static fields, and array elements are shared variables and are stored in heap memory. Local variables, formal method parameters, and exception handler parameters are never shared between threads and are unaffected by the memory model.
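For illustration only (the class and member names below are assumptions and do not appear in the original text), the following sketch labels each kind of variable according to the definitions above:

final class SharedVariablesExample {
    static int counter;                // static field: shared, stored in heap memory
    int[] data = new int[4];           // instance field and its array elements: shared

    void work(int param) {             // formal method parameter: never shared
        int local = param + data[0];   // local variable: never shared
        counter = local;               // write to a shared variable
    }
}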

In modern shared-memory multiprocessor architectures, each processor has one or more levels of cache that are periodically reconciled with main memory as shown in the following figure:

...

A further concern is that not only are concurrent executions of code typically interleaved, but statements may also be reordered by the compiler or runtime system to optimize performance. These reorderings result in execution orders that are difficult to discern by examining the source code. Failure to account for possible reorderings is a common source of data races.

Consider the following example in which a and b are (shared) global variables or instance fields, but r1 and r2 are local variables that are inaccessible to other threads.

...
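The example itself is elided above. Purely as an illustration consistent with its description (shared a and b, thread-local r1 and r2), code of that kind might look like the following sketch, in which an execution ending with r1 == 2 and r2 == 1 is legal once the statements are reordered, even though no simple interleaving of the two threads in program order can produce it:

final class ReorderingSketch {
    static int a = 0, b = 0;   // shared fields, no synchronization

    static int thread1() {
        int r1 = a;            // may be reordered with the following write
        b = 1;
        return r1;
    }

    static int thread2() {
        int r2 = b;            // may be reordered with the following write
        a = 2;
        return r2;
    }
}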

The Java Language Specification defines the Java Memory Model (JMM), which provides certain guarantees to the Java programmer. The JMM is specified in terms of actions, including variable reads and writes, monitor locks and unlocks, and thread starts and joins. The JMM defines a partial ordering called happens-before on all actions within the program. To guarantee that a thread executing action B can see the results of action A, for example, there must be a happens-before relationship defined such that A happens-before B.

According to section 17.4.5 "Happens-before Order" of the Java Language Specification [JLS 05]:

  1. An unlock on a monitor happens-before every subsequent lock on that monitor.
  2. A write to a volatile field happens-before every subsequent read of that field.
  3. A call to start() on a thread happens-before any actions in the started thread.
  4. All actions in a thread happen-before any other thread successfully returns from a join() on that thread.
  5. The default initialization of any object happens-before any other actions (other than default-writes) of a program.
  6. A thread calling interrupt() on another thread happens-before the interrupted thread detects the interrupt.
  7. The end of a constructor for an object happens-before the start of the finalizer for that object.
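The following sketch (the class, field, and variable names are illustrative assumptions) shows rules 2 through 4 in action: the volatile write to ready happens-before the volatile read that observes it, so the earlier ordinary write to data is guaranteed to be visible to the main thread.

public final class HappensBeforeSketch {
    private static int data = 0;               // ordinary shared field
    private static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            data = 42;        // ordinary write
            ready = true;     // volatile write (rule 2)
        });
        t.start();            // rule 3: start() happens-before the actions in t

        while (!ready) {      // volatile read; once it observes true ...
            Thread.yield();
        }
        System.out.println(data); // ... this read is guaranteed to print 42

        t.join();             // rule 4: all actions in t happen-before join() returns
    }
}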

When two operations lack a happens-before relationship, the Java Virtual Machine (JVM) is free to reorder them. A data race occurs when two threads access the same variable, at least one of the accesses is a write, and the accesses are not ordered by a happens-before relationship. A correctly synchronized program is one that is free of data races. The JMM guarantees sequential consistency for correctly synchronized programs. Sequential consistency means that the result of any execution is the same as if the reads and writes on shared data by all threads were executed in some sequential order, and the operations of each individual thread appear in this sequence in the order specified by its program [Tanenbaum 03]. In other words:

  1. Take the read and write operations performed by each thread and put them in the order in which that thread executes them (thread order).
  2. Interleave these per-thread sequences in some way allowed by the happens-before relationships to form a single execution order.
  3. For the execution to be sequentially consistent, every read operation must return the value most recently written to that variable in this execution order.
  4. Consequently, all threads see the same total ordering of reads and writes of shared variables.
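As a concrete sketch of these definitions (the Sequence class below is an illustrative assumption, not code from the original text), the unsynchronized method contains a data race, while the synchronized variant is correctly synchronized by rule 1 of the happens-before list and therefore yields sequentially consistent results:

final class Sequence {
    private int next;

    // Data race when called from multiple threads: the reads and writes of
    // next are not ordered by any happens-before relationship.
    public int unsafeNext() {
        return next++;
    }

    // Correctly synchronized: the unlock at the end of one call happens-before
    // the lock at the start of the next call on the same object (rule 1).
    public synchronized int safeNext() {
        return next++;
    }
}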

The actual execution order of instructions and memory accesses can vary as long as the actions of the thread appear to that thread as if program order were followed, and provided all values read are allowed for by the memory model. This allows the programmer to understand the semantics of the programs they write, and allows compiler writers and virtual machine implementors to perform various optimizations [JPL 06].

There are several concurrency primitives that can help a programmer reason about the semantics of multithreaded programs.

...

The possible reorderings between volatile and non-volatile variables are summarized in the matrix shown below. Load and store operations are synonymous with read and write operations, respectively [Lea 08].

Note that the visibility and ordering guarantees provided by the volatile keyword apply specifically to the variable itself, that is, they apply only to primitive fields and object references. Programmers commonly use imprecise terminology and speak about "member objects." For the purposes of these guarantees, the actual member is the object reference itself; the objects referred to by volatile object references (hereafter, the referents) are beyond the scope of the guarantees. Consequently, declaring an object reference to be volatile is insufficient to guarantee that changes to the members of the referent are visible. That is, a thread may fail to observe a recent write from another thread to a member field of such a referent. Furthermore, when the referent is mutable and lacks thread-safety, other threads might see a partially constructed object or an object in a (temporarily) inconsistent state [Goetz 2007]. However, when the referent is immutable, declaring the reference volatile suffices to guarantee visibility of the members of the referent.
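The following sketch (the Counter and Holder classes are illustrative assumptions) shows why declaring a reference volatile does not make writes to the referent's fields visible:

final class Counter {                    // mutable and not thread-safe
    int value;                           // ordinary field of the referent
    void increment() { value++; }
}

final class Holder {
    static volatile Counter counter = new Counter(); // volatile covers this reference only

    static void writer() {               // Thread A
        counter.increment();             // ordinary write to a field of the referent;
                                         // no happens-before edge with readers of that field
    }

    static int reader() {                // Thread B
        return counter.value;            // may return a stale value
    }
}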

...

Volatile variables are useful for guaranteeing visibility, but they are insufficient for ensuring atomicity. Synchronization fills this gap, but it incurs the overhead of context switching and frequently causes lock contention. The atomic classes of the java.util.concurrent.atomic package provide a mechanism for reducing contention in most practical environments while at the same time ensuring atomicity. According to Goetz and colleagues [Goetz 06]:

With low to moderate contention, atomics offer better scalability; with high contention, locks offer better contention avoidance.
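For example, a simple hit counter can rely on an AtomicInteger instead of a lock; the class below is an illustrative sketch rather than code from the original text:

import java.util.concurrent.atomic.AtomicInteger;

final class HitCounter {
    private final AtomicInteger hits = new AtomicInteger();

    public void record() {
        hits.incrementAndGet();   // atomic read-modify-write, no lock required;
                                  // the atomic classes also provide volatile memory semantics
    }

    public int current() {
        return hits.get();
    }
}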

...